corpusid (int64, 110 to 268M) | title (stringlengths 0 to 8.56k) | abstract (stringlengths 0 to 18.4k) | citations (sequencelengths 0 to 142) | full_paper (stringlengths 0 to 635k)
---|---|---|---|---
229,923,250 | UNDERSTANDING AND IMPROVING ENCODER LAYER FUSION IN SEQUENCE-TO-SEQUENCE LEARNING | Encoder layer fusion (EncoderFusion) is a technique for fusing all the encoder layers (instead of only the uppermost layer) in sequence-to-sequence (Seq2Seq) models, which has proven effective on various NLP tasks. However, it is still not entirely clear why and when EncoderFusion should work. In this paper, our main contribution is to take a step further in understanding EncoderFusion. Many previous studies believe that the success of EncoderFusion comes from exploiting surface and syntactic information embedded in lower encoder layers. Unlike them, we find that the encoder embedding layer is more important than the other intermediate encoder layers. In addition, the uppermost decoder layer consistently pays more attention to the encoder embedding layer across NLP tasks. Based on this observation, we propose a simple fusion method, SurfaceFusion, which fuses only the encoder embedding layer for the softmax layer. Experimental results show that SurfaceFusion outperforms EncoderFusion on several NLP benchmarks, including machine translation, text summarization, and grammatical error correction. It obtains state-of-the-art performance on the WMT16 Romanian-English and WMT14 English-French translation tasks. Extensive analyses reveal that SurfaceFusion learns more expressive bilingual word embeddings by building a closer relationship between relevant source and target embeddings. The source code will be released. * Work was done when Xuebo Liu and Liang Ding were interning at Tencent AI Lab. | [
13747425,
3626819,
52157637,
91184134,
174799399,
44172616,
201306636,
21850704,
59310641,
52078335,
11212020,
52011544,
836219,
162183964,
51880415
] | UNDERSTANDING AND IMPROVING ENCODER LAYER FUSION IN SEQUENCE-TO-SEQUENCE LEARNING
Xuebo Liu
Department of Computer and Information Science
NLP2CT Lab
University of Macau
Longyue Wang
Tencent AI Lab
Derek F Wong
Department of Computer and Information Science
NLP2CT Lab
University of Macau
Liang Ding
The University of Sydney
Lidia S Chao [email protected]
Department of Computer and Information Science
NLP2CT Lab
University of Macau
Zhaopeng Tu [email protected]
Tencent AI Lab
UNDERSTANDING AND IMPROVING ENCODER LAYER FUSION IN SEQUENCE-TO-SEQUENCE LEARNING
Encoder layer fusion (EncoderFusion) is a technique for fusing all the encoder layers (instead of only the uppermost layer) in sequence-to-sequence (Seq2Seq) models, which has proven effective on various NLP tasks. However, it is still not entirely clear why and when EncoderFusion should work. In this paper, our main contribution is to take a step further in understanding EncoderFusion. Many previous studies believe that the success of EncoderFusion comes from exploiting surface and syntactic information embedded in lower encoder layers. Unlike them, we find that the encoder embedding layer is more important than the other intermediate encoder layers. In addition, the uppermost decoder layer consistently pays more attention to the encoder embedding layer across NLP tasks. Based on this observation, we propose a simple fusion method, SurfaceFusion, which fuses only the encoder embedding layer for the softmax layer. Experimental results show that SurfaceFusion outperforms EncoderFusion on several NLP benchmarks, including machine translation, text summarization, and grammatical error correction. It obtains state-of-the-art performance on the WMT16 Romanian-English and WMT14 English-French translation tasks. Extensive analyses reveal that SurfaceFusion learns more expressive bilingual word embeddings by building a closer relationship between relevant source and target embeddings. The source code will be released. * Work was done when Xuebo Liu and Liang Ding were interning at Tencent AI Lab.
INTRODUCTION
Sequence-to-Sequence (Seq2Seq) learning has advanced the state of the art in various natural language processing (NLP) tasks, such as machine translation (Bahdanau et al., 2015; Vaswani et al., 2017; Wu et al., 2019), text summarization (Wang et al., 2019b; Zhang et al., 2020), and grammatical error correction (Kiyono et al., 2019; Kaneko et al., 2020). Seq2Seq models are generally implemented with an encoder-decoder framework, in which a multi-layer encoder summarizes a source sequence into a sequence of representations and a multi-layer decoder produces the target sequence conditioned on the encoded representations.
Recent studies reveal that fusing the intermediate encoder layers (EncoderFusion) is beneficial for Seq2Seq models, through techniques such as layer attention (Bapna et al., 2018), layer aggregation (Dou et al., 2018; Wang et al., 2019c), and layer-wise coordination (He et al., 2018). Despite its effectiveness, not much is known about how fusing encoder layer representations works. The intuitive explanation is that fusing encoder layers exploits the surface and syntactic information embedded in the lower encoder layers (Belinkov et al., 2017; Peters et al., 2018). However, other studies show that attending to lower encoder layers (excluding the encoder embedding layer) does not improve model performance (Domhan, 2018), which conflicts with the earlier conclusions. It thus remains unclear why and when fusing encoder layers should work in Seq2Seq models.
This paper tries to shed light on the behavior of Seq2Seq models augmented with EncoderFusion methods. To this end, we propose a novel fine-grained layer attention to evaluate the contribution of individual encoder layers. We conduct experiments on several representative Seq2Seq NLP tasks, including machine translation, text summarization, and grammatical error correction. Through a series of analyses, we find that the uppermost decoder layer pays more attention to the encoder embedding layer. Masking the encoder embedding layer significantly degrades model performance, with the model generating hallucinatory (i.e. fluent but unfaithful to the source) predictions. The encoded representation of standard Seq2Seq models (i.e. without fusing encoder layers) may not have enough capacity to model both semantic and surface features (especially those at the encoder embedding layer). We call this problem the source representation bottleneck.
Based on this observation, we simplify the EncoderFusion approaches by connecting only the encoder embedding layer to the softmax layer (SurfaceFusion). The SurfaceFusion approach shortens the path distance between source and target embeddings, which helps learn better bilingual embeddings through direct interactions. Experimental results on several Seq2Seq NLP tasks show that our method consistently outperforms both the vanilla Seq2Seq model and the layer attention model. Extensive analyses reveal that our approach produces better-aligned bilingual word embeddings by shortening the path distance between them, which confirms our claim.
Our main contributions are as follows:
• We introduce a fine-grained layer attention method to qualitatively and quantitatively evaluate the contribution of individual encoder layers.
• We demonstrate that the encoder embedding layer is essential for fusing encoder layers, which reconciles the conflicting findings reported by previous studies.
• We propose a simple yet effective SurfaceFusion approach to directly exploit the encoder embedding layer for the decoder, which produces more expressive bilingual embeddings.
PRELIMINARIES
SEQUENCE-TO-SEQUENCE LEARNING
Seq2Seq learning aims to maximize the log-likelihood of a target sequence $y = \{y_1, \dots, y_J\}$ conditioned on a source sequence $x = \{x_1, \dots, x_I\}$, which is formulated as $\hat{y} = \arg\max \log P(y|x)$. Typically, Seq2Seq learning can be implemented with various architectures (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017; Wu et al., 2019), among which the Transformer (Vaswani et al., 2017) has advanced the state of the art. Without loss of generality, we use the Transformer as the testbed in this paper. The Transformer consists of an encoder $\mathcal{E}$ equipped with $N$ identical layers that map the source sequence $x$ into distributed representations, based on which a decoder $\mathcal{D}$ equipped with $M$ identical layers generates the target sequence $y$:
$$X^N = \mathcal{E}(X^0) := \Big[\mathrm{FFN}\big(\mathrm{ATT}(X^{n-1}, X^{n-1}, X^{n-1})\big)\Big]_{n=1}^{N} \tag{1}$$
$$Y^M = \mathcal{D}(Y^0, X^N) := \Big[\mathrm{FFN}\Big(\mathrm{ATT}\big(\mathrm{ATT}(Y^{m-1}, Y^{m-1}, Y^{m-1}),\, X^N,\, X^N\big)\Big)\Big]_{m=1}^{M} \tag{2}$$
where $X^0$ denotes the sum of the word embeddings $X_{emb}$ and position embeddings $X_{pos}$ of $x$, $Y^0$ denotes that of the shifted-right $y$, $\mathrm{FFN}(\cdot)$ denotes a position-wise feed-forward network, and $\mathrm{ATT}(\cdot)$ denotes a multi-head dot-product attention network with three arguments: query, key, and value. Residual connections (He et al., 2016) and layer normalization (Ba et al., 2016) are used in each sub-layer and are suppressed in Equations 1 and 2 for clarity. Finally, the output representation $Y^M$ of the decoder is projected into the probability $P(y|x)$, which is optimized during model training.
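To make the layer-by-layer composition in Equations 1 and 2 concrete, the following is a minimal PyTorch sketch of one encoder layer and one decoder layer. The module layout and names are ours, not the paper's implementation; post-layer normalization is assumed as in the original Transformer, and the causal mask on the decoder self-attention is omitted for brevity.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One encoder layer: self-attention then a feed-forward block,
    each wrapped with a residual connection and layer normalization."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        # ATT(X^{n-1}, X^{n-1}, X^{n-1}) with residual + norm
        a, _ = self.self_attn(x, x, x)
        x = self.norm1(x + a)
        # FFN(.) with residual + norm
        return self.norm2(x + self.ffn(x))

class DecoderLayer(nn.Module):
    """One decoder layer: self-attention, cross-attention over X^N, then FFN."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, y, x_top):
        a, _ = self.self_attn(y, y, y)
        y = self.norm1(y + a)
        # Note: the vanilla decoder only sees the uppermost encoder layer X^N.
        c, _ = self.cross_attn(y, x_top, x_top)
        y = self.norm2(y + c)
        return self.norm3(y + self.ffn(y))
```

This layout makes the source representation bottleneck visible: the cross-attention only ever receives `x_top`, the uppermost encoder output.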
EXPERIMENTAL SETUP
To validate the universality of the source representation bottleneck in Seq2Seq models, we conducted experiments on three representative tasks, which vary in the distance between input and output domains and in the scale of training data:
Machine translation takes a sentence in one language as input and outputs a semantically equivalent sentence in another language. We conducted experiments on three benchmark datasets: small-scale WMT16 Romanian-English (Ro-En; 0.6M instances), medium-scale WMT14 English-German (En-De; 4.5M instances), and large-scale WMT14 English-French (En-Fr; 36.0M instances). The tokenized BLEU score (Papineni et al., 2002) was used for all the translation tasks.
Text summarization takes a long document as input and outputs a short and adequate summary in the same language. We used the CNN/Daily Mail corpus (0.3M instances) and evaluated with the standard ROUGE metrics (Lin, 2004), i.e. Rouge-1, Rouge-2, and Rouge-L.
Grammatical error correction takes a sentence with grammatical errors as input and outputs a corrected sentence. We used the CONLL14 dataset as the testbed (1.4M instances). The MaxMatch (M²) scores (Dahlmeier & Ng, 2012) were used for evaluation, with precision, recall, and F0.5 values.
The machine translation task has distant input/output domains (i.e. in different languages), while the other tasks have similar input/output domains (i.e. in the same language). We used Transformer (Vaswani et al., 2017) as the Seq2Seq model. Details of the datasets and model training are listed in Appendix A.1.
BEHAVIOR OF ENCODERFUSION
In this section, we first formulate our research hypothesis of the source representation bottleneck (§3.1) that EncoderFusion is expected to solve. In the following subsections, we propose a fine-grained layer attention model (§3.2) to validate our hypothesis through well-designed experiments (§3.3).
SOURCE REPRESENTATION BOTTLENECK
Seq2Seq models learn more abstract features as the layer level increases (i.e. $X^0 \rightarrow X^N$ and $Y^0 \rightarrow Y^M$) (Belinkov et al., 2017). It has been extensively validated that a reasonable use of both the abstract representations (at higher-level layers) and the surface representations (at lower-level layers) is beneficial for various NLP (Lu & Li, 2013; Hu et al., 2014; Dou et al., 2018; Peters et al., 2018) and CV (Long et al., 2014; Pinheiro et al., 2016; Lin et al., 2017; Chen et al., 2018a) tasks.
However, the Seq2Seq decoder only takes the abstract representations at the uppermost layer $X^N$ as input (Equation 2), ignoring the useful surface representations at the other layers $X^n$ ($n < N$). Although $X^N$ has encoded surface features from low-level representations through layer-by-layer abstraction and residual connections, we hypothesize that its limited representation capacity may not sufficiently model the surface features from lower encoder layers, especially the embedding layer. We call this issue the source representation bottleneck.
FINE-GRAINED LAYER ATTENTION
For each decoder layer, layer attention (Bapna et al., 2018; Peters et al., 2018) assigns normalized scalar weights to all encoder layers, providing a direct way to evaluate the contribution made by each encoder layer. However, the capacity of a simple scalar weight is limited, leading to an insufficient evaluation of the contributions.
Motivated by fine-grained attention (Choi et al., 2018), in which each element of a context vector receives an individual attention weight, we propose a fine-grained layer attention model that combines the advantages of both techniques. This allows us to more convincingly evaluate the contribution of each individual encoder layer to the model performance. Besides, the nature of fine-grained attention enables the in-depth analyses of representation power in §3.3.
Specifically, we replace the layer-agnostic source representation X N with the layer-aware representation S m for each decoder layer Y m , which is calculated as:
$$S^m = \sum_{n=0}^{N} \hat{w}_{m,n} \odot X^n, \qquad \hat{w}_{m,n} = \big[\hat{w}_{m,n,1}, \dots, \hat{w}_{m,n,D}\big], \qquad \hat{w}_{m,n,d} = \frac{\exp(w_{m,n,d})}{\sum_{n'=0}^{N} \exp(w_{m,n',d})}$$
where $\odot$ denotes element-wise multiplication, and $w_{m,n,d}$ denotes an element of the learnable attention weight $W \in \mathbb{R}^{M \times (N+1) \times D}$, where $D$ is the dimensionality of the source representation. When $n = 0$, we use the word embeddings $X_{emb}$ without position embeddings as $X^0$, which we empirically found effective. We applied a regularization technique, DropConnect (Wan et al., 2013), to the attention weight $W$ for stable training: each $w_{m,n,d}$ is randomly dropped with probability $p$ and $W$ is divided by $1 - p$. We set $p$ to 0.3 for all the experiments.

Table 2 lists the results. The proposed fine-grained layer attention model consistently outperforms the vanilla Transformer across Seq2Seq tasks, demonstrating the benefit of fusing surface features at lower-level layers. We also evaluated several EncoderFusion methods in Table 1, including layer aggregation (Dou et al., 2018), layer-wise coordination (He et al., 2018), and coarse-grained layer attention (Bapna et al., 2018). Their results are respectively 34.05, 34.19, and 34.32 BLEU, all lower than that of fine-grained layer attention (34.45). Based on these results, we choose fine-grained layer attention as the representative of EncoderFusion in the following analyses.
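A minimal sketch of the fine-grained layer attention above, assuming the encoder layer outputs (the embedding layer X^0 plus the N layer outputs) are stacked into one tensor. The class and variable names are ours, and applying DropConnect to the raw weights before the softmax is one plausible reading of the description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineGrainedLayerAttention(nn.Module):
    """Computes S^m = sum_n w_hat[m, n] * X^n, where each of the D dimensions
    receives its own attention weight, normalized over the N+1 encoder layers."""
    def __init__(self, n_dec_layers, n_enc_layers, d_model, drop_p=0.3):
        super().__init__()
        # W in R^{M x (N+1) x D}, learnable
        self.w = nn.Parameter(torch.zeros(n_dec_layers, n_enc_layers + 1, d_model))
        self.drop_p = drop_p

    def forward(self, enc_layers, m):
        # enc_layers: (N+1, batch, src_len, D) -- embedding layer X^0 plus N layers
        w = self.w[m]                                   # (N+1, D)
        if self.training:
            # DropConnect: randomly zero individual weights, rescale the rest
            mask = torch.rand_like(w).ge(self.drop_p).float()
            w = w * mask / (1.0 - self.drop_p)
        w_hat = F.softmax(w, dim=0)                     # normalize over layers, per dimension
        # broadcast element-wise multiplication, then sum over the layer axis
        return (w_hat[:, None, None, :] * enc_layers).sum(dim=0)
```

The layer-aware representation returned here replaces `x_top` in the decoder's cross-attention, one `m` per decoder layer.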
BEHAVIOR CHANGES ACROSS ENCODER LAYERS
In this section, we investigate whether the surface features at lower encoder layers (especially the encoder embedding layer) contribute to the model performance via carefully designed experiments.
[Figure 1: Attention distribution of each decoder layer (x-axis) attending to the encoder layers (y-axis). Panels: (a) Translation: Ro-En; (b) Summarization; (c) Correction.]

Visualization of layer attention We first visualize the learned layer attention distribution in Figure 1, in which each weight is the attention weight averaged over all dimensions. Generally, a higher weight denotes a larger contribution of an encoder layer to the corresponding decoder layer.
Clearly, in all tasks the higher decoder layers, especially the uppermost one, pay more attention to the encoder embedding layer, which indicates that the surface representations potentially contribute additional useful features to the model. Voita et al. (2019) reveal that the upper layers of the decoder are responsible for the translation part while the lower layers handle the language modeling part. Similarly, our results show that surface representations might play an important role in learning to translate source tokens.
Among the Seq2Seq models, there are still considerable differences in the attention heatmaps. In the summarization model, almost all decoder layers focus more on the encoder embedding layer, while in the other two models the intermediate decoder layers pay more attention to the higher-level encoder layers. This is consistent with the findings of Rothe et al. (2019), who reveal that the summarization task, as a typical extractive generation task, tends to use more surface features to generate extractive summaries. In contrast, both the machine translation and error correction tasks require a large amount of syntactic and semantic information, which is generally embedded in higher-level encoder layers (Peters et al., 2018).
However, we still cannot conclude that the source representation bottleneck exists in Seq2Seq models, since the surface features might merely act as a noise regularizer that improves the robustness of the encoder output representations. To dispel this doubt, we design two further experiments to directly evaluate the effectiveness of the surface features at the encoder embedding layer.
Contribution of individual encoder layers In this experiment, we quantitatively analyze the behavior changes of a trained Seq2Seq model when masking a specific encoder layer (i.e. setting its attention weight to zero and redistributing the other attention weights). Note that the masking operation does not affect the information flow of the encoding computation, i.e. Equation 1 is unchanged. Figure 2(a) shows the contribution of each encoder layer to model performance. As seen, masking the encoder embedding layer seriously harms model performance in all tasks, which confirms our claim that the surface features in the embedding layer are essential to Seq2Seq models.
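The masking operation can be sketched as follows: setting the masked layer's attention logits to negative infinity zeroes its softmax weight, and the remaining weights are renormalized automatically. This is our illustration of the procedure, not the authors' code.

```python
import torch
import torch.nn.functional as F

def masked_layer_weights(w, masked_layer):
    """w: raw attention weights of shape (N+1, D) for one decoder layer.
    Setting a layer's logits to -inf drives its softmax weight to zero and
    redistributes the probability mass over the remaining layers."""
    w = w.clone()
    w[masked_layer] = float("-inf")
    return F.softmax(w, dim=0)

# Example: mask the embedding layer (index 0) of a 7-way distribution
w = torch.randn(7, 512)
w_hat = masked_layer_weights(w, masked_layer=0)
assert torch.allclose(w_hat[0], torch.zeros(512))
```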
Length
Figure 2(b) shows the results on output length. Masking the encoder embedding layer consistently increases the length of the generated output, especially for the summarization model. One possible reason is that instances in the translation and correction tasks have similar input/output lengths, while summarization instances have very different input/output lengths. By analyzing the model outputs, we found that the Seq2Seq models tend to generate hallucinatory (i.e. fluent but unfaithful to the source) predictions (Lee et al., 2019; Wang & Sennrich, 2020) when the embedding layer is masked. Taking the correction task as an example, a correct prediction "anyone" was replaced by the hallucinatory prediction "friends of anyone" in the masked model, even though the corresponding source contains no information related to "friends". This issue becomes worse in the summarization task, since a hallucinatory prediction is more likely to be a whole sentence.
The additional hallucinations increase the output length and reduce the model performance.

Expressivity of attended dimensions in the encoder embedding layer As shown in Figure 1, the uppermost decoder layer pays most attention to the encoder embedding layer (i.e. the lower right corner). If the embedding layer acted as a noise regularizer, its dimensions would be randomly attended by the fine-grained model; otherwise, the dimensions with higher attention weights should be distinguishable from the other dimensions.
Starting from this intuition, we reordered the dimensions of the encoder embedding layer according to the attention weights $\hat{w}_{M,0}$, and split it into two equal sub-embedding matrices, i.e. the more attended dimensions and the less attended dimensions. We compared the expressivity of the two sub-embedding matrices using the commonly-used singular value decomposition (Gao et al., 2019; Wang et al., 2019a; Shen et al., 2020), in which higher normalized singular values denote that the embedding is more uniformly distributed and thus more expressive.

From the above experiments, we conclude that the encoder embedding layer indeed provides useful surface information, which is not fully exploited by standard Seq2Seq models.
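The expressivity analysis can be reproduced along these lines: sort the embedding dimensions by the attention weights ŵ_{M,0}, split them into two halves, and compare normalized log-scale singular values. The shapes, names, and mean-centering step below are our own assumptions.

```python
import numpy as np

def log_normalized_singular_values(emb):
    """Normalized log singular values of an embedding matrix (vocab x dims).
    Slower decay means the dimensions span a larger sub-space, i.e. are more expressive."""
    s = np.linalg.svd(emb - emb.mean(axis=0), compute_uv=False)
    return np.log(s / s.max())

def split_by_attention(emb, w_m0):
    """Split the embedding columns into the more- and less-attended halves
    according to the attention weights over the D dimensions."""
    order = np.argsort(-w_m0)          # most attended first
    half = len(order) // 2
    return emb[:, order[:half]], emb[:, order[half:]]

# Hypothetical shapes: 10k-word vocabulary, 512-dimensional embeddings
emb = np.random.randn(10000, 512)
w_m0 = np.random.rand(512)
more, less = split_by_attention(emb, w_m0)
print(log_normalized_singular_values(more)[:5])
print(log_normalized_singular_values(less)[:5])
```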
OUR METHOD
In Section 3, we showed that the uppermost decoder layer requires more surface features for better representation learning. One possible reason is that the uppermost decoder layer is used to predict individual target tokens, which naturally benefits more from token-level surface features than from sequence-level abstract features. To validate this assumption, we simplify fine-grained layer attention so that only the uppermost decoder layer can attend to the embedding layer and output layer of the encoder. Empirical results show that this simplified variant works on par with the original one, revealing that the surface features embedded at the source embedding layer are expressive.
Although the layer attention model partially alleviates the source representation bottleneck, it potentially introduces unnecessary intermediate encoder representations. To close this gap, we propose to directly connect the decoder softmax layer and the encoder embedding layer with a simple SurfaceFusion method.
SURFACEFUSION
Seq2Seq learning aims to maximize the log-likelihood of a target sequence $y$ given a source sequence $x$. In practice, it factorizes the likelihood of the target sequence into individual token likelihoods:
$$\hat{y} = \arg\max \sum_{j=1}^{J} \log P(y_j) = \arg\max \sum_{j=1}^{J} \log P(y_j \mid y_{<j}, x) \tag{3}$$
We rewrite $P(y_j)$ as a fused probability with a second term conditioned on $x$:
$$\log P(y_j) = \Phi\big(P(y_j \mid y_{<j}, x),\; P(y_j \mid x)\big) \tag{4}$$
where $\Phi(\cdot)$ is a fusion function described below, and $P(y_j \mid x)$ is a probability conditioned on the source surface features. Specifically, we employ a multi-head dot-product attention network (Vaswani et al., 2017) with the decoder output representation $y_j^M$ as the query, the encoder output representations $X^N$ as the keys, and the encoder surface representations $X_{emb}$ as the values, to compute a surface representation $r(y_j, x)$.
We then use the pre-softmax weight $V \in \mathbb{R}^{d \times |\mathcal{V}_y|}$ of the vanilla model to transform the surface representation $r(y_j, x) \in \mathbb{R}^d$ into a pre-softmax logit $\tilde{r}(y_j, x) \in \mathbb{R}^{|\mathcal{V}_y|}$. The final surface probability is calculated as:

$$P(y_j \mid x) = \frac{\exp\big(\mathbf{1}_{y_j}(\tilde{r}(y_j, x)) / \tau\big)}{\sum_{w \in \mathcal{V}_y} \exp\big(\mathbf{1}_{w}(\tilde{r}(y_j, x)) / \tau\big)} \tag{5}$$
where $\mathbf{1}_w(\cdot)$ denotes an index function that takes the logit of token $w$, and $\tau$ denotes a softmax temperature controlling the smoothness of the distribution $P(y_j \mid x)$. As $\tau$ approaches 0, the distribution tends to a one-hot distribution concentrated on the token of maximum probability; at higher $\tau$, the distribution becomes more uniform.
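In code, Equation 5 amounts to a linear projection with the shared pre-softmax weight followed by a temperature-scaled softmax; a sketch under our own naming.

```python
import torch
import torch.nn.functional as F

def surface_probability(r, V, tau=1.0):
    """r: surface representations (batch, tgt_len, d) from attending over X_emb.
    V: pre-softmax weight of shape (d, vocab), shared with the vanilla model.
    Returns P(y_j | x): (batch, tgt_len, vocab); larger tau gives a smoother distribution."""
    logits = r @ V                      # \tilde{r}(y_j, x) in R^{|V_y|}
    return F.softmax(logits / tau, dim=-1)

r = torch.randn(2, 7, 512)
V = torch.randn(512, 32000)
p_sharp = surface_probability(r, V, tau=0.1)    # near one-hot
p_smooth = surface_probability(r, V, tau=5.0)   # near uniform
```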
Choices of fusion function Φ There are many variants of fusion methods (Gulcehre et al., 2015; Sriram et al., 2017; Stahlberg et al., 2018). The aim of this paper is not to explore this whole space but simply to show that two fairly straightforward implementations work well and that SurfaceFusion helps sequence-to-sequence models:
Hard fusion:
$$\Phi_{\mathrm{hard}} = \lambda \log P(y_j \mid y_{<j}, x) + (1 - \lambda) \log P(y_j \mid x) \tag{6}$$
Soft fusion:
$$\Phi_{\mathrm{soft}} = \log \mathrm{softmax}\big(E(y_j \mid y_{<j}, x) + \log P(y_j \mid x)\big) \tag{7}$$
where $\lambda$ is a pre-defined interpolation weight, and $E(y_j \mid y_{<j}, x)$ is the pre-softmax logit of the probability $P(y_j \mid y_{<j}, x)$. Compared to hard fusion, soft fusion removes the need to manually set the hyperparameter $\lambda$.
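Both fusion functions translate directly into a few lines of code; the sketch below follows Equations 6 and 7 with our own variable names and should not be read as the released implementation.

```python
import torch
import torch.nn.functional as F

def hard_fusion(log_p_seq, log_p_surface, lam=0.9):
    """Equation 6: interpolate the two log-probabilities with weight lambda."""
    return lam * log_p_seq + (1.0 - lam) * log_p_surface

def soft_fusion(logits_seq, log_p_surface):
    """Equation 7: add the surface log-probability to the pre-softmax logits,
    then renormalize; no interpolation hyperparameter is needed."""
    return F.log_softmax(logits_seq + log_p_surface, dim=-1)

logits = torch.randn(2, 7, 32000)                     # E(y_j | y_<j, x)
log_p_seq = F.log_softmax(logits, dim=-1)             # log P(y_j | y_<j, x)
log_p_surface = F.log_softmax(torch.randn(2, 7, 32000), dim=-1)
out_hard = hard_fusion(log_p_seq, log_p_surface)
out_soft = soft_fusion(logits, log_p_surface)
```

Because soft fusion intervenes before the final normalization, it gives the surface signal an earlier, logit-level influence on the prediction, which matches the discussion of the two variants later in the paper.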
The proposed SurfaceFusion method is easy to use. There are only two additional hyperparameters, λ (Equation 6) and τ (Equation 5). We find that λ is sensitive to the corpus scale but insensitive to the relationship between the input and output domains; it was set to 0.9 for the En-De, En-Fr, and correction tasks, and 0.8 for the Ro-En and summarization tasks. τ was set to 5 for soft fusion and 1 for hard fusion across the different benchmarks. We kept all other settings the same as in the vanilla models. In practice, we observed an additional 10% inference latency from SurfaceFusion.

EXPERIMENTAL RESULTS

Model performance Table 2 lists the results of the proposed approach on the different tasks. In addition to the vanilla Seq2Seq model ("Vanilla"), we also report the results of existing studies on the same datasets ("Existing") for better comparison. Our re-implementation of the vanilla models matches the results reported in previous works, which we believe makes the evaluation convincing.

Closeness of word embeddings SurfaceFusion shortens the path distance between source and target embeddings, which helps learn better bilingual embeddings through direct interactions. Table 3 shows the cosine similarities between the tied source and target embeddings on the Ro-En translation task.
In this experiment, we first train an additional aligner (fast-align (Dyer et al., 2013)) on the training corpus and use the alignment links to construct a word dictionary. The results calculated over the dictionary show that the relationship between the source and target embeddings becomes much closer (i.e. higher cosine similarities). This helps the two sides learn better representations from each other, which has been validated to be beneficial for Seq2Seq models (Press & Wolf, 2017; Liu et al., 2019).
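The closeness measurement can be sketched as follows, assuming a fast-align-derived dictionary of (source, target) index pairs into the tied embedding matrix; the dictionary and shapes below are hypothetical.

```python
import torch
import torch.nn.functional as F

def mean_aligned_cosine(emb, pairs, skip_shared=False):
    """emb: tied embedding matrix (vocab, d). pairs: list of (src_id, tgt_id)
    built from fast-align links. Optionally skip pairs where the source and
    target word are identical (the 'Non-Shared' setting of Table 3)."""
    sims = []
    for s, t in pairs:
        if skip_shared and s == t:
            continue
        sims.append(F.cosine_similarity(emb[s], emb[t], dim=0).item())
    return sum(sims) / len(sims)

emb = torch.randn(100, 512)                  # hypothetical tied embeddings
pairs = [(1, 5), (2, 2), (7, 40)]            # hypothetical alignment dictionary
print(mean_aligned_cosine(emb, pairs))       # 'All'
print(mean_aligned_cosine(emb, pairs, True)) # 'Non-Shared'
```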
Expressivity of word embeddings In this experiment, we quantitatively evaluate the expressivity of the word embeddings learned by the different models using singular value decomposition. The experimental details and procedure are similar to those of Figure 3.
RELATED WORK
EncoderFusion in Seq2Seq Lower encoder layers that embed useful surface features are far away from the training signals, which makes it difficult for deep Seq2Seq models to exploit such features. Although residual connections (He et al., 2016) have been incorporated to combine layers, these connections are themselves "shallow" and only fuse by simple, one-step operations (Yu et al., 2018). In response to this problem, several approaches have been proposed to fuse the encoder layers with more advanced methods, such as layer attention (Bapna et al., 2018; Shen et al., 2018; Wang et al., 2019c), layer aggregation (Dou et al., 2018; Wang et al., 2018a; Dou et al., 2019; Li et al., 2020), and layer-wise coordination (He et al., 2018; Liu et al., 2020). Although these methods show promising results on different NLP tasks, not much is known about how EncoderFusion works. In addition, other studies show that exploiting low-layer encoder representations fails to improve model performance (Domhan, 2018).
In this paper, we reconcile the conflicting conclusions of existing studies by pointing out that the encoder embedding layer is the key, which helps Seq2Seq models precisely predict target words. Based on this finding, we propose a novel SurfaceFusion method that directly connects the encoder embedding layer and the softmax layer, and that consistently outperforms current EncoderFusion approaches across different NLP tasks.
Variants of Feature Fusion Feature fusion aims to merge two sets of features into one and is frequently employed in CV tasks, such as semantic segmentation (Long et al., 2014; Chen et al., 2018a) and object detection (Pinheiro et al., 2016; Lin et al., 2017). Zhang et al. (2018) show that simply fusing surface and abstract features tends to be less effective due to the gap in semantic levels.
For NLP tasks, researchers have investigated fusion models for language understanding (Lu & Li, 2013; Hu et al., 2014; Peters et al., 2018) and language generation (Gulcehre et al., 2015; Sriram et al., 2017; Stahlberg et al., 2018). Nguyen & Chiang (2019) propose to fuse features at the representation level, but we empirically find that this kind of fusion is not orthogonal to multi-layer models due to the large semantic gap. Gulcehre et al. (2015) combine the predictions of a Seq2Seq model with external LM predictions in a late fusion manner, which poses little impact on the original information flow. Stahlberg et al. (2018) improve upon it by removing the dependence on a manually defined hyperparameter. In this work, we demonstrate the effectiveness of these two typical probability-level fusion methods on sequence-to-sequence learning tasks. Unlike those works, which rely on an external model, our approach only requires a surface attention module that can be jointly trained with the vanilla Seq2Seq model.
CONCLUSION AND FUTURE WORK
In this paper, we investigate how encoder layer fusion mitigates the source representation bottleneck. Based on a series of experiments on different Seq2Seq tasks, we find that the encoder embedding layer is important to the success of EncoderFusion through its useful surface information. Based on this observation, we propose a novel SurfaceFusion method that directly connects the encoder embedding layer and the softmax layer. Experiments show that SurfaceFusion consistently outperforms conventional EncoderFusion on several datasets. Extensive analyses reveal that SurfaceFusion enhances the learning of expressive bilingual word embeddings for Seq2Seq models, which confirms our claim.
Future directions include validating our findings on more Seq2Seq tasks (e.g. dialogue and speech recognition) and model architectures (e.g. RNMT+ (Chen et al., 2018b) and DynamicConv (Wu et al., 2019)). It is also worthwhile to explore more alternatives to EncoderFusion from the perspective of exploiting the embedding layer.
A.1 EXPERIMENTAL SETUP

Table 4: Statistics of the datasets and hyperparameters for the experiments. All the data have been tokenized and split into joint sub-word units (Sennrich et al., 2016). "Batch" denotes the number of source and target tokens used in each training step. "DP" denotes the dropout value (Srivastava et al., 2014). "LP" denotes the length penalty (Wu et al., 2016). "Base" and "Big" denote the two model variants of the Transformer. We chose the checkpoint with the best validation perplexity for testing.

| | Vocab (Src/Tgt) | Train | Dev | Test | Model | Batch | Step | DP | Beam | LP |
|---|---|---|---|---|---|---|---|---|---|---|
| Ro-En | 34,976 | 0.6M | 2K | 2K | Base | 16K | 60K | 0.3 | 4 | 1.0 |
| En-De | 32,768 | 4.5M | 3K | 3K | Big | 460K | 30K | 0.3 | 5 | 0.6 |
| En-Fr | 36,736 | 35.5M | 6K | 3K | Big | 460K | 80K | 0.1 | 5 | 0.9 |
| CNN/DM | 50,264 | 0.3M | 13K | 11K | Base | 64K | 70K | 0.1 | 4 | 2.0 |
| CONLL | 33,352 | 1.3M | 5K | 1K | Base | 64K | 80K | 0.2 | 6 | 0.6 |

Machine translation For WMT16 Romanian-English, we used the preprocessed data¹ and existing result from Ghazvininejad et al. (2019). The validation set is newsdev2016 and the test set is newstest2016. For WMT14 English-German, the preprocessed data² and existing result are derived from Ott et al. (2018). The validation set is newstest2013 and the test set is newstest2014. For WMT14 English-French, we reported the existing result from Ott et al. (2018) and followed them to preprocess the data sets. The validation set is newstest2012+2013 and the test set is newstest2014.
Text summarization For the CNN/Daily Mail dataset, we used the existing result and preprocessing method of Ott et al. (2019). During testing, the minimum length was set to 55 and the maximum length to 140, both tuned on the development data. We also followed Paulus et al. (2018) in disallowing the repetition of the same trigram.
Grammatical error correction For the CONLL14 benchmark, the preprocessing script³ and existing result are given by Chollampatt & Ng (2018). We applied the regularization technique SwitchOut (Wang et al., 2018b) to this task to prevent overfitting, with the rate set to 0.8 for the source and 0.9 for the target. Table 4 gives more details of the benchmarks. All other unmentioned hyperparameters are kept the same as in the original Transformer paper (Vaswani et al., 2017). All the models are implemented with the open-source toolkit fairseq (Ott et al., 2019).⁴
A.2 CASE STUDY
Tables 5, 6 and 7 give example cases from the three tasks. We can see that the hallucination issue related to surface features appears consistently across the different Seq2Seq tasks. The most representative cases are those from the correction task, in which the models make such mistakes even for very similar input/output sequences.
Another observation is the prediction omission problem when masking the encoder output layer. The lack of abstract features leads to incomplete semantics in the source representations, making Seq2Seq models omit parts of the source during generation and hurting model performance. Looking at the cases across the three tasks, we find that prediction omission is widespread in the prediction of modifiers, e.g. adjectives and adverbs.

Table 5: Examples from the Ro-En translation task. Red words are good predictions, while blue words are bad predictions. Masking the embedding layer ("Mask Emb") of the fine-grained layer attention model leads to hallucinatory predictions, prolonging the prediction length, while masking the output layer ("Mask Out") leads to prediction omissions, shortening it.
Hallucination
Source: diseara voi merge acasa si voi dormi linistit .
Reference: i will go home tonight and sleep well .
Vanilla: i will go home and sleep quietly .
Mask Emb: the device will go home and i will sleep peacefully .
Mask Out: i will go home and sleep quietly .

Omission
Source: radem adesea mult atunci cand vorbim .
Reference: we often laugh a lot when we talk .
Vanilla: we often laugh a lot when we talk .
Mask Emb: we often laugh a lot when we talk .
Mask Out: we often laugh when we talk .

Table 6: Examples from the CNN/DM summarization task.
Hallucination
Source: ... But it is able to carry just as much power - 400,000 volts . It is designed to be less obtrusive and will be used for clean energy purposes ...
Reference: ... But it is able to carry just as much power - 400,000 volts . It is designed to be less obtrusive and will be used for clean energy .
Vanilla: ... But it is able to carry just as much power - 400,000 volts . It is designed to be less obtrusive and will be used for clean energy .
Mask Emb: ... It is able to carry just as much power - 400,000 volts . The design is a T-shape , with two ' hanging baskets ' either side ...
Mask Out: ... But it is able to carry just as much power - 400,000 volts . It is designed to be less obtrusive and will be used for clean energy .

Omission
Source: ... Opening statements in his trial are scheduled to begin Monday ...
Reference: ... Opening statements are scheduled Monday in the trial of James Holmes ...
Vanilla: ... Prosecutors are not swayed , will seek the death penalty . Opening statements in his trial are scheduled to begin Monday . Holmes says he was suffering ' a psychotic episode ' at the time ...
Mask Emb: ... Prosecutors are not swayed , will seek the death penalty . Opening statements in his trial are scheduled to begin Monday . Holmes says he was suffering ' a psychotic episode ' at the time ...
Mask Out: ... Prosecutors are not swayed and will seek the death penalty . Holmes says he was suffering ' a psychotic episode ' at the time ...
Figure 2: Relative changes of (a) model performance and (b) output length when masking individual encoder layers in the trained Seq2Seq models. As seen, masking the embedding layer leads to a significant drop in model performance and an increase in output length.
In addition, Lee et al. (2019) point out that even if hallucinations occur only occasionally, the Seq2Seq model may lose user trust more readily than with other prediction problems, underlining the importance of fusing surface features at the embedding layer. More cases are studied in Appendix A.2.
Figure 3: Log-scale singular values of the three sub-embedding matrices in the fine-grained layer attention models. Higher log singular values denote more expressive dimensions. The singular values are normalized by dividing them by the largest value, and their log-scale values are reported for clarity.
Figure 3 depicts the singular value results. For comparison, we also report the values of randomly selected dimensions. Clearly, the more attended dimensions are the most expressive, while the less attended dimensions are the least expressive. These results demonstrate that the fine-grained attention model indeed extracts useful surface information from the encoder embedding layer, which therefore does not merely play the role of a noise regularizer.
Figure 4: Log-scale singular values of the embeddings.

Figure 4 shows the results for the tied source and target embeddings on the Ro-En translation task. The word embeddings of the vanilla model have fast-decaying singular values, which limits the representational power of the embeddings to a small sub-space. The SurfaceFusion model slows down the decay, and the singular values become more uniformly distributed, demonstrating that the fused surface features remarkably enhance the representation learning of the embeddings. This provides a better starting point for the model to effectively extract surface and abstract features, leading to improved model performance.
Table 1: Results of existing encoder layer fusion methods on the WMT16 Ro-En translation task.

| Model | BLEU |
|---|---|
| Vanilla Transformer | 33.80 |
| Layer aggregation | 34.05 |
| Layer-wise coordination | 34.19 |
| Coarse-grained layer attention | 34.32 |
| Fine-grained layer attention | 34.45 |
Table 2: Results of the proposed SurfaceFusion methods on the Seq2Seq tasks. "FGLA" denotes the fine-grained layer attention model.
Table 3: Cosine similarities between aligned source and target word embeddings. "All" and "Non-Shared" denote keeping or removing the aligned pairs in which the source and target words are identical, which are easier to align.

| Model | All | Non-Shared |
|---|---|---|
| Vanilla | 0.602 | 0.338 |
| SurfaceFusion | 0.650 | 0.417 |
Table 7: Examples from the CONLL correction task.

Hallucination
Source: They can become anyone .
Reference: They can become anyone .
Vanilla: They can become anyone .
Mask Emb: They can become friends with anyone .
Mask Out: They can become anyone .

Omission
Source: In conclude , people should think carefully of what is the consequences of telling the relatives his or her generic disorder issue .
Reference: In conclusion , people should think carefully about what is the consequences of telling the relatives his or her generic disorder issue .
Vanilla: In conclusion , people should think carefully about what is the consequences of telling the relatives his or her generic disorder issue .
Mask Emb: In conclusion , people should think carefully about what is the consequences of telling the relatives his or her generic disorder issue .
Mask Out: In conclusion , people should think carefully about what is the consequences of telling the relatives his or her generic issue .
Clearly, the proposed fusion approaches outperform the baselines (i.e. "Vanilla" and "FGLA") in all cases, while there are still considerable differences among the model variations. Hard fusion performs better on the translation tasks, while soft fusion is superior on the summarization and correction tasks. Unlike hard fusion, which operates at the probability level, soft fusion operates at the logit level to provide an earlier and more direct way of fusing surface features, which may be a better solution for tasks with similar input/output domains.
¹ https://drive.google.com/uc?id=1YrAwCEuktG-iDVxtEW-FE72uFTLc5QMl
² https://drive.google.com/uc?id=0B_bZck-ksdkpM25jRUN2X2UxMm8
³ https://github.com/nusnlp/mlconvgec2018/blob/master/data/prepare_data.sh
⁴ https://github.com/pytorch/fairseq
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
Ankur Bapna, Mia Xu Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. Training deeper neural machine translation models with transparent attention. In EMNLP, 2018.
Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. What do neural machine translation models learn about morphology? In ACL, 2017.
Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018a.
Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. The best of both worlds: Combining recent advances in neural machine translation. In ACL, 2018b.
Heeyoul Choi, Kyunghyun Cho, and Yoshua Bengio. Fine-grained attention mechanism for neural machine translation. Neurocomputing, 2018.
Shamil Chollampatt and Hwee Tou Ng. A multilayer convolutional encoder-decoder neural network for grammatical error correction. In AAAI, 2018.
Daniel Dahlmeier and Hwee Tou Ng. Better evaluation for grammatical error correction. In NAACL, 2012.
Tobias Domhan. How much attention do you need? A granular analysis of neural machine translation architectures. In ACL, 2018.
Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, and Tong Zhang. Exploiting deep representations for neural machine translation. In EMNLP, 2018.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In NAACL, 2019.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: A method for automatic evaluation of machine translation. In ACL, 2002.
Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. In ICLR, 2018.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In NAACL, 2018.
Pedro H. O. Pinheiro, Tsung-Yi Lin, Ronan Collobert, and Piotr Dollár. Learning to refine object segments. In ECCV, 2016.
Ofir Press and Lior Wolf. Using the output embedding to improve language models. In EACL, 2017.
Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. Leveraging pre-trained checkpoints for sequence generation tasks. arXiv, 2019.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In ACL, 2016.
Sheng Shen, Zhewei Yao, Amir Gholami, Michael Mahoney, and Kurt Keutzer. Rethinking batch normalization in transformers. In ICML, 2020.
Yanyao Shen, Xu Tan, Di He, Tao Qin, and Tie-Yan Liu. Dense information flow for neural machine translation. In NAACL, 2018.
Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, and Adam Coates. Cold fusion: Training seq2seq models together with language models. arXiv, 2017.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.
Felix Stahlberg, James Cross, and Veselin Stoyanov. Simple fusion: Return of the language model. In WMT, 2018.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In ACL, 2019.
Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using DropConnect. In ICML, 2013.
Chaojun Wang and Rico Sennrich. On exposure bias, hallucination and domain shift in neural machine translation. arXiv, 2020.
Dilin Wang, Chengyue Gong, and Qiang Liu. Improving neural language modeling via adversarial training. In ICML, 2019a.
Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, and Jingming Liu. Denoising based sequence-to-sequence pre-training for text generation. In EMNLP, 2019b.
Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li, and Jingbo Zhu. Multi-layer representation fusion for neural machine translation. In COLING, 2018a.
Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F Wong, and Lidia S Chao. Learning deep transformer models for machine translation. In ACL, 2019c.
Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. SwitchOut: An efficient data augmentation algorithm for neural machine translation. In EMNLP, 2018b.
Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. In ICLR, 2019.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv, 2016.
Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. Deep layer aggregation. In CVPR, 2018.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J Liu. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In ICML, 2020.
Zhenli Zhang, Xiangyu Zhang, Chao Peng, Dazhi Cheng, and Jian Sun. ExFuse: Enhancing feature fusion for semantic segmentation. In ECCV, 2018. |
212,756 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2. | [] | Regularizing and Optimizing LSTM Language Models
Stephen Merity
Nitish Shirish Keskar
Richard Socher
Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.
Introduction
Effective regularization techniques for deep learning have been the subject of much research in recent years. Given the over-parameterization of neural networks, generalization performance crucially relies on the ability to regularize the models sufficiently. Strategies such as dropout (Srivastava et al., 2014) and batch normalization (Ioffe & Szegedy, 2015) have found great success and are now ubiquitous in feed-forward and convolutional neural networks. Naïvely applying these approaches to the case of recurrent neural networks (RNNs), however, has not been highly successful. Many recent works have hence been focused on the extension of these regularization strategies to RNNs; we briefly discuss some of them below.
A naïve application of dropout (Srivastava et al., 2014) to an RNN's hidden state is ineffective as it disrupts the RNN's ability to retain long term dependencies (Zaremba et al., 2014). Gal & Ghahramani (2016) propose overcoming this problem by retaining the same dropout mask across multiple time steps as opposed to sampling a new binary mask at each timestep. Another approach is to regularize the network through limiting updates to the RNN's hidden state. One such approach is taken by Semeniuta et al. (2016) wherein the authors drop updates to network units, specifically the input gates of the LSTM, in lieu of the units themselves. This is reminiscent of zoneout (Krueger et al., 2016) where updates to the hidden state may fail to occur for randomly selected neurons.
Instead of operating on the RNN's hidden states, one can regularize the network through restrictions on the recurrent matrices as well. This can be done either through restricting the capacity of the matrix (Arjovsky et al., 2016;Wisdom et al., 2016;Jing et al., 2016) or through element-wise interactions (Balduzzi & Ghifary, 2016;Bradbury et al., 2016;Seo et al., 2016).
Other forms of regularization explicitly act upon activations such as batch normalization (Ioffe & Szegedy, 2015), recurrent batch normalization (Cooijmans et al., 2016), and layer normalization (Ba et al., 2016). These all introduce additional training parameters and can complicate the training process while increasing the sensitivity of the model.
In this work, we investigate a set of regularization strategies that are not only highly effective but which can also be used with no modification to existing LSTM implementations. The weight-dropped LSTM applies recurrent regularization through a DropConnect mask on the hidden-to-hidden recurrent weights. Other strategies include the use of randomized-length backpropagation through time (BPTT), embedding dropout, activation regularization (AR), and temporal activation regularization (TAR).
As no modifications are required of the LSTM implementation these regularization strategies are compatible with black box libraries, such as NVIDIA cuDNN, which can be many times faster than naïve LSTM implementations.
Effective methods for training deep recurrent networks have also been a topic of renewed interest. Once a model has been defined, the training algorithm used is required to not only find a good minimizer of the loss function but also converge to such a minimizer rapidly. The choice of the optimizer is even more important in the context of regularized models since such strategies, especially the use of dropout, can impede the training process. Stochastic gradient descent (SGD), and its variants such as Adam (Kingma & Ba, 2014) and RMSprop (Tieleman & Hinton, 2012) are amongst the most popular training methods. These methods iteratively reduce the training loss through scaled (stochastic) gradient steps. In particular, Adam has been found to be widely applicable despite requiring less tuning of its hyperparameters. In the context of word-level language modeling, past work has empirically found that SGD outperforms other methods in not only the final loss but also in the rate of convergence. This is in agreement with recent evidence pointing to the insufficiency of adaptive gradient methods (Wilson et al., 2017).
Given the success of SGD, especially within the language modeling domain, we investigate the use of averaged SGD (ASGD) (Polyak & Juditsky, 1992), which is known to have superior theoretical guarantees. ASGD carries out iterations similar to SGD, but instead of returning the last iterate as the solution, returns an average of the iterates past a certain threshold T. This threshold T is typically tuned and has a direct impact on the performance of the method. We propose a variant of ASGD where T is determined on the fly through a non-monotonic criterion and show that it achieves better training outcomes compared to SGD.
Weight-dropped LSTM
We refer to the mathematical formulation of the LSTM,
$$
\begin{aligned}
i_t &= \sigma(W^i x_t + U^i h_{t-1}) \\
f_t &= \sigma(W^f x_t + U^f h_{t-1}) \\
o_t &= \sigma(W^o x_t + U^o h_{t-1}) \\
\tilde{c}_t &= \tanh(W^c x_t + U^c h_{t-1}) \\
c_t &= i_t \odot \tilde{c}_t + f_t \odot c_{t-1} \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$
where $[W^i, W^f, W^o, W^c, U^i, U^f, U^o, U^c]$ are weight matrices, $x_t$ is the vector input to the timestep $t$, $h_t$ is the current exposed hidden state, $c_t$ is the memory cell state, and $\odot$ is element-wise multiplication.
Preventing overfitting within the recurrent connections of an RNN has been an area of extensive research in language modeling. The majority of previous recurrent regularization techniques have acted on the hidden state vector h t−1 , most frequently introducing a dropout operation between timesteps, or performing dropout on the update to the memory state c t . These modifications to a standard LSTM prevent the use of black box RNN implementations that may be many times faster due to low-level hardware-specific optimizations.
We propose the use of DropConnect (Wan et al., 2013) on the recurrent hidden to hidden weight matrices which does not require any modifications to an RNN's formulation. As the dropout operation is applied once to the weight matrices, before the forward and backward pass, the impact on training speed is minimal and any standard RNN implementation can be used, including inflexible but highly optimized black box LSTM implementations such as NVIDIA's cuDNN LSTM.
By performing DropConnect on the hidden-to-hidden weight matrices [U i , U f , U o , U c ] within the LSTM, we can prevent overfitting from occurring on the recurrent connections of the LSTM. This regularization technique would also be applicable to preventing overfitting on the recurrent weight matrices of other RNN cells.
As the same weights are reused over multiple timesteps, the same individual dropped weights remain dropped for the entirety of the forward and backward pass. The result is similar to variational dropout, which applies the same dropout mask to recurrent connections within the LSTM by performing dropout on h t−1 , except that the dropout is applied to the recurrent weights. DropConnect could also be used on the non-recurrent weights of the LSTM [W i , W f , W o ] though our focus was on preventing overfitting on the recurrent connection.
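To make this concrete, here is a minimal PyTorch sketch of a weight-dropped LSTM. This is not the authors' released implementation: the timestep loop is written out explicitly for clarity, whereas in practice one would patch the weights of an optimized black box kernel instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTM(nn.Module):
    """Sketch of a weight-dropped LSTM: DropConnect is applied once per
    forward pass to the stacked hidden-to-hidden matrices [U^i, U^f, U^c, U^o],
    and the same dropped weights are reused at every timestep."""

    def __init__(self, input_size, hidden_size, weight_drop=0.5):
        super().__init__()
        self.hidden_size = hidden_size
        # Stacked gate weights, same (i, f, g, o) layout as nn.LSTM.
        self.w_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))
        self.weight_drop = weight_drop

    def forward(self, x):  # x: (seq_len, batch, input_size)
        h = x.new_zeros(x.size(1), self.hidden_size)
        c = x.new_zeros(x.size(1), self.hidden_size)
        # One DropConnect mask for the entire forward and backward pass.
        w_hh = F.dropout(self.w_hh, p=self.weight_drop, training=self.training)
        outputs = []
        for x_t in x:
            gates = x_t @ self.w_ih.t() + h @ w_hh.t() + self.bias
            i, f, g, o = gates.chunk(4, dim=1)
            c = torch.sigmoid(i) * torch.tanh(g) + torch.sigmoid(f) * c
            h = torch.sigmoid(o) * torch.tanh(c)
            outputs.append(h)
        return torch.stack(outputs), (h, c)
```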
Optimization
SGD is among the most popular methods for training deep learning models across various modalities including computer vision, natural language processing, and reinforcement learning. The training of deep networks can be posed as a non-convex optimization problem
$$\min_w \; \frac{1}{N} \sum_{i=1}^{N} f_i(w),$$
where f i is the loss function for the i th data point, w are the weights of the network, and the expectation is taken over the data. Given a sequence of learning rates, γ k , SGD iteratively takes steps of the form
$$w_{k+1} = w_k - \gamma_k \hat{\nabla} f(w_k), \tag{1}$$
where the subscript denotes the iteration number and $\hat{\nabla}$ denotes a stochastic gradient that may be computed on a minibatch of data points. SGD demonstrably performs well in practice and also possesses several attractive theoretical properties such as linear convergence (Bottou et al., 2016), saddle point avoidance (Panageas & Piliouras, 2016) and better generalization performance (Hardt et al., 2015). For the specific task of neural language modeling, traditionally SGD without momentum has been found to outperform other algorithms such as momentum SGD (Sutskever et al., 2013), Adam (Kingma & Ba, 2014), Adagrad (Duchi et al., 2011) and RMSProp (Tieleman & Hinton, 2012) by a statistically significant margin.
Motivated by this observation, we investigate averaged SGD (ASGD) to further improve the training process. ASGD has been analyzed in depth theoretically and many surprising results have been shown including its asymptotic second-order convergence (Polyak & Juditsky, 1992;Mandt et al., 2017). ASGD takes steps identical to equation (1) but instead of returning the last iterate as the solution, returns
$$\frac{1}{K - T + 1} \sum_{i=T}^{K} w_i,$$
where K is the total number of iterations and T < K is a user-specified averaging trigger.
Algorithm 1 Non-monotonically Triggered ASGD (NT-ASGD)
Inputs: Initial point $w_0$, learning rate $\gamma$, logging interval $L$, non-monotone interval $n$.
1: Initialize $k \leftarrow 0$, $t \leftarrow 0$, $T \leftarrow 0$, logs $\leftarrow [\,]$
2: while stopping criterion not met do
3:   Compute stochastic gradient $\hat{\nabla} f(w_k)$ and take SGD step (1).
4:   if mod$(k, L) = 0$ and $T = 0$ then
5:     Compute validation perplexity $v$.
6:     if $t > n$ and $v > \min_{l \in \{0, \cdots, t-n\}} \text{logs}[l]$ then
7:       Set $T \leftarrow k$
8:     end if
9:     Append $v$ to logs
10:    $t \leftarrow t + 1$
11:  end if
12: end while
return $\frac{\sum_{i=T}^{k} w_i}{k - T + 1}$
Despite its theoretical appeal, ASGD has found limited practical use in training of deep networks. This may be in part due to unclear tuning guidelines for the learning-rate schedule γ k and averaging trigger T . If the averaging is triggered too soon, the efficacy of the method is impacted, and if it is triggered too late, many additional iterations may be needed to converge to the solution. In this section, we describe a non-monotonically triggered variant of ASGD (NT-ASGD), which obviates the need for tuning T . Further, the algorithm uses a constant learning rate throughout the experiment and hence no further tuning is necessary for the decay scheduling.
Ideally, averaging needs to be triggered when the SGD iterates converge to a steady-state distribution (Mandt et al., 2017). This is roughly equivalent to the convergence of SGD to a neighborhood around a solution. In the case of SGD, certain learning-rate reduction strategies such as the step-wise strategy analogously reduce the learning rate by a fixed quantity at such a point. A common strategy employed in language modeling is to reduce the learning rates by a fixed proportion when the performance of the model's primary metric (such as perplexity) worsens or stagnates. Along the same lines, one could make a triggering decision based on the performance of the model on the validation set. However, instead of averaging immediately after the validation metric worsens, we propose a non-monotonic criterion that conservatively triggers the averaging when the validation metric fails to improve for multiple cycles; see Algorithm 1. Given that the choice of triggering is irreversible, this conservatism ensures that the randomness of training does not play a major role in the decision. Analogous strategies have also been proposed for learning-rate reduction in SGD (Keskar & Saon, 2015).
While the algorithm introduces two additional hyperparameters, the logging interval L and non-monotone interval n, we found that setting L to be the number of iterations in an epoch and n = 5 worked well across various models and data sets. As such, we use this setting in all of our NT-ASGD experiments in the following section and demonstrate that it achieves better training outcomes as compared to SGD.
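Below is a minimal sketch of how the NT-ASGD switch could be wired into a training loop using torch.optim.ASGD. The `train_fn` and `eval_fn` callables are placeholders for one epoch of training and a validation-perplexity computation, respectively; hyperparameter values mirror those stated above.

```python
import torch

def train_with_nt_asgd(model, train_fn, eval_fn, lr=30.0, n=5, max_epochs=750):
    """Sketch of NT-ASGD: run plain SGD and switch to averaged SGD once
    validation perplexity fails to improve on the best value recorded more
    than n logging intervals ago (the non-monotonic trigger)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    logs, triggered = [], False
    for epoch in range(max_epochs):
        train_fn(model, optimizer)   # one epoch of SGD/ASGD steps
        v = eval_fn(model)           # validation perplexity
        if not triggered and len(logs) > n and v > min(logs[:-n]):
            # Trigger averaging from this point on (t0=0 starts averaging
            # immediately); evaluation should then use the averaged iterate
            # stored in the optimizer state under the key 'ax'.
            optimizer = torch.optim.ASGD(model.parameters(), lr=lr,
                                         t0=0, lambd=0.0)
            triggered = True
        logs.append(v)
    return model
```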
Extended regularization techniques
In addition to the regularization and optimization techniques above, we explored additional regularization techniques that aimed to improve data efficiency during training and to prevent overfitting of the RNN model.
Variable length backpropagation sequences
Given a fixed sequence length that is used to break a data set into fixed length batches, the data set is not efficiently used. To illustrate this, imagine being given 100 elements to perform backpropagation through with a fixed backpropagation through time (BPTT) window of 10. Any element divisible by 10 will never have any elements to backprop into, no matter how many times you may traverse the data set. Indeed, the backpropagation window that each element receives is equal to i mod 10 where i is the element's index. This is data inefficient, preventing 1/10 of the data set from ever being able to improve itself in a recurrent fashion, and resulting in 8/10 of the remaining elements receiving only a partial backpropagation window compared to the full possible backpropagation window of length 10.
To prevent such inefficient data usage, we randomly select the sequence length for the forward and backward pass in two steps. First, we select the base sequence length to be seq with probability p and seq/2 with probability 1 − p, where p is a high value approaching 1. This spreads the starting point for the BPTT window beyond the base sequence length. We then select the sequence length according to N(seq, s), where seq is the base sequence length and s is the standard deviation. This jitters the starting point such that it doesn't always fall on a specific word divisible by seq or seq/2. As a result, the sequence lengths more efficiently use the data set, ensuring that when given enough epochs all the elements in the data set experience a full BPTT window, while ensuring the average sequence length remains around the base sequence length for computational efficiency.
During training, we rescale the learning rate depending on the length of the resulting sequence compared to the original specified sequence length. The rescaling step is necessary as sampling arbitrary sequence lengths with a fixed learning rate favors short sequences over longer ones. This linear scaling rule has been noted as important for training large scale minibatch SGD without loss of accuracy (Goyal et al., 2017) and is a component of unbiased truncated backpropagation through time (Tallec & Ollivier, 2017).
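A small sketch of the sampling and rescaling described above (the constants follow the experimental settings reported later; the function names are illustrative):

```python
import numpy as np

def sample_bptt_length(base_seq_len=70, p=0.95, std=5.0):
    """Pick the base length with probability p and half of it otherwise,
    then jitter the result with Gaussian noise."""
    seq = base_seq_len if np.random.random() < p else base_seq_len // 2
    return max(5, int(np.random.normal(seq, std)))

def rescaled_lr(base_lr, seq_len, base_seq_len=70):
    """Linearly rescale the learning rate by the sampled sequence length so
    that short sequences are not favored over long ones."""
    return base_lr * seq_len / base_seq_len
```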
Variational dropout
In standard dropout, a new binary dropout mask is sampled each and every time the dropout function is called. New dropout masks are sampled even if the given connection is repeated, such as the input x 0 to an LSTM at timestep t = 0 receiving a different dropout mask than the input x 1 fed to the same LSTM at t = 1. A variant of this, variational dropout (Gal & Ghahramani, 2016), samples a binary dropout mask only once upon the first call and then repeatedly uses that locked dropout mask for all repeated connections within the forward and backward pass.
While we propose using DropConnect rather than variational dropout to regularize the hidden-to-hidden transition within an RNN, we use variational dropout for all other dropout operations, specifically using the same dropout mask for all inputs and outputs of the LSTM within a given forward and backward pass. Each example within the minibatch uses a unique dropout mask, rather than a single dropout mask being used over all examples, ensuring diversity in the elements dropped out.
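A minimal sketch of such a locked dropout module, assuming inputs of shape (seq_len, batch, features):

```python
import torch
import torch.nn as nn

class LockedDropout(nn.Module):
    """Variational ("locked") dropout: sample one mask per sequence and reuse
    it at every timestep; each example in the minibatch gets its own mask."""

    def forward(self, x, p=0.5):  # x: (seq_len, batch, features)
        if not self.training or p == 0:
            return x
        mask = x.new_empty(1, x.size(1), x.size(2)).bernoulli_(1 - p) / (1 - p)
        return x * mask  # mask broadcasts over the time dimension
```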
Embedding dropout
Following Gal & Ghahramani (2016), we employ embedding dropout. This is equivalent to performing dropout on the embedding matrix at a word level, where the dropout is broadcast across all the word vector's embedding. The remaining non-dropped-out word embeddings are scaled by $\frac{1}{1 - p_e}$ where $p_e$ is the probability of embedding dropout. As the dropout occurs on the embedding matrix that is used for a full forward and backward pass, this means that all occurrences of a specific word will disappear within that pass, equivalent to performing variational dropout on the connection between the one-hot embedding and the embedding lookup.
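A minimal sketch of embedding dropout, assuming a standard nn.Embedding; the helper name is illustrative:

```python
import torch
import torch.nn.functional as F

def embedded_dropout(embed, words, p_e=0.1, training=True):
    """Zero out entire rows of the embedding matrix (with rescaling), so that
    every occurrence of a dropped word disappears for the whole pass."""
    if training and p_e > 0:
        mask = embed.weight.new_empty(embed.weight.size(0), 1)
        mask = mask.bernoulli_(1 - p_e) / (1 - p_e)
        weight = embed.weight * mask  # word-level mask broadcast over dims
    else:
        weight = embed.weight
    return F.embedding(words, weight, embed.padding_idx)
```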
Weight tying
Weight tying (Inan et al., 2016;Press & Wolf, 2016) shares the weights between the embedding and softmax layer, substantially reducing the total parameter count in the model. The technique has theoretical motivation (Inan et al., 2016) and prevents the model from having to learn a one-to-one correspondence between the input and output, resulting in substantial improvements to the standard LSTM language model.
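A minimal sketch of weight tying in PyTorch (the sizes follow the experimental setup described later):

```python
import torch.nn as nn

vocab_size, emb_size = 10000, 400
encoder = nn.Embedding(vocab_size, emb_size)
decoder = nn.Linear(emb_size, vocab_size)
# Tie the softmax weights to the embedding matrix; both are (vocab, emb).
decoder.weight = encoder.weight
```

Note that for the tie to be valid, the final LSTM layer must output vectors of the embedding size, which is exactly the modification discussed in the next subsection.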
Independent embedding size and hidden size
In most natural language processing tasks, both pretrained and trained word vectors are of relatively low dimensionality-frequently between 100 and 400 dimensions in size. Most previous LSTM language models tie the dimensionality of the word vectors to the dimensionality of the LSTM's hidden state. Even if reducing the word embedding size was not beneficial in preventing overfitting, the easiest reduction in total parameters for a language model is reducing the word vector size. To achieve this, the first and last LSTM layers are modified such that their input and output dimensionality respectively are equal to the reduced embedding size.
Activation Regularization (AR) and Temporal Activation Regularization (TAR)
$L_2$-regularization is often used on the weights of the network to control the norm of the resulting model and reduce overfitting. In addition, $L_2$ decay can be used on the individual unit activations and on the difference in outputs of an RNN at different time steps; these strategies are labeled as activation regularization (AR) and temporal activation regularization (TAR) respectively (Merity et al., 2017). AR penalizes activations that are significantly larger than 0 as a means of regularizing the network. Concretely, AR is defined as
$$\alpha \, L_2(m \odot h_t)$$
where $m$ is the dropout mask, $L_2(\cdot) = \|\cdot\|_2$, $h_t$ is the output of the RNN at timestep $t$, and $\alpha$ is a scaling coefficient. TAR falls under the broad category of slowness regularizers (Hinton, 1989; Földiák, 1991; Luciw & Schmidhuber, 2012; Jonschkowski & Brock, 2015) which penalize the model from producing large changes in the hidden state.
Using the notation from AR, TAR is defined as
$$\beta \, L_2(h_t - h_{t+1})$$
where $\beta$ is a scaling coefficient. As in Merity et al. (2017), the AR and TAR losses are only applied to the output of the final RNN layer as opposed to being applied to all layers.
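A minimal sketch of the two penalties, using the mean of squared values for the L2 term; `output` and `dropped_output` denote $h_t$ before and after dropout for the final RNN layer, with shape (seq_len, batch, hidden):

```python
def ar_tar_loss(output, dropped_output, alpha=2.0, beta=1.0):
    """Activation regularization (AR) on the dropped outputs m ⊙ h_t plus
    temporal activation regularization (TAR) on h_t − h_{t+1}."""
    ar = alpha * dropped_output.pow(2).mean()
    tar = beta * (output[1:] - output[:-1]).pow(2).mean()
    return ar + tar
```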
Experiment Details
For evaluating the impact of these approaches, we perform language modeling over a preprocessed version of the Penn Treebank (PTB) (Mikolov et al., 2010) and the WikiText-2 (WT2) data set .
PTB:
The Penn Treebank data set has long been a central data set for experimenting with language modeling. The data set is heavily preprocessed and does not contain capital letters, numbers, or punctuation. The vocabulary is also capped at 10,000 unique words, quite small in comparison to most modern datasets, which results in a large number of out of vocabulary (OoV) tokens.
WT2: WikiText-2 is sourced from curated Wikipedia articles and is approximately twice the size of the PTB data set. The text is tokenized and processed using the Moses tokenizer (Koehn et al., 2007), frequently used for machine translation, and features a vocabulary of over 30,000 words. Capitalization, punctuation, and numbers are retained in this data set.

All experiments use a three-layer LSTM model with 1150 units in the hidden layer and an embedding of size 400. The loss was averaged over all examples and timesteps. All embedding weights were uniformly initialized in the interval $[-0.1, 0.1]$ and all other weights were initialized between $[-\frac{1}{\sqrt{H}}, \frac{1}{\sqrt{H}}]$, where $H$ is the hidden size. For training the models, we use the NT-ASGD algorithm discussed in the previous section for 750 epochs with $L$ equivalent to one epoch and $n = 5$. We use a batch size of 80 for WT2 and 40 for PTB. Empirically, we found relatively large batch sizes (e.g., 40-80) performed better than smaller sizes (e.g., 10-20) for NT-ASGD. After completion, we run ASGD with $T = 0$ and hot-started $w_0$ as a fine-tuning step to further improve the solution. For this fine-tuning step, we terminate the run using the same non-monotonic criterion detailed in Algorithm 1.
We carry out gradient clipping with maximum norm 0.25 and use an initial learning rate of 30 for all experiments. We use a random BPTT length which is N(70, 5) with probability 0.95 and N(35, 5) with probability 0.05. The values used for dropout on the word vectors, the output between LSTM layers, the output of the final LSTM layer, and embedding dropout were (0.4, 0.3, 0.4, 0.1) respectively. For the weight-dropped LSTM, a dropout of 0.5 was applied to the recurrent weight matrices. For WT2, we increase the input dropout to 0.65 to account for the increased vocabulary size. For all experiments, we use AR and TAR values of 2 and 1 respectively, and tie the embedding and softmax weights. These hyperparameters were chosen through trial and error and we expect further improvements may be possible if a fine-grained hyperparameter search were to be conducted. In the results, we abbreviate our approach as AWD-LSTM for ASGD Weight-Dropped LSTM.
Experimental Analysis
We present the single-model perplexity results for both our models (AWD-LSTM) and other competitive models in Tables 1 and 2 for PTB and WT2 respectively. On both data sets we improve the state-of-the-art, with our vanilla LSTM model beating the state of the art by approximately 1 unit on PTB and 0.1 units on WT2.
In comparison to other recent state-of-the-art models, our model uses a vanilla LSTM. Zilly et al. (2016) propose the recurrent highway network, which extends the LSTM to allow multiple hidden state updates per timestep. Zoph & Le (2016) use a reinforcement learning agent to generate an RNN cell tailored to the specific task of language modeling, with the cell far more complex than the LSTM.
Independently of our work, Melis et al. (2017) apply extensive hyperparameter search to an LSTM based language modeling implementation, analyzing the sensitivity of RNN based language models to hyperparameters. Unlike our work, they use a modified LSTM, which caps the input gate $i_t$ to be $\min(1 - f_t, i_t)$, use Adam with $\beta_1 = 0$ rather than SGD or ASGD, use skip connections between LSTM layers, and use a black box hyperparameter tuner for exploring models and settings. Of particular interest is that their hyperparameters were tuned individually for each data set compared to our work which shared almost all hyperparameters between PTB and WT2, including the embedding and hidden size for both data sets. Due to this, they used fewer model parameters than our model and found shallow LSTMs of one or two layers worked best for WT2.
Like our work, Melis et al. (2017) find that the underlying LSTM architecture can be highly effective compared to complex custom architectures when well tuned hyperparameters are used. The approaches used in our work and Melis et al. (2017) may be complementary and would be worth exploration.
Pointer models
In past work, pointer based attention models have been shown to be highly effective in improving language modeling (Merity et al., 2016; Grave et al., 2016). Given such improvements, we explore the use of the neural cache model in conjunction with our proposed model.
Table 1. Single model perplexity on validation and test sets for the Penn Treebank language modeling task. Parameter numbers with ‡ are estimates based upon our understanding of the model and with reference to Merity et al. (2016). Models noting tied use weight tying on the embedding and softmax weights. Our model, AWD-LSTM, stands for ASGD Weight-Dropped LSTM.

The neural cache model (Grave et al., 2016) can be added on top of a pre-trained language model at negligible cost. The neural cache stores the previous hidden states in memory cells and then uses a simple convex combination of the probability distributions suggested by the cache and the language model for prediction. The cache model has three hyperparameters: the memory size (window) for the cache, the coefficient of the combination (which determines how the two distributions are mixed), and the flatness of the cache distribution. All of these are tuned on the validation set once a trained language model has been obtained and require no training by themselves, making it quite inexpensive to use. The tuned values for these hyperparameters were (2000, 0.1, 1.0) for PTB and (3785, 0.1279, 0.662) for WT2 respectively.
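A minimal sketch of how the cache distribution could be mixed with the language model distribution at prediction time; the function and argument names are illustrative, with the flatness `theta` and mixing coefficient `lambd` set to the tuned WT2 values quoted above:

```python
import torch
import torch.nn.functional as F

def cache_pointer_probs(p_lm, hidden, cache_h, cache_targets,
                        theta=0.662, lambd=0.1279, vocab_size=33278):
    """Mix the LM distribution p_lm (vocab,) with a cache distribution built
    from cached hidden states cache_h (window, hidden) and their target word
    ids cache_targets (window,), scored against the current hidden state."""
    scores = theta * cache_h @ hidden               # similarity to each slot
    weights = F.softmax(scores, dim=0)              # cache attention weights
    p_cache = torch.zeros(vocab_size)
    p_cache.index_add_(0, cache_targets, weights)   # scatter onto word ids
    return (1 - lambd) * p_lm + lambd * p_cache
```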
In Tables 1 and 2, we show that the neural cache further improves the perplexity of the language model by as much as 6 perplexity points for PTB and 11 points for WT2. While this is smaller than the gains reported in Grave et al. (2016), which used an LSTM without weight tying, this is still a substantial drop. Given the simplicity of the neural cache model, and the lack of any trained components, these results suggest that existing neural language models remain fundamentally lacking, failing to capture long term dependencies or remember recently seen words effectively.
To understand the impact the pointer had on the model, specifically the validation set perplexity, we detail the contribution that each word has on the cache model's overall perplexity in Table 3. We compute the sum of the total difference in the loss function value (i.e., log perplexity) between the LSTM-only and LSTM-with-cache models for the target words in the validation portion of the WikiText-2 data set. We present results for the sum of the difference as opposed to the mean since the latter undesirably overemphasizes infrequently occurring words for which the cache helps significantly and ignores frequently occurring words for which the cache provides modest improvements that cumulatively make a strong contribution.
The largest cumulative gain is in improving the handling of <unk> tokens, though this is over 11540 instances. The second best improvement, approximately one fifth the gain given by the <unk> tokens, is for Meridian, yet this word only occurs 161 times. This indicates the cache still helps significantly even for relatively rare words, further demonstrated by Churchill, Blythe, or Sonic. The cache is not beneficial when handling frequent word categories, such as punctuation or stop words, for which the language model is likely well suited. These observations motivate the design of a cache framework that is more aware of the relative strengths of the two models.

Table 3. The sum total difference in loss (log perplexity) that a given word results in over all instances in the validation data set of WikiText-2 when the continuous cache pointer is introduced. The right column contains the words with the twenty best improvements (i.e., where the cache was advantageous), and the left column the twenty most deteriorated (i.e., where the cache was disadvantageous).
Model Ablation Analysis
In Table 4, we present the values of validation and testing perplexity for different variants of our best-performing LSTM model. Each variant removes a form of optimization or regularization.
The first two variants deal with the optimization of the language models while the rest deal with the regularization. For the model using SGD, with the learning rate reduced by a factor of 2 in the same non-monotonic fashion, there is a significant degradation in performance. This stands as empirical evidence regarding the benefit of averaging of the iterates. Using a monotonic criterion instead also hampered performance. Similarly, the removal of the fine-tuning step expectedly also degrades the performance. This step helps improve the estimate of the minimizer by resetting the memory of the previous experiment. While this process of fine-tuning can be repeated multiple times, we found little benefit in repeating it more than once.

The removal of regularization strategies paints a similar picture; the inclusion of all of the proposed strategies was pivotal in ensuring state-of-the-art performance. The most extreme perplexity jump was in removing the hidden-to-hidden LSTM regularization provided by the weight-dropped LSTM. Without such hidden-to-hidden regularization, perplexity rises substantially, up to 11 points. This is in line with previous work showing the necessity of recurrent regularization in state-of-the-art models (Gal & Ghahramani, 2016; Inan et al., 2016).
We also experiment with static sequence lengths which we had hypothesized would lead to inefficient data usage. This also worsens the performance by approximately one perplexity unit. Next, we experiment with reverting to matching the sizes of the embedding vectors and the hidden states. This significantly increases the number of parameters in the network (to 43M in the case of PTB and 70M for WT2) and leads to degradation by almost 8 perplexity points, which we attribute to overfitting in the word embeddings. While this could potentially be improved with more aggressive regularization, the computational overhead involved with substantially larger embeddings likely outweighs any advantages. Finally, we experiment with the removal of embedding dropout, AR/TAR and weight decay. In all of the cases, the model suffers a perplexity increase of 2-6 points which we hypothesize is due to insufficient regularization in the network.
Conclusion
In this work, we discuss regularization and optimization strategies for neural language models. We propose the weight-dropped LSTM, a strategy that uses a DropConnect mask on the hidden-to-hidden weight matrices, as a means to prevent overfitting across the recurrent connections. Further, we investigate the use of averaged SGD with a non-monotonic trigger for training language models and show that it outperforms SGD by a significant margin. We investigate other regularization strategies including the use of variable BPTT length and achieve a new state-of-the-art perplexity on the PTB and WikiText-2 data sets. Our models outperform custom-built RNN cells and complex regularization strategies that preclude the possibility of using optimized libraries such as the NVIDIA cuDNN LSTM. Finally, we explore the use of a neural cache in conjunction with our proposed model and show that this further improves the performance, thus attaining an even lower state-of-the-art perplexity. While the regularization and optimization strategies proposed are demonstrated on the task of language modeling, we anticipate that they would be generally applicable across other sequence learning tasks.
Table 2. Single model perplexity over WikiText-2. Models noting tied use weight tying on the embedding and softmax weights. Our model, AWD-LSTM, stands for ASGD Weight-Dropped LSTM.
Table 4. Model ablations for our best LSTM models reporting results over the validation and test set on Penn Treebank and WikiText-2. Ablations are split into optimization and regularization variants, sorted according to the achieved validation perplexity on WikiText-2.

| Model | PTB Validation | PTB Test | WT2 Validation | WT2 Test |
|---|---|---|---|---|
| AWD-LSTM (tied) | 60.0 | 57.3 | 68.6 | 65.8 |
| - fine-tuning | 60.7 | 58.8 | 69.1 | 66.0 |
| - NT-ASGD | 66.3 | 63.7 | 73.3 | 69.7 |
| - variable sequence lengths | 61.3 | 58.9 | 69.3 | 66.2 |
| - embedding dropout | 65.1 | 62.7 | 71.1 | 68.1 |
| - weight decay | 63.7 | 61.0 | 71.9 | 68.7 |
| - AR/TAR | 62.7 | 60.3 | 73.2 | 70.1 |
| - full sized embedding | 68.0 | 65.6 | 73.7 | 70.7 |
| - weight-dropping | 71.1 | 68.9 | 78.4 | 74.9 |
References

Arjovsky, M., Shah, A., and Bengio, Y. Unitary evolution recurrent neural networks. In International Conference on Machine Learning, pp. 1120-1128, 2016.

Ba, J., Kiros, J., and Hinton, G. E. Layer normalization. CoRR, abs/1607.06450, 2016.

Balduzzi, D. and Ghifary, M. Strongly-typed recurrent neural networks. arXiv preprint arXiv:1602.02218, 2016.

Bottou, L., Curtis, F. E., and Nocedal, J. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.

Bradbury, J., Merity, S., Xiong, C., and Socher, R. Quasi-recurrent neural networks. arXiv preprint arXiv:1611.01576, 2016.

Cooijmans, T., Ballas, N., Laurent, C., and Courville, A. C. Recurrent batch normalization. CoRR, abs/1603.09025, 2016.

Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Földiák, P. Learning invariance from transformation sequences. Neural Computation, 3(2):194-200, 1991.

Gal, Y. and Ghahramani, Z. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, 2016.

Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.

Grave, E., Joulin, A., and Usunier, N. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426, 2016.

Hardt, M., Recht, B., and Singer, Y. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.

Hinton, G. E. Connectionist learning procedures. Artificial Intelligence, 40(1-3):185-234, 1989.

Inan, H., Khosravi, K., and Socher, R. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462, 2016.

Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

Jing, L., Shen, Y., Dubček, T., Peurifoy, J., Skirlo, S., Tegmark, M., and Soljačić, M. Tunable efficient unitary neural networks (EUNN) and their application to RNN. arXiv preprint arXiv:1612.05231, 2016.

Jonschkowski, R. and Brock, O. Learning state representations with robotic priors. Autonomous Robots, 39:407-428, 2015.

Keskar, N. and Saon, G. A nonmonotone learning rate strategy for SGD training of deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 4974-4978. IEEE, 2015.

Kim, Y., Jernite, Y., Sontag, D., and Rush, A. M. Character-aware neural language models. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.

Kingma, D. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. Moses: Open source toolkit for statistical machine translation. In ACL, 2007.

Krueger, D., Maharaj, T., Kramár, J., Pezeshki, M., Ballas, N., Ke, N., Goyal, A., Bengio, Y., Larochelle, H., Courville, A., et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.

Luciw, M. and Schmidhuber, J. Low complexity proto-value function learning from sensory observations with incremental slow feature analysis. In Artificial Neural Networks and Machine Learning - ICANN 2012, pp. 279-287, 2012.

Mandt, S., Hoffman, M. D., and Blei, D. M. Stochastic gradient descent as approximate Bayesian inference. arXiv preprint arXiv:1704.04289, 2017.

Melis, G., Dyer, C., and Blunsom, P. On the state of the art of evaluation in neural language models. arXiv preprint arXiv:1707.05589, 2017.

Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Merity, S., McCann, B., and Socher, R. Revisiting activation regularization for language RNNs. arXiv preprint arXiv:1708.01009, 2017.

Mikolov, T. and Zweig, G. Context dependent recurrent neural network language model. SLT, 12:234-239, 2012.

Mikolov, T., Karafiát, M., Burget, L., Černocký, J., and Khudanpur, S. Recurrent neural network based language model. In INTERSPEECH, 2010.

Panageas, I. and Piliouras, G. Gradient descent converges to minimizers: The case of non-isolated critical points. CoRR, abs/1605.00405, 2016.

Polyak, B. and Juditsky, A. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.

Press, O. and Wolf, L. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.

Semeniuta, S., Severyn, A., and Barth, E. Recurrent dropout without memory loss. In COLING, 2016.

Seo, M., Min, S., Farhadi, A., and Hajishirzi, H. Query-reduction networks for question answering. arXiv preprint arXiv:1606.04582, 2016.

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.

Sutskever, I., Martens, J., Dahl, G., and Hinton, G. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, pp. 1139-1147, 2013.

Tallec, C. and Ollivier, Y. Unbiasing truncated backpropagation through time. arXiv preprint arXiv:1705.08209, 2017.

Tieleman, T. and Hinton, G. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2):26-31, 2012.

Wan, L., Zeiler, M., Zhang, S., LeCun, Y., and Fergus, R. Regularization of neural networks using DropConnect. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 1058-1066, 2013.

Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. The marginal value of adaptive gradient methods in machine learning. arXiv preprint arXiv:1705.08292, 2017.

Wisdom, S., Powers, T., Hershey, J., Le Roux, J., and Atlas, L. Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 4880-4888, 2016.

Zaremba, W., Sutskever, I., and Vinyals, O. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.

Zilly, J. G., Srivastava, R. K., Koutník, J., and Schmidhuber, J. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.

Zoph, B. and Le, Q. V. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016. |
219,969,405 | A Universal Representation Transformer Layer for Few-Shot Image Classification | Few-shot classification aims to recognize unseen classes when presented with only a small number of samples. We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources. This problem has seen growing interest and has inspired the development of benchmarks such as Meta-Dataset. A key challenge in this multi-domain setting is to effectively integrate the feature representations from the diverse set of training domains. Here, we propose a Universal Representation Transformer (URT) layer, that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations. In experiments, we show that URT sets a new state-of-the-art result on Meta-Dataset. Specifically, it outperforms the best previous model on three data sources or performs the same in others. We analyze variants of URT and present a visualization of the attention score heatmaps that sheds light on how the model performs cross-domain generalization. Our code is available at https://github.com/liulu112601/URT * This work was done while Lu Liu was a research intern with Mila. † Canada CIFAR AI Chair Preprint. Under review. | [
71145737,
202230734,
3507990
] | A Universal Representation Transformer Layer for Few-Shot Image Classification
Lu Liu
Australian AI Institute
UTS
William Hamilton
McGill University
Guodong Long
Australian AI Institute
UTS
Jing Jiang
Australian AI Institute
UTS
Hugo Larochelle
Google Research
Brain Team Correspondence to
Mila
A Universal Representation Transformer Layer for Few-Shot Image Classification
Few-shot classification aims to recognize unseen classes when presented with only a small number of samples. We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources. This problem has seen growing interest and has inspired the development of benchmarks such as Meta-Dataset. A key challenge in this multi-domain setting is to effectively integrate the feature representations from the diverse set of training domains. Here, we propose a Universal Representation Transformer (URT) layer, that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations. In experiments, we show that URT sets a new state-of-the-art result on Meta-Dataset. Specifically, it outperforms the best previous model on three data sources or performs the same in others. We analyze variants of URT and present a visualization of the attention score heatmaps that sheds light on how the model performs cross-domain generalization. Our code is available at https://github.com/liulu112601/URT * This work was done while Lu Liu was a research intern with Mila. † Canada CIFAR AI Chair Preprint. Under review.
Introduction
Learning tasks from small data remains a challenge for machine learning systems, which show a noticeable gap compared to the ability of humans to understand new concepts from few examples. A promising direction to address this challenge is developing methods that are capable of performing transfer learning across the collective data of many tasks. Since machine learning systems generally improve with the availability of more data, a natural assumption is that few-shot learning systems should benefit from leveraging data across many different tasks and domains-even if each individual task has limited training data available.
This research direction is well captured by the problem of multi-domain few-shot classification. In this setting, training and test data spans a number of different domains, each represented by a different source dataset. A successful approach in this multi-domain setting must not only address the regular challenge of few-shot classification-i.e., the challenge of having only a handful of examples per class. It must also discover how to leverage (or ignore) what is learned from different domains, achieving generalization and avoiding cross-domain interference.
Recently, Triantafillou et al. [20] proposed a benchmark for multi-domain few-shot classification, Meta-Dataset, and highlighted some of the challenges that current methods face when training data is heterogeneous. Crucially, they found that methods which trained on all available domains would normally obtain improved performance on some domains at the expense of others. Following on their work, progress has been made, which includes the design of adapted hyper-parameter optimization strategies [17] and more flexible meta-learning algorithms [16]. Most notable is SUR (Selecting Universal Representation) [5], a method that relies on a so-called universal representation, extracted from a collection of pre-trained and domain-specific neural network backbones. SUR prescribes a hand-crafted feature-selection procedure to infer how to weight each backbone for each task at hand, and produces an adapted representation for each task. This was shown to lead to state-of-the-art performance on Meta-Dataset.
In SUR, the classification procedure for each task is fixed and not learned. Thus, except for the underlying universal representation, there is no transfer learning performed with regards to how classification rules are inferred across tasks and domains. Yet, cross-domain generalization might be beneficial in that area as well, in particular when tasks have only few examples per class.
Present work. To explore this question, we propose the Universal Representation Transformer (URT) layer, which can effectively learn to transform a universal representation into task-adapted representations. The URT layer is inspired from Transformer networks [21] and uses an attention mechanism to learn to retrieve or blend the appropriate backbones to use for each task. By training this layer across few-shot tasks from many domains, it can support transfer across these tasks.
We show that our URT layer on top of a universal representation's pre-trained backbones sets a new state-of-the-art performance on Meta-Dataset. It succeeds at outperforming SUR on 3 dataset sources without impairing accuracy on the others. To interpret the strategy that URT learns to weigh the backbones from different domains, we visualize the attention scores for both seen and unseen domains and find that our model generates meaningful weights for the pre-trained domains. A comprehensive analysis on variants and ablations of the URT layer is provided to show the importance of various components of URT, notably the number of attention heads.
2 Few-Shot Classification
Problem Setting
In this section, we introduce the problem setting for few-shot classification and the formulation of meta-learning for few-shot classification. Few-shot classification aims to classify samples where only few examples are available for each class. We describe a few-shot classification task as a pair (S, Q), comprising a support set S that defines the classification task and a query set Q of samples to be classified.
Meta-learning is a technique that aims to model the problem of few-shot classification as learning to learn from instances of few-shot classification tasks. The most popular way to train a meta-learning model is with episodic training. Here, tasks T = (Q, S) are sampled from a larger dataset by taking subsets of the dataset to build a support set S and a query set Q for the task. A common approach is to sample N -way-K-shot tasks, each time selecting a random subset of N classes from the original dataset and choosing only K examples for each class to add to the support set S.
The meta-learning problem can then be formulated by the following optimization:
$$\min_\Theta \; \mathbb{E}_{(S,Q) \sim p(\mathcal{T})} \left[ \mathcal{L}(S, Q, \Theta) \right], \qquad \mathcal{L}(S, Q, \Theta) = \frac{1}{|Q|} \sum_{(x,y) \in Q} -\log p(y \mid x, S; \Theta) + \lambda \, \Omega(\Theta), \tag{1}$$
where p(T ) is the distribution of tasks, Θ are the parameters of the model and p(y|x, S; Θ) is the probability assigned by the model to label y of query example x (given the support set S), and Ω(Θ) is an optional regularization term on the model parameters with factor λ.
Conventional few-shot classification targets the setting of N -way-K-shot, where the number of classes and examples are fixed in each episode. Popular benchmarks that follow this approach include Omniglot [8] or benchmarks made of subsets of ImageNet, such as miniImageNet [22] and tieredImageNet [15]. In such benchmarks, the tasks used for training cover a set of classes that is disjoint from the classes found in the test set of tasks. However, with the training and test sets tasks coming from a single dataset/domain, the distribution of tasks found in either sets is similar and lacks variability, which may be unrealistic in practice.
It is in this context that Triantafillou et al. [20] proposed Meta-Dataset, as a further step towards large-scale, multi-domain few shot classification. Meta-Dataset includes ten datasets (domains), with eight of them available for training. Additionally, each task sampled in the benchmark varies in the number of classes N , with each class also varying in the number of shots K. As in all few-shot learning benchmarks, the classes used for training and testing do not overlap.
Background and Related Work
Transfer by fine-tuning A simple and effective method for few-shot classification is to perform transfer learning by first learning a neural network classifier on all data available for training and using its representation to initialize and then fine-tune neural networks on the few-shot classification tasks found at test time [2,20,4,17]. Specifically, Saikia et al. [17] have shown that competitive performance can be reached using a strong hyper-parameter optimization method applied on a carefully designed validation metric appropriate for few-shot learning.
Meta-Learning Another approach is to use meta-learning to more directly train a model to learn to perform few-shot classification, in an end-to-end way. A large variety of such models have been explored, inspired by memory-augmented networks [18], LSTMs [13] and metric-based classifiers [22]. The two most popular methods are Prototypical Networks [19] and Model Agnostic Meta-Learning (MAML) [6]. Prototypical Networks assume that every class can be represented as a prototype in a learned embedding space (represented by a neural network). Prototypes are calculated as the average of the representations of the support examples of each class. A 1-nearest centroid classifier is then adopted for classification and the neural network representation is trained to facilitate classification in few-shot tasks directly. MAML models the procedure of learning to learn as a bilevel optimization, where an outer loop backpropagates loss gradients through an optimization-based inner loop to learn its initialization. Triantafillou et al. [20] showed that prototypical networks and MAML could be combined by leveraging prototypes for the initialization of the output weights value in the inner loop. Requeima et al. [16] also proposed Conditional Neural Adaptive Processes (CNAPs) for few-shot classification, which can be seen as extending prototypical networks with a more sophisticated architecture that allows for improved task adaptation.
Universal Representations In contrast, our work instead builds on that of Dvornik et al. [5] and their method SUR (Selecting from Universal Representations). Bilen and Vedaldi [1] introduced the term universal representation to refer to a representation that supports good performance in multiple domains. One proposal towards such a representation is to train different neural networks backbones separately on the data of each available domain, then simply to concatenate the representation learned by each. Another is to introduce some parameter sharing between the backbones, by having a single network conditioned on the domain of the provenance of each batch of training data [14], e.g. using Feature-wise Linear Modulate (FiLM) [12]. SUR proposes to leverage a universal representation in few-shot learning tasks with a feature selection procedure that assigns different weights to each of the domain-specific subvectors of the universal representation. The objective is to assign high weights only to the domain-specific representations that are specifically useful for each few-shot task at hand. The weights are inferred by optimizing a loss on the support set that encourages high accuracy of a nearest-centroid classifier. As such, the method does not involve any meta-learning-a choice motivated by the concern that meta-learning may struggle in generalizing to domains that are dissimilar to the training domains. SUR achieved state-of-the-art performance on Meta-Dataset. However, a contribution of our work is to provide evidence that meta-learning can actually be used to replace SUR's hand-designed inference procedure and improve performance further.
Transformer Networks Our meta-learning approach to leverage universal representations is inspired directly from Transformer networks [21]. Transformer networks are neural network architectures characterized by the use self-attention mechanisms to solve tasks. Our model structure is inspired by the structure of the dot-product self-attention in the Transformer, which we adapted here to multidomain few-shot learning by designing appropriate parametrizations for queries, keys and values. Self-attention was explored in the single-domain training regime by Ye et al. [23], however for a different purpose, where each representation of individual examples in a task support set is influenced by all other examples. Such a strategy is also employed by CNAPs, but with the latter using FiLM as the conditioning mechanism, instead of self-attention. Regardless, the aim of this paper is to propose a different strategy. Rather than using self-attention between individual examples in the support set, our model uses self-attention to select between different domain-specific backbones.

Figure 1: Illustration of how a single-head URT layer uses a universal representation to produce a task-specific representation. This example assumes the use of four backbones, with each color illustrating their domain-specific sub-vector representation in the universal representation.
Universal Representation Transformer Layer
In this section, we describe our proposed URT layer, which uses meta-learning episodic training to learn how to combine the domain-specific backbones of a universal representation for any given few-shot learning classification task.
Conceptually, the proposed model views the support set S of a task as providing information on how to query and retrieve from the set {r i } of m pre-trained backbones the most appropriate backbone to build an adapted representation φ for the task.
We would like the model to support a variety of strategies on how to retrieve backbones. For example, it might be beneficial for the model to retrieve a single backbone from the set, especially if the domain of the given task matches perfectly that of a domain found in the training set. Alternatively, if some of the training domains benefit from much more training data than others, a better strategy might be to attempt some cross-domain generalization towards the few-shot learning task by blending many backbones together, even if none matches the domain of the task perfectly.
This motivates us to use dot-product self-attention, inspired by layers of Transformer networks [21]. For this reason, we refer to our model as a Universal Representation Transformer (URT) layer. Additionally, since each class of the support set might require a different strategy, we perform attention separately for each class and their support set S c = {x|(x, y) ∈ S and y = c}.
Single-Head URT Layer
We start by describing an URT layer consisting of a single attention head. An illustration of a single-head URT layer is shown in Figure 1. Let r i (x) be the output vector of the backbone for domain i. We then write the universal representation as
$$r(x) = \mathrm{concat}(r_1(x), \ldots, r_m(x)). \tag{2}$$
This representation provides a natural starting point to obtain a representation of a support set class. Specifically, we will note
$$r(S_c) = \frac{1}{|S_c|} \sum_{x \in S_c} r(x) \tag{3}$$
as the representation for the set $S_c$. From this, we can describe the URT layer by defining the queries, keys, the attention mechanism and output of the layer:
Queries $q_c$: For each class $c$, we obtain a query through $q_c = W^q r(S_c) + b^q$, where we have a learnable query linear transformation represented by matrix $W^q$ and bias $b^q$.
Algorithm 1 Training of URT layer
Input: Number of tasks $\tau_{total}$, $m$ pre-trained backbones
1: for $\tau \in \{1, \cdots, \tau_{total}\}$ do
2:   Sample a few-shot task $T$ with support set $S$ and query set $Q$;
3:   # Infer adapted representation for task from $S$
4:   For each class, obtain representation using $m$ pre-trained backbones as in Eq. (3);
5:   Obtain attention scores using Eq. (4,5) for each head using support set $S$;
6:   # Use adapted representation to predict labels in $Q$ from support set $S$
7:   Obtain adapted representations $\phi(x)$ for examples in $S$ and $Q$ using Eq. (6,7);
8:   Compute class probabilities for $Q$ using Eq. (9);
9:   Compute loss as in Eq. (1,8) and perform gradient descent step on URT parameters $\Theta$;
10: end for

Keys $k_{i,c}$: For each domain $i$ and class $c$, we define keys as $k_{i,c} = W^k r_i(S_c) + b^k$, using a learnable linear transformation $W^k$ and $b^k$, and where $r_i(S_c) = \frac{1}{|S_c|} \sum_{x \in S_c} r_i(x)$, using a similar notation as for $r(S_c)$.
Attention scores $\alpha_i$: As for regular Transformer layers, we use scaled dot-product attention
$$\alpha_{i,c} = \frac{\exp(\beta_{i,c})}{\sum_{i'} \exp(\beta_{i',c})}, \qquad \beta_{i,c} = \frac{q_c^\top k_{i,c}}{\sqrt{l}}, \tag{4}$$
where l is the dimensionality of the keys and queries. Then, these per-class scores are aggregated to obtain scores for the full support set by averaging
$$\alpha_i = \frac{\sum_c \alpha_{i,c}}{N}. \tag{5}$$
Equipped with these attention scores, the URT layer can now produce an adapted representation for the task (for the support and query set examples) by computing
$$\phi(x) = \sum_i \alpha_i\, r_i(x). \tag{6}$$
As we can see, this approach has the flexibility of either selecting a single domain-specific backbone (by assigning $\alpha_i = 1$ for a single domain) or blending different domains together (by having $\alpha_i \gg 0$ for multiple backbones).
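To make the single-head computation concrete, the following PyTorch sketch implements Eqs. (3)-(6) under our own (hypothetical) tensor layout: `r_sc` holds the universal class representations $r(S_c)$, `r_i_sc` the per-backbone class representations $r_i(S_c)$, and `r_i_x` the per-backbone features of the examples to adapt; all backbones are assumed to share feature dimension `d`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadURT(nn.Module):
    """Single-head URT layer (Eqs. 4-6). d: per-backbone feature dim, m: #backbones."""
    def __init__(self, d, m, l=1024):
        super().__init__()
        self.W_q = nn.Linear(m * d, l)  # queries act on the universal representation
        self.W_k = nn.Linear(d, l)      # keys act on each domain-specific representation
        self.l = l

    def attention_scores(self, r_sc, r_i_sc):
        # r_sc:   (N, m*d)  universal class representations, Eq. (3)
        # r_i_sc: (N, m, d) per-backbone class representations r_i(S_c)
        q = self.W_q(r_sc)                                 # (N, l)
        k = self.W_k(r_i_sc)                               # (N, m, l)
        beta = torch.einsum('nl,nml->nm', q, k) / self.l ** 0.5
        alpha_c = F.softmax(beta, dim=1)                   # per-class scores, Eq. (4)
        return alpha_c.mean(dim=0)                         # averaged over classes, Eq. (5)

    def forward(self, r_sc, r_i_sc, r_i_x):
        # r_i_x: (B, m, d) per-backbone features of support/query examples
        alpha = self.attention_scores(r_sc, r_i_sc)        # (m,)
        return torch.einsum('m,bmd->bd', alpha, r_i_x)     # phi(x), Eq. (6)
```

A learned score vector that puts all its mass on one backbone recovers backbone selection, while a spread-out α blends domains, matching the two strategies discussed above.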
Multi-Head URT Layer
The URT layer described so far can only learn to retrieve a single backbone (or blending of backbones). Yet, it might be beneficial to retrieve multiple different (blended) backbones, especially for a few-shot task that would include many classes of varying complexity.
Thus, to achieve such diversity in the adapted representation, we also consider URT layers with multiple heads, i.e. where each head corresponds to the calculation of Equation 6 and each head has its own set of parameters $(W_q, b_q, W_k, b_k)$. Denoting each head now as $\phi_h$, a multi-head URT layer then produces as its output the concatenation of all of its heads:
$$\phi(x) = \mathrm{concat}(\phi_1(x), \ldots, \phi_H(x)). \tag{7}$$
Empirically, we found that the randomness in the initialization of head weights alone did not lead to unique and complementary heads, so, inspired by Lin et al. [10], we add a regularizer to avoid duplication of the attention scores:
$$\Omega(\Theta) = \lVert AA^\top - I \rVert_F^2, \tag{8}$$
where $\lVert \cdot \rVert_F$ is the Frobenius norm of a matrix and $A \in \mathbb{R}^{H \times m}$ is the matrix of attention scores, with row $A_h$ being the vector of all scores $\alpha_i$ for head $h$. The identity matrix $I$ regularizes each set of attention scores to be more focused so that multiple heads can attend to different domain-specific backbones.
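Continuing the same sketch, a multi-head wrapper implementing Eq. (7) together with the penalty of Eq. (8); here `scores` stacks the per-head score vectors so that $AA^\top$ is H×H. The class builds on the hypothetical `SingleHeadURT` above.

```python
class MultiHeadURT(nn.Module):
    """Multi-head URT layer (Eq. 7) with the head-diversity regularizer (Eq. 8)."""
    def __init__(self, d, m, l=1024, n_heads=2):
        super().__init__()
        self.heads = nn.ModuleList([SingleHeadURT(d, m, l) for _ in range(n_heads)])

    def forward(self, r_sc, r_i_sc, r_i_x):
        scores = torch.stack([h.attention_scores(r_sc, r_i_sc) for h in self.heads])
        phi = torch.cat([torch.einsum('m,bmd->bd', a, r_i_x) for a in scores], dim=-1)
        eye = torch.eye(len(self.heads), device=scores.device)
        omega = torch.norm(scores @ scores.t() - eye, p='fro') ** 2   # Eq. (8)
        return phi, omega   # phi(x): (B, H*d), plus the regularization term
```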
Training Strategy
We train representations produced by the URT layer by following the approach of Prototypical Networks [19], where the probability of a label y for a query example x given the support set of a task is modeled as:
$$p(y = c \mid x, S; \Theta) = \frac{\exp(-d(\phi(x), p_c))}{\sum_{c'=1}^{N} \exp(-d(\phi(x), p_{c'}))}, \tag{9}$$
where $d$ is a distance metric and $p_c = \frac{1}{|S_c|}\sum_{x \in S_c} \phi(x)$ corresponds to the centroid of class $c$, referred to as its prototype. We use (negative) cosine similarity as the distance. The full training algorithm is presented in Algorithm 1.
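A sketch of the episodic objective, combining Eq. (9) with cosine distance and the Eq. (8) penalty; `lam` stands for the paper's regularization factor λ, and the helper names are our assumptions.

```python
import torch
import torch.nn.functional as F

def prototypical_log_probs(phi_support, y_support, phi_query, n_classes):
    """Eq. (9) with d = negative cosine similarity; p_c are class centroids."""
    protos = torch.stack([phi_support[y_support == c].mean(dim=0)
                          for c in range(n_classes)])
    sims = F.cosine_similarity(phi_query.unsqueeze(1), protos.unsqueeze(0), dim=-1)
    return F.log_softmax(sims, dim=1)      # -d(phi(x), p_c) = +cosine similarity

def episode_loss(log_probs, y_query, omega, lam=0.1):
    """Cross-entropy on the query set plus the head-diversity penalty."""
    return F.nll_loss(log_probs, y_query) + lam * omega
```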
Experiments
In this section, we seek to answer three key experimental questions:
Q1: How does URT compare with the previous state of the art on multi-domain few-shot classification?
Q2: Do the URT attention heads generate interpretable and meaningful attention scores?
Q3: Does the URT layer provide consistent benefits, even when pre-trained backbones are trained in different ways?
In addition, we investigate architectural choices made, such as our models for keys/queries and their regularization, and study their contribution to achieving strong performance with URT.
Datasets and Setup
We test our methods on the large-scale few-shot learning benchmark Meta-Dataset [20]. It consists of ten datasets with various data distributions across different domains, including natural images (Birds, Fungi, VGG Flower), hand-written characters (Omniglot, Quick Draw), and human-created objects (Traffic Signs, Aircraft). Among the ten datasets, eight provide data that can be used during training, validation, or testing (with each class assigned to only one of those sets), while two datasets are solely used for testing. Following Requeima et al. [16], we also include MNIST [9], CIFAR10 and CIFAR100 [7] as additional unseen test datasets. Following Triantafillou et al. [20], few-shot tasks are sampled with a varying number of classes N, a varying number of shots K, and class imbalance. The performance is reported as the average accuracy over 600 sampled tasks. More details of Meta-Dataset can be found in Triantafillou et al. [20].
The domain-specific backbones are pre-trained following the setup in [5]. Then, we freeze the backbones and train the URT layer for 10,000 episodes, with an initial learning rate of 0.01 and a cosine learning rate scheduler. Following Chen et al. [3], the training episodes have a 50% probability of coming from the ImageNet data source. Since different pre-trained backbones may produce representations with different vector norms, we normalize the outputs of the backbones as in Dvornik et al. [5]. URT is trained with a parameter weight decay of 1e-5 and a regularization factor λ = 0.1. The number of heads (H in Equation 7) is set to 2, and the dimension of the keys and queries (l in Equation 4) is set to 1024. We choose the hyper-parameters based on performance on the validation set. Details of the hyper-parameter selection and how performance is influenced by it are outlined in Section 4.5.

Comparison with Previous Approaches

Table 1 presents a comparison of URT with SUR, as well as other baselines based on transfer learning by fine-tuning [17] or meta-learning (Prototypical Networks [19], first-order MAML [6], ProtoMAML [20], CNAPs [16]). Among the listed datasets, eight (above the middle line) have some of their classes used for training and five (below the middle line) do not.
We observe that URT establishes a new state of the art, outperforming SUR on 3 datasets without compromising performance on the others, which is challenging to achieve in the multi-domain setting. Of note, the average inference time for URT is 0.04 seconds per task, compared to 0.43 for SUR, on a single V100. Thus, getting rid of the optimization procedure for every episode with our meta-trained URT layer also significantly reduces inference latency, by more than 10×. Additionally, URT outperforms all other meta-learning methods on all datasets.
Interpreting and Visualizing Attention by URT
To better understand how the URT model of Section 4.2 uses its two heads to build adapted representations, we visualize the attention scores produced on the test tasks of Meta-Dataset in Figure 2.
The blue (first head) and orange (second head) heatmaps summarize the values of the attention scores (Equation 5), averaged across several tasks for each test domain. Specifically, the element on row t and column i is the averaged attention score α_i computed on test tasks from domain t for the backbone from domain i. Note that the last two rows are the two unseen-domain datasets. We found that for datasets from the seen domains, i.e. the first eight rows, one head (right, orange) consistently puts most of its weight on the backbone pre-trained on the same domain, while the other head (left, blue) learns relatively smoother weight distributions that blend other related domains. For unseen datasets, the right head puts half of its weight on ImageNet and the left head learns to blend the representations from four backbones.

URT using FiLM Modulated Backbones

As additional evidence of the benefit of URT on universal representations, we also present experiments based on a different set of backbone architectures. Following SUR [5], we consider backbones from a parametric network family, obtained by training a base backbone on one dataset (ILSVRC) and then learning separate FiLM layers [12] for each other dataset, to modulate the backbone so it is adapted to the other domains. These backbones collectively have only 0.5% more parameters than a single backbone.
A comparison between SUR and URT using these backbones (referred to as SUR-pf and URT-pf) is presented in Table 2. Once again, URT improves the performance on three datasets without sacrificing performance on others. Additionally, URT-pf now achieves better performance than BOHB-E on VGGFlower.
Hyper-Parameter and Ablation Studies
We analyze the importance of the various components of URT's attention mechanism structure and training strategy in Table 3. First, we analyze the importance of using the support set to model queries and/or keys. To this end, we consider setting the matrices $W_q$ / $W_k$ of the query / key linear transformations to 0, which leaves only the bias term. We found that the support set representation is most crucial for building the keys (row w/o $W_k$ in the table) and has minor benefits for queries (row w/o $W_q$ in the table). This observation is possibly related to the success of attention-based models with learnable constant queries [11, 10]. We also found that adding the regularizer $\Omega(\Theta)$ of Equation 8 is important for some datasets, specifically VGG Flower and Birds.

An important hyper-parameter in URT is the number of heads H. We chose this hyper-parameter based on performance on the validation set of tasks in Meta-Dataset. In Table 4, we show the validation performance of URT for a varying number of heads. As suggested by Triantafillou et al. [20], we considered looking at the rank of the performance achieved by each choice of H for each validation domain, and taking the average across domains as a validation metric. However, since the performances when using two to four heads are similar and yield the same average rank, we instead simply consider the average accuracy as the selection criterion. In general, we observe a large jump in performance when using multiple heads instead of just one. However, since the number of heads controls capacity, we predictably also observe that having too many heads leads to overfitting.
Conclusion
We proposed the URT layer to effectively integrate representations from multiple domains and demonstrated improved performance in multi-domain few-shot classification. Notably, our URT approach was able to set a new state of the art on Meta-Dataset, while also being 10× more efficient at inference compared to the next-best approach (SUR). This work suggests that combining meta-learning with pre-trained universal representations is a promising direction for new few-shot learning methods. Specifically, we hope that future work can investigate the design of richer forms of universal representations that go beyond simply pre-training a single backbone for each domain, and develop meta-learners adapted to those settings.
Broader Impact
Our URT model may offer a useful component for applications where the collection and sharing of data is difficult. This could include settings where each user of an application has limited private data and desires that a classification task be executed directly and solely on their device. Any deployment of the proposed model should, however, be preceded by an analysis of the potential biases captured by the dataset sources used for training and the correction of any such undesirable biases captured by the pre-trained backbones and model.
Figure 2: Average attention scores generated by URT with two heads. Rows correspond to the domain of the test tasks and the columns correspond to the pre-trained backbones $r_i(x)$ trained on the eight training domains.
Table 1: Test performance (mean ± 95% confidence interval) over 600 few-shot tasks. URT and the most recent methods (columns) are compared on 13 test datasets (rows). In the original, the numbers in bold have intersecting confidence intervals with the most accurate method.

              ProtoNet[19]  MAML[6]     ProtoMAML[20]  CNAPs[16]   BOHB-E[17]  SUR[5]      URT
ILSVRC        44.5 ± 1.1    37.8 ± 1.0  46.5 ± 1.1     52.3 ± 1.0  55.4 ± 1.1  56.3 ± 1.1  55.7 ± 1.0
Omniglot      79.6 ± 1.1    83.9 ± 1.0  82.7 ± 1.0     88.4 ± 0.7  77.5 ± 1.1  93.1 ± 0.5  94.4 ± 0.4
Aircraft      71.1 ± 0.9    76.4 ± 0.7  75.2 ± 0.8     80.5 ± 0.6  60.9 ± 0.9  85.4 ± 0.7  85.8 ± 0.6
Birds         67.0 ± 1.0    62.4 ± 1.1  69.9 ± 1.0     72.2 ± 0.9  73.6 ± 0.8  71.4 ± 1.0  76.3 ± 0.8
Textures      65.2 ± 0.8    64.1 ± 0.8  68.3 ± 0.8     58.3 ± 0.7  72.8 ± 0.7  71.5 ± 0.8  71.8 ± 0.7
QuickDraw     64.9 ± 0.9    59.7 ± 1.1  66.8 ± 0.9     72.5 ± 0.8  61.2 ± 0.9  81.3 ± 0.6  82.5 ± 0.6
Fungi         40.3 ± 1.1    33.5 ± 1.1  42.0 ± 1.2     47.4 ± 1.0  44.5 ± 1.1  63.1 ± 1.0  63.5 ± 1.0
VGGFlower     86.9 ± 0.7    79.9 ± 0.8  88.7 ± 0.7     86.0 ± 0.5  90.6 ± 0.6  82.8 ± 0.7  88.2 ± 0.6
TrafficSigns  46.5 ± 1.0    42.9 ± 1.3  52.4 ± 1.1     60.2 ± 0.9  57.5 ± 1.0  70.4 ± 0.8  69.4 ± 0.8
MSCOCO        39.9 ± 1.1    29.4 ± 1.1  41.7 ± 1.1     42.6 ± 1.1  51.9 ± 1.0  52.4 ± 1.1  52.2 ± 1.1
MNIST         -             -           -              92.7 ± 0.4  -           94.3 ± 0.4  94.8 ± 0.4
CIFAR10       -             -           -              61.5 ± 0.7  -           66.8 ± 0.9  67.3 ± 0.8
CIFAR100      -             -           -              50.1 ± 1.0  -           56.6 ± 1.0  56.9 ± 1.0
Table 2: Performance comparison using parametric network family (pf) backbones.

               SUR-pf [5]  URT-pf      VS.
ILSVRC         56.4 ± 1.2  55.5 ± 1.1  =
Omniglot       88.5 ± 0.8  90.2 ± 0.6  +
Aircraft       79.5 ± 0.8  79.8 ± 0.7  =
Birds          76.4 ± 0.9  77.5 ± 0.8  =
Textures       73.1 ± 0.7  73.5 ± 0.7  =
Quick Draw     75.7 ± 0.7  75.8 ± 0.7  =
Fungi          48.2 ± 0.9  48.1 ± 0.9  =
VGG Flower     90.6 ± 0.5  91.9 ± 0.5  +
Traffic Signs  65.1 ± 0.8  67.5 ± 0.8  +
MSCOCO         52.1 ± 1.0  52.1 ± 1.0  =
MNIST          93.2 ± 0.4  93.9 ± 0.4  =
CIFAR10        66.4 ± 0.8  66.1 ± 0.8  =
CIFAR100       57.1 ± 1.0  57.3 ± 1.0  =
Table 3: Meta-Dataset performance variation on ablations of elements of the URT layer.

            ILSVRC  Omniglot  Aircraft  Birds  Textures  Draw  Fungi  Flower  Signs  MSCOCO
w/o W_q     +0.2    -0.2      -0.6      -0.1   -0.3      -0.2  0.0    -0.2    -0.8   -0.1
w/o W_k     -14.2   -2.8      -10.7     -18.1  -7.6      -9.3  -22.4  -3.6    -0.26  -10.9
w/o r(S_c)  -14.2   -2.8      -10.7     -18.1  -7.6      -9.2  -22.4  -3.6    -0.26  -10.9
w/o Ω(Θ)    0.0     -0.9      -0.4      -3.3   -1.2      -0.2  +0.3   -9.0    -2.0   0.0
Table 4: Validation performance on Meta-Dataset using different numbers of heads.

H                 1       2       3       4       5       6       7       8
Average Accuracy  74.605  77.145  76.943  76.984  76.602  75.906  75.454  74.473
Average Rank      2.875   1.000   1.000   1.000   2.250   2.250   2.25    2.50
Footnote: Unable to avoid the unfortunate double usage of the term "query" due to conflicting conventions, we highlight the difference between the query sets $Q$ of few-shot tasks and the queries $q_c$ of an attention mechanism.
Acknowledgements

We would like to thank Tianyi Zhou for paper review and suggestions. The computation support for this project is provided by Compute Canada and Google Cloud. This project was supported by the Canada CIFAR AI Chairs program.
References

[1] Hakan Bilen and Andrea Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275, 2017.
[2] Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In International Conference on Learning Representations (ICLR), 2019.
[3] Yinbo Chen, Xiaolong Wang, Zhuang Liu, Huijuan Xu, and Trevor Darrell. A new meta-baseline for few-shot learning. arXiv preprint arXiv:2003.04390, 2020.
[4] Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. In International Conference on Learning Representations (ICLR), 2020.
[5] Nikita Dvornik, Cordelia Schmid, and Julien Mairal. Selecting relevant features from a universal representation for few-shot classification. arXiv preprint arXiv:2003.09338, 2020.
[6] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In The International Conference on Machine Learning (ICML), 2017.
[7] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[8] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
[9] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[10] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
[11] Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. Learning natural language inference using bidirectional LSTM model and inner-attention. arXiv preprint arXiv:1605.09090, 2016.
[12] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. FiLM: Visual reasoning with a general conditioning layer. In AAAI Conference on Artificial Intelligence (AAAI), 2018.
[13] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations (ICLR), 2016.
[14] Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Efficient parametrization of multi-domain deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[15] Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenenbaum, Hugo Larochelle, and Richard S Zemel. Meta-learning for semi-supervised few-shot classification. In International Conference on Learning Representations (ICLR), 2018.
[16] James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, and Richard E Turner. Fast and flexible multi-task classification using conditional neural adaptive processes. In The Conference on Neural Information Processing Systems (NeurIPS), pages 7957-7968, 2019.
[17] Tonmoy Saikia, Thomas Brox, and Cordelia Schmid. Optimized generic feature learning for few-shot classification across domains. arXiv preprint arXiv:2001.07926, 2020.
[18] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In The International Conference on Machine Learning (ICML), 2016.
[19] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In The Conference on Neural Information Processing Systems (NeurIPS), 2017.
[20] Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al. Meta-Dataset: A dataset of datasets for learning to learn from few examples. In International Conference on Learning Representations (ICLR), 2020.
[21] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In The Conference on Neural Information Processing Systems (NeurIPS), 2017.
[22] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In The Conference on Neural Information Processing Systems (NeurIPS), 2016.
[23] Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, and Fei Sha. Few-shot learning via embedding adaptation with set-to-set functions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. |
59,279,266 | GRAPH WAVELET NEURAL NETWORK | We present graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), leveraging graph wavelet transform to address the shortcomings of previous spectral graph CNN methods that depend on graph Fourier transform. Different from graph Fourier transform, graph wavelet transform can be obtained via a fast algorithm without requiring matrix eigendecomposition with high computational cost. Moreover, graph wavelets are sparse and localized in vertex domain, offering high efficiency and good interpretability for graph convolution. The proposed GWNN significantly outperforms previous spectral graph CNNs in the task of graph-based semi-supervised classification on three benchmark datasets: Cora, Citeseer and Pubmed. | [
3144218,
17682909
] | GRAPH WAVELET NEURAL NETWORK
Bingbing Xu [email protected]
Institute of Computing Technology
CAS Key Laboratory of Network Data Science and Technology
Chinese Academy of Sciences
School of Computer and Control Engineering
University of Chinese Academy of Sciences
BeijingChina
Huawei Shen [email protected]
Institute of Computing Technology
CAS Key Laboratory of Network Data Science and Technology
Chinese Academy of Sciences
School of Computer and Control Engineering
University of Chinese Academy of Sciences
BeijingChina
Qi Cao [email protected]
Institute of Computing Technology
CAS Key Laboratory of Network Data Science and Technology
Chinese Academy of Sciences
School of Computer and Control Engineering
University of Chinese Academy of Sciences
BeijingChina
Yunqi Qiu [email protected]
Institute of Computing Technology
CAS Key Laboratory of Network Data Science and Technology
Chinese Academy of Sciences
School of Computer and Control Engineering
University of Chinese Academy of Sciences
BeijingChina
Xueqi Cheng
Institute of Computing Technology
CAS Key Laboratory of Network Data Science and Technology
Chinese Academy of Sciences
School of Computer and Control Engineering
University of Chinese Academy of Sciences
BeijingChina
GRAPH WAVELET NEURAL NETWORK
Published as a conference paper at ICLR 2019
We present graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), leveraging graph wavelet transform to address the shortcomings of previous spectral graph CNN methods that depend on graph Fourier transform. Different from graph Fourier transform, graph wavelet transform can be obtained via a fast algorithm without requiring matrix eigendecomposition with high computational cost. Moreover, graph wavelets are sparse and localized in vertex domain, offering high efficiency and good interpretability for graph convolution. The proposed GWNN significantly outperforms previous spectral graph CNNs in the task of graph-based semi-supervised classification on three benchmark datasets: Cora, Citeseer and Pubmed.
INTRODUCTION
Convolutional neural networks (CNNs) (LeCun et al., 1998) have been successfully used in many machine learning problems, such as image classification (He et al., 2016) and speech recognition (Hinton et al., 2012), where there is an underlying Euclidean structure. The success of CNNs lies in their ability to leverage the statistical properties of Euclidean data, e.g., translation invariance. However, in many research areas, data are naturally located in a non-Euclidean space, with graph or network being one typical case. The non-Euclidean nature of graph is the main obstacle or challenge when we attempt to generalize CNNs to graph. For example, convolution is not well defined on graph, since the size of the neighborhood varies dramatically across nodes.
Existing methods attempting to generalize CNNs to graph data fall into two categories, spatial methods and spectral methods, according to the way that convolution is defined. Spatial methods define convolution directly on the vertex domain, following the practice of the conventional CNN. For each vertex, convolution is defined as a weighted average function over all vertices located in its neighborhood, with the weighting function characterizing the influence exerted on the target vertex by its neighbors. The main challenge is to define a convolution operator that can handle neighborhoods of different sizes and maintain the weight-sharing property of CNN. Although spatial methods have gained some initial success and offer a flexible framework for generalizing CNNs to graph, it is still elusive to determine appropriate neighborhoods.
Spectral methods define convolution via graph Fourier transform and convolution theorem. Spectral methods leverage graph Fourier transform to convert signals defined in vertex domain into spectral domain, e.g., the space spanned by the eigenvectors of the graph Laplacian matrix, and then filter is defined in spectral domain, maintaining the weight sharing property of CNN. As the pioneering work of spectral methods, spectral CNN (Bruna et al., 2014) exploited graph data with the graph Fourier transform to implement convolution operator using convolution theorem. Some subsequent works make spectral methods spectrum-free (Defferrard et al., 2016;Kipf & Welling, 2017;Khasanova & Frossard, 2017), achieving locality in spatial domain and avoiding high computational cost of the eigendecomposition of Laplacian matrix.
In this paper, we present graph wavelet neural network to implement efficient convolution on graph data. We take graph wavelets instead of the eigenvectors of graph Laplacian as a set of bases, and define the convolution operator via wavelet transform and convolution theorem. Graph wavelet neural network distinguishes itself from spectral CNN by its three desirable properties: (1) Graph wavelets can be obtained via a fast algorithm without requiring the eigendecomposition of Laplacian matrix, and thus is efficient; (2) Graph wavelets are sparse, while eigenvectors of Laplacian matrix are dense. As a result, graph wavelet transform is much more efficient than graph Fourier transform;
(3) Graph wavelets are localized in vertex domain, reflecting the information diffusion centered at each node (Tremblay & Borgnat, 2014). This property eases the understanding of graph convolution defined by graph wavelets.
We develop an efficient implementation of the proposed graph wavelet neural network. Convolution in conventional CNN learns an individual convolution kernel for each pair of input feature and output feature, causing a huge number of parameters especially when the number of features is high. We detach the feature transformation from convolution and learn a sole convolution kernel among all features, substantially reducing the number of parameters. Finally, we validate the effectiveness of the proposed graph wavelet neural network by applying it to graph-based semi-supervised classification. Experimental results demonstrate that our method consistently outperforms previous spectral CNNs on three benchmark datasets, i.e., Cora, Citeseer, and Pubmed.
OUR METHOD
PRELIMINARY
Let $G = \{V, E, A\}$ be an undirected graph, where $V$ is the set of nodes with $|V| = n$, $E$ is the set of edges, and $A$ is the adjacency matrix with $A_{i,j} = A_{j,i}$ defining the connection between node $i$ and node $j$. The graph Laplacian matrix $L$ is defined as $L = D - A$, where $D$ is a diagonal degree matrix with $D_{i,i} = \sum_j A_{i,j}$, and the normalized Laplacian matrix is $L = I_n - D^{-1/2} A D^{-1/2}$, where $I_n$ is the identity matrix. Since $L$ is a real symmetric matrix, it has a complete set of orthonormal eigenvectors $U = (u_1, u_2, \ldots, u_n)$, known as Laplacian eigenvectors. These eigenvectors have associated real, non-negative eigenvalues $\{\lambda_l\}_{l=1}^n$, identified as the frequencies of the graph. Eigenvectors associated with smaller eigenvalues carry slowly varying signals, indicating that connected nodes share similar values. In contrast, eigenvectors associated with larger eigenvalues carry faster-varying signals across connected nodes.
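For concreteness, here is a small NumPy sketch of these definitions; the function and variable names are ours.

```python
import numpy as np

def normalized_laplacian(A):
    """Return L = I_n - D^{-1/2} A D^{-1/2} and its eigendecomposition."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5
    d_inv_sqrt[d == 0] = 0.0                      # guard isolated nodes
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    lam, U = np.linalg.eigh(L)                    # frequencies and eigenvectors
    return L, lam, U

# Toy 3-node path graph.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
L, lam, U = normalized_laplacian(A)
```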
GRAPH FOURIER TRANSFORM
Taking the eigenvectors of the normalized Laplacian matrix as a set of bases, the graph Fourier transform of a signal $x \in \mathbb{R}^n$ on graph $G$ is defined as $\hat{x} = U^\top x$, and the inverse graph Fourier transform is $x = U\hat{x}$ (Shuman et al., 2013). Graph Fourier transform, according to the convolution theorem, offers us a way to define the graph convolution operator, denoted as $*_G$. Denoting with $y$ the convolution kernel, $*_G$ is defined as
$$x *_G y = U\big((U^\top y) \odot (U^\top x)\big), \tag{1}$$
where $\odot$ is the element-wise Hadamard product. Replacing the vector $U^\top y$ with a diagonal matrix $g_\theta$, the Hadamard product can be written as a matrix multiplication. Filtering the signal $x$ by the filter $g_\theta$, we can write Equation (1) as $U g_\theta U^\top x$.
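A direct (dense) implementation of Eq. (1) and of the learned filter, continuing the sketch above; this is for illustration and assumes the eigendecomposition is available.

```python
def fourier_graph_conv(x, y, U):
    """Eq. (1): x *_G y = U ((U^T y) ⊙ (U^T x))."""
    return U @ ((U.T @ y) * (U.T @ x))

def spectral_filter(x, theta, U):
    """Learned filter: U diag(theta) U^T x, one free parameter per frequency."""
    return U @ (theta * (U.T @ x))
```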
However, there are some limitations when using Fourier transform to implement graph convolution: (1) eigendecomposition of the Laplacian matrix to obtain the Fourier basis $U$ has a high computational cost of $O(n^3)$; (2) graph Fourier transform is inefficient, since it involves multiplication between a dense matrix $U$ and the signal $x$; (3) graph convolution defined through Fourier transform is not localized in vertex domain, i.e., the influence on the signal at one node is not restricted to its neighborhood. To address these limitations, ChebyNet (Defferrard et al., 2016) restricts the convolution kernel $g_\theta$ to a polynomial expansion
$$g_\theta = \sum_{k=0}^{K-1} \theta_k \Lambda^k, \tag{2}$$
where $K$ is a hyper-parameter determining the range of node neighborhoods via the shortest-path distance, $\theta \in \mathbb{R}^K$ is a vector of polynomial coefficients, and $\Lambda = \mathrm{diag}(\{\lambda_l\}_{l=1}^n)$. However, such a polynomial approximation limits the flexibility of defining an appropriate convolution on graph: with a smaller $K$, it is hard to approximate the diagonal matrix $g_\theta$, which has $n$ free parameters, while with a larger $K$, locality is no longer guaranteed. Different from ChebyNet, we address the aforementioned three limitations by replacing graph Fourier transform with graph wavelet transform.
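A minimal sketch of the Eq. (2) parametrization, with the eigenvalues `lam` from the earlier snippet:

```python
def chebynet_kernel(theta, lam):
    """Eq. (2): g_theta = sum_{k=0}^{K-1} theta_k * Lambda^k, as a vector over
    the eigenvalues lam; the filtered signal is then U diag(g_theta) U^T x."""
    return sum(t * lam ** k for k, t in enumerate(theta))

# With K=2 this filter can only realize affine functions of the eigenvalues,
# illustrating the limited flexibility discussed above.
```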
GRAPH WAVELET TRANSFORM
Similar to graph Fourier transform, graph wavelet transform projects a graph signal from vertex domain into spectral domain. Graph wavelet transform employs a set of wavelets as bases, defined as $\psi_s = (\psi_{s1}, \psi_{s2}, \ldots, \psi_{sn})$, where each wavelet $\psi_{si}$ corresponds to a signal on graph diffused away from node $i$ and $s$ is a scaling parameter. Mathematically, $\psi_s$ can be written as
$$\psi_s = U G_s U^\top, \tag{3}$$
where $U$ contains the Laplacian eigenvectors, $G_s = \mathrm{diag}\big(g(s\lambda_1), \ldots, g(s\lambda_n)\big)$ is a scaling matrix, and $g(s\lambda_i) = e^{\lambda_i s}$.
Using graph wavelets as bases, graph wavelet transform of a signal $x$ on graph is defined as $\hat{x} = \psi_s^{-1} x$ and the inverse graph wavelet transform is $x = \psi_s \hat{x}$. Note that $\psi_s^{-1}$ can be obtained by simply replacing $g(s\lambda_i)$ in $\psi_s$ with $g(-s\lambda_i)$, corresponding to a heat kernel (Donnat et al., 2018). Replacing the graph Fourier transform in Equation (1) with graph wavelet transform, we obtain the graph convolution as
$$x *_G y = \psi_s\big((\psi_s^{-1} y) \odot (\psi_s^{-1} x)\big). \tag{4}$$
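The dense construction of the wavelet bases (Eq. 3) and of the wavelet-domain convolution (Eq. 4) can be sketched as follows; in practice the eigendecomposition is avoided via the Chebyshev approximation of Hammond et al. (2011), so this dense form is for illustration only.

```python
def wavelet_bases(U, lam, s):
    """Eq. (3): psi_s = U G_s U^T with g(s*lam_i) = exp(lam_i * s); the inverse
    replaces it with exp(-lam_i * s), corresponding to a heat kernel."""
    psi = U @ np.diag(np.exp(s * lam)) @ U.T
    psi_inv = U @ np.diag(np.exp(-s * lam)) @ U.T
    return psi, psi_inv

def wavelet_graph_conv(x, y, psi, psi_inv):
    """Eq. (4): x *_G y = psi_s ((psi_s^{-1} y) ⊙ (psi_s^{-1} x))."""
    return psi @ ((psi_inv @ y) * (psi_inv @ x))
```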
Compared to graph Fourier transform, graph wavelet transform has the following benefits when being used to define graph convolution:
1. High efficiency: graph wavelets can be obtained via a fast algorithm without requiring the eigendecomposition of the Laplacian matrix. In Hammond et al. (2011), a method is proposed that uses Chebyshev polynomials to efficiently approximate $\psi_s$ and $\psi_s^{-1}$, with computational complexity $O(m \times |E|)$, where $|E|$ is the number of edges and $m$ is the order of the Chebyshev polynomials.
2. High sparseness: the matrices $\psi_s$ and $\psi_s^{-1}$ are both sparse for real-world networks, given that these networks are usually sparse. Therefore, graph wavelet transform is much more computationally efficient than graph Fourier transform. For example, in the Cora dataset, more than 97% of the elements in $\psi_s^{-1}$ are zero while less than 1% of the elements in $U$ are zero (Table 4).
3. Localized convolution: each wavelet corresponds to a signal on graph diffused away from a centered node, highly localized in vertex domain. As a result, the graph convolution defined in Equation (4) is localized in vertex domain. We show the localization property of graph convolution in Appendix A. It is this localization property that explains why graph wavelet transform outperforms Fourier transform in defining graph convolution and the associated tasks like graph-based semi-supervised learning.
4. Flexible neighborhood: graph wavelets are more flexible to adjust a node's neighborhood. Different from previous methods, which constrain neighborhoods by the discrete shortest-path distance, our method leverages a continuous mechanism, i.e., varying the scaling parameter $s$. A smaller value of $s$ generally corresponds to a smaller neighborhood. Figure 1 shows two wavelet bases at different scales on an example network, depicted using the GSP toolbox (Perraudin et al., 2014).
GRAPH WAVELET NEURAL NETWORK
Replacing Fourier transform with wavelet transform, graph wavelet neural network (GWNN) is a multi-layer convolutional neural network. The structure of the m-th layer is
$$X^{m+1}_{[:,j]} = h\Big(\psi_s \sum_{i=1}^{p} F^m_{i,j}\, \psi_s^{-1} X^m_{[:,i]}\Big) \quad j = 1, \cdots, q, \tag{5}$$
where $\psi_s$ contains the wavelet bases, $\psi_s^{-1}$ is the graph wavelet transform matrix at scale $s$, which projects a signal in vertex domain into spectral domain, $X^m_{[:,i]}$ with dimensions $n \times 1$ is the $i$-th column of $X^m$, $F^m_{i,j}$ is a diagonal filter matrix learned in spectral domain, and $h$ is a non-linear activation function. This layer transforms an input tensor $X^m$ with dimensions $n \times p$ into an output tensor $X^{m+1}$ with dimensions $n \times q$.
In this paper, we consider a two-layer GWNN for semi-supervised node classification on graph. The formulation of our model is
first layer: $$X^2_{[:,j]} = \mathrm{ReLU}\Big(\psi_s \sum_{i=1}^{p} F^1_{i,j}\, \psi_s^{-1} X^1_{[:,i]}\Big) \quad j = 1, \cdots, q, \tag{6}$$
second layer: $$Z_j = \mathrm{softmax}\Big(\psi_s \sum_{i=1}^{q} F^2_{i,j}\, \psi_s^{-1} X^2_{[:,i]}\Big) \quad j = 1, \cdots, c, \tag{7}$$
where $c$ is the number of classes in node classification and $Z$, of dimensions $n \times c$, is the prediction result. The loss function is the cross-entropy error over all labeled examples:
$$\mathrm{Loss} = -\sum_{l \in y_L} \sum_{i=1}^{c} Y_{li} \ln Z_{li}, \tag{8}$$
where $y_L$ is the set of labeled nodes, $Y_{li} = 1$ if the label of node $l$ is $i$, and $Y_{li} = 0$ otherwise. The weights $F$ are trained using gradient descent.
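A sketch of the masked cross-entropy of Eq. (8), assuming `Z` holds the softmax outputs and `labeled_idx` indexes the nodes in $y_L$; names are our assumptions.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(Z, y, labeled_idx, eps=1e-12):
    """Eq. (8): cross-entropy over labeled nodes only.
    Z: (n, c) softmax outputs; y: (n,) integer class labels."""
    return F.nll_loss(torch.log(Z[labeled_idx] + eps), y[labeled_idx])
```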
REDUCING PARAMETER COMPLEXITY
In Equation (5), the parameter complexity of each layer is $O(n \times p \times q)$, where $n$ is the number of nodes, $p$ is the number of features of each vertex in the current layer, and $q$ is the number of features of each vertex in the next layer. Conventional CNN methods learn a convolution kernel for each pair of input feature and output feature. This results in a huge number of parameters and generally requires huge training data for parameter learning, which is prohibitive for graph-based semi-supervised learning.
To combat this issue, we detach the feature transformation from graph convolution. Each layer in GWNN is divided into two components: feature transformation and graph convolution. Specifically, we have

feature transformation: $$X^{m'} = X^m W, \tag{9}$$
graph convolution: $$X^{m+1} = h(\psi_s F^m \psi_s^{-1} X^{m'}). \tag{10}$$

where $W \in \mathbb{R}^{p \times q}$ is the parameter matrix for feature transformation, $X^{m'}$ with dimensions $n \times q$ is the feature matrix after feature transformation, $F^m$ is the diagonal matrix for the graph convolution kernel, and $h$ is a non-linear activation function.
After detaching feature transformation from graph convolution, the parameter complexity is reduced from $O(n \times p \times q)$ to $O(n + p \times q)$. The reduction of parameters is particularly valuable for graph-based semi-supervised learning where labels are quite limited.
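A minimal PyTorch sketch of one detached GWNN layer (Eqs. 9-10), assuming the (sparsified) wavelet matrices are precomputed; the class name and the dense matrix products are our simplifications.

```python
import torch
import torch.nn as nn

class GWNNLayer(nn.Module):
    """Feature transformation (Eq. 9) followed by graph convolution (Eq. 10).
    Parameter count: p*q for W plus n for the diagonal filter."""
    def __init__(self, n_nodes, in_dim, out_dim, psi, psi_inv):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)          # Eq. (9)
        self.diag_filter = nn.Parameter(torch.ones(n_nodes))     # F^m in Eq. (10)
        self.register_buffer('psi', psi)
        self.register_buffer('psi_inv', psi_inv)

    def forward(self, X):
        X = self.W(X)                                            # n x q
        X_hat = self.psi_inv @ X                                 # to wavelet domain
        return self.psi @ (self.diag_filter.unsqueeze(1) * X_hat)

# A two-layer GWNN then applies ReLU after the first layer and softmax after
# the second, as in Eqs. (6)-(7).
```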
RELATED WORKS
Graph convolutional neural networks. The success of CNNs when dealing with images, videos, and speech motivates researchers to design graph convolutional neural networks on graphs.
The key of generalizing CNNs to graphs is defining convolution operator on graphs. Existing methods are classified into two categories, i.e., spectral methods and spatial methods.
Spectral methods define convolution via convolution theorem. Spectral CNN (Bruna et al., 2014) is the first attempt at implementing CNNs on graphs, leveraging graph Fourier transform and defining convolution kernel in spectral domain. Boscaini et al. (2015) developed a local spectral CNN approach based on the graph Windowed Fourier Transform. Defferrard et al. (2016) introduced a Chebyshev polynomial parametrization for spectral filter, offering us a fast localized spectral filtering method. Kipf & Welling (2017) provided a simplified version of ChebyNet, gaining success in graph-based semi-supervised learning task. Khasanova & Frossard (2017) represented images as signals on graph and learned their transformation invariant representations. They used Chebyshev approximations to implement graph convolution, avoiding matrix eigendecomposition. Levie et al. (2017) used rational functions instead of polynomials and created anisotropic spectral filters on manifolds.
Spatial methods define convolution as a weighted average function over the neighborhood of a target vertex. GraphSAGE takes one-hop neighbors as neighborhoods and defines the weighting function as various aggregators over the neighborhood (Hamilton et al., 2017). Graph attention network (GAT) proposes to learn the weighting function via a self-attention mechanism (Velickovic et al., 2017). MoNet offers a general framework for designing spatial methods, taking convolution as the weighted average of multiple weighting functions defined over the neighborhood. Some works are devoted to making graph convolutional networks more powerful. Monti et al. (2018) alternated convolutions on vertices and edges, generalizing GAT and leading to better performance. GraphsGAN (Ding et al., 2018) generalizes GANs to graph, and generates fake samples in low-density areas between subgraphs to improve the performance of graph-based semi-supervised learning.
Graph wavelets. Sweldens (1998) presented a lifting scheme, a simple construction of wavelets that can be adapted to graphs without a learning process. Hammond et al. (2011) proposed a method to construct wavelet transforms on graphs. Moreover, they designed an efficient way to bypass the eigendecomposition of the Laplacian and approximated wavelets with Chebyshev polynomials. Tremblay & Borgnat (2014) leveraged graph wavelets for multi-scale community mining by modulating a scaling parameter. Owing to their property of describing information diffusion, Donnat et al. (2018) learned structural node embeddings via wavelets. All these works prove that graph wavelets are not only local and sparse but also valuable for signal processing on graph.
EXPERIMENTS
DATASETS
To evaluate the proposed GWNN, we apply GWNN on semi-supervised node classification, and conduct experiments on three benchmark datasets, namely, Cora, Citeseer and Pubmed (Sen et al., 2008). In the three citation network datasets, nodes represent documents and edges are citation links.
Details of these datasets are demonstrated in Table 1. Here, the label rate denotes the proportion of labeled nodes used for training. Following the experimental setup of GCN (Kipf & Welling, 2017), we fetch 20 labeled nodes per class in each dataset to train the model.
BASELINES
We compare with several traditional semi-supervised learning methods, including label propagation (LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifold regularization (ManiReg) (Belkin et al., 2006), graph embeddings (DeepWalk) (Perozzi et al., 2014), iterative classification algorithm (ICA) (Lu & Getoor, 2003) and Planetoid (Yang et al., 2016).
Furthermore, along with the development of deep learning on graph, graph convolutional networks are proved to be effective in semi-supervised learning. Since our method is a spectral method based on convolution theorem, we compare it with the Spectral CNN (Bruna et al., 2014). ChebyNet (Defferrard et al., 2016) and GCN (Kipf & Welling, 2017), two variants of the Spectral CNN, are also included as our baselines. Considering spatial methods, we take MoNet as our baseline, which also depends on Laplacian matrix.
EXPERIMENTAL SETTINGS
We train a two-layer graph wavelet neural network with 16 hidden units, and prediction accuracy is evaluated on a test set of 1000 labeled samples. The partition of datasets is the same as GCN (Kipf & Welling, 2017) with an additional validation set of 500 labeled samples to determine hyper-parameters.
Weights are initialized following Glorot & Bengio (2010). We adopt the Adam optimizer (Kingma & Ba, 2014) for parameter optimization with an initial learning rate lr = 0.01. For computational efficiency, we set the elements of $\psi_s$ and $\psi_s^{-1}$ smaller than a threshold $t$ to 0. We find the optimal hyper-parameters $s$ and $t$ through grid search, and a detailed discussion of the two hyper-parameters is given in Appendix B. For Cora, $s = 1.0$ and $t = 1\mathrm{e}{-4}$. For Citeseer, $s = 0.7$ and $t = 1\mathrm{e}{-5}$. For Pubmed, $s = 0.5$ and $t = 1\mathrm{e}{-7}$. To avoid overfitting, dropout (Srivastava et al., 2014) is applied. Meanwhile, we terminate training if the validation loss does not decrease for 100 consecutive epochs.
ANALYSIS ON DETACHING FEATURE TRANSFORMATION FROM CONVOLUTION
Since the number of parameters for the undetached version of GWNN is $O(n \times p \times q)$, we can hardly implement this version in the case of networks with a large number $n$ of nodes and a large number $p$ of input features. Here, we validate the effectiveness of detaching feature transformation from convolution on ChebyNet (introduced in Section 2.2), whose parameter complexity is $O(K \times p \times q)$. For ChebyNet with feature transformation detached from graph convolution, the number of parameters is reduced to $O(K + p \times q)$. Table 2 shows the performance and the number of parameters on three datasets. Here, the reported performance is the optimal performance varying the order $K = 2, 3, 4$. As demonstrated in Table 2, with fewer parameters, we improve the accuracy on Pubmed by a large margin. This is due to the fact that the label rate of Pubmed is only 0.003. By detaching feature transformation from convolution, the parameter complexity is significantly reduced, alleviating overfitting in semi-supervised learning and thus remarkably improving prediction accuracy. On Citeseer, there is a little drop in accuracy. One possible explanation is that reducing the number of parameters may restrict the modeling capacity to some degree.
PERFORMANCE OF GWNN
We now validate the effectiveness of GWNN with detaching technique on node classification. Experimental results are reported in Table 3. GWNN improves the classification accuracy on all the three datasets. In particular, replacing Fourier transform with wavelet transform, the proposed GWNN is comfortably ahead of Spectral CNN, achieving 10% improvement on Cora and Citeseer, and 5% improvement on Pubmed. The large improvement could be explained from two perspectives: (1) Convolution in Spectral CNN is non-local in vertex domain, and thus the range of feature diffusion is not restricted to neighboring nodes; (2) The scaling parameter s of wavelet transform is flexible to adjust the diffusion range to suit different applications and different networks. GWNN consistently outperforms ChebyNet, since it has enough degree of freedom to learn the convolution kernel, while ChebyNet is a kind of approximation with limited degree of freedom. Furthermore, our GWNN also performs better than GCN and MoNet, reflecting that it is promising to design appropriate bases for spectral methods to achieve good performance.
ANALYSIS ON SPARSITY
Besides the improvement on prediction accuracy, wavelet transform with localized and sparse transform matrix holds sparsity in both spatial domain and spectral domain. Here, we take Cora as an example to illustrate the sparsity of graph wavelet transform.
The sparsity of the transform matrix. There are 2,708 nodes in Cora. Thus, the wavelet transform matrix $\psi_s^{-1}$ and the Fourier transform matrix $U$ both belong to $\mathbb{R}^{2708 \times 2708}$. The first two rows in Table 4 demonstrate that $\psi_s^{-1}$ is much sparser than $U$. Sparse wavelets not only accelerate the computation, but also well capture the neighboring topology centered at each node.
The sparsity of the projected signal. As mentioned above, each node in Cora represents a document and has a sparse bag-of-words feature. The input feature matrix $X \in \mathbb{R}^{n \times p}$ is binary, with $X_{[i,j]} = 1$ when the $i$-th document contains the $j$-th word in the bag of words and $X_{[i,j]} = 0$ otherwise. Here, $X_{[:,j]}$ denotes the $j$-th column of $X$, and each column represents the feature vector of a word. Considering a specific signal $X_{[:,984]}$, we project the spatial signal into spectral domain and obtain its projected vector. Here, $p = \psi_s^{-1} X_{[:,984]}$ denotes the projected vector via wavelet transform, $q = U^\top X_{[:,984]}$ denotes the projected vector via Fourier transform, and $p, q \in \mathbb{R}^{2708}$. The last row in Table 4 lists the numbers of non-zero elements in $p$ and $q$. As shown in Table 4, with wavelet transform the projected signal is much sparser. Each feature, i.e. word in the bag of words, has a projected vector, and each element in this vector is associated with a spectral wavelet basis. Here, each basis is centered at a node, corresponding to a document. The value can be regarded as the relation between the word and the document. Thus, each value in $p$ can be interpreted as the relation between $Word_{984}$ and a document. In order to elaborate the interpretability of wavelet transform, we analyze the projected values of different features as follows.
Considering two features, $Word_{984}$ and $Word_{1177}$, we select the top-10 active bases, i.e., those with the 10 largest projected values for each feature. As illustrated in Figure 2, for clarity we magnify the local structure of the corresponding nodes and mark them with bold rims. The central network in each subgraph denotes the dataset Cora, each node represents a document, and 7 different colors represent the 7 classes. These nodes are clustered by OpenOrd (Martin et al., 2011) based on the adjacency matrix. Figure 2a shows the top-10 active bases of $Word_{984}$. In Cora, this word only appears 8 times, and all the documents containing $Word_{984}$ belong to the class "Case-Based". Consistently, all top-10 nodes activated by $Word_{984}$ are concentrated and belong to the class "Case-Based". In contrast, the frequencies of $Word_{1177}$ appearing in different classes are similar, indicating that $Word_{1177}$ is a universal word. In concordance with our expectation, the top-10 active bases of $Word_{1177}$ are discrete and belong to different classes in Figure 2b.
Figure 2: Top-10 active bases of two words in Cora. The central network of each subgraph represents the dataset Cora, which is split into 7 classes. Each node represents a document, and its color indicates its label. The nodes that represent the top-10 active bases are marked with bold rims. (a) $Word_{984}$ only appears in documents of the class "Case-Based" in Cora. Consistently, all its 10 active bases also belong to the class "Case-Based". (b) The frequencies of $Word_{1177}$ appearing in different classes are similar in Cora. As expected, the top-10 active bases of $Word_{1177}$ also belong to different classes.
Owing to the properties of graph wavelets, which describe the neighboring topology centered at each node, the projected values of wavelet transform can be explained as the correlation between features and nodes. These properties provide an interpretable domain transformation and ease the understanding of graph convolution.
CONCLUSION
Replacing graph Fourier transform with graph wavelet transform, we proposed GWNN. Graph wavelet transform has three desirable properties: (1) Graph wavelets are local and sparse;
(2) Graph wavelet transform is computationally efficient; (3) Convolution is localized in vertex domain. These advantages make the whole learning process interpretable and efficient. Moreover, to reduce the number of parameters and the dependence on huge training data, we detached the feature transformation from convolution. This practice makes GWNN applicable to large graphs, with remarkable performance improvement on graph-based semi-supervised learning.
APPENDIX A LOCALIZED GRAPH CONVOLUTION VIA WAVELET TRANSFORM
We use a diagonal matrix $\Theta$ to represent the learned kernel transformed by wavelets, $\psi_s^{-1} y$, and replace the Hadamard product with matrix multiplication. Then Equation (4) becomes:
$$x *_G y = \psi_s \Theta \psi_s^{-1} x. \tag{11}$$
We set $\psi_s = (\psi_{s1}, \psi_{s2}, \ldots, \psi_{sn})$, $\psi_s^{-1} = (\psi^*_{s1}, \psi^*_{s2}, \ldots, \psi^*_{sn})^\top$, and $\Theta = \mathrm{diag}(\{\theta_k\}_{k=1}^n)$. Equation (11) becomes:
$$x *_G y = \sum_{k=1}^{n} \theta_k\, \psi_{sk} (\psi^*_{sk})^\top x. \tag{12}$$
As proved by Hammond et al. (2011), both $\psi_s$ and $\psi_s^{-1}$ are local at small scale $s$. Figure 3 shows the locality of $\psi_{s1}$ and $\psi^*_{s1}$, i.e., the first column in $\psi_s$ and $\psi_s^{-1}$ when $s = 3$. Each column in $\psi_s$ and $\psi_s^{-1}$ describes the neighboring topology of the target node, which means that $\psi_s$ and $\psi_s^{-1}$ are local. The locality of $\psi_{sk}$ and $\psi^*_{sk}$ leads to the locality of the matrix resulting from the multiplication between the column vector $\psi_{sk}$ and the row vector $(\psi^*_{sk})^\top$. For convenience, we set $M_k = \psi_{sk}(\psi^*_{sk})^\top$, where $M_{k[i,j]} > 0$ only when $\psi_{sk}[i] > 0$ and $(\psi^*_{sk})[j] > 0$. In other words, if $M_{k[i,j]} > 0$, vertex $i$ and vertex $j$ can correlate with each other through vertex $k$.
For experiments on specific dataset, s and t are choosen via grid search using validation. Generally, a appropriate s is in the range of [0.5, 1], which can not only capture the graph structure but also guarantee the locality of convolution, and t is less insensive to dataset.
APPENDIX C PARAMETER COMPLEXITY OF NODE CLASSIFICATION
We show the parameter complexity of node classification in Table 5. The high parameter complexity O(n * p * q) of Spectral CNN makes it difficult to generalize to real world networks. ChebyNet approximates the convolution kernel via polynomial function of the diagonal matrix of Laplacian eigenvalues, reducing parameter complexity to O(K * p * q) with K being the order of polynomial function. GCN simplifies ChebyNet via setting K=1. We detach feature transformation from graph convolution to implement GWNN and Spectral CNN in our experiments, which can reduce parameter to O(n + p * q). In Cora and Citeseer, with smaller parameter complexity, GWNN achieves better performance than ChebyNet, reflecting that it is promising to implement convolution via graph wavelet transform. As Pubmed has a large number of nodes, the parameter complexity of GWNN is larger than ChebyNet. As future work, it is an interesting attempt to select wavelets associated with a subset of nodes, further reducing parameter complexity with potential loss of performance. With the stable recurrence relation T k (y) = 2yT k−1 (y) − T k−2 (y), we can generate the Chebyshev polynomials T k (y). Here T 0 = 1 and T 1 = y. For y sampled between -1 and 1, the trigonometric expression T k (y) = cos(k arccos(y)) is satisfied. It shows that T k (y) ∈ [−1, 1] when y ∈ [−1, 1]. Through the Chebyshev polynomials, an orthogonal basis for the Hilbert space of square integrable functions L 2 ([−1, 1], dy √ 1−y 2 ) is formed. For each h in this Hilbert space, we have a uniformly convergent Chebyshev series h(y) = 1 2 c 0 + ∞ k=1 c k T k (y), and the Chebyshev coefficients c k = 2 π 1 −1 T k (y)h(y) √ 1−y 2 dy = 2 π π 0 cos(kθ)h(cos(θ))dθ. A fixed scale s is assumed. To approximate g(sx) for x ∈ [0, λ max ], we can shift the domain through the transformation x = a(y + 1), where a = λmax 2 . T k (x) = T k ( x−a a ) denotes the shifted Chebyshev polynomials, with x−a a ∈ [−1, 1]. Then we have g(sx) = 1 2 c 0 + ∞ k=1 c k T k (x), and x ∈ [0, λ max ], c k = 2 π π 0 cos(kθ)g(s(a(cos(θ) + 1)))dθ. we truncate the Chebyshev expansion to m terms and achieve Polynomial approximation.
Here we give the example of the ψ −1 s and g(sx) = e −sx , the graph signal is f ∈ R n . Then we can give the fast approximation wavelets by ψ −1 s f = 1 2 c 0 f + m k=1 c k T k (L)f . The efficient computation of T k (L) determines the utility of this approach, where T k (L)f = 2 a (L − I)(T k−1 (L)f ) − T k−2 (L)f .
APPENDIX E ANALYSIS ON SPASITY OF SPECTRAL TRANSFORM AND LAPLACIAN MATRIX
The sparsity of the graph wavelets depends on the sparsity of the Laplacian matrix and the hyperparameter s, We show the sparsity of spectral transform matrix and Laplacian matrix in Table 6. The sparsity of Laplacian matrix is sparser than graph wavelets, and this property limits our method, i.e., the higher time complexity than some methods depending on Laplacian matrix and identity matrix, e.g., GCN. Specifically, both our method and GCN aim to improve Spectral CNN via designing localized graph convolution. GCN, as a simplified version of ChebyNet, leverages Laplacian matrix as weighted matrix and expresses the spectral graph convolution in spatial domain, acting as spatial-like method . However, our method resorts to using graph wavelets as a new set of bases, directly designing localized spectral graph convolution. GWNN offers a localized graph convolution via replacing graph Fourier transform with graph wavelet transform, finding good spectral basis with localization property and good interpretability. This distinguishes GWNN from ChebyNet and GCN, which express the graph convolution defined via graph Fourier transform in vertex domain.
Figure 1: Wavelets on an example graph at (a) small scale and (b) large scale.

Figure 3: Locality of (a) $\psi_{s1}$ and (b) $\psi^*_{s1}$. $M_k[i,j] > 0$ only when $\psi_{sk}[i] > 0$ and $(\psi^*_{sk})[j] > 0$; in other words, if $M_k[i,j] > 0$, vertex $i$ and vertex $j$ can correlate with each other through vertex $k$.

Figure 4: Correlation between the first node and other nodes at (a) small scale and (b) large scale. A nonzero value at a node represents the correlation between this node and the target node during convolution. The locality of $H$ suggests that graph convolution is localized in the vertex domain. Moreover, as the scaling parameter $s$ becomes larger, the range of feature diffusion becomes larger.

APPENDIX B INFLUENCE OF HYPER-PARAMETERS

Figure 5: Influence of $s$ and $t$ on Cora.
Hammond et al. (2011) proposed a method using Chebyshev polynomials to efficiently approximate $\psi_s$ and $\psi_s^{-1}$. The computational complexity is $O(m \times |E|)$, where $|E|$ is the number of edges and $m$ is the order of the Chebyshev polynomials. We give the details of the approximation proposed in Hammond et al. (2011).
Table 1: The statistics of datasets.

| Dataset  | Nodes  | Edges  | Classes | Features | Label Rate |
|----------|--------|--------|---------|----------|------------|
| Cora     | 2,708  | 5,429  | 7       | 1,433    | 0.052      |
| Citeseer | 3,327  | 4,732  | 6       | 3,703    | 0.036      |
| Pubmed   | 19,717 | 44,338 | 3       | 500      | 0.003      |
Table 2: Results of detaching feature transformation from convolution.

| Method               | Cora         | Citeseer      | Pubmed       |
|----------------------|--------------|---------------|--------------|
| Prediction Accuracy  |              |               |              |
| ChebyNet             | 81.2%        | 69.8%         | 74.4%        |
| Detaching-ChebyNet   | 81.6%        | 68.5%         | 78.6%        |
| Number of Parameters |              |               |              |
| ChebyNet             | 46,080 (K=2) | 178,032 (K=3) | 24,144 (K=3) |
| Detaching-ChebyNet   | 23,048 (K=4) | 59,348 (K=2)  | 8,054 (K=3)  |
Table 3: Results of node classification.

| Method       | Cora      | Citeseer | Pubmed    |
|--------------|-----------|----------|-----------|
| MLP          | 55.1%     | 46.5%    | 71.4%     |
| ManiReg      | 59.5%     | 60.1%    | 70.7%     |
| SemiEmb      | 59.0%     | 59.6%    | 71.7%     |
| LP           | 68.0%     | 45.3%    | 63.0%     |
| DeepWalk     | 67.2%     | 43.2%    | 65.3%     |
| ICA          | 75.1%     | 69.1%    | 73.9%     |
| Planetoid    | 75.7%     | 64.7%    | 77.2%     |
| Spectral CNN | 73.3%     | 58.9%    | 73.9%     |
| ChebyNet     | 81.2%     | 69.8%    | 74.4%     |
| GCN          | 81.5%     | 70.3%    | 79.0%     |
| MoNet        | 81.7±0.5% | -        | 78.8±0.3% |
| GWNN         | 82.8%     | 71.7%    | 79.1%     |
Table 4: Statistics of wavelet transform and Fourier transform on Cora.

| Statistical Property |                             | Wavelet Transform | Fourier Transform |
|----------------------|-----------------------------|-------------------|-------------------|
| Transform Matrix     | Density                     | 2.8%              | 99.1%             |
|                      | Number of Non-zero Elements | 205,774           | 7,274,383         |
| Projected Signal     | Density                     | 10.9%             | 100%              |
|                      | Number of Non-zero Elements | 297               | 2,708             |
4.7 ANALYSIS ON INTERPRETABILITY

Compared with graph convolutional networks using the Fourier transform, GWNN provides good interpretability. Here, we show the interpretability with specific examples in Cora.
Table 5: Parameter complexity of node classification.

| Method                   | Cora         | Citeseer      | Pubmed       |
|--------------------------|--------------|---------------|--------------|
| Spectral CNN             | 62,392,320   | 197,437,488   | 158,682,416  |
| Spectral CNN (detaching) | 28,456       | 65,379        | 47,482       |
| ChebyNet                 | 46,080 (K=2) | 178,032 (K=3) | 24,144 (K=3) |
| GCN                      | 23,040       | 59,344        | 8,048        |
| GWNN                     | 28,456       | 65,379        | 47,482       |
Table 6: Statistics of spectral transform and Laplacian matrix on Cora.

|                   | Density | Number of Non-zero Elements |
|-------------------|---------|-----------------------------|
| Wavelet transform | 2.8%    | 205,774                     |
| Fourier transform | 99.1%   | 7,274,383                   |
| Laplacian matrix  | 0.15%   | 10,858                      |
ACKNOWLEDGEMENTS

This work is funded by the National Natural Science Foundation of China under grant numbers 61425016, 61433014, and 91746301. Huawei Shen is also funded by K.C. Wong Education Foundation and the Youth Innovation Promotion Association of the Chinese Academy of Sciences.
REFERENCES

Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7(Nov):2399-2434, 2006.
Davide Boscaini, Jonathan Masci, Simone Melzi, Michael M Bronstein, Umberto Castellani, and Pierre Vandergheynst. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. In Computer Graphics Forum, volume 34, pp. 13-23. Wiley Online Library, 2015.
Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.
Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations (ICLR 2014), 2014.
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pp. 3844-3852, 2016.
Ming Ding, Jie Tang, and Jie Zhang. Semi-supervised learning on graphs with generative adversarial nets. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 913-922. ACM, 2018.
Claire Donnat, Marinka Zitnik, David Hallac, and Jure Leskovec. Learning structural node embeddings via diffusion wavelets. 2018.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249-256, 2010.
Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024-1034, 2017.
David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.
Renata Khasanova and Pascal Frossard. Graph-based isometry invariant representation learning. In International Conference on Machine Learning, pp. 1847-1856, 2017.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Ron Levie, Federico Monti, Xavier Bresson, and Michael M Bronstein. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. arXiv preprint arXiv:1705.07664, 2017.
Qing Lu and Lise Getoor. Link-based classification. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 496-503, 2003.
Shawn Martin, W Michael Brown, Richard Klavans, and Kevin W Boyack. Openord: an open-source toolbox for large graph layout. In Visualization and Data Analysis 2011, volume 7868, pp. 786806. International Society for Optics and Photonics, 2011.
Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proc. CVPR, volume 1, pp. 3, 2017.
Federico Monti, Oleksandr Shchur, Aleksandar Bojchevski, Or Litany, Stephan Günnemann, and Michael M Bronstein. Dual-primal graph convolutional networks. arXiv preprint arXiv:1806.00770, 2018.
Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701-710. ACM, 2014.
Nathanaël Perraudin, Johan Paratte, David Shuman, Lionel Martin, Vassilis Kalofolias, Pierre Vandergheynst, and David K Hammond. Gspbox: A toolbox for signal processing on graphs. arXiv preprint arXiv:1408.5781, 2014.
Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, 2008.
David I Shuman, Sunil K Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83-98, 2013.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.
Wim Sweldens. The lifting scheme: A construction of second generation wavelets. SIAM Journal on Mathematical Analysis, 29(2):511-546, 1998.
Nicolas Tremblay and Pierre Borgnat. Graph wavelets for multiscale community mining. IEEE Transactions on Signal Processing, 62(20):5227-5239, 2014.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639-655. Springer, 2012.
Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning, pp. 40-48, 2016.
Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 912-919, 2003.
252,815,987 | MARKUP-TO-IMAGE DIFFUSION MODELS WITH SCHEDULED SAMPLING | Building on recent advances in image generation, we present a fully data-driven approach to rendering markup into images. The approach is based on diffusion models, which parameterize the distribution of data using a sequence of denoising operations on top of a Gaussian noise distribution. We view the diffusion denoising process as a sequential decision making process, and show that it exhibits compounding errors similar to exposure bias issues in imitation learning problems. To mitigate these issues, we adapt the scheduled sampling algorithm to diffusion training. We conduct experiments on four markup datasets: mathematical formulas (LaTeX), table layouts (HTML), sheet music (LilyPond), and molecular images (SMILES). These experiments each verify the effectiveness of the diffusion process and the use of scheduled sampling to fix generation issues. These results also show that the markup-to-image task presents a useful controlled compositional setting for diagnosing and analyzing generative image models. | [
3480671
] | MARKUP-TO-IMAGE DIFFUSION MODELS WITH SCHEDULED SAMPLING
Yuntian Deng [email protected]
Harvard University
Noriyuki Kojima
Cornell University
Alexander M Rush [email protected]
Cornell University
MARKUP-TO-IMAGE DIFFUSION MODELS WITH SCHEDULED SAMPLING
Building on recent advances in image generation, we present a fully data-driven approach to rendering markup into images. The approach is based on diffusion models, which parameterize the distribution of data using a sequence of denoising operations on top of a Gaussian noise distribution. We view the diffusion denoising process as a sequential decision making process, and show that it exhibits compounding errors similar to exposure bias issues in imitation learning problems. To mitigate these issues, we adapt the scheduled sampling algorithm to diffusion training. We conduct experiments on four markup datasets: mathematical formulas (LaTeX), table layouts (HTML), sheet music (LilyPond), and molecular images (SMILES). These experiments each verify the effectiveness of the diffusion process and the use of scheduled sampling to fix generation issues. These results also show that the markup-to-image task presents a useful controlled compositional setting for diagnosing and analyzing generative image models.
INTRODUCTION
Recent years have witnessed rapid progress in text-to-image generation with the development and deployment of pretrained image/text encoders (Raffel et al., 2020) and powerful generative processes such as denoising diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020). Most existing image generation research focuses on generating realistic images conditioned on possibly ambiguous natural language (Saharia et al., 2022; Ramesh et al., 2022). In this work, we instead study the task of markup-to-image generation, where the presentational markup describes exactly one-to-one what the final image should look like.
While the task of markup-to-image generation can be accomplished with standard renderers, we argue that this task has several nice properties for acting as a benchmark for evaluating and analyzing text-to-image generation models. First, the deterministic nature of the problem enables exposing and analyzing generation issues in a setting with known ground truth. Second, the compositional nature of markup language is nontrivial for neural models to capture, making it a challenging benchmark for relational properties. Finally, developing a model-based markup renderer enables interesting applications such as markup compilers that are resilient to typos, or even enable mixing natural and structured commands (Glennie, 1960;Teitelman, 1972).
We build a collection of markup-to-image datasets shown in Figure 1: mathematical formulas, table layouts, sheet music, and molecules (Nienhuys & Nieuwenhuizen, 2003;Weininger, 1988). These datasets can be used to assess the ability of generation models to produce coherent outputs in a structured environment. We then experiment with utilizing diffusion models, which represent the current state-of-the-art in conditional generation of realistic images, on these tasks.
The markup-to-image challenge exposes a new class of generation issues. For example, when generating formulas, current models generate perfectly formed output, but often generate duplicate or misplaced symbols (see Figure 2). This type of error is similar to the widely studied exposure bias issue in autoregressive text generation (Ranzato et al., 2015). To help the model fix this class of errors during the generation process, we propose to adapt scheduled sampling (Bengio et al., 2015).

Figure 1: Markup-to-Image suite with generated images. Tasks include mathematical formulas (LaTeX), table layouts (HTML), sheet music (LilyPond), and molecular images (SMILES). Each example is conditioned on a markup (bottom) and produces a rendered image (top). Evaluation directly compares the rendered image with the ground truth image. Example markups:
- Math: \widetilde \gamma _ { \mathrm { h o p f } } \simeq \sum _ { n > 0 } \widetilde { G } _ { n } { \frac { ( -a )^{ n } } { 2^{ 2 n -1 } } }
- Table Layouts: ... <span style="font-weight:bold; text-align:center; font-size:150%;"> f j </span> </div> ...
- Sheet Music: \relative c'' { \time 4/4 d4 | r2 b4 b2 | ces4 b4~g2 f4 | a4 d8 | e4 g16 g2 f2 r4 | des2 d8 d8 f8 e4 d8 a16 b16 | d4 e2 d2. a8~g4 r16~e16. d2 f4 b4 e2 | f4. | b 16 a16 e4. r2~c4 r4 b4 d8 b2 | d4 | r8. e 8 e2 | r8~e2 }
- Molecules: COc1ccc(cc1N)C(=O)Nc2ccccc2
Specifically, we train diffusion models by using the model's own generations as input such that the model learns to correct its own mistakes.
Experiments on all four datasets show that the proposed scheduled sampling approach improves the generation quality compared to baselines, and generates images of surprisingly good quality for these tasks. Models produce clearly recognizable images for all domains, and often do very well at representing the semantics of the task. Still, there is more to be done to ensure faithful and consistent generation in these difficult deterministic settings. All models, data, and code are publicly available at https://github.com/da03/markup2im.
MOTIVATION: DIFFUSION MODELS FOR MARKUP-TO-IMAGE GENERATION
Task We define the task of markup-to-image generation as converting a source in a markup language describing an image to that target image. The input is a sequence of $M$ tokens $x = x_1, \cdots, x_M \in \mathcal{X}$, and the target is an image $y \in \mathcal{Y} \subseteq \mathbb{R}^{H \times W}$ of height $H$ and width $W$ (for simplicity we only consider grayscale images here). The task of rendering is defined as a mapping $f: \mathcal{X} \to \mathcal{Y}$. Our goal is to approximate the rendering function using a model parameterized by $\theta$, $f_\theta: \mathcal{X} \to \mathcal{Y}$, trained on supervised examples $\{(x_i, y_i) : i \in \{1, 2, \cdots, N\}\}$. To make the task tangible, we show several examples of $(x, y)$ pairs in Figure 1.
Challenge The markup-to-image task contains several challenging properties that are not present in other image generation benchmarks. While the images are much simpler, they behave more discretely than typical natural images. Layout mistakes by the model can lead to errors propagating throughout the image. For example, including an extra mathematical symbol can push everything one line further down. Some datasets also have long-term symbolic dependencies, which may be difficult for non-sequential models to handle, analogous to some of the challenges observed in non-autoregressive machine translation (Gu et al., 2018).

Generation with Diffusion Models Denoising diffusion probabilistic models (DDPM) (Ho et al., 2020) parameterize a probabilistic distribution $P(y_0|x)$ as a Markov chain $P(y_{t-1}|y_t)$ with an initial distribution $P(y_T)$. These models conditionally generate an image by sampling iteratively from the following distribution (we omit the dependence on $x$ for simplicity):
$$P(y_T) = \mathcal{N}(0, I), \qquad P(y_{t-1}|y_t) = \mathcal{N}\big(\mu_\theta(y_t, t);\; \sigma_t^2 I\big),$$

where $y_1, y_2, \cdots, y_T$ are latent variables of the same size as $y_0 \in \mathcal{Y}$, and $\mu_\theta(\cdot, t)$ is a neural network parameterizing a map $\mathcal{Y} \to \mathcal{Y}$.
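For concreteness, the following is a minimal sketch of the ancestral sampling loop defined by this Markov chain. The network `mu_theta`, the per-step standard deviations `sigma` (assumed indexed 1..T), and the conditioning argument are assumed interfaces, not the released implementation.

```python
import torch

@torch.no_grad()
def sample(mu_theta, sigma, T, shape, cond=None):
    """Ancestral sampling: y_T ~ N(0, I), then y_{t-1} ~ N(mu_theta(y_t, t), sigma_t^2 I)."""
    y = torch.randn(shape)                              # y_T ~ N(0, I)
    for t in range(T, 0, -1):
        mean = mu_theta(y, t, cond)                     # mu_theta(y_t, t)
        noise = torch.randn_like(y) if t > 1 else torch.zeros_like(y)
        y = mean + sigma[t] * noise                     # one step of P(y_{t-1} | y_t)
    return y                                            # y_0, the generated image
```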
Diffusion models have proven to be effective for generating realistic images (Saharia et al., 2022; Ramesh et al., 2022) and are more stable to train than alternative approaches for image generation such as Generative Adversarial Networks (Goodfellow et al., 2014). Diffusion models are surprisingly effective on the markup-to-image datasets as well. However, despite generating realistic images, they make major mistakes in the layout and positioning of the symbols. For an example of these mistakes, see Figure 2 (left).
We attribute these mistakes to error propagation in the sequential Markov chain. Small mistakes early in the sampling process can lead to intermediate $y_t$ states that have diverged significantly from the model's observed distribution during training. This issue has been widely studied in the inverse RL and autoregressive token generation literature, where it is referred to as exposure bias (Ross et al., 2011; Ranzato et al., 2015).
SCHEDULED SAMPLING FOR DIFFUSION MODELS
In this work, we adapt scheduled sampling, a simple and effective method based on DAgger (Ross et al., 2011;Bengio et al., 2015) from discrete autoregressive models to the training procedure of diffusion models. The core idea is to replace the standard training procedure with a biased sampling approach that mimics the test-time model inference based on its own predictions. Before describing this approach, we first give a short background on training diffusion models.
Background: Training Diffusion Models Diffusion models maximize an evidence lower bound (ELBO) on the above Markov chain. We introduce an auxiliary Markov chain $Q(y_1, \cdots, y_T | y_0) = \prod_{t=1}^{T} Q(y_t | y_{t-1})$ to compute the ELBO:

$$\log P(y_0) \ge \mathbb{E}_{y_1,\cdots,y_T \sim Q}\, \log \frac{P(y_0, \cdots, y_T)}{Q(y_1, \cdots, y_T)} = \mathbb{E}_Q \Big[ \log P(y_0|y_1) - \sum_{t=1}^{T} D_{\mathrm{KL}}\big(Q(y_{t-1}|y_t, y_0) \,\|\, P(y_{t-1}|y_t)\big) - D_{\mathrm{KL}}\big(Q(y_T|y_0) \,\|\, P(y_T)\big) \Big]. \quad (1)$$
Diffusion models fix $Q$ to a predefined Markov chain:

$$Q(y_t|y_{t-1}) = \mathcal{N}\big(\sqrt{1-\beta_t}\, y_{t-1},\; \beta_t I\big), \qquad Q(y_1, \cdots, y_T | y_0) = \prod_{t=1}^{T} Q(y_t|y_{t-1}),$$

where $\beta_1, \cdots, \beta_T$ is a sequence of predefined scalars controlling the variance schedule.
Since $Q$ is fixed, the last term $-\mathbb{E}_Q\, D_{\mathrm{KL}}\big(Q(y_T|y_0) \,\|\, P(y_T)\big)$ in Equation (1) is a constant, and we only need to optimize

$$\mathbb{E}_Q\Big[\log P(y_0|y_1) - \sum_{t=1}^{T} D_{\mathrm{KL}}\big(Q(y_{t-1}|y_t, y_0)\,\|\,P(y_{t-1}|y_t)\big)\Big] = \mathbb{E}_{Q(y_1|y_0)} \log P(y_0|y_1) - \sum_{t=1}^{T} \mathbb{E}_{Q(y_t|y_0)}\, D_{\mathrm{KL}}\big(Q(y_{t-1}|y_t, y_0)\,\|\,P(y_{t-1}|y_t)\big).$$
With large $T$, sampling from $Q(y_t|y_0)$ can be made efficient since $Q(y_t|y_0)$ has an analytical form:

$$Q(y_t|y_0) = \int Q(y_{1:t}|y_0)\, dy_{1:t-1} = \mathcal{N}\big(\sqrt{\bar{\alpha}_t}\, y_0,\; \sqrt{1-\bar{\alpha}_t}\, I\big),$$
where $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$ and $\alpha_t = 1 - \beta_t$. To simplify the $P(y_{t-1}|y_t)$ terms, Ho et al. (2020) parameterize this distribution by defining $\mu_\theta(y_t, t)$ through an auxiliary neural network $\epsilon_\theta(y_t, t)$:

$$\mu_\theta(y_t, t) = \frac{1}{\sqrt{\alpha_t}}\Big(y_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(y_t, t)\Big).$$
With $P$ in this form, applying Gaussian identities, reparameterization (Kingma & Welling, 2013), and further simplification leads to a final MSE training objective,

$$\min_\theta \sum_{t=1}^{T} \mathbb{E}_{y_t \sim Q(y_t|y_0)} \left\| \frac{y_t - \sqrt{\bar{\alpha}_t}\, y_0}{\sqrt{1-\bar{\alpha}_t}} - \epsilon_\theta(y_t, t) \right\|^2, \quad (2)$$

where $y_t$ is the sampled latent, $\bar{\alpha}_t$ is a constant derived from the variance schedule, $y_0$ is the training image, and $\epsilon_\theta$ is a neural network predicting the update to $y_t$ that leads to $y_{t-1}$.
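A minimal sketch of one training step for objective (2) is given below. It exploits the closed form of $Q(y_t|y_0)$: drawing $y_t = \sqrt{\bar{\alpha}_t}\, y_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$ makes $\epsilon$ the regression target for $\epsilon_\theta$. The `eps_theta` network interface and the layout of `alpha_bar` (a length-(T+1) tensor of cumulative products) are assumptions.

```python
import torch
import torch.nn.functional as F

def diffusion_loss(eps_theta, alpha_bar, y0, cond=None):
    """One training step for objective (2); alpha_bar is assumed indexed 0..T."""
    b = y0.shape[0]
    t = torch.randint(1, len(alpha_bar), (b,), device=y0.device)  # one timestep per example
    a = alpha_bar[t].view(b, 1, 1, 1)
    eps = torch.randn_like(y0)
    y_t = a.sqrt() * y0 + (1 - a).sqrt() * eps          # y_t ~ Q(y_t | y_0) in closed form
    # (y_t - sqrt(a) * y0) / sqrt(1 - a) equals eps, so eps is the regression target
    return F.mse_loss(eps_theta(y_t, t, cond), eps)
```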
Scheduled Sampling
Our main observation is that at training time, for each $t$, the objective function in Equation (2) takes the expectation with respect to $Q(y_t|y_0)$. At test time, the model instead uses the learned $P(y_t)$, leading to exposure bias issues like those in Figure 2.
Scheduled sampling (Bengio et al., 2015) suggests alternating sampling in training between the standard distribution and the model's own distribution, based on a schedule that increases model usage through training. Ideally, we would sample from

$$P(y_t) = \int P(y_T) \prod_{s=t+1}^{T} P(y_{s-1}|y_s)\, dy_{t+1:T}.$$
However, sampling from $P(y_t)$ is expensive since it requires rolling out the intermediate steps $y_T, \cdots, y_{t+1}$.^1
We propose an approximation instead. First we use $Q$ as an approximate posterior of an earlier step $t+m$, and then roll out a finite number of steps $m$ from $y_{t+m} \sim Q(y_{t+m}|y_0)$:

$$\tilde{P}(y_t|y_0) \triangleq \int Q(y_{t+m}|y_0) \prod_{s=t+1}^{t+m} P(y_{s-1}|y_s)\, dy_{t+1:t+m}.$$

Note that when $m = 0$, $\tilde{P}(y_t|y_0) = Q(y_t|y_0)$ and we recover normal diffusion training. When $m = T - t$, $\tilde{P}(y_t|y_0) = P(y_t)$ if $Q(y_T|y_0) = \mathcal{N}(0, I)$.
An example of $m = 1$ is shown in Figure 3. Substituting back, the objective becomes

$$\sum_{t=1}^{T} \mathbb{E}_{y_t \sim \tilde{P}(y_t|y_0)} \left\| \frac{y_t - \sqrt{\bar{\alpha}_t}\, y_0}{\sqrt{1-\bar{\alpha}_t}} - \epsilon_\theta(y_t, t) \right\|^2. \quad (3)$$

To compute its gradients, in theory we need to back-propagate through $\tilde{P}$ since it depends on $\theta$, but in practice, to save memory, we ignore $\partial\tilde{P}/\partial\theta$ and only consider the term inside the expectation.
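The following sketch shows the scheduled-sampling variant for $m = 1$ (Figure 3): instead of drawing $y_t$ from $Q(y_t|y_0)$, it draws $y_{t+1}$ from $Q(y_{t+1}|y_0)$ and takes one model step $P(y_t|y_{t+1})$ without tracking gradients, matching the simplification of ignoring $\partial\tilde{P}/\partial\theta$ described above. The `p_step` helper for a single reverse step is an assumed interface.

```python
import torch
import torch.nn.functional as F

def scheduled_sampling_loss(eps_theta, p_step, alpha_bar, y0, cond=None):
    """One training step for objective (3) with m = 1 rollouts."""
    b = y0.shape[0]
    t = torch.randint(1, len(alpha_bar) - 1, (b,), device=y0.device)
    a_next = alpha_bar[t + 1].view(b, 1, 1, 1)
    y_next = a_next.sqrt() * y0 + (1 - a_next).sqrt() * torch.randn_like(y0)
    with torch.no_grad():                               # ignore dP~/dtheta, as in the text
        y_t = p_step(y_next, t + 1, cond)               # one reverse step P(y_t | y_{t+1})
    a = alpha_bar[t].view(b, 1, 1, 1)
    target = (y_t - a.sqrt() * y0) / (1 - a).sqrt()     # the regression target in (3)
    return F.mse_loss(eps_theta(y_t, t, cond), target)
```

In training, this loss would be applied with some probability per batch, following the linear 0% to 50% schedule described in the experimental setup.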
MARKUP-TO-IMAGE SETUP
DATA
We adapt datasets from four domains to the task of markup-to-image. Table 1 provides a summary of dataset statistics.
Math Our first dataset, LaTeX-to-Math, is a large collection of real-world mathematical expressions written in LaTeX markups and their rendered images. We adopt IM2LATEX-100K introduced in Deng et al. (2016), which is collected from Physics papers on arXiv. IM2LATEX-100K is originally created for the visual markup decompiling task, but we adapt this dataset for the reverse task of markup-to-image. We pad all images to size 64 × 320 and remove images larger than that size. For faster evaluation, we form a smaller test set by subsampling 1,024 examples from the original test set in IM2LATEX-100K .
Table Layouts
The second dataset we use is based on the 100k synthesized HTML snippets and corresponding rendered webpage images from Deng et al. (2016). Each HTML snippet contains a nested <div> with a solid border, a random width, and a random float. The maximum depth of a nest is limited to two. We make no change to this dataset, except that we subsample 1,024 examples from the original test set to form a new test set.
Sheet Music
We generate a third dataset of sheet music. The markup language LilyPond is a file format for music engraving (Nienhuys & Nieuwenhuizen, 2003). LilyPond is a powerful language for writing music scores: it allows specifying notes using letters and note durations using numbers. One challenge in the LilyPond-to-Sheet music task is to deal with the possible "relative" mode, where the determination of each note relies on where the previous note is. We generate 35k synthetic LilyPond files and compile them into sheet music. We downsample images by a factor of two and then filter out images greater than 192 × 448.
Molecules The last dataset we use is from the chemistry domain. The input is a string of Simplified Molecular Input Line Entry System (SMILES) which specifies atoms and bonds of a molecule (Weininger, 1988). The output is a scheme of the input molecule. We use a solubility dataset by Wilkinson et al. (2022), containing 19,925 SMILES strings. The dataset is originally proposed to improve the accessibility of chemical structures for deep learning research. 2D molecule images are rendered from SMILES strings using the Python package RDKIT (Landrum et al., 2016). We partition the data into training, validation, and test sets. We downsample images by a factor of two.

Table 1: Markup-to-image datasets. Inputs to each dataset are described in Section 4.1 in detail. Input length is measured as the median number of characters in the validation set.
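A minimal sketch of rendering a SMILES string into a 2D molecule image with RDKit, as in the dataset construction described above; the exact image size and drawing options used for the released dataset are assumptions.

```python
from rdkit import Chem
from rdkit.Chem import Draw

smiles = "COc1ccc(cc1N)C(=O)Nc2ccccc2"       # example SMILES from Figure 1
mol = Chem.MolFromSmiles(smiles)             # parse atoms and bonds
img = Draw.MolToImage(mol, size=(128, 128))  # PIL image of the 2D scheme
img.save("molecule.png")
```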
EVALUATION
Popular metrics for conditional image generation such as Inception Score (Salimans et al., 2016) or Fréchet Inception Distance (Heusel et al., 2017) evaluate the fidelity and high-level semantics of generated images. In markup-to-image tasks, we instead emphasize the pixel-level similarity between generated and ground truth images because input markups describe exactly what the image should look like.
Pixel Metrics Our primary evaluation metric is Dynamic Time Warping (DTW) (Müller, 2007), which calculates the pixel-level similarities of images by treating them as column time-series. We preprocess images by binarizing them. We treat binarized images as time-series by viewing each image as a sequence of column feature vectors. We evaluate the similarity of generated and ground truth images by calculating the cost of alignment between the two time-series using DTW.^2 We use Euclidean distance as a feature matching metric. We allow minor perturbations of generated images by allowing up to 10% of upward/downward movement during feature matching.
Our secondary evaluation metric is the root mean squared error (RMSE) of pixels between generated and ground truth images. We convert all images to grayscale before calculating RMSE. While RMSE compares two images at the pixel level, one drawback is that RMSE heavily penalizes the score of symbolically equivalent images with minor perturbations.
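A minimal sketch of the two pixel metrics is shown below, using the tslearn DTW implementation referenced in the footnote. Treating the 10% vertical tolerance as a search over small up/down rolls of the generated image is our assumption about how the tolerance is realized; the released evaluation code may implement it differently.

```python
import numpy as np
from tslearn.metrics import dtw

def binarize(img, thresh=0.5):
    """Binarize a grayscale image assumed to take values in [0, 1]."""
    return (np.asarray(img, dtype=np.float32) > thresh).astype(np.float32)

def dtw_score(gen, gt, max_shift_frac=0.1):
    """DTW over column time-series, minimized over small vertical shifts."""
    gen_b, gt_b = binarize(gen), binarize(gt)
    max_shift = int(gt_b.shape[0] * max_shift_frac)
    costs = [dtw(np.roll(gen_b, s, axis=0).T, gt_b.T)   # each column is one time step
             for s in range(-max_shift, max_shift + 1)]
    return min(costs)

def rmse(gen, gt):
    gen = np.asarray(gen, dtype=np.float32)
    gt = np.asarray(gt, dtype=np.float32)
    return float(np.sqrt(np.mean((gen - gt) ** 2)))
```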
Complimentary Metrics Complementary to the above two main metrics, we report one learned and six classical image similarity metrics. We use CLIP score as a learned metric to calculate the similarity between the CLIP embeddings of generated and ground truth images. While CLIP score is robust to minor perturbations of images, it is unclear if CLIP embeddings capture the symbolic meanings of the images in the domains of rendered markups. For classical image similarity metrics,^3 we report SSIM (Wang et al., 2004), PSNR (Wang et al., 2004), UQI (Wang & Bovik, 2002), ERGAS (Wald, 2000), SCC (Zhou et al., 1998), and RASE (González-Audícana et al., 2004).
EXPERIMENTAL SETUP
Model For the Math, Table Layouts, and Sheet Music datasets, we use GPT-Neo-175M (Black et al., 2021; Gao et al., 2020) as the input encoder, which incorporates source code in its pre-training. For the Molecules dataset, we use ChemBERTa-77M-MLM from DeepChem (Ramsundar et al., 2019; Chithrananda et al., 2020) to encode the input. To parameterize the diffusion decoder, we experiment with three variants of U-Net (Ronneberger et al., 2015): 1) a standard U-Net conditioned on an average-pooled encoder embedding (denoted as "-Attn,-Pos"), 2) a U-Net alternating with cross-attention layers over the full resolution of the encoder embeddings (denoted as "+Attn,-Pos"), and 3) a U-Net with both cross-attention and additional position embeddings on the query marking row ids and column ids (denoted as "+Attn,+Pos") (Vaswani et al., 2017).
Hyperparameters We train all models for 100 epochs using the AdamW optimizer (Kingma & Ba, 2014; Loshchilov & Hutter, 2018). The learning rate is set to 1e−4 with a cosine decay schedule over 100 epochs and 500 warmup steps. We use a batch size of 16 for all models. For scheduled sampling, we use m = 1. We linearly increase the rate of applying scheduled sampling from 0% to 50% from the beginning of the training to the end.
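As an illustration of this setup, the sketch below wires together a HuggingFace text encoder, a cross-attention U-Net, and the AdamW optimizer with a cosine schedule and 500 warmup steps. The `EleutherAI/gpt-neo-125M` checkpoint stands in for the GPT-Neo-175M encoder mentioned above, and the U-Net configuration is illustrative rather than the released one.

```python
import torch
from transformers import AutoModel, get_cosine_schedule_with_warmup
from diffusers import UNet2DConditionModel

# Markup encoder (a stand-in checkpoint; the paper uses GPT-Neo-175M).
encoder = AutoModel.from_pretrained("EleutherAI/gpt-neo-125M")

# Diffusion decoder: a U-Net with cross-attention over encoder states.
unet = UNet2DConditionModel(
    sample_size=64,                                    # illustrative resolution
    in_channels=1, out_channels=1,                     # grayscale images
    cross_attention_dim=encoder.config.hidden_size,    # attend over markup states
)

params = list(unet.parameters()) + list(encoder.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-4)
num_training_steps = 100 * 6250                        # 100 epochs x assumed steps/epoch
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=num_training_steps)
```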
Implementation Details Our code is built on top of the HuggingFace diffusers library.^4 We use a single Nvidia A100 GPU to train on the Math, Table Layouts, and Molecules datasets; we use four A100s to train on the Sheet Music dataset. Training takes approximately 25 minutes per epoch for Math and Table Layouts, 30 minutes for Sheet Music, and 15 minutes for Molecules. Although one potential concern is that the scheduled sampling approach needs more compute due to the extra computation to get $\tilde{P}$ for m > 0, in practice we find that the training speed is not much affected: on the Math dataset, scheduled sampling takes 24 minutes 59 seconds per training epoch, whereas without scheduled sampling it takes 24 minutes 13 seconds per epoch.

RESULTS

Table 2 summarizes the results of markup-to-image tasks across four domains. We use DTW and RMSE as our primary evaluation metrics to draw our experimental conclusions. First, we train and evaluate the variations of diffusion models on the Math dataset. Comparing the model without attention ("-Attn,-Pos") to the model with attention ("+Attn,-Pos"), using attention results in a significant improvement, reducing DTW by 25% and RMSE by 12%. Therefore, we always use attention for experiments on the other datasets. We observe that additionally using positional embeddings ("+Attn,+Pos") is helpful for the Math dataset. The proposed scheduled sampling approach further improves the model with attention and positional embeddings.
We observe a similar trend on the other three datasets: Table Layouts, Sheet Music, and Molecules. Using positional embeddings improves the performance measured by DTW and RMSE (except for the Molecules dataset). Training models with the proposed scheduled sampling achieves the best results consistently across all the datasets. As noted in Figure 2, we can qualitatively observe that scheduled sampling, which exposes the model to its own generations during training time, brings the benefit of the model being capable of correcting its own mistakes at inference time.

Absolute Evaluation Our evaluation metrics enable relative comparisons between models in the markup-to-image task. However, it remains unclear how capable the models are in an absolute sense: whether the models are generating near-perfect images or whether even the best model is missing many symbols. We investigate this question by removing an increasing number of symbols from the ground truth markups and evaluating the perturbed images against the ground truth images. Our results in Figure 4 highlight that our best model performs roughly on par with ground truth images that have three symbols removed on the Math dataset. On the other hand, our best model performs better than ground truth images with only a single symbol removed on the Table Layouts dataset and with two symbols removed on the Molecules dataset, indicating that our best model adapts to these datasets well. Results for music are less strong.
Qualitative Analysis
We perform a qualitative analysis on the results of our best models, and we observe that diffusion models show different levels of adaptation to the four datasets. First, we observe that diffusion models fully learn the Table Layouts dataset, where the majority of generated images are equivalent to the ground truth images to the human eye. Second, diffusion models perform moderately well on the Math and Molecules datasets: diffusion models generate images similar to the ground truth images most of the time on the Math dataset, but less frequently so on the Molecules dataset. Common failure modes such as dropping a few symbols, adding extra symbols, and repeating symbols are illustrated in Figure 5.
On the Sheet Music dataset, diffusion models struggle, generating images that deviate significantly from the ground truth images. Despite this, we observe that diffusion models manage to generate the first few symbols correctly in most cases. The intrinsic difficulty of the Sheet Music dataset is the long chain of dependencies among symbols from left to right, and the limited number of denoising steps might be a bottleneck to generating images containing this long chain.
We provide additional qualitative results for all four datasets in Appendix A.
RELATED WORK
Text-to-Image Generation Text-to-image generation has been broadly studied in the machine learning literature, and several model families have been adopted to approach the task. Generative Adversarial Networks (Goodfellow et al., 2014) are one of the popular choices to generate realistic images from text prompts. Starting from the pioneering work of text-to-image generation in the bird and flower domains by Reed et al. (2016a), researchers have developed methods to improve the quality of text-to-image generation via progressive refinement (Zhang et al., 2017; Zhu et al., 2019; Tao et al., 2020), cross-modal attention mechanisms (Zhang et al., 2021), as well as spatial and semantic modeling of objects (Reed et al., 2016b; Hong et al., 2018; Hinz et al., 2019). Another common method is based on VQ-VAE (Van Den Oord et al., 2017). In this approach, text-to-image generation is treated as a sequence-to-sequence task of predicting discretized image tokens autoregressively from text prompts (Ding et al., 2021; Gafni et al., 2022; Gu et al., 2022; Aghajanyan et al., 2022; Yu et al., 2022).
Diffusion models (Sohl-Dickstein et al., 2015) represent the most recent progress in text-to-image generation. Training diffusion models is appealingly simple: it often reduces to minimizing a mean-squared error for estimating the noise added to images (Ho et al., 2020). Diffusion models are free from the training instability and model collapses of adversarial training (Brock et al., 2018), and yet manage to outperform Generative Adversarial Networks on text-to-image generation in the MSCOCO domain. Diffusion models trained on large-scale image-text pairs demonstrate impressive performance in generating creative natural or artistic images (Ramesh et al., 2022; Saharia et al., 2022).
So far, the demonstration of successful text-to-image generation models has centered around scenarios with flexible interpretations of text prompts (e.g., artistic image generation). When there is an exact interpretation of the given text prompt (e.g., markup-to-image generation), text-to-image generation models are understudied (with a few exceptions, such as Liu et al. (2021), which studied controlled text-to-image generation in the CLEVR (Johnson et al., 2017) and iGibson (Shen et al., 2021) domains). Prior work reports that state-of-the-art diffusion models face challenges in the exact-interpretation scenario. For example, Ramesh et al. (2022) report that unCLIP struggles to generate coherent text in images. In this work, we propose a controlled compositional testbed for the exact-interpretation scenario across four domains. Our study brings potential opportunities for evaluating the ability of generation models to produce coherent outputs in a structured environment, and highlights open challenges of deploying diffusion models in the exact-interpretation scenario.
Scheduled Sampling In sequential prediction tasks, the mismatch between teacher forcing training and inference is known as an exposure bias or covariate shift problem (Ranzato et al., 2015; Spencer et al., 2021). During teacher forcing training, a model's next-step prediction is based on previous steps from the ground truth sequence. During inference, the model performs the next step based on its own previous predictions. Training algorithms such as DAgger (Ross et al., 2011) or scheduled sampling (Bengio et al., 2015) have been developed to mitigate this mismatch problem, primarily by forcing the model to use its own previous predictions during training with some probability. In this work, we observe a problem similar to exposure bias in diffusion models, and we demonstrate that training diffusion models using scheduled sampling improves their performance on markup-to-image generation.
CONCLUSION
We propose the task of markup-to-image generation which differs from natural image generation tasks in that there are ground truth images and deterministic compositionality. We adapt four instances of this task and show that they can be used to analyze state-of-the-art diffusion-based image generation models. Motivated by the observation that a diffusion model cannot correct its own mistakes at inference time, we propose to use scheduled sampling to expose it to its own generations during training. Experiments confirm the effectiveness of the proposed approach. The generated images are surprisingly good, but model generations are not yet robust enough for perfect rendering. We see rendering markup as an interesting benchmark and potential application of pretrained models plus diffusion.
ACKNOWLEDGMENTS
YD is supported by an Nvidia Fellowship. NK is supported by a Masason Fellowship. AR is supported by NSF CAREER 2037519, NSF 1704834, and a Sloan Fellowship. Thanks to Bing Yan for preparing molecule data and Ge Gao for editing drafts of this paper. We would also like to thank Harvard University FAS Research Computing for providing computational resources.
A QUALITATIVE RESULTS
We provide additional qualitative results from models trained with or without scheduled sampling on the four datasets in Figures 6 to 9.
Figure 2: The generation process of diffusion (left) versus diffusion + scheduled sampling (right). The numbers on the y-axis are the number of diffusion steps (T − t). The ground truth LaTeX is \gamma_{n}^{\mu}=\alpha_{n}^{\mu}+\tilde{\alpha}_{n}^{\mu},~~~n\neq0.

Figure 3: Diffusion samples $y_1$ from $Q$. Scheduled sampling instead samples an upstream latent variable $y_2$ and then $y_1$ based on the model's Markov chain $P(y_1|y_2)$.

Figure 4: Perturbation results.

Figure 5: Qualitative results showing typical mistakes. (Top row) Model-generated images across datasets. (Bottom row) Ground truth images.

Figure 6: Qualitative results in the Math domain. Left column: ground truth images. Middle column: generations from +Attn,+Pos. Right column: generations from Scheduled Sampling. The top two rows are random selections, and the bottom two rows are examples of good generations.

Figure 7: Qualitative results in the Table Layouts domain. Left column: ground truth images. Middle column: generations from +Attn,+Pos. Right column: generations from Scheduled Sampling. The top two rows are random selections, and the bottom two rows are examples of good generations.

Figure 8: Qualitative results in the Sheet Music domain. Left column: ground truth images. Middle column: generations from +Attn,+Pos. Right column: generations from Scheduled Sampling. The top two rows are random selections, and the bottom two rows are examples of good generations.

Figure 9: Qualitative results in the Molecules domain. Left column: ground truth images. Middle column: generations from +Attn,+Pos. Right column: generations from Scheduled Sampling. The top two rows are random selections, and the bottom two rows are examples of good generations.
Table 2: Evaluation results of markup-to-image generation across four datasets. (+/-)Attn indicates a model with or without attention, and (+/-)Pos is a model with or without positional embeddings. Scheduled Sampling is applied to training of models with attention and positional embeddings. DTW and RMSE are the pixel metrics; the remaining columns are the complimentary metrics.

| Approach           | DTW↓  | RMSE↓ | CLIP↑ | SSIM↑ | PSNR↑ | UQI↑ | ERGAS↓  | SCC↑ | RASE↓  |
|--------------------|-------|-------|-------|-------|-------|------|---------|------|--------|
| Math               |       |       |       |       |       |      |         |      |        |
| -Attn,-Pos         | 27.73 | 44.72 | 0.95  | 0.70  | 15.35 | 0.97 | 2916.76 | 0.02 | 729.19 |
| +Attn,-Pos         | 20.81 | 39.53 | 0.96  | 0.76  | 16.62 | 0.98 | 2448.35 | 0.06 | 612.09 |
| +Attn,+Pos         | 19.45 | 37.81 | 0.97  | 0.78  | 17.12 | 0.98 | 2314.31 | 0.07 | 578.58 |
| Scheduled Sampling | 18.81 | 37.19 | 0.97  | 0.79  | 17.25 | 0.98 | 2247.41 | 0.07 | 561.85 |
| Table Layouts      |       |       |       |       |       |      |         |      |        |
| +Attn,-Pos         | 6.09  | 22.89 | 0.95  | 0.92  | 38.55 | 0.98 | 2497.51 | 0.44 | 624.38 |
| +Attn,+Pos         | 5.91  | 22.17 | 0.95  | 0.93  | 38.91 | 0.98 | 2409.28 | 0.44 | 602.32 |
| Scheduled Sampling | 5.64  | 21.11 | 0.95  | 0.93  | 40.20 | 0.98 | 2285.83 | 0.45 | 571.46 |
| Sheet Music        |       |       |       |       |       |      |         |      |        |
| +Attn,-Pos         | 81.21 | 45.23 | 0.97  | 0.67  | 15.10 | 0.97 | 3056.72 | 0.02 | 764.18 |
| +Attn,+Pos         | 80.63 | 45.16 | 0.97  | 0.68  | 15.11 | 0.97 | 3032.40 | 0.02 | 758.10 |
| Scheduled Sampling | 79.76 | 44.70 | 0.97  | 0.68  | 15.20 | 0.97 | 2978.36 | 0.02 | 744.59 |
| Molecules          |       |       |       |       |       |      |         |      |        |
| +Attn,-Pos         | 24.87 | 38.12 | 0.97  | 0.61  | 16.66 | 0.98 | 2482.08 | 0.00 | 620.52 |
| +Attn,+Pos         | 24.95 | 38.15 | 0.96  | 0.61  | 16.64 | 0.98 | 2455.18 | 0.00 | 613.79 |
| Scheduled Sampling | 24.80 | 37.92 | 0.96  | 0.61  | 16.69 | 0.98 | 2467.16 | 0.00 | 616.79 |
^1 There is no analytical solution, since the transition probabilities in this Markov chain are parameterized by a neural network $\mu_\theta$.
^2 We use the DTW implementation by https://tslearn.readthedocs.io/en/stable/user_guide/dtw.html.
^3 We use the similarity metric implementation by https://github.com/andrewekhalel/sewar.
^4 https://github.com/huggingface/diffusers
REFERENCES

Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, et al. Cm3: A causal masked multimodal model of the internet. arXiv preprint arXiv:2201.07520, 2022.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. Advances in Neural Information Processing Systems, 28, 2015.
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo: Large scale autoregressive language modeling with Mesh-Tensorflow, March 2021. URL https://doi.org/10.5281/zenodo.5297715.
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
Seyone Chithrananda, Gabriel Grand, and Bharath Ramsundar. Chemberta: Large-scale self-supervised pretraining for molecular property prediction. arXiv preprint arXiv:2010.09885, 2020.
Yuntian Deng, Anssi Kanervisto, and Alexander M Rush. What you get is what you see: A visual markup decompiler. arXiv preprint arXiv:1609.04938, 2016.
Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021.
Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. Advances in Neural Information Processing Systems, 34:19822-19835, 2021.
Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman. Make-a-scene: Scene-based text-to-image generation with human priors. arXiv preprint arXiv:2203.13131, 2022.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
AE Glennie. On the syntax machine and the construction of a universal compiler. Technical report, Carnegie Inst of Tech Pittsburgh PA Computation Center, 1960.
María González-Audícana, José Luis Saleta, Raquel García Catalán, and Rafael García. Fusion of multispectral and panchromatic images using improved ihs and pca mergers based on wavelet decomposition. IEEE Transactions on Geoscience and Remote Sensing, 42(6):1291-1299, 2004.
Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. Non-autoregressive neural machine translation. In International Conference on Learning Representations, 2018.
Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo. Vector quantized diffusion model for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10696-10706, 2022.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
Tobias Hinz, Stefan Heinrich, and Stefan Wermter. Generating multiple objects at spatially distinct locations. arXiv preprint arXiv:1901.00686, 2019.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.
Seunghoon Hong, Dingdong Yang, Jongwook Choi, and Honglak Lee. Inferring semantic layout for hierarchical text-to-image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7986-7994, 2018.
Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2901-2910, 2017.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. Text-to-image generation grounded by fine-grained user attention. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 237-246, 2021.
Greg Landrum et al. Rdkit: Open-source cheminformatics software, 2016. URL https://github.com/rdkit/rdkit/.
Nan Liu, Shuang Li, Yilun Du, Josh Tenenbaum, and Antonio Torralba. Learning to compose visual relations. Advances in Neural Information Processing Systems, 34:23166-23178, 2021.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2018.
Meinard Müller. Dynamic time warping. Information Retrieval for Music and Motion, pp. 69-84, 2007.
Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.
Han-Wen Nienhuys and Jan Nieuwenhuizen. Lilypond, a system for automated music engraving. In Proceedings of the XIV Colloquium on Musical Informatics (XIV CIM 2003), volume 1, pp. 167-171. Citeseer, 2003.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748-8763. PMLR, 2021.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International Conference on Machine Learning, pp. 8821-8831. PMLR, 2021.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
Bharath Ramsundar, Peter Eastman, Patrick Walters, Vijay Pande, Karl Leswing, and Zhenqin Wu. Deep Learning for the Life Sciences. O'Reilly Media, 2019.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning, pp. 1060-1069. PMLR, 2016a.
Learning what and where to draw. Zeynep Scott E Reed, Santosh Akata, Samuel Mohan, Bernt Tenka, Honglak Schiele, Lee, Advances in neural information processing systems. 29Scott E Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. Advances in neural information processing systems, 29, 2016b.
U-net: Convolutional networks for biomedical image segmentation. Olaf Ronneberger, Philipp Fischer, Thomas Brox, International Conference on Medical image computing and computerassisted intervention. SpringerOlaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedi- cal image segmentation. In International Conference on Medical image computing and computer- assisted intervention, pp. 234-241. Springer, 2015.
A reduction of imitation learning and structured prediction to no-regret online learning. Stéphane Ross, Geoffrey Gordon, Drew Bagnell, Proceedings of the fourteenth international conference on artificial intelligence and statistics. the fourteenth international conference on artificial intelligence and statisticsJMLR Workshop and Conference ProceedingsStéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and struc- tured prediction to no-regret online learning. In Proceedings of the fourteenth international con- ference on artificial intelligence and statistics, pp. 627-635. JMLR Workshop and Conference Proceedings, 2011.
Photorealistic text-to-image diffusion models with deep language understanding. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, ; S Sara Mahdavi, Rapha Gontijo Lopes, arXiv:2205.11487Burcu Karagol Ayan. arXiv preprintSeyed Kamyar Seyed GhasemipourChitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kam- yar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.
Improved techniques for training gans. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, Advances in neural information processing systems. 29Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. Advances in neural information processing systems, 29, 2016.
Lyne Tchapmi, et al. igibson 1.0: a simulation environment for interactive tasks in large realistic scenes. Bokui Shen, Fei Xia, Chengshu Li, Roberto Martín-Martín, Linxi Fan, Guanzhi Wang, Claudia Pérez-D'arpino, Shyamal Buch, Sanjana Srivastava, 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEEBokui Shen, Fei Xia, Chengshu Li, Roberto Martín-Martín, Linxi Fan, Guanzhi Wang, Claudia Pérez-D'Arpino, Shyamal Buch, Sanjana Srivastava, Lyne Tchapmi, et al. igibson 1.0: a simu- lation environment for interactive tasks in large realistic scenes. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7520-7527. IEEE, 2021.
Deep unsupervised learning using nonequilibrium thermodynamics. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, Surya Ganguli, International Conference on Machine Learning. PMLRJascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learn- ing, pp. 2256-2265. PMLR, 2015.
Jonathan Spencer, Sanjiban Choudhury, Arun Venkatraman, Brian Ziebart, J Andrew Bagnell, arXiv:2102.02872Feedback in imitation learning: The three regimes of covariate shift. arXiv preprintJonathan Spencer, Sanjiban Choudhury, Arun Venkatraman, Brian Ziebart, and J Andrew Bag- nell. Feedback in imitation learning: The three regimes of covariate shift. arXiv preprint arXiv:2102.02872, 2021.
Ming Tao, Hao Tang, Songsong Wu, Nicu Sebe, Xiao-Yuan Jing, Fei Wu, Bingkun Bao, arXiv:2008.05865Deep fusion generative adversarial networks for text-to-image synthesis. DfganarXiv preprintMing Tao, Hao Tang, Songsong Wu, Nicu Sebe, Xiao-Yuan Jing, Fei Wu, and Bingkun Bao. Df- gan: Deep fusion generative adversarial networks for text-to-image synthesis. arXiv preprint arXiv:2008.05865, 2020.
Automated programmering: the programmer's assistant. Warren Teitelman, Proceedings of the. thefall joint computer conference, part IIWarren Teitelman. Automated programmering: the programmer's assistant. In Proceedings of the December 5-7, 1972, fall joint computer conference, part II, pp. 917-921, 1972.
Neural discrete representation learning. Advances in neural information processing systems. Aaron Van Den, Oriol Oord, Vinyals, 30Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017.
Attention is all you need. Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, 30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural informa- tion processing systems, 30, 2017.
Quality of high resolution synthesised images: Is there a simple criterion? In Third conference" Fusion of Earth data: merging point measurements, raster maps and remotely sensed images. Lucien Wald, SEE/URISCALucien Wald. Quality of high resolution synthesised images: Is there a simple criterion? In Third conference" Fusion of Earth data: merging point measurements, raster maps and remotely sensed images", pp. 99-103. SEE/URISCA, 2000.
A universal image quality index. Zhou Wang, Alan C Bovik, IEEE signal processing letters. 93Zhou Wang and Alan C Bovik. A universal image quality index. IEEE signal processing letters, 9 (3):81-84, 2002.
Image quality assessment: from error visibility to structural similarity. Zhou Wang, Alan C Bovik, R Hamid, Eero P Sheikh, Simoncelli, IEEE transactions on image processing. 134Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600- 612, 2004.
Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. David Weininger, Journal of chemical information and computer sciences. 281David Weininger. Smiles, a chemical language and information system. 1. introduction to method- ology and encoding rules. Journal of chemical information and computer sciences, 28(1):31-36, 1988.
Images of chemical structures as molecular representations for deep learning. Uriel Matthew R Wilkinson, Martinez-Hernandez, C Chick, Bernardo Wilson, Castro-Dominguez, Journal of Materials Research. 3714Matthew R Wilkinson, Uriel Martinez-Hernandez, Chick C Wilson, and Bernardo Castro- Dominguez. Images of chemical structures as molecular representations for deep learning. Jour- nal of Materials Research, 37(14):2293-2303, 2022.
Attngan: Fine-grained text to image generation with attentional generative adversarial networks. Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, Xiaodong He, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionTao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. Attngan: Fine-grained text to image generation with attentional generative adversarial net- works. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1316-1324, 2018.
Scaling autoregressive models for contentrich text-to-image generation. Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, arXiv:2206.10789arXiv preprintJiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content- rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2022.
Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, Dimitris N Metaxas, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionHan Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dim- itris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative ad- versarial networks. In Proceedings of the IEEE international conference on computer vision, pp. 5907-5915, 2017.
Stackgan++: Realistic image synthesis with stacked generative adversarial networks. Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, Dimitris N Metaxas, IEEE transactions on pattern analysis and machine intelligence. 41Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dim- itris N Metaxas. Stackgan++: Realistic image synthesis with stacked generative adversarial net- works. IEEE transactions on pattern analysis and machine intelligence, 41(8):1947-1962, 2018.
Cross-modal contrastive learning for text-to-image generation. Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionHan Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. Cross-modal contrastive learning for text-to-image generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 833-842, 2021.
A wavelet transform method to merge landsat tm and spot panchromatic data. Jie Zhou, L Daniel, J A Civco, Silander, International journal of remote sensing. 194Jie Zhou, Daniel L Civco, and JA Silander. A wavelet transform method to merge landsat tm and spot panchromatic data. International journal of remote sensing, 19(4):743-757, 1998.
Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis. Minfeng Zhu, Pingbo Pan, Wei Chen, Yi Yang, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionMinfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. Dm-gan: Dynamic memory generative ad- versarial networks for text-to-image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5802-5810, 2019. |
209,314,627 | PITFALLS OF IN-DOMAIN UNCERTAINTY ESTIMATION AND ENSEMBLING IN DEEP LEARNING | Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight in this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of only few independently trained networks in terms of test performance.Published as a conference paper at ICLR 2020 calibrated log-likelihood avoids most of the stated pitfalls and generally is a reasonable metric for in-domain uncertainty estimation task.Published as a conference paper at ICLR 2020 While ensembling techniques tend to have better temperature than single models, the default choice of T = 1 is still suboptimal. Comparing the LL with suboptimal temperatures-that is often the case in practice-can potentially produce an arbitrary ranking of different methods.Comparison of the log-likelihood should only be performed at the optimal temperature.Empirically, we demonstrate that the overall ordering of methods and also the best ensembling method according to the LL can vary depending on temperature T . While this applies to most Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of dnns. son. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407, 2018.Durk P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. . Accuracy-rejection curves (arcs) for comparing classification methods with a reject option. In Machine Learning in Systems Biology, pp. 65-81, 2009.Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In | [
3651422
] | PITFALLS OF IN-DOMAIN UNCERTAINTY ESTIMATION AND ENSEMBLING IN DEEP LEARNING
Arsenii Ashukha [email protected]
Samsung AI Center Moscow, HSE ‡

Alexander Lyzhov [email protected]
Samsung AI Center Moscow, Skoltech † , HSE §

Dmitry Molchanov
Samsung AI Center Moscow, HSE ‡

Dmitry Vetrov [email protected]
Samsung AI Center Moscow, HSE ‡
PITFALLS OF IN-DOMAIN UNCERTAINTY ESTIMATION AND ENSEMBLING IN DEEP LEARNING
Published as a conference paper at ICLR 2020
Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight in this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of only few independently trained networks in terms of test performance.
INTRODUCTION
Deep neural networks (DNNs) have become one of the most popular families of machine learning models. The predictive performance of DNNs for classification is often measured in terms of accuracy. However, DNNs have been shown to yield inaccurate and unreliable probability estimates, or predictive uncertainty (Guo et al., 2017). This has brought considerable attention to the problem of uncertainty estimation with deep neural networks.
There are many faces to uncertainty estimation. Different desirable uncertainty estimation properties of a model require different settings and metrics to capture them. Out-of-domain uncertainty of the model is measured on the data that does not follow the same distribution as the training dataset (out-of-domain data). Out-of-domain data can include images corrupted with rotations or blurring, adversarial attacks (Szegedy et al., 2013) or data points from a completely different dataset. The model is expected to be resistant to data corruptions and to be more uncertain on out-of-domain data than on in-domain data. On the contrary, in-domain uncertainty of the model is measured on data taken from the training data distribution, i.e. data from the same domain. In this setting the model is expected to produce reliable probability estimates, e.g. the model shouldn't be too overconfident in its wrong predictions.
Pitfalls of metrics We show that many common metrics of in-domain uncertainty estimation (e.g. log-likelihood, Brier score, calibration metrics, etc.) are either not comparable across different models or fail to provide a reliable ranking. We address some of the stated pitfalls and point out more reasonable evaluation schemes. For instance, although temperature scaling is not a standard for ensembling techniques, it is a must for a fair evaluation. With this in mind, the calibrated log-likelihood avoids most of the stated pitfalls and generally is a reasonable metric for the in-domain uncertainty estimation task.

Pitfalls of ensembles Equipped with the proposed evaluation framework, we revisit the evaluation of ensembles of DNNs, one of the major tools for uncertainty estimation. We introduce the deep ensemble equivalent (DEE) score that measures the number of independently trained models that, when ensembled, achieve the same performance as the ensembling technique of interest. The DEE score allows us to compare ensembling techniques across different datasets and architectures using a unified scale. Our study shows that most of the popular ensembling techniques require averaging predictions across dozens of samples (members of an ensemble), yet are essentially equivalent to an ensemble of only a few independently trained models.
Missing part of ensembling In our study, test-time data augmentation (TTA) turned out to be a surprisingly strong baseline for uncertainty estimation and a simple way to improve ensembles. Despite being a popular technique in large-scale classification, TTA seems to be overlooked in the community of uncertainty estimation and ensembling.
SCOPE OF THE PAPER
We use standard benchmark problems of image classification which comprise a common setting in research on learning ensembles of neural networks. There are other relevant settings where the correctness of probability estimates can be a priority, and ensembling techniques are used to improve it. These settings include, but are not limited to, regression, language modeling (Gal, 2016), image segmentation (Gustafsson et al., 2019), active learning (Settles, 2012) and reinforcement learning (Buckman et al., 2018;Chua et al., 2018).
We focus on in-domain uncertainty, as opposed to out-of-domain uncertainty. Out-of-domain uncertainty includes detection of inputs that come from a completely different domain or have been corrupted by noise or adversarial attacks. This setting has been thoroughly explored by (Ovadia et al., 2019).
We only consider methods that are trained on clean data with simple data augmentation. Some other methods use out-of-domain data (Malinin & Gales, 2018) or more elaborate data augmentation, e.g. mixup (Zhang et al., 2017) or adversarial training (Lakshminarayanan et al., 2017) to improve accuracy, robustness and uncertainty.
We use conventional training procedures. We use stochastic gradient descent (SGD) and batch normalization (Ioffe & Szegedy, 2015), both being de-facto standards in modern deep learning. We refrain from using more elaborate optimization techniques, including super-convergence (Smith & Topin, 2019) and stochastic weight averaging (SWA) (Izmailov et al., 2018). These techniques can be used to drastically accelerate training and to improve the predictive performance. Thus, we do not comment on the training time of different ensembling methods, since the use of these and other more efficient training techniques would render such a comparison obsolete.
A number of related works study ways of approximating and accelerating prediction in ensembles. The distillation mechanism allows to approximate the prediction of an ensemble by a single neural network (Hinton et al., 2015;Balan et al., 2015;Tran et al., 2020), whereas fast dropout (Wang & Manning, 2013) and deterministic variational inference (Wu et al., 2018) allow to approximate the predictive distribution of specific stochastic computation graphs. We measure the raw power of ensembling techniques without these approximations.
All of the aforementioned alternative settings are orthogonal to the scope of this paper and are promising points of interest for further research.
PITFALLS OF IN-DOMAIN UNCERTAINTY ESTIMATION
No single metric measures all the desirable properties of uncertainty estimates obtained by a model of interest. Because of this, the community uses many different metrics in an attempt to capture the quality of uncertainty estimation, such as the Brier score (Brier, 1950), log-likelihood (Quinonero-Candela et al., 2005), metrics of calibration (Guo et al., 2017; Nixon et al., 2019), performance of misclassification detection (Malinin & Gales, 2018), and threshold-accuracy curves (Lakshminarayanan et al., 2017). In this section we highlight the pitfalls of the aforementioned metrics and demonstrate that these pitfalls can significantly affect evaluation, changing the ranking of the methods.

Figure 1: The average log-likelihood of two different ensembling techniques for ResNet50 on the ImageNet dataset before (solid) and after (dashed) temperature scaling. Without temperature scaling, test-time data augmentation decreases the log-likelihood of plain deep ensembles. However, when temperature scaling is enabled, deep ensembles with test-time data augmentation outperform plain deep ensembles.

Figure 2: Here TACE is reported for the VGG16BN model on the CIFAR-100 dataset and is evaluated at the optimal temperature.
Notation We consider a classification problem with a dataset that consists of N training and n testing pairs $(x_i, y_i^*) \sim p(x, y)$, where $x_i$ is an object and $y_i^* \in \{1, \ldots, C\}$ is a discrete class label. A probabilistic classifier maps an object $x_i$ into a predictive distribution $\hat p(y \mid x_i)$. The predictive distribution $\hat p(y \mid x_i)$ of a deep neural network is typically defined by the softmax function $\hat p(y \mid x) = \mathrm{Softmax}(z(x)/T)$, where $z(x)$ is a vector of logits and T is a scalar parameter standing for the temperature of the predictive distribution. This scalar parameter is usually set to T = 1 or is tuned on a validation set (Guo et al., 2017). The maximum probability $\max_c \hat p(y = c \mid x_i)$ is called the confidence of the classifier $\hat p$ on an object $x_i$. $\mathbb{I}[\cdot]$ denotes the indicator function throughout the text.
LOG-LIKELIHOOD AND BRIER SCORE
The average test log-likelihood $\mathrm{LL} = \frac{1}{n} \sum_{i=1}^{n} \log \hat p(y = y_i^* \mid x_i)$ is a popular metric for measuring the quality of in-domain uncertainty of deep learning models. It directly penalizes high probability scores assigned to incorrect labels and low probability scores assigned to the correct labels $y_i^*$. LL is sensitive to the softmax temperature T. The temperature that has been implicitly learned during training can be far from optimal for the test data. However, a nearly optimal temperature can be found post-hoc by maximizing the log-likelihood on validation data. This approach is called temperature scaling or calibration (Guo et al., 2017). Despite its simplicity, temperature scaling results in a notable improvement in the LL.

While ensembling techniques tend to have better temperature than single models, the default choice of T = 1 is still suboptimal. Comparing the LL with suboptimal temperatures, which is often the case in practice, can potentially produce an arbitrary ranking of different methods. Comparison of the log-likelihood should therefore only be performed at the optimal temperature. Empirically, we demonstrate that the overall ordering of methods and also the best ensembling method according to the LL can vary depending on the temperature T. While this applies to most ensembling techniques (see Figure 10), this effect is most noticeable on experiments with data augmentation on ImageNet (Figure 1).
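As an illustration, the optimal temperature can be found post-hoc with a few lines of PyTorch. The following is a minimal sketch, assuming precomputed validation and test logits; all names are illustrative rather than taken from our released code.

```python
# A minimal sketch of temperature scaling (Guo et al., 2017); `val_logits`
# (N x C float tensor) and `val_labels` (N, long tensor) are assumed to be
# precomputed on a held-out split.
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, steps=500, lr=0.01):
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T to keep T > 0
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # minimizing the NLL at temperature T maximizes the validation LL
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

def calibrated_log_likelihood(test_logits, test_labels, temperature):
    # average test log-likelihood evaluated at the chosen temperature
    log_probs = F.log_softmax(test_logits / temperature, dim=1)
    return log_probs[torch.arange(len(test_labels)), test_labels].mean().item()
```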
We introduce a new metric called the calibrated log-likelihood that is the log-likelihood at the optimal temperature.
The calibrated log-likelihood treats a model and its post-training calibration as a unified system, measuring all models under the equal conditions of the optimal temperature. This avoids measuring a calibration error that can be eliminated by simple temperature scaling. The metric significantly affects the results of the comparison. For example, in Figure 10 the differences between Bayesian (VI, K-FAC, SWAG, dropout) and conventional non-Bayesian networks become much less pronounced, and in most cases conventional non-Bayesian networks match the performance of Bayesian ones (VI, K-FAC, dropout) on ResNet110, ResNet164, and WideResNet.
We show how to obtain an unbiased estimate of the calibrated log-likelihood without a held-out validation set in Section 3.5.
LL also demonstrates a high correlation with accuracy (ρ > 0.86), which in the case of the calibrated LL becomes even stronger (ρ > 0.95). This suggests that while the (calibrated) LL measures the uncertainty of the model, it still has a significant dependence on the accuracy and vice versa. A model with higher accuracy would likely have a higher log-likelihood. See Figure 9 in Appendix C for more details.
The Brier score $\mathrm{BS} = \frac{1}{n}\frac{1}{C}\sum_{i=1}^{n}\sum_{c=1}^{C}\left(\mathbb{I}[y_i^* = c] - \hat p(y = c \mid x_i)\right)^2$ has also been known for a long time as a metric for verification of predicted probabilities (Brier, 1950). Similarly to the log-likelihood, the Brier score penalizes low probabilities assigned to correct predictions and high probabilities assigned to wrong ones. It is also sensitive to the temperature of the softmax distribution and behaves similarly to the log-likelihood. While these metrics are not strictly equivalent, they show a high empirical correlation for a wide range of models on CIFAR-10, CIFAR-100 and ImageNet datasets (see Figure 8 in Appendix C).
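For completeness, a minimal NumPy sketch of the Brier score as defined above; `probs` and `labels` are assumed to be a probability matrix and integer class labels.

```python
import numpy as np

def brier_score(probs, labels):
    # probs: (N, C) predicted probabilities; labels: (N,) integer labels
    n, c = probs.shape
    onehot = np.zeros((n, c))
    onehot[np.arange(n), labels] = 1.0
    # mean over objects of the squared error, normalized by C as in the text
    return np.mean(np.sum((onehot - probs) ** 2, axis=1)) / c
```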
MISCLASSIFICATION DETECTION
Detection of wrong predictions of the model, or misclassifications, is a popular downstream problem relevant to the problem of in-domain uncertainty estimation. Since misclassification detection is essentially a binary classification problem, some papers measure its quality using conventional metrics for binary classification such as AUC-ROC and AUC-PR (Malinin & Gales, 2018; Cui et al., 2019; Możejko et al., 2018). These papers use an uncertainty criterion like confidence or predictive entropy $\mathcal{H}[\hat p(y \mid x_i)]$ as a prediction score.
While these metrics can be used to assess the misclassification detection performance of a single model, they cannot be used to directly compare misclassification performance across different models. Correct and incorrect predictions are specific for every model, therefore, every model induces its own binary classification problem. The induced problems can differ significantly, since different models produce different confidences and misclassify different objects. In other words, comparing such metrics implies a comparison of performance of classifiers that solve different classification problems. Such metrics are therefore incomparable.
AUCs for misclassification detection cannot be directly compared between different models.
While comparing AUCs is incorrect in the setting of misclassification detection, it is correct to compare these metrics in many out-of-domain data detection problems. In that case, both objects and targets of the induced binary classification problems remain the same for all models. All outof-domain objects have a positive label and all in-domain objects have a negative label. Note that this condition does not necessarily hold in the problem of detection of adversarial attacks. Different models generally have different inputs after an adversarial attack, so such AUC-based metrics might still be flawed.
CLASSIFICATION WITH REJECTION
Accuracy-confidence curves are another way to measure the performance of misclassification detection. These curves measure the accuracy on the set of objects with confidence $\max_c \hat p(y = c \mid x_i)$ above a certain threshold τ (Lakshminarayanan et al., 2017), ignoring or rejecting the others.
The main problem with accuracy-confidence curves is that they rely too much on calibration and the actual values of confidence. Models with different temperatures have different numbers of objects at each confidence level, which does not allow for a meaningful comparison. To overcome this problem, one can switch from thresholding by the confidence level to thresholding by the number of rejected objects. The corresponding curves are then less sensitive to temperature scaling and thus allow to compare the rejection ability in a more meaningful way. Such curves have been known as accuracy-rejection curves (Nadeem et al., 2009). In order to obtain a scalar metric for easy comparisons, one can compute the area under this curve, resulting in AU-ARC (Nadeem et al., 2009).
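A hedged sketch of the area under the accuracy-rejection curve: objects are ranked by confidence, the threshold runs over the number of rejected objects, and the area is approximated by averaging the retained-set accuracies. The exact integration convention of (Nadeem et al., 2009) may differ.

```python
import numpy as np

def au_arc(probs, labels):
    confidence = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    order = np.argsort(-confidence)              # most confident first
    correct = correct[order]
    n = len(correct)
    # accuracy on the k most confident objects (i.e. after rejecting the
    # n - k least confident ones), for every k = 1..n
    accuracy_at_k = np.cumsum(correct) / np.arange(1, n + 1)
    return accuracy_at_k.mean()                  # average over rejection rates
```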
CALIBRATION METRICS
Informally speaking, a probabilistic classifier is calibrated if any predicted class probability is equal to the true class probability according to the underlying data distribution (see (Vaicenavicius et al., 2019) for formal definitions). Any deviation from the perfect calibration is called miscalibration. For brevity, we will use $\hat p_{i,c}$ to denote $\hat p(y = c \mid x_i)$ in the current section.
Expected calibration error (ECE) (Naeini et al., 2015) is a metric that estimates model miscalibration by binning the assigned probability scores and comparing them to average accuracies inside these bins. Assuming $B_m$ denotes the m-th bin and M is the overall number of bins, the ECE is defined as follows:

$$\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right|, \quad (1)$$

where $\mathrm{acc}(B) = |B|^{-1} \sum_{i \in B} \mathbb{I}[\arg\max_c \hat p_{i,c} = y_i^*]$ and $\mathrm{conf}(B) = |B|^{-1} \sum_{i \in B} \max_c \hat p_{i,c}$. A recent line of works on measuring calibration in deep learning (Vaicenavicius et al., 2019; Kumar et al., 2019; Nixon et al., 2019) outlines several problems of the ECE score. Firstly, ECE is a biased estimate of the true calibration. Secondly, ECE-like scores cannot be optimized directly since they are minimized by a model with constant uniform predictions, making the infinite temperature T = +∞ its global optimum. Thirdly, ECE only estimates miscalibration in terms of the maximum assigned probability whereas practical applications may require the full predicted probability vector to be calibrated. Finally, biases of ECE on different models may not be equal, rendering the miscalibration estimates incompatible. Similar concerns are also discussed by Ding et al. (2019).
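A minimal sketch of ECE with M equal-width bins over confidence; the binning scheme is itself an assumption and, as discussed above, one of the sources of bias.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    confidence = probs.max(axis=1)
    accuracy = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(labels), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            # |B|/n * |acc(B) - conf(B)| for the current bin
            ece += in_bin.sum() / n * abs(accuracy[in_bin].mean()
                                          - confidence[in_bin].mean())
    return ece
```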
Thresholded adaptive calibration error (TACE) was proposed as a step towards solving some of these problems (Nixon et al., 2019). TACE disregards all predicted probabilities that are less than a certain threshold (hence thresholded), chooses the bin locations adaptively so that each bin has the same number of objects (hence adaptive), and estimates miscalibration of probabilities across all classes in the prediction (not just the top-1 predicted class as in ECE). Assuming that $B_m^{TA}$ denotes the m-th thresholded adaptive bin and M is the overall number of bins, TACE is defined as follows:

$$\mathrm{TACE} = \frac{1}{CM} \sum_{c=1}^{C} \sum_{m=1}^{M} \frac{|B_m^{TA}|}{n} \left| \mathrm{objs}(B_m^{TA}, c) - \mathrm{conf}(B_m^{TA}, c) \right|, \quad (2)$$

where $\mathrm{objs}(B^{TA}, c) = |B^{TA}|^{-1} \sum_{i \in B^{TA}} \mathbb{I}[y_i^* = c]$ and $\mathrm{conf}(B^{TA}, c) = |B^{TA}|^{-1} \sum_{i \in B^{TA}} \hat p_{i,c}$.
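A sketch of TACE under the same caveats; the threshold, the number of bins, and the exact bin-splitting convention are assumptions, and, as noted below, the metric is sensitive to them.

```python
import numpy as np

def thresholded_adaptive_calibration_error(probs, labels,
                                           n_bins=15, threshold=1e-3):
    n, n_classes = probs.shape
    tace = 0.0
    for c in range(n_classes):
        p_c = probs[:, c]
        keep = p_c > threshold                      # "thresholded"
        p_kept = p_c[keep]
        y_kept = (labels[keep] == c).astype(float)  # objs(.) indicator
        order = np.argsort(p_kept)
        p_kept, y_kept = p_kept[order], y_kept[order]
        # "adaptive": bins hold (roughly) equal numbers of objects
        for bp, by in zip(np.array_split(p_kept, n_bins),
                          np.array_split(y_kept, n_bins)):
            if len(bp) > 0:
                tace += len(bp) / n * abs(by.mean() - bp.mean())
    return tace / (n_classes * n_bins)
```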
Although TACE does solve several problems of ECE and is useful for measuring calibration of a specific model, it still cannot be used as a reliable criterion for comparing different models. Theory suggests that it is still a biased estimate of true calibration with a different bias for each model (Vaicenavicius et al., 2019). In practice, we find that TACE is sensitive to its two parameters, the number of bins and the threshold, and does not provide a consistent ranking of different models, as shown in Figure 2.
CALIBRATED LOG-LIKELIHOOD AND TEST-TIME CROSS-VALIDATION
There are two common ways to perform temperature scaling using a validation set when training on datasets that only feature public training and test sets (e.g. CIFARs). The public training set might be divided into a smaller training set and validation set, or the public test set can be split into test and validation parts (Guo et al., 2017;Nixon et al., 2019). The problem with the first method is that the resulting models cannot be directly compared with all the other models that have been trained on the full training set. The second approach, however, provides an unbiased estimate of metrics such as log-likelihood and Brier score but introduces more variance.
In order to reduce the variance of the second approach, we perform a "test-time cross-validation". We randomly divide the test set into two equal parts, then compute metrics for each half of the test set using the temperature optimized on another half. We repeat this procedure five times and average the results across different random partitions to reduce the variance of the computed metrics.
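The procedure is easy to express on top of the temperature-scaling sketch from Section 3.1 (fit_temperature and calibrated_log_likelihood are assumed to be defined as above):

```python
import numpy as np
import torch

def test_time_cross_validation(test_logits, test_labels, n_repeats=5):
    scores = []
    n = len(test_labels)
    for _ in range(n_repeats):
        perm = torch.randperm(n)
        half_a, half_b = perm[: n // 2], perm[n // 2:]
        for fit_idx, eval_idx in [(half_a, half_b), (half_b, half_a)]:
            # optimize T on one half, evaluate the metric on the other half
            t = fit_temperature(test_logits[fit_idx], test_labels[fit_idx])
            scores.append(calibrated_log_likelihood(
                test_logits[eval_idx], test_labels[eval_idx], t))
    return float(np.mean(scores))
```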
A STUDY OF ENSEMBLING & DEEP ENSEMBLE EQUIVALENT
Ensembles of deep neural networks have become a de-facto standard for uncertainty estimation and improving the quality of deep learning models (Hansen & Salamon, 1990;Krizhevsky et al., 2009;Lakshminarayanan et al., 2017). There are two main directions of training ensembles of DNNs: training stochastic computation graphs and obtaining separate snapshots of neural network parameters.
Methods based on the paradigm of stochastic computation graphs introduce some kind of random noise over the weights or activations of deep learning models. When the model is trained, each sample of the noise corresponds to a member of the ensemble. During test time, the predictions are averaged across the noise samples. These methods include (test-time) data augmentation, dropout (Gal & Ghahramani, 2016), variational inference (Kingma et al., 2015), and the K-FAC Laplace approximation (Ritter et al., 2018). All these techniques can be summarized as distributions $q_m(\omega)$ over parameters ω of computation graphs, where m stands for the technique. During testing, one can average the predictions across parameters $\omega \sim q_m(\omega)$ to approximate the predictive distribution
$$p(y_i \mid x_i) \approx \int p(y_i \mid x_i, \omega)\, q_m(\omega)\, d\omega \approx \frac{1}{K} \sum_{k=1}^{K} p(y_i \mid x_i, \omega_k), \quad \omega_k \sim q_m(\omega). \quad (3)$$

For example, a deep ensemble of S networks can be represented in this form as a mixture of S Dirac deltas $q_{\mathrm{DE}}(\omega) = \frac{1}{S}\sum_{s=1}^{S} \delta(\omega - \omega_s)$, centered at independently trained snapshots $\omega_s$. Similarly, a Bayesian neural network with a fully-factorized Gaussian approximate posterior distribution over the weight matrices and convolutional kernels ω is represented as $q_{\mathrm{VI}}(\omega) = \mathcal{N}(\omega \mid \mu, \mathrm{diag}(\sigma^2))$, with µ and $\sigma^2$ being the optimal variational means and variances respectively.
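In code, the approximation in equation 3 is a simple averaging loop. In the sketch below, `sample_weights_fn` is a hypothetical hook that draws one parameter sample $\omega_k \sim q_m(\omega)$ in-place, e.g. by loading a deep-ensemble snapshot or by resampling noise.

```python
import torch

@torch.no_grad()
def ensemble_predict(model, sample_weights_fn, x, k=10):
    # Monte Carlo approximation of equation 3: average the predictive
    # distributions over K parameter samples w_k ~ q_m(w).
    probs = 0.0
    for _ in range(k):
        sample_weights_fn(model)  # hypothetical hook: draw one sample in-place
        probs = probs + torch.softmax(model(x), dim=1)
    return probs / k
```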
If one considers data augmentation as a part of the computational graph, it can be parameterized by the coordinates of the random crop and the flag for whether to flip the image horizontally or not. Sampling from the corresponding $q_{\mathrm{aug}}(\omega)$ would generate different ways to augment the data during inference. However, as data augmentation is present by default during the training of all other mentioned ensembling techniques, it is suitable to study it in combination with these methods and not as a separate ensembling technique. We perform such an evaluation in Section 4.3.

Figure 3: The deep ensemble equivalent (DEE) of various ensembling techniques. The score is measured in the number of models (higher is better). The area between the average lower and upper bounds of DEE is shaded. The plot demonstrates that all of the ensembling techniques are far less efficient than deep ensembles during inference and fail to produce the same level of performance as deep ensembles. The comparison that is normalized on training time is presented in Appendix A.
Typically, the approximation (equation 3) requires K independent forward passes through a neural network, making the test-time budget directly comparable across all methods.
DEEP ENSEMBLE EQUIVALENT
Most ensembling techniques under consideration are either bounded to a single mode, or provide positively correlated samples. Deep ensemble, on the other hand, is a simple technique that provides independent samples from different modes of the loss landscape, which, intuitively, should result in a better ensemble. Therefore deep ensembles can be considered a strong baseline for the performance of other ensembling techniques given a fixed test-time computation budget.
Comparing the performance of ensembling techniques is, however, a challenging problem. Different models on different datasets achieve different values of metrics; their dependence on the number of samples is non-trivial and varies depending on a specific model and dataset. Values of the metrics thus lack interpretability, as the gain in performance has to be compared against a model- and dataset-specific baseline.
Aiming to introduce perspective and interpretability in our study, we introduce the deep ensemble equivalent score that employs deep ensembles to measure the performance of other ensembling techniques. Specifically, the deep ensemble equivalent score answers the following question:
What size of deep ensemble yields the same performance as a particular ensembling method?
Following the insights from the previous sections, we base the deep ensemble equivalent on the calibrated log-likelihood (CLL). Formally speaking, we define the deep ensemble equivalent (DEE) for an ensembling method m and its upper and lower bounds as follows:
$$\mathrm{DEE}_m(k) = \min \left\{ l \in \mathbb{R},\; l \ge 1 \;\middle|\; \mathrm{CLL}^{\mathrm{mean}}_{\mathrm{DE}}(l) \ge \mathrm{CLL}^{\mathrm{mean}}_{m}(k) \right\}, \quad (4)$$
$$\mathrm{DEE}^{\mathrm{upper/lower}}_{m}(k) = \min \left\{ l \in \mathbb{R},\; l \ge 1 \;\middle|\; \mathrm{CLL}^{\mathrm{mean}}_{\mathrm{DE}}(l) \mp \mathrm{CLL}^{\mathrm{std}}_{\mathrm{DE}}(l) \ge \mathrm{CLL}^{\mathrm{mean}}_{m}(k) \right\}, \quad (5)$$

where $\mathrm{CLL}^{\mathrm{mean/std}}_{m}(l)$ are the mean and the standard deviation of the calibrated log-likelihood achieved by an ensembling method m with l samples. We compute $\mathrm{CLL}^{\mathrm{mean}}_{\mathrm{DE}}(l)$ and $\mathrm{CLL}^{\mathrm{std}}_{\mathrm{DE}}(l)$ for natural numbers $l \in \mathbb{N}_{>0}$ and use linear interpolation to define them for real values $l \ge 1$. In the following plots we report $\mathrm{DEE}_m(k)$ for different methods m with different numbers of samples k, and shade the area between the respective lower and upper bounds $\mathrm{DEE}^{\mathrm{lower}}_{m}(k)$ and $\mathrm{DEE}^{\mathrm{upper}}_{m}(k)$.
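Given measured CLL curves, DEE reduces to a one-dimensional interpolation. The following is a sketch under the assumption that the mean CLL of deep ensembles grows monotonically with the ensemble size; the interpolation convention follows the paragraph above, and the score is clipped at the largest measured ensemble size.

```python
import numpy as np

def deep_ensemble_equivalent(cll_de_mean, cll_method):
    # cll_de_mean[l - 1]: mean calibrated LL of a deep ensemble of l networks,
    # measured for l = 1..L; assumed to grow monotonically with l.
    sizes = np.arange(1, len(cll_de_mean) + 1)
    if cll_method <= cll_de_mean[0]:
        return 1.0                      # no better than a single network
    if cll_method >= cll_de_mean[-1]:
        return float(sizes[-1])         # clipped at the largest measured size
    return float(np.interp(cll_method, cll_de_mean, sizes))
```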
Figure 4: An illustration of test-time augmentation (TTA) for an ensemble. We apply every member of an ensemble to a separate random augmentation of an image. The predictions of all members are averaged to produce the final prediction of the ensemble. In our experiments, TTA leads to a significant performance boost for most of the ensembling techniques on ImageNet with a sufficient computational budget (see Figure 5).
EXPERIMENTS

We use PyTorch (Paszke et al., 2017) for the implementation of all models, building upon available public implementations. Our implementation closely matches the quality of the methods reported in the original works. Technical details on training, hyperparameters and implementations can be found in Appendix B. The source code and all computed metrics are available on GitHub.
As one can see in Figure 3, ensembling methods clearly fall into three categories. SSE and cSGLD outperform all other techniques except deep ensembles and enjoy a near-linear scaling of DEE with the number of samples on CIFAR datasets. The investigation of weight-space trajectories of cSGLD and SSE (Huang et al., 2017; Zhang et al., 2019) suggests that these methods can efficiently explore different modes of the loss landscape. In terms of the deep ensemble equivalent, these methods do not saturate, unlike other methods that are bound to a single mode. We found SSE to still saturate on ImageNet, which is likely due to suboptimal hyperparameters of the cyclic learning rate schedule. More detailed results are presented in Figures 11-13 and in Table 5 and Table 8 in Appendix C.
In our experiments SSE typically outperforms cSGLD. This is mostly due to the fact that SSE has a much larger training budget. The cycle lengths and learning rates of SSE and cSGLD are comparable, however, SSE collects one snapshot per cycle while cSGLD collects three snapshots. This makes samples from SSE less correlated with each other while increasing the training budget threefold. Both SSE and cSGLD can be adjusted to obtain a different trade-off between the training budget and the DEE-to-samples ratio. We reused the schedules provided in the original papers (Huang et al., 2017;Zhang et al., 2019).
Figure 5: The negative calibrated log-likelihood (lower is better) for different ensembling techniques on ImageNet. We report performance for two regimes: central-crop evaluation, where every member of an ensemble is applied to the central crop of an image, and test-time data augmentation, where each member of the ensemble is applied to a separate random augmentation of the image. Test-time data augmentation significantly improves ensembles with no additional computational cost. Interestingly, a single model with TTA performs competitively with methods that require a significantly larger parameter budget and training complexity; e.g., a single model with TTA performs close to pure deep ensembles (DE) of the same size.
Being more "local" methods, FGE and SWAG perform worse than SSE and cSGLD, but still significantly outperform "single-snapshot" methods like dropout, K-FAC Laplace approximation and variational inference. We hypothesize that by covering a single mode with a set of snapshots, FGE and SWAG provide a better fit for the local geometry than models trained as stochastic computation graphs. This implies that the performance of FGE and SWAG should be achievable by singlesnapshot methods. However, one might need more elaborate posterior approximations and better inference techniques in order to match the performance of FGE and SWAG by training a stochastic computation graph end-to-end (as opposed to SWAG that constructs a stochastic computation graph post-hoc).
The deep ensemble equivalent curves allow us to notice the common behaviour of different methods, e.g. the relation between deep ensembles, snapshot methods, advanced local methods and singlesnapshot local methods. They also allow us to notice inconsistencies that may indicate a suboptimal choice of hyperparameters. For example, we find that SSE on ImageNet quickly saturates, unlike SSE on CIFAR datasets ( Figure 3). This may indicate that the hyperparameters used on ImageNet are not good enough for efficient coverage of different modes of the loss landscape. We also find that SSE on WideResNet on CIFAR-10 achieves a DEE score of 100 on approx. 70 samples (Figure 12). This may indicate that the members of the deep ensemble for this dataset-architecture pair are underfitted and may benefit from longer training or a different learning rate schedule. Such inconsistencies might be more difficult to spot using plain calibrated log-likelihood plots.
TEST-TIME DATA AUGMENTATION IMPROVES ENSEMBLES FOR FREE
Data augmentation is a time-honored technique that is widely used in deep learning, and is a crucial component for training modern DNNs. Test-time data augmentation has been used for a long time to improve the performance of convolutional networks. For example, multi-crop evaluation has been a standard procedure for the ImageNet challenge (Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016). It is, however, often overlooked in the literature on ensembling techniques in deep learning. In this section, we study the effect of test-time data augmentation on the aforementioned ensembling techniques. To keep the test-time computation budget the same, we sample one random augmentation for each member of an ensemble. Figure 5 reports the calibrated log-likelihood for the combination of ensembles and test-time data augmentation on ImageNet. Other metrics and results on CIFAR-10/100 datasets are reported in Appendix C. We have used the standard data augmentation: random horizontal flips and random padded crops for CIFAR-10/100 datasets, and random horizontal flips and random resized crops for ImageNet (see more details in Appendix B).
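A sketch of this evaluation protocol is given below; the transform mirrors the standard ImageNet training augmentation, with illustrative parameters and the usual ImageNet normalization statistics.

```python
import torch
import torchvision.transforms as T

# Standard ImageNet-style augmentation; crop/flip parameters are illustrative,
# and the normalization uses the usual ImageNet statistics.
tta_transform = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def tta_ensemble_predict(models, pil_image):
    # One independent random augmentation per ensemble member keeps the
    # test-time budget identical to central-crop evaluation: K forward passes.
    probs = 0.0
    for model in models:
        x = tta_transform(pil_image).unsqueeze(0)
        probs = probs + torch.softmax(model(x), dim=1)
    return probs / len(models)
```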
Test-time data augmentation (Figure 4) consistently improves most ensembling methods, especially on ImageNet, where we see a clear improvement across all methods (Figure 5 and Table 7). The performance gain for powerful ensembles (deep ensembles, SSE and cSGLD) on CIFAR datasets is not as dramatic (Figures 14-15 and Table 4). This is likely due to the fact that CIFAR images are small, making data augmentation limited, whereas images from ImageNet allow for a large number of diverse samples of augmented images. On the other hand, while the performance of "single-snapshot" methods (e.g. variational inference, K-FAC Laplace and dropout) is improved significantly, they perform approximately as well as an augmented version of a single model across all datasets.
Interestingly, test-time data augmentation on ImageNet improves accuracy but decreases the uncalibrated log-likelihood of deep ensembles (Table 7 in Appendix C). Test-time data augmentation breaks the nearly optimal temperature of deep ensembles and requires temperature scaling to reveal the actual performance of the method, as discussed in Section 3.1. The experiment demonstrates that ensembles may be highly miscalibrated by default while still providing superior predictive performance after calibration.
We would like to note that test-time data augmentation does not always break the calibration of an ensemble; on the contrary, it often improves the calibration of an ensemble. In our experiments, decalibration was caused by the extreme magnitude of the random crop that is conventionally used for ImageNet augmentation. Using a less extreme magnitude of the random crop fixes the decalibration, which makes test-time data augmentation a more practical method that provides out-of-the-box calibration. That said, as we demonstrated earlier, there is no guarantee that any ensemble is calibrated out-of-the-box. If we are willing to apply post-hoc calibration, the final performance can be much better with more severe augmentations.
DISCUSSION & CONCLUSION
We have explored the field of in-domain uncertainty estimation and performed an extensive evaluation of modern ensembling techniques. Our main findings can be summarized as follows:
• Temperature scaling is a must even for ensembles. While ensembles generally have better calibration out-of-the-box, they are not calibrated perfectly and can benefit from the procedure. A comparison of log-likelihoods of different ensembling methods without temperature scaling might not provide a fair ranking, especially if some models happen to be miscalibrated.

• Many common metrics for measuring in-domain uncertainty are either unreliable (ECE and analogues) or cannot be used to compare different methods (AUC-ROC, AUC-PR for misclassification detection; accuracy-confidence curves). In order to perform a fair comparison of different methods, one needs to be cautious of these pitfalls.

A IS "EFFICIENT" TRAINING OF ENSEMBLES EFFICIENT AT ALL?
Yes, if you care most about training-time efficiency: all snapshot-based methods (SSE, cSGLD and FGE) on average (Figure 6) tend to achieve the same performance as deep ensembles using 2-5× fewer training epochs on CIFAR datasets.
The gain comes at the cost of inference efficiency and memory consumption: "efficient" snapshot-based methods require storing many more weight samples than deep ensembles, making inference significantly more expensive (up to 25×) given the same predictive performance.
You need to get lucky with the hyperparameter choice. While on average "efficient" snapshot-based methods require fewer training resources, they might be completely inefficient if the hyperparameter choice is suboptimal (see Figure 7). Hyperparameters such as the maximum learning rate, the length of the learning rate decay cycle, and the snapshot saving schedule can all impact the performance significantly.
This analysis assumes that we use only the conventional training procedure. Many models can most likely be trained, stored and executed much more efficiently with methods like super-convergence, stochastic weight averaging, compression, distillation and others. These methods are out of the scope of the paper but are interesting topics for future research. The current choice of hyperparameters may also not be optimal; we reuse the hyperparameters used in the original papers.
B EXPERIMENTAL DETAILS
Implementations of deep ensembles, SWAG, FGE and K-FAC Laplace are heavily based on the original PyTorch implementations of stochastic weight averaging (SWA) and SWAG. Implementations of cyclical MCMC and snapshot ensembles are based on the original implementation of cyclical MCMC. We hypothesize that the optimal hyperparameters of ensembling methods may vary widely depending on the computational budget and the number of samples in the ensemble. Searching for the optimal values for each configuration is outside the scope of this paper, so we stick to the originally proposed hyperparameters whenever possible.
Implied probabilistic model Conventional neural networks for classification are usually trained using the average cross-entropy loss function with weight decay regularization hidden inside an optimizer in a deep learning framework like PyTorch. The underlying optimization problem can be written as follows:
$$\mathcal{L}(w) = -\frac{1}{N} \sum_{i=1}^{N} \log \hat p(y_i^* \mid x_i, w) + \frac{\lambda}{2} \|w\|^2 \to \min_w, \quad (6)$$
where $\{(x_i, y_i^*)\}_{i=1}^{N}$ is the training dataset of N objects $x_i$ with corresponding labels $y_i^*$, λ is the weight decay scale, and $\hat p(j \mid x_i, w)$ denotes the probability that a neural network with parameters w assigns to class j when evaluated on object $x_i$.
The cross-entropy loss defines the likelihood function $p(y^* \mid x, w)$, and the weight decay, or $L_2$ regularization, corresponds to a certain Gaussian prior distribution $p(w)$. The whole optimization objective then corresponds to maximum a posteriori inference in the following probabilistic model:
$$p(y^*, w \mid x) = p(y^* \mid x, w)\, p(w), \quad (7)$$
$$\log p(y^* \mid x, w) = \log \prod_{i=1}^{N} p(y_i^* \mid x_i, w) = \sum_{i=1}^{N} \log \hat p(y_i^* \mid x_i, w), \quad (8)$$
$$\log p(w) = -\frac{N\lambda}{2} \|w\|^2 + \mathrm{const} \iff p(w) = \mathcal{N}\!\left(w \mid 0, (N\lambda)^{-1} I\right) \quad (9)$$
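In PyTorch terms, this correspondence means that the usual training loop already performs MAP inference in the model (7)-(9). A minimal sketch with illustrative architecture and hyperparameters:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(3072, 10)   # illustrative stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=3e-4)

def training_step(x, y):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)  # the data term of equation 6
    loss.backward()
    optimizer.step()  # SGD adds the gradient of the lambda/2 * ||w||^2 term
    return loss.item()
```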
In order to make the results comparable across all ensembling techniques, we used the same probabilistic model for all methods, choosing fixed weight decay parameters for each architecture. We used the softmax-based likelihood for all models. We also use the fully-factorized zero-mean Gaussian prior distribution with variances $\sigma^2 = (N\lambda)^{-1}$, where the number of objects N and the weight decay scale λ are dictated by the particular datasets and neural architectures as defined in the following paragraph.
Conventional networks To train a single network on CIFAR-10/100, we used SGD with a batch size of 128, momentum 0.9 and model-specific parameters, i.e. the initial learning rate (lr_init), the weight decay coefficient (wd), and the number of optimization epochs (epoch). Specific hyperparameters are shown in Table 1. The models were trained with a unified learning rate scheduler that is shown in equation 10. All models have been trained using data augmentation that consists of horizontal flips and a random crop of 32 pixels with a padding of 4 pixels. The standard data normalization has also been applied. Weight decays, initial learning rates, and the learning rate scheduler were taken from (Garipov et al., 2018). Compared with the hyperparameters of (Garipov et al., 2018), we increased the number of optimization epochs since we found that all models were underfitted. While the original WideResNet28x10 network includes a number of dropout layers with p = 0.3 and is trained for 200 epochs, we find that the WideResNet28x10 underfits in this setting and requires longer training. Therefore, we used p = 0, which reduces the training time while bearing no significant effect on the final performance.

Dropout Dropout (Gal & Ghahramani, 2016) is one of the most widely known ensembling techniques. It involves putting multiplicative Bernoulli noise with a parameter p over the activations of either a fully connected layer or a convolutional layer, averaging predictions of the network w.r.t. the noise at test time. Dropout layers were applied to VGG and WideResNet networks on CIFAR-10 and CIFAR-100 datasets. Dropout for VGG was applied to fully connected layers with p = 0.5. Two dropout layers were used: one before the first fully connected layer and one before the second one. While the original version of VGG for CIFARs (Zagoruyko, 2015) exploits more dropout layers, we observed that any additional dropout layer deteriorates the performance of the model in either deterministic or stochastic mode. Dropout for WideResNet was applied in accordance with the original paper (Zagoruyko & Komodakis, 2016) with p = 0.3. Dropout usually increases the time needed to achieve convergence. Because of this, WideResNet networks with dropout were trained for 400 epochs instead of 300 epochs for the deterministic case, and VGG networks have always been trained with dropout. All the other hyperparameters were the same as in the case of conventional models.
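At test time, dropout sampling amounts to keeping the Bernoulli noise active while everything else stays in evaluation mode. A hedged sketch follows; enabling only the Dropout modules avoids accidentally switching BatchNorm to training mode.

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, k=10):
    model.eval()
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()  # keep Bernoulli noise active at test time
    probs = 0.0
    for _ in range(k):
        probs = probs + torch.softmax(model(x), dim=1)
    return probs / k
```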
Variational Inference: Variational inference (VI) approximates the true posterior distribution over weights p(w | Data) with a tractable variational approximation q_θ(w) by maximizing the so-called variational lower bound L (eq. 11) w.r.t. the parameters θ of the variational approximation. We used a fully-factorized Gaussian approximation q(w) and a Gaussian prior distribution p(w). With such a prior, the probabilistic model remains consistent with conventional training, which corresponds to MAP inference in the same probabilistic model. We used variational inference for both convolutional and fully-connected layers, where the variances of the weights were parameterized by log σ. For fully-connected layers we applied the local reparameterization trick (LRT; Kingma et al., 2015).
$$\mathcal{L}(\theta) = \mathbb{E}_{q_\theta} \log p(y^* \mid x, w) - \mathrm{KL}\left(q_\theta(w)\,\|\,p(w)\right) \rightarrow \max_\theta \quad (11)$$
$$q(w) = \mathcal{N}\!\left(w \mid \mu, \mathrm{diag}(\sigma^2)\right), \qquad p(w) = \mathcal{N}\!\left(w \mid 0, \mathrm{diag}(\sigma_p^2)\right), \quad \text{where } \sigma_p^2 = (N \cdot \mathrm{wd})^{-1} \quad (12)$$
While variational inference provides a theoretically grounded way to approximate the true posterior, it tends to underfit deep learning models in practice (Kingma et al., 2015). The following tricks are applied to deal with this: pre-training (Molchanov et al., 2017), or equivalently annealing of β (Sønderby et al., 2016), and scaling β down (Kingma et al., 2015; Ullrich et al., 2017). During pre-training, we initialize µ with a snapshot of the weights of a pre-trained conventional model and initialize log σ with a model-specific constant log σ_init. The KL divergence, except for the term corresponding to the weight decay, is scaled with a model-specific parameter β. The weight decay term is implemented as part of the optimizer; we use the fact that the KL divergence between two Gaussian distributions can be rewritten as two terms, one of which is equivalent to the weight decay regularization.
On CIFAR-10 and CIFAR-100 we used β = 1e-4 for the VGG, ResNet110, and ResNet164 networks, and β = 1e-5 for WideResNet. The log-variance log σ_init was initialized to −5 for all models. The parameters µ were optimized with SGD in the same manner as for conventional networks, except that the initial learning rate lr_init was set to 1e-3. We used a separate Adam optimizer with a constant learning rate of 1e-3 to optimize the log-variances log σ. Pre-training was done for 300 epochs; after that, the remaining part of training was done for 100 epochs. On ImageNet we used β = 1e-3, lr_init = 0.01, log σ_init = −6, and trained the model for 45 epochs after pre-training.
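To illustrate the variational family of equations 11-12, the following is a minimal sketch of a fully-factorized Gaussian linear layer. It uses plain reparameterization and omits biases and the local reparameterization trick for brevity; names are illustrative, and this is not the training code used in the experiments:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFGLinear(nn.Module):
    """Linear layer with a fully-factorized Gaussian posterior over weights."""

    def __init__(self, d_in, d_out, log_sigma_init=-5.0):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(d_out, d_in) * 0.01)
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), log_sigma_init))

    def forward(self, x):
        # sample w ~ q(w) = N(mu, sigma^2) via the reparameterization trick
        w = self.mu + torch.exp(self.log_sigma) * torch.randn_like(self.mu)
        return F.linear(x, w)

    def kl(self, prior_var):
        # KL(N(mu, sigma^2) || N(0, prior_var)), summed over all weights
        var = torch.exp(2 * self.log_sigma)
        return 0.5 * ((var + self.mu ** 2) / prior_var - 1
                      - 2 * self.log_sigma + math.log(prior_var)).sum()
```

The training loss is then the data NLL plus β times the sum of the per-layer KL terms, with the prior variance σ_p² = (N · wd)^{-1} as defined above.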
K-FAC Laplace: The Laplace approximation uses the curvature information of the appropriately scaled loss function to construct a Gaussian approximation to the posterior distribution. Ideally, one would use the inverse Hessian of the loss function as the covariance matrix and use the maximum a posteriori estimate w_MAP as the mean of the Gaussian approximation:
$$\log p(w \mid x, y^*) = \log p(y^* \mid x, w) + \log p(w) + \text{const} \quad (13)$$
$$w_{MAP} = \arg\max_w \log p(w \mid x, y^*); \qquad \Sigma = \left(-\nabla\nabla \log p(w \mid x, y^*)\right)^{-1} \quad (14)$$
$$p(w \mid x, y^*) \approx \mathcal{N}\!\left(w \mid w_{MAP}, \Sigma\right) \quad (15)$$
In order to keep the method scalable, we use the Fisher Information Matrix as an approximation to the true Hessian (Martens & Grosse, 2015). For K-FAC Laplace, we use the whole dataset to construct an approximation to the empirical Fisher Information Matrix, and we use the π correction to reduce the bias (Ritter et al., 2018; Martens & Grosse, 2015). Following Ritter et al. (2018), we find the optimal noise scale for K-FAC Laplace on a held-out validation set by averaging across five random initializations. We then reuse this scale for networks trained without a hold-out validation set. We report the optimal values of the scales in Table 2. Note that the optimal scale differs depending on whether we use test-time data augmentation or not: since data augmentation also introduces some amount of additional noise, the optimal noise scale for K-FAC Laplace with data augmentation is lower.
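The following sketch illustrates the general Laplace recipe with a diagonal empirical Fisher in place of the K-FAC factorization; the simplification is made only to keep the example short, and the scale argument plays the role of the noise scale tuned on the validation set:

```python
import torch
import torch.nn.functional as F

def diagonal_empirical_fisher(model, loader):
    """Accumulate squared log-likelihood gradients over the whole dataset."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x), y, reduction="sum").backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return fisher

@torch.no_grad()
def sample_laplace(model, fisher, scale=1.0, eps=1e-8):
    """Perturb w_MAP in place: one sample from N(w_MAP, scale * F^{-1})."""
    for n, p in model.named_parameters():
        std = (scale / (fisher[n] + eps)).sqrt()
        p.add_(std * torch.randn_like(p))
```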
Snapshot ensembles: Snapshot ensembles (SSE) (Huang et al., 2017) are a simple example of a family of methods that collect weight-space samples from the training trajectory of a network to construct an ensemble. Samples are collected in a cyclical manner: during each cycle the learning rate goes from a large value to near zero, and a snapshot of the network's weights is taken at the end of the cycle. SSE uses SGD with a cosine learning rate schedule defined as follows:
$$\alpha(t) = \frac{\alpha_0}{2}\left(\cos\!\left(\frac{\pi \,\mathrm{mod}(t-1,\, \lceil T/M \rceil)}{\lceil T/M \rceil}\right) + 1\right), \quad (16)$$
where α_0 is the initial learning rate, T is the total number of training iterations, and M is the number of cycles.
Hyperparameters from the original SSE paper are reused for all datasets and models. For CIFAR-10/100, the length of a cycle is 40 epochs, the maximum learning rate is 0.2, and the batch size is 64. For ResNet50 on ImageNet, the length of a cycle is 45 epochs, the maximum learning rate is 0.1, and the batch size is 256.
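Equation 16 translates directly into code; a short sketch (⌈T/M⌉ implemented with math.ceil; not the original implementation):

```python
import math

def sse_lr(t, alpha_0, T, M):
    """Cyclical cosine schedule of equation 16; t is the 1-based iteration."""
    cycle = math.ceil(T / M)
    return alpha_0 / 2 * (math.cos(math.pi * ((t - 1) % cycle) / cycle) + 1)
```

A snapshot is taken at the end of each cycle, where the learning rate is close to zero.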
Cyclical SGLD: Cyclical Stochastic Gradient Langevin Dynamics (cSGLD) (Zhang et al., 2019) is a state-of-the-art ensembling method for deep neural networks belonging to the stochastic Markov chain Monte Carlo family. It is similar to SSE: it employs SGD with the learning rate schedule of equation 16, and training is cyclical in the same manner. Its main differences from SSE are the injection of gradient noise and the capturing of several snapshots per cycle, both of which aid efficient sampling from the posterior distribution over neural network weights.
Some parameters from the original paper are reused: the length of a cycle is 50 epochs, the maximum learning rate is 0.5, and the batch size is 64. The number of epochs with gradient noise per cycle is 3. This was found to yield much higher predictive performance and better uncertainty estimation than the original paper's choice of 10 epochs for CIFAR-10 and 3 epochs for CIFAR-100.
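For reference, one gradient-noise step can be sketched as an SGD step plus Gaussian exploration noise. We use the common SGLD formulation with noise variance 2·lr/N for a loss averaged over N training points; this is an illustrative sketch, not the authors' implementation:

```python
import math
import torch

@torch.no_grad()
def sgld_step(params, lr, num_train):
    """One SGLD update for parameters whose .grad fields are populated."""
    for p in params:
        if p.grad is None:
            continue
        p.add_(p.grad, alpha=-lr)  # plain SGD step
        p.add_(torch.randn_like(p), alpha=math.sqrt(2 * lr / num_train))  # noise
```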
Finally, the results of cyclical Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) (Zhang et al., 2019), which reportedly has marginally better performance than cyclical SGLD, could not be reproduced with any value of the SGD momentum term. Because of this, we only include cyclical SGLD in our benchmark.
FGE: Fast Geometric Ensembling (FGE) is an ensembling method similar to SSE in that it collects weight samples from a training trajectory to construct an ensemble. Its main differences from SSE are pre-training, a short cycle length, and a piecewise-linear learning rate schedule:
$$\alpha(i) = \begin{cases} (1 - 2t(i))\,\alpha_1 + 2t(i)\,\alpha_2, & 0 < t(i) \leq \tfrac{1}{2} \\ (2 - 2t(i))\,\alpha_2 + (2t(i) - 1)\,\alpha_1, & \tfrac{1}{2} < t(i) \leq 1 \end{cases} \quad (17)$$
Hyperparameters of the original implementation of FGE are reused. Model pre-training is done with SGD for 160 epochs according to the standard learning rate schedule described in equation 10, with maximum learning rates from Table 1. After that, a desired number of FGE cycles is run, with one snapshot collected per cycle. For VGG, the learning rate is changed with the parameters α1 and α2 of the original implementation.

Figure 15: Classification error and negative calibrated log-likelihood before vs. after test-time augmentation on CIFAR-100.
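The piecewise-linear schedule of equation 17 in code, where t(i) ∈ (0, 1] is the relative position within the current cycle (a sketch, not the original implementation):

```python
def fge_lr(t, alpha_1, alpha_2):
    """Piecewise-linear FGE cycle of equation 17: alpha_1 -> alpha_2 -> alpha_1."""
    if t <= 0.5:
        return (1 - 2 * t) * alpha_1 + 2 * t * alpha_2
    return (2 - 2 * t) * alpha_2 + (2 * t - 1) * alpha_1
```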
Figure 2: Thresholded adaptive calibration error (TACE) is highly sensitive to the threshold and the number of bins. It does not provide a consistent ranking of different ensembling techniques.
One family of techniques constructs a stochastic computation graph on top of a single model, e.g., dropout (Srivastava et al., 2014; Gal & Ghahramani, 2016), variational inference (Blundell et al., 2015; Kingma et al., 2015; Louizos & Welling, 2017), batch normalization (Ioffe & Szegedy, 2015; Teye et al., 2018; Atanov et al., 2019), Laplace approximation (Ritter et al., 2018), and many more.

Snapshot-based methods aim to obtain sets of weights for deep learning models and then average the predictions across these weights. The weights can be trained independently (e.g., deep ensembles (Lakshminarayanan et al., 2017)), collected at different stages of a training trajectory (e.g., snapshot ensembles (Huang et al., 2017) and fast geometric ensembles (Garipov et al., 2018)), or obtained from a sampling process (e.g., MCMC-based methods (Welling & Teh, 2011; Zhang et al., 2019)). These two paradigms can be combined: some works suggest constructing ensembles of stochastic computation graphs (Tomczak et al., 2018), while others make use of the collected snapshots to construct a stochastic computation graph (Wang et al., 2018; Maddox et al., 2019).

In this paper we consider the following ensembling techniques: deep ensembles (Lakshminarayanan et al., 2017), snapshot ensembles (SSE by Huang et al. (2017)), fast geometric ensembling (FGE by Garipov et al. (2018)), SWA-Gaussian (SWAG by Maddox et al. (2019)), cyclical SGLD (cSGLD by Zhang et al. (2019)), variational inference (VI by Blundell et al. (2015)), K-FAC Laplace approximation (Ritter et al., 2018), dropout (Srivastava et al., 2014), and test-time data augmentation (Krizhevsky et al., 2009). These techniques were chosen to cover a diverse set of approaches while keeping their predictive performance in mind.
Figure 3: The deep ensemble equivalent score (DEE) for different numbers of samples on the CIFAR-10, CIFAR-100, and ImageNet datasets, averaged across different deep convolutional architectures. The DEE of a model is equal to the minimum size of a deep ensemble (an ensemble of independently trained networks) that achieves the same performance as the model under consideration.
We compute the deep ensemble equivalent (DEE) of various ensembling techniques for four popular deep architectures: VGG16 (Simonyan & Zisserman, 2014), PreResNet110/164 (He et al., 2016), and WideResNet28x10 (Zagoruyko & Komodakis, 2016) on the CIFAR-10/100 datasets (Krizhevsky et al., 2009), and ResNet50 (He et al., 2016) on the ImageNet dataset (Russakovsky et al., 2015).
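Operationally, a DEE score can be obtained by interpolating the deep-ensemble performance as a function of ensemble size and inverting it at the performance of the method under consideration. The following is a minimal sketch of our reading of this definition (names are illustrative):

```python
import numpy as np

def dee(method_ll, de_sizes, de_lls):
    """de_sizes: deep-ensemble sizes, e.g. [1, 2, 5, 10, ...];
    de_lls: their test log-likelihoods (assumed increasing with size);
    method_ll: test log-likelihood of the evaluated method."""
    return float(np.interp(method_ll, de_lls, de_sizes))
```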
Figure 6: The mean cost of training of an ensemble vs. the mean deep ensemble equivalent score. Each marker on the plot denotes one snapshot of weights.

Figure 7: Cost of training of an ensemble vs. its quality (DEE). Each marker on a plot denotes one snapshot of weights.
Figure 8: The average log-likelihood vs. the Brier score on a test dataset for different ensemble methods on the CIFAR-10 (a), CIFAR-100 (b), and ImageNet (c) datasets. While not being equivalent, these metrics demonstrate a strong linear correlation; the correlation coefficient is denoted as ρ, with |ρ| close to 1 on all three datasets.

Figure 9: Log-likelihood vs. accuracy for different ensembles before (a, c, e) and after (b, d, f) calibration. Both the plain log-likelihood and especially the calibrated log-likelihood are highly correlated with accuracy.
Figure 10: A side-by-side comparison of log-likelihood and calibrated log-likelihood on the CIFAR-10 (a) and CIFAR-100 (b) datasets. On CIFAR-10, the performance of one network becomes close to dropout, variational inference (vi), and the K-FAC Laplace approximation (kfacl) after calibration on all models except VGG. On CIFAR-100, deep ensembles move to the first position in the ranking after calibration on WideResNet and VGG. See Section 3.1 for details on the calibrated log-likelihood.

Figure 11: The deep ensemble equivalent of various ensembling techniques on ImageNet. Solid lines: mean DEE for different methods and architectures. The area between DEE_lower and DEE_upper is shaded. Columns 2-4 correspond to DEE based on other metrics, defined similarly to the log-likelihood-based DEE. The results are consistent for all metrics.

Figure 12: The deep ensemble equivalent of various ensembling techniques on CIFAR-10. Solid lines: mean DEE for different methods and architectures. The area between DEE_lower and DEE_upper is shaded. Lines 2-4 correspond to DEE based on other metrics, defined similarly to the log-likelihood-based DEE. Note that while the actual scale of DEE varies from metric to metric, the ordering of different methods and the overall behaviour of the lines remain the same. SSE outperforms deep ensembles on CIFAR-10 on the WideResNet architecture, possibly indicating that the cosine learning rate schedule and longer training of SSE are more suitable for this architecture than the piecewise-linear learning rate schedule and the number of epochs used in deep ensembles.
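The calibrated log-likelihood referenced in these captions rescales the logits by a temperature tuned on held-out data before measuring the test log-likelihood. A minimal temperature-scaling sketch of our reading of this metric (names are illustrative):

```python
import torch
import torch.nn.functional as F

def calibrated_nll(test_logits, test_labels, val_logits, val_labels):
    """Fit a single temperature on validation logits, then score the test set."""
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    with torch.no_grad():
        return F.cross_entropy(test_logits / log_t.exp(), test_labels).item()
```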
[Table 3, CIFAR-10 block: classification error and negative calibrated log-likelihood at 1, 5, 10, and 100 samples for Single model, Variational Inf. (FFG), KFAC-Laplace, Snapshot Ensembles, Dropout, SWA-Gaussian, Cyclic SGLD, Fast Geometric Ens., and Deep Ensembles on the VGG16, ResNet110, ResNet164, and WideResNet architectures; the full caption appears below.]
• Many popular ensembling techniques require dozens of samples for test-time averaging, yet are essentially equivalent to a handful of independently trained models. Deep ensembles dominate other methods given a fixed test-time budget. The results indicate, in particular, that exploration of different modes in the loss landscape is crucial for good predictive performance.
• Methods that are stuck in a single mode are unable to compete with methods that are designed to explore different modes of the loss landscape. Would more elaborate posterior approximations and better inference techniques shorten this gap?
• Test-time data augmentation is a surprisingly strong baseline for in-domain uncertainty estimation. It can significantly improve other methods without increasing training time or model size, since data augmentation is usually already present during training.

Our takeaways are aligned with the take-home messages of Ovadia et al. (2019) that relate to in-domain uncertainty estimation. We also observe a stable ordering of different methods in our experiments, and observe that deep ensembles with few members outperform methods based on stochastic computation graphs.

A large number of unreliable metrics inhibits a fair comparison of different methods. Because of this, we urge the community to aim for more reliable benchmarks in the numerous setups of uncertainty estimation.

ACKNOWLEDGMENTS

Dmitry Vetrov and Dmitry Molchanov were supported by the Russian Science Foundation grant no. 19-71-30020. This research was supported in part through computational resources of HPC facilities at NRU HSE.

REFERENCES

Joaquin Quinonero-Candela, Carl Edward Rasmussen, Fabian Sinz, Olivier Bousquet, and Bernhard Schölkopf. Evaluating predictive uncertainty challenge. In Machine Learning Challenges Workshop, pp. 1-27. Springer, 2005.

Hippolyt Ritter, Aleksandar Botev, and David Barber. A scalable laplace approximation for neural networks. 2018.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.

Burr Settles. Active learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 6(1):1-114, 2012.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Leslie N Smith and Nicholay Topin. Super-convergence: Very fast training of neural networks using large learning rates. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, volume 11006, pp. 1100612. International Society for Optics and Photonics, 2019.

Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. How to train deep variational autoencoders and probabilistic ladder networks. In 33rd International Conference on Machine Learning (ICML 2016), 2016.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Mattias Teye, Hossein Azizpour, and Kevin Smith. Bayesian uncertainty estimation for batch normalized deep networks. In International Conference on Machine Learning (ICML), 2018.

Marcin B Tomczak, Siddharth Swaroop, and Richard E Turner. Neural network ensembles and variational inference revisited. In 1st Symposium on Advances in Approximate Bayesian Inference, pp. 1-11, 2018.

Linh Tran, Bastiaan S Veeling, Kevin Roth, Jakub Swiatkowski, Joshua V Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Sebastian Nowozin, and Rodolphe Jenatton. Hydra: Preserving ensemble diversity for model distillation. arXiv preprint arXiv:2001.04694, 2020.

Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.

Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas B Schon. Evaluating model calibration in classification. arXiv preprint arXiv:1902.06977, 2019.

Kuan-Chieh Wang, Paul Vicol, James Lucas, Li Gu, Roger Grosse, and Richard Zemel. Adversarial distillation of bayesian neural network posteriors. In International Conference on Machine Learning, pp. 5177-5186, 2018.

Sida Wang and Christopher Manning. Fast dropout training. In International Conference on Machine Learning, pp. 118-126, 2013.

Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 681-688, 2011.

Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E Turner, José Miguel Hernández-Lobato, and Alexander L Gaunt. Deterministic variational inference for robust bayesian neural networks. 2018.

Sergey Zagoruyko. 92.45 on cifar-10 in torch, 2015. URL http://torch.ch/blog/2015/07/30/cifar.html.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.

Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.

Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, and Andrew Gordon Wilson. Cyclical stochastic gradient mcmc for bayesian deep learning. arXiv preprint arXiv:1902.03932, 2019.
Deep ensembles: Deep ensembles (Lakshminarayanan et al., 2017) average the predictions across networks trained independently starting from different initializations. To obtain a deep ensemble, we repeat the described procedure of training standard networks 128 times for all architectures on the CIFAR-10 and CIFAR-100 datasets (1024 networks overall) and 50 times for the ImageNet dataset. Every member of a deep ensemble was trained with exactly the same hyperparameters as conventional models of the same architecture.
Model | lr_init | epochs | wd
VGG | 0.05 | 400 | 5e-4
PreResNet110 | 0.1 | 300 | 3e-4
PreResNet164 | 0.1 | 300 | 3e-4
WideResNet28x10 | 0.1 | 300 | 5e-4

Table 1: Hyperparameters of models trained on CIFARs for single-model evaluation.
$$\mathrm{lr}(i) = \begin{cases} \mathrm{lr}_{\mathrm{init}}, & i \in [0,\ 0.5 \cdot \mathrm{epochs}] \\ \mathrm{lr}_{\mathrm{init}} \cdot \left(1.0 - 0.99 \cdot \dfrac{i/\mathrm{epochs} - 0.5}{0.4}\right), & i \in (0.5 \cdot \mathrm{epochs},\ 0.9 \cdot \mathrm{epochs}] \\ \mathrm{lr}_{\mathrm{init}} \cdot 0.01, & \text{otherwise} \end{cases} \quad (10)$$
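For clarity, the schedule of equation 10 as a function (a direct transcription, not the training script itself):

```python
def lr_schedule(i, lr_init, epochs):
    """Piecewise learning rate schedule of equation 10; i is the current epoch."""
    if i <= 0.5 * epochs:
        return lr_init
    if i <= 0.9 * epochs:
        return lr_init * (1.0 - 0.99 * (i / epochs - 0.5) / 0.4)
    return lr_init * 0.01
```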
On the ImageNet dataset we used ResNet50 with default hyperparameters taken from the PyTorch examples [6]. Specifically, we used SGD with momentum 0.9, a batch size of 256, an initial learning rate of 0.1, and weight decay 1e-4. Training included data augmentation [7] (scaling, random crops of size 224 × 224, horizontal flips), normalization, and the learning rate scheduler lr = lr_init · 0.1^(epoch // 30), where // denotes integer division. We only deviated from the standard parameters by increasing the number of training epochs from 90 to 130. Our models achieve a top-1 error of 23.81 ± 0.15, which closely matches the accuracy of the ResNet50 provided by PyTorch, 23.85 [8]. Training one model on a single NVIDIA Tesla V100 GPU takes approximately 5.5 days.
Dropout: Binary dropout (or MC dropout) (Srivastava et al., 2014; Gal & Ghahramani, 2016) is one of the most widely known ensembling techniques. It involves putting multiplicative Bernoulli noise with a parameter p over the activations of either a fully connected layer or a convolutional layer, and averaging the predictions of the network w.r.t. the noise at test time. Dropout layers were applied to the VGG and WideResNet networks on the CIFAR-10 and CIFAR-100 datasets. Dropout for VGG was applied to fully connected layers with p = 0.5; two dropout layers were used, one before the first fully connected layer and one before the second. While the original version of VGG for CIFARs (Zagoruyko, 2015) exploits more dropout layers, we observed that any additional dropout layer deteriorates the performance of the model in either the deterministic or the stochastic mode. Dropout for WideResNet was applied in accordance with the original paper (Zagoruyko & Komodakis, 2016) with p = 0.3. Dropout usually increases the time needed to achieve convergence; because of this, WideResNet networks with dropout were trained for 400 epochs instead of the 300 epochs of the deterministic case, and VGG networks were always trained with dropout. All other hyperparameters were the same as for the conventional models.
Table 2: Optimal noise scale for K-FAC Laplace for different datasets and architectures.
[Table 3, CIFAR-100 block: classification error and negative calibrated log-likelihood at 1, 5, 10, and 100 samples for the same methods on the VGG16, ResNet110, ResNet164, and WideResNet architectures.]

Table 3: Classification error and negative calibrated log-likelihood for different models and numbers of samples on CIFAR-10/100.

[Table 4, CIFAR-10 block: classification error and negative calibrated log-likelihood before vs. after test-time augmentation at 5, 10, and 100 samples for the same architectures and methods; ↓ marks a decrease of the metric (an improvement) from test-time augmentation, ↑ an increase, and ≈ no significant change. The full caption appears with the CIFAR-100 block below.]
Table 5: Deep ensemble equivalent score for CIFAR-10/100. [Entries: DEE per method and architecture at 1, 5, 10, 50, and 100 samples.]

Method | Error (%): 1 / 5 / 10 / 50 samples | Neg. calibrated log-likelihood: 1 / 5 / 10 / 50 samples
Fast Geometric Ens. | 23.71±0.00 / 23.61±0.00 / 23.56±0.00 / 23.28±0.00 | 0.929±0.000 / 0.921±0.000 / 0.916±0.000 / 0.904±0.000
Deep Ensembles | 23.79±0.14 / 21.19±0.14 / 20.90±0.08 / 20.63±NA | 0.935±0.007 / 0.823±0.002 / 0.805±0.000 / 0.788±NA
Single model | 23.86±0.20 / 23.86±0.20 / 23.86±0.20 / 23.86±0.20 | 0.938±0.006 / 0.938±0.006 / 0.938±0.006 / 0.938±0.006
Variational Inf. (FFG) | 24.50±0.06 / 23.82±0.03 / 23.77±0.04 / 23.67±0.00 | 0.957±0.001 / 0.927±0.000 / 0.923±0.001 / 0.920±0.000
KFAC-Laplace | 25.01±0.49 / 24.19±0.29 / 23.93±0.20 / 23.86±0.16 | 0.988±0.022 / 0.948±0.013 / 0.939±0.011 / 0.934±0.008
Snapshot Ensembles | 24.92±NA / 22.21±NA / 21.75±NA / 21.48±NA | 0.983±NA / 0.865±NA / 0.843±NA / 0.830±NA

Table 6: Classification error and negative calibrated log-likelihood for different numbers of samples on ImageNet (ResNet50).

Method | Error (%), before vs. after TTA: 5 / 10 / 50 samples | Neg. calibrated log-likelihood, before vs. after TTA: 5 / 10 / 50 samples
Fast Geometric Ens. | 23.61 vs 22.21 ↓ / 23.56 vs 21.37 ↓ / 23.28 vs 20.67 ↓ | 0.921 vs 0.894 ↓ / 0.916 vs 0.842 ↓ / 0.904 vs 0.793 ↓
Deep Ensembles | 21.19 vs 21.20 ≈ / 20.90 vs 20.16 ↓ / 20.63 vs 19.39 ↓ | 0.823 vs 0.855 ↑ / 0.805 vs 0.793 ↓ / 0.788 vs 0.739 ↓
Single model | 23.86 vs 22.39 ↓ / 23.86 vs 21.60 ↓ / 23.86 vs 21.06 ↓ | 0.938 vs 0.900 ↓ / 0.938 vs 0.851 ↓ / 0.938 vs 0.805 ↓
Variational Inf. (FFG) | 23.82 vs 22.58 ↓ / 23.77 vs 21.74 ↓ / 23.67 vs 21.10 ↓ | 0.927 vs 0.905 ↓ / 0.923 vs 0.851 ↓ / 0.920 vs 0.805 ↓
KFAC-Laplace | 24.19 vs 22.50 ↓ / 23.93 vs 21.67 ↓ / 23.86 vs 21.04 ↓ | 0.948 vs 0.906 ↓ / 0.939 vs 0.855 ↓ / 0.934 vs 0.809 ↓
Snapshot Ensembles | 22.21 vs 21.99 ↓ / 21.75 vs 20.81 ↓ / 21.48 vs 19.86 ↓ | 0.865 vs 0.879 ↑ / 0.843 vs 0.815 ↓ / 0.830 vs 0.763 ↓
Table 7: Classification error and negative calibrated log-likelihood before vs. after test-time augmentation on ImageNet.

Model | Method | DEE: 1 / 5 / 10 / 25 / 50 samples
ResNet50 | Deep Ensembles | 1.0 / 5.0 / 10.0 / 25.0 / 50.0
ResNet50 | Snapshot Ensembles | 1.0 / 2.2 / 3.2 / 4.0 / 4.2
ResNet50 | Fast Geometric Ens. | 1.1 / 1.2 / 1.3 / 1.4 / 1.5
ResNet50 | Variational Inf. (FFG) | 1.0 / 1.1 / 1.2 / 1.2 / 1.2
ResNet50 | KFAC-Laplace | 1.0 / 1.0 / 1.0 / 1.0 / 1.0
ResNet50 | Single model | 1.0 / 1.0 / 1.0 / 1.0 / 1.0

Table 8: Deep ensemble equivalent score for ImageNet.

Figure 14: Classification error and negative calibrated log-likelihood before vs. after test-time augmentation on CIFAR-10.
[Figure 14 panels: Top-1 error (%) and negative calibrated log-likelihood for the VGG16, ResNet110, ResNet164, and WideResNet architectures on CIFAR-10; methods on the x-axis: Single model, K-FAC-L, FFG VI, SWAG, FGE, cSGLD, SSE, DE; legend: central crop vs. test-time augmentation at ensemble sizes 10 and 50.]
[1] Source code: https://github.com/bayesgroup/pytorch-ensembles
[2] https://github.com/timgaripov/swa
[3] https://github.com/wjmaddox/swa_gaussian
[4] https://github.com/ruqizhang/csgmcmc/tree/master/experiments
[5] Compose([RandomHorizontalFlip(), RandomCrop(32, padding=4)])
[6] https://github.com/pytorch/examples/tree/ee964a2/imagenet
[7] Compose([RandomResizedCrop(224), RandomHorizontalFlip()])
[8] https://pytorch.org/docs/stable/torchvision/models.html
Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, and Dmitry Vetrov. Uncertainty estimation via stochastic batch normalization. In International Symposium on Neural Networks, pp. 261-269. Springer, 2019.

Anoop Korattikara Balan, Vivek Rathod, Kevin P Murphy, and Max Welling. Bayesian dark knowledge. In Advances in Neural Information Processing Systems, pp. 3438-3446, 2015.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.

Glenn W Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1-3, 1950.

Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. In Advances in Neural Information Processing Systems, pp. 8224-8234, 2018.

Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pp. 4754-4765, 2018.

Yufei Cui, Wuguannan Yao, Qiao Li, Antoni B Chan, and Chun Jason Xue. Accelerating monte carlo bayesian inference via approximating predictive uncertainty over simplex. arXiv preprint arXiv:1905.12194, 2019.

Yukun Ding, Jinglan Liu, Jinjun Xiong, and Yiyu Shi. Evaluation of neural network uncertainty estimation with application to resource-constrained platforms. arXiv preprint arXiv:1903.02050, 2019.

Yarin Gal. Uncertainty in deep learning. PhD thesis, University of Cambridge, 2016.
vs 0.909 ↓ Snapshot Ensembles 23.87 vs 23.88 ≈ 22.31 vs 22.50 ≈ 21.03 vs 21.04 ≈ 0.899 vs 0.905 ≈ 0.834 vs 0.836 ≈ 0.751 vs 0.750 ≈ SWA-Gaussian 22.31 vs 22.20 ≈ 21.52 vs 21.21 ≈ 20.69 vs 20.28 ↓ 0.781 vs 0.770 ↓ 0.745 vs 0.730 ↓ 0.701 vs 0.683 ↓ Cyclic SGLD 23.30 vs 22.32 ↓ 21.20 vs 20.43 ↓ 18.07 vs 17.67 ↓ 0.818 vs 0.787 ↓ 0.753 vs 0.725 ↓ 0.630 vs 0.616 ↓ Fast Geometric Ens. Yarin Gal, Zoubin Ghahramani, 785 ↓ 0.782 vs 0.759 ↓ 0.762 vs 0.731 ↓ KFAC-Laplace 21.77 vs 20.66 ↓ 21.29 vs 20.36 ↓ 21.03 vs 20.18 ↓ 0.813 vs 0.778 ↓ 0.792 vs 0.758 ↓ 0.772 vs 0.740 ↓ Snapshot Ensembles 21.92 vs 21.69 ↓ 20.27 vs 19.92 ↓ 17.68 vs 17.66 ≈ 0.789 vs 0.781 ↓ 0.729 vs 0.720 ↓ 0.634 vs 0.629 ↓ Dropout 19.41 vs 19.27 ≈ 19.36 vs 19.11 ≈ 19.22 vs 18.88 ↓ 0.768 vs 0.751 ↓ 0.760 vs 0.738 ↓ 0.751 vs 0.723 ↓ SWA-Gaussian 17.57 vs 17.79 ↑ 17.21 vs 17.27 ≈ 17.08 vs 16.89 ≈ 0.653 vs 0.658 ↑ 0.634 vs 0.635 ≈ 0.614 vs 0.610 ≈ Cyclic SGLD 19.42 vs 18.83 ↓ 17.88 vs 17.50 ↓ 16.29 vs 16.21 ↓ 0.713 vs 0.696 ↓ 0.654 vs 0.641 ↓ 0.583 vs 0.580 ↓ Fast Geometric Ens. 18.54 vs 18.39 ≈ 18.00 vs 17.84 ↓ 17.12 vs 16.93 ↓ 0.652 vs 0.649 ↓ 0.630 vs 0.624 ↓ 0.596 vs 0.592 ↓ WideResNet Deep Ensembles 16.55 vs 16.84 ↑ 16.17 vs 16.30 ≈ 15.77 vs 15.77 ≈ 0.623 vs 0.632 ↑ 0.595 vs 0.602 ↑ 0.571 vs 0.573 ↑ CIFAR-100 Single model 19.31 vs 18.83 ↓ 19.31 vs 18.80 ↓ 19.31 vs 18.72 ↓ 0.797 vs 0.755 ↓ 0.797 vs 0.746 ↓ 0.797 vs 0.738 ↓ Variational Inf. (FFG) 20.17 vs 20.12 ≈ 19.28 vs 19.20 ≈ 18.74 vs 18.54 ↓ 0.767Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning. vs 0.636 ↓ ResNet164 Deep Ensembles 17.53 vs 17.57 ≈ 16.90 vs 16.84 ≈ 16.50 vs 16.22 ↓ 0.647 vs 0.650 ≈ 0.615 vs 0.613 ≈ 0.574 vs 0.575 ↑ CIFAR-100 Single model 21.39 vs 20.60 ↓ 21.39 vs 20.39 ↓ 21.39 vs 20.23 ↓ 0.817 vs 0.772 ↓ 0.817 vs 0.761 ↓ 0.817 vs 0.751 ↓ Variational Inf. (FFG) 21.35 vs 21.06 ↓ 21.10 vs 20.54 ↓ 20.82 vs 19.97 ↓ 0.801 vs 0.. vs 0.766 ≈ 0.727 vs 0.724 ↓ 0.685 vs 0.679 ↓ KFAC-Laplace 19.76 vs 19.21 ↓ 19.53 vs 19.03 ↓ 19.43 vs 18.93 ↓ 0.803 vs 0.764 ↓ 0.795 vs 0.757 ↓ 0.789 vs 0.747 ↓ Snapshot Ensembles 18.20 vs 18.22 ≈ 17.12 vs 17.20 ≈ 16.07 vs 16.27 ≈ 0.678 vs 0.680 ≈ 0.633 vs 0.635 ≈ 0.582 vs 0.586 ↑ Table 4: Classification error and negative calibrated log-likelihood before vs. after test-time augmentation on CIFAR-10/100Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050-1059, 2016. Dropout 25.68 vs 24.37 ↓ 25.66 vs 23.89 ↓ 25.60 vs 23.41 ↓ 1.111 vs 0.999 ↓ 1.098 vs 0.960 ↓ 1.084 vs 0.911 ↓ SWA-Gaussian 24.53 vs 24.28 ≈ 23.64 vs 23.27 ↓ 22.97 vs 22.34 ↓ 0.931 vs 0.926 ≈ 0.879 vs 0.859 ↓ 0.826 vs 0.795 ↓ Cyclic SGLD 26.79 vs 25.66 ↓ 24.14 vs 23.45 ↓ 21.15 vs 21.04 ↓ 0.976 vs 0.929 ↓ 0.881 vs 0.848 ↓ 0.749 vs 0.740 ↓ Fast Geometric Ens. 25.35 vs 24.53 ↓ 24.68 vs 23.62 ↓ 22.78 vs 22.20 ↓ 0.965 vs 0.921 ↓ 0.930 vs 0.878 ↓ 0.827 vs 0.800 ↓ VGG16 Deep Ensembles 21.60 vs 21.90 ↑ 20.79 vs 21.03 ↑ 19.88 vs 20.23 ↑ 0.840 vs 0.865 ↑ 0.794 vs 0.811 ↑ 0.723 vs 0.731 ↑ CIFAR-100 Single model 25.44 vs 24.38 ↓ 25.44 vs 23.92 ↓ 25.44 vs 23.48 ↓ 1.087 vs 0.973 ↓ 1.087 vs 0.945 ↓ 1.087 vs 0.912 ↓ Variational Inf. 
(FFG) 25.24 vs 24.19 ↓ 24.85 vs 23.56 ↓ 24.56 vs 22.89 ↓ 1.001 vs 0.964 ↓ 0.973 vs 0.919 ↓ 0.939 vs 0.864 ↓ KFAC-Laplace 25.98 vs 24.53 ↓ 25.84 vs 24.03 ↓ 25.70 vs 23.57 ↓ 1.089 vs 0.989 ↓ 1.069 vs 0.949 ↓ 1.050 vs 0.909 ↓ Snapshot Ensembles 23.87 vs 23.88 ≈ 22.31 vs 22.50 ≈ 21.03 vs 21.04 ≈ 0.899 vs 0.905 ≈ 0.834 vs 0.836 ≈ 0.751 vs 0.750 ≈ SWA-Gaussian 22.31 vs 22.20 ≈ 21.52 vs 21.21 ≈ 20.69 vs 20.28 ↓ 0.781 vs 0.770 ↓ 0.745 vs 0.730 ↓ 0.701 vs 0.683 ↓ Cyclic SGLD 23.30 vs 22.32 ↓ 21.20 vs 20.43 ↓ 18.07 vs 17.67 ↓ 0.818 vs 0.787 ↓ 0.753 vs 0.725 ↓ 0.630 vs 0.616 ↓ Fast Geometric Ens. 21.22 vs 20.69 ↓ 20.79 vs 20.18 ↓ 19.64 vs 19.25 ↓ 0.729 vs 0.714 ↓ 0.713 vs 0.694 ↓ 0.679 vs 0.661 ↓ ResNet110 Deep Ensembles 18.30 vs 18.36 ≈ 17.59 vs 17.61 ≈ 16.97 vs 16.74 ↓ 0.675 vs 0.672 ≈ 0.638 vs 0.635 ↓ 0.594 vs 0.591 ↓ CIFAR-100 Single model 22.66 vs 21.37 ↓ 22.66 vs 21.17 ↓ 22.66 vs 20.98 ↓ 0.848 vs 0.797 ↓ 0.848 vs 0.786 ↓ 0.848 vs 0.775 ↓ Variational Inf. (FFG) 22.41 vs 21.67 ↓ 22.14 vs 21.21 ↓ 21.86 vs 20.77 ↓ 0.829 vs 0.799 ↓ 0.813 vs 0.775 ↓ 0.795 vs 0.748 ↓ KFAC-Laplace 22.87 vs 21.69 ↓ 22.41 vs 21.28 ↓ 22.14 vs 20.99 ↓ 0.858 vs 0.810 ↓ 0.836 vs 0.788 ↓ 0.812 vs 0.766 ↓ Snapshot Ensembles 22.83 vs 22.33 ↓ 21.13 vs 20.71 ↓ 18.48 vs 18.23 ≈ 0.820 vs 0.806 ↓ 0.761 vs 0.744 ↓ 0.662 vs 0.651 ↓ SWA-Gaussian 20.62 vs 20.61 ≈ 20.08 vs 20.08 ≈ 19.48 vs 19.33 ≈ 0.719 vs 0.715 ≈ 0.700 vs 0.690 ↓ 0.667 vs 0.654 ↓ Cyclic SGLD 22.37 vs 21.57 ↓ 20.23 vs 19.58 ↓ 17.13 vs 16.99 ≈ 0.790 vs 0.767 ↓ 0.722 vs 0.702 ↓ 0.606 vs 0.595 ↓ Fast Geometric Ens. 20.10 vs 19.82 ↓ 19.87 vs 19.39 ↓ 18.73 vs 18.33 ↓ 0.699 vs 0.689 ↓ 0.686 vs 0.671 ↓ 0.650 vs 0.636 ↓ ResNet164 Deep Ensembles 17.53 vs 17.57 ≈ 16.90 vs 16.84 ≈ 16.50 vs 16.22 ↓ 0.647 vs 0.650 ≈ 0.615 vs 0.613 ≈ 0.574 vs 0.575 ↑ CIFAR-100 Single model 21.39 vs 20.60 ↓ 21.39 vs 20.39 ↓ 21.39 vs 20.23 ↓ 0.817 vs 0.772 ↓ 0.817 vs 0.761 ↓ 0.817 vs 0.751 ↓ Variational Inf. (FFG) 21.35 vs 21.06 ↓ 21.10 vs 20.54 ↓ 20.82 vs 19.97 ↓ 0.801 vs 0.785 ↓ 0.782 vs 0.759 ↓ 0.762 vs 0.731 ↓ KFAC-Laplace 21.77 vs 20.66 ↓ 21.29 vs 20.36 ↓ 21.03 vs 20.18 ↓ 0.813 vs 0.778 ↓ 0.792 vs 0.758 ↓ 0.772 vs 0.740 ↓ Snapshot Ensembles 21.92 vs 21.69 ↓ 20.27 vs 19.92 ↓ 17.68 vs 17.66 ≈ 0.789 vs 0.781 ↓ 0.729 vs 0.720 ↓ 0.634 vs 0.629 ↓ Dropout 19.41 vs 19.27 ≈ 19.36 vs 19.11 ≈ 19.22 vs 18.88 ↓ 0.768 vs 0.751 ↓ 0.760 vs 0.738 ↓ 0.751 vs 0.723 ↓ SWA-Gaussian 17.57 vs 17.79 ↑ 17.21 vs 17.27 ≈ 17.08 vs 16.89 ≈ 0.653 vs 0.658 ↑ 0.634 vs 0.635 ≈ 0.614 vs 0.610 ≈ Cyclic SGLD 19.42 vs 18.83 ↓ 17.88 vs 17.50 ↓ 16.29 vs 16.21 ↓ 0.713 vs 0.696 ↓ 0.654 vs 0.641 ↓ 0.583 vs 0.580 ↓ Fast Geometric Ens. 18.54 vs 18.39 ≈ 18.00 vs 17.84 ↓ 17.12 vs 16.93 ↓ 0.652 vs 0.649 ↓ 0.630 vs 0.624 ↓ 0.596 vs 0.592 ↓ WideResNet Deep Ensembles 16.55 vs 16.84 ↑ 16.17 vs 16.30 ≈ 15.77 vs 15.77 ≈ 0.623 vs 0.632 ↑ 0.595 vs 0.602 ↑ 0.571 vs 0.573 ↑ CIFAR-100 Single model 19.31 vs 18.83 ↓ 19.31 vs 18.80 ↓ 19.31 vs 18.72 ↓ 0.797 vs 0.755 ↓ 0.797 vs 0.746 ↓ 0.797 vs 0.738 ↓ Variational Inf. (FFG) 20.17 vs 20.12 ≈ 19.28 vs 19.20 ≈ 18.74 vs 18.54 ↓ 0.767 vs 0.766 ≈ 0.727 vs 0.724 ↓ 0.685 vs 0.679 ↓ KFAC-Laplace 19.76 vs 19.21 ↓ 19.53 vs 19.03 ↓ 19.43 vs 18.93 ↓ 0.803 vs 0.764 ↓ 0.795 vs 0.757 ↓ 0.789 vs 0.747 ↓ Snapshot Ensembles 18.20 vs 18.22 ≈ 17.12 vs 17.20 ≈ 16.07 vs 16.27 ≈ 0.678 vs 0.680 ≈ 0.633 vs 0.635 ≈ 0.582 vs 0.586 ↑ Table 4: Classification error and negative calibrated log-likelihood before vs. after test-time augmentation on CIFAR-10/100. |
203,737,303 | SELF: LEARNING TO FILTER NOISY LABELS WITH SELF-ENSEMBLING | Deep neural networks (DNNs) have been shown to over-fit a dataset when being trained with noisy labels for a long enough time. To overcome this problem, we present a simple and effective method self-ensemble label filtering (SELF) to progressively filter out the wrong labels during training. Our method improves the task performance by gradually allowing supervision only from the potentially non-noisy (clean) labels and stops learning on the filtered noisy labels. For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs. We show that these ensemble estimates yield more accurate identification of inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch. While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss. We demonstrate the positive effect of such an approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios. It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures. | [
6212000
] | SELF: LEARNING TO FILTER NOISY LABELS WITH SELF-ENSEMBLING
Tam Duc Nguyen, Chaithanya Kumar Mummadi, Thi Phuong Nhung Ngo, Thi Hoai Phuong Nguyen, Laura Beggel, Thomas Brox
SELF: LEARNING TO FILTER NOISY LABELS WITH SELF-ENSEMBLING
Deep neural networks (DNNs) have been shown to over-fit a dataset when being trained with noisy labels for a long enough time. To overcome this problem, we present a simple and effective method self-ensemble label filtering (SELF) to progressively filter out the wrong labels during training. Our method improves the task performance by gradually allowing supervision only from the potentially non-noisy (clean) labels and stops learning on the filtered noisy labels. For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs. We show that these ensemble estimates yield more accurate identification of inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch. While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss. We demonstrate the positive effect of such an approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios. It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures.
1 INTRODUCTION
The acquisition of large quantities of high-quality human annotation is a frequent bottleneck in applying DNNs. There are two cheap but imperfect alternatives for collecting annotation at large scale: crowdsourcing from non-experts and web annotations, particularly for image data, where tags and online query keywords are treated as valid labels. Both alternatives typically introduce noisy (wrong) labels. While Rolnick et al. (2017) empirically demonstrated that DNNs can be surprisingly robust to label noise under certain conditions, Zhang et al. (2017) showed that DNNs have the capacity to memorize the data and will do so eventually when confronted with too many noisy labels. Consequently, training DNNs with traditional learning procedures on noisy data strongly deteriorates their ability to generalize, which is a severe problem. Hence, limiting the influence of label noise is of great practical importance.
A common approach to mitigate the negative influence of noisy labels is to eliminate them from the training data and train deep learning models only on the clean labels (Frénay & Verleysen, 2013). Employing semi-supervised learning can further counteract the noisy labels (Laine & Aila, 2016; Luo et al., 2018). However, the decision of which labels are noisy and which are not is decisive for learning robust models. Otherwise, unfiltered noisy labels still influence the (supervised) loss and hurt task performance, as in these previous works, which compute the loss on the entire label set and lack a mechanism to identify and filter out the erroneous labels.
In this paper, we propose a self-ensemble label filtering (SELF) framework that identifies potentially noisy labels during training and keeps the network from receiving supervision from the filtered noisy labels. This allows DNNs to gradually focus on learning from undoubtedly correct samples, even with an extreme level of noise in the labels (e.g., an 80% noise ratio), and leads to improved performance as the supervision becomes less noisy. The key idea of our work is to leverage the knowledge provided in the network's outputs over different training iterations to form a consensus of predictions (self-ensemble predictions) and to progressively identify and filter out the noisy labels from the labeled data; hence we call our approach self-ensemble label filtering (SELF). Our approach allows the computation of the supervised loss on cleaner subsets rather than on the entire noisy labeled data, as in previous works.
Our motivation stems from the observation that DNNs start learning from easy samples in the initial phases of training and gradually adapt to hard ones. When trained on wrongly labeled data, DNNs learn from clean labels with ease and receive inconsistent error signals from the noisy labels before over-fitting to the dataset. The network's predictions are likely to be consistent on clean samples and inconsistent, or strongly oscillating, on wrongly labeled samples over different training iterations. Based on this observation, we record the outputs of a single network at different training epochs and treat them as an ensemble of predictions obtained from different individual networks. We call these ensembles that evolve from a single network self-ensemble predictions. Subsequently, we identify the correctly labeled samples via the agreement between the provided label set and our running average of self-ensemble predictions. Samples whose ensemble predictions agree with the provided labels are likely to be consistent and are treated as clean samples.
Our training framework leverages the entire dataset, including the filtered-out erroneous samples, in the unsupervised loss. When learning under label noise, the model receives noisy updates and hence fluctuates strongly. To stabilize the model, we maintain a running-average model, as proposed by Tarvainen & Valpola (2017), a.k.a. the Mean Teacher model. This model ensemble provides a more stable supervisory signal than the noisy model snapshots.
In summary, our SELF framework stabilizes the training process and improves the generalization ability of DNNs. We evaluate the proposed technique on image classification tasks using CIFAR-10, CIFAR-100, and ImageNet. We demonstrate that our method consistently outperforms the existing approaches under asymmetric and symmetric noise at all noise levels, as shown in Fig. 1. Besides, SELF remains robust to the choice of network architecture. Our work is transferable to other tasks without the need to modify the architecture or the primary learning objective. SELF can also be seen as transforming the problem of learning from noisy labels into a semi-supervised learning task by successfully segregating the clean labels (labeled data) and the noisy labels (unlabeled data).
2 SELF-ENSEMBLE LABEL FILTERING

2.1 OVERVIEW

Fig. 2 shows an overview of our proposed approach. In the beginning, we assume that the labels of the training set are noisy. The model attempts to identify correct labels progressively by self-forming ensembles of models and predictions. Since wrong labels cause strong fluctuations in the model's predictions, using ensembles is a natural way to counteract noisy labels.

Figure 2: Overview of the self-ensemble label filtering (SELF) framework. The model starts in iteration 0 with training from the noisy label set. During training, the model maintains a self-ensemble, a running average of itself (Tarvainen & Valpola, 2017), to provide a stable learning signal. Additionally, the model collects a self-ensemble prediction (moving average) for the subsequent filtering. Once the best model is found, these predictions identify and filter out noisy labels using the original label set L0. The model performs this progressive filtering until no better model is found. For details see Algorithm 1.
Concretely, in each iteration, the model learns from a detected set of potentially correct labels and maintains a running average of model snapshots (realized by the Mean Teacher model of Tarvainen & Valpola (2017)). This ensemble model is evaluated on the entire dataset and provides an additional learning signal for training the single models. Additionally, our framework maintains the running average of the model's predictions for the filtering process. The model is trained until we find the best model w.r.t. the performance on the validation set (e.g., by early stopping). The set of correct labels is detected based on the strategy defined in Sec. 2.2. In the next iteration, we again use all data and the new filtered label set as input for model training. The iterative training procedure stops when no better model can be found. In the following, we give more details about the combination of this training and filtering procedure.
2.2 PROGRESSIVE LABEL FILTERING
Progressive detection of correctly labeled samples: Our framework, Self-Ensemble Label Filtering (Algorithm 1), focuses on the detection of certainly correct labels from the provided label set L0. In each iteration i, the model is trained using the label set Li of potentially correct labels. At the end of each iteration, the model determines the next label set Li+1 using the filtering strategy described below. The model stops learning when no improvement is achieved after training on the refined label set Li+1.
In other words, in each iteration, the model attempts to learn from the easy and, in some sense, obviously correct labels. However, learning from easy samples also improves the predictions for similar but harder samples from the same classes. Therefore, by learning from these easy samples, the network can gradually distinguish between hard and wrongly labeled samples.
Our framework does not focus on repairing all noisy labels. Although the detection of wrong labels is sometimes easy, finding their correct hidden label can be extremely challenging when there are many classes. If the noise is sufficiently random, the set of correct labels will be representative enough to achieve high model performance. Further, in our framework, the label filtering is performed on the original label set L0 from iteration 0, so clean labels erroneously removed in an earlier iteration (e.g., labels of hard-to-classify samples) can be reconsidered for model training in later iterations.
Algorithm 1 SELF: Self-Ensemble Label Filtering pseudocode
Require: D_train = noisy labeled training set
Require: D_val = noisy labeled validation set
Require: (x, y) = training stimuli and label
Require: α = ensembling momentum, 0 ≤ α ≤ 1
  i ← 0                                          ▷ counter to track iterations
  M_i ← train(D_train, D_val)                    ▷ initial Mean-Teacher ensemble model training
  M_best ← M_i                                   ▷ set initial model as best model
  z_i ← 0                                        ▷ initialize ensemble predictions of all samples (sample index omitted for simplicity)
  while acc(M_i, D_val) ≥ acc(M_best, D_val) do  ▷ iterate until no better model is found on D_val
    M_best ← M_i                                 ▷ save the best model
    D_filter ← D_train                           ▷ reset the filtered dataset to the initial label set
    i ← i + 1
    for (x, y) in D_filter do
      ẑ_i ← M_best(x)                            ▷ evaluate model output ẑ_i
      z_i ← α z_{i−1} + (1 − α) ẑ_i              ▷ accumulate ensemble predictions z_i
      if y ≠ argmax(z_i) then                    ▷ check agreement between ensemble prediction and label
        y ← ∅ in D_filter                        ▷ identify as noisy label and remove it from the label set
      end if
    end for
    M_i ← train(D_filter, D_val)                 ▷ train Mean-Teacher model on the filtered label set
  end while
  return M_best

Filtering strategy: The model determines the set of potentially correct labels Li based on the agreement between the label y and its maximum-likelihood prediction ŷ, with Li = {(y, x) | ŷ_x = y, ∀(y, x) ∈ L0}, where L0 is the label set provided in the beginning and (y, x) are the samples and their respective noisy labels in iteration i. In other words, a label is only used for supervised training if, in the current epoch, the model predicts the respective label to be the correct class with the highest likelihood. In practice, our framework does not use the ŷ(x) of single model snapshots for filtering but a moving average of the ensemble models and predictions, which improves the filtering decision.
SELF-ENSEMBLE LEARNING
The model's predictions for noisy samples tend to fluctuate. Take, for example, a cat wrongly labeled as a tiger. Other cat samples encourage the model to predict the given cat image as a cat; in contrast, the wrong label tiger regularly pulls the model back towards predicting a tiger. Hence, using the model's predictions from a single training epoch for filtering is sub-optimal. Therefore, in our framework SELF, the model relies on ensembles of models and predictions.

Model ensemble with Mean Teacher A natural way to form a model ensemble is by using an exponential running average of model snapshots (Fig. 3a). This idea was proposed by Tarvainen & Valpola (2017) for semi-supervised learning and is known as the Mean Teacher model. In our framework, both the mean teacher model and the normal model are evaluated on all data to preserve the consistency between both models. The consistency loss between student and teacher output distributions can be realized with a mean-squared-error loss or the Kullback-Leibler divergence. More details on training with the model ensemble can be found in Appendix A.1.
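The sketch below illustrates the core Mean Teacher machinery in PyTorch: an EMA copy of the student's weights plus a consistency loss between the two output distributions. The momentum value and the choice of MSE on softmax outputs are common conventions rather than values taken from the text; treat this as an illustrative sketch.

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    """The teacher is a detached copy of the student, updated only via EMA."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """teacher <- alpha * teacher + (1 - alpha) * student, per parameter."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(alpha).add_(ps, alpha=1 - alpha)

def consistency_loss(student_logits, teacher_logits):
    """MSE between the two softmax output distributions (KL is an alternative)."""
    return F.mse_loss(F.softmax(student_logits, dim=1),
                      F.softmax(teacher_logits, dim=1))
```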
Prediction ensemble Additionally, we propose to collect the sample predictions over multiple training epochs: $z_j^{(k)} = \alpha z_{j-1}^{(k)} + (1 - \alpha)\, \hat{z}_j^{(k)}$, where $z_j^{(k)}$ denotes the moving-average prediction of sample k at epoch j, $\alpha$ is a momentum, and $\hat{z}_j^{(k)}$ is the model prediction for sample k in epoch j.
This scheme is displayed in Fig. 3b. For each sample, we store the moving-average predictions accumulated over the past iterations. Besides providing a more stable basis for the filtering step, the procedure incurs only negligible memory and computation overhead.
Further, because each iteration continues training from the best model of the previous iteration, computation time is significantly reduced compared to re-training the model from scratch. On the newly filtered dataset, the model must only slowly adapt to the new noise ratio contained in the training set. Depending on the computation budget, a maximal number of filtering iterations can be set to save time.
3 RELATED WORKS

Reed et al. (2014) and Azadi et al. (2015) performed early work on learning robustly under label noise with deep neural networks. Recently, Rolnick et al. (2017) have shown for classification that deep neural networks come with a natural robustness to label noise following a particular random distribution. No modification of the network or the training procedure is required to achieve this robustness. Following this insight, our framework SELF relies on this natural robustness to kickstart the self-ensemble filtering process and extend the robust behavior to more challenging scenarios.
Laine & Aila (2016) and Luo et al. (2018) proposed applying semi-supervised techniques to the data to counteract noise. These and other semi-supervised learning techniques learn from a static, initial set of noisy labels and have no mechanism to repair labels. Therefore, the supervised losses in their learning objective are typically high until the model strongly overfits to the label noise. Compared to these works, our framework performs a variant of self-supervised label correction. The network learns from a dynamic, variable set of labels, which is determined by the network itself. Progressive filtering allows the network to (1) focus on a label set with a significantly lower noise ratio and (2) repair wrong decisions it made in an earlier iteration.
Other works assign weights to potentially wrong labels to reduce the learning signal (Jiang et al., 2017; Jenni & Favaro, 2018). These approaches tend to rely on less extreme weights or on hyperparameters that are hard to set. Since the typical classification loss is highly non-linear, a lower weight might still lead to learning from wrong labels. Compared to these works, the samples in SELF only receive extreme weights: either zero or one. Further, SELF focuses only on self-detecting the correct samples instead of repairing the wrong labels. Typically, the set of correct samples is much easier to detect and is sufficiently representative to achieve high performance. Han et al. (2018b) and Jiang et al. (2017) employ two collaborating and simultaneously learning networks to determine which samples to learn from. However, the second network is free in its predictions and hence hard to tune. Compared to these works, we use ensemble learning as a principled approach to counteract model fluctuations. In SELF, the second network is extremely restricted: it is only composed of running averages of the first network. To realize the second network, we use the Mean Teacher model (Tarvainen & Valpola, 2017) as a backbone. Compared to their work, our self-ensemble label filtering gradually detects the correct labels and learns from them, so the label set is variable. Further, we use not only model ensembles but also an ensemble of predictions to detect correct labels.
Other works modify the primary loss function of the classification task. Patrini et al. (2017) estimate the noise transition matrix to correct the loss, Han et al. (2018a) use human-in-the-loop supervision, and Zhang & Sabuncu (2018) and Thulasidasan et al. (2019) propose other forms of cross-entropy losses. The loss modification impedes the transfer of these ideas to tasks other than classification. Compared to these works, our framework SELF does not modify the primary loss. The progressive filtering procedure and self-ensemble learning remain applicable in other tasks to counteract noise.
4 EVALUATION
EXPERIMENT DESCRIPTIONS
STRUCTURE OF THE ANALYSIS
We evaluate our approach on CIFAR-10, CIFAR-100, and ImageNet-ILSVRC under different noise scenarios. For CIFAR-10, CIFAR-100, and ImageNet, we consider the typical situation with symmetric and asymmetric label noise. In the case of symmetric noise, a label is randomly flipped to another class with probability p. Following previous works, we also consider label flips between semantically similar classes on CIFAR-10, and pair-wise label flips on CIFAR-100. Finally, we perform studies on the choice of the network architecture and an ablation of the components in our framework. More experiments and results can be found in App. A.3.
COMPARISONS TO PREVIOUS WORKS
We compare our work to previous methods from Reed-Hard (Reed et al., 2014), S-model (Goldberger & Ben-Reuven, 2016), Wang et al. (2018), Rand. weights, Bi-level-model (Jenni & Favaro, 2018), MentorNet (Jiang et al., 2017), L_q (Zhang & Sabuncu, 2018), Trunc L_q (Zhang & Sabuncu, 2018), ForwardT (Patrini et al., 2017), Co-teaching (Han et al., 2018b), DAC (Thulasidasan et al., 2019), Random reweighting, and Learning to reweight (Ren et al., 2018). In the case of Co-teaching (Han et al., 2018b) and MentorNet (Jiang et al., 2017), the source code is available and hence used for evaluation. Learning to reweight (Ren et al., 2018) and DAC (Thulasidasan et al., 2019) considered the setting of having a small clean validation set of 1000 and 5000 images, respectively. For comparison purposes, we additionally experiment with a small clean set of 1000 images. Further, we abandon oracle experiments and methods using additional information to keep the evaluation comparable. For instance, ForwardT (Patrini et al., 2017) uses the true underlying confusion matrix to correct the loss; this information is neither known in typical scenarios nor used by other methods.
Whenever possible, we adopt the reported performance from the corresponding publications. The testing scenarios are kept as similar as possible to enable a fair comparison. All tested scenarios use a noisy validation set with the same noise distribution as the training set unless stated otherwise.
NETWORK CONFIGURATION AND TRAINING
For the basic training of the self-ensemble model, we use the Mean Teacher model (Tarvainen & Valpola, 2017) available on GitHub 1. The student and teacher networks are residual networks (He et al., 2016) with 26 layers and Shake-Shake regularization (Gastaldi, 2017). We use the PyTorch (Paszke et al., 2017) implementation of the network and keep the training settings close to Tarvainen & Valpola (2017). The network is trained with Stochastic Gradient Descent. In each filtering iteration, the model is trained for a maximum of 300 epochs, with a patience of 50 epochs. For more training details, see the appendix.
EXPERIMENT RESULTS
SYMMETRIC LABEL NOISE
CIFAR-10 and 100 Results for typical uniform noise scenarios with varying noise ratios on CIFAR-10 and CIFAR-100 are shown in Tab. 1. More results are visualized in Fig. 1a (CIFAR-10) and Fig. 1b (CIFAR-100). Our approach SELF performs robustly for noise ratios of up to 60% and outperforms previous works. Although a strong performance loss occurs at 80% label noise, SELF still outperforms most of the previous approaches. The experiment SELF*, which uses 1000 clean validation images, shows that this performance loss mostly originates from the progressive filtering relying too strongly on the extremely noisy validation set. Despite the weaker model, SELF (ResNext50) surpasses the best previously reported results by more than 5% absolute improvement. Even the significantly weaker ResNext18 model outperforms MentorNet, which is based on a more powerful ResNet101 network.
ASYMMETRIC LABEL NOISE
Tab. 2 shows more challenging noise scenarios in which the noise is neither class-symmetric nor uniform. Concretely, labels are flipped among semantically similar classes such as CAT and DOG on CIFAR-10. On CIFAR-100, each label is flipped to the next class with probability p. In these scenarios, our framework SELF also retains high performance and only shows a small performance drop at 40% noise. The high label-noise resistance of our framework indicates that the proposed self-ensemble filtering process helps the network identify correct samples, even under extreme noise ratios.
EFFECTS OF DIFFERENT ARCHITECTURES
Previous works utilize a variety of different architectures, which hinders a fair comparison. Tab. 3 shows the performance of our framework SELF compared to previous approaches. SELF outperforms other works in all scenarios except for CIFAR-10 with 80% noise. Typical robust learning approaches suffer significant accuracy losses already at 40% noise, while SELF still retains high performance. Further, note that SELF allows the network's performance to remain consistent across the different underlying architectures.

ABLATION STUDY

Table 5: Ablation study on CIFAR-10 and CIFAR-100. The Resnet baseline was trained on the full noisy label set. Adding progressive filtering improves over this baseline. The Mean Teacher maintains an ensemble of model snapshots, which helps counteract noise. Having progressive filtering and model ensembles (-MVA-pred.) makes the model more robust but still fails at 80% noise. The full SELF framework additionally uses the prediction ensemble for the detection of correct labels.

Tab. 5 shows the importance of each component in our framework. See Fig. 4a and Fig. 4b for experiments on more noise ratios. As expected, the Resnet baseline rapidly breaks down with increasing noise ratios. Adding self-supervised filtering increases the performance slightly at lower noise ratios; however, the model then has to rely on extremely noisy snapshots. In contrast, using a model ensemble alone, as in the Mean Teacher, can counteract noise on the noisy CIFAR-10 dataset. On the more challenging CIFAR-100, however, the performance decreases strongly. With self-supervised filtering and model ensembles, SELF (without MVA-pred.) is more robust and only loses performance at 80% noise. The last performance boost comes from using moving-average predictions, so that the network can gradually and reliably detect correctly labeled samples.
CONCLUSION
We propose a simple and easy-to-implement framework to train robust deep learning models under incorrect or noisy labels. We filter out training samples that are hard to learn (possibly noisy labeled samples) by leveraging an ensemble of the single network's predictions over different training epochs. Subsequently, we allow clean supervision from the non-hard samples and further leverage an additional unsupervised loss on the entire dataset. We show that our framework results in DNN models with superior generalization performance on CIFAR-10, CIFAR-100, and ImageNet, and outperforms all previous works under symmetric (uniform) and asymmetric noise. Furthermore, our models remain robust despite increasing noise ratios and changes in network architecture.
A APPENDIX
A.1 MEAN TEACHER MODEL FOR ITERATIVE FILTERING
We apply the Mean Teacher algorithm in each iteration i in the train(D_filter, D_val) procedure as follows.
• Input: examples with potentially clean labels D_filter from the filtering procedure. In the beginning (i = 0), D_filter refers to the entire labeled dataset.
• Initialize a supervised neural network as the student model M_i^s.
• Initialize the Mean Teacher model M_i^t as a copy of the student model with all weights detached.
• Let the loss function be the sum of the normal classification loss of M_i^s and the consistency loss between the outputs of M_i^s and M_i^t.
• Select an optimizer.
• In each training iteration:
  - Update the weights of M_i^s using the selected optimizer.
  - Update the weights of M_i^t as an exponential moving average of the student weights.
  - Evaluate the performance of M_i^s and M_i^t on D_val to verify the early-stopping criterion.
• Return the best M.

A.2.1 CIFAR-10 AND CIFAR-100

Dataset Tab. 6 shows the details of the CIFAR-10 and CIFAR-100 datasets in our evaluation pipeline. The validation set is contaminated with the same noise ratio as the training data unless stated otherwise.
Network training For training our model SELF, we use the standard configuration provided by Tarvainen & Valpola (2017) 3. Concretely, we use the SGD optimizer with Nesterov momentum (Sutskever et al., 2013), a learning rate of 0.05 with cosine learning rate annealing (Loshchilov & Hutter, 2016), a weight decay of 2e-4, a maximum of 300 epochs per filtering step, a patience of 50 epochs, and a total epoch count of 600.

Table 6: Dataset description. Classification tasks on CIFAR-10 and CIFAR-100 with uniform noise. Note that the noise on the training and validation set is not correlated. Hence, maximizing the accuracy on the noisy set provides a useful (but noisy) estimate for the generalization ability on unseen test data.

For the basic training of baseline models without semi-supervised learning, we had to set the learning rate to 0.01; with higher learning rates, the loss typically explodes. Every other option is kept the same.
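These hyperparameters map directly onto a standard PyTorch optimizer and scheduler setup; the sketch below is one plausible instantiation. The placeholder model and the momentum value 0.9 are our assumptions; the remaining values are taken from the text.

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder; SELF uses a 26-layer Shake-Shake ResNet

optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.05,            # 0.01 for baselines without semi-supervision
                            momentum=0.9,       # assumed value; the paper uses Nesterov momentum
                            nesterov=True,
                            weight_decay=2e-4)

# Cosine annealing over the (up to) 300 epochs of one filtering iteration
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
```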
Semi-supervised learning For the Mean Teacher training, additional hyperparameters are required. For both CIFAR-10 and CIFAR-100, we again take the standard configuration, with the consistency loss set to mean-squared-error, a consistency weight of 100.0, a logit distance cost of 0.01, and a consistency ramp-up of 5. The total batch size is 512, with 124 samples reserved for labeled samples and 388 for unlabeled data. Each epoch is defined as one complete pass over all unlabeled data. When training without semi-supervised learning, the entire batch is used for labeled data.

Figure 5: Simple training losses to counter label noise. (a) The prediction of a sample given a model; the red bar indicates the noisy label, blue the correct one, and arrows depict the magnitude of the gradients. (b) Typical loss reweighting schemes are not wrong but suffer from the vanishing gradient problem; non-linear losses such as the negative log-likelihood are not designed for gradient ascent. (c) Push-away-loss: as a simple baseline, we propose the intuitive push-away-loss to improve the gradients. We take this version as a strong baseline for a fair comparison.
Data augmentation The data are normalized to zero mean and unit variance. Further, we use real-time data augmentation with random translation and reflection, followed by a random horizontal flip. The standard PyTorch library provides these transformations.
A.2.2 IMAGENET-ILSVRC-2015
Network Training The networks used for evaluation were ResNet (He et al., 2016) and ResNeXt (Xie et al., 2017). All ResNeXt variants use a cardinality of 32 and a base width of 4 (32x4d). ResNeXt models follow the same structure as their ResNet counterparts, except for the cardinality and base width.
All other configurations are kept as close as possible to Tarvainen & Valpola (2017). The initial learning rate to handle large batches (Goyal et al., 2017) is set to 0.1; the base learning rate is 0.025 with a single cycle of cosine annealing.
Semi-supervised learning Due to the large images, the batch size is set to 40 in total, with 20/20 for labeled and unlabeled samples, respectively. We found that the Kullback-Leibler divergence leads to no meaningful network training. Hence, we set the consistency loss to mean-squared-error, with a weight of 1000. We use a consistency ramp-up of 5 epochs to give the mean teacher more time in the beginning. Weight decay is set to 5e-5; the patience is four epochs to stop training in the current filtering iteration.
Filtering We filter noisy samples with a top-k (k=5) strategy, instead of taking the maximum-likelihood (ML) prediction as on CIFAR-10 and CIFAR-100. That means a sample is kept for supervised training if its provided label lies within the top 5 predictions of the model. The main reason is that each ImageNet image might contain multiple objects; filtering with ML predictions is too strict and would lead to a low recall in detecting correct samples.
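A small sketch of this top-k agreement check (here in PyTorch, with hypothetical tensor names) could look as follows:

```python
import torch

def topk_label_filter(logits: torch.Tensor, labels: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Return a boolean mask: True where the provided label lies within the
    model's top-k predictions (keep for supervised training), False otherwise.
    logits: (N, num_classes), labels: (N,)."""
    topk = logits.topk(k, dim=1).indices             # (N, k) class indices
    return (topk == labels.unsqueeze(1)).any(dim=1)  # (N,) keep-mask

# Usage: keep = topk_label_filter(ensemble_logits, noisy_labels, k=5)
```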
Data Augmentation For all data, we normalize the RGB images with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). For training data, we perform a random rotation of up to 10 degrees, randomly resize and crop images to 224x224, and apply a random horizontal flip and random color jittering; this noise is needed for regular Mean Teacher training. The jitter settings are: brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1. The validation data are resized to 256x256 and randomly cropped to 224x224.
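In torchvision terms, this augmentation pipeline roughly corresponds to the following sketch. The specific transform classes are our assumption (e.g., RandomResizedCrop for "randomly resize"); the numeric values are taken from the text.

```python
from torchvision import transforms

normalize = transforms.Normalize(mean=(0.485, 0.456, 0.406),
                                 std=(0.229, 0.224, 0.225))

train_transform = transforms.Compose([
    transforms.RandomRotation(10),            # up to 10 degrees
    transforms.RandomResizedCrop(224),        # randomly resize/crop to 224x224
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4,
                           saturation=0.4, hue=0.1),
    transforms.ToTensor(),
    normalize,
])

val_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
    normalize,
])
```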
A.2.3 SEMI-SUPERVISED LOSSES
For the learning of wrongly labeled samples, Fig. 5 shows the relationship between the typical reweighting scheme and our baseline push-away-loss. Typically, reweighting is applied directly to the losses with sample weights $w_i^{(k)}$ for each sample k, as shown in Eq. 1:
$$\min \; w_i^{(k)} \, \mathrm{NLL}\!\left(y_{\text{label}}^{(k)} \mid x^{(k)}, D\right) \quad (1)$$
Here, $D$ is the dataset, and $x^{(k)}$ and $y_{\text{label}}^{(k)}$ are sample k and its noisy label. $w_i^{(k)}$ is the weight of sample k at step i. Negative sample weights $w_i^{(k)}$ are often assigned to push the network away from the wrong labels. Let $w_i^{(k)} = -c_i^{(k)}$ with $c_i^{(k)} > 0$; then we have:

$$\min \; -c_i^{(k)} \, \mathrm{NLL}\!\left(y_{\text{label}}^{(k)} \mid x^{(k)}, D\right) \quad (2)$$
which results in:

$$\max \; c_i^{(k)} \, \mathrm{NLL}\!\left(y_{\text{label}}^{(k)} \mid x^{(k)}, D\right) \quad (3)$$
In other words, we perform gradient ascent for wrongly labeled samples. However, the negative log-likelihood is not designed for gradient ascent, so the gradients of wrongly labeled samples vanish if the prediction is too close to the noisy label. This effect is similar to the training of Generative Adversarial Networks (GANs) (Goodfellow et al.). In the GAN framework, the generator loss is not simply set to the negated version of the discriminator's loss for the same reason. Therefore, to provide a fair comparison with our framework, we suggest the push-away-loss $L_{\text{Push-away}}(y_{\text{label}}^{(k)}, x^{(k)}, D)$ with improved gradients as follows:
$$\min \; \frac{1}{|Y| - 1} \sum_{y' \neq y_{\text{label}}^{(k)}} c_i^{(k)} \, \mathrm{NLL}\!\left(y' \mid x^{(k)}, D\right) \quad (4)$$
where $Y$ is the set of all classes in the training set. This loss has improved gradients to push the model away from the potentially wrong labels.
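A minimal PyTorch sketch of Eq. 4 is shown below; it averages the NLL over all classes except the (noisy) given label. Folding the per-sample weights $c_i^{(k)}$ into a single scalar `c` is a simplification of ours.

```python
import torch
import torch.nn.functional as F

def push_away_loss(logits: torch.Tensor, noisy_labels: torch.Tensor,
                   c: float = 1.0) -> torch.Tensor:
    """Eq. 4: average NLL over all classes y' != noisy label, pushing the
    model away from the given (potentially wrong) label.
    logits: (N, C), noisy_labels: (N,)."""
    log_probs = F.log_softmax(logits, dim=1)            # (N, C)
    num_classes = logits.size(1)
    mask = F.one_hot(noisy_labels, num_classes).bool()  # True at the noisy label
    nll_others = -log_probs.masked_fill(mask, 0.0)      # zero out the given label
    return c * nll_others.sum(dim=1).div(num_classes - 1).mean()
```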
Entropy minimization The typical entropy loss for semi-supervised learning is shown in Fig. 6. It encourages the model to provide extreme predictions (such as 0 or 1) for each sample. Over a large number of samples, the model should balance its predictions over all classes.
The entropy loss can easily be applied to all samples to express the uncertainty about the provided labels. Alternatively, the loss can be combined with a strict filtering strategy, as in our work, which removes the labels of potentially wrongly labeled samples.
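As a reference, the two entropy terms illustrated in Fig. 6 can be sketched as follows; how the two terms are weighted against each other is our assumption and left to the caller.

```python
import torch
import torch.nn.functional as F

def entropy_losses(logits: torch.Tensor):
    """(a) Per-sample entropy: minimizing it encourages extreme, confident
    predictions. (b) Batch-balance term: minimizing it maximizes the entropy
    of the mean prediction, balancing predictions over classes."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    per_sample = -(probs * log_probs).sum(dim=1).mean()    # minimize this
    mean_probs = probs.mean(dim=0)
    batch_balance = (mean_probs * mean_probs.log()).sum()  # minimize = maximize entropy
    return per_sample, batch_balance
```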
For a large noise ratio, the predictions of wrongly labeled samples fluctuate strongly over training iterations. Amplifying these network decisions could lead to even noisier models. Combined with iterative filtering, the framework would then have to rely on a single noisy model snapshot.
In the case of an unsuitable snapshot, the filtering step will make many wrong decisions.
A.3 MORE EXPERIMENT RESULTS
A.3.1 COMPLETE REMOVAL OF SAMPLES
Tab. 7 shows the results of deleting samples from the training set. This leads to significant performance gaps compared to our strategy (SELF), which treats the removed samples as unlabeled data. In the case of a considerable label noise of 80%, the gap is close to 9%.
Continuing to use the filtered samples leads to significantly better results: the unsupervised loss provides meaningful learning signals, which should be exploited for better model training.
A.3.2 SEMI-SUPERVISED LEARNING FOR PROGRESSIVE FILTERING
Tab. 8 shows different semi-supervised learning strategies with and without iterative filtering. The push-away-loss corresponds to assigning negative weights to potentially noisy labels. The entropy loss minimizes the network's uncertainty on a set of samples. Since our labels are all potentially noisy, it is meaningful to apply this loss to all training samples instead of removed samples only. Hence we compare both variants. The Mean-teacher loss is always applied to all samples (details in the appendix).
Without filtering: Learning from the entropy loss performs second-best when the uncertainty is minimized on all samples. Without a previous filtering step, there is no set of unlabeled samples on which to perform traditional semi-supervised learning. The Mean Teacher performs best, since the teacher represents a stable model state aggregated over multiple iterations.
With filtering: Applying the entropy loss to all samples or only to the unsupervised samples leads to very similar performance. Both are better than the standard push-away-loss. Our Mean Teacher achieves by far the best performance, due to the temporal ensemble of models and sample predictions for filtering.

A.3.3 SAMPLE TRAINING PROCESS

Fig. 7 shows sample training processes of SELF under 60% and 80% noise on CIFAR-100. The mean teacher always outperforms the student models. Further, note that regular training leads to rapid over-fitting to the label noise. In contrast, with our effective filtering strategy, both models slowly increase their performance while the training accuracy approaches 100%. Hence, by using progressive filtering, our model can erase the inconsistency in the provided label set.

Figure 7: Sample training curves of our approach SELF on CIFAR-100 with (a) 60% and (b) 80% noise, using noisy validation data. Note that with our approach, the training loss remains close to 0. Further, note that the mean teacher continuously outperforms the noisy student models. This shows the positive effect of temporal ensembling to counter label noise.
Figure 1: Comparing the performance of SELF with previous works for learning under different (symmetric) label noise ratios on the (a) CIFAR-10 and (b) CIFAR-100 datasets. SELF retains higher robust classification accuracy at all noise levels.
Figure 3: Maintaining the (a) model and (b) prediction ensembles is very effective against noisy model updates. These ensembles are self-forming during the training process as a moving average of (a) model snapshots or (b) class predictions from previous training steps.
Figure 4: Ablation study on the importance of the components in our framework SELF, evaluated on (a) CIFAR-10 and (b) CIFAR-100 with uniform noise. Please refer to Tab. 5 for details of the components.
Figure 6: The entropy loss for semi-supervised learning. (a) Extreme predictions such as [0, 1] are encouraged by minimizing the entropy of each prediction. (b) Additionally, maximizing the entropy of the mean prediction over the entire dataset or a large batch forces the model to balance its predictions over multiple samples.
Table 1: Comparison of classification accuracy when learning under uniform label noise on CIFAR-10 and CIFAR-100. Following previous works, we compare two evaluation scenarios: with a noisy validation set (top) and with 1000 clean validation samples (bottom). The best model is marked in bold. Having a small clean validation set improves the model but is not necessary.
Table 2: Asymmetric noise on CIFAR-10 and CIFAR-100. All methods use Resnet34. CIFAR-10: flip TRUCK → AUTOMOBILE, BIRD → AIRPLANE, DEER → HORSE, CAT ↔ DOG with probability p. CIFAR-100: flip class i to (i+1)%100 with probability p. Numbers are adopted from Zhang & Sabuncu (2018). SELF retains high performance across all noise ratios and outperforms all previous works.

                  CIFAR-10                        CIFAR-100
NOISE RATIO       10%    20%    30%    40%        10%    20%    30%    40%
CCE               90.69  88.59  86.14  80.11      66.54  59.20  51.40  42.74
MAE               82.61  52.93  50.36  45.52      13.38  11.50  08.91  08.20
FORWARDT          90.52  89.09  86.79  83.55      45.96  42.46  38.13  34.44
Lq                90.91  89.33  85.45  76.74      68.36  66.59  61.45  47.22
TRUNC Lq          90.43  89.45  87.10  82.28      68.86  66.59  61.87  47.66
SELF (OURS)       93.75  92.76  92.42  89.07      72.45  70.53  65.09  53.83
Table 3: Effect of the architecture on classification accuracy on CIFAR-10 and CIFAR-100 with uniform label noise. SELF is compatible with all tested architectures.

                        CIFAR-10          CIFAR-100
NOISE                   40%     80%       40%     80%
RESNET101
  MENTORNET             89.00   49.00     68.00   35.00
  CO-T.                 62.58   21.79     39.58   16.79
  SELF                  92.77   64.52     69.00   39.73
WIDE RESNET 28-10
  MENTORNET             88.7    46.30     67.50   30.10
  REWEIGHT              86.02   -         58.01   -
  SELF                  93.34   67.41     72.48   42.06
RESNET34
  Lq                    87.13   64.07     61.77   29.16
  TRUNC Lq              87.62   67.92     62.64   29.60
  FORWARDT              83.25   54.64     31.05   8.90
  SELF                  91.13   63.59     66.71   35.56
RESNET26
  CO-T.                 81.85   29.22     55.95   23.22
  SELF                  93.70   69.91     71.98   42.09
Table 4: Classification accuracy on the clean ImageNet validation dataset. The models are trained at 40% label noise, and the best model is picked based on the evaluation on noisy validation data. MentorNet shows the best previously reported results. Mentornet* is based on ResNet-101; we chose the smaller ResNext50 model to reduce the run-time.

              ResNext18          ResNext50
Accuracy      P@1     P@5       P@1     P@5
Mentornet*    -       -         65.10   85.90
ResNext       50.6    75.99     56.25   80.90
Mean-T.       58.04   81.82     62.96   85.72
SELF (Ours)   66.92   86.65     71.31   89.92
Table 7: Accuracy of the complete removal of samples during iterative filtering on CIFAR-10 and CIFAR-100. The underlying model is the Mean Teacher based on Resnet26. When samples are completely removed from the training set, they are no longer used for either supervised or unsupervised learning. This common strategy from previous works leads to a rapid performance breakdown.

                            CIFAR-10          CIFAR-100
NOISE RATIO                 40%     80%       40%     80%
USING NOISY DATA ONLY
  DATA REMOVAL              93.4    59.98     68.99   35.53
  SELF (OURS)               93.7    69.91     71.98   42.09
WITH CLEAN VALIDATION SET
  COMPL. REMOVAL            94.39   70.93     71.86   36.61
  SELF (OURS)               95.1    79.93     74.76   46.43
Table 8: Analysis of semi-supervised learning strategies. Push-away-loss, entropy-loss, and mean-teacher-loss are separately considered. The best two performances from the respective category are marked. The entropy loss is further applied either to all training samples or only to the samples removed in earlier filtering steps.
https://github.com/CuriousAI/mean-teacher
Samaneh Azadi, Jiashi Feng, Stefanie Jegelka, and Trevor Darrell. Auxiliary image regularization for deep CNNs with noisy labels. arXiv preprint arXiv:1511.07069, 2015.
Benoît Frénay and Michel Verleysen. Classification in the presence of label noise: a survey. IEEE Transactions on Neural Networks and Learning Systems, 25(5):845-869, 2013.
Xavier Gastaldi. Shake-shake regularization. arXiv preprint arXiv:1705.07485, 2017.
Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, and Masashi Sugiyama. Masking: A new perspective of noisy supervision. In Advances in Neural Information Processing Systems, pp. 5836-5846, 2018a.
Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NeurIPS, pp. 8535-8545, 2018b.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Simon Jenni and Paolo Favaro. Deep bilevel learning. In ECCV, 2018.
Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. arXiv:1712.05055 [cs], December 2017. URL http://arxiv.org/abs/1712.05055.
Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
Yucen Luo, Jun Zhu, Mengxi Li, Yong Ren, and Bo Zhang. Smooth neighbors on teacher graphs for semi-supervised learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8896-8905, 2018.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1944-1952, 2017.
Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014.
Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. Learning to reweight examples for robust deep learning. arXiv:1803.09050 [cs, stat], March 2018. URL http://arxiv.org/abs/1803.09050.
David Rolnick, Andreas Veit, Serge Belongie, and Nir Shavit. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017.
Ilya Sutskever, James Martens, George E. Dahl, and Geoffrey E. Hinton. On the importance of initialization and momentum in deep learning. In ICML, pp. 1139-1147, 2013.
Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems, pp. 1195-1204, 2017.
Sunil Thulasidasan, Tanmoy Bhattacharya, Jeff Bilmes, Gopinath Chennupati, and Jamal Mohd-Yusof. Combating label noise in deep learning using abstention. arXiv preprint arXiv:1905.10964, 2019.
Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, and Shu-Tao Xia. Iterative learning with open-set noisy labels. arXiv preprint arXiv:1804.00092, 2018.
Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1492-1500, 2017.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Sy8gdB9xx.
Zhilu Zhang and Mert Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in Neural Information Processing Systems, pp. 8778-8788, 2018. |
222,208,810 | ALFWORLD: ALIGNING TEXT AND EMBODIED ENVIRONMENTS FOR INTERACTIVE LEARNING | Given a simple request (e.g., Put a washed apple in the kitchen fridge), humans can reason in purely abstract terms by imagining action sequences and scoring their likelihood of success, prototypicality, and efficiency, all without moving a muscle. Once we see the kitchen in question, we can update our abstract plans to fit the scene. Embodied agents require the same abilities, but existing work does not yet provide the infrastructure necessary for both reasoning abstractly and executing concretely. We address this limitation by introducing ALFWorld, a simulator that enables agents to learn abstract, text-based policies in TextWorld (Côté et al., 2018) and then execute goals from the ALFRED benchmark (Shridhar et al., 2020) in a rich visual environment. ALFWorld enables the creation of a new BUTLER agent whose abstract knowledge, learned in TextWorld, corresponds directly to concrete, visually grounded actions. In turn, as we demonstrate empirically, this fosters better agent generalization than training only in the visually grounded environment. BUTLER's simple, modular design factors the problem to allow researchers to focus on models for improving every piece of the pipeline (language understanding, planning, navigation, visual scene understanding, and so forth). Data, code, and videos are available at: alfworld.github.io arXiv:2010.03768v1 [cs.CL] | [
969555,
210911499,
5590763
] | ALFWORLD: ALIGNING TEXT AND EMBODIED ENVIRONMENTS FOR INTERACTIVE LEARNING
Mohit Shridhar
Xingdi Yuan
Marc-Alexandre Côté
Yonatan Bisk
Adam Trischler
Matthew Hausknecht
University of Washington ♡ Microsoft Research Montréal ‡ Carnegie Mellon University ♣ Microsoft Research
ALFWORLD: ALIGNING TEXT AND EMBODIED ENVIRONMENTS FOR INTERACTIVE LEARNING
Given a simple request (e.g., Put a washed apple in the kitchen fridge), humans can reason in purely abstract terms by imagining action sequences and scoring their likelihood of success, prototypicality, and efficiency, all without moving a muscle.Once we see the kitchen in question, we can update our abstract plans to fit the scene. Embodied agents require the same abilities, but existing work does not yet provide the infrastructure necessary for both reasoning abstractly and executing concretely. We address this limitation by introducing ALFWorld, a simulator that enables agents to learn abstract, text-based policies in TextWorld(Côté et al., 2018)and then execute goals from the ALFRED benchmark (Shridhar et al., 2020) in a rich visual environment. ALFWorld enables the creation of a new BUTLER agent whose abstract knowledge, learned in TextWorld, corresponds directly to concrete, visually grounded actions. In turn, as we demonstrate empirically, this fosters better agent generalization than training only in the visually grounded environment. BUTLER's simple, modular design factors the problem to allow researchers to focus on models for improving every piece of the pipeline (language understanding, planning, navigation, visual scene understanding, and so forth). Data, code, and videos are available at: alfworld.github.io arXiv:2010.03768v1 [cs.CL]
INTRODUCTION
Figure 1 (panel titles): TextWorld | Embodied. TextWorld interaction transcript shown below:
Welcome! You are in the middle of the room. Looking around you, you see a diningtable, a stove, a microwave, and a cabinet.
Your task is to: Put a pan on the diningtable.
> goto the cabinet
You arrive at the cabinet. The cabinet is closed.
> open the cabinet
The cabinet is empty.
> goto the stove
You arrive at the stove. Near the stove, you see a pan, a pot, a bread loaf, a lettuce, and a winebottle.
> take the pan from the stove
You take the pan from the stove.
> goto the diningtable
You arrive at the diningtable.
> put the pan on the diningtable
You put the pan on the diningtable.

Consider helping a friend prepare dinner in an unfamiliar house: when your friend asks you to clean and slice an apple for an appetizer, how would you approach the task? Intuitively, one could reason abstractly: (1) find an apple (2) wash the apple in the sink (3) put the clean apple on the cutting board (4) find a knife (5) slice the apple with the knife (6) put the slices in a bowl. Even in an unfamiliar setting, abstract reasoning can help accomplish the goal by leveraging semantic priors. Priors like locations of objects: apples are commonly found in the kitchen, as are implements for cleaning and slicing; object affordances: a sink is useful for washing an apple, a refrigerator is not; pre-conditions: better to wash an apple before slicing it, rather than the converse. We hypothesize that learning to solve tasks using abstract language, unconstrained by the particulars of the physical world, enables agents to complete embodied tasks in novel environments by leveraging the kinds of semantic priors that are exposed by abstraction.
To test this hypothesis, we have created the novel ALFWorld framework, the first interactive, parallel environment that aligns text descriptions and commands with physically embodied robotic simulation. We build ALFWorld by extending two prior works: TextWorld (Côté et al., 2018), an engine for interactive text-based games, and ALFRED (Shridhar et al., 2020), a large-scale dataset for vision-language instruction following in embodied environments. ALFWorld provides two views of the same underlying world and two modes by which to interact with it: TextWorld, an abstract, text-based environment, generates textual observations of the world and responds to high-level text actions; ALFRED, the embodied simulator, renders the world in high-dimensional images and responds to low-level physical actions as from a robot (Figure 1). 1 Unlike prior work on instruction following (MacMahon et al., 2006; Anderson et al., 2018a), which typically uses a fixed corpus of cross-modal expert demonstrations, we argue that aligned parallel environments like ALFWorld offer a distinct advantage: they allow agents to explore, interact, and learn in the abstract environment of language before encountering the complexities of the embodied environment.
While fields such as robotic control use simulators like MuJoCo (Todorov et al., 2012) to provide infinite data through interaction, there has been no analogous mechanism -short of hiring a human around the clock -for providing linguistic feedback and annotations to an embodied agent. TextWorld addresses this discrepancy by providing programmatic and aligned linguistic signals during agent exploration. This facilitates the first work, to our knowledge, in which an embodied agent learns the meaning of complex multi-step policies, expressed in language, directly through interaction.
Empowered by the ALFWorld framework, we introduce BUTLER (Building Understanding in Textworld via Language for Embodied Reasoning), an agent that first learns to perform abstract tasks in TextWorld using Imitation Learning (IL) and then transfers the learned policies to embodied tasks in ALFRED. When operating in the embodied world, BUTLER leverages the abstract understanding gained from TextWorld to generate text-based actions; these serve as high-level subgoals that facilitate physical action generation by a low-level controller. Broadly, we find that BUTLER is capable of generalizing in a zero-shot manner from TextWorld to unseen embodied tasks and settings. Our results show that training first in the abstract text-based environment is not only 7× faster, but also yields better performance than training from scratch in the embodied world. These results lend credibility to the hypothesis that solving abstract language-based tasks can help build priors that enable agents to generalize to unfamiliar embodied environments.
Our contributions are as follows:
§ 2 ALFWorld environment: The first parallel interactive text-based and embodied environment.
§ 3 BUTLER architecture: An agent that learns high-level policies in language that transfer to low-level embodied executions, and whose modular components can be independently upgraded.
§ 4 Generalization: We demonstrate empirically that BUTLER, trained in the abstract text domain, generalizes better to the embodied setting than agents trained from corpora of demonstrations or from scratch in the embodied world.

Table 1: Six ALFRED task types with heldout seen and unseen evaluation sets.
ALIGNING ALFRED AND TEXTWORLD
The ALFRED dataset (Shridhar et al., 2020), set in the THOR simulator (Kolve et al., 2017), is a benchmark for learning to complete embodied household tasks using natural language instructions and egocentric visual observations. ALFRED involves a wide variety of 3D interactive environments and compositional tasks. As shown in Figure 1 (right), ALFRED tasks pose challenging interaction and navigation problems to an agent in a high-fidelity simulated environment. Tasks come annotated with a goal instruction that describes the objective (e.g., "put a pan on the dining table"). The dataset provides both template-based and human-annotated goals (see Appendix E). Agents observe the world through high-dimensional pixel images and interact using low-level action primitives: MOVEAHEAD, ROTATELEFT/RIGHT, LOOKUP/DOWN, PICKUP, PUT, OPEN, CLOSE, and TOGGLEON/OFF.
While ALFRED also provides low-level step-by-step language instructions on how to complete a particular goal, we tackle the challenge of completing tasks with only high-level goal descriptions. This task is harder than the instruction-following challenge posed in ALFRED, since the agent begins without any information about object locations or a sequential plan for solving the task.
Our aligned ALFWorld framework adopts six ALFRED task-types (Table 1) of various difficulty levels. 2 These typically involve first finding a particular object, which often requires the agent to open and search receptacles like drawers or cabinets. Subsequently, all tasks other than Pick & Place require some interaction with the object such as heating (place object in microwave and start it) or cleaning (wash object in a sink). To conclude, the object must be placed in the designated location.
Within each task category there is significant variation: the embodied environment includes 120 rooms (30 kitchens, 30 bedrooms, 30 bathrooms, 30 living rooms), each dynamically populated with a set of portable objects (e.g., apple, mug), and static receptacles (e.g., microwave, fridge). For each task type we construct a larger train set, as well as seen and unseen validation evaluation sets:
(1): seen consists of known task tuples {task-type, object, receptacle, room} in rooms seen during training, but with different instantiations of object locations, quantities, and visual appearances (e.g. two blue pencils on a shelf instead of three red pencils in a drawer seen in training).
(2): unseen consists of new task tuples with known or unknown object-receptacle pairs, but always in an unseen room with different receptacles and scene layouts than in training tasks.
The seen set is designed to measure in-distribution generalization, whereas the unseen set measures out-of-distribution generalization. The scenes in ALFRED are visually diverse, so even the same task tuple can lead to very distinct tasks, e.g., involving differently colored apples, shaped statues, or textured cabinets. For this reason, purely vision-based agents often struggle to generalize to unseen environments and objects (see unimodal baselines in Section 5).
The TextWorld framework (Côté et al., 2018) procedurally generates text-based environments for training and evaluating language-based agents. We extend TextWorld to create text-based analogs of each ALFRED environment. Aligning text and embodied environments necessitates a common latent structure representing the state of the simulated world. ALFWorld uses PDDL, the Planning Domain Definition Language (McDermott et al., 1998), to describe each scene from ALFRED and to construct an equivalent text game using the TextWorld engine. The dynamics of each game are defined by the PDDL domain (see Appendix C for additional details). We generate text that serves as a stand-in for visual observations by filling templates sampled from a context-sensitive grammar designed for the ALFRED environments. For interaction, TextWorld environments use the following high-level actions:

goto {recep}                take {obj} from {recep}     put {obj} in/on {recep}
open {recep}                close {recep}               toggle {obj} {recep}
clean {obj} with {recep}    heat {obj} with {recep}     cool {obj} with {recep}

where {obj} and {recep} correspond to objects and receptacles. Note that heat, cool, clean, and goto are high-level actions that correspond to several low-level embodied actions.
Since TextWorld is an abstract representation of the world, transferring a TextWorld-trained agent to an embodied setting involves dealing with some domain gaps. For example, it is not possible to place objects inside a receptacle that is already full. Similarly, the physical size of objects and receptacles must be respected: it is not possible to put a large pot inside the microwave. The agent is also subject to visual challenges like occluded objects, misdetections, and inaccurate object relations.

3.1 BUTLER::BRAIN (TEXT AGENT)

BUTLER::BRAIN is a novel text-based game agent that generates high-level text actions in a token-by-token fashion, akin to Natural Language Generation (NLG) approaches for dialogue (Sharma et al., 2017) and summarization (Gehrmann et al., 2018). An overview of the agent's architecture is shown in Figure 3. At game step t, the encoder takes the initial text observation o_0, the current observation o_t, and the goal description g as input and generates a context-aware representation of the current observable game state. Here o_0 explicitly lists all the navigable receptacles in the scene. Since games are partially observable, the agent only has access to the observation describing the effects of its previous action and its present location. Therefore, we incorporate two memory mechanisms to imbue the agent with history: (1) a recurrent aggregator, adapted from Yuan et al. (2018), combines the encoded state with the recurrent state h_{t−1} from the previous game step;
(2) an observation queue feeds in the k most recent, unique textual observations. The decoder generates an action sentence a_t token-by-token to interact with the game. The encoder and decoder are based on a Transformer Seq2Seq model with a pointer softmax mechanism (Gulcehre et al., 2016). When playing a game, an agent might get stuck at certain states due to various failures (e.g., the action is grammatically incorrect or uses a wrong object name). The observation for a failed action does not contain any useful feedback, so a fully deterministic model tends to produce the same (wrong) action repeatedly. Since our decoder generates token-by-token and does not rely on templates, BUTLER::BRAIN is fully capable of leveraging search heuristics such as Beam Search (Reddy et al., 1977). During evaluation, BUTLER::BRAIN uses Beam Search to generate alternative action sentences in the event of a failed action, but otherwise greedily picks the sequence of best words.
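The resulting decoding control flow can be sketched as follows; `decode_greedy`, `decode_beam`, and the environment step API are hypothetical placeholders for BUTLER::BRAIN's actual interfaces, and tracking previously failed actions per episode is our simplification.

```python
def act(model, env, state, tried_actions, beam_width=10):
    """Greedy decoding by default; fall back to beam search to propose an
    alternative action sentence after a failed action (hypothetical API)."""
    action = model.decode_greedy(state)
    if action in tried_actions:                  # previous attempt failed here
        # beam search returns candidate sentences ranked by likelihood
        for candidate in model.decode_beam(state, beam_width):
            if candidate not in tried_actions:
                action = candidate
                break
    obs, done = env.step(action)
    tried_actions.add(action)
    return action, obs, done
```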
Table 2: Generalization within TextWorld environments: We independently train BUTLER::BRAIN on each type of TextWorld task and evaluate on heldout scenes of the same type. Respectively, tn/sn/un indicate the success rate on train/seen/unseen tasks. All sn and un scores are computed using the random seeds (from 8 in total) producing the best final training score on each task type. BUTLER is trained with DAgger and performs beam search during evaluation. Without beam search, BUTLER_g decodes actions greedily and gets stuck repeating failed actions. Further removing DAgger and training the model in a Seq2Seq fashion leads to worse generalization. Note that tn scores for BUTLER are lower than sn and un, as they were computed without beam search.

3.2 BUTLER::VISION (STATE ESTIMATOR): v_t → o_t
Specifically, we use a pre-trained Mask R-CNN detector (He et al., 2017) to detect objects in the visual frame. The detector is trained separately in a supervised setting with random frames from ALFRED training scenes (see Appendix F). For each frame v_t, the detector generates N detections {(c_1, m_1), (c_2, m_2), ..., (c_N, m_N)}, where c_n is the predicted object class and m_n is a pixel-wise object mask. These detections are formatted into a sentence using a template, e.g., On table 1, you see a mug 1, a tomato 1, and a tomato 2. To handle multiple instances of objects, each object is associated with a class c_n and a number ID, e.g., tomato 1.
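A small sketch of this templating step follows; the helper is hypothetical, with the instance numbering and phrasing matching the example above.

```python
from collections import defaultdict

def detections_to_observation(receptacle: str, classes: list) -> str:
    """Turn Mask R-CNN class predictions into a templated text observation,
    assigning number IDs to repeated instances (e.g., 'tomato 1', 'tomato 2')."""
    counts = defaultdict(int)
    names = []
    for c in classes:
        counts[c] += 1
        names.append(f"a {c} {counts[c]}")
    if not names:
        return f"On {receptacle}, you see nothing."
    listing = ", ".join(names[:-1]) + f", and {names[-1]}" if len(names) > 1 else names[0]
    return f"On {receptacle}, you see {listing}."

# detections_to_observation("table 1", ["mug", "tomato", "tomato"])
# -> "On table 1, you see a mug 1, a tomato 1, and a tomato 2."
```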
3.3 BUTLER::BODY (CONTROLLER): v_t, a_t → {â_1, â_2, ..., â_L}
The controller translates a high-level text action a_t into a sequence of L low-level physical actions {â_1, â_2, ..., â_L} that are executable in the embodied environment. The controller handles two types of commands: manipulation and navigation. For manipulation actions, we use the ALFRED API to interact with the simulator by providing an API action and a pixel-wise mask based on the Mask R-CNN detection m_n produced during state estimation. For navigation commands, each episode is initialized with a pre-built grid map of the scene, where each receptacle instance is associated with a receptacle class and an interaction viewpoint (x, y, θ, φ), with x and y representing the 2D position, and θ and φ representing the agent's yaw rotation and camera tilt. The goto command invokes an A* planner to find the shortest path between two viewpoints. The planner outputs a sequence of L displacements in terms of motion primitives: MOVEAHEAD, ROTATERIGHT, ROTATELEFT, LOOKUP, and LOOKDOWN, which are executed in an open-loop fashion via the ALFRED API. We note that a given pre-built grid map of receptacle locations is a strong prior assumption, but future work could incorporate existing models from the vision-language navigation literature (Anderson et al., 2018a; Wang et al., 2019) for map-free navigation.
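To make the navigation step concrete, here is a compact sketch of shortest-path planning on a 2D occupancy grid followed by conversion into motion primitives. The paper uses an A* planner; this illustration uses uniform-cost search (equivalent to A* with a zero heuristic) and assumes 90-degree turns and unit steps.

```python
import heapq

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W as (dx, dy)

def plan_path(grid, start, goal):
    """Uniform-cost search over free cells (grid[y][x] == 0). Returns a cell path."""
    frontier, seen = [(0, start, [start])], {start}
    while frontier:
        cost, (x, y), path = heapq.heappop(frontier)
        if (x, y) == goal:
            return path
        for dx, dy in HEADINGS:
            nxt = (x + dx, y + dy)
            if nxt not in seen and 0 <= nxt[1] < len(grid) \
                    and 0 <= nxt[0] < len(grid[0]) and grid[nxt[1]][nxt[0]] == 0:
                seen.add(nxt)
                heapq.heappush(frontier, (cost + 1, nxt, path + [nxt]))
    return None

def path_to_primitives(path, heading):
    """Convert a cell path into MOVEAHEAD/ROTATE primitives, given a start
    heading (index into HEADINGS)."""
    actions = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        target = HEADINGS.index((x1 - x0, y1 - y0))
        turn = (target - heading) % 4
        actions += {0: [], 1: ["ROTATERIGHT"], 2: ["ROTATERIGHT"] * 2,
                    3: ["ROTATELEFT"]}[turn]
        actions.append("MOVEAHEAD")
        heading = target
    return actions
```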
EXPERIMENTS
We design experiments to answer the following questions: (1) Is it possible to learn robust generalizing policies in TextWorld that can solve a large variety of tasks? (2) Can these abstract policies provide suitable guidance to help agents solve physically embodied tasks? (3) In contrast to directly training in the embodied world, do abstract textual policies enable better task completion and generalization?
BUTLER::BRAIN (TEXT AGENT) PRE-TRAINING
To answer the first question, we train BUTLER::BRAIN in abstract TextWorld environments spanning the six tasks in Table 1, as well as All Tasks, a simple union of all six. Because of the strong diversity across task types, the All Tasks setting shows the extent to which a single policy can learn and generalize over the large set of 3,553 different text-based tasks. After finding that current reinforcement learning approaches were not successful on our set of training tasks (see Appendix I), we turned to DAgger (Ross et al., 2011) assisted by a rule-based expert (detailed in Appendix G). BUTLER::BRAIN is trained for 50K episodes using data collected by interacting with the set of training games. Results in Table 2 show: (i) Training success rate varies from 16-60% depending on the category of tasks, illustrating the challenge of solving hundreds to thousands of training tasks within each category. (ii) Transferring from training to heldout test games typically reduces performance, with the unseen rooms leading to the largest performance drops. Notable exceptions include heat and cool tasks, where unseen performance exceeds training performance. (iii) Beam search is a key contributor to test performance; ablating it causes a performance drop of 21% on the seen split of All Tasks. (iv) Further ablating the DAgger strategy and directly training a Sequence-to-Sequence (Seq2Seq) model with pre-recorded expert demonstrations causes a bigger performance drop of 30% on the seen split of All Tasks. These results suggest that online interaction with the environment, as facilitated by DAgger learning and beam search, is essential for recovering from mistakes and sub-optimal behavior.
Results in
TEXTWORLD TO EMBODIED GENERALIZATION
To understand whether abstract policies can provide guidance for agents to solve physically embodied tasks, we study the zero-shot domain transfer of BUTLER to novel tasks in embodied environments. Table 3 presents results for agents trained independently on individual tasks and also jointly on all 6 tasks. For each category of task, we select the agent with best evaluation performance in TextWorld (from 8 random seeds). This is done separately for each split: seen and unseen. These best-performing agents are then evaluated on the heldout seen and unseen ALFRED tasks.
The Seq2Seq baseline is trained in TextWorld from pre-recorded expert demonstrations using standard supervised learning. BUTLER is our main model, using the Mask R-CNN detector and A* navigator. BUTLER-ORACLE uses an oracle state-estimator with ground-truth object detections and an oracle controller that directly teleports between locations. In Human Goals, instead of templated goal descriptions, we evaluate BUTLER using human-annotated ALFRED goals, which contain 66 unseen verbs (e.g., 'wash', 'grab', 'chill') and 189 unseen nouns (e.g., 'rag', 'lotion', 'disc'; see Appendix E for the full list). For embodied evaluations, we also report goal-condition success rates, a metric proposed in ALFRED (Shridhar et al., 2020) to measure partial goal completion (see footnote 3).
Overall, TextWorld training generalizes well to unseen embodied tasks. The drop in performance from TextWorld to BUTLER-ORACLE is often a result of the inability of TextWorld-trained agents to understand physical constraints and infeasibilities, e.g., placing a plate inside a full microwave. Future work could address this issue by reducing the domain gap between the two environments, or by fine-tuning the agent in the embodied setting with reinforcement learning. The further drop in performance with BUTLER is a result of misdetections from Mask R-CNN and navigation failures caused by collisions. The Mask R-CNN detector struggles with unseen environments, which are visually very distinct from the training scenes. Finally, even though the agents were trained only with templated language, they are able to handle some human-annotated goals in Human Goals.
The supplementary video contains qualitative examples of the BUTLER agent solving tasks in unseen environments. It showcases 3 successes and 1 failure of a TextWorld-only agent trained on All Tasks. In "put a watch in the safe", the agent has never seen the 'watch'-'safe' combination as a goal.

TRAINING STRATEGIES

Given the domain gap between TextWorld and the embodied world, a natural question is: why not eliminate this gap by training from scratch in the embodied world? To answer this question, we investigate three training strategies: (i) EMBODIED-ONLY: pure embodied training; (ii) TW-ONLY: pure TextWorld training followed by zero-shot embodied transfer; and (iii) HYBRID training that switches between the two environments, with 75% probability for TextWorld and 25% for the embodied world. Table 4 presents success rates for these agents trained and evaluated on the Pick & Place task. All evaluations were conducted with an oracle state-estimator and controller. For a fair comparison, each agent is trained for 50K episodes, and training speed is recorded for each strategy. We report peak performance for each split.
Results indicate that TW-ONLY training has higher performance and better generalization to unseen environments than HYBRID or EMBODIED-ONLY. We hypothesize that the abstract TextWorld environment allows the agent to focus on quickly learning tasks without having to deal with execution failures and expert failures caused by physical constraints inherent to embodied environments. TextWorld training is also 7× faster (see footnote 4), since it does not require running a rendering or physics engine like the embodied setting.

ABLATIONS

Model Ablations

Figure 4 illustrates more factors that affect the performance of BUTLER::BRAIN. The three rows of plots show training curves and evaluation curves in the seen and unseen settings, respectively. All experiments are run on the Pick & Place task with 8 random seeds. In the first column, we show the effect of using different observation queue lengths k as described in Section 3.1, where size 0 refers to not providing any observation information to the agent. In the second column, we examine the effect of explicitly keeping the initial observation o_0, which lists all the receptacles in the scene; keeping o_0 facilitates the pointer softmax mechanism in the decoder by guiding it to generate receptacle words more accurately. The third column suggests that the recurrent component in our aggregator is helpful in making history-based decisions when the current observation contains insufficient information. Finally, the fourth column shows that using more training games leads to better generalizability in both seen and unseen settings; fewer training games achieve high training scores by quickly overfitting, which leads to zero evaluation scores.
RELATED WORK
The longstanding goal of grounding language learning in embodied settings has led to substantial work on interactive environments. ALFWorld extends that work with fully-interactive aligned environments that parallel textual interactions with photo-realistic renderings and physical interactions.
Interactive Text-Only Environments: We build on the work of text-based environments like TextWorld (Côté et al., 2018) and Jericho (Hausknecht et al., 2020). While these environments allow for textual interactions, they are not grounded in visual or physical modalities.
CONCLUSION
We introduced ALFWorld, the first interactive text environment with aligned embodied worlds. ALFWorld allows agents to explore, interact, and learn abstract policies in a textual environment. Pre-training our novel BUTLER agent in TextWorld, we show zero-shot generalization to embodied tasks in the ALFRED dataset. The results indicate that reasoning in textual space allows for better generalization to unseen scenes and also faster training, compared to other modalities like vision.
BUTLER is designed with modular components which can be upgraded in future work. Examples include the template-based state-estimator and the A* navigator, which could be replaced with learned modules, enabling end-to-end training of the full pipeline. Another avenue of future work is to learn "textual dynamics models" through environment interactions, akin to vision-based world models (Ha and Schmidhuber, 2018). Such models would facilitate the construction of text-engines for new domains, without requiring access to symbolic state descriptions like PDDL. Overall, we are excited by the challenges posed by aligned text and embodied environments for better cross-modal learning.
Hausknecht, M. J., Ammanabrolu, P., Côté, M.-A., and Yuan, X. (2020). Interactive fiction games: A colossal adventure. In AAAI.
He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017). Mask r-cnn. In Proceedings of the IEEE international conference on computer vision.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition.

Ross, S., Gordon, G., and Bagnell, D. (2011). A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics.
Sanh, V., Debut, L., Chaumond, J., and Wolf, T. (2019). Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Schwartz, E., Tennenholtz, G., Tessler, C., and Mannor, S. (2019). Language is power: Representing states using natural language in reinforcement learning.
Sharma, S., Asri, L. E., Schulz, H., and Zumer, J. (2017). Relevance of unsupervised metrics in taskoriented dialogue for evaluating natural language generation. arXiv preprint arXiv:1706.09799.
Shridhar, M., Thomason, J., Gordon, D., Bisk, Y., Han, W., Mottaghi, R., Zettlemoyer, L., and Fox, D. (2020). ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

A DETAILS OF BUTLER::BRAIN
NOTATIONS
In this section, we use $o_t$ to denote the text observation at game step t, and g to denote the goal description provided by a game. We use L to refer to a linear transformation, and $L^f$ means it is followed by a non-linear activation function f. Brackets $[\cdot;\cdot]$ denote vector concatenation; $\odot$ denotes element-wise multiplication.
A.1 OBSERVATION QUEUE
As mentioned in Section 3.1, we utilize an observation queue to cache the text observations that have been seen recently. Since the initial observation $o_0$ describes the high-level layout of a room, including the receptacles present in the current game, we keep it visible to BUTLER::BRAIN at all game steps, regardless of the length of the observation queue. Specifically, the observation queue has an extra space storing $o_0$; at any game step, we first concatenate all cached observations in the queue, then prepend $o_0$ to form the input to the encoder. We find this helpful because it facilitates the pointer softmax mechanism in the decoder (described below) by guiding it to point to receptacle words in the observation. An ablation study on this is provided in Section 5.
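A minimal sketch of this queue, with names chosen for illustration, could look as follows; the extra slot for $o_0$ mirrors the description above:

```python
from collections import deque

class ObservationQueue:
    def __init__(self, maxlen):
        self.initial = None               # extra slot for o_0
        self.recent = deque(maxlen=maxlen)

    def push(self, obs):
        if self.initial is None:
            self.initial = obs            # o_0 lists all receptacles
        else:
            self.recent.append(obs)

    def encoder_input(self):
        # Prepend o_0 to the concatenation of recently cached observations.
        return " ".join([self.initial] + list(self.recent))
```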
A.2 ENCODER
We use a transformer-based encoder, which consists of an embedding layer and a transformer block (Vaswani et al., 2017). Specifically, embeddings are initialized by pre-trained 768-dimensional BERT embeddings (Sanh et al., 2019). The embeddings are fixed during training in all settings.
The transformer block consists of a stack of 5 convolutional layers, a self-attention layer, and a 2-layer MLP with a ReLU non-linear activation function in between. In the block, each convolutional layer has 64 filters with kernel size 5. In the self-attention layer, we use a block hidden size H of 64 and a single-head attention mechanism. LayerNorm (Ba et al., 2016) is applied after each component inside the block. Following standard transformer training, we add positional encodings to each block's input.

At every game step t, we use the same encoder to process the text observation $o_t$ and the goal description g. The resulting representations are $h_{o_t} \in \mathbb{R}^{L_{o_t} \times H}$ and $h_g \in \mathbb{R}^{L_g \times H}$, where $L_{o_t}$ is the number of tokens in $o_t$, $L_g$ is the number of tokens in g, and H = 64 is the hidden size.
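The following PyTorch sketch reconstructs one plausible reading of this block; in particular, the 768-to-64 input projection and the omission of positional encodings are our simplifications, not details confirmed by the text:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, emb_dim=768, hidden=64, n_convs=5, kernel=5):
        super().__init__()
        self.proj = nn.Linear(emb_dim, hidden)   # assumed 768 -> 64 projection
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2)
             for _ in range(n_convs)])
        self.conv_norms = nn.ModuleList([nn.LayerNorm(hidden) for _ in range(n_convs)])
        self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.attn_norm = nn.LayerNorm(hidden)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.mlp_norm = nn.LayerNorm(hidden)

    def forward(self, emb):                       # emb: (batch, L, 768)
        h = self.proj(emb)                        # positional encodings omitted
        for conv, norm in zip(self.convs, self.conv_norms):
            h = norm(conv(h.transpose(1, 2)).transpose(1, 2))
        h = self.attn_norm(self.attn(h, h, h)[0])
        return self.mlp_norm(self.mlp(h))         # (batch, L, 64)
```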
A.3 AGGREGATOR
We adopt the context-query attention mechanism from the question answering literature (Yu et al., 2018a) to aggregate the two representations h o t and h g .
Specifically, a tri-linear similarity function is used to compute the similarity between each token in $h_{o_t}$ and each token in $h_g$. The similarity between the i-th token in $h_o$ and the j-th token in $h_g$ is computed by (omitting the game step t for simplicity):

$$\mathrm{Sim}(i, j) = W(h_o^i, h_g^j, h_o^i \odot h_g^j), \tag{1}$$

where W is a trainable parameter in the tri-linear function. By applying the above computation for each $h_o$ and $h_g$ pair, we get a similarity matrix $S \in \mathbb{R}^{L_o \times L_g}$.

By computing the softmax of the similarity matrix S along both dimensions (the number of tokens $L_g$ in the goal description, and the number of tokens $L_o$ in the observation), we get $S_g$ and $S_o$, respectively. The two representations are then aggregated by:

$$h_{og} = [h_o; P; h_o \odot P; h_o \odot Q], \quad P = S_g h_g^{\top}, \quad Q = S_g S_o^{\top} h_o^{\top}, \tag{2}$$

where $h_{og} \in \mathbb{R}^{L_o \times 4H}$ is the aggregated observation representation.
Next, a linear transformation projects the aggregated representations to a space of size H = 64:

$$h_{og} = L^{\tanh}(h_{og}). \tag{3}$$

To incorporate history, we use a recurrent neural network. Specifically, we use a GRU (Cho et al., 2014):

$$h_{RNN} = \mathrm{Mean}(h_{og}), \quad h_t = \mathrm{GRU}(h_{RNN}, h_{t-1}), \tag{4}$$

where the mean pooling is performed along the token dimension, i.e., $h_{RNN} \in \mathbb{R}^{H}$, and $h_{t-1}$ is the output of the GRU cell at game step t − 1.
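Putting Equations (1)-(4) together, a hedged PyTorch reconstruction of the aggregator could look as follows (shapes follow the notation above; this is illustrative, not the authors' code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Aggregator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.w = nn.Linear(3 * hidden, 1, bias=False)   # tri-linear weights W
        self.proj = nn.Linear(4 * hidden, hidden)
        self.gru = nn.GRUCell(hidden, hidden)

    def forward(self, h_o, h_g, h_prev):
        # h_o: (L_o, H), h_g: (L_g, H), h_prev: (H,) from step t-1.
        L_o, L_g = h_o.size(0), h_g.size(0)
        o = h_o.unsqueeze(1).expand(L_o, L_g, -1)
        g = h_g.unsqueeze(0).expand(L_o, L_g, -1)
        S = self.w(torch.cat([o, g, o * g], dim=-1)).squeeze(-1)  # (L_o, L_g)
        S_g = F.softmax(S, dim=1)                  # over goal tokens
        S_o = F.softmax(S, dim=0)                  # over observation tokens
        P = S_g @ h_g                              # (L_o, H)
        Q = S_g @ S_o.t() @ h_o                    # (L_o, H)
        h_og = torch.tanh(self.proj(
            torch.cat([h_o, P, h_o * P, h_o * Q], dim=-1)))       # Eqs. (2)-(3)
        h_t = self.gru(h_og.mean(dim=0, keepdim=True),
                       h_prev.unsqueeze(0)).squeeze(0)            # Eq. (4)
        return h_og, h_t
```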
A.4 DECODER
Our decoder consists of an embedding layer, a transformer block, and a pointer softmax mechanism (Gulcehre et al., 2016). We first obtain the source representation by concatenating $h_{og}$ and $h_t$, resulting in $h_{src} \in \mathbb{R}^{L_o \times 2H}$.

Similar to the encoder, the embedding layer is frozen after initializing it with pre-trained BERT embeddings. The transformer block consists of two attention layers and a 3-layer MLP with ReLU non-linear activation functions in between. The first attention layer computes the self-attention of the input embeddings, $h_{self}$, as a contextual encoding for the target tokens. The second attention layer then computes the attention $\alpha^{i}_{src} \in \mathbb{R}^{L_o}$ between the source representation $h_{src}$ and the i-th token in $h_{self}$. The i-th target token is consequently represented by the weighted sum of $h_{src}$, with the weights $\alpha^{i}_{src}$. This generates a source-information-aware target representation $h'_{tgt} \in \mathbb{R}^{L_{tgt} \times H}$, where $L_{tgt}$ denotes the number of tokens in the target sequence. Next, $h'_{tgt}$ is fed into the 3-layer MLP with ReLU activation functions in between, resulting in $h_{tgt} \in \mathbb{R}^{L_{tgt} \times H}$. The block hidden size of this transformer is H = 64.
Taking $h_{tgt}$ as input, a linear layer with tanh activation projects the target representation into the same space as the embeddings (with dimensionality 768); the pre-trained embedding matrix E then generates the output logits (Press and Wolf, 2016), where the output size equals the vocabulary size. The resulting logits are normalized by a softmax to generate a probability distribution over all tokens in the vocabulary:

$$p_a(y_i) = \mathrm{Softmax}(E\, L^{\tanh}(h_{tgt})), \tag{5}$$

where $p_a(y_i)$ is the generation (abstractive) probability distribution.
We employ the pointer softmax mechanism (Gulcehre et al., 2016) to switch between generating a token $y_i$ from the vocabulary and pointing to a token in the source text. Specifically, the pointer softmax module computes a scalar switch $s_i$ at each generation time step i and uses it to interpolate the abstractive distribution $p_a(y_i)$ over the vocabulary (Equation 5) and the extractive distribution $p_x(y_i) = \alpha^{i}_{src}$ over the source text tokens:

$$p(y_i) = s_i \cdot p_a(y_i) + (1 - s_i) \cdot p_x(y_i), \tag{6}$$

where $s_i$ is conditioned on both the attention-weighted source representation $\sum_j \alpha^{i,j}_{src} \cdot h^{j}_{src}$ and the decoder state $h^{i}_{tgt}$:

$$s_i = L_1^{\mathrm{sigmoid}}\Big(\tanh\Big(L_2\Big(\sum_j \alpha^{i,j}_{src} \cdot h^{j}_{src}\Big) + L_3(h^{i}_{tgt})\Big)\Big), \tag{7}$$

where $L_1 \in \mathbb{R}^{H \times 1}$, $L_2 \in \mathbb{R}^{2H \times H}$, and $L_3 \in \mathbb{R}^{H \times H}$ are linear layers and H = 64.
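A hedged PyTorch reconstruction of Equations (5)-(7) for a single decoding step i is sketched below; the scatter-based mapping from source positions to vocabulary entries is our assumption about how the extractive distribution is materialized:

```python
import torch
import torch.nn as nn

class PointerSoftmax(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.l1 = nn.Linear(hidden, 1)           # L_1
        self.l2 = nn.Linear(2 * hidden, hidden)  # L_2
        self.l3 = nn.Linear(hidden, hidden)      # L_3

    def forward(self, p_abs, alpha_src, h_src, h_tgt_i, src_token_ids, vocab_size):
        # p_abs: (V,) abstractive distribution from Eq. (5)
        # alpha_src: (L_o,) attention over source tokens (extractive dist.)
        # h_src: (L_o, 2H); h_tgt_i: (H,); src_token_ids: (L_o,) long tensor
        ctx = (alpha_src.unsqueeze(-1) * h_src).sum(dim=0)    # sum_j a_j * h_j
        s = torch.sigmoid(self.l1(torch.tanh(
            self.l2(ctx) + self.l3(h_tgt_i)))).squeeze(-1)    # switch, Eq. (7)
        p_ext = torch.zeros(vocab_size).scatter_add(
            0, src_token_ids, alpha_src)    # map source positions into vocab
        return s * p_abs + (1 - s) * p_ext  # interpolation, Eq. (6)
```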
B IMPLEMENTATION DETAILS
In this section, we provide hyperparameters and other implementation details.
For all experiments, we use Adam (Kingma and Ba, 2014) as the optimizer. The learning rate is set to 0.001, with gradient norm clipping at 5.
During training with DAgger, we use a batch size of 10 to collect transitions (tuples of $(o_0, o_t, g, \hat{a}_t)$) at each game step t, where $\hat{a}_t$ is the ground-truth action provided by the rule-based expert (see Section G). We gather a sequence of transitions from each game episode and push each sequence into a replay buffer, which has a capacity of 500K episodes. We set the maximum number of steps per episode to 50; if the agent uses up this budget, the game episode is forced to terminate. We linearly anneal the fraction of the expert's assistance from 100% to 1% across a window of 50K episodes.
The agent is updated after every 5 steps of data collection. We sample a batch of 64 data points from the replay buffer. In the setting with the recurrent aggregator, every sampled data point is a sequence of 4 consecutive transitions. Following the training strategy used in the recurrent DQN literature (Hausknecht and Stone, 2015; Yuan et al., 2018), we use the first 2 transitions to estimate the recurrent state and the last 2 transitions to update the model parameters.
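Two of these details admit a very short sketch; the function names are illustrative, not from the authors' code:

```python
def expert_fraction(episode, start=1.0, end=0.01, window=50_000):
    """Linearly anneal the expert-assistance probability over `window` episodes."""
    return start + (end - start) * min(episode, window) / window

def split_sequence(seq):
    """Split a sampled 4-transition sequence: 2 burn-in steps to estimate the
    recurrent state, then 2 steps used to update the model parameters."""
    return seq[:2], seq[2:]
```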
BUTLER::BRAIN learns to generate actions token-by-token, where we set the max token length to be 20. The decoder stops generation either when it generates a special end-of-sentence token [EOS], or hits the token length limit.
When using the beam search heuristic to recover from failed actions, we use a beam width of 10 and take the top-5 ranked outputs as candidates. We iterate through the candidates in rank order until one of them succeeds. This heuristic is not always guaranteed to succeed; however, we find it helpful in most cases. Note that we do not employ beam search when we evaluate during the training process, due to speed restrictions, e.g., in the seen and unseen curves shown in Figure 4. We take the best-performing checkpoints and then apply this heuristic during evaluation, reporting the resulting scores in tables (e.g., Table 3).
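A minimal sketch of this recovery heuristic, with `decode_beam` and `env.step` as hypothetical stand-ins for the decoder and environment interfaces:

```python
def act_with_recovery(env, decode_beam, obs, beam_width=10, top_k=5):
    """Try top-ranked beam candidates in order until one changes the state."""
    candidates = decode_beam(obs, beam_width)[:top_k]  # ranked action strings
    for action in candidates:
        feedback = env.step(action)
        if feedback != "Nothing happens.":   # the engine's failure response
            return action, feedback
    return candidates[0], "Nothing happens."  # every candidate failed
```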
By default, unless mentioned otherwise (e.g., in ablations), we use all available training games in each of the task types. We use an observation queue length of 5 and a recurrent aggregator. The model is trained with DAgger, and during evaluation we apply the beam search heuristic to produce the reported scores. All experiment settings in TextWorld are run with 8 random seeds. All text agents are trained for 50,000 episodes.
C TEXTWORLD ENGINE
Internally, the TextWorld Engine is divided into two main components: a planner and a text generator.

Planner: The TextWorld Engine uses Fast Downward (Helmert, 2006), a domain-independent classical planning system, to maintain and update the current state of the game. A state is represented by a set of predicates which define the relations between the entities (objects, player, room, etc.) present in the game. A state can be modified by applying production rules corresponding to the actions listed in Table 6. All variables, predicates, and rules are defined using the PDDL language.

For instance, here is a simple state representing a player standing next to a microwave which is closed and contains a mug:

$$s_t = \mathrm{at(player, microwave)} \otimes \mathrm{in(mug, microwave)} \otimes \mathrm{closed(microwave)} \otimes \mathrm{openable(microwave)},$$

where the symbol ⊗ is the linear-logic multiplicative conjunction operator. Given that state, a valid action could be open microwave, which would essentially transform the state by replacing closed(microwave) with open(microwave).
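The planner's predicate-based state update can be sketched as plain set manipulation; the rule encoding below is an illustrative simplification of what Fast Downward does with PDDL rules:

```python
# State as a set of predicates; a production rule removes and adds predicates.
state = {"at(player, microwave)", "in(mug, microwave)",
         "closed(microwave)", "openable(microwave)"}

def apply_rule(state, removes, adds):
    """Apply a production rule if its preconditions (removes) hold."""
    if not removes <= state:
        raise ValueError("rule not applicable in this state")
    return (state - removes) | adds

# 'open microwave' replaces closed(microwave) with open(microwave).
state = apply_rule(state, {"closed(microwave)"}, {"open(microwave)"})
print(sorted(state))
```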
Text generator: The other component of the TextWorld Engine, the text generator, uses a context-sensitive grammar designed for the ALFRED environments. The grammar consists of text templates similar to those listed in Table 6. When needed, the engine samples a template given some context, i.e., the current state and the last action. The template then gets realized using the predicates found in the current state. The goal instructions for training games are generated with the templates in Table 7, where obj, recep, and lamp refer to the object, receptacle, and lamp classes, respectively, that pertain to a particular task. For each task, the two corresponding templates are sampled with equal probability.

F MASK R-CNN DETECTOR

We use a Mask R-CNN detector (He et al., 2017) pre-trained on MSCOCO (Lin et al., 2014) and fine-tune it with additional labels from ALFRED training scenes. To generate additional labels, we replay the expert demonstrations from ALFRED and record ground-truth image and instance-segmentation pairs from the simulator (THOR) after completing each high-level action, e.g., goto, pickup, etc. We generate a dataset of 50K images and fine-tune the detector for 4 epochs with a batch size of 8 and a learning rate of 5e-4. The detector recognizes 105 object classes, where each class can have up to 1-10 instances. Since demonstrations in the kitchen are often longer, as they involve complex sequences like heating, cleaning, etc., the labels are slightly skewed towards kitchen objects. To counter this, we balance the number of images sampled from each room (kitchen, bedroom, livingroom, bathroom) so the distribution of object categories is uniform across the dataset.
G RULE-BASED EXPERT
To train text agents in an imitation learning (IL) setting, we use a rule-based expert for supervision. A given task is decomposed into a sequence of subgoals (e.g., for heat & place: find the object, pick up the object, find the microwave, heat the object with the microwave, find the receptacle, place the object in the receptacle), and a closed-loop controller tries to execute these subgoals sequentially. We note that while designing rule-based experts for ALFWorld is relatively straightforward, experts operating directly in embodied settings, like the PDDL planner used in ALFRED, are prone to failures due to physical infeasibilities and non-deterministic behavior in physics-based environments.
H ACTION CANDIDATES VS ACTION GENERATION
BUTLER::BRAIN generates actions in a token-by-token fashion. Prior text-based agents typically use a list of candidate commands from the game engine (Adhikari et al., 2020) or populate a list of command templates (Ammanabrolu and Hausknecht, 2020). We initially trained our agents with candidate commands from the TextWorld Engine, but they quickly overfit without learning affordances, commonsense, or pre-conditions, and had zero performance on embodied transfer. In the embodied setting, without access to a TextWorld Engine, it is difficult to generate candidate actions unless a set of heuristics is handcrafted with strong priors and commonsense knowledge. We also experimented with populating a list of command templates, but found this to be infeasible, as some scenarios involved thousands of populated actions per game step.
I IMITATION LEARNING VS REINFORCEMENT LEARNING
We experimented with training BUTLER::BRAIN through reinforcement learning (RL), where the agent is rewarded after completing a goal. Due to the infeasibility of using candidate commands or command templates, as discussed in Section H, the RL agent had to generate actions token-by-token.
Since the probability of randomly stumbling upon a grammatically correct and contextually valid action is very low (7.02e-44 for sequence length 10), the RL agent struggled to make any meaningful progress towards the tasks. Hence we resorted to imitation learning.
J ALFRED TASK DESCRIPTIONS
The following descriptions detail the processes involved in each of the six task types:
• Pick & Place (e.g., "put a plate on the coffee table") - the agent must find an object of the desired type, pick it up, find the correct location to place it, and put it down there.
• Examine in Light (e.g., "examine a book under the lamp") - the agent must find an object of the desired type, then locate and turn on a light source with the desired object in-hand.
• Clean & Place (e.g., "clean the knife and put in the drawer") - the agent must find an object of the desired type, pick it up, go to a sink or a basin, wash the object by turning on the faucet, then find the correct location to place it, and put it down there.
• Heat & Place (e.g., "heat a mug and put on the coffee table") - the agent must find an object of the desired type, pick it up, go to a microwave, heat the object by turning on the microwave, then find the correct location to place it, and put it down there.
• Cool & Place (e.g., "put a cool bottle on the countertop") - the agent must find an object of the desired type, pick it up, go to a fridge, put the object inside the fridge to cool it, then find the correct location to place it, and put it down there.
• Pick Two & Place (e.g., "put two pencils in the drawer") -the agent must find an object of the desired type, pick it up, find the correct location to place it, put it down there, then look for another object of the desired type, pick it up, return to previous location, and put it down there with the other object.
K ALFWORLD TEXT GAME EXAMPLES
Three examples of ALFWorld text game walkthroughs:
You are in the middle of a room. Looking quickly around you, you see a towelholder 1, a toilet 1, a bathtubbasin 1, a drawer 4, a handtowelholder 2, a drawer 6, a drawer 1, a countertop 1, a sinkbasin 1, a drawer 2, a drawer 3, a toiletpaperhanger 1, a drawer 5, a handtowelholder 1, a towelholder 2, a sinkbasin 2, and a garbagecan 1.
Your task is to: put a clean cloth in bathtubbasin.
> go to drawer 5
You arrive at loc 9. The drawer 5 is closed.
> open drawer 5
You open the drawer 5. The drawer 5 is open. In it, you see a candle 1.
> close drawer 5
You close the drawer 5.
> go to countertop 1
You arrive at loc 25. On the countertop 1, you see a cloth 2, a soapbottle 1, a cloth 1, and a candle 3.
> take cloth 1 from countertop 1
You pick up the cloth 1 from the countertop 1.
> go to sinkbasin 1
You arrive at loc 20. On the sinkbasin 1, you see nothing.
> clean cloth 1 with sinkbasin 1
You clean the cloth 1 using the sinkbasin 1.
> go to bathtubbasin 1
You arrive at loc 14. On the bathtubbasin 1, you see a soapbar 1.
> put cloth 1 in/on bathtubbasin 1
You won!

You are in the middle of a room. Looking quickly around you, you see a drawer 15, a drawer 19, a drawer 7, a armchair 1, a drawer 3, a drawer 16, a drawer 2, a drawer 17, a sidetable 3, a sidetable 5, a drawer 4, a drawer 20, a sofa 1, a drawer 13, a drawer 12, a drawer 9, a drawer 21, a sidetable 4, a sidetable 2, a drawer 5, a drawer 11, a drawer 1, a garbagecan 1, a dresser 1, a cabinet 1, a drawer 8, a drawer 18, a sidetable 1, a drawer 14, a drawer 6, and a drawer 10.
Your task is to: find two remotecontrol and put them in armchair.
> go to sidetable 2
You arrive at loc 34. On the sidetable 2, you see a remotecontrol 1.
> take remotecontrol 1 from sidetable 2
You pick up the remotecontrol 1 from the sidetable 2.
> go to armchair 1
You arrive at loc 1. On the armchair 1, you see nothing.
> put remotecontrol 1 in/on armchair 1
You put the remotecontrol 1 in/on the armchair 1.
> go to dresser 1
You arrive at loc 18. On the dresser 1, you see a creditcard 1, a television 1, a laptop 1, a book 1, a statue 2, and a cellphone 2.
> go to sidetable 5
You arrive at loc 17. On the sidetable 5, you see a box 1, and a creditcard 2.
> go to sofa 1
You arrive at loc 2. On the sofa 1, you see a newspaper 1, a pillow 1, and a remotecontrol 2.
> take remotecontrol 2 from sofa 1
You pick up the remotecontrol 2 from the sofa 1.
> go to armchair 1
You arrive at loc 1. On the armchair 1, you see a remotecontrol 1.
> put remotecontrol 2 in/on armchair 1
You won!

You are in the middle of a room. Looking quickly around you, you see a safe 1, a shelf 4, a drawer 2, a bed 1, a drawer 1, a shelf 5, a shelf 2, a sidetable 2, a shelf 3, a drawer 3, a shelf 1, a sidetable 1, a desk 1, and a garbagecan 1.
Your task is to: examine the alarmclock with the desklamp.
> go to desk 1
You arrive at loc 8. On the desk 1, you see a pen 1, a bowl 1, a alarmclock 2, a pencil 2, a pencil 3, a creditcard 3, a book 1, a alarmclock 3, a keychain 3, and a book 2.
> take alarmclock 2 from desk 1
You pick up the alarmclock 2 from the desk 1.
> go to sidetable 2
You arrive at loc 1. On the sidetable 2, you see a desklamp 1, and a alarmclock 1.
> use desklamp 1
You won!
Figure 1: ALFWorld: Interactive aligned text and embodied worlds. An example with high-level text actions (left) and low-level physical actions (right).

Figure 2: The BUTLER Agent consists of three modular components. 1) BUTLER::BRAIN: a text agent pre-trained with the TextWorld engine (indicated by the dashed yellow box), which simulates an abstract textual equivalent of the embodied world. It is then fine-tuned or directly evaluated on new embodied tasks. 2) BUTLER::VISION: a state estimator that translates, at each time step, the visual frame $v_t$ from the embodied world into a textual observation $o_t$ using a pre-trained Mask R-CNN detector. The text agent uses the current observation $o_t$, the initial observation $o_0$, and the task goal g to predict the next high-level action $a_t$. 3) BUTLER::BODY: a controller that translates the high-level action $a_t$ into a sequence of low-level actions in the embodied environment.

3.1 BUTLER::BRAIN (TEXT AGENT): $o_0, o_t, g \to a_t$

Figure 3: BUTLER::BRAIN: The text agent takes the initial/current observations $o_0$/$o_t$ and the goal g to generate a textual action $a_t$ token-by-token.

Figure 4: Ablation study. x-axis: 0 to 50k episodes; y-axis: normalized success from 0 to 100.
Vision and language: While substantial work exists on vision-language representation learning, e.g., MAttNet (Yu et al., 2018b), CMN (Hu et al., 2017), VQA (Antol et al., 2015), CLEVR (Johnson et al., 2017), and ViLBERT (Lu et al., 2019), they lack embodied or sequential decision making.

Embodied Language Learning: To address language learning in embodied domains, a number of interactive environments have been proposed: BabyAI (Chevalier-Boisvert et al., 2019), Room2Room (Anderson et al., 2018b), ALFRED (Shridhar et al., 2020), InteractiveQA (Gordon et al., 2018), EmbodiedQA (Das et al., 2018), and NetHack (Küttler et al., 2020). These environments use language to communicate instructions, goals, or queries to the agent, but not as a fully interactive modality.

Language for State and Action Representation: Others have used language for more than just goal-specification. Schwartz et al. (2019) use language as a state representation for VizDoom. Hu et al. (2019) use a natural language instructor to command a low-level executor, and Jiang et al. (2019) use language as an abstraction for hierarchical RL. However, these works do not feature an interactive text environment for pre-training the agent in an abstract textual space. Zhu et al. (2017) use high-level commands similar to ALFWorld to solve tasks in THOR with IL and RL-finetuning methods, but the policy only generalizes to a small set of tasks due to the vision-based state representation.

Game Engines as World Models: The concept of using TextWorld as a "game engine" to represent the world is broadly related to inverse graphics (Kulkarni et al., 2015) and inverse dynamics (Wu et al., 2017), where abstract visual or physical models are used for reasoning and future predictions.
Helmert, M. (2006). The Fast Downward planning system. Journal of Artificial Intelligence Research.

Hu, H., Yarats, D., Gong, Q., Tian, Y., and Lewis, M. (2019). Hierarchical decision making by generating and following natural language instructions. In Advances in Neural Information Processing Systems.

Hu, R., Rohrbach, M., Andreas, J., Darrell, T., and Saenko, K. (2017). Modeling relationships in referential expressions with compositional modular networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Jiang, Y., Gu, S. S., Murphy, K. P., and Finn, C. (2019). Language as an abstraction for hierarchical deep reinforcement learning. In Advances in Neural Information Processing Systems.

Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Zitnick, C. L., and Girshick, R. (2017). Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In CVPR.

Johnson, J., Karpathy, A., and Fei-Fei, L. (2016). Densecap: Fully convolutional localization networks for dense captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition.

Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Kolve, E., Mottaghi, R., Han, W., VanderBilt, E., Weihs, L., Herrasti, A., Gordon, D., Zhu, Y., Gupta, A., and Farhadi, A. (2017). Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474.

Kulkarni, T. D., Whitney, W. F., Kohli, P., and Tenenbaum, J. (2015). Deep convolutional inverse graphics network. In Advances in neural information processing systems.

Küttler, H., Nardelli, N., Miller, A. H., Raileanu, R., Selvatici, M., Grefenstette, E., and Rocktäschel, T. (2020). The nethack learning environment.

Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. (2014). Microsoft coco: Common objects in context. In European conference on computer vision.

Lu, J., Batra, D., Parikh, D., and Lee, S. (2019). Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems.

MacMahon, M., Stankiewicz, B., and Kuipers, B. (2006). Walk the talk: Connecting language, knowledge, and action in route instructions. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-2006).

McDermott, D., Ghallab, M., Howe, A., Knoblock, C., Ram, A., Veloso, M., Weld, D., and Wilkins, D. (1998). Pddl - the planning domain definition language.

Press, O. and Wolf, L. (2016). Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859.

Reddy, D. R. et al. (1977). Speech understanding systems: A summary of results of the five-year research effort. Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA, 17.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. u., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems 30.

Wang, X., Huang, Q., Celikyilmaz, A., Gao, J., Shen, D., Wang, Y.-F., Wang, W. Y., and Zhang, L. (2019). Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Wu, J., Lu, E., Kohli, P., Freeman, B., and Tenenbaum, J. (2017). Learning to see physics via visual de-animation. In Advances in Neural Information Processing Systems.

Yu, A. W., Dohan, D., Le, Q., Luong, T., Zhao, R., and Chen, K. (2018a). Fast and accurate reading comprehension by combining self-attention and convolution. In International Conference on Learning Representations.

Yu, L., Lin, Z., Shen, X., Yang, J., Lu, X., Bansal, M., and Berg, T. L. (2018b). Mattnet: Modular attention network for referring expression comprehension. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Yuan, X., Côté, M.-A., Sordoni, A., Laroche, R., Combes, R. T. d., Hausknecht, M., and Trischler, A. (2018). Counting to explore and generalize in text-based games. arXiv preprint arXiv:1806.11525.

Zhu, Y., Gordon, D., Kolve, E., Fox, D., Fei-Fei, L., Gupta, A., Mottaghi, R., and Farhadi, A. (2017). Visual semantic planning using deep successor representations. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017.
The following templates are used by the state-estimator to generate textual observations $o_t$. The object IDs {obj id} correspond to Mask R-CNN object detections or ground-truth instance IDs. The receptacle IDs {recep id} are based on the receptacles listed in the initial observation $o_0$. Failed actions and actions without any state-changes result in "Nothing happens."

goto: (a) You arrive at {loc id}. On the {recep id}, you see a {obj1 id}, ... and a {objN id}. (b) You arrive at {loc id}. The {recep id} is closed. (c) You arrive at {loc id}. The {recep id} is open. On it, you see a {obj1 id}, ... and a {objN id}.
take: You pick up the {obj id} from the {recep id}.
put: You put the {obj id} on the {recep id}.
open: (a) You open the {recep id}. In it, you see a {obj1 id}, ... and a {objN id}. (b) You open the {recep id}. The {recep id} is empty.
close: You close the {recep id}.
toggle: You turn the {obj id} on.
heat: You heat the {obj id} with the {recep id}.
cool: You cool the {obj id} with the {recep id}.
clean: You clean the {obj id} with the {recep id}.
inventory: (a) You are carrying: {obj id}. (b) You are not carrying anything.
examine: (a) On the {recep id}, you see a {obj1 id}, ... and a {objN id}. (b) This is a hot/cold/clean {obj}.

The goal description templates for each task type are:
Pick & Place: (a) put a {obj} in {recep}. (b) put some {obj} on {recep}.
Examine in Light: (a) look at {obj} under the {lamp}. (b) examine the {obj} with the {lamp}.
Clean & Place: (a) put a clean {obj} in {recep}. (b) clean some {obj} and put it in {recep}.
Heat & Place: (a) put a hot {obj} in {recep}. (b) heat some {obj} and put it in {recep}.
Cool & Place: (a) put a cool {obj} in {recep}. (b) cool some {obj} and put it in {recep}.
Pick Two & Place: (a) put two {obj} in {recep}. (b) find two {obj} and put them {recep}.

The high-level action commands are:
goto {recep}
take {obj} from {recep}
put {obj} in/on {recep}
open {recep}
close {recep}
toggle {obj}/{recep}
clean {obj} with {recep}
heat {obj} with {recep}
cool {obj} with {recep}
Commands goto, open, and examine generate a list of detections, whereas all other commands generate affirmative responses if the action succeeds, e.g., $a_t$: put mug 1 on desk 2 → $o_{t+1}$: You put mug 1 on desk 2; otherwise they produce Nothing happens to indicate failures or no state-change. See Appendix D for a full list of templates. While this work presents preliminary results with template-based descriptions, future work could generate more descriptive observations using pre-trained image-captioning models (Johnson et al., 2016), video-action captioning frameworks (Sun et al., 2019), or scene-graph parsers (Tang et al., 2020).
Table 3: Zero-shot Domain Transfer. Left: Success percentages of best-performing BUTLER agents.

Table 4: Training Strategy Success. Trained on Pick & Place tasks for 50K episodes with embodied evaluations using an oracle state-estimator and controller.

Table 5: Unimodal Baselines. Trained on All Tasks with 50K episodes and evaluated in the embodied environment.

Unimodal Baselines: Table 5 presents results for unimodal baseline comparisons to BUTLER. For all baselines, the action space and controller are fixed, but the state space is substituted with different modalities. To study the agents' capability of learning a single policy that generalizes across various tasks, we train and evaluate on All Tasks. In VISION (RESNET18), the textual observation from the state-estimator is replaced with ResNet-18 fc7 features (He et al., 2016) from the visual frame. Similarly, VISION (MCNN-FPN) uses the pre-trained Mask R-CNN from the state-estimator to extract FPN layer features for the whole image. ACTION-ONLY acts without any visual or textual feedback. We report peak performance for each split. The visual models tend to overfit to seen environments and generalize poorly to unfamiliar environments; operating in text-space allows better transfer of policies without needing to learn state representations that are robust to visually diverse environments. The zero-performing ACTION-ONLY baseline indicates that memorizing action sequences is an infeasible strategy for agents.
Table 6: High-level text actions supported in ALFWorld along with their observation templates.

E GOAL DESCRIPTIONS
E.1 TEMPLATED GOALS
Table 7: Task-types and the corresponding goal description templates.

E.2 HUMAN ANNOTATED GOALS

The human goal descriptions from ALFRED contain 66 unseen verbs and 189 unseen nouns with respect to the templated goal instructions used during training.

Unseen Verbs: acquire, arrange, can, carry, chill, choose, cleaning, clear, cook, cooked, cooled, dispose, done, drop, end, fill, filled, frying, garbage, gather, go, grab, handled, heated, heating, hold, holding, inspect, knock, left, lit, lock, microwave, microwaved, move, moving, pick, picking, place, placed, placing, putting, read, relocate, remove, retrieve, return, rinse, serve, set, soak, stand, standing, store, take, taken, throw, transfer, turn, turning, use, using, walk, warm, wash, washed.

Unseen Nouns: alarm, area, back, baisin, bar, bars, base, basin, bathroom, beat, bed, bedroom, bedside, bench, bin, books, bottle, bottles, bottom, box, boxes, bureau, burner, butter, can, canteen, card, cardboard, cards, cars, cds, cell, chair, chcair, chest, chill, cistern, cleaning, clock, clocks, coffee, container, containers, control, controllers, controls, cooker, corner, couch, count, counter, cover, cream, credit, cupboard, dining, disc, discs, dishwasher, disks, dispenser, door, drawers, dresser, edge, end, floor, food, foot, freezer, game, garbage, gas, glass, glasses, gold, grey, hand, head, holder, ice, inside, island, item, items, jars, keys, kitchen, knifes, knives, laddle, lamp, lap, left, lid, light, loaf, location, lotion, machine, magazine, maker, math, metal, microwaves, move, nail, newsletters, newspapers, night, nightstand, object, ottoman, oven, pans, paper, papers, pepper, phone, piece, pieces, pillows, place, polish, pot, pullout, pump, rack, rag, recycling, refrigerator, remote, remotes, right, rinse, roll, rolls, room, safe, salt, scoop, seat, sets, shaker, shakers, shelves, side, sink, sinks, skillet, soap, soaps, sofa, space, spatulas, sponge, spoon, spot, spout, spray, stand, stool, stove, supplies, table, tale, tank, television, textbooks, time, tissue, tissues, toaster, top, towel, trash, tray, tv, vanity, vases, vault, vegetable, wall, wash, washcloth, watches, water, window, wine.
Footnotes:
1. Throughout this work, for clarity of exposition, we use ALFRED to refer to both the tasks and the grounded simulation environment, but rendering and physics are provided by THOR (Kolve et al., 2017).
2. To start with, we focus on a subset of the ALFRED dataset for training and evaluation that excludes tasks involving slicing objects or using portable containers (e.g., bowls), but we plan on supporting these in the future.
3. For instance, the task "put a hot potato on the countertop" is composed of three goal-conditions: (1) heating some object, (2) putting a potato on the countertop, and (3) heating a potato and putting it on the countertop. If the agent manages to put any potato on the countertop, then 1/3 = 0.33 goal-conditions are satisfied, and so on.
4. For a fair comparison, all agents in Table 4 use a batch size of 10. THOR instances use 100MB × batch-size of GPU memory for rendering, whereas TextWorld instances are CPU-only and are thus much easier to scale up.
INTRODUCING BUTLER: AN EMBODIED MULTI-TASK AGENT

We investigate learning in the abstract language modality before generalizing to the embodied setting. This requires an agent capable of spanning both modalities. BUTLER uses three components: BUTLER::BRAIN - the abstract text agent, BUTLER::VISION - the language state estimator, and BUTLER::BODY - the low-level controller. An overview of BUTLER is shown in Figure 2.

ACKNOWLEDGMENTS

The authors thank Cheng Zhang, Jesse Thomason, Karthik Desingh, Rishabh Joshi, Romain Laroche, Shunyu Yao, and Victor Zhong for insightful feedback and discussions.
Adhikari, A., Yuan, X., Côté, M.-A., Zelinka, M., Rondeau, M.-A., Laroche, R., Poupart, P., Tang, J., Trischler, A., and Hamilton, W. L. (2020). Learning dynamic belief graphs to generalize on text-based games. In Neural Information Processing Systems (NeurIPS).

Ammanabrolu, P. and Hausknecht, M. (2020). Graph constrained reinforcement learning for natural language action spaces. In International Conference on Learning Representations.

Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sünderhauf, N., Reid, I., Gould, S., and van den Hengel, A. (2018a). Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., Sünderhauf, N., Reid, I., Gould, S., and van den Hengel, A. (2018b). Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C. L., and Parikh, D. (2015). VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV).

Ba, L. J., Kiros, J. R., and Hinton, G. E. (2016). Layer normalization. CoRR, abs/1607.06450.

Chevalier-Boisvert, M., Bahdanau, D., Lahlou, S., Willems, L., Saharia, C., Nguyen, T. H., and Bengio, Y. (2019). BabyAI: First steps towards grounded language learning with a human in the loop. In International Conference on Learning Representations.

Cho, K., van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Côté, M.-A., Kádár, A., Yuan, X., Kybartas, B., Barnes, T., Fine, E., Moore, J., Tao, R. Y., Hausknecht, M., Asri, L. E., Adada, M., Tay, W., and Trischler, A. (2018). Textworld: A learning environment for text-based games. CoRR, abs/1806.11532.

Das, A., Datta, S., Gkioxari, G., Lee, S., Parikh, D., and Batra, D. (2018). Embodied Question Answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Gehrmann, S., Deng, Y., and Rush, A. (2018). Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.

Gordon, D., Kembhavi, A., Rastegari, M., Redmon, J., Fox, D., and Farhadi, A. (2018). Iqa: Visual question answering in interactive environments. In Computer Vision and Pattern Recognition (CVPR), 2018 IEEE Conference on.

Gulcehre, C., Ahn, S., Nallapati, R., Zhou, B., and Bengio, Y. (2016). Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

Ha, D. and Schmidhuber, J. (2018). Recurrent world models facilitate policy evolution. In Advances in Neural Information Processing Systems 31.

Hausknecht, M. and Stone, P. (2015). Deep recurrent q-learning for partially observable mdps. arXiv preprint arXiv:1507.06527.
255,570,226 | Combinatorial Pure Exploration of Causal Bandits | The combinatorial pure exploration of causal bandits is the following online learning task: given a causal graph with unknown causal inference distributions, in each round we choose a subset of variables to intervene or do no intervention, and observe the random outcomes of all random variables, with the goal that using as few rounds as possible, we can output an intervention that gives the best (or almost best) expected outcome on the reward variable Y with probability at least 1 − δ, where δ is a given confidence level. We provide the first gap-dependent and fully adaptive pure exploration algorithms on two types of causal models -the binary generalized linear model (BGLM) and general graphs. For BGLM, our algorithm is the first to be designed specifically for this setting and achieves polynomial sample complexity, while all existing algorithms for general graphs have either sample complexity exponential to the graph size or some unreasonable assumptions. For general graphs, our algorithm provides a significant improvement on sample complexity, and it nearly matches the lower bound we prove. Our algorithms achieve such improvement by a novel integration of prior causal bandit algorithms and prior adaptive pure exploration algorithms, the former of which utilize the rich observational feedback in causal bandits but are not adaptive to reward gaps, while the latter of which have the issue in reverse. | [] | Combinatorial Pure Exploration of Causal Bandits
Nuoya Xiong
IIIS
Tsinghua University
Microsoft Research
Wei Chen [email protected]
IIIS
Tsinghua University
Microsoft Research
Combinatorial Pure Exploration of Causal Bandits
The combinatorial pure exploration of causal bandits is the following online learning task: given a causal graph with unknown causal inference distributions, in each round we choose a subset of variables to intervene on, or do no intervention, and observe the random outcomes of all random variables, with the goal that, using as few rounds as possible, we can output an intervention that gives the best (or almost best) expected outcome on the reward variable Y with probability at least 1 − δ, where δ is a given confidence level. We provide the first gap-dependent and fully adaptive pure exploration algorithms on two types of causal models: the binary generalized linear model (BGLM) and general graphs. For BGLM, our algorithm is the first to be designed specifically for this setting and achieves polynomial sample complexity, while all existing algorithms for general graphs have either sample complexity exponential in the graph size or some unreasonable assumptions. For general graphs, our algorithm provides a significant improvement in sample complexity, and it nearly matches the lower bound we prove. Our algorithms achieve such improvement by a novel integration of prior causal bandit algorithms and prior adaptive pure exploration algorithms; the former utilize the rich observational feedback in causal bandits but are not adaptive to reward gaps, while the latter have the issue in reverse.
Introduction
Stochastic multi-armed bandits (MAB) is a classical framework in sequential decision making [32, 3]. In each round, a learner selects one arm based on the reward feedback from the previous rounds and receives a random reward of the selected arm sampled from an unknown distribution, with the goal of accumulating as much reward as possible over T rounds. This framework models the exploration-exploitation tradeoff in sequential decision making: whether one should select the best arm so far based on past observations, or try some arms that have not been played much. Pure exploration is an important variant of the multi-armed bandit problem, where the goal of the learner is not to accumulate reward but to identify the best arm through possibly adaptive explorations of arms. Pure exploration of MAB typically corresponds to a testing phase where we do not need to pay a penalty for exploration, and it has wide applications in online recommendation, advertising, drug testing, etc.
Causal bandits, first introduced by [19], integrates causal inference [31] with multi-armed bandits. In causal bandits, we have a causal graph structure G = (X ∪ {Y} ∪ U, E), where X ∪ {Y} are observable causal variables with Y being a special reward variable, U are unobserved hidden variables, and E is the set of causal edges between pairs of variables. For simplicity, we consider binary variables in this paper. The arms are the interventions on variables S ⊆ X together with the choice of null intervention (natural observation), i.e., the arm (action) set is $A \subseteq \{a = do(S = s) \mid S \subseteq X, s \in \{0, 1\}^{|S|}\}$ with $do() \in A$, where do(S = s) is the standard notation for intervening on the causal graph by setting S to s [31], and do() means null intervention. The reward of an action a is the random outcome of Y when we intervene with action a, and thus the expected reward is $E[Y \mid a = do(S = s)]$. In each round, one action in A is played, and the random outcomes of all variables in X ∪ {Y} are observed. Given the causal graph G, but without knowing its causal inference distributions among nodes, the task of combinatorial pure exploration (CPE) of causal bandits is to (adaptively) select actions in each round and observe the feedback from all observable random variables, so that in the end the learner can identify the best or nearly best actions. Causal bandits are useful in many real scenarios: in drug testing, physicians want to adjust the dosage of particular drugs to treat a patient; in policy design, policy-makers select different actions to reduce the spread of a disease.
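To make the action set and feedback concrete, the toy simulation below plays one round of a parallel-graph causal bandit under an intervention do(S = s); all probabilities here are illustrative assumptions, not part of the model definition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
p_x = np.array([0.3, 0.5, 0.7])            # P(X_i = 1), illustrative values

def p_y(x):
    """P(Y = 1 | X = x); a toy monotone choice for illustration."""
    return 0.2 + 0.6 * x.mean()

def play(do):
    """One round under do(S = s), given as {index: value}; {} is do()."""
    x = (rng.random(n) < p_x).astype(int)   # natural sampling of X
    for i, v in do.items():
        x[i] = v                            # intervened nodes are forced to v
    y = int(rng.random() < p_y(x))
    return x, y                             # all variables are observed

x, y = play({0: 1})                          # the arm do(X_1 = 1)
```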
Existing studies on CPE of causal bandits either require the knowledge of P(Pa(Y) | a) for every action a or only consider causal graphs without hidden variables, and the algorithms proposed are not fully adaptive to reward gaps [19, 35]. In this paper, we study fully adaptive pure exploration algorithms and analyze their gap-dependent sample complexity bounds in the fixed-confidence setting. More specifically, given a confidence bound δ ∈ (0, 1) and an error bound ε, we aim at designing adaptive algorithms that output an action such that, with probability at least 1 − δ, the expected reward difference between the output and the optimal action is at most ε. The algorithms should be fully adaptive in the following two senses. First, they should adapt to the reward gaps between suboptimal and optimal actions, similar to existing adaptive pure exploration bandit algorithms, so that actions with larger gaps are explored less. Second, they should adapt to the observational data from causal bandit feedback, so that actions with enough observations already do not need further interventional rounds for exploration, similar to existing causal bandit algorithms. We are able to integrate both types of adaptivity into one algorithmic framework, and with interaction between the two aspects, we achieve better adaptivity than either of them alone.
First, we introduce a term named the gap-dependent observation threshold, which is a non-trivial gap-dependent extension of a similar term in [19]. We then provide two algorithms, one for the binary generalized linear model (BGLM) and one for the general model with hidden variables. The sample complexity of both algorithms contains the gap-dependent observation threshold that we introduce, which shows significant improvement compared to prior work. In particular, our algorithm for BGLM achieves a sample complexity polynomial in the graph size, while all prior algorithms for general graphs have exponential sample complexity; and our algorithm for general graphs matches a lower bound we prove in the paper. To our best knowledge, our paper is the first work considering a CPE algorithm specifically designed for BGLM, and the first work considering CPE on graphs with hidden variables, while all prior studies either assume no hidden variables or assume knowing the distribution P(Pa(Y) | a) for the parents Pa(Y) of the reward variable and all actions a, which is not a reasonable assumption.
To summarize, our contribution is to propose the first set of CPE algorithms on causal graphs with hidden variables and fully adaptive to both the reward gaps and the observational causal data. The algorithm on BGLM is the first to achieve a gap-dependent sample complexity polynomial to the graph size, while the algorithm for general graphs improves significantly on sample complexity and matches a lower bound. Due to the space constraint, further materials including experimental results, an algorithm for the fixed-budget setting, and all proofs are moved to the supplementary material.
Related Work. Causal bandits were first proposed by [19], who discuss the simple regret for parallel graphs and general graphs with known probability distributions $P(Pa(Y) \mid a)$. [29] extend algorithms on parallel graphs to graphs without back-door paths, and [26] extend the results to general graphs. All of them either regard $P(Pa(Y) \mid a)$ as prior knowledge or consider only atomic interventions. The study by [35] is the only one considering general graphs with a combinatorial action set, but their algorithm cannot work on causal graphs with hidden variables. All the above pure exploration studies consider a simple regret criterion that is not gap-dependent. Cumulative regret is considered in [29,25,26]. To the best of our knowledge, [33] is the only work discussing a gap-dependent bound for pure exploration of causal bandits in the fixed-budget setting, but it only considers soft interventions (changing the conditional distribution P(X|Pa(X))) on one single node, which is different from causal bandits as defined in [19].
Pure exploration of multi-armed bandits has been extensively studied in the fixed-confidence and fixed-budget settings [2,14,13,9,15]. PAC pure exploration is a generalized setting aiming to find an ε-optimal arm instead of the exactly optimal arm [8,27,14]. In this paper, we utilize the adaptive LUCB algorithm of [15]. CPE has also been studied for multi-armed bandits, linear bandits, etc. ([6], [4], [17]), but the feedback model in these studies either provides feedback at the base arm level or provides full or partial bandit feedback, all of which are very different from the causal bandit feedback considered in this paper.
The binary generalized linear model (BGLM) is studied in [22,10] for cumulative regret MAB problems. We borrow the maximum likelihood estimation method and its analysis from [22,10] for our BGLM part, but its integration with our adaptive sampling algorithm for the pure exploration setting is new.
Model and Preliminaries
Causal Models.
From [31], a causal graph $G = (X \cup \{Y\} \cup U, E)$ is a directed acyclic graph (DAG) with a set of observed random variables $X \cup \{Y\}$ and a set of hidden random variables $U$, where $X = \{X_1, \cdots, X_n\}$ and $U = \{U_1, \cdots, U_k\}$ are two sets of variables and $Y$ is the special reward variable without outgoing edges. In this paper, for simplicity, we only consider $X_i$'s and $Y$ that are binary random variables with support $\{0,1\}$. For any node $V$ in $G$, we denote the set of its parents in $G$ by $Pa(V)$, and the set of values of $Pa(X)$ by $pa(X)$. The causal influence is represented by $P(V \mid Pa(V))$, modeling the fact that the probability distribution of a node $V$'s value is determined by the values of its parents. Henceforth, when we refer to a causal graph, we mean both its graph structure $(X \cup \{Y\} \cup U, E)$ and its causal inference distributions $P(V \mid Pa(V))$ for all $V \in X \cup \{Y\} \cup U$. A parallel graph $G = (X \cup \{Y\}, E)$ is a special class of causal graphs with $X = \{X_1, \cdots, X_n\}$, $U = \emptyset$ and $E = \{X_1 \to Y, X_2 \to Y, \cdots, X_n \to Y\}$.
An intervention do(S = s) in the causal graph G means that we set the values of a set of nodes S ⊆ X to s, while other nodes still follow the P (V | Pa(V )) distributions. An atomic intervention means that |S| = 1. When S = ∅, do(S = s) is the null intervention denoted as do(), which means we do not set any node to any value and just observe all nodes' values.
In this paper, we also study a parameterized model with no hidden variables: the binary generalized linear model (BGLM). Specifically, in BGLM we have $U = \emptyset$ and $P(X = 1 \mid Pa(X) = pa(X)) = f_X(\theta_X \cdot pa(X)) + e_X$, where $f_X$ is a strictly increasing function, $\theta_X \in \mathbb{R}^{|Pa(X)|}$ is the unknown parameter vector for $X$, and $e_X$ is a zero-mean bounded noise variable that guarantees the resulting probability lies within $[0,1]$. To represent the intrinsic randomness of a node $X$ not caused by its parents, we introduce a global variable $X_1 \equiv 1$ that is a parent of all nodes.
Combinatorial Pure Exploration of Causal Bandits. Combinatorial pure exploration (CPE) of causal bandits describes the following setting and online learning task. The causal graph structure is known, but the distributions $P(V \mid Pa(V))$ are unknown. The action (arm) space $A$ is a subset of possible interventions on combinatorial sets of variables, plus the null intervention, that is, $A \subseteq \{do(S = s) \mid S \subseteq X, s \in \{0,1\}^{|S|}\}$ with $do() \in A$. For action $a = do(S = s)$, define $\mu_a = E[Y \mid do(S = s)]$ to be the expected reward of action $do(S = s)$, and let $\mu^* = \max_{a \in A}\mu_a$.
In each round $t$, the learning agent plays one action $a \in A$ and observes the sampled values $X_t = (X_{t,1}, X_{t,2}, \cdots, X_{t,n})$ and $Y_t$ of all observed variables. The goal of the agent is to interact with the causal model for as few rounds as possible to find an action with the maximum expected reward $\mu^*$. More precisely, we mainly focus on PAC pure exploration with a gap-dependent bound in the fixed-confidence setting. In this setting, we are given a confidence parameter $\delta \in (0,1)$ and an error parameter $\varepsilon \in [0,1)$, and we want to adaptively play actions over rounds based on past observations, terminate at a certain round, and output an action $a_o$ that guarantees $\mu^* - \mu_{a_o} \le \varepsilon$ with probability at least $1 - \delta$. The metric for this setting is the sample complexity, i.e., the number of rounds needed to output a proper action $a_o$. Note that when $\varepsilon = 0$, the PAC setting reduces to the classical pure exploration setting. We also consider the fixed-budget setting in the appendix, where given an exploration round budget $T$ and an error parameter $\varepsilon \in [0,1)$, the agent adaptively plays actions and outputs an action $a_o$ at the end of round $T$, so that the error probability $\Pr\{\mu_{a_o} < \mu^* - \varepsilon\}$ is as small as possible.
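To make the interaction protocol concrete, below is a minimal Python sketch of a toy parallel-graph environment that returns full causal-bandit feedback each round; the class name, sampling scheme, and interface are our own illustrative assumptions, not part of the formal model.

```python
import numpy as np

class ToyParallelGraph:
    """Toy parallel graph X_1, ..., X_n -> Y illustrating the feedback model:
    every round returns the values of all observable variables, not just Y."""

    def __init__(self, p, cpt, rng=None):
        self.p = np.asarray(p)        # p[i] = P(X_i = 1) under no intervention
        self.cpt = cpt                # dict mapping tuple(x) -> P(Y = 1 | X = x)
        self.rng = rng or np.random.default_rng(0)

    def play(self, intervention=None):
        """intervention is None for do(), or a dict {i: v} for do(S = s)."""
        x = (self.rng.random(len(self.p)) < self.p).astype(int)
        if intervention:
            for i, v in intervention.items():
                x[i] = v              # do() overrides the natural mechanism
        y = int(self.rng.random() < self.cpt[tuple(x)])
        return x, y                   # full causal-bandit feedback (X_t, Y_t)
```

Each call corresponds to one round: the learner picks an (possibly null) intervention and sees all variables, which is exactly the feedback that lets observational rounds inform many arms at once.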
We study gap-dependent bounds, meaning that the performance measure is related to the reward gaps between the optimal and suboptimal actions, as defined below. Let $a^*$ be one of the optimal arms. For each arm $a$, we define the gap of $a$ as
$$\Delta_a = \begin{cases} \mu_{a^*} - \max_{a' \in A\setminus\{a^*\}}\mu_{a'}, & a = a^*; \\ \mu_{a^*} - \mu_a, & a \neq a^*. \end{cases} \tag{1}$$
We further sort the gaps $\Delta_a$'s of all arms and assume $\Delta_{(1)} \le \Delta_{(2)} \le \cdots \le \Delta_{(|A|)}$, where $\Delta_{(1)}$ is also denoted $\Delta_{\min}$.
Gap-Dependent Observation Threshold
In this section, we introduce the key concept of the gap-dependent observation threshold, which is instrumental to the fixed-confidence algorithms in the next two sections. Intuitively, it describes, for any action $a$, whether we can derive its reward from pure observations of the causal model.
We assume that the $X_i$'s are binary random variables. First, we describe terms $q_a \in [0,1]$ for each action $a \in A$, which can have different definitions in different settings. Intuitively, $q_a$ represents how easy action $a$ is to estimate by observation. For example, in [19], for the parallel graph with action set $A = \{do(X_i = x) \mid 1 \le i \le n, x \in \{0,1\}\} \cup \{do()\}$ and action $a = do(X_i = x)$, $q_a = P(X_i = x)$ represents the probability that the effect of $do(X_i = x)$ is observed, since in the parallel graph we have $P(Y \mid X_i = x) = P(Y \mid do(X_i = x))$. Thus, when $q_a = P(X_i = x)$ is larger, it is easier to estimate $P(Y \mid do(X_i = x))$ by observation. We will instantiate the $q_a$'s for BGLM and general graphs in later sections. For $a = do()$, we always set $q_a = 1$. Then, for the set $\{q_a \mid a \in A\}$, we define the observation threshold as follows:
Definition 1 (Observation threshold [19]). For a given causal graph G and its associated {q a | a ∈ A}, the observation threshold m is defined as:
$$m = \min\{\tau \in [|A|] : |\{a \in A \mid q_a < 1/\tau\}| \le \tau\}. \tag{2}$$
The observation threshold can be equivalently defined as follows: when we sort $\{q_a \mid a \in A\}$ as $q_{(1)} \le q_{(2)} \le \cdots \le q_{(|A|)}$, we have $m = \min\{\tau : q_{(\tau+1)} \ge \frac{1}{\tau}\}$. Note that $m \le |A|$ always holds since $q_{do()} = 1$. In some cases, $m \ll |A|$. For example, in the parallel graph, when $P(X_i = 1) = P(X_i = 0) = \frac{1}{2}$ for all $i \in [n]$, we have $q_{do(X_i=1)} = q_{do(X_i=0)} = \frac{1}{2}$ and $q_{do()} = 1$, so $m = 2 \ll 2n + 1 = |A|$. Intuitively, when we collect passive observational data without intervention, arms corresponding to $q_{(j)}$ with $j \le m$ are under-observed, while arms corresponding to $q_{(j)}$ with $j > m$ are sufficiently observed and can be estimated accurately. Thus, for convenience we name $m$ the observation threshold (the term is not given a name in [19]).
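Definition 1 (and its equivalent sorted form) is straightforward to evaluate numerically; below is a small sketch, with the helper name being our own.

```python
def observation_threshold(q):
    """m = min{tau in [|A|] : |{a : q_a < 1/tau}| <= tau}  (Definition 1)."""
    for tau in range(1, len(q) + 1):
        if sum(qa < 1.0 / tau for qa in q) <= tau:
            return tau
    return len(q)

# Parallel graph with P(X_i = 1) = 1/2 for n = 5: 2n interventional arms plus do().
q = [0.5] * 10 + [1.0]
print(observation_threshold(q))  # prints 2, even though |A| = 11
```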
In this paper, we improve the definition of $m$ to make it gap-dependent, which leads to a better adaptive pure exploration algorithm and sample complexity bound. Before introducing the definition, we first define the term $H_r$. Sort the arm set so that $q_{a_1} \cdot \max\{\Delta_{a_1}, \varepsilon/2\}^2 \le q_{a_2} \cdot \max\{\Delta_{a_2}, \varepsilon/2\}^2 \le \cdots \le q_{a_{|A|}} \cdot \max\{\Delta_{a_{|A|}}, \varepsilon/2\}^2$; then $H_r$ is defined by
$$H_r = \sum_{i=1}^{r} \frac{1}{\max\{\Delta_{a_i}, \varepsilon/2\}^2}. \tag{3}$$
Definition 2 (Gap-dependent observation threshold). For a given causal graph G and its associated q a 's and ∆ a 's, the gap-dependent observation threshold m ε,∆ is defined as:
$$m_{\varepsilon,\Delta} = \min\left\{\tau \in [|A|] : \left|\left\{a \in A \,\middle|\, q_a \cdot \max\{\Delta_a, \varepsilon/2\}^2 < \frac{1}{H_\tau}\right\}\right| \le \tau\right\}. \tag{4}$$
The gap-dependent observation threshold can be regarded as a generalization of the observation threshold. Intuitively, when considering the gaps, $q_a \cdot \max\{\Delta_a, \varepsilon/2\}^2$ represents how easily action $a$ can be distinguished from the optimal arm. To show the relationship between $m_{\varepsilon,\Delta}$ and $m$, we provide the following lemma, whose proof is in Appendix D.1.
Lemma 1. m ε,∆ ≤ 2m.
Lemma 1 shows that $m_{\varepsilon,\Delta} = O(m)$. In many real scenarios, $m_{\varepsilon,\Delta}$ can be much smaller than $m$. Consider an integer $n$ with $4 < n < |A|$, $\varepsilon < 1/n$, $q_a = \frac{1}{n}$ for $a \in A \setminus \{do()\}$, and $q_{do()} = 1$; then $m = n$. Now suppose $\Delta_{a_1} = \Delta_{a_2} = \frac{1}{n}$ while all other arms $a$ have $\Delta_a = \frac{1}{2}$. Then $H_r \ge n^2$ for all $r \ge 1$, so for every $a \ne a_1, a_2$ we have $q_a \cdot \max\{\Delta_a, \varepsilon/2\}^2 \ge \frac{1}{4n} > \frac{1}{H_r}$, which implies $m_{\varepsilon,\Delta} = 2$. This lemma will be used to show that our result improves the previous causal bandit algorithm of [19].
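Definition 2 can be evaluated just as easily; the sketch below (names ours) sorts arms by $q_a \cdot \max\{\Delta_a, \varepsilon/2\}^2$, accumulates $H_\tau$ as in Eq. (3), and scans the condition of Eq. (4), reproducing the small $m_{\varepsilon,\Delta}$ of the example above.

```python
def gap_dependent_threshold(q, gaps, eps):
    """m_{eps,Delta} from Eq. (4), with H_tau as in Eq. (3)."""
    score = [qa * max(g, eps / 2) ** 2 for qa, g in zip(q, gaps)]
    order = sorted(range(len(q)), key=lambda i: score[i])
    H, H_prefix = 0.0, []
    for i in order:                       # prefix sums over the sorted arms
        H += 1.0 / max(gaps[i], eps / 2) ** 2
        H_prefix.append(H)
    for tau in range(1, len(q) + 1):
        if sum(s < 1.0 / H_prefix[tau - 1] for s in score) <= tau:
            return tau
    return len(q)

# Example after Lemma 1 with n = 8: q_a = 1/n, two tiny gaps of 1/n, do() last.
n = 8
q = [1.0 / n] * n + [1.0]
gaps = [1.0 / n, 1.0 / n] + [0.5] * (n - 1)
print(gap_dependent_threshold(q, gaps, eps=0.01))  # prints 2, while m = n
```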
Combinatorial Pure Exploration for BGLM
In this section, we discuss pure exploration for BGLM, a general class of causal graphs with a linear number of parameters, as defined in Section 2; throughout this section we assume $U = \emptyset$. Let $\theta^* = (\theta^*_X)_{X \in X \cup \{Y\}}$ be the vector of all weights. Since $X_1$ is a global variable, we only need to consider the action set $A \subseteq \{do(S = s) \mid S \subseteq X \setminus \{X_1\}, s \in \{0,1\}^{|S|}\}$. Following [22,10], we make three assumptions:

Assumption 1. For any $X \in X \cup \{Y\}$, $f_X$ is twice differentiable, and its first and second order derivatives are upper bounded by constants $M^{(1)}$ and $M^{(2)}$.

Assumption 2. $\kappa := \inf_{X \in X \cup \{Y\},\, v \in [0,1]^{|Pa(X)|},\, \|\theta - \theta^*_X\| \le 1} \dot f_X(v \cdot \theta) > 0$ is a positive constant.

Assumption 3. There exists a constant $\eta > 0$ such that for any $X \in X \cup \{Y\}$ and $X' \in Pa(X)$, for any $v \in \{0,1\}^{|Pa(X)|-2}$ and $x \in \{0,1\}$, we have
$$\Pr[X' = x \mid Pa(X) \setminus \{X', X_1\} = v] \ge \eta. \tag{5}$$
Assumptions 1 and 2 are classical assumptions for generalized linear models [22]. Assumption 3 makes sure that each parent node of $X$ has some freedom to take value 0 or 1 with non-zero probability, even when the values of all other parents of $X$ are fixed; it was originally given in [10] with additional justifications. Henceforth, we use $\sigma(\theta, a)$ to denote the reward $\mu_a$ under parameter $\theta$.
Our main algorithm, Causal Combinatorial Pure Exploration-BGLM (CCPE-BGLM), is given in Algorithm 1. The algorithm follows the LUCB framework [15] but has several innovations. In each round $t$, we play three actions, so one round corresponds to three rounds in the general CPE model. In each round $t$, we maintain $\hat\mu^t_{O,a}$ and $\hat\mu^t_{I,a}$ as the estimates of $\mu_a$ from the observational data and the interventional data, respectively. For each estimate, we maintain its confidence interval, $[L^t_{O,a}, U^t_{O,a}]$ and $[L^t_{I,a}, U^t_{I,a}]$ respectively.
At the beginning of round $t$, similar to LUCB, we find two candidate actions: $a^{t-1}_h$, the one with the highest empirical mean so far, and $a^{t-1}_l$, the one with the highest UCB among the rest. If the LCB of $a^{t-1}_h$ is higher than the UCB of $a^{t-1}_l$ up to an $\varepsilon$ error, the algorithm can stop and return $a^{t-1}_h$ as the best action. The second stopping condition in Line 5 is new, and it guarantees that the observational estimates $\hat\mu^t_{O,a}$ are built from enough samples. If the stopping condition is not met, we perform three steps. The first step is the novel observation step compared to LUCB: we perform the null intervention $do()$, collect observational data, use a maximum-likelihood estimator adapted from [22,10] to obtain the parameter estimate $\hat\theta_t$, and then use $\hat\theta_t$ to compute the observational estimate $\hat\mu^t_{O,a} = \sigma(\hat\theta_t, a)$ for every action $a$, where $\sigma(\hat\theta_t, a)$ is the reward of action $a$ under parameter $\hat\theta_t$; this can be done efficiently by following the topological order of the nodes in $G$ with parameter $\hat\theta_t$. From $\hat\mu^t_{O,a}$, we obtain the confidence interval $[L^t_{O,a}, U^t_{O,a}]$ using the bonus term defined later in Eq. (8). In the second step, we play the two candidate actions $a^{t-1}_h$ and $a^{t-1}_l$ and update their interventional estimates and confidence intervals, as in LUCB. In the third step, we merge the two estimates and set the final estimate $\hat\mu^t_a$ to be the midpoint of the intersection of the two confidence intervals. While the second step follows LUCB, the first and third steps are new, and they are crucial for utilizing the observational data to obtain quick estimates for many actions at once.
Utilizing observational data has been explored in past causal bandit studies, but they separate exploration into a pure-observation stage followed by a pure-intervention stage [19,29], and thus their algorithms are not adaptive and cannot provide gap-dependent sample complexity bounds. Our algorithmic innovation is that we interleave the observation step and the intervention step naturally inside the adaptive LUCB framework, so that we achieve an adaptive balance between observation and intervention, getting the best of both worlds.
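To make the interleaving concrete, here is a condensed Python sketch of the shared skeleton of Algorithms 1 and 3. The environment interface (from the earlier toy sketch) and the generic `observe_estimates` callback stand in for the model-specific estimation, so this is an illustrative sketch under our own assumptions, not a faithful reimplementation.

```python
import math

def ccpe(env, actions, eps, delta, observe_estimates, max_rounds=100_000):
    """LUCB-style loop interleaving one observation round with two interventions."""
    mu = {a: 0.0 for a in actions}
    n = {a: 0 for a in actions}
    LI = {a: -math.inf for a in actions}; UI = {a: math.inf for a in actions}
    L = dict(LI); U = dict(UI)
    obs = []
    for t in range(1, max_rounds + 1):
        a_h = max(actions, key=lambda a: (L[a] + U[a]) / 2)   # best merged mean
        a_l = max((a for a in actions if a != a_h), key=lambda a: U[a])
        if U[a_l] <= L[a_h] + eps:
            return a_h                        # a_h is eps-optimal w.h.p.
        obs.append(env.play(None))            # observation step (null intervention)
        LO, UO = observe_estimates(obs)       # model-specific observational CIs
        for a in (a_l, a_h):                  # intervention step on both candidates
            _, y = env.play(a)
            n[a] += 1
            mu[a] += (y - mu[a]) / n[a]
            beta = math.sqrt(math.log(len(actions) * math.log(2 * n[a] + 2) / delta) / n[a])
            LI[a], UI[a] = mu[a] - beta, mu[a] + beta
        for a in actions:                     # merge step: intersect the two CIs
            L[a], U[a] = max(LO[a], LI[a]), min(UO[a], UI[a])
    raise RuntimeError("round budget exhausted")
```

The merge step is what couples the two sources of adaptivity: easy arms keep tight intervals from `observe_estimates` without ever being played, while hard arms shrink through the LUCB plays.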
To get the confidence bound for BGLM, we use the following lemma from [10]:
Lemma 2.
For an action $a = do(S = s)$ and any two weight vectors $\theta$ and $\theta'$, we have
$$|\sigma(\theta, a) - \sigma(\theta', a)| \le E_e\left[\sum_{X \in N_{S,Y}} |V_X^\top(\theta_X - \theta'_X)|\, M^{(1)}\right], \tag{6}$$
where $N_{S,Y}$ is the set of all nodes that lie on some path from $X_1$ to $Y$ excluding $S$, $V_X$ is the value vector of a sample of the parents of $X$ according to parameter $\theta$, $M^{(1)}$ is defined in Assumption 1, and the expectation is taken over the randomness of the noise terms $e = (e_X)_{X \in X \cup \{Y\}}$ of the causal model under parameter $\theta$.
Algorithm 1 CCPE-BGLM($G, A, \varepsilon, \delta, M^{(1)}, M^{(2)}, \kappa, \eta, c$)
1: Input: causal graph $G$, action set $A$, parameters $\varepsilon, \delta, M^{(1)}, M^{(2)}, \kappa, \eta, c$ from Assumptions 1, 2, 3 and Lemma 4.
2: Initialize $M_{0,X} = I$ for every node $X$; $N_a = 0$, $\hat\mu^0_a = 0$, $L^0_a = -\infty$, $U^0_a = \infty$ for all arms $a \in A$.
3: for $t = 1, 2, \cdots$ do
4: $\quad$ $a^{t-1}_h = \arg\max_{a\in A}\hat\mu^{t-1}_a$, $a^{t-1}_l = \arg\max_{a\in A\setminus\{a^{t-1}_h\}} U^{t-1}_a$.
5: $\quad$ if $U^{t-1}_{a^{t-1}_l} \le L^{t-1}_{a^{t-1}_h} + \varepsilon$ and $t \ge \max\{\frac{cD}{\eta^2}\log\frac{nt^2}{\delta},\ \frac{1024(M^{(2)})^2(4D^2-3)D}{\kappa^4\eta}(D^2+\log\frac{3nt^2}{\delta})\}$ then
6: $\quad\quad$ return $a^{t-1}_h$.
7: $\quad$ end if
8: $\quad$ /* Observation step */
9: $\quad$ Perform action $do()$ and observe $X_t$ and $Y_t$; for $a = do()$, $N_a = N_a + 1$.
10: $\quad$ $\hat\theta_t = $ BGLM-estimate$((X_1, Y_1), \cdots, (X_t, Y_t))$.
11: $\quad$ For $a = do(S=s) \in A$, calculate $\hat\mu_{O,a} = \sigma(\hat\theta_t, S)$ and $[L^t_{O,a}, U^t_{O,a}] = [\hat\mu_{O,a} - \beta^a_O(t), \hat\mu_{O,a} + \beta^a_O(t)]$. /* $\beta^a_O(t)$ is defined in Eq. (8) */
12: $\quad$ /* Intervention step */
13: $\quad$ Play $a^{t-1}_l$ and $a^{t-1}_h$, observing rewards $Y^{(l)}_t$ and $Y^{(h)}_t$.
14: $\quad$ $N_{a^{t-1}_l} = N_{a^{t-1}_l} + 1$, $N_{a^{t-1}_h} = N_{a^{t-1}_h} + 1$.
15: $\quad$ For $a \in \{a^{t-1}_l, a^{t-1}_h, do()\}$, update the empirical mean
16: $\quad\quad$ $\hat\mu_{I,a} = \frac{1}{N_a}\sum_{j=1}^{t}(\mathbb{I}\{a = a^{j-1}_l\}Y^{(l)}_j + \mathbb{I}\{a = a^{j-1}_h\}Y^{(h)}_j + \mathbb{I}\{a = do()\}Y_j)$ and $[L^t_{I,a}, U^t_{I,a}] = [\hat\mu_{I,a} - \beta_I(N_a), \hat\mu_{I,a} + \beta_I(N_a)]$. /* $\beta_I(t)$ is defined in Eq. (8) */
17: $\quad$ /* Merge step */
18: $\quad$ For $a \in A$, calculate $[L^t_a, U^t_a] = [L^t_{O,a}, U^t_{O,a}] \cap [L^t_{I,a}, U^t_{I,a}]$ and $\hat\mu^t_a = \frac{L^t_a + U^t_a}{2}$.
19: end for
The key idea in the design and analysis of the algorithm is to divide the actions into two sets: the easy actions and the hard actions. Intuitively, the easy actions are the ones that can be estimated accurately from observational data, while the hard actions require directly playing them to get accurate estimates. The quantity $q_a$ introduced in Section 3 indicates how easy action $a$ is, and it determines the gap-dependent observation threshold $m_{\varepsilon,\Delta}$ (Definition 2), which essentially gives the number of hard actions. In fact, the set of actions in Eq. (4) with $\tau = m_{\varepsilon,\Delta}$ is the set of hard actions, and the rest are easy actions. We thus need to define $q_a$ to represent the hardness of estimation for each $a$.
Algorithm 2 BGLM-estimate
1: Input: data pairs $((X_1, Y_1), (X_2, Y_2), \cdots, (X_t, Y_t))$.
2: Construct $(V_{t,X}, X_t)$ for each $X$, where $V_{t,X}$ is the value of the parents of $X$ at round $t$ and $X_t$ is the value of $X$ at round $t$.
3: for $X \in X \cup \{Y\}$ do
4: $\quad$ $M_{t,X} = M_{t-1,X} + V_{t,X}V_{t,X}^\top$; calculate $\hat\theta_{t,X}$ by solving $\sum_{i=1}^{t}(X_i - f_X(V_{i,X}^\top\hat\theta_{t,X}))V_{i,X} = 0$.
5: end for
6: return $\hat\theta_t$.
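The per-node estimating equation on Line 4 can be handed to any standard root solver. Below is a sketch for a logistic $f_X$ using scipy; the function names and data layout are our own assumptions.

```python
import numpy as np
from scipy.optimize import root

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def estimate_node(V, x):
    """Solve sum_i (x_i - f(V_i . theta)) V_i = 0 for one node (Line 4, Algorithm 2).
    V: (t, d) array of parent values per round; x: (t,) array of the node's values."""
    d = V.shape[1]
    def score(theta):
        resid = x - sigmoid(V @ theta)   # (t,) residuals under candidate theta
        return V.T @ resid               # (d,) score equations, zero at the MLE
    sol = root(score, x0=np.zeros(d), method="hybr")
    return sol.x
```

Running this once per node (in any order, since the score equations decouple across nodes) yields the full $\hat\theta_t$ used by the observation step.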
For CCPE-BGLM, we define its $q^{(L)}_a$ as follows. Let $D = \max_{X \in X \cup \{Y\}} |Pa(X)|$. For a node set $S \subseteq X$, let $\ell_S = |N_{S,Y}|$. Then for $a = do(S = s)$, we define
$$q^{(L)}_a = \frac{1}{\ell_S^2 D^3}. \tag{7}$$
Intuitively, based on Lemma 2 and $\ell_S = |N_{S,Y}|$, a large $\ell_S$ means that the right-hand side of Inequality (6) could be large, and thus it is difficult to estimate $\mu_a$ accurately. Hence the term $q^{(L)}_a$ represents how easy it is to estimate the reward of action $a$. Note that $q^{(L)}_a$ only depends on the graph structure and the set $S$. We can define $m^{(L)}$ and $m^{(L)}_{\varepsilon,\Delta}$ with respect to the $q^{(L)}_a$'s by Definitions 1 and 2. We use two confidence radius terms, one from the estimate of the observational data and the other from the estimate of the interventional data:
$$\beta^a_O(t) = \alpha_O \frac{M^{(1)}}{\kappa\sqrt{\eta}}\sqrt{\frac{1}{q^{(L)}_a t}\log\frac{3nt^2}{\delta}} = \alpha_O \frac{M^{(1)} \ell_S D^{1.5}}{\kappa\sqrt{\eta}}\sqrt{\frac{1}{t}\log\frac{3nt^2}{\delta}}, \qquad \beta_I(t) = \alpha_I \sqrt{\frac{1}{t}\log\frac{|A|\log(2t)}{\delta}}. \tag{8}$$
Parameters $\alpha_O$ and $\alpha_I$ are exploration parameters of our algorithm. For the theoretical guarantee, we choose $\alpha_O = 6\sqrt 2$ and $\alpha_I = 2$, but more aggressive $\alpha_O$ and $\alpha_I$ can be used in experiments (e.g., [28], [18], [13]). The sample complexity of CCPE-BGLM is summarized in the following theorem.

Theorem 1. With probability $1 - \delta$, CCPE-BGLM($G, A, \varepsilon, \delta/2$) returns an $\varepsilon$-optimal arm with sample complexity
$$T = O\left(H_{m^{(L)}_{\varepsilon,\Delta}} \log\frac{|A| H_{m^{(L)}_{\varepsilon,\Delta}}}{\delta}\right), \tag{9}$$
where $m^{(L)}_{\varepsilon,\Delta}$ and $H_{m^{(L)}_{\varepsilon,\Delta}}$ are defined in Definition 2 and Eq. (3) in terms of the $q^{(L)}_a$'s.

If we treat the problem as a naive $|A|$-armed bandit, the sample complexity of LUCB1 is $O(H) = O(\sum_{a \in A} \frac{1}{\max\{\Delta_a, \varepsilon/2\}^2})$, which may contain an exponential number of terms. Now note that $q^{(L)}_a \ge \frac{1}{n^5}$, so it is easy to show that $m^{(L)}_{\varepsilon,\Delta} \le 2n^5$. Hence $H_{m^{(L)}_{\varepsilon,\Delta}}$ contains only a polynomial number of terms. Other causal bandit algorithms also suffer an exponential term, unless they rely on a strong and unreasonable assumption as described in the related work. We achieve an exponential speedup by (a) a specifically designed algorithm for the BGLM model, and (b) interleaving observation and intervention and making the algorithm fully adaptive.
The idea of the analysis is as follows. First, for the $m^{(L)}_{\varepsilon,\Delta}$ hard actions, we rely on the adaptive LUCB to identify the best action, and its sample complexity according to LUCB is $O(H_{m^{(L)}_{\varepsilon,\Delta}} \log(|A| H_{m^{(L)}_{\varepsilon,\Delta}}/\delta))$. Next, for easy actions, we rely on the observational data to provide accurate estimates. According to Eq. (4), every easy action $a$ has the property that $q_a \cdot \max\{\Delta_a, \varepsilon/2\}^2 \ge 1/H_{m^{(L)}_{\varepsilon,\Delta}}$. Using this property together with Lemma 2, we can show that the sample complexity for estimating the easy action rewards is also $O(H_{m^{(L)}_{\varepsilon,\Delta}} \log(|A| H_{m^{(L)}_{\varepsilon,\Delta}}/\delta))$. Finally, the interleaving of observations and interventions keeps the sample complexity in the same order.
Combinatorial Pure Exploration for General Graphs
CPE Algorithm for General Graphs
In this section, we apply a similar idea to the general graph setting, which further allows the existence of hidden variables. The first issue is how to estimate the causal effect (the do effect) $E[Y \mid do(S = s)]$ in general causal graphs from the observational data. The general concept of identifiability [31] is difficult to work with in a sample complexity analysis; here we use the concept of an admissible sequence [31] to achieve this estimation.
Definition 3 (Admissible sequence). An admissible sequence for a general graph $G$ with respect to $Y$ and $S = \{X_1, \cdots, X_k\} \subseteq X$ is a sequence of sets of variables $Z_1, \cdots, Z_k \subseteq X$ such that (1) $Z_i$ consists of nondescendants of $\{X_i, X_{i+1}, \cdots, X_k\}$, and (2) $(Y \perp\!\!\!\perp X_i \mid X_1, \cdots, X_{i-1}, Z_1, \cdots, Z_i)_{G_{\underline{X_i},\overline{X_{i+1}},\cdots,\overline{X_k}}}$, where $G_{\underline{X}}$ means the graph $G$ without out-edges of $X$, and $G_{\overline{X}}$ means the graph $G$ without in-edges of $X$.
Then, for $S = \{X_1, \cdots, X_k\}$ and $s = \{x_1, \cdots, x_k\}$, we can calculate $E[Y \mid do(S = s)]$ by
$$E[Y \mid do(S = s)] = \sum_z P(Y = 1 \mid S = s, Z_i = z_i, i \le k) \cdot P(Z_1 = z_1) \cdots P(Z_k = z_k \mid Z_i = z_i, X_i = x_i, i \le k-1), \tag{10}$$
where $z$ ranges over the values of $\cup_{i=1}^k Z_i$, and $z_i$ is the projection of $z$ onto $Z_i$. For $a = do(S = s)$ with $|S| = k$, we use $\{Z_{a,i}\}_{i=1}^k$ to denote the admissible sequence with respect to $Y$ and $S$, and write $\mathbf{Z}_a = \cup_{i=1}^k Z_{a,i}$, $Z_a = |\mathbf{Z}_a|$, and $Z = \max_a Z_a$. In this paper, we simplify $Z_{a,i}$ to $Z_i$ when there is no ambiguity.
For any $P \subseteq X$, denote $P_t = X_t|_P$, the projection of $X_t$ onto $P$. We define
$$T_{a,z} = \sum_{j=1}^{t}\mathbb{I}\{S_j = s, (\mathbf{Z}_a)_j = z\}, \qquad \hat r_{a,z}(t) = \frac{1}{T_{a,z}}\sum_{j=1}^{t}\mathbb{I}\{S_j = s, (\mathbf{Z}_a)_j = z\}Y_j, \tag{11}$$
$$n_{a,z,l}(t) = \sum_{j=1}^{t}\mathbb{I}\{(Z_i)_j = z_i, (X_i)_j = x_i, i \le l-1\}, \tag{12}$$
$$\hat p_{a,z,l}(t) = \frac{1}{n_{a,z,l}(t)}\sum_{j=1}^{t}\mathbb{I}\{(Z_l)_j = z_l, (Z_i)_j = z_i, (X_i)_j = x_i, i \le l-1\}, \tag{13}$$
where $\hat r_{a,z}(t)$ and $\hat p_{a,z,l}(t)$ are the empirical means of $P(Y \mid S = s, \mathbf{Z}_a = z)$ and $P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1)$, respectively.
Using the above Eq. (10), we estimate each term on the right-hand side for every $z \in \{0,1\}^{Z_a}$ to obtain an estimate of $E[Y \mid a]$ as follows:
$$\hat\mu_{O,a} = \sum_z \hat r_{a,z}(t)\prod_{l=1}^{k}\hat p_{a,z,l}(t). \tag{14}$$
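The estimator in Eqs. (11)-(14) is a direct plug-in computation; the following sketch transcribes it under our own assumed data layout (each observational sample is a dict of variable values plus the reward) and is meant only to make the indexing concrete.

```python
from collections import defaultdict

def estimate_do_effect(samples, S, s, Z_seq):
    """Plug-in estimate of E[Y | do(S = s)] via Eqs. (11)-(14).
    samples: list of (assignment dict, y); S: intervened variables; s: their values;
    Z_seq: the admissible sequence Z_1, ..., Z_k as a list of variable tuples."""
    k = len(S)
    Z_all = [v for Z in Z_seq for v in Z]
    # hat r_{a,z}: empirical mean of Y given S = s and Z_a = z  (Eq. 11)
    num, den = defaultdict(float), defaultdict(int)
    for x, y in samples:
        if all(x[v] == sv for v, sv in zip(S, s)):
            z = tuple(x[v] for v in Z_all)
            num[z] += y
            den[z] += 1
    total = 0.0
    for z, cnt in den.items():
        zmap = dict(zip(Z_all, z))
        prod = num[z] / cnt
        for l in range(1, k + 1):
            def matches(x, upto):
                return (all(x[v] == zmap[v] for Z in Z_seq[:upto] for v in Z)
                        and all(x[S[i]] == s[i] for i in range(upto)))
            # hat p_{a,z,l}: empirical P(Z_l = z_l | Z_i = z_i, X_i = x_i, i <= l-1)
            n_cond = sum(1 for x, _ in samples if matches(x, l - 1))
            n_joint = sum(1 for x, _ in samples if matches(x, l - 1)
                          and all(x[v] == zmap[v] for v in Z_seq[l - 1]))
            prod *= (n_joint / n_cond) if n_cond else 0.0
        total += prod
    return total
```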
Algorithm 3 CCPE-General($G, A, \varepsilon, \delta$)
1: Input: causal graph $G$, action set $A$, parameters $\varepsilon, \delta$, an admissible sequence $\{(Z_a)_i\}$ for each action $a \in A$.
2: Initialize $t = 1$; $T_a = 0$, $T_{a,z} = 0$, $N_a = 0$, $\hat\mu_a = 0$ for all arms $a \in A$ and $z \in \{0,1\}^{z'}$, $z' \in [|X|]$.
3: for $t = 1, 2, \cdots$ do
4: $\quad$ $a^{t-1}_h = \arg\max_{a\in A}\hat\mu^{t-1}_a$, $a^{t-1}_l = \arg\max_{a\in A\setminus\{a^{t-1}_h\}} U^{t-1}_a$.
5: $\quad$ if $U_{a^{t-1}_l} \le L_{a^{t-1}_h} + \varepsilon$ then
6: $\quad\quad$ return $a^{t-1}_h$.
7: $\quad$ end if
8: $\quad$ /* Observation step */
9: $\quad$ Perform the $do()$ operation and observe $X_t$ and $Y_t$; for $a = do()$, $N_a = N_a + 1$.
10: $\quad$ for $a = do(S = s) \in A \setminus \{do()\}$ with an admissible sequence and $S = \{X_1, \cdots, X_k\}$, $s = \{x_1, \cdots, x_k\}$ do
11: $\quad\quad$ Estimate $\hat\mu_{O,a}$ using Eq. (14) and set $[L^t_{O,a}, U^t_{O,a}] = [\hat\mu_{O,a} - \beta^a_O(T_a), \hat\mu_{O,a} + \beta^a_O(T_a)]$. /* $\beta^a_O(t)$ is defined in Eq. (16) */
12: $\quad$ end for
13: $\quad$ /* Intervention step */
14: $\quad$ Play $a^{t-1}_l$ and $a^{t-1}_h$, and observe their rewards.
15: $\quad$ $N_{a^{t-1}_l} = N_{a^{t-1}_l} + 1$, $N_{a^{t-1}_h} = N_{a^{t-1}_h} + 1$.
16: $\quad$ For $a \in \{a^{t-1}_l, a^{t-1}_h, do()\}$, update the empirical mean $\hat\mu_{I,a}$ as in Line 16 of Algorithm 1.
17: $\quad$ Update $[L^t_{I,a}, U^t_{I,a}] = [\hat\mu_{I,a} - \beta_I(N_a), \hat\mu_{I,a} + \beta_I(N_a)]$. /* $\beta_I(t)$ is defined in Eq. (16) */
18: $\quad$ /* Merge step */
19: $\quad$ For $a \in A$, calculate $[L^t_a, U^t_a] = [L^t_{O,a}, U^t_{O,a}] \cap [L^t_{I,a}, U^t_{I,a}]$ and $\hat\mu^t_a = \frac{L^t_a + U^t_a}{2}$.
20: end for
For general graphs, there is no efficient algorithm to determine the existence of an admissible sequence and extract it when it exists. But we can rely on several methods to find admissible sequences in some special cases. First, we can find an adjustment set, a special case of an admissible sequence. For a causal graph $G$, $Z$ is an adjustment set for variable $Y$ and set $S$ if and only if $P(Y = 1 \mid do(S = s)) = \sum_z P(Y = 1 \mid S = s, Z = z)P(Z = z)$. There is an efficient algorithm for deciding the existence of a minimal adjustment set with respect to any set $S$ and $Y$ and finding it [34]. Second, for general graphs without hidden variables, an admissible sequence can easily be found by $Z_j = Pa(X_j) \setminus (Z_1 \cup \cdots \cup Z_{j-1} \cup \{X_1, \cdots, X_{j-1}\})$ (see Theorem 4 in Appendix D.2; a short sketch of this construction follows below). Finally, when the causal graph satisfies certain properties, there exist algorithms to decide and construct admissible sequences [5].
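For the hidden-variable-free case, the construction of Theorem 4 reduces to one pass over the intervened variables in topological order; a minimal sketch (function name ours):

```python
def admissible_sequence(parents, S):
    """Z_j = Pa(X_j) \\ (Z_1 u ... u Z_{j-1} u {X_1, ..., X_{j-1}}) (Theorem 4).
    parents: dict node -> set of parents; S: intervened nodes in topological order."""
    used, Z_seq = set(), []
    for j, Xj in enumerate(S):
        Zj = parents[Xj] - used - set(S[:j])   # drop already-used Z's and earlier X's
        Z_seq.append(Zj)
        used |= Zj
    return Z_seq
```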
Algorithm 3 provides the pseudocode of our algorithm CCPE-General, which has the same framework as Algorithm 1. The main difference is in the first step of updating observational estimates, in which we rely on the do-calculus formula Eq. (10).
For an action $a = do(S = s)$ without an admissible sequence, define $q^{(G)}_a = 0$, meaning that it is hard to estimate through observation. Otherwise, define $q^{(G)}_a$ as:
$$q^{(G)}_a = \min_z\{q_{a,z}\}, \quad \text{where } q_{a,z} = P(S = s, \mathbf{Z}_a = z), \ \forall z \in \{0,1\}^{Z_a}. \tag{15}$$
Similar to CCPE-BGLM, for $a = do(S = s)$ with $|S| = k$, we use the observational and interventional confidence radii
$$\beta^a_O(t) = \alpha_O\sqrt{\frac{1}{t}\log\frac{20k|A|Z_aI_a\log(2t)}{\delta}}, \qquad \beta_I(t) = \alpha_I\sqrt{\frac{1}{t}\log\frac{|A|\log(2t)}{\delta}}, \tag{16}$$
where $\alpha_O$ and $\alpha_I$ are exploration parameters and $I_a = 2^{Z_a}$. For the theoretical guarantee, we choose $\alpha_O = 8$ and $\alpha_I = 2$. Our sample complexity result is given below.
Theorem 2. With probability 1 − δ, CCPE-General(G, A, ε, δ/5) returns an ε-optimal arm with sample complexity
$$T = O\left(H_{m^{(G)}_{\varepsilon,\Delta}} \log\frac{|A| H_{m^{(G)}_{\varepsilon,\Delta}}}{\delta}\right), \tag{17}$$
where $m^{(G)}_{\varepsilon,\Delta}$ and $H_{m^{(G)}_{\varepsilon,\Delta}}$ are defined in Definition 2 and Eq. (3) in terms of the $q^{(G)}_a$'s defined in Eq. (15).
Comparing to LUCB1, since $m^{(G)}_{\varepsilon,\Delta} \le |A|$, our algorithm is always as good as LUCB1, and it is easy to construct cases where our algorithm performs significantly better. Comparing to other causal bandit algorithms, our algorithm also performs significantly better, especially when $m^{(G)}_{\varepsilon,\Delta} \ll m^{(G)}$ or when the gap $\Delta_a$ is large relative to $\varepsilon$. Some causal graphs with candidate action sets and valid admissible sequences are provided in Appendix A, and a more detailed discussion can be found in Appendix B.
Lower Bound for the General Graph Case
To show that our CCPE-General algorithm is nearly minimax optimal, we provide the following lower bound, which is based on parallel graphs. We consider the following class of parallel bandit instances $\xi$ with causal graph $G = (\{X_1, \cdots, X_n, Y\}, E)$: the action set is $A = \{do(X_i = x) \mid x \in \{0,1\}, 1 \le i \le n\} \cup \{do()\}$. The $q^{(G)}_a$ in this case reduces to $q^{(G)}_{do(X_i=x)} = P(X_i = x)$ and $q_{do()} = 1$. Sort the action set so that $q^{(G)}_{a_1}\cdot\max\{\Delta_{a_1},\varepsilon/2\}^2 \le q^{(G)}_{a_2}\cdot\max\{\Delta_{a_2},\varepsilon/2\}^2 \le \cdots \le q^{(G)}_{a_{2n+1}}\cdot\max\{\Delta_{a_{2n+1}},\varepsilon/2\}^2$. Let $p_{\min} = \min_{x\in\{0,1\}^n}P(Y = 1 \mid X = x)$ and $p_{\max} = \max_{x\in\{0,1\}^n}P(Y = 1 \mid X = x)$, and assume $p_{\max} + 2\Delta_{a_{2n+1}} + 2\varepsilon \le 0.9$ and $p_{\min} + \Delta_{\min} \ge 0.1$.
Theorem 3. For the parallel bandit instance class ξ defined above, any (ε, δ)-PAC algorithm has expected sample complexity at least
$$\Omega\left(\left(H_{m^{(G)}_{\varepsilon,\Delta}-1} - \min_{i < m^{(G)}_{\varepsilon,\Delta}}\frac{1}{\max\{\Delta_{a_i}, \varepsilon/2\}^2} - \frac{1}{\max\{\Delta_{do()}, \varepsilon/2\}^2}\right)\log\frac{1}{\delta}\right). \tag{18}$$
Theorem 3 is the first gap-dependent lower bound for causal bandits, and it requires a new construction and technique. Comparing to the upper bound in Theorem 2, the main factor $H_{m^{(G)}_{\varepsilon,\Delta}}$ is the same, except that the lower bound subtracts several additive terms. The first term, $H_{m_{\varepsilon,\Delta}-1}$, is almost equal to the $H_{m_{\varepsilon,\Delta}}$ appearing in Eq. (17), except that it omits the last and smallest additive term of $H_{m_{\varepsilon,\Delta}}$. The second term eliminates one term with minimal $\Delta_{a_i}$, which is common in multi-armed bandits ([20], [16]). The last term appears because the reward of $do()$ must lie between $\mu_{do(X_i=0)}$ and $\mu_{do(X_i=1)}$ and thus $do()$ cannot be the optimal arm.
Future Work
There are many interesting directions worth exploring in the future. First, improving the computational complexity of CPE of causal bandits is an important direction. Second, one can consider developing efficient pure exploration algorithms for causal graphs with partially unknown graph structures. Lastly, identifying the best intervention may be connected with Markov decision processes, and studying their interactions is also an interesting direction.
Appendix A General Classes of Graphs Supporting Theorem 2
For Theorem 2, in this section we show some classes of graphs whose admissible sequences are small for all arms, which makes our result much better than previous algorithms. By the comparison in Appendix B, if $2^{Z+l} \le |A|$, where $Z = \max_a Z_a$ and $l = \max\{|S| : do(S = s) \in A\}$, our algorithm can perform better than previous classical bandit algorithms.
Two-layer graphs
Consider $X = A \cup B$, where $A = \{X_1, \cdots, X_k\}$ is the set of key variables and $B = \{X_{k+1}, \cdots, X_n\}$ are the rest of the variables. We consider $k \le \frac{1}{2}\log_2 n$, and the edge set satisfies $E \subseteq \{(X_i \to X_j) \mid X_i \in A, X_j \in A\} \cup \{(X_i \to X_j) \mid X_i \in A, X_j \in B\} \cup \{(X_i \to Y) \mid X_i \in B\}$. There can also exist hidden confounders between two variables in $A$, namely, $A_1 \leftarrow U \to A_2$ for an unobserved variable $U$ and $A_1, A_2 \in A$.
Figure 1: An Example of Two-layer Graphs
We define the action set as $\{do(S = s) \mid S \subset B, |S| \le l, s \in \{0,1\}^{|S|}\}$ for some $l$. Then, since $A$ is an adjustment set for every arm $do(S = s)$, we know $Z_a \le k \le \frac{1}{2}\log_2 n$ for all actions $a$, so $2^{Z+l} \le \sqrt n \cdot 2^l < \binom{n}{l}\cdot 2^l < |A|$. Consider the scenario in which a farmer wants to optimize the crop yield [19]. $A = \{X_1, X_2, \cdots, X_k\}$ are key elements influencing crop yields, such as temperature, humidity, and soil nutrients. $B = \{X_{k+1}, \cdots, X_n\}$ are different kinds of crops, and $Y$ is the final total reward collected from all crops. Each kind of crop may be influenced by the key elements in $A$ in different ways. Moreover, the elements in $A$ may have causal relationships among themselves: higher humidity may lead to lower temperature. The above causal graph represents this problem very well.
Collaborative graphs
Consider $X = \mathbf{X}_1 \cup \mathbf{X}_2 \cup \cdots \cup \mathbf{X}_l$, where each $\mathbf{X}_i$ ($1 \le i \le l$) has at most $k \le \frac{1}{2}\log n$ nodes. The edge set is contained in $E = \{X \to Y \mid X \in X\} \cup \{X_i \to X_j \mid X_i, X_j \in \mathbf{X}_t, 1 \le t \le l\}$. In each subgraph $\mathbf{X}_i$, we allow the existence of unobserved confounders between two variables of $\mathbf{X}_i$ (we use dashed arrows to represent the confounders). We call this class of graphs collaborative graphs (see Figure 2), since it is adapted from the collaborative causal discovery setting of [1].
We define the action set as $\{do(S = s) \mid |S \cap \mathbf{X}_i| \le 1, |S| \le d\}$. Then for a particular $S = \{X_{i_1}, X_{i_2}, \cdots, X_{i_d}\}$ and indices $i'_1, \cdots, i'_d$ such that $X_{i_j} \in \mathbf{X}_{i'_j}$, $1 \le j \le d$, for some $d \in [0, l]$, we know $T = \cup_{j=1}^d \mathbf{X}_{i'_j} \setminus S$ is an adjustment set (hence also an admissible sequence) for $S$ and $Y$ with $|T| \le \frac{1}{2}d\log n$. Then $2^Z < n^{d/2}\cdot 2^d < \binom{n/k}{d}\cdot 2^d < |A|$ when $n$ is large. Collaborative graphs are useful in many real-world scenarios. For example, many companies may want to cooperate to maximize their profits. Then each subgraph $\mathbf{X}_i$ ($1 \le i \le l$) represents a company, and together they want to find the best intervention to generate the maximum profit.
Causal Tree
Causal trees are a useful structure in real scenarios, considered in [24] and [11]. In this class of graphs, the underlying causal graph of the causal model is a directed tree, in which all edges point away from the root. Denote the root as layer 0, and let layer $i$, $L_i$, contain all nodes at distance $i$ from the root. For simplicity, we assume all unobserved confounders point to two nodes in the same layer. For a set $T$, its c-component $C_T$ consists of all the nodes connected to $T$ by only bi-directed edges (confounders).
For the action set $\{do(S = s) \mid S \subseteq X\}$, we consider $S \cap L_i = S_i$. Then the sequence $Z_i = C_{S_i} \cup Pa(C_{S_i}) \setminus \{Z_0, \cdots, Z_{i-1}, S_0, \cdots, S_{i-1}\}$ is an admissible sequence.
We give an example in Figure 3.
For example, if we consider the action $do(\{X_3, X_4, X_8\} = s)$, $s \in \{0,1\}^3$, then the admissible sequence is $Z_1 = \{X_1, X_2\}$, $Z_2 = \emptyset$, $Z_3 = \{X_7\}$, and we can write
$$P(Y \mid do(X_3, X_4, X_8)) = \sum_{X_1, X_2, X_7} P(Y \mid X_1, X_2, X_3, X_4, X_7, X_8)\, P(X_1, X_2)\, P(X_7 \mid X_1, X_2, X_3, X_4).$$
Figure 3: An Example of Causal Tree with Confounders
B More Detailed Comparison of Sample Complexity
Here we provide a more detailed sample complexity comparison between our Theorem 2 on general graphs with hidden variables and prior studies.
Compare with the LUCB1 algorithm. Comparing to LUCB1, since $m^{(G)}_{\varepsilon,\Delta} \le |A|$, our algorithm never performs worse than LUCB1, and in some cases it performs much better. For example, when we consider $A = \{do(S = s) \mid |S| = k, s \in \{0,1\}^{|S|}\}$ for some constant $k$, we have $|A| = \binom{n}{k}\cdot 2^k$. Assume $Z = \max_a Z_a \le c\log n$ for a constant $c$, and that $q_a$ is approximately $\Theta(\frac{1}{2^{Z_a+k}})$ (recall $q_a = \min_z P(S = s, \mathbf{Z}_a = z)$) for all actions $a$. Then we get $m^{(G)}_{\varepsilon,\Delta} \approx 2^{Z+k} \le n^c\cdot 2^k = o(|A|)$. Thus our algorithm performs much better than LUCB1.
Compare with previous causal bandit algorithms. Since no previous causal bandit algorithm works on a combinatorial action set with hidden variables, we compare with two previous causal bandit algorithms in special cases. First, comparing to [19] on parallel graphs with atomic interventions, we translate the simple regret result in [19] into the sample complexity $O(\frac{m^{(G)}}{\varepsilon^2}\log\frac{1}{\delta})$. For a parallel graph and $a = do(X_i = x)$, we know $q_a = P(X_i = x)$ since $X_i$ has no parents, and our algorithm's bound is $O(H_{m_{\varepsilon,\Delta}}\log(|A|H_{m_{\varepsilon,\Delta}}/\delta))$. Since $m^{(G)}_{\varepsilon,\Delta} = O(m^{(G)})$ and $\max\{\Delta_a, \varepsilon/2\} \ge \varepsilon/2$, our algorithm always performs better; when the gap $\Delta_a$ is large relative to $\varepsilon$, it performs much better because of the gap-dependent sample complexity. [35] consider combinatorial interventions on graphs without hidden variables, so we compare with them in this setting. We translate their simple regret result into the sample complexity $O(\frac{\max\{nC, n|A|\}}{\varepsilon^2}\log\frac{1}{\delta})$, where $C = \sum_{X \in X \cup \{Y\}} 2^{|Pa(X)|}$. Note that when $|Pa(Y)|$ is large, $C \ge 2^{|Pa(Y)|}$ can be really large, whereas our algorithm does not even need knowledge of $Pa(Y)$. Indeed, when $\max_{a = do(S=s)}|S| = k$ is a constant, and assuming $Z \le \log C - k$ and $q_a = \Theta(\frac{1}{2^{Z+k}})$, we have $m^{(G)}_{\varepsilon,\Delta} \le \Theta(C)$; then our dominating term $H_{m^{(G)}_{\varepsilon,\Delta}}$ is smaller than $\frac{nC}{\varepsilon^2}$ because both $\max\{\Delta_a, \varepsilon/2\} \ge \varepsilon/2$ and $m^{(G)}_{\varepsilon,\Delta} \le nC$. Also, in the worst case our algorithm's sample complexity is no more than $O(\frac{|A|}{\varepsilon^2}\log\frac{1}{\delta})$, while the algorithm in [35] may incur $O(\frac{n|A|}{\varepsilon^2}\log\frac{1}{\delta})$. The experiments are provided in Appendix E.
In summary, when compared to prior studies on causal bandit algorithms, our algorithm wins when the reward gaps are relatively large or the size of the admissible sequence is small; and when compared to prior studies on adaptive pure exploration algorithms, our algorithm wins by estimating do effects using observational data and saving estimates on those easy actions.
C Proof of Theorems
C.1 Proof of Theorem 1
Proof. We first provide a lemma from [23] that gives a confidence bound for the maximum likelihood estimator.
Lemma 3. For one node $X \in X \cup \{Y\}$, assume Assumptions 1 and 2 hold and
$$\lambda_{\min}(M_{t,X}) \ge \frac{512D(M^{(2)})^2}{\kappa^4}\left(D^2 + \ln\frac{3nt^2}{\delta}\right);$$
then with probability $1 - \delta/nt^2$, for any vector $v \in \mathbb{R}^{|Pa(X)|}$, at all rounds $t$ the estimator $\hat\theta_{t,X}$ in Algorithm 2 satisfies
$$|v^\top(\hat\theta_{t,X} - \theta^*_X)| \le \frac{3}{\kappa}\sqrt{\log(3nt^2/\delta)}\,\|v\|_{M^{-1}_{t,X}}.$$
Since we need to estimate $\theta_{t,X}$ for all nodes, let $F_1$ be the event that the above inequality fails for some node; then by a union bound, $\Pr\{F_1\} \le n\sum_{t>1}\frac{\delta}{nt^2} \le \delta$ (we may consider only $t > 1$). Now from [10], the gap between our estimate $\sigma(\hat\theta_t, a)$ and the true mean $\sigma(\theta^*, a)$ can be bounded by Lemma 2. We restate Lemma 2 here and give its proof in Appendix D.3.
Lemma 2. For an action $a = do(S = s)$ and any two weight vectors $\theta$ and $\theta'$, we have
$$|\sigma(\theta, a) - \sigma(\theta', a)| \le E_e\left[\sum_{X \in N_{S,Y}} |V_X^\top(\theta_X - \theta'_X)|\, M^{(1)}\right], \tag{6}$$
where $N_{S,Y}$ is the set of all nodes that lie on some path from $X_1$ to $Y$ excluding $S$, $V_X$ is the value vector of a sample of the parents of $X$ according to parameter $\theta$, $M^{(1)}$ is defined in Assumption 1, and the expectation is taken over the randomness of the noise terms $e = (e_X)_{X \in X \cup \{Y\}}$ of the causal model under parameter $\theta$.
By definition, for any action $a = do(S = s)$, $\ell_a = |N_{S,Y}| \in \{1, \cdots, n\}$. We then introduce Lecué and Mendelson's inequality as presented in [30].

Lemma 4 ([30], Lecué and Mendelson's inequality). Let $v \in \mathbb{R}^D$ be a random column vector and let $v_1, \cdots, v_n$ be $n$ independent copies of $v$. Assume that for any $z \in \mathrm{Sphere}(D)$,
$$\Pr[|v^\top z| > \alpha^{1/2}] \ge \beta;$$
then there exists a constant $c > 0$ such that when $n \ge \frac{cD}{\beta^2}$,
$$\Pr\left[\lambda_{\min}\left(\frac{1}{n}\sum_{i=1}^n v_i v_i^\top\right) \le \frac{\alpha\beta}{2}\right] \le e^{-n\beta^2/c}.$$
This lemma helps us bound the minimum eigenvalue of $M_{t,X} = \sum_{1\le i\le t} V_{i,X}V_{i,X}^\top$. To satisfy the condition of Lemma 4, we provide a lemma similar to one in [10]:
Lemma 5. Under Assumption 3, for any node $X \in X$ and $z \in \mathrm{Sphere}(|Pa(X)|)$,
$$\Pr\left[|Pa(X)\cdot z| > \frac{1}{\sqrt{4D^2-3}}\right] \ge \eta.$$
Proof. The proof is similar to that in [10] with a modification; for completeness, we provide the full proof below. Let $|Pa(X)| = d \le D$ and $z = (z_1, z_2, \cdots, z_d)$. Let $Pa(X) = (X_{i_1} = X_1, X_{i_2}, \cdots, X_{i_d})$ and $pa(X) = (x_{i_1} = 1, x_{i_2}, \cdots, x_{i_d})$. We denote $d_0 = \sqrt{d-1} + \frac{1}{2\sqrt{d-1}}$. If $|z_1| \ge \frac{d_0}{\sqrt{d_0^2+1}}$, then by the Cauchy-Schwarz inequality we can deduce that
$$|pa(X)\cdot z| \ge |z_1| - \sum_{i=2}^d |z_i| \ge \frac{d_0}{\sqrt{d_0^2+1}} - \sqrt{(d-1)\sum_{i=2}^d |z_i|^2} \ge \frac{d_0}{\sqrt{d_0^2+1}} - \sqrt{(d-1)\left(1 - \frac{d_0^2}{d_0^2+1}\right)} = \frac{1}{2\sqrt{(d_0^2+1)(d-1)}} = \frac{1}{\sqrt{4d^2-3}}.$$
Thus when $|z_1| \ge \frac{d_0}{\sqrt{d_0^2+1}}$, we have $|Pa(X)\cdot z| > \frac{1}{\sqrt{4d^2-3}} \ge \frac{1}{\sqrt{4D^2-3}}$. If $|z_1| < \frac{d_0}{\sqrt{d_0^2+1}}$, assume $|z_2| = \max_{2\le i\le d}|z_i|$; then
$$|z_2| \ge \frac{1}{\sqrt{d-1}}\sqrt{\sum_{i=2}^d |z_i|^2} \ge \frac{\sqrt{1 - (d_0/\sqrt{d_0^2+1})^2}}{\sqrt{d-1}} = \frac{2}{\sqrt{4d^2-3}}. \tag{19}$$
By Assumption 3,
$$\Pr\{X_{i_1}=1, X_{i_2}=x_{i_2}, \cdots, X_{i_d}=x_{i_d}\} = \Pr\{X_{i_2}=x_{i_2} \mid X_{i_1}=1, X_{i_3}=x_{i_3}, \cdots, X_{i_d}=x_{i_d}\}\cdot\Pr\{X_{i_1}=1, X_{i_3}=x_{i_3}, \cdots, X_{i_d}=x_{i_d}\} \ge \eta\cdot\Pr\{X_{i_1}=1, X_{i_3}=x_{i_3}, \cdots, X_{i_d}=x_{i_d}\},$$
so we have
$$\begin{aligned}
\Pr\left[|Pa(X)\cdot z| \ge \tfrac{1}{\sqrt{4D^2-3}}\right]
&= \sum_{x_{i_3},\cdots,x_{i_d}}\Pr\{X_{i_1}=1, X_{i_2}=1, X_{i_3}=x_{i_3},\cdots\}\cdot\mathbb{I}\left\{|(1,1,x_{i_3},\cdots,x_{i_d})\cdot z| \ge \tfrac{1}{\sqrt{4D^2-3}}\right\} \\
&\quad + \sum_{x_{i_3},\cdots,x_{i_d}}\Pr\{X_{i_1}=1, X_{i_2}=0, X_{i_3}=x_{i_3},\cdots\}\cdot\mathbb{I}\left\{|(1,0,x_{i_3},\cdots,x_{i_d})\cdot z| \ge \tfrac{1}{\sqrt{4D^2-3}}\right\} \\
&\ge \eta\sum_{x_{i_3},\cdots,x_{i_d}}\Pr\{X_{i_1}=1, X_{i_3}=x_{i_3},\cdots\}\left[\mathbb{I}\left\{|(1,1,x_{i_3},\cdots,x_{i_d})\cdot z| \ge \tfrac{1}{\sqrt{4D^2-3}}\right\} + \mathbb{I}\left\{|(1,0,x_{i_3},\cdots,x_{i_d})\cdot z| \ge \tfrac{1}{\sqrt{4D^2-3}}\right\}\right] \\
&\ge \eta,
\end{aligned}$$
where the last inequality holds because $\sum_{x_{i_3},\cdots,x_{i_d}}\Pr\{X_{i_1}=1, X_{i_3}=x_{i_3},\cdots,X_{i_d}=x_{i_d}\} = 1$ and
$$\mathbb{I}\left\{|(1,1,x_{i_3},\cdots,x_{i_d})\cdot z| \ge \tfrac{1}{\sqrt{4D^2-3}}\right\} + \mathbb{I}\left\{|(1,0,x_{i_3},\cdots,x_{i_d})\cdot z| \ge \tfrac{1}{\sqrt{4D^2-3}}\right\} \ge 1.$$
The latter holds because otherwise
$$|z_2| = |(1,1,x_{i_3},\cdots,x_{i_d})\cdot z - (1,0,x_{i_3},\cdots,x_{i_d})\cdot z| < \frac{2}{\sqrt{4D^2-3}} \le \frac{2}{\sqrt{4d^2-3}},$$
which contradicts Eq. (19).
which leads to a contradiction of Eq. (19). We thus complete the proof of Lemma 5.
Now let $F_2$ be the event
$$F_2 = \left\{\exists X \in X \cup \{Y\}:\ \lambda_{\min}\left(\frac{1}{t}\sum_{i=1}^t V_{i,X}V_{i,X}^\top\right) \le \frac{\eta}{2(4D^2-3)} \ \text{for some } t \ge \frac{cD}{\eta^2}\log\frac{nt^2}{\delta}\right\}.$$
Then
$$\Pr\{F_2\} \le n\sum_{t \ge (cD/\eta^2)\log(nt^2/\delta)} e^{-t\eta^2/c} \le n\sum_{t \ge (cD/\eta^2)\log(nt^2/\delta)}\frac{\delta}{nt^2} \le \left(\frac{\pi^2}{6}-1\right)\delta \le \delta.$$
Now from Lemmas 2, 3 and 4, for all $a = do(S = s)$, with probability $1-2\delta$, for all $t \ge \max\{\frac{cD}{\eta^2}\log\frac{nt^2}{\delta}, \frac{1024(M^{(2)})^2(4D^2-3)D}{\kappa^4\eta}(D^2 + \log\frac{3nt^2}{\delta})\}$, we can deduce that $\lambda_{\min}(M_{t,X}) \ge \frac{\eta t}{2(4D^2-3)}$. Then
$$\begin{aligned}
|\sigma(\hat\theta_t, S) - \mu_a|
&\le \sum_{X \in N_{S,Y}} |V_{t,X}^\top(\hat\theta_t - \theta^*)|\,M^{(1)}
\le \frac{3M^{(1)}}{\kappa}\sqrt{\log(3nt^2/\delta)}\sum_{X \in N_{S,Y}}\|V_{t,X}\|_{M^{-1}_{t,X}} \\
&\le \frac{3M^{(1)}}{\kappa}\sqrt{\log(3nt^2/\delta)}\sum_{X \in N_{S,Y}}\sqrt{\frac{D}{\lambda_{\min}(M_{t,X})}}
\le \frac{3\sqrt 2 M^{(1)}}{\kappa}\sqrt{D(4D^2-3)\log(3nt^2/\delta)}\sum_{X \in N_{S,Y}}\frac{1}{\sqrt{\eta t}} \\
&\le \frac{6\sqrt 2 M^{(1)}}{\kappa\sqrt\eta}\sqrt{\frac{\ell_a^2 D^3}{t}\log(3nt^2/\delta)}
= \frac{6\sqrt 2 M^{(1)}}{\kappa\sqrt\eta}\sqrt{\frac{1}{q_a t}\log(3nt^2/\delta)} = \beta^a_O(t).
\end{aligned}$$
Now we prove that Algorithm 1 must terminate within $T$ rounds, where
$$T = \frac{1152(M^{(1)})^2}{\kappa^2\eta}H_{m^{(L)}_{\varepsilon,\Delta}}\log\frac{3nT^2}{\delta} + 64 H_{m^{(L)}_{\varepsilon,\Delta}}\log\frac{|A|\log(2T)}{\delta}.$$
In the following proof, we assume $F_1$ and $F_2$ do not happen. Then the true means never fall outside the observational or interventional confidence intervals.
When $t \ge T_1$ with $T_1 = \frac{1152(M^{(1)})^2}{\kappa^2\eta}H_{m^{(L)}_{\varepsilon,\Delta}}\log\frac{3nT_1^2}{\delta}$, for every $a \ne do()$ such that $q^{(L)}_a \ge \frac{1}{H_{m^{(L)}_{\varepsilon,\Delta}}\cdot\max\{\Delta_a,\varepsilon/2\}^2}$, letting $\beta_a(t) = \frac{U^t_a - L^t_a}{2}$, we have
$$\beta_a(t) := \frac{U^t_a - L^t_a}{2} \le \beta^a_O(T_1) \le \frac{6\sqrt 2 M^{(1)}}{\kappa\sqrt\eta}\sqrt{\frac{1}{q^{(L)}_a T_1}\log(3nt^2/\delta)} \le \frac{\max\{\Delta_a,\varepsilon/2\}}{4}.$$
Then we provide the following lemma:
Lemma 6. If at round $t$ we have
$$\beta_{a^t_h}(t) \le \frac{\max\{\Delta_{a^t_h},\varepsilon/2\}}{4}, \qquad \beta_{a^t_l}(t) \le \frac{\max\{\Delta_{a^t_l},\varepsilon/2\}}{4},$$
where $a^t_h, a^t_l$ are the actions chosen by the algorithm at round $t$, then the algorithm stops at round $t+1$.

Proof. First, if the optimal arm $a^* = a^t_h$, then
$$\begin{aligned}
\hat\mu_{a^t_l} + \beta_{a^t_l}(t)
&\le \mu_{a^t_l} + 2\beta_{a^t_l}(t) \le \mu_{a^t_l} + \frac{\max\{\Delta_{a^t_l},\varepsilon/2\}}{2} \le \mu_{a^t_h} - \Delta_{a^t_l} + \frac{\max\{\Delta_{a^t_l},\varepsilon/2\}}{2} \\
&\le \hat\mu_{a^t_h} + \beta_{a^*}(T_{a^*}(t)) - \Delta_{a^t_l} + \frac{\max\{\Delta_{a^t_l},\varepsilon/2\}}{2}
\le \hat\mu_{a^t_h} - \beta_{a^*}(T_{a^*}(t)) + \frac{\max\{\Delta_{a^*},\varepsilon/2\} + \max\{\Delta_{a^t_l},\varepsilon/2\}}{2} - \Delta_{a^t_l} \\
&\le \hat\mu_{a^t_h} - \beta_{a^*}(T_{a^*}(t)) + \frac{\Delta_{a^*} + \varepsilon/2 + \Delta_{a^t_l} + \varepsilon/2}{2} - \Delta_{a^t_l}
\le \hat\mu_{a^t_h} - \beta_{a^*}(T_{a^*}(t)) + \varepsilon,
\end{aligned}$$
so the stopping condition holds at round $t+1$.
If the optimal arm $a^* \ne a^t_h$ and the algorithm does not stop at round $t+1$, we claim $a^* = a^t_l$. Otherwise, assume $a^* \ne a^t_l$; then
$$\begin{aligned}
\hat\mu^t_{a^t_h} &\le \mu_{a^t_h} + \frac{\max\{\Delta_{a^t_h},\varepsilon/2\}}{4} \quad &(20) \\
&= \mu_{a^*} - \Delta_{a^t_h} + \frac{\max\{\Delta_{a^t_h},\varepsilon/2\}}{4} \quad &(21) \\
&\le \mu_{a^*} - \frac{3\Delta_{a^t_h}}{4} + \varepsilon/4 \quad &(22) \\
&\le \hat\mu^t_{a^t_l} + \frac{\max\{\Delta_{a^*},\varepsilon/2\}}{4} - \frac{3\Delta_{a^t_h}}{4} + \varepsilon/4 \quad &(23) \\
&\le \hat\mu^t_{a^t_l} + \varepsilon/2 - \frac{\Delta_{a^t_h}}{2}. \quad &(24)
\end{aligned}$$
From the definition of $a^t_h$ (it has the largest empirical mean), (24) gives $\varepsilon > \Delta_{a^t_h} \ge \Delta_{a^*}$, hence $\beta_{a^t_h}(t) \le \varepsilon/4$ and $\beta_{a^t_l}(t) \le \varepsilon/4$. Then $\hat\mu^t_{a^t_l} + \beta_{a^t_l}(t) + \beta_{a^t_h}(t) \le \mu_{a^t_l} + \varepsilon/2 \le \hat\mu^t_{a^t_h} + \varepsilon$, which means the algorithm stops at round $t+1$, a contradiction. Now we can assume $a^* \ne a^t_l$ and $a^* \ne a^t_h$. Then
$$\mu_{a^t_l} + 2\beta_{a^t_l}(t) \ge \hat\mu_{a^t_l} + \beta_{a^t_l}(t) \ge \hat\mu_{a^*} + \beta_{a^*}(T_{a^*}(t)) \ge \mu_{a^*} = \mu_{a^t_l} + \Delta_{a^t_l}. \tag{25}$$
Thus
$$\Delta_{a^t_l} \le 2\beta_{a^t_l}(t) \le \frac{\max\{\Delta_{a^t_l},\varepsilon/2\}}{2}, \tag{26}$$
which leads to $\Delta_{a^t_l} \le \varepsilon/2$ and $\beta_{a^t_l}(t) \le \varepsilon/8$. Also,
$$\mu_{a^t_h} + \beta_{a^t_h}(t) \ge \hat\mu_{a^t_h} \ge \hat\mu_{a^t_l} \ge \mu_{a^*} - \beta_{a^t_l}(t) = \mu_{a^t_h} + \Delta_{a^t_h} - \beta_{a^t_l}(t), \tag{27}$$
which leads to
$$\frac{\max\{\Delta_{a^t_h},\varepsilon/2\}}{4} \ge \Delta_{a^t_h} - \varepsilon/8, \tag{28}$$
and thus $\Delta_{a^t_h} \le \varepsilon/2$ and $\beta_{a^t_h}(t) \le \varepsilon/8$. Hence $\hat\mu^t_{a^t_l} + \beta_{a^t_l}(t) + \beta_{a^t_h}(t) \le \hat\mu_{a^t_l} + \varepsilon/2 \le \hat\mu^t_{a^t_h} + \varepsilon$, which means the algorithm stops at round $t+1$.
Denote by $N_a(t)$ the value of the variable $N_a$ at round $t$. By Lemma 6, when $t \ge T_1$, at each round at least one intervention is performed on some action $a$ with $\beta_a(t) \ge \frac{\max\{\Delta_a,\varepsilon/2\}}{4}$, which implies $q_a < \frac{1}{H_{m^{(L)}_{\varepsilon,\Delta}}\cdot\max\{\Delta_a,\varepsilon/2\}^2}$ and $N_a(t) \le \frac{64}{\max\{\Delta_a,\varepsilon/2\}^2}\log\frac{|A|\log(2t)}{\delta}$ (since $\beta_a(t) \le \beta_I(N_a(t)) = 2\sqrt{\frac{1}{N_a(t)}\log\frac{|A|\log(2t)}{\delta}}$). Denote the set of these arms by $M$; then
$$T - T_1 \le \sum_{a \in M}\frac{64}{\max\{\Delta_a,\varepsilon/2\}^2}\log\frac{|A|\log(2T)}{\delta} \le 64 H_{m^{(L)}_{\varepsilon,\Delta}}\log\frac{|A|\log(2T)}{\delta}.$$
Hence
$$T \le \frac{1152(M^{(1)})^2}{\kappa^2\eta}H_{m^{(L)}_{\varepsilon,\Delta}}\log\frac{3nT^2}{\delta} + 64 H_{m^{(L)}_{\varepsilon,\Delta}}\log\frac{|A|\log(2T)}{\delta}.$$
Now we prove a sample complexity bound for Algorithm 1 by the lemma above:
Lemma 7. If $T = NQ\log\frac{3nT^2}{\delta} + 64Q\log\frac{|A|\log(2T)}{\delta}$ for some constant $N$, then $T = O(Q\log(Q|A|/\delta))$.
Proof. Since $f(x) = x - NQ\log\frac{3nx^2}{\delta} - 64Q\log\frac{|A|\log(2x)}{\delta}$ is an increasing function when $x \ge 64Q$, we only need to show that there exists a constant $C \ge 64$ such that $f(CQ\log\frac{Q|A|}{\delta}) \ge f(T) = 0$. Then
$$\begin{aligned}
f\left(CQ\log\frac{Q|A|}{\delta}\right)
&= CQ\log\frac{Q|A|}{\delta} - NQ\log\frac{3n}{\delta} - 2NQ\log\frac{CQ\log(Q|A|/\delta)}{\delta} - 64Q\log\frac{|A|\log(2CQ\log(Q|A|/\delta))}{\delta} \\
&\ge (C - 2N\log C - N)Q\log\frac{Q|A|}{\delta} - 64Q\log\frac{|A|}{\delta} - (64+2N)Q\log(\log 2C + \log(Q\log(Q|A|/\delta))) \\
&\ge (C - 2N\log C - N - 64)Q\log\frac{Q|A|}{\delta} - (64+2N)Q\log\log(2CQ) - (64+2N)Q\log\log(Q|A|/\delta) \quad (29) \\
&\ge (C - 2N\log C - N - 192 - (64+2N)\log\log 2C)\,Q\log(Q|A|/\delta). \quad (30)
\end{aligned}$$
Choosing $C$ large enough that the coefficient in (30) is nonnegative gives $f(CQ\log\frac{Q|A|}{\delta}) \ge 0 = f(T)$, hence $T \le CQ\log\frac{Q|A|}{\delta}$, which proves Lemma 7. Applying Lemma 7 to the bound above, the total sample complexity is
$$T = O\left(H_{m^{(L)}_{\varepsilon,\Delta}}\log\frac{H_{m^{(L)}_{\varepsilon,\Delta}}|A|}{\delta}\right).$$
Finally, we prove the correctness of our algorithm. The stopping rule is $\hat\mu^t_{a^t_l} + \beta_{a^t_l}(t) \le \hat\mu^t_{a^t_h} - \beta_{a^t_h}(t) + \varepsilon$, so if $a^* \ne a^t_h$, we have
$$\mu_{a^t_h} + \varepsilon \ge \hat\mu_{a^t_h} - \beta_{a^t_h}(t) + \varepsilon \ge \hat\mu_{a^t_l} + \beta_{a^t_l}(t) \tag{31}$$
$$\ge \hat\mu_{a^*} + \beta_{a^*}(T_{a^*}(t)) \tag{32}$$
$$\ge \mu_{a^*}. \tag{33}$$
Hence either $a^* = a^t_h$ or $a^t_h$ is an $\varepsilon$-optimal arm. This completes the proof.
C.2 Proof of Theorem 2
Proof. In this proof, we denote by $T_{a,z}(t)$, $T_a(t)$, $N_a(t)$ the values of $T_{a,z}$, $T_a$, $N_a$ at round $t$, respectively. For convenience, we prove that CCPE-General($G, A, \varepsilon, \delta$) outputs an $\varepsilon$-optimal arm with probability $1 - 5\delta$. For simplicity, we write $H^{(G)}$ for $H_{m^{(G)}_{\varepsilon,\Delta}}$. In round $t$, $T_{a,z}(t) = \sum_{j=1}^{t}\mathbb{I}\{S_j = s, (\mathbf{Z}_a)_j = z\}$ and $\hat q_{a,z} = \frac{T_{a,z}(t)}{t}$. By the Chernoff bound, at any round $t$ such that $q_{a,z} \ge \frac{6}{t}\log\frac{6|A|I_a}{\delta}$, with probability at most $\delta/(3|A|I_a)$,
$$|\hat q_{a,z} - q_{a,z}| > \sqrt{\frac{6q_{a,z}}{t}\log\frac{6|A|I_a}{\delta}}.$$
Hence
$$\hat q_a = \min_z\{\hat q_{a,z}\} \le \min_z\left\{q_{a,z} + \sqrt{\frac{6q_{a,z}}{t}\log\frac{6|A|I_a}{\delta}}\right\} = q_a + \sqrt{\frac{6q_a}{t}\log\frac{6|A|I_a}{\delta}}. \tag{34}$$
When $q_a \ge \frac{3}{t}\log\frac{6|A|I_a}{\delta}$, $f(x) = x - \sqrt{\frac{6x}{t}\log\frac{6|A|I_a}{\delta}}$ is an increasing function, so
$$\hat q_a \ge \min_z\left\{q_{a,z} - \sqrt{\frac{6q_{a,z}}{t}\log\frac{6|A|I_a}{\delta}}\right\} = q_a - \sqrt{\frac{6q_a}{t}\log\frac{6|A|I_a}{\delta}}. \tag{35}$$
Let $F_1$ be the event that at least one of the above inequalities fails; then $\Pr\{F_1\} \le \delta$. Now let $F_2$ and $F_3$ be the events that during some (sufficiently large) round $t$, the true mean of some arm falls outside $[L^t_{O,a}, U^t_{O,a}]$ or $[L^t_{I,a}, U^t_{I,a}]$, respectively. By the anytime confidence bound, $\Pr\{F_3\} \le \delta$. By Lemmas 8 and 10, we will prove $\Pr\{F_2\} \le 3\delta$.
To prove the concentration bound, we need the following lemma, which is a Chernoff-type anytime confidence bound for Bernoulli variables. To the best of our knowledge, it is the first anytime confidence bound based on the Chernoff inequality.

Lemma 8. For $X_1, X_2, \cdots$ drawn from a Bernoulli distribution with mean $\mu$, denote $\bar X_t = \frac{1}{t}\sum_{i=1}^t X_i$; then
$$P\left(\exists t \ge \frac{3}{\mu}\log\frac{20\log(2t)}{\delta}:\ \bar X_t - \mu > 2\sqrt{\frac{3\mu}{t}\log\frac{20\log(2t)}{\delta}}\right) \le \delta.$$
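Before the proof, a quick empirical sanity check of the bound is easy to run; the harness below is entirely our own and simply estimates how often the envelope is ever crossed over many Bernoulli runs.

```python
import numpy as np

def violation_rate(mu=0.3, delta=0.1, horizon=2000, trials=500, seed=1):
    """Empirical frequency with which the Lemma 8 envelope is crossed."""
    rng = np.random.default_rng(seed)
    bad = 0
    for _ in range(trials):
        x = rng.random(horizon) < mu
        means = np.cumsum(x) / np.arange(1, horizon + 1)
        t = np.arange(1, horizon + 1)
        rad = 2 * np.sqrt(3 * mu / t * np.log(20 * np.log(2 * t) / delta))
        valid = t >= 3 / mu * np.log(20 * np.log(2 * t) / delta)
        if np.any(valid & (means - mu > rad)):
            bad += 1
    return bad / trials   # should stay below delta
```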
The main proof is achieved by modifying part of Lemma 1 in [13]; for completeness, we provide the full proof here. Let $S_t = \sum_{i=1}^t (X_i - \mu)$ and $\varphi(x) = \sqrt{3\mu x\log\frac{\log x}{\delta}}$. We define the sequence $\{u_k\}_{k\ge 0}$ by $u_0 = 1$ and $u_{k+1} = (1+C)u_k$, where $C$ is a constant. Then by a simple union bound and the Chernoff inequality, we have
$$P\left(\exists k \ge 1: S_{u_k} \ge \sqrt{1+C}\,\varphi(u_k)\right) \le \sum_{k=1}^{\infty}\exp\left(-\frac{(1+C)\cdot 3\mu u_k\log(\log(u_k)/\delta)}{3\mu u_k}\right) \le \sum_{k=1}^{\infty}\left(\frac{\delta}{k\log(1+C)}\right)^{1+C} \le \left(1+\frac{1}{C}\right)\left(\frac{\delta}{\log(1+C)}\right)^{1+C}.$$
Next we prove a Chernoff-type maximal inequality:
$$P(\exists t \in [n],\ S_t \ge x) \le \exp\left(-\frac{x^2}{3\mu n}\right). \tag{36}$$
First, $\{S_t\}$ is a martingale, so $\{e^{sS_t}\}$ is a non-negative submartingale. By Doob's submartingale inequality, we have
$$P\left(\sup_{0\le i\le n} S_i \ge x\right) = P\left(\sup_{0\le i\le n} e^{sS_i} \ge e^{sx}\right) \le \frac{E[e^{sS_n}]}{e^{sx}} = \frac{(\mu e^{s(1-\mu)} + (1-\mu)e^{-s\mu})^n}{e^{sx}} = \frac{((1-\mu)+\mu e^s)^n}{e^{sx+sn\mu}} \le \frac{e^{n\mu(e^s-1)}}{e^{sx+sn\mu}}.$$
Choosing $s = \ln(1+\frac{x}{n\mu})$, by the proof of the Chernoff bound with $\mu \ge \frac{3}{t}\log\frac{20\log(2t)}{\delta}$, we easily get
$$P\left(\sup_{0\le i\le n} S_i \ge x\right) \le \exp\left(-\frac{x^2}{3\mu n}\right).$$
Now with this inequality, we can derive the lemma:
$$P\left(\exists t \in \{u_k+1,\cdots,u_{k+1}-1\}: S_t - S_{u_k} \ge \sqrt C\,\varphi(u_{k+1})\right) \le P\left(\exists t \in [u_{k+1}-u_k-1]: S_t \ge \sqrt C\,\varphi(u_{k+1})\right) \le \exp\left(-\frac{C\,u_{k+1}}{u_{k+1}-u_k-1}\log\frac{\log(u_{k+1})}{\delta}\right) \le \exp\left(-(1+C)\log\frac{\log(u_{k+1})}{\delta}\right) \le \left(\frac{\delta}{(k+1)\log(1+C)}\right)^{1+C} \le \left(\frac{\delta}{\log(1+C)}\right)^{1+C}.$$
Hence with probability at least $1 - (2+1/C)\left(\frac{\delta}{\log(1+C)}\right)^{1+C}$, for $u_k \le t \le u_{k+1}$ we have $S_t = (S_t - S_{u_k}) + S_{u_k} \le \sqrt C\,\varphi(u_{k+1}) + \sqrt{1+C}\,\varphi(u_k) \le (1+\sqrt C)\varphi((1+C)t)$. Now denote $\delta' = (2+1/C)\left(\frac{\delta}{\log(1+C)}\right)^{1+C}$, i.e., $\delta = \log(1+C)\left(\frac{C\delta'}{2+C}\right)^{\frac{1}{1+C}}$; then
$$P\left(\exists t:\ S_t \ge (1+\sqrt C)\sqrt{3(1+C)\mu t\log\left(\left(\frac{2+C}{C\delta'}\right)^{\frac{1}{1+C}}\cdot\frac{\log((1+C)t)}{\log(1+C)}\right)}\right) \le \delta'.$$
Choosing $C = 0.25$, and noting that $\frac{\log(1.25t)}{\log 1.25} \le \frac{\log(2t)}{\log 2}$, $(2.25/0.25)^{0.8}/\log 2 < 10$, and $1.5\cdot\sqrt{1.25} < 2$, we complete the proof of the lemma.
Lemma 9. Denote by $T_{a,z,l}(t)$ the number of observations from round 1 to round $t$ in which $Z_{a,i} = z_i, X_i = x_i$ for all $i \le l-1$. Then we have $T_{a,z,l}(t) \ge 2^{k-l+1+|Z_{a,l}|}\,T_{a,z}(t)$.

Proof. The proof is straightforward. $T_{a,z}(t)$ is the number of observations from round 1 to round $t$ in which $Z_i = z_i, X_i = x_i$ for all $1 \le i \le k$. Hence the number of observations with $Z_i = z_i, X_i = x_i$ for $i \le l-1$ is at least $2^{|Z_l|}\cdot 2^{k-(l-1)}\cdot T_{a,z}(t) = 2^{k-l+1+|Z_l|}\,T_{a,z}(t)$.

Lemma 10. With probability $1 - 3\delta$, for all rounds $t$,
$$|\hat\mu_{O,a} - \mu_a| < 8\sqrt{\frac{1}{T_a(t)}\log\frac{20kZ_aI_a|A|\log(2t)}{\delta}}, \tag{37}$$
where $I_a = 2^{Z_a}$.

Proof. If $T_a(t) \le 12\log\frac{20kZ_aI_a|A|\log(2t)}{\delta}$, then the right-hand side of (37) is greater than 1 and the lemma trivially holds. In this proof, we write $Z_i$ for $Z_{a,i}$ for simplicity. By the classical anytime confidence bound, with probability $1 - \delta/(k\cdot I_a)$, for all rounds $t$ we have
$$|\hat r_{a,z}(t) - P(Y = 1 \mid S = s, \mathbf{Z}_a = z)| \le \sqrt{\frac{4}{T_{a,z}(t)}\log\frac{|A|\log(2t)}{\delta}}.$$
First, let $s(t) = 20kZ_aI_a|A|\log(2t)$. If $t < \frac{6}{q_a}\log(s(t)/\delta)$, then let $Q = \frac{6}{q_a}\log(s(t)/\delta)$; based on $T_a(t) \ge 12\log\frac{s(t)}{\delta}$, we have
$$P\left(t < \frac{6}{q_a}\log(s(t)/\delta)\right) \le P\left(T_a(Q) \ge 12\log\frac{s(t)}{\delta}\right).$$
Thus by the Chernoff bound, we know
$$P\left(T_a(Q) \ge 12\log\frac{s(t)}{\delta}\right) = P(\hat q_a(Q) \ge 2q_a) \le \delta,$$
where $\hat q_a(Q) = \frac{T_a(Q)}{Q}$. Hence with probability at least $1-\delta$, we have $t \ge \frac{6}{q_a}\log(s(t)/\delta)$. Also, since $\hat P(Z_i = z_i, X_i = x_i, i \le l-1) = T_{a,z,l}(t)/t$, by the Chernoff bound, when $t \ge \frac{6}{q_a}\log(s(t)/\delta)$, with probability $1 - \exp\{-\frac{P(Z_i = z_i, X_i = x_i, i \le l-1)\cdot t}{3}\} \ge 1-\delta$, we have $\hat P(Z_i = z_i, X_i = x_i, i \le l-1) \le 2P(Z_i = z_i, X_i = x_i, i \le l-1)$.
Now by Lemmas 8 and 9, with probability $1 - \delta/(k\cdot I_a)$, since
$$P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1) \ge \frac{q_a}{P(Z_i = z_i, X_i = x_i, i \le l-1)} \ge \frac{q_a}{2\hat P(Z_i = z_i, X_i = x_i, i \le l-1)} \ge \frac{q_a t}{2T_{a,z,l}(t)} \ge \frac{3}{T_{a,z,l}(t)}\log\frac{20kI_a|A|\log(2t)}{\delta},$$
by Lemma 8 we have
$$|\hat p_{a,z,l}(t) - P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1)| \le 2\sqrt{\frac{3P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1)}{T_{a,z,l}(t)}\log\frac{20kI_a|A|\log(2t)}{\delta}} \le 2\sqrt{\frac{3P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1)}{2^{k-l+|Z_l|+1}\,T_{a,z}(t)}\log\frac{20kZ_aI_a|A|\log(2t)}{\delta}}.$$
Thus by a union bound, with probability $1 - \delta$, we have
$$\begin{aligned}
\sum_{z_l}\left(\hat p_{a,z,l}(t) - P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1)\right)_+
&= \frac{1}{2}\sum_{z_l}\left|\hat p_{a,z,l}(t) - P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1)\right| \quad (38) \\
&\le \sum_{z_l}\sqrt{\frac{3P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1)}{2^{k-l+|Z_l|+1}\,T_{a,z}(t)}\log\frac{20kZ_aI_a|A|\log(2t)}{\delta}}.
\end{aligned}$$
Equation (38) holds because $\sum_{z_l}\hat p_{a,z,l}(t) = \sum_{z_l}P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1) = 1$. Now denoting $\hat P_{a,z,l}(t) = \hat p_{a,z,1}(t)\cdots\hat p_{a,z,l}(t)$, we get
$$\hat\mu_{O,a} \le \sum_z \hat r_{a,z}(t)P_{a,z,k} + \sum_{i=1}^k\sqrt{\frac{3}{2^i T_{a,z}(t)}\log\frac{20kZ_aI_a|A|\log(2t)}{\delta}} \le \mu_a + \frac{\sqrt 2}{\sqrt 2 - 1}\sqrt{\frac{3}{T_{a,z}(t)}\log\frac{20kZ_aI_a|A|\log(2t)}{\delta}} + \sqrt{\frac{4}{T_{a,z}(t)}\log\frac{\log(2t)}{\delta}} \le \mu_a + 8\sqrt{\frac{1}{T_a(t)}\log\frac{20kZ_aI_a|A|\log(2t)}{\delta}}.$$
The above inequalities hold with probability $1 - 3\delta$.
Thus by a union bound, $\Pr\{F_2\} \le 3\delta$. In the remainder of the proof, we assume that $F_1$, $F_2$ and $F_3$ do not happen; in this case the true mean satisfies $\mu_a \in [L^t_a, U^t_a]$ for all rounds $t$. Denote $T_1 = 2048H^{(G)}\log(20k|A|H^{(G)}\log(2T_1)/\delta)$; then when $t \ge T_1$, for every arm $a$ such that $q_a \ge \frac{1}{H^{(G)}\cdot\max\{\Delta_a,\varepsilon/2\}^2}$, noting that $I_a \le \frac{1}{q_a} \le H^{(G)}$, we have
$$q_a \ge \frac{1}{H^{(G)}\cdot\max\{\Delta_a,\varepsilon/2\}^2} \ge \frac{3}{t}\log\frac{6|A|I_a}{\delta}.$$
Since $F_1$ does not happen, by (35) we have $|\hat q_a - q_a| \le \sqrt{\frac{6q_a}{t}\log\frac{6|A|I_a}{\delta}}$ and $\sqrt{\frac{6q_a}{t}\log\frac{6|A|I_a}{\delta}} \le \frac{q_a}{2}$, so $\hat q_a \ge q_a - \sqrt{\frac{6q_a}{t}\log\frac{6|A|I_a}{\delta}} \ge \frac{1}{2H^{(G)}\cdot\max\{\Delta_a,\varepsilon/2\}^2}$. Hence
$$T_a(t) = \hat q_a\cdot t \ge \frac{1024}{\max\{\Delta_a,\varepsilon/2\}^2}\log\frac{20kZ_aI_a|A|\log(2t)}{\delta}.$$
Thus
$$\beta^a_O(T_a(t)) = 8\sqrt{\frac{1}{T_a(t)}\log\frac{20kZ_aI_a|A|\log(2T_a(t))}{\delta}} \le \frac{\max\{\Delta_a,\varepsilon/2\}}{4},$$
and by Lemma 10, we know the estimation lies in the confidence interval. Now we prove the main theorem.
The following lemma provides the upper bound on the sample complexity.

Lemma 11. With probability $1-5\delta$, Algorithm 3 terminates within $T$ rounds, where $T \le 2112H^{(G)}\log\frac{20H^{(G)}k|A|\log(2T)}{\delta}$.
Proof. In the proof we assume $F_1$, $F_2$ and $F_3$ do not happen, which holds with probability $1-5\delta$. Suppose the algorithm has not terminated by round $t = T$. Since $f(x) = \frac{x}{\log(20k|A|H^{(G)}\log(2x)/\delta)}$ is an increasing function, $t \ge 2048H^{(G)}\log\frac{20H^{(G)}k|A|\log(2t)}{\delta}$ for any $t \in [T_1, T]$. Then from the above, for every arm $a$ such that $q_a \ge \frac{1}{H^{(G)}\max\{\Delta_a,\varepsilon/2\}^2}$, we have $\beta_a(t) \le \beta^a_O(T_a(t)) \le \frac{\max\{\Delta_a,\varepsilon/2\}}{4}$. Then by Lemma 6, at each round at least one intervention is performed on some arm $a$ with $\beta_I(N_a(t)) \ge \beta_a(t) \ge \frac{\max\{\Delta_a,\varepsilon/2\}}{4}$, which implies $N_a(t) \le \frac{64}{\max\{\Delta_a,\varepsilon/2\}^2}\log\frac{H^{(G)}\log(2t)}{\delta}$.
Denoting the set of these arms by $M$, we have $|M| \le m_{\varepsilon,\Delta}$ and
$$T - T_1 \le \sum_{a \in M}\frac{64}{\max\{\Delta_a,\varepsilon/2\}^2}\log\frac{20H^{(G)}k|A|\log(2T)}{\delta} \le 64H^{(G)}\log\frac{20H^{(G)}k|A|\log(2T)}{\delta}.$$
Hence
$$T \le T_1 + 64H^{(G)}\log\frac{20H^{(G)}k|A|\log(2T)}{\delta} \le 2112H^{(G)}\log\frac{20H^{(G)}k|A|\log(2T)}{\delta},$$
which completes the proof of Lemma 11.
Lemma 12. Suppose $T = NQ\log\frac{20k|A|Q\log(2T)}{\delta}$; then $T = O(Q\log\frac{|A|Q}{\delta})$.
Proof. Similar to Lemma 7, for $f(x) = x - NQ\log\frac{20k|A|Q\log(2x)}{\delta}$ we only need to show that there exists a constant $C$ such that $f(CQ\log\frac{Q|A|}{\delta}) \ge f(T) = 0$. We have
$$\begin{aligned}
f\left(CQ\log\frac{Q|A|}{\delta}\right)
&= CQ\log\frac{Q|A|}{\delta} - NQ\log\frac{20k|A|Q\log(2CQ\log\frac{Q|A|}{\delta})}{\delta} \\
&\ge CQ\log\frac{Q|A|}{\delta} - 2NQ\log\frac{20|A|Q\log(2CQ\log\frac{Q|A|}{\delta})}{\delta} \\
&\ge (C - 2N)Q\log\frac{Q|A|}{\delta} - 2NQ\log\left(\log\left(40CQ\log\frac{Q|A|}{\delta}\right)\right) \\
&\ge (C - 2N)Q\log\frac{Q|A|}{\delta} - 2NQ\log(\log 40C)\cdot\log\left(Q\log\frac{Q|A|}{\delta}\right) \\
&\ge (C - 2N - \log\log 40C)\,Q\log\frac{Q|A|}{\delta}.
\end{aligned}$$
Thus choosing $C$ such that $C - 2N - \log\log 40C \ge 0$ completes the proof of Lemma 12.
By Lemma 12 above, with probability $1 - 5\delta$, we have
$$T = O\left(H^{(G)}\log\frac{|A|H^{(G)}}{\delta}\right).$$
The correctness has been proved in Section C.1, so we complete the proof of Theorem 2.
C.3 Proof of Theorem 3
Proof. We consider a bandit instance $\xi$ with the $q_a$'s and probability distribution $P(X_1, X_2, \cdots, X_n, Y)$. Recall that $p_{\min} = \min_{x\in\{0,1\}^n}P(Y = 1 \mid X = x)$, $p_{\max} = \max_{x\in\{0,1\}^n}P(Y = 1 \mid X = x)$, and $p_{\max} + 2\Delta_{a_{2n+1}} + 2\varepsilon \le 0.9$. Let $M$ denote the set of arms $a \in A$ with $q_a \le \frac{1}{H_{m_{\varepsilon,\Delta}-1}\cdot\max\{\Delta_a,\varepsilon/2\}^2}$; by the definition of $m_{\varepsilon,\Delta}$, we know $|M| \ge m_{\varepsilon,\Delta}$. Denote $a_{\min} = \arg\min_{a'\in M}\max\{\Delta_{a'},\varepsilon/2\}$ (ties broken arbitrarily); for $a \ne a_{\min}$ we have $q_a \le \frac{1}{2}$. Then for $a = do(X_i = x) = \arg\min_{a'\in M}\Delta_{a'}$ (if the optimal arm $a^* \in M$, we take $a \ne a^*$), we construct a bandit instance $\xi_a$ with probability distribution
$$P'(Y \mid X_1, \cdots, X_n) = \begin{cases} P(Y \mid X_1, \cdots, X_n), & X_i \ne x; \\ P(Y \mid X_1, \cdots, X_n) + 2(\Delta_a + \varepsilon), & X_i = x. \end{cases}$$
Thus
$$P'(Y \mid do(X_i = x)) = P'(Y \mid X_i = x) = \sum_{x_{-i}} P'(Y \mid X_i = x, X_{-i} = x_{-i})P(X_{-i} = x_{-i}) = \sum_{x_{-i}}\left(P(Y \mid X_i = x, X_{-i} = x_{-i}) + 2(\Delta_a+\varepsilon)\right)P(X_{-i} = x_{-i}) = P(Y \mid X_i = x) + 2(\Delta_a+\varepsilon) = P(Y \mid do(X_i = x)) + 2(\Delta_a+\varepsilon).$$
For the other arms $a' = do(X_j = x') \in A$, we have
$$\begin{aligned}
P'(Y \mid do(X_j = x')) &= P'(Y \mid X_j = x') = \sum_{x_{-j}} P'(Y \mid X_j = x', X_{-j} = x_{-j})P(X_{-j} = x_{-j}) \\
&= \sum_{x_{-j}} P(Y \mid X_j = x', X_{-j} = x_{-j})P(X_{-j} = x_{-j}) + 2(\Delta_a+\varepsilon)\sum_{x_{-j,-i}}P(X_{-j,-i} = x_{-j,-i}, X_i = x) \\
&= P(Y \mid X_j = x') + 2(\Delta_a+\varepsilon)\cdot P(X_i = x)\cdot\sum_{x_{-j,-i}}P(X_{-j,-i} = x_{-j,-i}) \\
&= P(Y \mid X_j = x') + 2(\Delta_a+\varepsilon)\cdot q_a \le P(Y \mid X_j = x') + (\Delta_a+\varepsilon).
\end{aligned}$$
Also, if $a' = do()$, we have
$$P'(Y \mid a') = P'(Y) = \sum_x P'(Y \mid X = x)P(X = x) = \sum_x P(Y \mid X = x)P(X = x) + 2(\Delta_a+\varepsilon)\sum_{x_{-i}}P(X_{-i} = x_{-i}, X_i = x) = P(Y) + 2(\Delta_a+\varepsilon)\cdot P(X_i = x) \le P(Y) + (\Delta_a+\varepsilon).$$
Thus for all $a' \in A$, we have
$$P'(Y \mid a') \le P(Y \mid a') + \Delta_a + \varepsilon \le (P(Y \mid a) + \Delta_a + \varepsilon) + \Delta_a + \varepsilon = P(Y \mid a) + 2(\Delta_a + \varepsilon) = P'(Y \mid a),$$
which means that $a$ is the best arm in the bandit environment $\xi_a$. Denote the probability measures of $\xi$ and $\xi_a$ by $\Pr$ and $\Pr_a$, and denote by $Y_t$ and $X_t$ the reward and observed values at time $t$. Define the stopping time $\sigma$ of the algorithm with respect to the filtration $\mathcal{F}_t$. Then from Lemma 19 in [7], for any event $\zeta \in \mathcal{F}_\sigma$,
$$E_\xi\left[\sum_{t=1}^{\sigma}\log\frac{\Pr(Y_t)}{\Pr_a(Y_t)}\right] \ge d(\Pr(\zeta), \Pr_a(\zeta)),$$
where $d(x, y) = x\log(x/y) + (1-x)\log((1-x)/(1-y))$ is the binary relative entropy. Denote the output of the algorithm by $a_o$. Since $a \ne a^*$, choosing $\zeta = \{a_o = a\}$ gives $\Pr(\zeta) \le \delta$, $\Pr_a(\zeta) \ge 1-\delta$, and $d(\Pr(\zeta), \Pr_a(\zeta)) \ge \log\frac{1}{2.4\delta}$. Now note that
$$E_\xi\left[\sum_{t=1}^{\sigma}\log\frac{\Pr(Y_t)}{\Pr_a(Y_t)}\right] = \sum_{t=1}^{\sigma}E_\xi\left[\log\frac{\Pr(Y_t)}{\Pr_a(Y_t)}\right] = \sum_{t=1}^{\sigma}\Pr((X_t)_i = x)\sum_{y\in\{0,1\}}\Pr(Y_t = y \mid (X_t)_i = x)\log\frac{\Pr(Y_t = y \mid (X_t)_i = x)}{\Pr_a(Y_t = y \mid (X_t)_i = x)}. \tag{41, 42}$$
Denote $B = \Pr(Y_t = 1 \mid (X_t)_i = x) \in [p_{\min}, p_{\max}]$; then
$$\sum_{y\in\{0,1\}}\Pr(Y_t = y \mid (X_t)_i = x)\log\frac{\Pr(Y_t = y \mid (X_t)_i = x)}{\Pr_a(Y_t = y \mid (X_t)_i = x)} = B\log\frac{B}{B+2(\Delta_a+\varepsilon)} + (1-B)\log\frac{1-B}{1-B-2(\Delta_a+\varepsilon)} \le \frac{-2B(\Delta_a+\varepsilon)}{B+2(\Delta_a+\varepsilon)} + \frac{2(1-B)(\Delta_a+\varepsilon)}{1-B-2(\Delta_a+\varepsilon)} \le \frac{4(\Delta_a+\varepsilon)^2}{(B+2(\Delta_a+\varepsilon))(1-B-2(\Delta_a+\varepsilon))} \le \frac{4(\Delta_a+\varepsilon)^2}{0.9\cdot 0.1}.$$
Then (42) becomes
$$\log\frac{1}{2.4\delta} \le E_\xi\left[\sum_{t=1}^{\sigma}\log\frac{\Pr(Y_t)}{\Pr_a(Y_t)}\right] \le \frac{4(\Delta_a+\varepsilon)^2}{0.09}\sum_{t=1}^{\sigma}\Pr((X_t)_i = x) \le \frac{16\max\{\Delta_a,\varepsilon/2\}^2}{0.09}E_\xi[T_a(\sigma)],$$
where $T_a(\sigma)$ for $a = do(X_i = x)$ is the number of rounds in which $X_i = x$. Let $N_a(\sigma)$ be the number of rounds in which $A_t = a$. We have
$$E_\xi[N_a(\sigma) + q_a\cdot\sigma] \ge E_\xi[T_a(\sigma)] \ge \frac{0.09}{16\max\{\Delta_a,\varepsilon/2\}^2}\log\frac{1}{2.4\delta}.$$
Summing over all $a = do(X_i = x) \in M$, $a \ne a_{\min}$, we get
$$\frac{0.09}{16}\sum_{\substack{a\in M\setminus\{do()\} \\ a\ne a_{\min}}}\frac{1}{\max\{\Delta_a,\varepsilon/2\}^2}\log\frac{1}{2.4\delta} \le \sum_{\substack{a\in M\setminus\{do()\} \\ a\ne a_{\min}}}E_\xi[N_a(\sigma) + q_a\cdot\sigma] \tag{43}$$
$$\le E_\xi\left[\sigma + \sum_{\substack{a\in M\setminus\{do()\} \\ a\ne a_{\min}}}q_a\cdot\sigma\right] \tag{44}$$
$$\le E_\xi[\sigma]\left(1 + \sum_{\substack{a\in M\setminus\{do()\} \\ a\ne a_{\min}}}\frac{1}{\max\{\Delta_a,\varepsilon/2\}^2 H_{m_{\varepsilon,\Delta}-1}}\right). \tag{45}$$
Denote
$$Q = \sum_{\substack{a\in M\setminus\{do()\} \\ a\ne a_{\min}}}\frac{1}{\max\{\Delta_a,\varepsilon/2\}^2} \ge H_{m_{\varepsilon,\Delta}-1} - \min_{i<m_{\varepsilon,\Delta}}\frac{1}{\max\{\Delta_{a_i},\varepsilon/2\}^2} - \frac{1}{\max\{\Delta_{do()},\varepsilon/2\}^2};$$
then
$$E_\xi[\sigma] \ge \frac{0.09\,Q\log\frac{1}{2.4\delta}}{16(1 + Q/H_{m_{\varepsilon,\Delta}-1})} \ge \frac{0.09}{32}\left(H_{m_{\varepsilon,\Delta}-1} - \min_{i<m_{\varepsilon,\Delta}}\frac{1}{\max\{\Delta_{a_i},\varepsilon/2\}^2} - \frac{1}{\max\{\Delta_{do()},\varepsilon/2\}^2}\right)\log\frac{1}{2.4\delta}.$$
D Some Proofs of Lemmas
D.1 Proof of Lemma 1
Proof. By definition, we only need to show that $|\{a \mid q_a\cdot\max\{\Delta_a,\varepsilon/2\}^2 < 1/H_{2m}\}| \le 2m$. Assume this does not hold; then $q_{a_i}\cdot\max\{\Delta_{a_i},\varepsilon/2\}^2 < \frac{1}{H_{2m}}$ for $i = 1, 2, \cdots, 2m+1$. Then for $i = m+1, m+2, \cdots, 2m+1$, we have
$$q_{a_i} < \frac{1}{H_{2m}\cdot\max\{\Delta_{a_i},\varepsilon/2\}^2} < \frac{1}{\sum_{j=1}^m\frac{1}{\max\{\Delta_{a_j},\varepsilon/2\}^2}\cdot\max\{\Delta_{a_i},\varepsilon/2\}^2} \le \frac{1}{m}.$$
The inequality above implies $|\{a \mid q_a < \frac{1}{m}\}| \ge m+1$, which contradicts the definition of $m$.
D.2 The Existence of an Admissible Sequence in Graphs without Hidden Variables
In graphs without hidden variables, the admissible sequence is important for identifying the causal effect. We now provide an algorithm showing how to find an admissible sequence in this setting.
Theorem 4. For a causal graph $G = (X \cup \{Y\}, E)$ without hidden variables and a set $S = \{X_1, \cdots, X_k\}$ with $X_1 \prec X_2 \prec \cdots \prec X_k$ in topological order, an admissible sequence with respect to $S$ and $Y$ can be found by
$$Z_i = \mathrm{Pa}(X_i) \setminus (Z_1 \cup \cdots \cup Z_{i-1} \cup \{X_1, \cdots, X_{i-1}\}).$$
Proof. The proof is straightforward. First, $Z_i \subseteq \mathrm{Pa}(X_i)$ consists of nondescendants of $\{X_i, X_{i+1}, \cdots, X_k\}$ by the topological order. Second, we need to prove
$$(Y \perp\!\!\!\perp X_i \mid X_1, \cdots, X_{i-1}, Z_1, \cdots, Z_i)_{G_{\underline{X_i},\, \overline{X_{i+1}, \cdots, X_k}}}. \quad (46)$$
We know that $\mathrm{Pa}(X_i) \subseteq X_1 \cup X_2 \cdots X_{i-1} \cup Z_1 \cup \cdots \cup Z_i$, so this set blocks all the backdoor paths from $X_i$ to $Y$. Also, since $X_1 \cup X_2 \cdots X_{i-1} \cup Z_1 \cup Z_2 \cdots \cup Z_i$ consists of nondescendants of $X_i$, it cannot block any forward path from $X_i$ to $Y$. Moreover, for any forward path with colliders, namely $X_i \to \cdots \to X' \leftarrow X'' \cdots Y$, the collider $X'$ cannot be conditioned on since it is a descendant of $X_i$. So conditioning on $X_1 \cup X_2 \cdots X_{i-1} \cup Z_1 \cup Z_2 \cdots \cup Z_i$ will not activate any extra forward path. Hence, only the original forward paths from $X_i$ to $Y$ remain, which means that (46) holds.
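The construction in Theorem 4 is easy to compute directly. Below is a minimal Python sketch of it (not the authors' code), assuming the DAG is given as a dict mapping each node to its set of parents and that S is listed in topological order; the toy graph at the bottom is hypothetical.

def admissible_sequence(parents, S):
    """Return [Z_1, ..., Z_k] with Z_i = Pa(X_i) minus (Z_1 u ... u Z_{i-1} u {X_1, ..., X_{i-1}})."""
    used = set()   # accumulates Z_1 u ... u Z_{i-1} u {X_1, ..., X_{i-1}}
    Z = []
    for X_i in S:
        Z_i = set(parents[X_i]) - used
        Z.append(Z_i)
        used |= Z_i
        used.add(X_i)
    return Z

# Hypothetical example: X1 -> X2, W -> X2, X2 -> Y, W -> Y.
parents = {"X1": set(), "W": set(), "X2": {"X1", "W"}, "Y": {"X2", "W"}}
print(admissible_sequence(parents, ["X1", "X2"]))   # [set(), {'W'}]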
D.3 Proof of Lemma 2
Lemma 2. For an action $a = \mathrm{do}(S = s)$ and any two weight vectors $\theta$ and $\theta'$, we have
$$|\sigma(\theta, a) - \sigma(\theta', a)| \le \mathbb{E}_e\left[\sum_{X \in N_{S,Y}} |V_X^\top(\theta_X - \theta'_X)|\right] M^{(1)}, \quad (6)$$
where $N_{S,Y}$ is the set of all nodes that lie on all possible paths from $X_1$ to $Y$ excluding $S$, $V_X$ is the value vector of a sample of the parents of $X$ according to parameter $\theta$, $M^{(1)}$ is defined in Assumption 1, and the expectation is taken over the randomness of the noise term $e = (e_X)_{X \in X \cup \{Y\}}$ of the causal model under parameter $\theta$.
Proof. Note that our BGLM model is equivalent to a threshold model: for each node $X$, we randomly sample a threshold $\Gamma_X \in [0, 1]$, and if $f_X(\theta_X^\top \mathrm{Pa}(X)) + e_X \ge \Gamma_X$, we let $X = 1$, which means it is activated. At timestep 1, $X_1$ is activated; then at timestep $i \ge 2$, $X_i$ is either activated (set to 1) or deactivated (set to 0). The BGLM is equivalent to the propagating process above if we sample $\Gamma_X$ uniformly for each node $X$, i.e., $\Gamma_X \sim U[0,1]$. Now we only need to show
$$|\sigma(\theta, a) - \sigma(\theta', a)| \le \mathbb{E}_{e,\Gamma}\left[\sum_{X \in N_{S,Y}} |V_X^\top(\theta_X - \theta'_X)|\right] M^{(1)}. \quad (47)$$
Firstly, we have $|\sigma(\theta, a) - \sigma(\theta', a)| = \mathbb{E}_{e,\Gamma}\big[\,\mathbb{I}\{Y \text{ is activated on } \theta\} \neq \mathbb{I}\{Y \text{ is activated on } \theta'\}\,\big]$, and we define the event $E^e_0(X)$ as $E^e_0(X) = \{\Gamma \mid \mathbb{I}\{X \text{ is activated under } \Gamma, e, \theta\} \neq \mathbb{I}\{X \text{ is activated under } \Gamma, e, \theta'\}\}$. Hence $\sigma(\theta, a) - \sigma(\theta', a) \le \mathbb{E}_e\big[\Pr_{\Gamma \sim (U[0,1])^n}\{E^e_0(Y)\}\big]$. Since only nodes in $N_{S,Y}$ will influence $Y$, we only need to consider nodes $X \in N_{S,Y}$.
Let $\varphi^e(\theta, \Gamma) = (\varphi^e_0(\theta, \Gamma) \subseteq S, \varphi^e_1(\theta, \Gamma), \cdots, \varphi^e_n(\theta, \Gamma))$ be the sequence of activated sets on $\theta$, zero-mean noise $e$ and threshold factor $\Gamma$. More specifically, $\varphi^e_i(\theta, \Gamma)$ is the set of nodes activated by time step $i$. For every node $X \in N_{S,Y}$, we define the event that $X$ is the first node that has different activation under $\theta$ and $\theta'$ as below:
$$E^e_1(X) = \{\Gamma \mid \exists i \in [n], \forall i' < i,\ \varphi^e_{i'}(\theta, \Gamma) = \varphi^e_{i'}(\theta', \Gamma),\ X \in (\varphi^e_i(\theta, \Gamma) \setminus \varphi^e_i(\theta', \Gamma)) \cup (\varphi^e_i(\theta', \Gamma) \setminus \varphi^e_i(\theta, \Gamma))\}.$$
Then we have $E^e_0(Y) \subseteq \cup_{X \in N_{S,Y}} E^e_1(X)$. We also define other events:
$$E^e_{2,0}(X, i) = \{\Gamma \mid \forall i' < i,\ \varphi^e_{i'}(\theta, \Gamma) = \varphi^e_{i'}(\theta', \Gamma),\ X \notin \varphi^e_{i-1}(\theta, \Gamma)\},$$
$$E^e_{2,1}(X, i) = \{\Gamma \mid \forall i' < i,\ \varphi^e_{i'}(\theta, \Gamma) = \varphi^e_{i'}(\theta', \Gamma),\ X \in \varphi^e_i(\theta, \Gamma) \setminus \varphi^e_i(\theta', \Gamma)\},$$
$$E^e_{2,2}(X, i) = \{\Gamma \mid \forall i' < i,\ \varphi^e_{i'}(\theta, \Gamma) = \varphi^e_{i'}(\theta', \Gamma),\ X \in \varphi^e_i(\theta', \Gamma) \setminus \varphi^e_i(\theta, \Gamma)\},$$
$$E^e_{3,1}(X, i) = \{\Gamma \mid X \in \varphi^e_i(\theta, \Gamma) \setminus \varphi^e_i(\theta', \Gamma)\},\qquad E^e_{3,2}(X, i) = \{\Gamma \mid X \in \varphi^e_i(\theta', \Gamma) \setminus \varphi^e_i(\theta, \Gamma)\}.$$
Then, since $E^e_{2,1}(X, i)$ and $E^e_{2,2}(X, i)$ are exclusive, we have
$$\Pr_\Gamma\{E^e_1(X)\} = \sum_{i=1}^{n} \Pr_\Gamma\{E^e_{2,1}(X, i)\} + \sum_{i=1}^{n} \Pr_\Gamma\{E^e_{2,2}(X, i)\}.$$
Now we need to bound the two terms above. First, consider $\Pr_\Gamma\{E^e_{2,1}(X, i)\}$: let $\Gamma_{-X}$ be the vector of the values $\Gamma_{X'}$ of all nodes $X' \neq X$, and define the corresponding sub-event $E^e_{2,1}(X, i, \Gamma_{-X}) \subseteq E^e_{2,1}(X, i)$ as the event with value $\Gamma_{-X}$. Define $E^e_{2,0}(X, i, \Gamma_{-X}) \subseteq E^e_{2,0}(X, i)$, $E^e_{3,1}(X, i, \Gamma_{-X}) \subseteq E^e_{3,1}(X, i)$, and $E^e_{3,2}(X, i, \Gamma_{-X}) \subseteq E^e_{3,2}(X, i)$ in a similar way. By definition, $E^e_{2,1}(X, i, \Gamma_{-X}) = E^e_{3,1}(X, i, \Gamma_{-X}) \cap E^e_{2,0}(X, i, \Gamma_{-X})$, so we have
$$\Pr_\Gamma\{E^e_{2,1}(X, i, \Gamma_{-X})\} = \Pr_\Gamma\{E^e_{2,0}(X, i, \Gamma_{-X})\} \cdot \Pr_\Gamma\{E^e_{3,1}(X, i, \Gamma_{-X}) \mid E^e_{2,0}(X, i, \Gamma_{-X})\}.$$
By the definition of BGLM, on $E^e_{2,0}(X, i, \Gamma_{-X})$ the value of $\Gamma_X$ must lie in an interval with highest value 1; denote it as $[W^e_{2,0}(X, i, \Gamma_{-X}), 1]$. Then
$$\Pr_{\Gamma_X \sim U[0,1]}\{E^e_{2,0}(X, i, \Gamma_{-X})\} = 1 - W^e_{2,0}(X, i, \Gamma_{-X}).$$
Now we consider $\Pr_{\Gamma_X}\{E^e_{3,1}(X, i, \Gamma_{-X}) \mid E^e_{2,0}(X, i, \Gamma_{-X})\}$. We first assume $W^e_{2,0}(X, i, \Gamma_{-X}) < 1$; otherwise our statement holds trivially. Denote the set of nodes activated at timestep $t$ under $E^e_{2,0}(X, i, \Gamma_{-X})$ as $\varphi^e_t(E^e_{2,0}(X, i, \Gamma_{-X}))$. If the conditional event above holds, we have
$$f_X\Big(\sum_{X' \in \varphi^e_{i-1}(E^e_{2,0}(X,i,\Gamma_{-X})) \cap N(X)} \theta_{X',X}\Big) + e_X < \Gamma_X \le f_X\Big(\sum_{X' \in \varphi^e_{i-1}(E^e_{2,0}(X,i,\Gamma_{-X})) \cap N(X)} \theta'_{X',X}\Big) + e_X,$$
or
$$f_X\Big(\sum_{X' \in \varphi^e_{i-1}(E^e_{2,0}(X,i,\Gamma_{-X})) \cap N(X)} \theta_{X',X}\Big) + e_X \ge \Gamma_X > f_X\Big(\sum_{X' \in \varphi^e_{i-1}(E^e_{2,0}(X,i,\Gamma_{-X})) \cap N(X)} \theta'_{X',X}\Big) + e_X,$$
where $\theta_{X',X}$ is the element of $\theta_X$ corresponding to $X'$. Thus,
$$\Pr_{\Gamma_X \sim U[0,1]}\{E^e_{3,1}(X, i, \Gamma_{-X}) \cup E^e_{3,2}(X, i, \Gamma_{-X}) \mid E^e_{2,0}(X, i, \Gamma_{-X})\} = \frac{\Big|f_X\Big(\sum_{X' \in \varphi^e_{i-1}(E^e_{2,0}(X,i,\Gamma_{-X})) \cap N(X)} \theta_{X',X}\Big) - f_X\Big(\sum_{X' \in \varphi^e_{i-1}(E^e_{2,0}(X,i,\Gamma_{-X})) \cap N(X)} \theta'_{X',X}\Big)\Big|}{1 - W^e_{2,0}(X, i, \Gamma_{-X})}.$$
Thus we have
$$\Pr_\Gamma\{E^e_{2,1}(X, i, \Gamma_{-X}) \cup E^e_{2,2}(X, i, \Gamma_{-X})\} = \Pr_\Gamma\{E^e_{2,0}(X, i, \Gamma_{-X})\} \cdot \Pr_\Gamma\{E^e_{3,1}(X, i, \Gamma_{-X}) \cup E^e_{3,2}(X, i, \Gamma_{-X}) \mid E^e_{2,0}(X, i, \Gamma_{-X})\}$$
$$= \Big|f_X\Big(\sum_{X' \in \varphi^e_{i-1}(E^e_{2,0}(X,i,\Gamma_{-X})) \cap N(X)} \theta_{X',X}\Big) - f_X\Big(\sum_{X' \in \varphi^e_{i-1}(E^e_{2,0}(X,i,\Gamma_{-X})) \cap N(X)} \theta'_{X',X}\Big)\Big| \le M^{(1)} \Big|\sum_{X' \in \varphi^e_{i-1}(E^e_{2,0}(X,i,\Gamma_{-X})) \cap N(X)} (\theta_{X',X} - \theta'_{X',X})\Big|.$$
When $E^e_{2,0}(X, i, \Gamma_{-X}) = \emptyset$, both sides are zero, so the bound holds in general.
Now we define $E^e_{4,0}(X, i, \Gamma_{-X}) = \{\Gamma = (\Gamma_X, \Gamma_{-X}) \mid X \notin \varphi^e_{i-1}(\theta, \Gamma)\}$; then $E^e_{2,0}(X, i, \Gamma_{-X}) \subseteq E^e_{4,0}(X, i, \Gamma_{-X})$. In addition, when $E^e_{2,0}(X, i, \Gamma_{-X}) \neq \emptyset$, $\varphi^e_{i'}(E^e_{2,0}(X, i, \Gamma_{-X})) = \varphi^e_{i'}(E^e_{4,0}(X, i, \Gamma_{-X}))$ for all $i' < i$. Thus we have
$$\Pr_\Gamma\{E^e_{2,1}(X, i, \Gamma_{-X}) \cup E^e_{2,2}(X, i, \Gamma_{-X})\} \le M^{(1)} \Big|\sum_{X' \in \varphi^e_{i-1}(E^e_{4,0}(X,i,\Gamma_{-X})) \cap N(X)} (\theta_{X',X} - \theta'_{X',X})\Big|.$$
Now we can get
$$\Pr_\Gamma\{E^e_1(X)\} = \int_{\Gamma_{-X}} \sum_{i=1}^{n} \Pr_{\Gamma_X \sim U[0,1]}\{E^e_{2,1}(X, i, \Gamma_{-X}) \cup E^e_{2,2}(X, i, \Gamma_{-X})\}\, d\Gamma_{-X} = \int_{\Gamma_{-X}} \Pr_{\Gamma_X \sim U[0,1]}\{E^e_{2,1}(X, i^*, \Gamma_{-X}) \cup E^e_{2,2}(X, i^*, \Gamma_{-X})\}\, d\Gamma_{-X}$$
$$\le \int_{\Gamma_{-X}} \Big|\sum_{X' \in \varphi^e_{i^*-1}(E^e_{4,0}(X,i^*,\Gamma_{-X})) \cap N(X)} (\theta_{X',X} - \theta'_{X',X})\Big|\, M^{(1)}\, d\Gamma_{-X} = \mathbb{E}_{\Gamma_{-X}}\Big[\Big|\sum_{X' \in \varphi^e_{i^*-1}(E^e_{4,0}(X,i^*,\Gamma_{-X})) \cap N(X)} (\theta_{X',X} - \theta'_{X',X})\Big|\, M^{(1)}\Big] = \mathbb{E}_{\Gamma_{-X}}\big[|V_X^\top(\theta_X - \theta'_X)|\, M^{(1)}\big],$$
where $i^*$ is the topological order of $X$ in graph $G$, and the second equality holds because $E^e_{2,1}(X, i, \Gamma_{-X}) \cup E^e_{2,2}(X, i, \Gamma_{-X}) = \emptyset$ unless $i = i^*$. Summing over all nodes $X \in N_{S,Y}$, we complete the proof.
D.4 Proof of Lemma 3
Lemma 3. For one node $X \in X \cup \{Y\}$, assume Assumptions 1 and 2 hold, and
$$\lambda_{\min}(M_{t,X}) \ge \frac{512\, D\, (M^{(2)})^2}{\kappa^4}\Big(D^2 + \ln\frac{3nt^2}{\delta}\Big).$$
Then, with probability $1 - \delta/nt^2$, for any vector $v \in \mathbb{R}^{|\mathrm{Pa}(X)|}$, at all rounds $t$ the estimator $\hat\theta_{t,X}$ in Algorithm 2 satisfies
$$|v^\top(\hat\theta_{t,X} - \theta^*_X)| \le \frac{3}{\kappa}\sqrt{\log(3nt^2/\delta)}\, \|v\|_{M^{-1}_{t,X}}.$$
Proof. The whole proof is very similar to the proof in [10]; for completeness, we provide it here. Note that $\hat\theta_{t,X}$ satisfies $\nabla L_{t,X}(\hat\theta_{t,X}) = 0$, where
$$\nabla L_{t,X}(\theta_X) = \sum_{i=1}^{t} [X_i - f_X(V_{i,X}^\top \theta_X)]\, V_{i,X}.$$
Define $G(\theta_X) = \sum_{i=1}^{t} (f_X(V_{i,X}^\top \theta_X) - f_X(V_{i,X}^\top \theta^*_X))\, V_{i,X}$. Thus $G(\theta^*_X) = 0$ and $G(\hat\theta_{t,X}) = \sum_{i=1}^{t} \varepsilon_{i,X} V_{i,X}$, where $\varepsilon_{i,X} = X_i - f_X(V_{i,X}^\top \theta^*_X)$. Now note that $\mathbb{E}[\varepsilon_{i,X} \mid V_{i,X}] = 0$ and $\varepsilon_{i,X} \in [-1, 1]$, so $\varepsilon_{i,X}$ is 1-subgaussian. Let $Z = G(\hat\theta_{t,X}) = \sum_{i=1}^{t} \varepsilon_{i,X} V_{i,X}$.
Step 1: Consistency of $\hat\theta_{t,X}$. For any $\theta_1, \theta_2 \in \mathbb{R}^{|\mathrm{Pa}(X)|}$, there exists $\bar\theta = s\theta_1 + (1-s)\theta_2$, $0 < s < 1$, such that
$$G(\theta_1) - G(\theta_2) = \sum_{i=1}^{t} \dot f_X(V_{i,X}^\top \bar\theta)\, V_{i,X} V_{i,X}^\top (\theta_1 - \theta_2) \triangleq F(\bar\theta)(\theta_1 - \theta_2).$$
Since $f$ is strictly increasing, $\dot f > 0$, so $G(\theta)$ is an injection and $G^{-1}$ is well-defined. Now let $B_\eta = \{\theta \mid \|\theta - \theta^*\| \le \eta\}$ and define $\kappa_\eta := \inf_{\theta \in B_\eta,\, x \neq 0} \dot f(x^\top \theta) > 0$. The following two lemmas help our proof; the first can be found in Lemma A of [36].
Lemma 13 ([36]). $\{\theta \mid \|G(\theta)\|_{M^{-1}_{t,X}} \le \kappa_\eta \eta \sqrt{\lambda_{\min}(M_{t,X})}\} \subseteq B_\eta$.
Lemma 14 ([37]). For any $\delta > 0$, the event $E_G := \{\|Z\|_{M^{-1}_{t,X}} \le 4\sqrt{|\mathrm{Pa}(X)| + \ln(1/\delta)}\}$ holds with probability at least $1 - \delta$.
By the above two lemmas, when $E_G$ holds, for any $\eta \ge \frac{4}{\kappa_\eta}\sqrt{\frac{|\mathrm{Pa}(X)| + \ln(1/\delta)}{\lambda_{\min}(M_{t,X})}}$, we have $\|\hat\theta_{t,X} - \theta^*\| \le \eta$. Choosing $\eta = 1$, we know $1 \ge \frac{4}{\kappa}\sqrt{\frac{|\mathrm{Pa}(X)| + \ln(1/\delta)}{\lambda_{\min}(M_{t,X})}}$, so with probability $1 - \delta$, $\|\hat\theta_{t,X} - \theta^*\| \le 1$.
Step 2: Normality of $\hat\theta_{t,X}$. Now we assume $\|\hat\theta_{t,X} - \theta^*\| \le 1$ holds. Define $\Delta = \hat\theta_{t,X} - \theta^*$; then there exists $s \in [0, 1]$ such that $Z = G(\hat\theta_{t,X}) - G(\theta^*_X) = (H + E)\Delta$, where $\bar\theta = s\theta^*_X + (1-s)\hat\theta_{t,X}$, $H = F(\theta^*_X) = \sum_{i=1}^{t} \dot f_X(V_{i,X}^\top \theta^*_X)\, V_{i,X} V_{i,X}^\top$ and $E = F(\bar\theta) - F(\theta^*_X)$. Then, according to the mean value theorem, we have
$$E = \sum_{i=1}^{t} (\dot f_X(V_{i,X}^\top \bar\theta) - \dot f_X(V_{i,X}^\top \theta^*_X))\, V_{i,X} V_{i,X}^\top = \sum_{i=1}^{t} \ddot f_X(r_i)\, V_{i,X}^\top \Delta\, V_{i,X} V_{i,X}^\top \preceq \sum_{i=1}^{t} M^{(2)}\, |V_{i,X}^\top \Delta|\, V_{i,X} V_{i,X}^\top$$
for some $r_i \in \mathbb{R}$. Thus we have
$$v^\top H^{-1/2} E H^{-1/2} v \le \sum_{i=1}^{t} M^{(2)}\, \|V_{i,X}\|\, \|\Delta\|\, \|v^\top H^{-1/2} V_{i,X}\|^2 \le M^{(2)} \sqrt{|\mathrm{Pa}(X)|}\, \|\Delta\|\, \Big(v^\top H^{-1/2}\Big(\sum_{i=1}^{t} V_{i,X} V_{i,X}^\top\Big) H^{-1/2} v\Big) \le \frac{M^{(2)} \sqrt{|\mathrm{Pa}(X)|}}{\kappa}\, \|\Delta\|\, \|v\|^2,$$
hence we know
$$\|H^{-1/2} E H^{-1/2}\| \le \frac{M^{(2)} \sqrt{|\mathrm{Pa}(X)|}}{\kappa}\, \|\Delta\| \le \frac{4 M^{(2)} \sqrt{|\mathrm{Pa}(X)|}}{\kappa^2} \sqrt{\frac{|\mathrm{Pa}(X)| + \ln\frac{1}{\delta}}{\lambda_{\min}(M_{t,X})}} \le \frac{1}{2},$$
where the last inequality is because
$$\lambda_{\min}(M_{t,X}) \ge \frac{512\, (M^{(2)})^2}{\kappa^4}\, |\mathrm{Pa}(X)| \Big(|\mathrm{Pa}(X)| + \ln\frac{1}{\delta}\Big) > \frac{64\, (M^{(2)})^2}{\kappa^4}\, |\mathrm{Pa}(X)| \Big(|\mathrm{Pa}(X)| + \ln\frac{1}{\delta}\Big).$$
Now for any $v \in \mathbb{R}^{|\mathrm{Pa}(X)|}$, we have
$$v^\top(\hat\theta_{t,X} - \theta^*_X) = v^\top (H + E)^{-1} Z = v^\top H^{-1} Z - v^\top H^{-1} E (H + E)^{-1} Z.$$
The second equality is correct because $H + E = F(\bar\theta) \succeq \kappa M_{t,X} \succ 0$. Define $D \triangleq (V_{1,X}, V_{2,X}, \cdots, V_{t,X})^\top \in \mathbb{R}^{t \times |\mathrm{Pa}(X)|}$. Then $D^\top D = \sum_{i=1}^{t} V_{i,X} V_{i,X}^\top = M_{t,X}$. By Hoeffding's inequality [12],
$$\Pr\big(v^\top H^{-1} Z \ge a\big) \le \exp\Big(-\frac{a^2}{2\|v^\top H^{-1} D^\top\|^2}\Big) = \exp\Big(-\frac{a^2}{2 v^\top H^{-1} D^\top D H^{-1} v}\Big) \le \exp\Big(-\frac{a^2 \kappa^2}{2\|v\|^2_{M^{-1}_{t,X}}}\Big).$$
The last inequality holds because $H \succeq \kappa M_{t,X} = \kappa D^\top D$. Thus, with probability $1 - 2\delta$, $|v^\top H^{-1} Z| \le \frac{\sqrt{2\ln(1/\delta)}}{\kappa}\|v\|_{M^{-1}_{t,X}}$. For the second term, we know
$$|v^\top H^{-1} E (H + E)^{-1} Z| \le \|v\|_{H^{-1}}\, \|H^{-\frac12} E (H + E)^{-1} Z\| \le \|v\|_{H^{-1}}\, \|H^{-\frac12} E (H + E)^{-1} H^{\frac12}\|\, \|Z\|_{H^{-1}} \le \frac{1}{\kappa}\, \|v\|_{M^{-1}_{t,X}}\, \|H^{-\frac12} E (H + E)^{-1} H^{\frac12}\|\, \|Z\|_{M^{-1}_{t,X}}, \quad (48)$$
and
$$\|H^{-\frac12} E (H + E)^{-1} H^{\frac12}\| \le \frac{8 M^{(2)} \sqrt{|\mathrm{Pa}(X)|}}{\kappa^2} \sqrt{\frac{|\mathrm{Pa}(X)| + \ln\frac{1}{\delta}}{\lambda_{\min}(M_{t,X})}}.$$
Thus by (48) and Lemma 14,
$$v^\top H^{-1} E (H + E)^{-1} Z \le \frac{32\, M^{(2)}\, \sqrt{|\mathrm{Pa}(X)|}\, \big(|\mathrm{Pa}(X)| + \log\frac{1}{\delta}\big)}{\kappa^3\, \sqrt{\lambda_{\min}(M_{t,X})}}\, \|v\|_{M^{-1}_{t,X}}.$$
So we have
$$v^\top(\hat\theta_{t,X} - \theta^*_X) \le \left(\frac{32\, M^{(2)}\, \sqrt{|\mathrm{Pa}(X)|}\, \big(|\mathrm{Pa}(X)| + \log\frac{1}{\delta}\big)}{\kappa^3\, \sqrt{\lambda_{\min}(M_{t,X})}} + \frac{\sqrt{2\ln(1/\delta)}}{\kappa}\right) \|v\|_{M^{-1}_{t,X}} \le \frac{3}{\kappa}\sqrt{\log(1/\delta)}\, \|v\|_{M^{-1}_{t,X}},$$
where the last inequality is because
$$\lambda_{\min}(M_{t,X}) \ge \frac{512\, |\mathrm{Pa}(X)|\, (M^{(2)})^2}{\kappa^4}\Big(|\mathrm{Pa}(X)|^2 + \ln\frac{1}{\delta}\Big).$$
By replacing $\delta$ with $\delta/3nt^2$, we complete the proof.
E Experiments
In this section, we provide some experiments supporting our theoretical results for CCPE-BGLM and CCPE-General.
E.1 CCPE-BGLM
Experiment 1. First, we provide the experiments for our CCPE-BGLM algorithm. We construct a causal graph with 9 nodes $X_1, \cdots, X_8$ and $X_0$, such that $X_i \prec X_{i+1}$. Then, we randomly choose two nodes in $X_1, \cdots, X_{i-1}$, and also $X_0$, to be the parents of $X_i$ ($i \ge 1$). $Y$ has 4 parents, which are randomly chosen from $X = \{X_1, \cdots, X_8\}$. For $X_0$, we set $P(X_0 = 1) = 1$. For node $X_i$ with parents $X_i^{(1)}, X_i^{(2)}$, $P(X_i = 1) = 0.4X_0 + 0.1X_i^{(1)} + 0.1X_i^{(2)}$. (If $i = 2$, $P(X_i = 1) = 0.4X_0 + 0.1X_i^{(1)} = 0.4X_0 + 0.1X_1$; if $i = 1$, $P(X_1 = 1) = 0.4X_0$.) Suppose the parents of the reward variable are $X^{(1)}, X^{(2)}, X^{(3)}, X^{(4)}$. The reward variable is defined by $P(Y = 1) = 0.3X^{(1)} + 0.3X^{(2)} + 0.3X^{(3)} + 0.05X^{(4)}$. The action set is $\{\mathrm{do}(S = 1) \mid |S| = 3, S \subset X\}$. Hence the optimal arm is $\mathrm{do}(\{X^{(1)}, X^{(2)}, X^{(3)}\} = 1)$.
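For reproducibility, here is a minimal Python sketch (not the authors' code) of this synthetic environment, sampling one observational round; the concrete parent assignments below are hypothetical, matching the random construction described above.

import random

def sample_observation(parents_of, reward_parents):
    """parents_of[i]: indices of the random parents of X_i (besides X_0)."""
    x = {0: 1}                                  # P(X_0 = 1) = 1
    for i in range(1, 9):                       # topological order X_1, ..., X_8
        p = 0.4 * x[0] + sum(0.1 * x[j] for j in parents_of[i])
        x[i] = 1 if random.random() < p else 0
    w = [0.3, 0.3, 0.3, 0.05]                   # reward weights for X^(1..4)
    p_y = sum(wi * x[j] for wi, j in zip(w, reward_parents))
    return x, (1 if random.random() < p_y else 0)

# Hypothetical instance of the random construction described above:
parents_of = {1: [], 2: [1]}
parents_of.update({i: random.sample(range(1, i), 2) for i in range(3, 9)})
reward_parents = random.sample(range(1, 9), 4)
print(sample_observation(parents_of, reward_parents))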
We choose 4 algorithms in this experiment: LUCB in [15], lilUCB-heuristic in [13], Propagating-Inference in [35], and our CCPE-BGLM. LUCB and lilUCB-heuristic are classical pure exploration algorithms. Because, in the previous causal bandit literature, Propagating-Inference is the only algorithm considering a combinatorial action set without the prior knowledge of $P(\mathrm{Pa}(Y) \mid a)$ for actions $a \in A$, we choose it for this experiment. Note that the criterion of the Propagating-Inference algorithm is simple regret, so it cannot be directly compared to our pure exploration algorithm; we choose to compare the error probability at some fixed time $T$ instead. Under this criterion, the Propagating-Inference algorithm has the extra knowledge of the budget $T$ while LUCB, lilUCB-heuristic and CCPE-BGLM do not. To implement the Propagating-Inference algorithm, we follow the modification in [35] to make the algorithm more efficient and accurate by setting $\lambda = 0$ and $\eta_A = 1/C$ (defined and stated in [35]). For CCPE-BGLM, we ignore the condition that
$$t \ge \max\left\{\frac{cD}{\eta^2}\log\frac{nt^2}{\delta},\ \frac{1024(M^{(2)})^2(4D^2 - 3)D}{\kappa^4 \eta}\Big(D^2 + \ln\frac{3nt^2}{\delta}\Big)\right\}$$
to make it more efficient. In our experiments, the error probability is smaller than that of the other algorithms even when we ignore this condition. Also, to make the algorithm more efficient, we update the observational confidence bound (Line 11) every 50 rounds. (This does not influence the proof of Theorem 1.) For LUCB, lilUCB-heuristic and CCPE-BGLM, we find the best exploration parameters $\alpha$, $\alpha_I$ and $\alpha_O$ by grid search over $\{0.05, 0.1, \cdots, 1\}$. (The exploration parameter $\alpha$ for a UCB-type algorithm is a constant multiplied in front of the confidence radius, which should be tuned in practice; e.g., [21], [28].) For this task, we find $\alpha = 0.3$, $\alpha_O = 0.05$, $\alpha_I = 0.4$. We choose $T = 50 + 50i$ for $0 \le i \le 9$. For each time $T$, we run 100 iterations and average the results.
As Figure 4 shows, even though our algorithm does not know the budget $T$, it converges more quickly than all the other algorithms.
E.2 CCPE-General
In this subsection, we provide the experiments for the CCPE-General algorithm. We again choose 4 algorithms: LUCB, lilUCB-heuristic, Propagating-Inference and CCPE-General (called "adm_seq" in the figures because it utilizes the admissible sequence). Since Propagating-Inference cannot handle general graphs with hidden variables, we first compare them on graphs without hidden variables. Then we also compare LUCB, lilUCB-heuristic and our algorithm on graphs with hidden variables.
Experiment 2. We construct a graph with 7 nodes $X_1, \cdots, X_7$ such that $X_i \prec X_{i+1}$. Then, we randomly choose two nodes in $X_1, \cdots, X_{i-1}$ as parents of $X_i$. The reward variable $Y$ has 5 parents. For the action set $\{\mathrm{do}(S = s) \mid |S| = 2, s \in \{0,1\}^2\}$, the optimal arm is $\mathrm{do}(\{X^{(1)}, X^{(2)}\} = 1)$. We choose $\alpha_O = 0.25$, $\alpha_I = 0.4$, and the exploration parameters $\alpha$ for LUCB and lilUCB are both 0.3. For each time $T = 150 + 50i$ for $0 \le i \le 9$, we run 100 times and average the results to get the error probability. The result is shown in Figure 5. We note that our algorithm performs almost the same as the Propagating-Inference algorithm, while our CCPE-General algorithm is a fixed-confidence algorithm without a requirement for the budget $T$, and it can be applied to causal graphs with hidden variables.
Experiment 3
Here we provide an experiment to show that the CCPE-General algorithm can be applied to broader causal graphs with hidden variables. Since there is no previous algorithm that works with both a combinatorial action set and the existence of hidden variables, we compare our result with LUCB and lilUCB-heuristic.
Our causal graph is constructed as follows: $X = \{X_0, X_1, \cdots, X_{n+1}\}$, where $X_i \to Y$ and $X_1 \to X_i$ for $2 \le i \le n+1$, and $X_1 \to X_0$; $U = \{U_1, \cdots, U_n\}$ with $U_i \to X_{i+2}$ and $U_i \to X_0$ for $2 \le i \le n+1$. The action set is $\{\mathrm{do}(S = s) \mid |S| = 2, S \subset \{X_2, \cdots, X_{n+1}\}\}$. Each $U_i$ satisfies $P(U_i = 1) = 0.5$, $P(X_1 = 1) = 0.5$, and $P(X_i = 1) = 0.5$ if $X_1 = U_{i-2} = 1$ and $0.4$ otherwise. For the reward variable $Y$, $P(Y = 1) = 0.4X_2 + 0.4X_3 + \cdots$
For this task, by grid search, we set $\alpha = 0.25$ for the exploration parameter of LUCB and lilUCB, and $\alpha_O = 0.3$, $\alpha_I = 0.4$ for the CCPE-General algorithm. We compare the error probability and the sample complexity for them. The results are shown in Figure 6(a) and Figure 6(b). Our CCPE-General algorithm wins in both metrics.
F Fixed Budget Causal Bandit Algorithm
In this section, we provide a preliminary fixed-budget causal bandit algorithm, based on the successive reject algorithm and our previous analysis for causal bandits. Previous fixed-budget causal bandit algorithms always directly estimate the observation threshold $m$; however, to derive a gap-dependent result, this method does not work. Our Causal Successive Reject avoids estimating the observation threshold and gets a better gap-dependent result. Note that $T_a(t) = \min_z T_{a,z}(t)$ for $z \in \{0,1\}^{|Z_a|}$; then we can get the simple Causal Successive Reject algorithm as follows:

Algorithm 4 Causal Successive Reject
1: Input: causal graph $G$, action set $A$, budget $T$, parameter $\varepsilon$.
2: Initialize $t = 1$, $T_a = 0$, $\hat\mu_a = 0$ for all arms $a \in A$. Define $|A| = N$, $A_0 = A$.
3: Perform $\mathrm{do}()$ for $T/2$ times, and update $T_a(t)$ for all $a$.
4: Set $n_k = \big\lceil \frac{T/2 - N}{\overline{\log}(N)\,(N + 1 - k)} \big\rceil$ for $k = 1, 2, \cdots, N-1$.
5: for each phase $k = 1, 2, \cdots, N-1$ do
6:   for $i = 1, 2, \cdots, (N + 1 - k)(n_k - n_{k-1})$ do
7:     Perform the intervention $a$ for the action with the least $T_a + N_a$, and set $N_a = N_a + 1$.
8:   end for
9:   Denote $a_k = \arg\min_{a \in A_{k-1}} \hat\mu_a$, where $\hat\mu_a$ follows the same definition as in Algorithm 2. Set $A_k = A_{k-1} \setminus \{a_k\}$.
10:  if $|A_k| = 1$ then
11:    return $A_k$.
12:  end if
13: end for

Theorem 5. Algorithm 4 returns an $\varepsilon$-optimal arm with error probability
$$P(\mu_{a_o} < \mu_{a^*} - \varepsilon) \le 4I_a N^2 \exp\left(-\frac{T/2 - N}{128\,\overline{\log}(N)\, H_3}\right),$$
where H 3 = max k=1,2,··· ,N {α −1 k (max{∆ (k) , ε}) −2 }, and α k is defined by
α k = 1 + N i=k+1 1 i m if k > m 1 k + N i=m+1 1 i m if k ≤ m
where m is defined in 1 with respect to q a similar to Theorem 2
To show that our algorithm outperforms the classical successive reject and sequential halving algorithm, it is obvious that H 3 ≤ H 2 , where H 2 = max k=1,2,··· ,N {k max{∆ (k) , ε} −2 }, since α k ≥ 1 k .
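To make the schedule concrete, the following Python sketch (ours, not the authors' code) implements the passive-observation phase and the elimination loop of Algorithm 4 under two stated assumptions: observe() and pull(a) are hypothetical environment hooks, and the simple pooled empirical mean below stands in for Algorithm 2's estimator, whose exact definition is given elsewhere.

import math

def causal_successive_reject(arms, pull, observe, T):
    N = len(arms)
    log_bar = 0.5 + sum(1.0 / i for i in range(2, N + 1))   # \overline{log}(N)
    rewards = {a: [] for a in arms}
    for _ in range(T // 2):                 # passive do() phase: just observe
        for a, r in observe().items():      # arm -> reward sample, if observed
            rewards[a].append(r)
    n = lambda k: 0 if k == 0 else math.ceil((T / 2 - N) / (log_bar * (N + 1 - k)))
    active = list(arms)
    for k in range(1, N):                   # phases k = 1, ..., N-1
        for _ in range((N + 1 - k) * (n(k) - n(k - 1))):
            a = min(active, key=lambda b: len(rewards[b]))   # least-sampled arm
            rewards[a].append(pull(a))
        mu_hat = {a: sum(rewards[a]) / max(1, len(rewards[a])) for a in active}
        active.remove(min(active, key=lambda b: mu_hat[b]))  # reject worst arm
        if len(active) == 1:
            break
    return active[0]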
Proof. We denote by $T_a(t)$ and $N_a(t)$ the values of $T_a$ and $N_a$ at round $t$. The main idea is as follows: we spend half the budget to observe, and spend the remaining budget to supplement the arms that are not observed enough. The main idea of the proof is to show that after each stage $k$, each arm in $A_k$ satisfies $T_a(t) + N_a(t) \ge m_k$, which leads to the new result. Denote the set of arms $S = \{a \in A \mid q_a < 1/m\}$; then $|S| \le m$. First, for $a \notin S$, by the Chernoff bound, with probability $1 - \delta$, where $\delta = 6N \cdot \exp\{-\frac{T}{24m}\}$, it has been observed at least $\hat q_a \cdot T/2 \ge \frac{T}{4m}$ times. Hence $T_a(T/2) \ge \frac{T}{4m}$ for $a \notin S$. First we prove the following lemma:
Lemma 15. After stage $1 \le k \le N - m$, all the arms in $A_k$ must have $N_a(t) + T_a(t) \ge m_k$, where
$$m_k = \sum_{i=1}^{k} \frac{(N + 1 - i)(n_i - n_{i-1})}{2m} \ge \frac{T/2 - N}{2\,\overline{\log}(N) \cdot m}\Big(1 + \sum_{i=N+2-k}^{N} \frac{1}{i}\Big).$$
Proof. Let $D_a(t) = T_a(t) + N_a(t)$, and denote $m_0 = n_0 = 0$. For $a \notin S$, $T_a(t) \ge \frac{T}{4m}$, so the number of arms $a$ with $T_a(t) \le \frac{T}{4m}$ is at most $m$. The interventions in stage $k$ are only performed on arms with $D_a(t) \le \frac{T}{4m}$ unless all the arms have $D_a(t) \ge \frac{T}{4m} \ge m_k$. If all arms have $D_a(t) \ge \frac{T}{4m} \ge m_k$, the lemma holds. Otherwise, the $(N + 1 - k)(n_k - n_{k-1})$ interventions are performed on at most $m$ arms. Hence all the arms must have $D_a(t) \ge m_{k-1} + \frac{(N + 1 - k)(n_k - n_{k-1})}{m} = m_{k-1} + 2(m_k - m_{k-1}) \ge m_k$ after stage $k \le N - m$.
Then
$$m_k = \sum_{i=1}^{k} \frac{(N + 1 - i)(n_i - n_{i-1})}{2m} \ge \sum_{i=1}^{k-1} \frac{n_i}{2m} + \frac{(N + 1 - k)n_k}{2m} \ge \frac{T/2 - N}{2m\,\overline{\log}(N)}\Big(1 + \sum_{i=N+2-k}^{N} \frac{1}{i}\Big).$$
Lemma 16. After stage $k > N - m$, all the arms in $A_k$ have $T_a(t) + N_a(t) \ge m_k$, where $m_k = \frac{T/2 - N}{2\,\overline{\log}(N)} \cdot \alpha_{N+1-k}$.
Proof. For $a \in A_k$, in stage $k > N - m$, $|A_k| = N - k + 1 \le m$. All the arms in $A_k$ must have $T_a(t) + N_a(t) \ge m_{k-1} + (N + 1 - k)(n_k - n_{k-1})/(N + 1 - k) = m_{k-1} + n_k - n_{k-1}$, where $m_{N-m}$ is as given by Lemma 15. Thus after stage $k > N - m$, all the arms in $A_k$ must have
$$T_a(t) + N_a(t) \ge m_{N-m} + \sum_{N-m < i \le k} (n_i - n_{i-1}) = m_{N-m} + (n_k - n_{N-m}) \ge \frac{T/2 - N}{\overline{\log}(N) \cdot m}\Big(\sum_{i=m+1}^{N} \frac{1}{i}\Big) + \frac{T/2 - N}{\overline{\log}(N)} \cdot \frac{1}{N + 1 - k} = \frac{T/2 - N}{\overline{\log}(N)}\Big(\frac{1}{N + 1 - k} + \frac{1}{m}\sum_{i=m+1}^{N} \frac{1}{i}\Big) = \frac{T/2 - N}{\overline{\log}(N)} \cdot \alpha_{N+1-k} \ge m_k.$$
Lemma 17. In round $t$, with probability $1 - \frac{\delta}{8nt^3}$,
$$|\hat\mu_{obs,a} - \mu_a| < 4\sqrt{\frac{1}{T_a(t)}\log\frac{4I_a}{\delta}}. \quad (50)$$
Proof. When $T_a(t) \le 12\log\frac{4I_a}{\delta}$, the lemma is trivial since $\mu_a, \hat\mu_{obs,a} \in [0, 1]$. Otherwise, define $Q = \frac{6}{q_a}\log(\frac{4I_a}{\delta})$; based on $T_a(t) \ge 12\log\frac{4I_a}{\delta}$, then
$$P\Big(t < \frac{6}{q_a}\log(1/\delta)\Big) \le P\Big(T_a(Q) \ge 12\log\frac{4I_a}{\delta}\Big).$$
Thus by the Chernoff bound, we know $P(T_a(Q) \ge 12\log\frac{4I_a}{\delta}) = P(\hat q_a(Q) \ge 2q_a) \le \delta$, where $\hat q_a(Q) = \frac{T_a(Q)}{Q}$. Hence, with probability at least $1 - \delta$, we have $t \ge \frac{6}{q_a}\log(4I_a/\delta)$. Also, since $\hat P(Z_i = z_i, X_i = x_i, i \le l-1) = T_{a,z,l}(t)/t$, by the Chernoff bound, when $t \ge \frac{6}{q_a}\log(4I_a/\delta)$, with probability $1 - \exp\{-\frac{P(Z_i = z_i, X_i = x_i, i \le l-1)\cdot t}{3}\} \ge 1 - \delta$, we have $\hat P(Z_i = z_i, X_i = x_i, i \le l-1) \le 2P(Z_i = z_i, X_i = x_i, i \le l-1)$. Now, since
$$P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1)\, P(Z_i = z_i, X_i = x_i, i \le l-1) \ge \frac{q_a}{2}\, \hat P(Z_i = z_i, X_i = x_i, i \le l-1) \ge \frac{q_a t}{2T_{a,z,l}(t)} \ge \sqrt{\frac{3}{T_{a,z,l}(t)}\log\frac{4I_a}{\delta}},$$
by Hoeffding's inequality and the Chernoff bound, for $a = \mathrm{do}(X = x)$,
$$|r_{a,z}(t) - P(Y = 1 \mid S = s, Z = z)| \le \sqrt{\frac{1}{2T_{a,z}(t)}\log\frac{4I_a}{\delta}}.$$
Then we get $\hat P_{a,z,l} = \hat P(Z_1 = z_1) \cdots \hat P(Z_l = z_l \mid Z_i = z_i, X_i = x_i, i \le l-1)$, and hence, with probability $1 - 2Z\frac{\delta}{2I_a} = 1 - \delta$,
$$\hat\mu_{obs,a} \le \sum_z r_{a,z}(t)\hat P_{a,z,k} + \frac{1}{2}\sqrt{\frac{1}{T_a(t)}\log\frac{4I_a}{\delta}} \le \cdots \le \mu_a + 4\sqrt{\frac{1}{T_a(t)}\log\frac{4I_a}{\delta}}.$$
Now we prove another lemma to bound the error probability of each stage.
Lemma 18. For an arm $a \in A$ with $N_a(t) + T_a(t) = D_a(t)$, we have $P(|\hat\mu_a - \mu_a| > \gamma) < 4I_a \exp\{-D_a(t)\gamma^2/32\}$.
Proof. We know $N_a(t) \ge \frac{D_a(t)}{2}$ or $T_a(t) \ge \frac{D_a(t)}{2}$. When $N_a(t) \ge \frac{D_a(t)}{2}$, by Hoeffding's inequality, we know that $P(|\hat\mu_a - \mu_a| > \gamma) < 2\exp\{-2N_a(t)(\gamma/2)^2\} < 2\exp\{-D_a(t)(\gamma/2)^2\}$.
When $T_a(t) \ge \frac{D_a(t)}{2}$, by Lemma 17, we know $P(|\hat\mu_a - \mu_a| > \gamma) < 4I_a \exp\{-T_a(t)\gamma^2/16\} < 4I_a \exp\{-D_a(t)\gamma^2/32\}$.
Then we complete the proof.
Hence the event
$$\zeta = \Big\{\forall i \in \{1, 2, \cdots, N\},\ \forall a \in A_i:\ |\hat\mu_a - \mu_a| < \frac{1}{2}\max\{\Delta_{(N+1-i)}, \varepsilon\}\Big\}$$
fails to happen with probability at most
$$\sum_{i=1}^{N} \sum_{a \in A_i} 4I_a \exp\Big\{-m_i\Big(\frac{\max\{\Delta_{(N+1-i)}, \varepsilon\}}{2}\Big)^2 / 32\Big\} \le \sum_{i=1}^{N} \sum_{a \in A_i} 4I_a \exp\Big(-\alpha_{N+1-i} \cdot \max\{\Delta_{(N+1-i)}, \varepsilon\}^2\, \frac{T/2 - N}{128\,\overline{\log}(N)}\Big) \le 4I_a N^2 \exp\Big(-\frac{T/2 - N}{128\,\overline{\log}(N)\, H_3}\Big),$$
where $H_3 = \max_{i = 1, 2, \cdots, N}\{\alpha_i^{-1}(\max\{\Delta_{(i)}, \varepsilon\})^{-2}\}$. Now we prove that under event $\zeta$, the algorithm outputs an $\varepsilon$-optimal arm. For each stage $k$, we prove that one of the following conditions is satisfied: (1) all arms in $A_{k-1}$ are $\varepsilon$-optimal; (2) stage $k$ eliminates a non-optimal arm $a_k \neq a^*$.
In fact, assume (1) does not hold; then there exists at least one arm that is not $\varepsilon$-optimal. Since $|A_{k-1}| = N + 1 - k$, there must exist an arm $a \in A_{k-1}$ with $\mu_{a^*} - \mu_a \ge \max\{\varepsilon, \Delta_{(N+1-k)}\}$. Hence, by event $\zeta$, after stage $k$, all arms in $A_k$ satisfy $|\hat\mu_a - \mu_a| < \frac{1}{2}\max\{\Delta_{(N+1-k)}, \varepsilon\}$, and therefore
$$\hat\mu_a \le \mu_a + \frac{1}{2}\max\{\Delta_{(N+1-k)}, \varepsilon\} \le \mu_{a^*} - \max\{\Delta_{(N+1-k)}, \varepsilon\} + \frac{1}{2}\max\{\Delta_{(N+1-k)}, \varepsilon\} < \hat\mu_{a^*} + \max\{\Delta_{(N+1-k)}, \varepsilon\} - \max\{\Delta_{(N+1-k)}, \varepsilon\} = \hat\mu_{a^*}.$$
So the optimal arm $a^*$ will not be eliminated. Hence, if (2) always happens, the remaining arm is the optimal arm; otherwise, if (1) happens, the algorithm returns an $\varepsilon$-optimal arm. Hence we complete the proof.
[Algorithm box fragment (Step 1): conduct a passive observation and estimate, from the observational data, the quantities for $a \in A \setminus \{\mathrm{do}()\}$ defined in Eq. (7).]
[Figure 2: An example of collaborative graphs.]
[Displaced proof fragment: the inequalities are based on $\log(x + y) \le \log(xy) = \log x + \log y$ when $x, y \ge 2$; then choose $C$ such that $C - 2N\log C - N - 192 - (64 + 2N)\log\log 2C \ge 0$, which completes the proof. Hence, apply Lemma 7 with $N = 1152(M^{(1)})^2 \kappa_\eta^2 \cdots$]
[Figure 4: Error probability for Experiment 1.]
[Figure 5: Error probability for Experiment 2.]
[Figure 6: Error probability and sample complexity for Experiment 3.]
[Algorithm box fragment (Steps 2-3, defined in Eq. (8)): merge the observational estimate and the interventional estimate; for $a \in A$, calculate $[L^t_a, U^t_a] = [L^t_{O,a}, U^t_{O,a}] \cap [L^t_{I,a}, U^t_{I,a}]$ and $\hat\mu^t_a = \frac{L^t_a + U^t_a}{2}$.]
[1] Raghavendra Addanki and Shiva Kasiviswanathan. Collaborative causal discovery with atomic interventions. In Advances in Neural Information Processing Systems, volume 34, pages 12761-12773. Curran Associates, Inc., 2021.
[2] Jean-Yves Audibert, Sébastien Bubeck, and Rémi Munos. Best arm identification in multi-armed bandits. In COLT 2010 - The 23rd Conference on Learning Theory, Haifa, Israel, pages 41-53. Omnipress, 2010.
[3] Sébastien Bubeck, Nicolò Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122, 2012.
[4] Shouyuan Chen, Tian Lin, Irwin King, Michael R. Lyu, and Wei Chen. Combinatorial pure exploration of multi-armed bandits. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS'14), pages 379-387. MIT Press, 2014.
[5] Alexander Dawid and Vanessa Didelez. Identifying the consequences of dynamic treatment strategies: A decision-theoretic overview. Statistics Surveys, 4, 2010.
[6] Yihan Du, Yuko Kuroki, and Wei Chen. Combinatorial pure exploration with full-bandit or partial linear feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 35:7262-7270, 2021.
[7] Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of best-arm identification in multi-armed bandit models. Journal of Machine Learning Research, 17:1:1-1:42, 2016.
[8] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In Computational Learning Theory (COLT 2002), volume 2375 of Lecture Notes in Computer Science, pages 255-270. Springer, 2002.
[9] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7:1079-1105, 2006.
[10] Shi Feng and Wei Chen. Combinatorial causal bandits. arXiv preprint arXiv:2206.01995, 2022.
[11] Kristjan Greenewald, Dmitriy Katz, Karthikeyan Shanmugam, Sara Magliacane, Murat Kocaoglu, Enric Boix Adsera, and Guy Bresler. Sample efficient active learning of causal trees. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[12] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. In The Collected Works of Wassily Hoeffding, pages 409-426. Springer, 1994.
[13] Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sébastien Bubeck. lil' UCB: An optimal exploration algorithm for multi-armed bandits. Journal of Machine Learning Research, 35, 2013.
[14] Kevin Jamieson and Robert Nowak. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting. In 2014 48th Annual Conference on Information Sciences and Systems (CISS), pages 1-6, 2014.
[15] Shivaram Kalyanakrishnan, Ambuj Tewari, Peter Auer, and Peter Stone. PAC subset selection in stochastic multi-armed bandits. In Proceedings of the 29th International Conference on Machine Learning (ICML 2012), 2012.
[16] Zohar Karnin, Tomer Koren, and Oren Somekh. Almost optimal exploration in multi-armed bandits. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), pages 2275-2283, 2013.
[17] Zohar Karnin, Tomer Koren, and Oren Somekh. Almost optimal exploration in multi-armed bandits. In Proceedings of the 30th International Conference on Machine Learning (ICML'13), pages III-1238-III-1246. JMLR.org, 2013.
[18] Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of best-arm identification in multi-armed bandit models. Journal of Machine Learning Research, 17:1:1-1:42, 2016.
[19] Finnian Lattimore, Tor Lattimore, and Mark D. Reid. Causal bandits: learning good interventions via causal inference. In Advances in Neural Information Processing Systems, pages 1189-1197, 2016.
[20] Tor Lattimore. Refining the confidence level for optimistic bandit strategies. Journal of Machine Learning Research, 19:1-32, 2018.
[21] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web (WWW 2010), pages 661-670. ACM, 2010.
[22] Lihong Li, Yu Lu, and Dengyong Zhou. Provably optimal algorithms for generalized linear contextual bandits. In International Conference on Machine Learning, pages 2071-2080. PMLR, 2017.
[23] Shuai Li, Fang Kong, Kejie Tang, Qizhi Li, and Wei Chen. Online influence maximization under linear threshold model. In Advances in Neural Information Processing Systems, 2020.
[24] Yangyi Lu, Amirhossein Meisami, and Ambuj Tewari. Causal bandits with unknown graph structure. In Advances in Neural Information Processing Systems, volume 34, pages 24817-24828. Curran Associates, Inc., 2021.
[25] Yangyi Lu, Amirhossein Meisami, Ambuj Tewari, and William Yan. Regret analysis of bandit problems with causal background knowledge. In Conference on Uncertainty in Artificial Intelligence, pages 141-150. PMLR, 2020.
[26] Aurghya Maiti, Vineet Nair, and Gaurav Sinha. Causal bandits on general graphs. arXiv preprint arXiv:2107.02772, 2021.
[27] Shie Mannor and John N. Tsitsiklis. The sample complexity of exploration in the multi-armed bandit problem. Journal of Machine Learning Research, 5:623-648, 2004.
[28] Blake Mason, Lalit Jain, Ardhendu Tripathy, and Robert Nowak. Finding all ε-good arms in stochastic bandits. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.
[29] Vineet Nair, Vishakha Patil, and Gaurav Sinha. Budgeted and non-budgeted causal bandits. In International Conference on Artificial Intelligence and Statistics, pages 2017-2025. PMLR, 2021.
[30] Zipei Nie. Matrix anti-concentration inequalities with applications. arXiv preprint arXiv:2111.05553, 2021.
[31] Judea Pearl. Causality. Cambridge University Press, 2nd edition, 2009.
[32] Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58(5):527-535, 1952.
[33] Rajat Sen, Karthikeyan Shanmugam, Alexandros G. Dimakis, and Sanjay Shakkottai. Identifying best interventions through online importance sampling. In International Conference on Machine Learning, pages 3057-3066. PMLR, 2017.
[34] Benito van der Zander, Maciej Liskiewicz, and Johannes Textor. Separators and adjustment sets in causal graphs: Complete criteria and an algorithmic framework. Artificial Intelligence, 270:1-40, 2019.
[35] Akihiro Yabe, Daisuke Hatano, Hanna Sumita, Shinji Ito, Naonori Kakimura, Takuro Fukunaga, and Ken-ichi Kawarabayashi. Causal bandits with propagating inference. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), volume 80 of Proceedings of Machine Learning Research, pages 5508-5516. PMLR, 2018.
[36] Chang Ming Yin and Lincheng Zhao. Strong consistency of maximum quasi-likelihood estimates in generalized linear models. Science in China Series A: Mathematics, 48:1009-1014, 2005.
[37] Zhijie Zhang, Wei Chen, Xiaoming Sun, and Jialin Zhang. Online influence maximization with node-level feedback using standard offline oracles. In AAAI, 2022. |
104,292,008 | A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning | Due to the lack of enough training data and high computational cost to train a deep neural network from scratch, transfer learning has been extensively used in many deep-neural-network-based applications, such as face recognition, image classification, speech recognition, etc. A commonly-used transfer learning approach involves taking a part of a pre-trained model, adding a few layers at the end, and re-training the new layers with a small dataset. This approach, while efficient and widely used, imposes a security vulnerability because the pre-trained models used in transfer learning are usually available publicly to everyone, including potential attackers. In this paper, we show that without any additional knowledge other than the pre-trained model, an attacker can launch an effective and efficient brute force attack that can craft instances of input to trigger each target class with high confidence. Note that we assume that the attacker does not have access to any targetspecific information, including samples from target classes, re-trained model, and probabilities assigned by Softmax to each class, and thus called target-agnostic attack. These assumptions render all previous attacks impractical, to the best of our knowledge. To evaluate the proposed attack, we perform a set of experiments on face recognition and speech recognition tasks and show the effectiveness of the attack. Our work sheds light on a fundamental security challenge of transfer learning in deep neural networks. | [] | A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei [email protected]
University of California
Davis, CA 95616, USA
Xin Liu [email protected]
University of California
Davis, CA 95616, USA
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Adversarial attack · Evasion attack · Transfer learning · Deep neural networks · Target-agnostic attack
Due to the lack of enough training data and high computational cost to train a deep neural network from scratch, transfer learning has been extensively used in many deep-neural-network-based applications, such as face recognition, image classification, speech recognition, etc. A commonly-used transfer learning approach involves taking a part of a pre-trained model, adding a few layers at the end, and re-training the new layers with a small dataset. This approach, while efficient and widely used, imposes a security vulnerability because the pre-trained models used in transfer learning are usually available publicly to everyone, including potential attackers. In this paper, we show that without any additional knowledge other than the pre-trained model, an attacker can launch an effective and efficient brute force attack that can craft instances of input to trigger each target class with high confidence. Note that we assume that the attacker does not have access to any targetspecific information, including samples from target classes, re-trained model, and probabilities assigned by Softmax to each class, and thus called target-agnostic attack. These assumptions render all previous attacks impractical, to the best of our knowledge. To evaluate the proposed attack, we perform a set of experiments on face recognition and speech recognition tasks and show the effectiveness of the attack. Our work sheds light on a fundamental security challenge of transfer learning in deep neural networks.
Introduction
Deep learning has been widely used in various applications, such as image classification [18], image segmentation [8], speech recognition [12], machine translation [25], network traffic classification [19], etc. Because training a deep model is expensive, time-consuming and requires a large amount of data to achieve a good accuracy, it is often undesirable or impractical to train a model from scratch in many applications. In such cases, transfer learning is often adopted to overcome such hurdles.
A typical approach for transfer learning is to take a part of a network that has already been trained on a similar task, add one or a few layers at the end, and then re-train the model. Since the transferred part of the model has already been trained on a similar task, its weights are usually kept frozen and only the new layers are trained on the new task. Hence, the number of trainable parameters is considerably smaller than when training the entire model, which allows us to train the model quickly with a small dataset. Transfer learning has been used in practice, including applications such as face recognition [18], text-to-speech synthesis [13], encrypted traffic classification [20], and skin cancer detection [11].
One security vulnerability of transfer learning is that pre-trained models are usually known models that are publicly available to everyone. For example, Google Cloud ML tutorial suggests using Google's Inception V3 model as a pretrained model and Microsoft Cognitive Toolkit (CNTK) suggests using ResNet18 as a pre-trained model for tasks such as flower classification [24]. This means that the part of the model transferred from the pre-trained model is known to potential attackers.
In this paper, we show that an attacker can launch a target-agnostic attack and fool the network when only the pre-trained model is available to the attacker. In our attack, the attacker only knows the source (pre-trained) model used to re-train the target model. The attacker does not know the class labels, samples from any target class, the entire re-trained model, or probabilities the model assigns to each class. That is why it is called target-agnostic. To the best of our knowledge, these assumptions are more restrictive than any previously proposed attacks and none of them works under such restrictive assumptions.
The target-agnostic attack can be adopted in scenarios where a fingerprint, face, or voice is used for authentication/verification. In such cases, the attacker usually does not have access to any instance of the fingerprint or voice samples; otherwise she could have used that instance to bypass authentication/verification. The aim of our attack is to craft an input that triggers any target class with high confidence. The crafting process can also be continued to trigger all target classes. Such adversarial examples can be used to easily bypass authentication/verification systems without having a true sample of the target class. Our work develops a highly effective target-agnostic attack, exploiting an intrinsic characteristic of transfer learning. Our experiments on face recognition and speech recognition demonstrate the effectiveness of our attack.
This work reveals an inherent security challenge in the current practice of transfer learning in deep learning: due to the lack of sufficient non-linearity in the re-trained part of the model, the search space becomes small enough that a brute force attack operates remarkably efficiently. This lack of non-linearity stems from the fact that with few re-training data samples only a single layer or a few layers can be trained after transferring the weights, and thus the re-trained layers cannot be extremely non-linear. This is a fundamental challenge in transfer learning. The simple solution of considerably increasing the re-training dataset as well as the number of re-training layers to generate sufficient non-linearity contradicts the whole purpose of transfer learning.
Related Work
In general, there are two types of attacks on deep neural networks in the literature. One type aims to generate adversarial examples during inference (evasion attacks) [10,23,6,7], while the other assumes that some modifications are possible during training (poisoning attacks) [24,22,9,15,14,21,17,5].
In an evasion attack, an attacker aims to craft or modify an input to fool the neural network or force the model to predict a specific target class. Various methods have been developed to generate adversarial examples by iteratively modifying pixels in an image using the gradient of the loss function with respect to the input to finally fool the network [23,6,7]. These attacks usually assume that the gradient of the loss function is available to the attacker. In cases where the gradient is not available, it has been shown that one can still generate adversarial examples if the top 3 (or any other number of) predicted class labels are available [22]. Interestingly, adversarial examples are often universal, that is, an adversarial example generated for a model can often fool other models as well [6]. This allows an attacker to craft adversarial examples from a model she trained herself and use them on the target model, provided that the training set is available for training a model. A gradient-based sketch of one such method is given below.
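To make the gradient-based crafting concrete, here is a minimal Python sketch in the style of the fast gradient sign method, one representative of the family of attacks cited above (the variables model, x, y_true, and loss_fn are assumed to be given, and eps is a hypothetical perturbation budget):

import torch

def fgsm(model, x, y_true, loss_fn, eps=0.03):
    # One gradient step *up* the loss surface with respect to the input.
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y_true)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()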
Several defenses have been proposed in the literature to defend against these easily generated adversarial examples, including dimensionality reduction, mean blurring, dropout randomization, etc., which mainly attempt to make the model resistant to small perturbations of the input image. However, none of these defenses has been shown to be fully resistant [6]. Furthermore, some methods propose re-training the model with adversarial examples or training a new model to detect adversarial examples. Nevertheless, these methods have also failed to be successful against all adversarial examples [6]. Note that the effectiveness of adversarial examples is not limited to image classification; their effectiveness has been shown on other tasks, such as speech recognition, image segmentation [16], PDF malware classifiers [26], etc.
The second type of attacks on deep neural networks, which can probably be extended to any machine learning algorithm, is usually called a backdoor attack or data poisoning. In a data poisoning attack, an attacker modifies the training dataset to create a backdoor that can be used later to trigger specific neurons, which causes misclassification. In some papers, a specific pattern is generated and added to the training set to fool the network into associating the pattern with a specific target class [22,9,14,15]. For instance, these patterns can be an eyeglass in a face recognition task [22], randomly chosen patterns [9], specific watermarks or logos [15], specific patterns to fool malware classifiers [17], etc. In some extreme cases, it has been shown that by modifying only a single bit to have the maximum or minimum possible value, one can create a backdoor [5]. This happens due to the operation of the max pooling layer commonly used in convolutional neural networks. After the training phase, the backdoor can be used to fool the network into predicting the class label associated with these patterns at inference time. For instance, an eyeglass can be added to any face image to fool the network into misclassifying the person in the image. Similar to adversarial examples, there is no effective defense against these attacks yet.
The vulnerability of transfer learning has been studied in [12,24]. In [24], the pre-trained model and an instance of a target image are assumed to be available. Assuming that the attacker knows that the first k layers of the pre-trained model are copied to the new model, the attacker perturbs the source image such that the internal representation of the source image becomes similar to the internal representation of the target image at layer k, using the pre-trained model, as sketched below. In [12], first, a set of semantic neighbors is generated for a given source and target input, which are used to find the salient features of the source and target classes. Then, similar to [24], the pre-trained feature extractor is used to perturb the source image along the salient features such that their internal representations become close. However, these attacks do not work when no instance of the target class is available.
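For contrast with our target-agnostic setting, a minimal Python sketch of such a representation-mimicking attack might look as follows (hedged: feature_extractor, source, and target are assumed to be given; note that it requires a sample of the target class, which our threat model forbids):

import torch

def mimic_attack(feature_extractor, source, target, steps=200, alpha=0.01):
    with torch.no_grad():
        target_repr = feature_extractor(target)   # needs a target-class sample!
    x = source.clone().requires_grad_(True)
    for _ in range(steps):
        # Push the source's internal representation toward the target's.
        loss = ((feature_extractor(x) - target_repr) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            x -= alpha * x.grad
        x.grad.zero_()
    return x.detach()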
In this paper, we propose a target-agnostic attack on transfer learning. We assume that only the pre-trained model (e.g., VGG face or ResNet18) is available to the attacker. We assume the re-training data and the re-trained model are unknown, and that not even a single target-class sample is available. Our attack model is more restrictive than those of previous studies, and thus renders previous attacks on transfer learning infeasible.
System Model
Transfer Learning
Transfer learning is a process by which the knowledge from one task is transferred to another task to accelerate the learning process. In the case of classification with deep neural networks, it is done using the architecture and weights of a model trained on a similar task and then re-training the model on a new task.
As an example, one can transfer the entire model trained on the source task and then initialize and re-train the last layer on the target task, as shown in Fig. 1.
In transfer learning, the layers whose weights are transferred to the new model are called the feature extractor, which outputs a semantic (internal) representation of an input. The last few layers that are re-trained on the new task are called the classifier. Using transfer learning, the number of learning parameters can be reduced dramatically. Hence, it requires only a few samples from the target classes, and significantly less computational power and training time. A minimal code sketch of this recipe follows.
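As an illustration, here is a minimal PyTorch sketch of the freeze-and-retrain recipe described above (a sketch under stated assumptions, not the authors' setup: a torchvision VGG16 stands in for the publicly available pre-trained model, and the 5-class head mirrors the face recognition example discussed later):

import torch
import torch.nn as nn
from torchvision import models

backbone = models.vgg16(pretrained=True)   # publicly known pre-trained model
for p in backbone.parameters():
    p.requires_grad = False                # freeze the transferred weights

num_target_classes = 5                     # e.g., 5 new identities
# Replace the last FC layer; only this layer is trained on the new task.
backbone.classifier[6] = nn.Linear(backbone.classifier[6].in_features,
                                   num_target_classes)

optimizer = torch.optim.Adam(
    [p for p in backbone.parameters() if p.requires_grad], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# ... train only the new layer on the small target dataset ...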
Threat Model
In this paper, we assume that the transferred model, already trained on a source task, is publicly available. This is a reasonable assumption which, in fact, reflects common practice. For instance, [15] used the VGG face model [18], which is trained to recognize 2622 identities, to recognize 5 new faces, using the same procedure shown in Fig. 1. In such cases, the transferred model is publicly available, but the re-trained model is not known to an attacker. In other words, the attacker only knows the feature extractor but not the classifier. Moreover, the attacker does not have access to any samples of the new target classes. This makes the attack more difficult yet more practical. Previous work on transfer learning [24,12] assumes that at least one sample image from each target class is available, because it aims to generate images that trigger the same neurons at the feature extractor as the target sample does. These approaches do not work without samples from the target class.
Attack Design
While our attack targets any transfer-learning-based deep model, we use face recognition based on VGG face as an example for explanation. Fig. 2 shows the typical transfer learning approach for face recognition [18]. Existing attacks on transfer learning rely on perturbing a source image to mimic a target image at the last layer of the feature extractor; such attacks thus require samples from target classes. However, our motivation is to craft images for models used in systems, such as authentication/verification systems, for which no target sample is available; otherwise the attacker could have just used that sample. In such a case, we do not have access to samples of the target classes and, consequently, the previous attacks do not work.
Design Principle: To launch an attack under these restrictive assumptions, we need to approach the problem differently. Our key observation is that the typical approach of transfer learning does not generate enough non-linearity for the re-trained model, given that the feature extractor is known. In other words, the last layer (or the last few layers), which is retrained during transfer learning, only consists of a fully connected (FC) layer and a softmax that outputs the probability of each class. In such a case, each neuron in the last FC layer has a direct and linear relation with one or a few target classes with different weights. Hence, the attacker can trigger these neurons one by one to see which one is highly associated with each target class. This limitation of transfer learning allows us to design an efficient yet powerful brute force attack that can trigger a target class with a relatively small number of attempts. Moreover, we show in the Evaluation section that even if the attacker does not know the exact layer from which the re-training was carried out, she can still assume only the last layer was re-trained and launch the attack. In such cases, the effectiveness of the attack drops, but it is still powerful enough to fool the re-trained model after a few attempts.
The main attack idea is to activate the $i$-th neuron at the output of the feature extractor (the $(n-1)$-th layer), denoted by $x^{n-1}_i$, with a high value and keep the other neurons at the same layer zero. After the feature extractor, the model has only an FC layer and a softmax that outputs the probability of each class. This structure, which is fundamental to the flexibility and ease of transfer learning, inherently limits the amount of non-linearity, and thus makes the re-trained model prone to attack. Due to the lack of non-linearity after the feature extractor, if there exists a neuron at the $n$-th layer that associates a large weight with $x^{n-1}_i$, that neuron will become large, and the softmax will assign a high probability to its class. In order to find an adversarial image, we can therefore iteratively try to trigger each neuron at the $(n-1)$-th layer.
Next, we further explain the attack intuition in more detail using a simple example. Let us assume that the output of the feature extractor is the $(n-1)$-th layer and we only have two target classes. Keep all neurons at the $(n-1)$-th layer zero except the $i$-th neuron, denoted by $x_i^{n-1}$. Then, for the last layer $n$, we have

$$x_1^n = W_{1,i}^n \, x_i^{n-1} \quad \text{and} \quad x_2^n = W_{2,i}^n \, x_i^{n-1},$$

and all other terms will be zero (we omit the bias $b$ for simplicity). Now, if $W_{1,i}^n > W_{2,i}^n$, increasing $x_i^{n-1}$ increases the difference between $x_1^n$ and $x_2^n$. Although the difference increases linearly with $x_i^{n-1}$, the softmax activation makes the difference exponential. In other words, by increasing $x_i^{n-1}$, one can arbitrarily increase the confidence of the target class whose weight $W_{\cdot,i}^n$ is higher, i.e., class 1 in this example. That is the motivation of the proposed brute force attack. We will show in the next section that even if more than one FC layer is trained after the feature extractor, the attack is still effective. That is because the re-training dataset is usually small and, consequently, the classifier portion of the model cannot be trained to capture sufficient non-linearity.
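To make the exponential amplification concrete, the softmax odds between the two classes can be written in terms of the triggered neuron:

$$\frac{p_1}{p_2} = \frac{e^{x_1^n}}{e^{x_2^n}} = e^{(W_{1,i}^n - W_{2,i}^n)\, x_i^{n-1}}$$

For example, if $W_{1,i}^n - W_{2,i}^n = 0.1$, raising $x_i^{n-1}$ from 10 to 100 moves the odds in favor of class 1 from roughly $e^1 \approx 2.7$ to $e^{10} \approx 22{,}000$, so the softmax confidence of class 1 can be driven arbitrarily close to 1.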
Algorithm 1: The target-agnostic brute force attack

Data: M (number of neurons at the output of the feature extractor), I_img (initial input), K (number of iterations), F (known feature extractor), α (step constant), T (the target model under attack)
for i from 1 to M do
    Y = 0_M; Y[i] = 100;  // any sufficiently large number
    X = I_img;
    for j from 1 to K do
        L = Σ_{l=1..M} (Y[l] − F(X)[l])²;
        δ = ∂L/∂X;
        X = X − α·δ;
    end
    if T(X) bypasses the authentication then
        return X;
    end
end
return ∅;

Algorithm Design: The brute force algorithm is shown in Algorithm 1. We first iterate through all neurons at the output of the feature extractor and set the target, Y, such that at each iteration only one neuron is triggered. We set all elements of Y to zero except for the i-th one, which can be set to any sufficiently large number, e.g., 100 in Algorithm 1. Note that Y is a target for the feature extractor, not for the entire re-trained model. In the case of the VGG face, there are 4096 neurons at this layer, so we only try 4096 times at maximum. In fact, we will show in the next section that we only need a few attempts to trigger any class, and far fewer than 4096 attempts to trigger all target classes at least once.
Inside the second loop, we use the derivative of the loss (here, the MSE between Y and the feature extractor's output) with respect to the input and change the input gradually to decrease the loss. The goal is to find an input that triggers a neuron with a high value so that it bypasses the authentication system. Here, we assume that whenever the probability (confidence) assigned by the softmax is higher than a preset threshold, say 99%, the system authorizes the attacker.
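The following is a minimal TensorFlow 2 / Keras sketch of Algorithm 1, assuming `feature_extractor` and `target_model` are callable Keras models and that `target_model` outputs softmax probabilities; the names and defaults are illustrative, not the exact implementation of [4]:

```python
import numpy as np
import tensorflow as tf

def target_agnostic_attack(feature_extractor, target_model, init_img,
                           num_neurons, k=50, alpha=0.1, threshold=0.99):
    """Sketch of Algorithm 1: craft inputs that trigger one feature-extractor
    neuron at a time, then query the (unknown) re-trained model as an oracle."""
    for i in range(num_neurons):
        # Target activation: zero everywhere except a large value at neuron i.
        y = np.zeros((1, num_neurons), dtype=np.float32)
        y[0, i] = 100.0  # any sufficiently large number
        x = tf.Variable(init_img[None, ...], dtype=tf.float32)
        for _ in range(k):
            with tf.GradientTape() as tape:
                # L = sum_l (Y[l] - F(X)[l])^2
                loss = tf.reduce_sum(tf.square(tf.constant(y) - feature_extractor(x)))
            grad = tape.gradient(loss, x)
            x.assign_sub(alpha * grad)  # X = X - alpha * dL/dX
        probs = target_model(x.numpy())
        if np.max(probs) > threshold:  # crafted input bypasses the authentication
            return x.numpy()
    return None
```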
Implication: We call this type of attack target-agnostic because it does not exploit any information from target's classes, model, or samples. In fact, if the same pre-trained model is used to re-train two different target tasks (models), A and B, the proposed target-agnostic attack crafts similar adversarial inputs for both A and B since it only uses the pre-trained model to craft inputs. The implication is that the attacker can craft a set of adversarial inputs with the source model using the proposed attack and use it effectively to attack all retrained models that use the same pre-trained model. This means that the attack crafting time is not important and one can create a database of likely-to-trigger inputs for each popular pre-trained model, such as the VGG face or ResNet18. Given the simplicity, remarkable effectiveness, and target-agnostic feature of the proposed algorithm, it poses a huge security threat to transfer learning.
Transfer learning allows easier and faster retraining of a new model with few target samples and layers, and lower computational cost. However, at the same time, this simplicity exposes an inherent security vulnerability due to the lack of non-linearity. Our work reveals a fundamental challenge of transfer learning: A trained model with a small dataset using transfer learning can be easily broken with the proposed brute-force attack. The straightforward defense of using significantly more training data as well as re-training more layers to generate enough non-linearity contradicts the purpose of using transfer learning in the first place. Therefore, a solution to this security threat should be more fundamental and potentially change the way we practice transfer learning today.
Evaluation
In this section, we evaluate the effectiveness of our approach using two test cases: face recognition and speech recognition. We use Keras with a TensorFlow backend on a server with an Intel Xeon W-2155, an Nvidia Titan Xp GPU, and Ubuntu 16.04. We use the following metrics to evaluate the proposed attack:
- Number of attempts. Assuming that the number of target classes is known, this metric shows how many adversarial input instances are created, on average, in the first for loop of Algorithm 1 before all target classes have been triggered at least once with above 99% confidence. This metric illustrates whether adversarial examples are easy to craft for all target classes.
- Effectiveness (X%). This metric shows the ratio of crafted inputs that trigger any target class with X% confidence over the total number of crafted inputs. We use 95% and 99% confidence for effectiveness in this paper. Both metrics can be computed from the attack log alone, as sketched below.
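The helper below is our own illustration, not code from the paper: it takes the target model's confidences for each crafted input, in generation order, and computes both metrics.

```python
import numpy as np

def attack_metrics(pred_probs, num_classes, conf=0.99):
    """pred_probs: (num_crafted, num_classes) array of target-model
    confidences for each crafted input, in the order Algorithm 1 produced them."""
    top_conf = pred_probs.max(axis=1)
    top_class = pred_probs.argmax(axis=1)
    hits = top_conf > conf

    # Effectiveness(X%): fraction of crafted inputs triggering some class with > X% confidence.
    effectiveness = hits.mean()

    # Number of attempts: crafted inputs needed until every class was triggered once.
    triggered, attempts = set(), None
    for idx, (c, hit) in enumerate(zip(top_class, hits), start=1):
        if hit:
            triggered.add(int(c))
        if len(triggered) == num_classes:
            attempts = idx
            break
    return attempts, effectiveness
```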
Case Study: Face Recognition
In this case study, we use the VGG face model [18] as the pre-trained model. The implementation is available in [4]. We remove the last FC layer and the softmax (SM) layer to make a feature extractor. Then, we pair it with a new FC and SM layer, and re-train the model (while fixing the feature extractor) with the labeled faces dataset of the vision lab at UMass [1]. During re-training, we train the model with the Adam optimizer and a cross-entropy loss function. In some experiments, we add more FC layers before the SM layer, which will be explained later.

Number of Target Classes: Table 1 and Table 2 show the impact of the number of target classes on the attack performance. We use the 20 classes with the highest number of samples from the UMass dataset [1]. The largest class is George W. Bush with 530 samples and the smallest one is Alejandro Toledo with 39 samples. We start from a blank image and we use 50 and 0.1 for k and α, respectively. For 5, 10, and 15 classes, we randomly choose a set from the 20 classes, then re-train and attack the model 50 times and average the results. For 20 classes, we only re-train and attack once. Table 1 shows the result when we use five images from each class for the test set and all other images for the training set; hence, the re-training dataset is imbalanced. To balance the dataset, the result of which is shown in Table 2, we undersample all classes to have an equal training size.
As shown, the effectiveness of the attack on an imbalanced model is higher. However, the number of attempts is worse. That is because when an imbalanced dataset is used for re-training, the model finds more patterns in the larger classes and associates more neurons in the FC layer with those classes. Hence, it is easier to trigger those classes with the proposed method, which increases the effectiveness. However, it is much harder to trigger the smallest class, which makes the number of attempts larger. The impact of an imbalanced re-training dataset is studied further in the Distribution of Target Classes subsection. Moreover, the effectiveness and the number of attempts improve when the number of target classes is decreased, as expected.
Choice of Algorithm's Parameter: To study the effect of the number of iterations (k) on attack effectiveness, we use the balanced dataset and 5 target classes as explained in the previous subsection. As shown in Fig. 3, the effectiveness increases as the number of iterations increases. The rate of improvement decreases considerably after 50 iterations. Note that the time required to craft a single adversarial sample is proportional to the number of iterations. Therefore, as a trade-off between effectiveness and crafting time, we set k = 50 for all the other experiments.

Choice of Initial Image: To generate adversarial images using Algorithm 1, we need to start with an initial image. To find out whether the initial image we start with has any impact on the brute force attack, we conduct 3 different experiments, using a random input, a blank image (with all pixels set to one), and random images of celebrities. The results are shown in Fig. 4: the first column shows crafted images starting from the random input; the second column illustrates crafted images from the blank image; the third and fourth columns show the initial images and the crafted images from those initial images, respectively; and the fifth column illustrates a sample image from each class that is used for re-training. In our experiments, the choice of initial image has a negligible impact on the effectiveness of our attack. Table 3 shows the result of using different initial images on the attack performance. We re-train a model only once with 5 randomly chosen faces and achieve 99.38% accuracy. Then, we launch the attack on the same model 3 times, each with a different initial image. Although using a face image marginally improves the attack performance, the impact is negligible and the other initial inputs are still considerably effective.
Number of Layers to Re-train:
In previous experiments, we assume that the weights of the feature extractor transferred from the pre-trained model are fixed during re-training and only the last FC layer is changed. One can tune more layers during re-training. Fig. 5(a) shows the impact of tuning more layers on the effectiveness and accuracy. Note that we assume the attacker does not know anything about the target model. Hence, in this experiment, the attacker still uses the pre-trained feature extractor up until the last FC layer. That means the pre-trained feature extractor that the attacker uses is slightly different from the re-trained model. In Fig. 5(a), the X axis represents the layer from which we start tuning up to the last FC layer. Due to the small re-training dataset, as the number of tuned layers increases, the accuracy drops. However, by tuning more layers, the pre-trained model that the attacker has access to becomes more different from the re-trained model. That is why the effectiveness of the attack decreases. Similarly, the number of attempts increases, as shown in Fig. 5(b). Despite the difference between the re-trained model and the model the attacker has access to, the attack is still effective, which suggests that the pre-trained model cannot change dramatically during the re-training process.
Number of New Layers in the Re-trained Model: Next, we measure how adding and training more layers (pairs of FC + ReLU) after the feature extractor affects the proposed attack's effectiveness. In this experiment, we use 5 balanced target classes. As shown in Table 4, adding more layers decreases the accuracy of the re-trained model because the re-training dataset is small and not enough to train more layers from scratch. The effectiveness of the attack decreases slightly as more new layers are tuned. However, due to the lack of enough re-training data, the model cannot capture highly non-linear relations after the feature extractor, and hence the attack is still effective. Moreover, the feature extractor has already been trained to output a high-level semantic representation, which means that the classifier does not need to capture a highly complex function anyway. When adding more new layers, not all target classes are affected equally and some classes may become harder to trigger. That is why the number of attempts increases.
Distribution of Target Classes: Fig. 6(a) illustrates a typical distribution of target classes triggered by the crafted images of the proposed method. It is clear that the distribution is far from uniform. It basically means that more neurons in layer $n-1$ are associated with class 1 and, hence, during the brute force attack, more crafted images will trigger that class.
To measure the impact of the re-training set on the distribution of target classes, we use the Jensen-Shannon distance (JSD). The Jensen-Shannon divergence measures the similarity between two distributions as follows:

$$\mathrm{JSD}(P \,\|\, Q) = \frac{1}{2} D(P \,\|\, M) + \frac{1}{2} D(Q \,\|\, M)$$

where $D(\cdot \| \cdot)$ is the Kullback-Leibler divergence and $M = \frac{1}{2}(P + Q)$. The square root of the JSD is a metric, which we use to compare the distribution of data samples in the re-training dataset with the distribution of classes triggered by the adversarial inputs of our method.
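As a small illustration, these distances can be computed with SciPy's `jensenshannon`, which returns the square root of the JSD. The class counts below are illustrative placeholders (only the 530- and 39-sample extremes come from the text):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon  # returns sqrt(JSD)

train_counts = np.array([530.0, 250.0, 120.0, 80.0, 39.0])  # illustrative samples/class
train_dist = train_counts / train_counts.sum()
trigger_dist = np.array([0.55, 0.20, 0.12, 0.08, 0.05])     # classes hit by crafted inputs
uniform = np.full(5, 0.2)

print(jensenshannon(train_dist, uniform))    # distance of the re-training set to Uniform
print(jensenshannon(trigger_dist, uniform))  # distance of triggered classes to Uniform
```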
We find that the distribution of training samples during re-training can affect the target class distribution. Fig. 6(b) shows the JS distance between the training set distribution and the Uniform distribution versus the JS distance between the target class distribution and the Uniform distribution. For each data point, we pick 5 random persons from the UMass dataset and then re-train the VGG face model with them. The line in Fig. 6(b) represents the linear regression over all data points. The figure shows that when the training set of the re-training phase becomes more non-Uniform, the target class distribution becomes even more non-Uniform.
Case Study: Speech Recognition
In [12], a speech recognition model for digits was re-trained to detect speech commands. Following the same experiment, a model is first pre-trained on the Pannous speech dataset [2], containing utterances of ten digits. Then, we randomly pick 5 classes from the speech command dataset [3] to re-train the model. 80% of the dataset is used for fine-tuning and 20% for inference. Due to the lack of space and the similarity of the results with the previous case study, we omit most experiments with similar results. We use a 2D CNN model with 3 building blocks, each of which contains convolutional layers, a ReLU activation, and a pooling layer, followed by 2 FC layers and a softmax layer at the end. The input is the Mel-Frequency Cepstral Coefficients (MFCC) of the wave files. Similar to the previous case study, we replace the SM layer and re-train the model by only tuning the last FC and SM layers.

Number of Target Classes: Table 5 shows the impact of the number of target classes on the accuracy of the model and the attack performance. Similar to the face recognition experiment, we start with a blank input (a 2D MFCC with 0 for all elements) and we use 70 and 0.1 for k and α, respectively. As expected, the accuracy drops when the number of target classes increases. Since the ten classes representing digits exist in both the pre-training dataset (Pannous [2]) and the re-training dataset (speech command [3]), these classes are much easier for the target model to re-train with high accuracy in comparison with other classes, such as the stop or left command. Hence, the re-trained model has more neuron connections helping classify the digit classes, which makes it harder both for the model to classify the other classes and for the proposed attack to craft adversarial inputs for the non-digit classes. That is why we observe a more dramatic decrease in accuracy and attack performance when the number of target classes increases.
Re-training Sample Size: Unlike the face recognition case study, in which most re-training classes have fewer than 100 samples, the speech command dataset [3] contains more than 2000 samples for each class. Hence, we conduct an experiment to study the effect of the re-training sample size on model and attack performance. We choose six classes (commands) that the pre-trained model was not trained on, i.e., the left, right, down, up, go, and stop speech commands. Table 6 shows the impact of the re-training set size on the model and attack performance. As expected, increasing the re-training set size improves the accuracy of the model. However, the accuracy of the re-trained model and the re-training set size have a negligible effect on the performance of the proposed attack. By comparing Table 5 and Table 6, we realize that the attack performance is directly affected by the number of target classes, but it is not significantly affected by the accuracy of the re-trained model.
Conclusion
In this paper, we develop an efficient brute force attack on transfer learning for deep neural networks. We illustrate that, due to the lack of sufficient non-linearity, an attacker can iteratively generate a set of inputs, each of which triggers only a single neuron at the final transferred layer, and quickly craft adversarial samples that trigger one class with high confidence. We assume that the attacker only knows the transferred model and its weights, and does not have access to the re-trained model, the re-training dataset, or the re-trained model's output. Our evaluations based on face recognition and speech recognition show that with a handful of attempts, the attacker can craft adversarial samples that can trigger all classes, despite the fact that the attacker does not know the re-trained model or the model's target classes. The target-agnostic feature of the attack allows the attacker to use the same set of crafted images for two different re-trained models and achieve high effectiveness when the two models use the same pre-trained model. The proposed target-agnostic attack reveals a fundamental challenge with transfer learning: because of the lack of a large dataset during re-training, which makes the re-trained part of the model fairly linear, a simple brute-force attack can be surprisingly effective. The issue can be mitigated by re-training more layers with a significantly larger dataset. This solution, however, contradicts the main purpose of transfer learning. Therefore, more research is needed to address this fundamental challenge.
Fig. 1. Typical approach for transfer learning.
Fig. 2. Transfer learning on VGG Face.
Fig. 3. Number of iterations vs. attack effectiveness.
Fig. 4. First column: crafted images starting from the random input. Second column: crafted images from the blank image. Third and fourth columns: the initial images and the crafted images from those initial images, respectively. Fifth column: a sample image from each class used for re-training.
Fig. 5. Effect of the number of re-training layers.
Fig. 6. Target class distribution.
Table 1. Imbalanced re-training set

# of target classes | Accuracy | # of attempts | Effectiveness (95%) | Effectiveness (99%)
5                   | 99.21%   | 63.29         | 93.52%              | 90.23%
10                  | 98.47%   | 264.80        | 91.14%              | 86.28%
15                  | 98.01%   | 451.45        | 90.41%              | 85.31%
20                  | 97.07%   | 2836          | 88.72%              | 82.93%
Table 2. Balanced (using undersampling) re-training set

# of target classes | Accuracy | # of attempts | Effectiveness (95%) | Effectiveness (99%)
5                   | 99.12%   | 48.25         | 91.68%              | 87.82%
10                  | 98.43%   | 149.97        | 88.87%              | 83.07%
15                  | 97.16%   | 323.36        | 87.79%              | 82.05%
20                  | 96.87%   | 413           | 87.17%              | 79.16%
Table 3. Impact of initial input on the attack

Initial input | # of attempts | Effectiveness (95%) | Effectiveness (99%)
Blank         | 18            | 98.37%              | 98.37%
Random        | 19            | 98.37%              | 97.22%
A face image  | 18            | 99.83%              | 99.19%
Table 4. Effect of number of new layers in the re-trained model

# of new layers | Accuracy | # of attempts | Effectiveness (95%) | Effectiveness (99%)
1               | 99.57%   | 48.25         | 91.68%              | 87.82%
2               | 98.24%   | 51.87         | 91.57%              | 87.45%
3               | 95.46%   | 257.26        | 87.45%              | 85.67%
Table 5. Effect of number of target classes on the proposed attack

# of target classes | Accuracy | # of attempts | Effectiveness (95%) | Effectiveness (99%)
5                   | 97.38%   | 37            | 100.00%             | 98.21%
10                  | 93.30%   | 114           | 95.80%              | 93.75%
15                  | 85.72%   | 812           | 92.22%              | 84.17%
Table 6. Effect of re-training set size

# of samples per class | Accuracy | # of attempts | Effectiveness (95%) | Effectiveness (99%)
50                     | 77.56%   | 13            | 97.48%              | 95.00%
100                    | 82.46%   | 17            | 97.21%              | 95.23%
200                    | 85.51%   | 21            | 98.25%              | 96.82%
1000                   | 89.89%   | 17            | 98.60%              | 97.64%
2000                   | 92.04%   | 17            | 98.60%              | 97.81%
References

Pannous speech recognition (2017), https://github.com/pannous/tensorflow-speech-recognition [Online; accessed 24-Mar-2019]
Target-agnostic attack implementation (2019), https://github.com/shrezaei/Target-Agnostic-Attack [Online; accessed 31-Mar-2019]
Alberti, M., Pondenkandath, V., Würsch, M., Bouillon, M., Seuret, M., Ingold, R., Liwicki, M.: Are you tampering with my data? arXiv preprint arXiv:1808.06809 (2018)
Carlini, N., Wagner, D.: Adversarial examples are not easily detected: Bypassing ten detection methods. In: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3-14. ACM (2017)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE (2017)
Chen, L.C., Yang, Y., Wang, J., Xu, W., Yuille, A.L.: Attention to scale: Scale-aware semantic image segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3640-3649 (2016)
Chen, X., Liu, C., Li, B., Lu, K., Song, D.: Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526 (2017)
Elsayed, G.F., Shankar, S., Cheung, B., Papernot, N., Kurakin, A., Goodfellow, I., Sohl-Dickstein, J.: Adversarial examples that fool both human and computer vision. arXiv preprint arXiv:1802.08195 (2018)
Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M., Thrun, S.: Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639), 115 (2017)
Ji, Y., Zhang, X., Ji, S., Luo, X., Wang, T.: Model-reuse attacks on deep learning systems. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 349-363. ACM (2018)
Jia, Y., Zhang, Y., Weiss, R., Wang, Q., Shen, J., Ren, F., Nguyen, P., Pang, R., Moreno, I.L., Wu, Y., et al.: Transfer learning from speaker verification to multispeaker text-to-speech synthesis. In: Advances in Neural Information Processing Systems, pp. 4485-4495 (2018)
Liao, C., Zhong, H., Squicciarini, A., Zhu, S., Miller, D.: Backdoor embedding in convolutional neural network models via invisible perturbation. arXiv preprint arXiv:1808.10307 (2018)
Liu, Y., Ma, S., Aafer, Y., Lee, W.C., Zhai, J., Wang, W., Zhang, X.: Trojaning attack on neural networks (2017)
Metzen, J.H., Kumar, M.C., Brox, T., Fischer, V.: Universal adversarial perturbations against semantic image segmentation. In: The IEEE International Conference on Computer Vision (ICCV) (2017)
Muñoz-González, L., Biggio, B., Demontis, A., Paudice, A., Wongrassamee, V., Lupu, E.C., Roli, F.: Towards poisoning of deep learning algorithms with back-gradient optimization. In: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 27-38. ACM (2017)
Parkhi, O.M., Vedaldi, A., Zisserman, A., et al.: Deep face recognition. In: BMVC, vol. 1, p. 6 (2015)
Rezaei, S., Liu, X.: Deep learning for encrypted traffic classification: An overview. arXiv preprint arXiv:1810.07906 (2018)
Rezaei, S., Liu, X.: How to achieve high classification accuracy with just a few labels: A semi-supervised approach using sampled packets. arXiv preprint arXiv:1812.09761 (2018)
Shafahi, A., Huang, W.R., Najibi, M., Suciu, O., Studer, C., Dumitras, T., Goldstein, T.: Poison frogs! Targeted clean-label poisoning attacks on neural networks. arXiv preprint arXiv:1804.00792 (2018)
Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M.K.: Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528-1540. ACM (2016)
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
Wang, B., Yao, Y., Viswanath, B., Zheng, H., Zhao, B.Y.: With great training comes great vulnerability: Practical attacks against transfer learning. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 1281-1297 (2018)
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al.: Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 (2016)
Xu, W., Qi, Y., Evans, D.: Automatically evading classifiers. In: Proceedings of the 2016 Network and Distributed Systems Symposium (2016)
239,024,909 | AXIOMATIC EXPLANATIONS FOR VISUAL SEARCH, RETRIEVAL, AND SIMILARITY LEARNING | Visual search, recommendation, and contrastive similarity learning power technologies that impact billions of users worldwide. Modern model architectures can be complex and difficult to interpret, and there are several competing techniques one can use to explain a search engine's behavior. We show that the theory of fair credit assignment provides a unique axiomatic solution that generalizes several existing recommendation-and metric-explainability techniques in the literature. Using this formalism, we show when existing approaches violate "fairness" and derive methods that sidestep these shortcomings and naturally handle counterfactual information. More specifically, we show existing approaches implicitly approximate second-order Shapley-Taylor indices and extend CAM, GradCAM, LIME, SHAP, SBSM, and other methods to search engines. These extensions can extract pairwise correspondences between images from trained opaque-box models. We also introduce a fast kernel-based method for estimating Shapley-Taylor indices that require orders of magnitude fewer function evaluations to converge. Finally, we show that these game-theoretic measures yield more consistent explanations for image similarity architectures. | [] | AXIOMATIC EXPLANATIONS FOR VISUAL SEARCH, RETRIEVAL, AND SIMILARITY LEARNING
Mark Hamilton ([email protected]), Scott Lundberg, Stephanie Fu, Lei Zhang, William T. Freeman
MIT, Microsoft, Google
AXIOMATIC EXPLANATIONS FOR VISUAL SEARCH, RETRIEVAL, AND SIMILARITY LEARNING
Published as a conference paper at ICLR 2022
Visual search, recommendation, and contrastive similarity learning power technologies that impact billions of users worldwide. Modern model architectures can be complex and difficult to interpret, and there are several competing techniques one can use to explain a search engine's behavior. We show that the theory of fair credit assignment provides a unique axiomatic solution that generalizes several existing recommendation- and metric-explainability techniques in the literature. Using this formalism, we show when existing approaches violate "fairness" and derive methods that sidestep these shortcomings and naturally handle counterfactual information. More specifically, we show existing approaches implicitly approximate second-order Shapley-Taylor indices and extend CAM, GradCAM, LIME, SHAP, SBSM, and other methods to search engines. These extensions can extract pairwise correspondences between images from trained opaque-box models. We also introduce a fast kernel-based method for estimating Shapley-Taylor indices that requires orders of magnitude fewer function evaluations to converge. Finally, we show that these game-theoretic measures yield more consistent explanations for image similarity architectures.
INTRODUCTION
Search, recommendation, retrieval, and contrastive similarity learning power many of today's machine learning systems. These systems help us organize information at scales that no human could match. The recent surge in million- and billion-parameter contrastive learning architectures for vision and language underscores the growing need to understand these classes of systems (Nayak, 2019; Radford et al., 2021; Caron et al., 2020). Like classifiers and regressors, contrastive systems face a key challenge: richer models can improve performance but hinder interpretability. In high-risk domains like medicine, incorrect search results can have serious consequences. In other domains, search engine bias can disproportionately and systematically hide certain voices (Mowshowitz & Kawaguchi, 2002; Diaz, 2008; Goldman, 2005).
Currently, there are several competing techniques to understand a similarity model's predictions (Zhu et al., 2019; Zheng et al., 2020; Dong et al.; Selvaraju et al., 2017; Vaswani et al., 2017). However, there is no agreed "best" method and no formal theory describing an "optimal" search explanation method. We show that the theory of fair credit assignment provides a uniquely determined and axiomatically grounded approach for "explaining" a trained model's similarity judgements. In many cases, existing approaches are special cases of this formalism. This observation allows us to design variants of these methods that better satisfy the axioms of fair credit assignment and can handle counterfactual or relative explanations. Though we explore this topic through the lens of visual search, we note that these techniques could also apply to text, tabular, or audio search systems.
Figure 1: Second order search interpretation methods yield a dense correspondence between image locations (last two columns). CAM (second column) is a particular case of Shapley value approximation, and we generalize it to yield dense correspondences (last column).

This work identifies two distinct classes of search engine explainability methods. "First order" approaches highlight the most important pixels that contribute to the similarity of objects, and "second order" explanations provide a full correspondence between the parts of the query and retrieved images. We relate first order interpretations to existing theory on classifier explainability through a generic function transformation, as shown in the third column of Figure 1. We find that second order explanations correspond to a uniquely specified generalization of the Shapley values (Sundararajan et al., 2020) and are equivalent to projecting Harsanyi Dividends onto low-order subsets (Harsanyi, 1963). We use this formalism to create new second-order generalizations of Class Activation Maps (Zhou et al., 2016), GradCAM (Selvaraju et al., 2017), LIME (Ribeiro et al., 2016), and SHAP (Lundberg & Lee, 2017). Our contributions generalize several existing methods, illustrate a rich mathematical structure connecting model explainability and cooperative game theory, and allow practitioners to understand search engines with greater nuance and detail. We include a short video detailing the work at https://aka.ms/axiomatic-video. In summary we:
• Present the first uniquely specified axiomatic framework for model-agnostic search, retrieval, and metric learning interpretability using the theory of Harsanyi dividends.
• Show that our framework generalizes several existing model explanation methods (Zhou et al., 2016; Selvaraju et al., 2017; Zhu et al., 2019; Ribeiro et al., 2016) to yield dense pairwise correspondences between images and handle counterfactual information.
• Introduce a new kernel-based approximator for Shapley-Taylor indices that requires about 10× fewer function evaluations.
• Show that our axiomatic approaches provide more faithful explanations of image similarity on the PascalVOC and MSCoCo datasets.
BACKGROUND
This work focuses on search, retrieval, metric learning, and recommendation architectures. Often, these systems use similarity between objects or learned features (Bengio et al., 2013) to rank, retrieve, or suggest content (Bing, 2017;Su & Khoshgoftaar, 2009;Chopra et al., 2005;Radford et al., 2021). More formally, we refer to systems that use a distance, relevance, or similarity function of the form: d : X × Y → R to quantify the relationship between items from sets X and Y. In search and retrieval, X represents the space of search queries and Y represents the space of results, the function d assigns a relevance to each query result pair. Without loss of generality, we consider d as a "distance-like" function where smaller values indicate more relevance. The expression arg min y∈Y d(x, y) yields the most relevant result for a query x ∈ X .
Specializing this notion yields a variety of different kinds of ML systems. If X = Y = Range(N (·)) where N is an image featurization network such as ResNet50 (He et al., 2016), the formalism yields a visual search engine or "reverse image search". Though this work focuses on visual search, we note that if X is the space of character sequences and Y is the space of webpages, this represents web search. In recommendation problems, X are users and Y are items, such as songs or news articles. In this work we aim to extract meaningful "interpretations" or "explanations" of the function d.
MODEL INTERPRETABILITY
The Bias-Variance trade-off (Kohavi et al., 1996) affects all machine learning systems and governs the relationship between a model's expressiveness and generalization ability. In data-rich scenarios, a model's bias dominates generalization error and increasing the size of the model class can improve performance. However, increasing model complexity can degrade model interpretability because added parameters can lose their connection to physically meaningful quantities. This affects not only classification and regression systems, but search and recommendation architectures as well.

Figure 2: Comparison of first-order search interpretation methods, which highlight pixels that contribute to similarity in red. Integrated Gradients (on pixels) struggles because well-trained classifiers are invariant to minor pixel changes and have uninformative gradients.
For example, the Netflix-prize-winning "BellKor" algorithm (Koren, 2009) boosts and ensembles several different methods, making it difficult to interpret through model parameter inspection alone.
To tackle these challenges, some works introduce model classes that are naturally interpretable (Nori et al., 2019;Hastie & Tibshirani, 1990). Alternatively, other works propose model-agnostic methods to explain the predictions of classifiers and regressors. Many of these approaches explain the local structure around a specific prediction. Lundberg & Lee (2017) show that the Shapley value (Shapley, 1951), a measure of fair credit assignment, provides a unique and axiomatically characterized solution to classifier interpretability (SHAP). Furthermore, they show that Shapley values generalize LIME, DeepLIFT (Shrikumar et al., 2017), Layer-Wise Relevance Propagation (Bach et al., 2015), and several other methods (Štrumbelj & Kononenko, 2014;Datta et al., 2016;Lipovetsky & Conklin, 2001;Saabas, 2014). Many works in computer vision use an alternative approach called Class Activation Maps (CAMs). CAM projects the predicted class of a deep global average pooled (GAP) convolutional network onto the feature space to create a low resolution heatmap of class-specific network attention. GradCAM (Selvaraju et al., 2017) generalizes CAM to architectures other than GAP and can explain a prediction using only a single network evaluation. In Section 4 we show that CAM, GradCAM, and their analogue for search engine interpretability, Zhu et al. (2019), are also unified by the Shapley value and its second order generalization, the Shapley-Taylor index.
FAIR CREDIT ASSIGNMENT AND THE SHAPLEY VALUE
Shapley values provide a principled and axiomatic framework for classifier interpretation. We briefly overview Shapley values and point readers to Molnar (2020) for more detail. Shapley values originated in cooperative game theory as the only fair way to allocate the profit of a company to its employees based on their contributions. To formalize this notion we define a "coalition game" as a set $N$ of $|N|$ players and a "value" function $v : 2^N \to \mathbb{R}$. In cooperative game theory, this function $v$ represents the expected payout earned by a cooperating coalition of players. Shapley (1951) shows that the unique, fair credit assignment to each player, $\phi_v(i \in N)$, can be calculated as:
$$\phi_v(i) := \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \, (|N| - |S| - 1)!}{|N|!} \left( v(S \cup \{i\}) - v(S) \right) \tag{1}$$
Informally, this equation measures the average increase in value that a player $i$ brings to a coalition $S$ by weighting each increase, $v(S \cup \{i\}) - v(S)$, by the number of ways this event could have happened during the formation of the "grand coalition" $N$. We note that this assignment, $\phi_v$, is the unique assignment that satisfies four reasonable properties: symmetry under player re-labeling, no credit assignment to dummy players, linearity (or its alternative, monotonicity), and efficiency, which states that Shapley values should sum to $v(N) - v(\emptyset)$ (Young, 1985). Intuitively, these axioms require that a fair explanation should treat every feature equally (Symmetry), should not assign importance to features that are not used (Dummy), should behave linearly when the value function is transformed (Linear), and should sum to the function's value (Efficiency).
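As a concrete reference point, Equation 1 can be evaluated exactly for small games; the following sketch (our own, exponential in the number of players) makes the Symmetry, Dummy, and Efficiency axioms easy to verify numerically:

```python
from itertools import combinations
from math import factorial

def shapley_values(v, n):
    """Exact Shapley values (Eq. 1) for a value function v: set -> float
    over players {0, ..., n-1}. Exponential in n, so only for small games."""
    players = set(range(n))
    phi = [0.0] * n
    for i in players:
        for size in range(n):
            for S in combinations(players - {i}, size):
                S = set(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi

# Usage: a toy game where players 0 and 1 each add 1, and player 2 is a dummy.
print(shapley_values(lambda S: float(len(S & {0, 1})), 3))  # ≈ [1.0, 1.0, 0.0]
```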
Shapley Values provide a principled way to explain the predictions of a machine learning model. To connect this work to model interpretability, we can identify the "features" used in a model as the "players" and interpret the value function, $v(S)$, as the expected prediction of the model when features $N \setminus S$ are replaced by values from a "background" distribution. This background distribution allows for "counterfactual" or relative explanations (Goyal et al., 2019).

Figure 3: Explanations relative to a background distribution show why a result is better than an alternative. When asked why the best result (lower left) was better than the second best result (top right) our method correctly selects the player. The method is not guaranteed to satisfy the axioms but is more "efficient".
We rely on several works to extend Shapley values to more complex interactions. Harsanyi (1963) generalized the Shapley value by introducing a "dividend" that, when split and distributed among players, yields the Shapley values. Owen (1972) introduces an equivalent way to extend Shapley values using a multi-linear extension of the game's characteristic function. Sundararajan et al. (2020) introduce the Shapley-Taylor index and show it is equivalent to the Lagrangian remainder of Owen's multi-linear extension. Integrated Hessians (Janizek et al., 2020) enable estimation of a second-order variant of the Aumann-Shapley values, and we use this approach to create a more principled second-order interpretation method for differentiable search engines.
UNIFYING FIRST-ORDER SEARCH INTERPRETATION TECHNIQUES
Though there is a considerable body of work on opaque-box classifier interpretability, opaque-box search engine interpretability has only recently been investigated (Singh & Anand, 2019;Zhu et al., 2019;Zheng et al., 2020). We introduce an approach to transform opaque and grey-box classification explainers into search engine explainers, allowing us to build on the rich body of existing work for classifiers. More formally, given a similarity function d : X × Y → R and elements x ∈ X and y ∈ Y we can find the "parts" of y that most contribute to the similarity by computing the Shapley values for the following value function:
$$v_1(S) : 2^N \to \mathbb{R} := d(x, \mathrm{mask}(y, S)) \tag{2}$$
Where the function $\mathrm{mask}(\cdot, \cdot) : \mathcal{Y} \times 2^N \to \mathcal{Y}$ replaces "parts" of $y$ indexed by $S$ with components from a background distribution. Depending on the method, "parts" could refer to image superpixels, small crops, or locations in a deep feature map. This formula allows us to lift many existing approaches to search engine interpretability. For example, let $\mathcal{X}$ and $\mathcal{Y}$ represent the space of pixel representations of images. Let the grand coalition, $N$, index a collection of superpixels from the retrieved image $y$. Let $\mathrm{mask}(y, S)$ act on an image $y$ by replacing the $S$ superpixels with background signal. With these choices, the formalism provides a search-engine specific version of ImageLIME and KernelSHAP. Here, Shapley values for each $i \in S$ measure the impact of the corresponding superpixel on the similarity function. If we replace superpixels with hierarchical squares of pixels we arrive at Partition SHAP (Lundberg). We can also switch the order of the arguments to get an approach for explaining the query image's impact on the similarity. In Figure 2 we qualitatively compare how methods derived from our approach compare to two existing approaches.

Proposition 4.1 Let $\mathcal{X}$ and $\mathcal{Y}$ be spaces of deep feature maps and let $d$ compute the dot product of globally average pooled (GAP) features. Let the grand coalition, $N$, index into the spatial coordinates of the retrieved image features $y$, and let $\mathrm{mask}(y, S)$ act on the feature map $y$ by replacing the corresponding features with a background feature map $b$. Then:

$$\phi_{v_1}((h, w) \in N) = \frac{1}{HW} \sum_c \mathrm{GAP}(x)_c \, (y_{chw} - b_{chw}) \tag{3}$$
Where GAP refers to global average pooling. We defer proof of this and other propositions to the Supplement. The result of this proposition mirrors the form of VEDML (Zhu et al., 2019), but with an added term to handle background distributions. These extra terms broaden the applicability of VEDML, and we demonstrate their effect on explanations in Figure 3. In particular, we explain why two guitar players are similar in general (no background distribution) and relative to the second-best result of a guitar. Without a background, the explanation focuses on the guitar. However, when the explanation is relative to an image of a guitar, the explanation focuses instead on the "tie-breaking" similarities, like the matching player. With counterfactual queries one can better understand a model's rationale behind relative similarity judgements, and this can help in domains such as search engine optimization and automated medical diagnosis. We refer to Equation 3 as the Search Activation Map (SAM), in analogy with the Class Activation Map. We note that in non-GAP architectures, VEDML requires Taylor-approximating nonlinear components. This heuristic corresponds to estimating the Shapley values for a linear approximation of the true value function. For nonlinear architectures, such as those that use cosine similarity, SAM diverges from Shapley value theory and hence violates its axioms. We can remedy this by using a kernel-based Shapley value approximator (Lundberg & Lee, 2017), and we refer to this approach as Kernel SAM.
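A minimal NumPy sketch of the Search Activation Map of Equation 3 (the function and argument names are ours):

```python
import numpy as np

def search_activation_map(x_feats, y_feats, b_feats=None):
    """Eq. 3: heatmap over the retrieved image's feature locations. x_feats,
    y_feats, b_feats are (C, H, W) feature maps for the query, the retrieved
    image, and the background; b_feats defaults to zeros (no background)."""
    C, H, W = y_feats.shape
    if b_feats is None:
        b_feats = np.zeros_like(y_feats)
    gap_x = x_feats.mean(axis=(1, 2))                       # GAP(x)_c
    sam = np.einsum("c,chw->hw", gap_x, y_feats - b_feats)  # sum over channels
    return sam / (H * W)                                    # (H, W) heatmap
```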
Though the Shapley value framework unifies several methods for search engine interpretability, we note that the popular technique GradCAM does not align with Shapley value theory when applied to our feature-based value function (though it does align with Shapley values for GAP classifiers).
To connect this approach to the theory of fair credit assignment, we show that GradCAM closely resembles Integrated Gradients (IG) (Sundararajan et al., 2017b), an approximator to the Aumann-Shapley values (Aumann & Shapley, 2015):
Proposition 4.2 Let $v(S) : [0, 1]^N \to \mathbb{R} := f(\mathrm{mask}(x, S))$ represent soft masking of the spatial locations of a deep feature map $x$ with the vector of zeros and applying a differentiable function $f$. GradCAM is equivalent to Integrated Gradients approximated with a single sample at $\alpha = 1$ only if the function $f$ has spatially invariant derivatives:

$$\forall (h, w), (i, j) \in N : \quad \frac{\partial f(x)}{\partial x_{chw}} = \frac{\partial f(x)}{\partial x_{cij}}$$

In the typical case where $f$ does not have spatially invariant derivatives, GradCAM violates the dummy axiom (see Section 2.2) and does not represent an approximation of Integrated Gradients.
Where α refers to the parameter of IG that blends background and foreground samples. We note that the Aumann-Shapley values generalize the Shapley value to games where infinite numbers of players can join finitely many "coalitions". These values align with Shapley values for linear functions but diverge in the nonlinear case. Proposition 4.2 also shows that in general GradCAM is sub-optimal and can be improved by considering Integrated Gradients on the feature space. We refer to this modification to GradCAM as Integrated Gradient Search Activation Maps or "IG SAM". We also note that this modification can be applied to classifier-based GradCAM to yield a more principled classifier interpretation approach. We explore this and show an example of GradCAM violating the dummy axiom in the Supplement.
SECOND-ORDER SEARCH INTERPRETATIONS
Visualizing the pixels that explain a similarity judgement provides a simple way to inspect where a retrieval system is attending to. However, this visualization is only part of the story. Images can be similar for many different reasons, and a good explanation should clearly delineate these independent reasons. For example, consider the pair of images in the left column of Figure 6. These images show two similar scenes of people playing with dogs, but in different arrangements. We seek not just a heatmap highlighting similar aspects, but a data-structure capturing how parts of the query image correspond to parts of a retrieved image. To this end we seek to measure the interaction strength between areas of query and retrieved images as opposed to the effect of single features. We refer to this class of search and retrieval explanation methods as "second-order" methods due to their relation with second-order terms in the Shapley-Taylor expansion in Section 5.1.
HARSANYI DIVIDENDS
To capture the notion of interactions between query and retrieved images, we must consider credit assignments to coalitions of features. Harsanyi (1963) formalizes this notion with a unique and axiomatically specified way to assign credit, or "Harsanyi Dividends," to every possible coalition, $S$, of $N$ players in a cooperative game using the formula:
$$d_v(S) := \begin{cases} v(S) & \text{if } |S| = 1 \\ v(S) - \sum_{T \subsetneq S} d_v(T) & \text{if } |S| > 1 \end{cases} \tag{4}$$
These dividends provide a detailed view of the function's behavior at every coalition. In particular, Harsanyi (1963) shows that Shapley values arise from distributing these dividends evenly across members of the coalitions, a process we refer to as "projecting" the dividends down. In this work we seek a second-order analog of the Shapley values, so we generalize the notion of sharing these dividends between individuals to sharing these dividends between sub-coalitions. This computation re-derives the recently proposed Shapley-Taylor Indices (Sundararajan et al., 2020), which generalize the Shapley values to coalitions of size $k$ using the discrete derivative operator. More specifically, by sharing dividends, we can alternatively express Shapley-Taylor values for coalitions $|S| = k$ as:
$$\phi_v^k(S) = \sum_{T : S \subseteq T} \frac{d_v(T)}{\binom{|T|}{|S|}} \tag{5}$$
Which states that the Shapley-Taylor indices arise from projecting Harsanyi dividends onto the $k$-th order terms. We note that this interpretation of the Shapley-Taylor indices is slightly more flexible than that of Sundararajan et al. (2020), as it allows one to define "jagged" fair credit assignments over just the coalitions of interest. Equipped with the Shapley-Taylor indices, $\phi_v^k$, we can now formulate a value function for "second-order" search interpretations. As in the first order case, consider two spaces $\mathcal{X}$, $\mathcal{Y}$ equipped with a similarity function $d$. We introduce the second-order value function:
$$v_2(S) : 2^N \to \mathbb{R} := d(\mathrm{mask}(x, S), \mathrm{mask}(y, S)) \tag{6}$$
Where the grand coalition, $N = L_q \cup L_r$, indexes "locations" in both the query and retrieved images. These "locations" can represent either superpixels or coordinates in a deep feature map. Our challenge now reduces to computing Shapley-Taylor indices for this function.
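Before turning to a fast approximation, Equations 4 and 5 can be implemented directly for small games; the sketch below (our own, exponential in $|N|$) computes dividends recursively and projects them onto a pair of players:

```python
from itertools import combinations
from math import comb

def powerset(players):
    for r in range(1, len(players) + 1):
        yield from map(frozenset, combinations(players, r))

def harsanyi_dividends(v, players):
    """Eq. 4: dividends for all non-empty coalitions, computed in size order."""
    d = {}
    for S in sorted(powerset(players), key=len):
        d[S] = v(S) - sum(d[T] for T in d if T < S)  # T a proper subset of S
    return d

def shapley_taylor(d, S):
    """Eq. 5: project dividends onto the coalition S (|S| = 2 for second order)."""
    S = frozenset(S)
    return sum(dv / comb(len(T), len(S)) for T, dv in d.items() if S <= T)

# Usage: a purely pairwise game, v(S) = 1 iff {0, 1} is contained in S.
d = harsanyi_dividends(lambda S: float({0, 1} <= S), [0, 1, 2])
print(shapley_taylor(d, {0, 1}))  # -> 1.0: all credit lands on the interacting pair
```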
A FAST SHAPLEY-TAYLOR APPROXIMATION KERNEL
Though Harsanyi Dividends and Shapley-Taylor indices provide a robust way to allocate credit, they are difficult to compute. The authors of the Shapley-Taylor indices provide a sampling-based approximation, but this requires estimating each interaction term separately and scales poorly as dimensionality increases. To make this approach more tractable for high dimensional functions, we draw a parallel to the unification of LIME with Shapley values through a linear regression weighting kernel. In particular, one can efficiently approximate Shapley values by randomly sampling coalitions, evaluating the value function, and fitting a weighted linear map from coalition vectors to function values. We find that this connection between Shapley values and weighted linear models naturally lifts to a weighted quadratic estimation problem in the "second-order" case. In particular, we introduce a weighting kernel for second order Shapley-Taylor indices:
$$\Lambda(S) = \frac{|N| - 1}{\binom{|N|}{|S|} \binom{|S|}{2} \left( |N| - |S| \right)} \tag{7}$$
Using this kernel, one can instead sample random coalitions, evaluate $v$, and aggregate the information into a weighted quadratic model with a term for each distinct coalition $|S| \le 2$. This allows one to approximate all Shapley-Taylor indices of $k = 2$ with a single sampling procedure, and often requires 10× fewer function evaluations to achieve the same estimation accuracy. We show this speedup in Figure 5 on randomly initialized 15-dimensional deep networks. A detailed description of this and other experiments in this work is in the supplement. We find that one can further speed up the method by directly sampling from the induced distribution (Kernel-Direct) as opposed to randomly sampling coalitions and calculating weights (Kernel-Weighting). This direct sampling can be achieved by first sampling the size of the coalition from $p(s) \propto (|N| - 1)/\left(\binom{s}{2}(|N| - s)\right)$ and then randomly sampling a coalition of that size. When our masking function operates on super-pixels, we refer to this as the second-order generalization of Kernel SHAP. This also gives insight into the proper form for a second-order generalization of LIME. In particular we add L1 regularization (Tibshirani, 1996) and replace our kernel with a local similarity, $\Lambda(S) = \exp(-\lambda \| \mathrm{mask}(x, S); \mathrm{mask}(y, S) - x; y \|_2^2)$, where ";" represents concatenation, to create a higher-order analogue of LIME. Finally we note that certain terms of the kernel are undefined due to the presence of $\binom{s}{2}$ and $|N| - |S|$ in the denominator. These "infinite" weight terms encode hard constraints in the linear system and correspond to the efficiency axiom. In practice we enumerate these terms and give them a very large weight ($10^8$) in our regression. We reiterate that our kernel approximator converges to the same, uniquely-defined values as prior sampling approaches but requires significantly fewer function evaluations.
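A sketch of the direct-sampling (Kernel-Direct) estimator described above; the regression setup mirrors the text (hard constraints receive weight $10^8$), but the helper names and defaults are ours, and the value function must also be defined on the empty set and the grand coalition:

```python
import numpy as np
from itertools import combinations
from math import comb

def kernel_shapley_taylor(v, n, num_samples=2048, big=1e8,
                          rng=np.random.default_rng(0)):
    """Estimate all second-order Shapley-Taylor indices with one weighted
    quadratic regression. v maps a set of player indices to a float."""
    sizes = np.arange(2, n)                                  # sizes with finite weight
    p = (n - 1) / (np.array([comb(s, 2) for s in sizes]) * (n - sizes))
    p /= p.sum()                                             # p(s) from the text

    masks, weights = [], []
    # "Infinite"-weight coalitions: empty set, singletons, grand coalition.
    for S in [()] + [(i,) for i in range(n)] + [tuple(range(n))]:
        masks.append(S)
        weights.append(big)
    for _ in range(num_samples):                             # direct sampling
        s = rng.choice(sizes, p=p)
        masks.append(tuple(rng.choice(n, size=s, replace=False)))
        weights.append(1.0)

    pairs = list(combinations(range(n), 2))
    X = np.zeros((len(masks), 1 + n + len(pairs)))           # intercept + linear + pairwise
    y = np.array([v(set(S)) for S in masks])
    for r, S in enumerate(masks):
        S = set(S)
        X[r, 0] = 1.0
        for i in S:
            X[r, 1 + i] = 1.0
        for c, (i, j) in enumerate(pairs):
            X[r, 1 + n + c] = float(i in S and j in S)
    w = np.sqrt(np.array(weights))                           # weighted least squares
    beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return {pair: beta[1 + n + c] for c, pair in enumerate(pairs)}
```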
SECOND-ORDER SEARCH ACTIVATION MAPS
In the first-order case, CAM and its search engine generalization, Search Activation Maps, arise naturally from the Shapley values of our first-order value function, Equation 2. To derive a second order generalization of SAM we now look to the Shapley-Taylor indices of our second order value function, Equation 6, applied to the same GAP architecture described in Proposition 4.1.
Proposition 5.1 Let the spaces $\mathcal{X}$, $\mathcal{Y}$ and function $d$ be as in Proposition 4.1. Let the grand coalition, $N$, index into the spatial coordinates of both the query image features $x$ and retrieved image features $y$. Let the function $\mathrm{mask}(y, S)$ act on a feature map $y$ by replacing the corresponding features with a background feature map $a$ for query features and $b$ for retrieved features. Then:

$$\phi_{v_2}(\{(h, w) \in L_q, (i, j) \in L_r\}) = \frac{1}{H^2 W^2} \sum_c \left( x_{chw} y_{cij} - a_{chw} y_{cij} - x_{chw} b_{cij} + a_{chw} b_{cij} \right) \tag{8}$$

We note that the first term of the summation corresponds to the frequently used correlation layer (Fischer et al., 2015b; Sun et al., 2020; Wang et al., 2020a; Chen et al., 2020c) and generalizes the approach of Zhu et al. (2019). To extend the method in a principled way we use our second-order kernel approximator and refer to this as second-order KSAM. We also introduce a generalization using a higher order analogue of Integrated Gradients, Integrated Hessians (Janizek et al., 2020), applied to our feature maps. We refer to this as second-order IGSAM. In Section A.3 of the Supplement we prove that this approach is proportional to the Shapley-Taylor indices for the GAP architecture. We can visualize these second-order explanations by aggregating these Shapley-Taylor indices into a matrix with query image locations as rows and retrieved locations as columns. Using this matrix, we can "project" signals from a query to a retrieved image. We show a few examples of attention projection using our second-order SAM in Figure 4.
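Since the four terms of Equation 8 factor as $(x - a)^\top (y - b)$ per channel, the full second-order SAM tensor can be computed with one einsum; a minimal sketch with our own naming:

```python
import numpy as np

def second_order_sam(x_feats, y_feats, a_feats=None, b_feats=None):
    """Eq. 8: Shapley-Taylor index between every query location (h, w) and
    retrieved location (i, j). Feature maps are (C, H, W); a_feats / b_feats
    are background feature maps (zeros if None)."""
    C, H, W = x_feats.shape
    a = np.zeros_like(x_feats) if a_feats is None else a_feats
    b = np.zeros_like(y_feats) if b_feats is None else b_feats
    # x*y - a*y - x*b + a*b = (x - a) * (y - b), summed over channels.
    corr = np.einsum("chw,cij->hwij", x_feats - a, y_feats - b)
    return corr / (H ** 2 * W ** 2)  # (H, W, H, W) query-to-retrieved correspondence
```

Flattening the first two and last two axes yields the matrix with query locations as rows and retrieved locations as columns used for attention projection.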
EXPERIMENTAL EVALUATION
First Order Evaluation Evaluating the quality of an interpretability method requires careful experimental design and is independent from what "looks good" to the human eye. If a model explanation method produces "semantic" connections between images, it should be because the underlying model is sensitive to these semantics. As a result, we adopt the evaluation strategy of Lundberg & Lee (2017), which measures how well the model explanation approximates the expected influence of individual features. In particular, these works calculate each feature's importance, replace the top n% of features with background signal, and measure the effect on the function. A good model interpretability method should identify the most important features, and hence their replacement should cause the largest expected change in the function. We refer to this metric as the "Faithfulness" of an interpretation measure, as it directly measures how well an interpretation method captures the behavior of the underlying model. Figure 7 in the Supplement diagrams this process for clarity. In our experiments we blur the top 30% of image pixels to compute faithfulness. For those methods that permit it, we also measure how much the explanation violates the efficiency axiom. In particular, we compare the sum of explanation coefficients with the value of $v(N) - v(\emptyset)$ and refer to this as the "Inefficiency" of the method.
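A sketch of the first-order faithfulness computation, assuming an (H, W, C) retrieved image, a distance-like similarity function, and a caller-supplied blur; all names are illustrative:

```python
import numpy as np

def first_order_faithfulness(d, x_img, y_img, attribution, blur_fn, top_frac=0.30):
    """Blur the top 30% of retrieved-image pixels ranked by `attribution`
    (an (H, W) importance map) and measure the change in d(query, retrieved)."""
    H, W = attribution.shape
    k = int(top_frac * H * W)
    top = np.argpartition(attribution.ravel(), -k)[-k:]  # most important pixels
    mask = np.zeros(H * W, dtype=bool)
    mask[top] = True
    mask = mask.reshape(H, W)
    censored = np.where(mask[..., None], blur_fn(y_img), y_img)
    # For a distance-like d, censoring important pixels should widen this gap.
    return d(x_img, censored) - d(x_img, y_img)
```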
Second Order Evaluation
In the second-order case we adopt the evaluation strategy of Janizek et al. (2020), which introduces an analogous second-order faithfulness measure. In particular, we measure how well model explanations approximate the expected interaction between two features. To achieve this, we select an object from the query image, use the second-order explanation to find the corresponding object in the retrieved image, and censor all but these two objects. We measure the new similarity as a measure of Faithfulness and illustrate this process in Figure 6. We additionally quantify the inefficiency of several second-order methods as well as their effectiveness for semantic segmentation label propagation. In particular, we measure how well the explanation method can project a source object onto a target object. We treat this as a binary segmentation problem and measure the mean intersection over union (mIoU) of the projected object with respect to the true object mask. We note that mIoU is not a direct measurement of interpretation quality, but it can be useful for those intending to use model-interpretation methods for label propagation (Ahn et al., 2019; Wang et al., 2020b). These results demonstrate that axiomatically grounded model explanation methods such as IG SAM could offer improvement on downstream tasks. Because human evaluations introduce biases such as a preference for compact or smooth explanations, we consider Mechanical Turk (Paolacci et al., 2010) studies outside the scope of this work.

Results In Table 1 and Table 4 of the Supplement we report experimental results for Pascal VOC and MSCoCo respectively. We evaluate across visual search engines created from four different backbone networks: DenseNet121 (Huang et al., 2017), MoCo V2, ResNet50 (He et al., 2016), and VGG11 (Simonyan & Zisserman, 2014), using cosine similarity on GAP features. As baselines we include VESM, SBSM, and SAM, which generalizes Zhu et al. (2019). We note that SBSM was not originally presented as a second-order method, and we describe how it can be lifted to this higher-order setting in Section A.11 of the Supplement. We also evaluate several existing classifier explanation approaches applied to our search explanation value functions, such as Integrated Gradients (Sundararajan et al., 2017a) on image pixels, Partition SHAP (Lundberg), LIME, Kernel SHAP (KSHAP), and GradCAM (GCAM) on deep feature maps (Selvaraju et al., 2017). For second-order variants of LIME and SHAP we used the local weighting kernel and our Shapley-Taylor approximation kernel from Section 5.2. Overall, several key trends appear. First, Shapley and Aumann-Shapley based approaches tend to be the most faithful and efficient methods, but at the price of longer computation time. One method that strikes a balance between speed and quality is our Integrated Gradient generalization of CAM, which has high faithfulness, low inefficiency, and only requires a handful of network evaluations ($\sim 10^2$). Furthermore, grey-box feature interpretation methods like SAM and IG SAM tend to perform better for label propagation. Finally, our methods beat existing baselines in several different categories and help to complete the space of higher-order interpretation approaches. We point readers to Section A.2 for additional details, compute information, and code.
Datasets We evaluate our methods on the Pascal VOC (Everingham et al., 2010) and MSCoCo (Caesar et al., 2018) semantic segmentation datasets. To compute first- and second-order faithfulness we mine pairs of related images with shared object classes. We use the MoCo V2 unsupervised image representation method to featurize the training and validation sets. For each image in the validation set we choose a random object from the image and find the training image that contains an object of the same class (Hamilton et al., 2020).
CONCLUSION
In this work we have presented a uniquely specified and axiomatic framework for model-agnostic search, retrieval, and metric learning interpretability using the theory of Harsanyi dividends. We characterize search engine interpretability methods as either "first" or "second" order methods depending on whether they extract the most important areas or pairwise correspondences, respectively. We show that Shapley values of a particular class of value functions generalize many first-order methods, and this allows us to fix issues present in existing approaches and extend these approaches to counterfactual explanations. For second-order methods we show that Shapley-Taylor indices generalize the work of Zhu et al. (2019), and we use our framework to introduce generalizations of LIME, SHAP, and GradCAM. We apply these methods to extract image correspondences from opaque-box similarity models, a feat not yet presented in the literature. To accelerate the estimation of higher-order Shapley-Taylor indices, we contribute a new weighting kernel that requires 10x fewer function evaluations. Finally, we show this game-theoretic formalism yields methods that are more "faithful" to the underlying model and better satisfy efficiency axioms across several visual similarity methods.
A APPENDIX
A.1 VIDEO AND CODE
We include a short video description of our work at https://aka.ms/axiomatic-vdieo.
We also provide training and evaluation code at https://aka.ms/axiomatic-code.

Models: Our evaluation experiments use visual similarity systems built from "backbone" networks that featurize images and compare their similarity using cosine distance. We consider both supervised backbones and contrastive unsupervised backbones. In particular, ResNet50 (He et al., 2016), VGG11 (Simonyan & Zisserman, 2014), and DenseNet121 (Huang et al., 2017) are trained with human classification annotations from the ImageNet dataset (Deng et al., 2009), and MoCo V2 is trained using unsupervised contrastive learning on ImageNet. We use torchvision (Marcel & Rodriguez, 2010) based model implementations and pre-trained weights, except for MoCo V2 which we download from He & Wu (2021) (800 epoch model). For the kernel convergence experiments in Figure 5 we use randomly initialized three-layer deep networks with Glorot (Glorot et al., 2011) initialization, rectified linear unit activations, and a 20-dimensional hidden layer. We note that the functional form is not of much importance for these experiments so long as the function is nonlinear and non-quadratic. We provide an additional example in Figure 10 on random 15-dimensional Boolean functions formed by enumerating and summing all possible variable products and weighting each by a uniform coefficient between 0 and 10.
Data: For evaluations within Table 1 we use the Pascal VOC (Everingham et al., 2010) dataset. In particular, we form a paired image dataset by using MoCo V2 to featurize the training and validation sets. All experiments use images that have been bi-linearly resized to 224 x 224 pixels. For each image in the Pascal VOC validation set we choose a random segmentation class that contains over 5% of image pixels. We then find each validation image's closest "Conditional Nearest Neighbor" (Hamilton et al., 2020) from the images of the training set of the chosen segmentation class. We use cosine similarity between MoCo V2 deep features to find nearest neighbors. With this dataset of pairs, we can then compute our first- and second-order evaluation metrics. We provide instructions for downloading the pairs of images in the attached code. We note that our approach for selecting pairs of images with matching segmentation labels allows for measuring Faithfulness and success in label propagation as measured by mIoU.
Metrics: Our attached code contains implementations of all metrics for precision, but we include descriptions of the metrics here for clarity. To measure first-order faithfulness, we take a given validation image and training image from our dataset of paired images and compute the first-order heat-map over the validation image. We then blur the top 30% of pixels by blurring the image with a 25 x 25 pixel blur kernel and replacing the top 30% of original image pixels with those from the blurred image. The drop in cosine similarity between the original unblurred pair and the pair formed by the unblurred training image and the blurred validation image is the first-order faithfulness. We illustrate our first-order evaluation strategy in Figure 7.
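A minimal PyTorch sketch of this first-order faithfulness computation follows; the Gaussian form of the 25 x 25 blur and all helper names are our assumptions, not the released code.

```python
import torch
import torchvision.transforms.functional as TF

def first_order_faithfulness(model, query, retrieved, heatmap,
                             top_frac=0.3, ksize=25):
    """Blur the top `top_frac` most-important query pixels (per `heatmap`) and
    measure the resulting drop in cosine similarity. `model` maps a (1, C, H, W)
    image to a feature vector; `query`, `retrieved` are (C, H, W) tensors."""
    blurred = TF.gaussian_blur(query, kernel_size=ksize)  # blur type assumed Gaussian
    k = int(top_frac * heatmap.numel())
    thresh = heatmap.flatten().topk(k).values.min()
    mask = (heatmap >= thresh).unsqueeze(0)               # broadcast over channels
    censored = torch.where(mask, blurred, query)
    sim = torch.nn.functional.cosine_similarity
    base = sim(model(query[None]), model(retrieved[None]))
    return (base - sim(model(censored[None]), model(retrieved[None]))).item()
```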
For our second-order evaluation, we use the ground truth semantic segmentation mask of the training image as a "query" attention signal. We then use the second-order interpretation methods to "project" this attention to the "retrieved" validation image. We censor all but the most-attended pixels in the retrieved image; the number of remaining pixels matches the size of the validation image's selected semantic segmentation mask. In the second-order case we additionally measure the mean intersection over union (mIoU) of the resulting mask compared to the ground-truth retrieved-image segmentation. A good approach should attend to just the pixels of the segmentation class and, viewed as a binary segmentation problem, yield the maximum mIoU of 1. We illustrate our second-order evaluation strategy in Figure 6.
Finally, for those methods that permit it, we measure how much they violate the efficiency axiom by summing the interpretation coefficients and comparing with $v(N) - v(\emptyset)$. In the first-order setting $v(N)$ is the similarity between the query and retrieved image, and $v(\emptyset)$ is the similarity between the query and a blurred retrieved image (with a 25-pixel blur). In the second-order setting $v(\emptyset)$ represents the similarity when both images are blurred. For SAM-based methods we replace features with those from blurred images. To compute the sum of interpretation coefficients for kernel methods we sum over Shapley values in the first-order case and over Shapley-Taylor indices of order $k \le 2$ in the second-order case. For Partition SHAP (Lundberg) we sum coefficients over all pixels. For Integrated Hessians we sum over all first- and second-order coefficients as described in Janizek et al. (2020).
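Stated as code, the inefficiency measurement is just the following (a minimal sketch; names are ours):

```python
def inefficiency(coefficients, v_full, v_empty):
    """Absolute violation of the efficiency axiom: attribution coefficients
    (Shapley values, or Shapley-Taylor indices of order k <= 2) should sum
    to v(N) - v(empty)."""
    return abs(sum(coefficients) - (v_full - v_empty))
```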
In tables we report mean values of the Inefficiency and Faithfulness metrics and note that for these experiments the Standard Error of the Mean (SEM) is far below the three-significant-figure precision of the tables.
First Order Methods: For first-order explanations we use the official implementation of Image-LIME (Ribeiro, 2021) and use the SHAP package for Integrated Gradients, Partition SHAP, and Kernel SHAP (Lundberg). We re-implement SBSM and VESM in PyTorch from the instructions provided in their papers. For sampling procedures such as LIME, Kernel SHAP, and Partition SHAP we use 5000 function evaluations. For first- and second-order super-pixel based methods (LIME, Kernel SHAP) we use the SLIC superpixel method (Achanta et al., 2010) provided in the Scipy library (Virtanen et al., 2020) with 50 segments, compactness = 10, and sigma = 3. For SBSM we use a window size of 20 pixels and a stride of 3 pixels. We batch function evaluations with minibatch size 64 for backbone networks and 64 x 20 for SAM-based methods. For all background distributions we blur the images with a 25-pixel blur kernel, with the exception of LIME and SBSM which use mean color backgrounds.

Let $v(S) : [0, 1]^N \to \mathbb{R} := f(\mathrm{mask}(x, S))$ represent softly masking the spatial locations of a deep feature map $x$ with the vector of zeros and applying a differentiable function $f$. We begin with the formulation of Integrated Gradients:
$$IG_{hw}(S) = (S_{hw} - S'_{hw}) \int_0^1 \frac{\partial v(\alpha S + (1 - \alpha) S')}{\partial T_{hw}} \, d\alpha$$
In our case the foreground, $S := \mathbf{1}_{HW}$, is a mask of all 1s and the background, $S'$, is the zero mask of the same shape. We note that $\frac{\partial}{\partial T_{hw}}$ refers to taking the partial with respect to the full input $\alpha S$, not just the mask $S$. We include this to stress the subtle difference, which can be missed in a quick reading of the equations of Sundararajan et al. (2017a). In this case our formula simplifies to:
$$IG_{hw}(S) = \int_0^1 \frac{\partial v(\alpha S)}{\partial T_{hw}} \, d\alpha$$
Approximating this integral with a single sample at α = 1 yields:
$$IG_{hw}(S) \approx \frac{\partial v(S)}{\partial S_{hw}} = \frac{\partial f(\mathrm{mask}(x, S))}{\partial T_{hw}} = \frac{\partial}{\partial T_{hw}} f(x \odot S) = \sum_c x_{chw} \frac{\partial f(x)}{\partial x_{chw}} = \sum_c x_{chw} \, \mathrm{GAP}(\nabla_x f(x))_c \quad \text{(spatially invariant derivatives)}$$
This is precisely the formulation of GradCAM. It also makes clear that the global average pooling in GradCAM causes the method to deviate from Integrated Gradients in the general case.
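The last line of the derivation is exactly what a standard GradCAM implementation computes; a minimal autograd sketch (our own, assuming a scalar-valued head $f$ on a single feature map) is:

```python
import torch

def gradcam_from_features(f, x):
    """GradCAM on a GAP-style head: the single-sample approximation of
    integrated gradients derived above. `f` maps features (1, C, H, W) to a
    scalar; `x` is (1, C, H, W). Returns an (H, W) heatmap."""
    x = x.clone().requires_grad_(True)
    f(x).backward()
    pooled_grad = x.grad.mean(dim=(2, 3), keepdim=True)   # GAP of the gradients
    return (x * pooled_grad).sum(dim=1).squeeze(0)        # sum_c x_chw * GAP(grad)_c
```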
To construct a function where GradCAM violates the dummy axiom we simply have to violate the spatial invariance of gradients. We provide a specific example of this violation in A.4.
A.4 GRADCAM VIOLATES THE DUMMY AXIOM

It is straightforward to construct examples where GradCAM violates the dummy axiom. For example, consider the function:

$$d(x, y) = \mathrm{sim}_{\mathrm{cosine}}(\mathrm{GAP}(x), \mathrm{GAP}(y \odot M))$$

where $\mathrm{sim}_{\mathrm{cosine}}$ represents cosine similarity, $\odot$ represents elementwise multiplication, and $M \in [0, 1]^{CHW}$ is a mask where $M_{chw} = 0$ if $w \le \frac{W}{2}$ and $M_{chw} = 1$ otherwise. Intuitively, $M$ removes the influence of any feature on the left of the image, making these features "dummy" features for the model. Because GradCAM spatially averages the gradients prior to taking the inner product with the feature map, all features are treated equally regardless of how they are used. In this example, depicted in Figure 8, positive contributions from the right side of the image are extended to the left side of the image despite the fact that the mask $M$ stops these features from impacting the prediction. Using a Shapley or Aumann-Shapley approximator on the feature space does not suffer from this effect, as shown in the two right columns of Figure 8.
A.5 INTEGRATED GRADIENT CAM

Sections A.4 and A.3 demonstrate that GradCAM can violate the dummy axiom when the function has spatially varying gradients, which is a common occurrence, especially if one is trying to interpret deeper layers of a network. We remedy this by instead considering Integrated Gradients on a function which masks the spatial locations of a deep feature map. More specifically, our Integrated Gradient generalization of CAM takes the following form:
$$\mathrm{IGCAM}(h, w) := \int_0^1 \frac{\partial f(b + \alpha M \odot (x - b))}{\partial T_{hw}} \, d\alpha \qquad (9)$$
Where $f$ is the classification "head", $x \in \mathbb{R}^{CHW}$ is a tensor of deep image features, $M := \mathbf{1}_{HW}$ is a mask of 1s over the spatial locations of the features, and $b \in \mathbb{R}^{CHW}$ is a background signal, commonly taken to be zero in GradCAM. We note that $\frac{\partial}{\partial T_{hw}}$ refers to taking the partial with respect to the full input $b + \alpha M \odot (x - b)$, not just the mask. We include this to stress the subtle difference, which can be missed in a quick reading of the equations of Sundararajan et al. (2017a). This variant of GradCAM does not violate the dummy axiom and satisfies the axioms of the Aumann-Shapley fair credit assignment.
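A sketch of Equation 9 via a Riemann-sum approximation of the path integral is below. We differentiate with respect to the per-location soft mask, which folds the $(x - b)$ chain-rule factor into the attribution; all names are our own, not the released implementation.

```python
import torch

def igcam(f, x, b, steps=64):
    """Riemann-sum sketch of Eq. 9. `f` maps features (1, C, H, W) to a scalar;
    `x` are deep features for the image and `b` a background feature map.
    Returns an (H, W) attribution map."""
    heat = torch.zeros(x.shape[-2:])
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        # per-location soft mask, broadcast over the channel dimension
        m = torch.full_like(x[:, :1], alpha.item(), requires_grad=True)
        f(b + m * (x - b)).backward()
        heat += m.grad.squeeze() / steps   # accumulate d f / d mask_hw
    return heat
```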
A.6 ADDITIONAL SIMILARITY VISUALIZATIONS

Figure 9: Additional first-order search interpretations on random image pairs from the Pascal VOC dataset.

A.7 ADDITIONAL RESULTS FOR STANFORD ONLINE PRODUCTS

Table 2 reports these results.

A.11 GENERALIZING SBSM TO JOINT SEARCH ENGINE INTERPRETABILITY

Before generalizing SBSM (Dong et al.) to joint interpretability we will review the original implementation for marginal interpretability. SBSM uses a sliding square mask and multiple evaluations of the search engine to determine which regions of the image are important for similarity. More formally, let $q$ and $r$ represent the pixels of the query image and retrieved image. Let $M^s_{ij}(q)$ represent the result of replacing a square of pixels of size $s \times s$ centered at pixel $(i, j)$ with a "background value", which in our case is black. SBSM "slides" this mask across the query image and compares the similarity between the masked query and the retrieved image. These masked similarity values are compared to the baseline similarity value and stored in a weight matrix $w$:

$$w_{ij} = \min\left( d(M^s_{ij}(q), r) - d(q, r), \; 0 \right)$$
Intuitively speaking, the weight $w_{ij}$ represents the impact of masking a square centered at $(i, j)$; for areas that are critical to the similarity, this will result in a nonzero $w_{ij}$. Finally, an attention mask on the query image is formed by a weighted average of the masks used to censor the images. For square masks, this can be achieved efficiently using a deconvolution with a kernel of ones of size $s \times s$ on the weight matrix $w$. We also note that instead of evaluating the (expensive) distance computation $d$ for every pixel $(i, j)$, one can sample pixels to censor. We use this approach in our joint generalization.
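The following is a minimal sketch of this marginal procedure. We assume $d$ is a similarity and keep the magnitude of the similarity drop (one consistent reading of the sign convention above); the deconvolution uses a kernel of ones as described, and names are ours.

```python
import torch
import torch.nn.functional as F

def sbsm(d, q, r, s=20, stride=3):
    """Marginal SBSM sketch. `d(q, r)` is a similarity between (C, H, W) image
    tensors. Slide an s x s black square over q, record similarity drops, then
    spread each weight over its square with a transposed convolution."""
    base = d(q, r)
    H, W = q.shape[-2:]
    nh, nw = (H - s) // stride + 1, (W - s) // stride + 1
    w = torch.zeros(1, 1, nh, nw)
    for i in range(nh):
        for j in range(nw):
            masked = q.clone()
            masked[:, i * stride:i * stride + s, j * stride:j * stride + s] = 0.0
            w[0, 0, i, j] = torch.clamp(base - d(masked, r), min=0)
    ones = torch.ones(1, 1, s, s)  # "deconvolution with a kernel of ones"
    heat = F.conv_transpose2d(w, ones, stride=stride)[0, 0]
    return heat[:H, :W]  # optionally normalize by per-pixel mask coverage counts
```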
To generalize SBSM we use a pair of masks, one for the query image and one for the retrieved image. We sample mask locations and calculate weights designed to capture the intuition that censoring corresponding areas causes similarity to increase rather than decrease. More specifically, we use the following weighting scheme:
$$w^{hw}_{ij} = \min\left( d(q, r) - d\left( M^s_{ij}(q), \, M^s_{hw}(r) \right), \; 0 \right) \qquad (11)$$
Because evaluating the similarity function for every (i, j, h, w) combination is prohibitively expensive, we instead sample masked images for our computation. To project attention from a query pixel, we query for all masks that overlap with the selected query pixel, and then average their corresponding retrieved masks according to the weights calculated in Equation 11.
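A sketch of the sampled joint procedure is below. Here `mask_fn`, the uniform sampling of square positions, and the clamp-based sign convention (keeping pairs whose joint censoring increases similarity) are our own illustration of the text, not the released code.

```python
import torch

def joint_sbsm_weights(d, q, r, mask_fn, num_samples=2000, s=20):
    """Joint SBSM sketch (Eq. 11): sample square masks on both images and keep
    pairs whose joint censoring increases similarity. `mask_fn(img, i, j, s)`
    returns a copy of img with an s x s square censored at (i, j)."""
    base = d(q, r)
    H, W = q.shape[-2:]
    records = []
    for _ in range(num_samples):
        i, j = int(torch.randint(0, H - s + 1, (1,))), int(torch.randint(0, W - s + 1, (1,)))
        h, w = int(torch.randint(0, H - s + 1, (1,))), int(torch.randint(0, W - s + 1, (1,)))
        gain = torch.clamp(d(mask_fn(q, i, j, s), mask_fn(r, h, w, s)) - base, min=0)
        records.append(((i, j), (h, w), float(gain)))
    # to project attention from a query pixel: average the retrieved masks of
    # all records whose query square overlaps that pixel, weighted by `gain`
    return records
```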
We now expand the definition of $v_2(\alpha\beta S)$:
$$v_2(\alpha\beta S) = d(\mathrm{mask}(x, \alpha\beta S), \mathrm{mask}(y, \alpha\beta S)) = \frac{1}{H^2 W^2} \sum_c \left[ \sum_{h,w} a_{chw} + \alpha\beta S_{hw} (x_{chw} - a_{chw}) \right] \left[ \sum_{i,j} b_{cij} + \alpha\beta S_{ij} (y_{cij} - b_{cij}) \right]$$
From this function we can read off the appropriate term of the Hessian with respect to the mask at location $(h, w)$ and location $(i, j)$:
$$\frac{\partial^2 v_2(\alpha\beta S)}{\partial T_{hw} \, \partial T_{ij}} = \frac{1}{H^2 W^2} \sum_c \left( x_{chw} y_{cij} - x_{chw} b_{cij} - a_{chw} y_{cij} + a_{chw} b_{cij} \right) = \psi^2_{hw,ij}(v)$$

We can now pull this outside the integral to yield:
$$\Gamma_{hw,ij}(v_2) = \int_0^1 \int_0^1 \alpha\beta \, \frac{\partial^2 v_2(\alpha\beta S)}{\partial T_{hw} \, \partial T_{ij}} \, d\beta \, d\alpha = \psi^2_{hw,ij}(v) \int_0^1 \int_0^1 \alpha\beta \, d\beta \, d\alpha = \frac{1}{4} \psi^2_{hw,ij}(v)$$
This proves that the Shapley-Taylor index and second-order Aumann-Shapley values are proportional for the GAP architecture.
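The only architecture-specific constant in the last step is $\int_0^1 \int_0^1 \alpha\beta \, d\beta \, d\alpha = 1/4$; a quick Monte Carlo sanity check of that constant:

```python
import numpy as np

# E[alpha * beta] for independent uniforms equals the double integral of
# alpha * beta over [0, 1]^2, i.e. the 1/4 factor relating Gamma and psi.
rng = np.random.default_rng(0)
alpha, beta = rng.random(1_000_000), rng.random(1_000_000)
print((alpha * beta).mean())  # ~0.25
```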
A.15 EXPLAINING DISSIMILARITY
In addition to explaining the similarity between two images, our methods naturally explain image dissimilarity. In particular, regions with negative Shapley values (blue regions in Figure 11) contribute negatively to the similarity between the two images. These coefficients can be helpful when trying to understand why an algorithm does not group two images together.

Figure 11: Explanation of why two images are similar (red) and dissimilar (blue). Blue regions highlight major differences between the images, such as the dog playing the guitar and the chain-link fence in the retrieved image.
Figure 1: Architectures for search engine interpretability. Like classifier explanations, first-order search explanations yield heatmaps of important pixels for similarity (bottom row, third column).
Figure 4: Visualization of how regions of two similar images "correspond" according to the second-order search interpretability method SAM. We can use this correspondence to transfer labels or attention between similar images.

3 RELATED WORK

There is a considerable body of literature on model interpretability, and we mention just a handful of the works that are particularly related. One of our baseline methods, Dong et al., was one of the first to present a generic visual search engine explanation, reminiscent of a Parzen-window based estimator. Fong & Vedaldi (2017) introduce a method for explaining classifiers based on meaningful perturbation, and Chefer et al. (2021) introduce a method for improving interpretation for transformer-based classifiers. Zhu et al. (2019) lifted CAM to search engines, and we find that our Shapley-Taylor based method aligns with their approach for GAP architectures. Singh & Anand (2019) and Fernando et al. (2019) use LIME and DeepSHAP to provide first-order interpretations of text but do not apply their methods to images. Ancona et al. (2019) introduce a distribution propagation approach for improving the estimation of Shapley values for deep models, which can be combined with our approach. Many works implicitly use components that align with Shapley-Taylor indices for particular functions. Works such as Fischer et al. (2015a); Sun et al. (2020); Wang et al. (2020a); Chen et al. (2020c); Hou et al. (2019) use feature correlation layers to estimate and utilize correspondences between images. We show these layers are equivalent to Shapley-Taylor indices on the GAP architecture, and this allows us to create a correlation layer that handles counterfactual backgrounds. Other recent works have used learned co-attention within transformer architectures to help pool and share information across multiple domain types (Wei et al., 2020). Fu et al. (2020) attempt to learn a variant of GradCAM that better aligns with axioms similar to Shapley values by adding efficiency regularizers.
SBSM (Dong et al.) and VESM (Zheng et al., 2020), on a pair of images and a MoCoV2-based image similarity model.

In addition to generalizing LIME and SHAP, we note that this approach generalizes VEDML (Zhu et al., 2019), a metric-learning adaptation of CAM:

Proposition 4.1 Let $\mathcal{X} = \mathcal{Y} = \mathbb{R}^{CHW}$ represent the space of deep network features, where $C, H, W$ represent a channel, height, and width of the feature maps respectively. Let the function $d := \sum_c \mathrm{GAP}(x)_c \, \mathrm{GAP}(y)_c$. Let the grand coalition, $N = [0, H] \times [0, W]$, index the spatial coordinates of the image feature map $y$. Let the function $\mathrm{mask}(y, S)$ act on a feature map $y$ by replacing the features at locations $S$ with a background signal $b$. Then:

$$\phi_v((i, j)) = \frac{1}{HW} \sum_c \mathrm{GAP}(x)_c \left( y_{cij} - b_{cij} \right)$$
Figure 5: Convergence of Shapley-Taylor estimation schemes with respect to the Mean Squared Error (MSE) on randomly initialized deep networks with 15-dimensional input. Our strategies (Kernel) converge with significantly fewer function evaluations.
Figure 6: Our second-order explanation evaluation strategy. A good method should project query objects (top left and middle) to corresponding objects in the retrieved image (bottom left and middle).
Table 1: Comparison of performance of first- and second-order search explanation methods. Methods introduced in this work are highlighted in pink. *Though SAM generalizes Zhu et al. (2019) we refer to it as a baseline. For additional details see Section 6.
Figure 7: First-order interpretation evaluation strategy. A good method should highlight pixels in the query image (top left and middle) that, when censored (top right), have the largest possible impact on the cosine distance.
Figure 8: Interpretations of a function that purposely ignores the left half of the image. KSAM and IGSAM properly assign zero weight to these features. GradCAM does not, and hence violates the dummy axiom of fair credit assignment.
Figure 10: Kernel convergence for random functions generated by randomly choosing coefficients. Results generally mirror those for randomly initialized deep networks.
Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. arXiv preprint arXiv:1504.06852, 2015a.

Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. arXiv preprint arXiv:1504.06852, 2015b.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 315-323. JMLR Workshop and Conference Proceedings, 2011.

Trevor J Hastie and Robert J Tibshirani. Generalized additive models, volume 43. CRC Press, 1990.

Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 539-546. IEEE, 2005.

Anupam Datta, Shayak Sen, and Yair Zick. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 598-617. IEEE, 2016.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255. IEEE, 2009.

Alejandro Diaz. Through the google goggles: Sociopolitical bias in search engine design. In Web search, pp. 11-34. Springer, 2008.

Bo Dong, Roddy Collins, and Anthony Hoogs. Explainability for content-based image retrieval.

M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2):303-338, June 2010.

Zeon Trevor Fernando, Jaspreet Singh, and Avishek Anand. A study on the interpretability of neural retrieval models using deepshap. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1005-1008, 2019.

Ruth C Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3429-3437, 2017.

Ruigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, and Biao Li. Axiom-based grad-cam: Towards accurate visualization and explanation of cnns. arXiv preprint arXiv:2008.02312, 2020.

Eric Goldman. Search engine bias and the demise of search engine utopianism. Yale JL & Tech., 8:188, 2005.

Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, and Stefan Lee. Counterfactual visual explanations. In International Conference on Machine Learning, pp. 2376-2384. PMLR, 2019.

Mark Hamilton, Stephanie Fu, William T Freeman, and Mindren Lu. Conditional image retrieval. arXiv preprint arXiv:2007.07177, 2020.

John C Harsanyi. A simplified bargaining model for the n-person cooperative game. International Economic Review, 4(2):194-220, 1963.

Kaiming He and Yuxin Wu. Moco: Momentum contrast for unsupervised visual representation learning. https://github.com/facebookresearch/moco, 2021.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.

Xi Wei, Tianzhu Zhang, Yan Li, Yongdong Zhang, and Feng Wu. Multi-modality cross attention network for image and sentence matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10941-10950, 2020.

H Peyton Young. Monotonic solutions of cooperative games. International Journal of Game Theory, 14(2):65-72, 1985.

Meng Zheng, Srikrishna Karanam, Terrence Chen, Richard J Radke, and Ziyan Wu. Towards visually explaining similarity models. arXiv preprint arXiv:2008.06035, 2020.

B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. CVPR, 2016.

Sijie Zhu, Taojiannan Yang, and Chen Chen. Visual explanation for deep metric learning. arXiv preprint arXiv:1909.12977, 2019.
Table 2: Comparison of performance of first-order search interpretation methods across different visual search systems on the Stanford Online Product dataset. Methods introduced in this work are highlighted in pink. *Though SAM generalizes Zhu et al. (2019) we refer to it as a baseline. For additional details see Section 6. (SBSM, PSHAP, LIME, and KSHAP are model agnostic; VESM, GCAM, SAM*, IGSAM, and KSAM are architecture dependent.)

Metric  Model    SBSM  PSHAP  LIME  KSHAP  VESM  GCAM  SAM*  IGSAM  KSAM
Faith.  DN121    0.18  0.23   0.20  0.22   0.09  0.13  0.12  0.18   0.18
Faith.  MoCoV2   0.24  0.30   0.27  0.18   0.14  0.2   0.21  0.24   0.24
Faith.  RN50     0.11  0.14   0.12  0.13   0.03  0.07  0.07  0.10   0.10
Faith.  VGG11    0.15  0.16   0.14  0.15   0.04  0.08  0.09  0.12   0.12
Ineff.  DN121    -     0.00   0.24  0.00   -     11.2  0.54  0.02   0.00
Ineff.  MoCoV2   -     0.00   0.17  0.00   -     0.34  0.57  0.02   0.00
Ineff.  RN50     -     0.00   0.21  0.00   -     13.6  0.39  0.02   0.00
Ineff.  VGG11    -     0.00   0.24  0.00   -     4.13  0.47  0.04   0.00

A.8 ADDITIONAL RESULTS FOR CALTECH-UCSD BIRDS 200 (CUB) DATASETS
Table 3: Comparison of performance of first-order search interpretation methods across different visual search systems on the CUB dataset. Methods introduced in this work are highlighted in pink. *Though SAM generalizes Zhu et al. (2019) we refer to it as a baseline. RN50-ML refers to a ResNet50 architecture trained for metric learning on the CUB dataset with the margin loss (Roth et al., 2020). For additional details see Section 6. (SBSM, PSHAP, LIME, and KSHAP are model agnostic; VESM, GCAM, SAM*, IGSAM, and KSAM are architecture dependent.)

Metric  Model     SBSM  PSHAP  LIME  KSHAP  VESM  GCAM  SAM*  IGSAM  KSAM
Faith.  DN121     0.25  0.38   0.31  0.34   0.15  0.10  0.12  0.30   0.30
Faith.  RN50-ML   0.39  0.49   0.47  0.49   0.04  0.14  0.17  0.41   0.41
Faith.  MoCoV2    0.32  0.47   0.39  0.41   0.26  0.26  0.26  0.34   0.34
Faith.  RN50      0.14  0.21   0.18  0.18   0.05  0.07  0.07  0.14   0.14
Faith.  VGG11     0.23  0.31   0.26  0.27   0.11  0.15  0.16  0.23   0.22
Ineff.  DN121     -     0.00   0.17  0.00   -     16.0  0.58  0.02   0.00
Ineff.  RN50-ML   -     0.00   0.13  0.00   -     5.23  0.48  0.03   0.00
Ineff.  MoCoV2    -     0.00   0.19  0.00   -     0.44  0.60  0.03   0.00
Ineff.  RN50      -     0.00   0.15  0.00   -     15.5  0.43  0.02   0.00
Ineff.  VGG11     -     0.00   0.17  0.00   -     4.25  0.54  0.05   0.00
Table 4: Comparison of performance of first- and second-order search interpretation methods across different visual search systems on the MSCOCO dataset. Methods introduced in this work are highlighted in pink. *Though SAM generalizes Zhu et al. (2019) we refer to it as a baseline. For additional details see Section 6. (Columns: SBSM, PSHAP, LIME, KSHAP are model agnostic; VESM, GCAM, SAM*, IGSAM, KSAM are architecture dependent; rows are indexed by metric, order, and model.)
ACKNOWLEDGMENTS

We would like to thank Siddhartha Sen for sponsoring access to the Microsoft Research compute infrastructure. We would also like to thank Zhoutong Zhang, Jason Wang, and Markus Weimer for their helpful commentary on the work. We thank the Systems that Learn program at MIT CSAIL for their funding and support.

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 2021323067. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

This work is supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/).

A.12 PROOF OF PROPOSITION 4.1

Let $\mathcal{X} = \mathcal{Y} = \mathbb{R}^{CHW}$ represent the space of deep network features, where $C, H, W$ represent a channel, height, and width of the feature maps respectively. Let the function $d := \sum_c \mathrm{GAP}(x)_c \, \mathrm{GAP}(y)_c$. Let the grand coalition, $N = [0, H] \times [0, W]$, index into the spatial coordinates of the image feature map $y$. Let the function $\mathrm{mask}(y, S)$ act on a feature map $y$ by replacing the features at locations $S$ with a background signal $b$. For notational convenience let $\psi_i(v) := \phi_v(i)$ represent the Shapley value of the $i$th player under the value function $v$. We begin by expressing the left-hand side of the proposition.

A.13 PROOF OF PROPOSITION 5.1

Let the spaces $\mathcal{X}$, $\mathcal{Y}$ and function $d$ be as in Proposition 4.1. As a reminder, the function $d$ represents the un-normalized GAP similarity function. Let the grand coalition, $N$, index into the spatial coordinates of both the query image features $x \in \mathbb{R}^{CHW}$ and retrieved image features $y \in \mathbb{R}^{CHW}$. Let the function $\mathrm{mask}(y, S)$ act on a feature map $y$ by replacing the corresponding features with a background feature map $a$ for query features and $b$ for retrieved features. We can represent the set of players, $N$, as a set of ordered pairs of coordinates with additional information about which tensor, the query ($0$) or retrieved ($1$) features, they represent. In the subsequent proof we omit these $0, 1$ tags, as it is clear from our notation which side an index refers to: $h, w$ for the query and $i, j$ for the retrieved image. We first consider the zero-background value function, $v(S \subset N)$, defined by censoring the spatially varying features prior to global average pooling and comparing their inner product, i.e. replacing $x_{chw}$ with $0$ for locations outside $S$, and likewise for $y_{cij}$. When $S$ contains all $i, j, h, w$ this represents the similarity judgement from the GAP network architecture. We seek the Shapley-Taylor index for a pair of image locations $S = \{(h, w), (i, j)\}$. For notational convenience let $\psi^k_S(v) := \phi^k_v(S)$ represent the $k$-order interaction. For the zero-background value function this yields

$$\psi^2_{\{(h,w),(i,j)\}}(v) = \frac{1}{H^2 W^2} \sum_c x_{chw} y_{cij}.$$

By following the same set of reasoning, we can introduce nonzero background values $a_{chw}$ and $b_{cij}$ to yield the following:

$$\psi^2_{\{(h,w),(i,j)\}}(v) = \frac{1}{H^2 W^2} \sum_c \left( x_{chw} y_{cij} - a_{chw} y_{cij} - x_{chw} b_{cij} + a_{chw} b_{cij} \right)$$

which matches Equation 8.
Like in our proof of Proposition 4.2, because our function is defined on the interval $[0, 1]^N$, many of the terms mentioned in Janizek et al. (2020) drop out and are instead captured in the Hessian of the function with respect to the soft mask.

The term "axiomatic" can mean different things to different readers. When this work refers to "axiomatic" methods we refer to methods that approximate the uniquely specified explanation values dictated by the axioms of fair credit assignment. In the first-order case, these explanations are the Shapley values and satisfy the axioms of linearity, efficiency, dummy, and symmetry. In the higher-order case these fair credit assignments are the Shapley-Taylor indices and satisfy analogous axioms (Sundararajan et al., 2020). We note that our methods converge to the true Shapley and Shapley-Taylor indices, and thus the deviations that arise as part of convergence induce corresponding deviations from the axioms of fair credit assignment. Nevertheless, we find that these deviations become negligible as our methods converge to the true Shapley and Shapley-Taylor values. This starkly contrasts with the behavior of methods that do not converge to values that satisfy the axioms of fair credit assignment, such as GradCAM, as shown in Figure 8.
Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk. Slic superpixels. Technical report, 2010.

Jiwoon Ahn, Sunghyun Cho, and Suha Kwak. Weakly supervised learning of instance segmentation with inter-pixel relations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2209-2218, 2019.

Marco Ancona, Cengiz Oztireli, and Markus Gross. Explaining deep neural networks with a polynomial time algorithm for shapley value approximation. In International Conference on Machine Learning, pp. 272-281. PMLR, 2019.

Robert J Aumann and Lloyd S Shapley. Values of non-atomic games. Princeton University Press, 2015.

Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.

Bing. Beyond text queries: Searching with bing visual search, Jun 2017. URL https://aka.ms/AAas7jg.

Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. Coco-stuff: Thing and stuff classes in context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1209-1218, 2018.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. arXiv preprint arXiv:2006.09882, 2020.

Hila Chefer, Shir Gur, and Lior Wolf. Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 782-791, 2021.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020a.

Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020b.

Yun-Chun Chen, Yen-Yu Lin, Ming-Hsuan Yang, and Jia-Bin Huang. Show, match and segment: Joint weakly supervised learning of semantic matching and object co-segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2020c.

Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Cross attention network for few-shot classification. arXiv preprint arXiv:1910.07677, 2019.

Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708, 2017.

Joseph D Janizek, Pascal Sturmfels, and Su-In Lee. Explaining explanations: Axiomatic feature interactions for deep networks. arXiv preprint arXiv:2002.04138, 2020.

Ron Kohavi, David H Wolpert, et al. Bias plus variance decomposition for zero-one loss functions. In ICML, volume 96, pp. 275-83, 1996.

Yehuda Koren. The bellkor solution to the netflix grand prize. Netflix prize documentation, 81(2009):1-10, 2009.

Stan Lipovetsky and Michael Conklin. Analysis of regression in game theory approach. Applied Stochastic Models in Business and Industry, 17(4):319-330, 2001.

Scott Lundberg. shap. URL https://github.com/slundberg/shap.

Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30, pp. 4765-4774. Curran Associates, Inc., 2017.

Sébastien Marcel and Yann Rodriguez. Torchvision the machine-vision package of torch. In Proceedings of the 18th ACM International Conference on Multimedia, pp. 1485-1488, 2010.

Christoph Molnar. Interpretable machine learning. Lulu.com, 2020.

Abbe Mowshowitz and Akira Kawaguchi. Assessing bias in search engines. Information Processing & Management, 38(1):141-156, 2002.

Pandu Nayak. Understanding searches better than ever before, Oct 2019. URL https://blog.google/products/search/search-language-understanding-bert/.

Harsha Nori, Samuel Jenkins, Paul Koch, and Rich Caruana. Interpretml: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223, 2019.

Guillermo Owen. Multilinear extensions of games. Management Science, 18(5-part-2):64-79, 1972.

Gabriele Paolacci, Jesse Chandler, and Panagiotis G Ipeirotis. Running experiments on amazon mechanical turk. Judgment and Decision Making, 5(5):411-419, 2010.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.

Marco Ribeiro. lime. https://github.com/marcotcr/lime, 2021.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pp. 1135-1144, 2016.

Karsten Roth, Timo Milbich, Samarth Sinha, Prateek Gupta, Björn Ommer, and Joseph Paul Cohen. Revisiting training strategies and generalization performance in deep metric learning, 2020.

Ando Saabas. Interpreting random forests, Oct 2014. URL http://blog.datadive.net/interpreting-random-forests/.

Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618-626, 2017.

Lloyd S Shapley. Notes on the n-person game-ii: The value of an n-person game. 1951.

Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences, 2017.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Jaspreet Singh and Avishek Anand. Exs: Explainable search using local model agnostic interpretability. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 770-773, 2019.

Erik Štrumbelj and Igor Kononenko. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3):647-665, 2014.

Xiaoyuan Su and Taghi M Khoshgoftaar. A survey of collaborative filtering techniques. Advances in Artificial Intelligence, 2009, 2009.

Guolei Sun, Wenguan Wang, Jifeng Dai, and Luc Van Gool. Mining cross-image semantics for weakly supervised semantic segmentation. In European Conference on Computer Vision, pp. 347-365. Springer, 2020.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pp. 3319-3328. PMLR, 2017a.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pp. 3319-3328. PMLR, 2017b.

Mukund Sundararajan, Kedar Dhamdhere, and Ashish Agarwal. The shapley taylor interaction index. In International Conference on Machine Learning, pp. 9259-9268. PMLR, 2020.

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267-288, 1996.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.

Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods, 17:261-272, 2020. doi: 10.1038/s41592-019-0686-2.

Qianqian Wang, Xiaowei Zhou, Bharath Hariharan, and Noah Snavely. Learning feature descriptors using camera pose supervision. In Proc. European Conference on Computer Vision (ECCV), 2020a.

Yude Wang, Jie Zhang, Meina Kan, Shiguang Shan, and Xilin Chen. Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12275-12284, 2020b. |
260,126,025 | Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations | Robust reinforcement learning (RL) seeks to train policies that can perform well under environment perturbations or adversarial attacks. Existing approaches typically assume that the space of possible perturbations remains the same across timesteps. However, in many settings, the space of possible perturbations at a given timestep depends on past perturbations. We formally introduce temporally-coupled perturbations, presenting a novel challenge for existing robust RL methods. To tackle this challenge, we propose GRAD, a novel game-theoretic approach that treats the temporally-coupled robust RL problem as a partially-observable twoplayer zero-sum game. By finding an approximate equilibrium in this game, GRAD ensures the agent's robustness against temporally-coupled perturbations. Empirical experiments on a variety of continuous control tasks demonstrate that our proposed approach exhibits significant robustness advantages compared to baselines against both standard and temporally-coupled attacks, in both state and action spaces. | [
249538336,
222177165,
231662383,
235458224,
235593394,
12529428,
15995898,
235377213
] | Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations
Yongyuan Liang
Yanchao Sun
Ruijie Zheng [email protected]
Xiangyu Liu
Tuomas Sandholm [email protected]
Furong Huang [email protected]
Stephen Mcaleer [email protected]
Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations
Robust reinforcement learning (RL) seeks to train policies that can perform well under environment perturbations or adversarial attacks. Existing approaches typically assume that the space of possible perturbations remains the same across timesteps. However, in many settings, the space of possible perturbations at a given timestep depends on past perturbations. We formally introduce temporally-coupled perturbations, presenting a novel challenge for existing robust RL methods. To tackle this challenge, we propose GRAD, a novel game-theoretic approach that treats the temporally-coupled robust RL problem as a partially-observable two-player zero-sum game. By finding an approximate equilibrium in this game, GRAD ensures the agent's robustness against temporally-coupled perturbations. Empirical experiments on a variety of continuous control tasks demonstrate that our proposed approach exhibits significant robustness advantages compared to baselines against both standard and temporally-coupled attacks, in both state and action spaces.
Introduction
In recent years, reinforcement learning (RL) has demonstrated remarkable success in tackling complex decision-making problems in various domains. However, the vulnerability of deep RL algorithms to test-time changes in the environment or adversarial attacks has raised significant concerns for real-world applications. Developing robust RL algorithms that can defend against these adversarial attacks is crucial for the safety, reliability and effectiveness of RL-based systems.
In most existing research on robust RL [24; 32; 64; 66; 74], the adversary is able to perturb the observation or action at every timestep under a static constraint. Specifically, the adversary's perturbations are constrained within a predefined space, such as an L_p norm ball, which remains unchanged from one timestep to the next. This standard assumption in the robust RL literature can be referred to as a non-temporally-coupled assumption. However, this static constraint permits radically different perturbations at consecutive timesteps: for example, within the same L_p ball the attacker could blow the wind hard to the southeast at time t but to the northwest at time t + 1. In contrast, in many real-world settings, the adversary does not have complete flexibility to perturb the environment differently across timesteps; it is unlikely for the wind to move in one direction in one second and then in the opposite direction in the next. In these temporally-coupled settings, employing a robust policy learning technique designed for the static attack strategy would result in an excessively conservative policy. By formulating the robust RL problem as a partially-observable two-player game, we instead introduce a game-theoretic algorithm that lets the agent automatically adapt to the adversary under any attack constraint, either standard or temporally-coupled.
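To make the distinction concrete, the sketch below projects an attacker's proposed perturbation onto both a standard L-infinity ball and a ball around the previous step's perturbation. The specific L-infinity form of the coupling constraint is our illustration; the paper's formal definition appears in its later sections.

```python
import numpy as np

def project_temporally_coupled(delta, prev_delta, eps, eps_bar):
    """Project a proposed perturbation onto the intersection of the standard
    L-inf ball of radius eps and a coupling ball of radius eps_bar around the
    previous step's perturbation (illustrative constraint form)."""
    lo = np.maximum(-eps, prev_delta - eps_bar)   # coordinate-wise lower bound
    hi = np.minimum(eps, prev_delta + eps_bar)    # coordinate-wise upper bound
    return np.clip(delta, lo, hi)
```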
In this paper, we propose a novel approach: Game-theoretic Response approach for Adversarial Defense (GRAD) that leverages Policy Space Response Oracles (PSRO) [28] for robust training in the temporally-coupled setting. Our method aims to enhance the agent's resilience against the most powerful adversary in both state and action spaces. We model the interaction between the agent and the temporally-coupled adversary as a two-player zero-sum game and employ PSRO to ensure the agent's best response against the learned adversary and find an approximate equilibrium. This game-theoretic framework empowers our approach to effectively maximize the agent's worst-case rewards by adapting to the strongest adversarial strategies.
Our contributions are three-fold. First, we propose a novel class of temporally-coupled adversarial attacks to identify the realistic pitfalls of prior threat models, posing a challenge for existing robust RL methods that overlook the strength of temporally-coupled adversaries. Second, we introduce a game-theoretic response approach, referred to as GRAD, for robust training with a temporally-coupled adversary, and elaborate on its theoretical advantages over existing robust RL methods. Third, we provide extensive empirical results that demonstrate the effectiveness of our approach in defending against both temporally-coupled attacks and standard (non-temporally-coupled) attacks. Our evaluations span various continuous control tasks, considering perturbations in both state and action spaces. Figure 1 shows interpretable behaviors of the GRAD agent and robust baselines under different types of attacks in Humanoid.
Figure 1: The robust GRAD agents (top) and the state-of-the-art robust WocaR-RL [32] (bottom) show different learned behaviors under standard attacks and temporally-coupled attacks. Under standard non-temporally-coupled attacks, both agents maintain basic body stability, with the GRAD agent attempting to avoid lateral rotations. However, under temporally-coupled attacks, the baseline agent is prone to falling towards one side, while GRAD maintains a higher level of robustness.

Related Work

Robust RL against perturbations of state observations. Regularization-based methods [74; 57; 49] enforce the policy to have similar outputs under similar inputs, which achieves certifiable performance for DQN in some Atari games. But in continuous control tasks, these methods may not reliably improve the worst-case performance. A recent work by Korkmaz [25] points out that these adversarially trained models may still be sensitive to new perturbations. Attack-driven methods train DRL agents with adversarial examples. Some early works [26; 2; 36; 51; 13; 68] apply weak or strong gradient-based attacks on state observations to train RL agents against adversarial perturbations. Zhang et al. [73] and Sun et al. [64] propose to alternately train an RL agent and a strong RL adversary, namely ATLA, which significantly improves the policy robustness against rectangular state perturbations. A recent work by Liang et al. [32] introduces a more principled adversarial training framework which does not explicitly learn the adversary, boosting both the efficiency and robustness of RL agents. There is also a line of work studying theoretical guarantees of adversarial defenses in RL [35; 49; 12; 27; 70; 63] in various settings.

Robust RL against action perturbations. Besides observation perturbations, attacks can happen in many other scenarios. For example, the agent's executed actions can be perturbed [50; 65; 66; 30; 29]. Moreover, in a multi-agent game, an agent's behavior can create adversarial perturbations to a victim agent [17]. Pinto et al. [54] model the competition between the agent and the attacker as a zero-sum two-player game, and train the agent under a learned attacker to tolerate both environment shifts and adversarial disturbances.
Robust Markov decision process and safe RL. There are several lines of work that study RL under safety/risk constraints [21; 16; 15; 1; 67] or under intrinsic uncertainty of environment dynamics [33; 37]. In particular, several works discuss coupled or non-rectangular uncertainty sets, which allow less conservative and more efficient robust policy learning by incorporating realistic conditions that naturally arise in practice. Mannor et al. [38] propose to model coupled uncertain parameters based on the intuition that the total number of states with deviated parameters will be small. Mannor et al. [39] identify "k-rectangular" uncertainty sets defined by the cardinality of possible conditional projections of uncertainty sets, which can lead to more tractable solutions. Another recent work by Goyal et al. [18] proposes to model the environment uncertainty with factor matrix uncertainty sets, which can efficiently compute an optimal robust policy.
Two-player zero-sum games. There are a number of related deep reinforcement learning methods for two-player zero-sum games. CFR-based techniques such as Deep CFR [4], DREAM [62], and ESCHER [40] use deep reinforcement learning to approximate CFR. Policy-gradient techniques such as RPG [60], NeuRD [23], Friction-FoReL [53; 52], and MMD [59] approximate Nash equilibria via modified actor-critic algorithms. Our robust RL approach takes double oracle techniques such as PSRO [28] as its backbone. PSRO-based algorithms have been shown to outperform the previously mentioned algorithms in certain games [42]. More related work on game-theoretic RL is discussed in Appendix A.
Preliminaries
Notations and Background. A Markov decision process (MDP) can be defined as a tuple ⟨S, A, P, R, γ⟩, where S and A represent the state space and the action space, R : S × A → ℝ is the reward function, P : S × A → ∆(S) is the transition function, with ∆(S) denoting the set of probability distributions over the state space S, and γ ∈ (0, 1) is the discount factor. The agent selects actions based on its policy, π : S → ∆(A), which is represented by a function approximator (e.g., a neural network) that is updated during training and fixed during testing. The value function is denoted by V^π(s) := E_{P,π}[ ∑_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s ], which measures the expected cumulative discounted reward that an agent can obtain from state s ∈ S by following policy π.
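To make the value function concrete, the following is a minimal sketch (our own illustration, not from the paper) of a single-rollout Monte-Carlo estimate of V^π(s_0) from a recorded reward sequence:

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Return sum_t gamma^t * r_t for one rollout, i.e. a single-sample
    Monte-Carlo estimate of V^pi(s_0)."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return float(np.sum(rewards * gamma ** np.arange(len(rewards))))
```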
State Adversaries. A state adversary is a type of test-time attacker that perturbs the agent's state observation returned by the environment at each timestep and aims to reduce the expected episode reward gained by the agent. While the input to the agent's policy is perturbed, the underlying state in the environment remains unchanged. State adversaries, such as those presented in [74; 73; 64], typically consider perturbations on a continuous state space under a certain attack budget ε. The attacker perturbs a state s into s̃ ∈ B_ε(s), where B_ε(s) is an ℓ_p norm ball centered at s with radius ε.
Action Adversaries. An action adversary's goal is to manipulate the behavior of the agent by directly perturbing the action a executed by the agent into ã before the environment receives it (altering the output of the agent's policy), causing it to deviate from the optimal policy. In addition to directly perturbing actions, recent work [66] has also considered the setting where the action adversary selects a different, adversarial action with probability α as an uncertainty constraint. In this paper, we focus solely on continuous-space perturbations and employ an admissible action perturbation budget ε under a commonly used ℓ_p threat model, similar to the state perturbation.
Zero-sum Game. We model the game between the agent and the adversary as a two-player zero-sum game given by a tuple ⟨S, Π_a, Π_v, P, R, γ⟩, where Π_a and Π_v denote the sets of policies for the agent and the adversary, respectively. In this framework, both the transition kernel P and the reward function R of the victim agent depend not only on its own policy π_a ∈ Π_a, but also on the adversary's policy π_v ∈ Π_v. The adversary's reward R(s_t, ā_t) is defined as the negative of the victim agent's reward R(s_t, a_t), reflecting the zero-sum nature of the game.
Double Oracle Algorithm (DO) and Policy Space Response Oracles (PSRO). Double oracle [45] is an algorithm for finding a NE in normal-form games. The algorithm operates by keeping a population of strategies Π^t at time t. Each iteration, a NE π^{*,t} is computed for the game restricted to strategies in Π^t. Then, a best response BR_i(π^{*,t}_{−i}) to this NE is computed for each player i and added to the population, Π^{t+1}_i = Π^t_i ∪ {BR_i(π^{*,t}_{−i})} for i ∈ {1, 2}.

Algorithm 1 Policy Space Response Oracles [28]
Result: Nash Equilibrium
Input: Initial population Π^0
repeat {for t = 0, 1, . . .}
    π^r ← NE in game restricted to strategies in Π^t
    for i ∈ {1, 2} do
        Find a best response β_i ← BR_i(π^r_{−i})
        Π^{t+1}_i ← Π^t_i ∪ {β_i}
    end for
until approximate exploitability is less than or equal to zero
Return: π^r
Although in the worst case DO must expand all pure strategies before π^{*,t} converges to a NE in the original game, in many games DO terminates early and outperforms alternative methods. An interesting open problem is characterizing the games where DO will outperform other methods.
Policy Space Response Oracles (PSRO) [28; 48; 11; 44; 41], shown in Algorithm 1, is a method for approximately solving very large games. PSRO maintains a population of reinforcement learning policies and iteratively trains a best response to a mixture of the opponent's population. PSRO is fundamentally different from the previously described methods in that in certain games it can be much faster, but in other games it can take exponentially long in the worst case. Neural Extensive-Form Double Oracle (NXDO) [42] combines PSRO with extensive-form game solvers and can converge faster than PSRO.
Methodology
In this section, we first formally define temporally-coupled attacks. Then, we introduce our algorithm, a game-theoretic response approach for adversarial defense against the proposed attacks.
Temporally-coupled Attack
In adversarial RL, it is common and reasonable to impose restrictions on the power of an adversary. To achieve this, we introduce the concept of standard admissible perturbations, as defined in Definition 1, which limits the adversary's ability to perturb a state s_t or an action a_t to a predefined set.

Definition 1 (ε-Admissible Adversary Perturbations). An adversarial perturbation p_t is considered admissible in the context of a state adversary if, for a given state s_t at timestep t, the perturbed state s̃_t defined as s̃_t = s_t + p_t satisfies ‖s_t − s̃_t‖ ≤ ε, where ε is the state budget constraint. Similarly, if p_t is generated by an action adversary, the perturbed action ã_t defined as ã_t = a_t + p_t should satisfy the action constraint ‖a_t − ã_t‖ ≤ ε.

While the ε budget constraint is commonly applied in prior adversarial attacks, it may not be applicable in many real-world scenarios where the attacker needs to consider the past perturbations when determining the current perturbations. Specifically, in the temporal dimension, perturbations exhibit a certain degree of correlation. To capture this characteristic, we introduce the concept of temporally-coupled attackers.
Figure 2: Standard perturbations and temporally-coupled perturbations in a 2D example.
We propose a temporally-coupled constraint as defined in Definition 2, which sets specific limitations on the perturbation at the current timestep based on the previous timestep's perturbation.

Definition 2 (ε̄-Temporally-coupled Perturbations). A temporally-coupled state perturbation p_t is deemed acceptable if it satisfies the temporally-coupled constraint ε̄: ‖(s_t − s̃_t) − (s_{t+1} − s̃_{t+1})‖ ≤ ε̄, where s̃_t and s̃_{t+1} are the perturbed states obtained by adding p_t and p_{t+1} to s_t and s_{t+1}, respectively. For action adversaries, the temporally-coupled constraint ε̄ is similarly denoted as ‖(a_t − ã_t) − (a_{t+1} − ã_{t+1})‖ ≤ ε̄, where ã_t and ã_{t+1} are the perturbed actions.
When an adversary is subjected to both of these constraints, it is referred to as a temporally-coupled adversary in this paper. For a temporally-coupled adversary, each timestep's perturbation is restricted within a certain range ε, similar to other regular adversarial attacks. However, it is further confined within a smaller range ε̄ around the previous timestep's perturbation. This design offers two significant benefits.
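To illustrate how Definitions 1 and 2 interact, below is a minimal numpy sketch of our own (using the ℓ∞ norm for concreteness; the threat model allows general ℓ_p norms) that projects a candidate perturbation onto the intersection of the ε ball and the ε̄ neighborhood of the previous perturbation:

```python
import numpy as np

def project_temporally_coupled(p_t, p_prev, eps, eps_bar):
    """Elementwise projection of a candidate perturbation p_t onto the
    intersection of (i) the l-infinity ball ||p_t|| <= eps (Definition 1) and
    (ii) the temporally-coupled box ||p_t - p_prev|| <= eps_bar (Definition 2).
    Both sets are axis-aligned boxes (assuming p_prev is itself admissible),
    so their intersection is a box and clipping is the exact projection."""
    lo = np.maximum(-eps, p_prev - eps_bar)
    hi = np.minimum(eps, p_prev + eps_bar)
    return np.clip(p_t, lo, hi)

# Example: with eps = 0.1 and eps_bar = 0.02, the next perturbation can move
# at most 0.02 per dimension away from the previous one.
p_prev = np.array([0.08, -0.05])
p_next = project_temporally_coupled(np.array([-0.1, 0.1]), p_prev, 0.1, 0.02)
```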
Firstly, it enables the adversary to consider the temporal coupling between perturbations over time. By constraining the perturbations to a smaller range and discouraging drastic changes in direction, the adversary can launch continuous and stronger attacks while preserving a certain degree of stability. Intuitively, if the adversary consistently attacks in one direction, it can be more challenging for the victim to preserve balance and defend effectively compared to when the perturbations alternate between the left and right directions.
Secondly, the temporally-coupled constraint also enables the adversary to efficiently discover the optimal attack strategy by narrowing down the range of choices for each timestep's perturbation. Reducing the search space does not necessarily weaken the adversary; in fact, it can potentially make the adversary stronger if the optimal attack lies within the temporally-determined search space, which is supported by our empirical results. By constraining the adversary to a more focused exploration of attack strategies, the temporally-coupled constraint facilitates the discovery and exploitation of more effective and targeted adversarial tactics that exhibit less variation at consecutive timesteps. This characteristic enhances the adversary's ability to launch consistent and potent attacks.
Practically, it is crucial to carefully determine ε̄ to guarantee that this additional temporally-coupled constraint does not impede the performance of attacks but rather amplifies their effectiveness. Different choices of ε̄ are evaluated in our empirical studies, highlighting the benefits this constraint brings to adversarial learning. By leveraging such a temporally-coupled adversary, we propose a novel approach for robust training that enhances the agent's robustness. The detailed advantages of this approach will be elaborated in the following section.
GRAD: Game-theoretic Response approach for Adversarial Defense
Existing works primarily focus on the non-temporally-coupled assumption and thus may not be suitable for many real-world scenarios. By combining a game-theoretic framework with a temporally-coupled adversary, our robust RL approach offers a more generalized solution that covers both temporally-coupled and standard non-temporally-coupled settings.
In our Game-theoretic Response approach for Adversarial Defense (GRAD) framework, a modification of PSRO [28], an agent and a temporally-coupled adversary are trained as part of a two-player game. They play against each other and update their policies in response to each other's policies. The adversary is modeled as a separate agent who attempts to maximize the impact of attacks on the original agent's performance and whose action space is constrained by both ε and ε̄. Note that existing robust RL approaches such as [32] heavily rely on the ε-budget assumption, while temporally-coupled constraints or other types of attack constraints are not considered or addressed. In contrast, GRAD naturally considers both the traditional ε-budget constraint and the new temporally-coupled constraint when calculating the best response. Meanwhile, the original agent's objective function is based on the reward obtained from the environment, taking into account the perturbations imposed by the adversary. The process continues until an approximate equilibrium is reached, at which point the original agent is considered to be robust to the attacks learned by the adversary. We show our full algorithm in Algorithm 2.
For different types of attackers, the agent generates different trajectories while training against a fixed attacker. If the attacker only targets the state, then the agent's training data will consist of the altered states ŝ obtained by adding the perturbations from the fixed attacker. If the attacker targets the agent's action, the agent's policy output a will be altered into â by the attacker, even if the agent receives the correct state s during training. However, this action alteration may not be detectable in the trajectories collected by the agent. As for the adversary's training, after defining the adversary's attack method and policy model, the adversary applies attacks to the fixed agent and collects the original state, along with the negative of the agent's reward −r, to train the adversary.
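As a concrete, hypothetical illustration of how the agent's batch D_{π_a} can be assembled against a fixed state adversary (cf. Algorithm 2 below), consider the following sketch; `env`, `agent`, and `fixed_adversary` are assumed stand-ins for the environment, the agent's policy, and a trained attacker:

```python
def collect_agent_batch(env, agent, fixed_adversary, steps=2048):
    """Collect (s_hat, a, r, s_hat') tuples for training the agent against a
    fixed state adversary: the agent only ever sees perturbed observations,
    while the true dynamics run on the unperturbed state. Episode boundaries
    are glossed over for brevity."""
    batch = []
    s = env.reset()
    s_hat = fixed_adversary.perturb(s)        # what the agent observes
    for _ in range(steps):
        a = agent.act(s_hat)
        s_next, r, done, _ = env.step(a)      # environment uses the real action
        s_next = env.reset() if done else s_next
        s_hat_next = fixed_adversary.perturb(s_next)
        batch.append((s_hat, a, r, s_hat_next))
        s, s_hat = s_next, s_hat_next
    return batch
```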
Algorithm 2 Game-theoretic Response approach for Adversarial Defense (GRAD)
Input: Initial policy sets for the agent and adversary Π := {Π_a, Π_v}
Compute expected utilities as an empirical payoff matrix U^Π for each joint π := {π_a, π_v} ∈ Π
Compute meta-Nash equilibrium σ_a and σ_v over policy sets (Π_a, Π_v)
for epoch in {1, 2, . . .} do
    for many iterations N_{π_a} do
        Sample the adversary policy π_v ∼ σ_v
        Train π_a with trajectories against the fixed adversary π_v:
            D_{π_a} := {(ŝ^k_t, a^k_t, r^k_t, ŝ^k_{t+1})}^B_{k=1}
        (when the fixed adversary only attacks the action space, ŝ_t = s_t)
    end for
    Π_a = Π_a ∪ {π_a}
    for many iterations N_{π_v} do
        Sample the agent policy π_a ∼ σ_a
        Train the adversary policy π_v with trajectories:
            D_{π_v} := {(s^k_t, ā^k_t, −r^k_t, s^k_{t+1})}^B_{k=1}
        (π_v applies attacks to the fixed victim agent π_a based on ā_t using different methods)
    end for
    Π_v = Π_v ∪ {π_v}
    Compute missing entries in U^Π from Π
    Compute new meta-strategies σ_a and σ_v from U^Π
end for
Return: current meta-Nash equilibrium over the whole population, σ_a and σ_v

While both GRAD and ATLA [73] require training an adversary alongside the agent using RL, there is a key difference in their training approaches. In GRAD, the agent and the adversary each maintain a policy set. During each training epoch, the agent aims to find an approximate best response to the fixed adversary, and vice versa for the adversary. This iterative process promotes the emergence of stable and robust policies. After each epoch, the trained policies are added to the respective policy sets. GRAD has the capability to continuously explore and learn new policies that are not present in the current policy set, thereby enabling ongoing improvement for both the agent and the adversary, which allows for a more thorough exploration of the policy space. In contrast, ATLA employs a limited number of iterations to train each agent in each round, which is not sufficient to allow the agent and adversary to find each other's best response within the policy space. It is also worth noting that the original ATLA utilizes standard attack methods to train the adversary. However, several experimental observations indicate that agents trained with non-temporally-coupled adversaries tend to exhibit a conservative and overfitted behavior towards specific types of adversaries. In the next section, we empirically demonstrate that our approach exhibits superior and comprehensive robustness, which is capable of adapting to various attack scenarios and effectively countering different types of adversaries.
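To make the control flow of Algorithm 2 concrete, here is a minimal, self-contained sketch of the PSRO-style meta-loop (our own simplification; `best_response_agent`, `best_response_adv`, `evaluate`, `init_agent`, and `init_adv` are hypothetical stand-ins for PPO best-response training and rollout evaluation):

```python
import numpy as np

def solve_meta_nash(payoff, iters=2000):
    """Approximate the meta-Nash of a zero-sum matrix game (rows = agent,
    cols = adversary, entries = agent reward) with fictitious play."""
    n, m = payoff.shape
    row_counts, col_counts = np.ones(n), np.ones(m)
    for _ in range(iters):
        row_counts[np.argmax(payoff @ (col_counts / col_counts.sum()))] += 1.0
        col_counts[np.argmin((row_counts / row_counts.sum()) @ payoff)] += 1.0
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

def grad_meta_loop(best_response_agent, best_response_adv, evaluate,
                   init_agent, init_adv, epochs=5):
    """Skeleton of Algorithm 2: alternately train approximate best responses
    to the opponent's meta-Nash mixture and grow both populations."""
    agents, advs = [init_agent()], [init_adv()]
    payoff = np.array([[evaluate(agents[0], advs[0])]])
    for _ in range(epochs):
        sigma_a, sigma_v = solve_meta_nash(payoff)
        # Compute both best responses against the current mixtures before
        # either population grows, so the mixture weights stay consistent.
        new_agent = best_response_agent(advs, sigma_v)
        new_adv = best_response_adv(agents, sigma_a)
        agents.append(new_agent)
        advs.append(new_adv)
        # Refill the empirical payoff matrix U^Pi (only the new row/column is
        # actually missing; recomputed in full here for brevity).
        payoff = np.array([[evaluate(pa, pv) for pv in advs] for pa in agents])
    return solve_meta_nash(payoff), agents, advs
```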
Experiments
This paper introduces a novel concept called temporally-coupled attacks, which distinguishes itself from standard adversarial attacks by incorporating temporal coupling. Previous research has primarily focused on attackers with different functionalities, specifically targeting either the state space or the action space. In our experimental setup, we investigate three types of attackers: those specialized in perturbing the state space, those focused on perturbing the action space, and those capable of adaptably targeting both spaces. Within each attack domain, we consider both standard attackers without temporal coupling and temporally-coupled attackers. By employing this diverse set of attackers, we conduct a comprehensive evaluation of the robustness of our proposed method, GRAD, in comparison to existing robust baselines. This evaluation sheds light on the effectiveness of GRAD across a wide range of attack scenarios and highlights its robustness against different types of adversaries.
Experiment setup. Our experiments are conducted on five varied and challenging MuJoCo environments: Hopper, Walker2d, Halfcheetah, Ant, and Humanoid, all using the v2 version of MuJoCo. We use the Proximal Policy Optimization (PPO) algorithm as the policy optimizer and a Long Short-Term Memory (LSTM) network as the policy network for all of the robust training methods we evaluate. To maintain methodological consistency and minimize potential discrepancies arising from different PPO implementations across methods, we ensure highly similar benchmark results. For the attack constraint ε, we use the commonly adopted values for each environment. For the temporally-coupled constraint ε̄, we set the optimal maximum ε̄ as ε/5 (with minor adjustments in some environments). Other choices of ε̄ will be further discussed in the ablation studies.
In terms of evaluation metrics, we report the average test episodic rewards both under no attack and against the strongest adversarial attacks to reflect both the natural performance and robustness of trained agents, by training adversaries targeting the trained agents from scratch. For reproducibility, we train each agent configuration with 10 seeds and report the one with the median robust performance, rather than the best one. More implementation details are in Appendix B.1.
Baselines. We compare our approach GRAD with other robust RL baselines in this paper. Robust training frameworks can be categorized into two types. The first type requires training with a specified adversary, such as the alternating training framework (ATLA [73]) and GRAD. The second type does not require training with an adversary, such as WocaR-PPO [32] and AR-PPO (the PPO variant of AR-DDPG [66]). The baselines we chose demonstrate state-of-the-art or strong robustness in prior works. The first type of approach requires training agents with adversaries targeting specific attack domains, while the second type can be evaluated directly for robustness without the need for additional adversary training.
Case I: Against attacks on state space
In this experiment, our focus is on evaluating the robustness of our methods against state adversaries that perturb the states received by the agent. Among the ATLA [73; 64] methods, PA-ATLA-PPO, which trains with the strongest standard attacker, PA-AD, is the most robust. As a modification, we train PA-ATLA-PPO* with a temporally-coupled PA-AD attacker. For a more intuitive and fair comparison, in Table 1 we only present the rewards of the best-performing ATLA agents under the type of attacks they were trained with. Our GRAD method utilizes the temporally-coupled PA-AD attacker for training. We report the lowest rewards achieved under both standard and temporally-coupled state attacks among the six existing strongest adversaries to present the robustness of robust models.
In the absence of any attacks, GRAD maintains a competitive natural reward, which indicates that the agent's performance does not degrade significantly in an environment with no adversary after approaching an approximate Nash equilibrium with the adversary. Even without training with regular attackers, our method demonstrates significantly better robustness under the non-temporally-coupled type of attack, particularly in the highest-dimensional and most challenging environment, Humanoid, where it outperforms other methods by a large margin. Under our proposed temporally-coupled attacks, the average performance of our approach surpasses the state-of-the-art by up to 45%, highlighting the strong robustness of the policies learned by GRAD against all types of state adversarial attacks.
Case II: Against attacks on action space
In addition to state attacks, we assess the robustness of our methods against action adversaries that perturb the actions taken by the agent. Action perturbations have been extensively studied in the context of model uncertainty in control. In this attack domain, we are the first to train an RL-based action adversary using the trajectory outlined in Algorithm 2, aiming to showcase the worst-case performance of our robust agents under action perturbations.

Table 2: Average episode rewards ± standard error over 100 episodes for action-robust baselines and our GRAD under no attack and the best action attacks.
Among our baselines, we include AR-PPO, although it is not robust against strong action adversaries and performs well only under random noise. Another modification we made is AC-ATLA-PPO, where we train the agent alternately with the aforementioned action adversary. Similar to PA-ATLA-PPO*, we also train AC-ATLA-PPO* agents with a temporally-coupled action adversary, which is also utilized to train our GRAD agents. Since RL-based action adversaries lead to a more significant drop in rewards compared to action noise, we report the best action attack rewards achieved by the robust agents under the strongest trained action attacks for robustness evaluation.
In general, while action perturbations may not cause as much damage as state perturbations, our GRAD method still achieves superior robustness. In terms of natural reward, GRAD performs comparably with other baselines. While the advantage of GRAD may not be apparent or significant under standard action attacks in less challenging environments, it surpasses other methods by more than 10% on Ant and Humanoid. Under temporally-coupled action attacks, GRAD consistently outperforms the most robust baseline by an average of over 20%, exhibiting particularly exceptional robustness on Humanoid. These results demonstrate the effective defense of GRAD against different types of adversarial attacks in the action space.
Case III: Against attacks on either state or action spaces
In prior works, adversarial attacks typically focus on perturbing either the agent's observations or introducing noise to the action space. However, in real-world scenarios, agents may encounter both types of attacks. To address this challenge, we propose an adversary called the State or Action Adversary (SA-AD), which allows the adversary to choose between attacking the agent's state or action at each timestep, integrating this choice into the adversary's action space. The SA-AD attacker, inspired by the PA-AD attacker [64], only needs to learn the best policy perturbation directions, which can be transformed into state or action perturbations according to the adversary's choices, while maintaining manageable training complexity. For further details on the SA-AD attacker, please refer to Appendix B.2. Similar to the previous experiments, we train SA-ATLA-PPO with the standard SA attacker, while SA-ATLA-PPO* and GRAD are trained with temporally-coupled SA attackers.
Our experimental results demonstrate that GRAD obtains similar natural rewards compared to the ATLA baselines, which is consistent with the findings from previous experiments. Under SA attacks, our findings indicate that the combination of two different forms of attacks can effectively target robust agents in most scenarios, making these attacks a strong test of robustness.
In the case of regular SA attackers, GRAD outperforms other methods in all five environments, with a margin of over 20% in the Humanoid environment. Moreover, when defending against temporally-coupled attacks, GRAD significantly enhances robustness by more than 30% in multiple environments, with a minimum improvement of 10%. These results clearly demonstrate the robustness of GRAD against attackers that can target different domains.
Summary. We calculate the average normalized rewards for each evaluation metric and each robust agent across all the environments, as shown in Figure 3. This visualization vividly showcases that GRAD demonstrates notably superior robustness under both standard and temporally-coupled attacks in comparison to other approaches. Overall, these findings emphasize the empirical potential and contributions of GRAD and provide intuitive insights into improving the robustness of agents through a novel and convincing evaluation framework for robust RL.

Table 3: Average episode rewards ± standard error over 100 episodes for adaptable adversarial defense baselines and our GRAD.

Figure 3: Histograms of normalized average rewards for GRAD and baselines across five environments, which mitigate the impact of varying reward ranges across environments. Each bar represents the distribution of rewards obtained by robust agents, as presented in Tables 1, 2, and 3.
Ablation studies for the temporally-coupled constraint ε̄. As defined in our framework, the temporally-coupled constraint ε̄ limits how much the perturbation can vary between timesteps. When ε̄ is set too large, the constraint becomes ineffective, resembling a standard attacker. Conversely, setting ε̄ close to zero overly restricts perturbations, leading to a decline in attack performance. An appropriate value for ε̄ is critical for effective temporally-coupled attacks. Figure 4 illustrates the performance of robust models against temporally-coupled state attackers trained with different maximum ε̄. For WocaR-PPO, the temporally-coupled attacker achieves optimal attack performance when ε̄ is set to 0.02. As the ε̄ values increase and the temporally-coupled constraint weakens, the agent's performance improves, indicating a decrease in the adversary's attack effectiveness. In the case of GRAD agents, they consistently maintain robust performance as the ε̄ values become larger. This observation highlights the impact of temporal coupling on the vulnerability of robust baselines to such attacks. In contrast, GRAD agents consistently demonstrate robustness against these attacks.
Conclusion and Discussion
In this paper, we introduce a novel attack model to challenge deep RL models, based on a temporally-coupled constraint that can naturally arise in real life. Since existing robust RL methods usually focus on a traditional threat model that perturbs state observations or actions arbitrarily within an ℓ_p norm ball, they become too conservative and can fail to mount a good defense under temporally-coupled attacks. In contrast, we propose a game-theoretic response approach, GRAD, which finds the best response against attacks with various constraints, including temporally-coupled ones. Extensive experiments in continuous control tasks show that GRAD significantly outperforms prior robust RL methods under both traditional attack models and the new temporally-coupled attacks, in either state or action spaces.
GRAD is based on the PSRO paradigm, which is shown to be effective and theoretically grounded in finding Nash equilibria in two-player zero-sum games. The current PSRO-based approach requires multiple iterations to converge to the best response, which can be a limitation when applied to high-dimensional tasks with limited computation resources. Extensions to other game-theoretic RL approaches could mitigate this issue and can be investigated in future work.
A Additional Related Work
A.1 Game-Theoretic Reinforcement Learning
Superhuman performance in two-player games usually involves two components: the first focuses on finding a model-free blueprint strategy, which is the setting we focus on in this paper. The second component improves this blueprint online via model-based subgame solving and search [10; 47; 9; 3; 7; 55]. This combination of blueprint strategies with subgame solving has led to state-of-the-art performance in Go [58], Poker [6; 8; 46], Diplomacy [19], and The Resistance: Avalon [56]. Methods that only use a blueprint have achieved state-of-the-art performance on Starcraft [69], Gran Turismo [71], DouDizhu [72], Mahjong [31], and Stratego [43; 52]. In the rest of this section, we focus on other model-free methods for finding blueprints.
Deep CFR [5; 61] is a general method that trains a neural network on a buffer of counterfactual values. However, Deep CFR uses external sampling, which may be impractical for games with a large branching factor, such as Stratego and Barrage Stratego. DREAM [62] and ARMAC [20] are model-free regret-based deep learning approaches. ReCFR [34] proposes a bootstrap method for estimating cumulative regrets with neural networks. ESCHER [40] removes the importance sampling term of Deep CFR and shows that doing so allows scaling to large games. Neural Fictitious Self-Play (NFSP) [22] approximates fictitious play by progressively training a best response against an average of all past opponent policies using reinforcement learning. The average policy converges to an approximate Nash equilibrium in two-player zero-sum games.
There is an emerging literature connecting reinforcement learning to game theory. QPG [60] shows that state-conditioned Q-values are related to counterfactual values by a reach-weighted term summed over all histories in an infostate, and proposes an actor-critic algorithm that empirically converges to a NE when the learning rate is annealed. NeuRD [23] and F-FoReL [53] approximate replicator dynamics and follow-the-regularized-leader, respectively, with policy gradients. Actor Critic Hedge (ACH) [14] is similar to NeuRD but uses an information-set-based value function. None of these policy-gradient methods has theory proving that they converge with high probability in extensive-form games when sampling trajectories from the policy. In practice, they often perform worse than NFSP and DREAM on small games but remain promising approaches for scaling to large games [52].
B Experiment Details and Additional Results
B.1 Implementation details
We provide detailed implementation information for our proposed method (GRAD) and baselines.
Training Steps. For GRAD, we specify the number of training steps required for different environments. In the Hopper, Walker2d, and Halfcheetah environments, we train for 10 million steps. In the Ant and Humanoid environments, we extend the training duration to 20 million steps. For the ATLA baselines, we train for 2 million steps and 10 million steps in environments of varying difficulty.
Network Structure. Our algorithm (GRAD) adopts the same PPO network structure as the baselines to maintain consistency. The network comprises a single-layer LSTM with 64 hidden neurons. Additionally, an input embedding layer is employed to project the state dimension to 64, and an output layer is used to project 64 to the output dimension. Both the agents and the adversaries use the same policy and value networks to facilitate training and evaluation. Furthermore, the network architecture for the best response and meta-Nash remains consistent with the aforementioned configuration.
Schedule of ε and ε̄. During the training process, we gradually increase the values of ε and ε̄ from 0 to their respective target maximum values. This incremental adjustment occurs over the first half of the training steps. We reference the attack budgets used in other baselines for the corresponding environments, which ensures consistency and allows for a fair comparison with existing methods. The target value of ε̄ is determined based on the adversary's training results and is set as ε/5. In some smaller-dimensional environments, ε̄ can be set to ε/10. We have observed that the final performance of the trained robust models does not differ by more than 5% when using these values for ε̄.
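A minimal sketch of this ramp (our own illustration; the paper only specifies that the budgets increase over the first half of training, so the linear shape is an assumption):

```python
def budget_schedule(step, total_steps, eps_max, eps_bar_max):
    """Linearly ramp both attack budgets from 0 to their maxima over the
    first half of training, then hold them constant."""
    frac = min(1.0, step / (0.5 * total_steps))
    return frac * eps_max, frac * eps_bar_max
```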
Training Time The training time for GRAD varies based on the specific environment and its associated difficulty. On a single V100 GPU, training GRAD typically requires over 20 hours for the Hopper, Walker2d, and Halfcheetah environments. For the more complex Ant and Humanoid environments, the training duration extends to approximately 40 hours. The training time required for defense against state adversaries or action adversaries is relatively similar.
Observation and Reward Normalization. To ensure consistency with the PPO implementation and maintain comparability across different codebases, we apply observation and reward normalization. Normalization standardizes the input observations and rewards, enhancing the stability and convergence of the training process. We have verified the performance of vanilla PPO on different implementations, and the results align closely with our implementation of GRAD based on Ray RLlib.
Hyperparameter Selection. Hyperparameters such as the learning rate, entropy bonus coefficient, and other PPO-specific parameters are crucial for achieving optimal performance. Using the results obtained from vanilla PPO and the ATLA baselines as references, a small-scale grid search is conducted to fine-tune the hyperparameters specific to GRAD. Because of the significant training time and cost associated with GRAD, we initially perform a simplified parameter selection using the Inverted Pendulum as a test environment.
B.2 Adversaries in experiments
State Adversaries. Here we introduce the attack methods utilized during training and testing in our experiments. When it comes to state adversaries, PA-AD (Algorithm 3) stands out as the strongest attack compared to other state attacks. Therefore, we report the best state attack rewards under PA-AD attacks.
Action Adversaries. In terms of action adversaries, an RL-based action adversary (Algorithm 4) can inflict more severe damage on agents' rewards compared to OU noise and parameter noise in [66].
Adaptable Adversaries. For adaptable adversaries capable of perturbing both state and action spaces, considering the attack budget and cost, we prefer not to allow the adversary to perturb both spaces simultaneously at one timestep. Hence, it is necessary for the adversary to decide which space to perturb at each timestep. In Algorithm 5, we introduce an additional dimension θ_t ∈ [−1, 1] in the adversary's action space to determine the attack domain. If θ_t ≥ 0, the adversary perturbs the observation state s_t into s̃_t; otherwise, it attacks the agent's policy output action a_t, turning it into ã_t. Building upon PA-AD [64], the adversary director only needs to learn â_t, which is composed of the policy perturbation direction d̂_t concatenated with θ_t. Depending on the adversary's choice θ_t, different actors will craft state or action perturbations for a given policy perturbation direction d̂. This means that the SA-AD attacker only requires an additional dimension for domain choice compared to the PA-AD attacker, without significantly increasing the complexity of adversary training, thereby minimizing the impact on adversary performance. We show the adaptable attack method in Algorithm 5.
Algorithm 3 Policy Adversarial Actor Director (PA-AD)
Input: Initialization of the adversary director's policy ν; victim policy π; the actor function g for the state space S; initial state s_0
for t = 0, 1, 2, . . . do
    Director ν samples a policy perturbing direction and perturbed choice â_t ∼ ν(·|s_t)
    Actor perturbs s_t into s̃_t = g(â_t, s_t)
    Victim takes action a_t ∼ π(·|s̃_t), proceeds to s_{t+1}, receives r_t
    Director saves (s_t, â_t, −r_t, s_{t+1}) to the adversary buffer
    Director updates its policy ν using any RL algorithm
end for
B.3 Attack budgets
In Figure 5, we report the performance of baselines and GRAD under different attack budgets ε. As the value of ε increases, the rewards of robust agents under different types of attacks decrease accordingly. However, our approach consistently demonstrates superior robustness as the attack budget changes.
Algorithm 4 Action Adversary (AC-AD)
Input: Initialization of the action adversary policy ν; victim policy π; initial state s_0
for t = 0, 1, 2, . . . do
    Adversary ν samples an action perturbation â_t ∼ ν(·|s_t); victim policy π outputs action a_t ∼ π(·|s_t)
    The environment receives ã_t = a_t + â_t, returns s_{t+1} and r_t
    Adversary saves (s_t, â_t, −r_t, s_{t+1}) to the adversary buffer
    Adversary updates its policy ν
end for

Algorithm 5 State or Action Adversary (SA-AD)
Input: Initialization of the adversary director's policy ν; victim policy π; the actor functions g_s for the state space S and g_a for the action space A; initial state s_0
for t = 0, 1, 2, . . . do
    Director ν samples a policy perturbing direction and perturbed choice â_t ∼ ν(·|s_t), where â_t = (d̂_t, θ_t), θ_t ∈ [−1, 1]
    if θ_t ≥ 0 then
        Actor perturbs s_t into s̃_t = g_s(d̂_t, s_t)
        Victim takes action a_t ∼ π(·|s̃_t), proceeds to s_{t+1}, receives r_t
    else
        Victim policy outputs action a_t ∼ π(·|s_t)
        Actor perturbs a_t into ã_t = g_a(d̂_t, a_t)
        The environment receives ã_t, returns s_{t+1} and r_t
    end if
    Director saves (s_t, â_t, −r_t, s_{t+1}) to the adversary buffer
    Director updates its policy ν using any RL algorithm
end for
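The θ-gated branching in Algorithm 5 can be expressed compactly as follows (a sketch under assumed callables: `victim_policy`, and actor functions `g_s` and `g_a` mapping a perturbing direction to a concrete state or action perturbation):

```python
def sa_ad_step(s_t, director_action, victim_policy, g_s, g_a):
    """One SA-AD step: the last dimension theta of the director's action
    selects the attack domain; the rest is the perturbing direction d_hat."""
    d_hat, theta = director_action[:-1], director_action[-1]
    if theta >= 0:                      # perturb the observation
        s_tilde = g_s(d_hat, s_t)
        return victim_policy(s_tilde)   # victim acts on the perturbed state
    a_t = victim_policy(s_t)            # perturb the executed action
    return g_a(d_hat, a_t)              # environment receives the altered action
```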
B.4 Temporally-coupled constraints
We also investigate the impact of the temporally-coupled constraint ε̄ on attack performance, as explained in our experiment section (see Figure 6).
Figure 4: Ablation studies for ε̄.
Figure 5: Comparisons under state, action, or adaptable temporally-coupled attacks w.r.t. diverse attack budgets ε on Hopper and Humanoid.
Figure 6: Comparisons under state, action, or adaptable temporally-coupled attacks with diverse temporally-coupled constraints ε̄ on Ant and Humanoid.
Table 1: Average episode rewards ± standard error over 100 episodes for three state-robust baselines and our GRAD. Bold numbers indicate the best results under different types of attacks on state spaces. The gray rows are the most robust agents.
Acknowledgements

We thank Ben Eysenbach for helpful conversations. This material is based upon work supported by the National Science
References

Sarah Bechtle, Yixin Lin, Akshara Rai, Ludovic Righetti, and Franziska Meier. Curious iLQR: Resolving uncertainty in model-based RL. In Leslie Pack Kaelbling, Danica Kragic, and Komei Sugiura, editors, Proceedings of the Conference on Robot Learning, volume 100 of Proceedings of Machine Learning Research, pages 162-171. PMLR, 2020.
Vahid Behzadan and Arslan Munir. Whatever does not kill deep reinforcement learning, makes it stronger. CoRR, abs/1712.09344, 2017.
Noam Brown, Anton Bakhtin, Adam Lerer, and Qucheng Gong. Combining deep reinforcement learning and search for imperfect-information games. Advances in Neural Information Processing Systems, 33:17057-17069, 2020.
Noam Brown, Adam Lerer, Sam Gross, and Tuomas Sandholm. Deep counterfactual regret minimization. In International Conference on Machine Learning, pages 793-802. PMLR, 2019.
Noam Brown, Adam Lerer, Sam Gross, and Tuomas Sandholm. Deep counterfactual regret minimization. In International Conference on Machine Learning, pages 793-802, 2019.
Noam Brown and Tuomas Sandholm. Libratus: The superhuman AI for no-limit poker. In IJCAI, pages 5226-5228, 2017.
Noam Brown and Tuomas Sandholm. Safe and nested subgame solving for imperfect-information games. Advances in Neural Information Processing Systems, 30, 2017.
Noam Brown and Tuomas Sandholm. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science, 359(6374):418-424, 2018.
Noam Brown, Tuomas Sandholm, and Brandon Amos. Depth-limited solving for imperfect-information games. Advances in Neural Information Processing Systems, 31, 2018.
Neil Burch, Michael Johanson, and Michael Bowling. Solving imperfect information games using decomposition. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
Xidong Feng, Oliver Slumbers, Yaodong Yang, Ziyu Wan, Bo Liu, Stephen McAleer, Ying Wen, and Jun Wang. Discovering multi-agent auto-curricula in two-player zero-sum games. Advances in Neural Information Processing Systems (NeurIPS), 2021.
Marc Fischer, Matthew Mirman, Steven Stalder, and Martin T. Vechev. Online robustness training for deep reinforcement learning. CoRR, abs/1911.00887, 2019.
Tim Franzmeyer, Stephen McAleer, João F. Henriques, Jakob N. Foerster, Philip H. S. Torr, Adel Bibi, and Christian Schroeder de Witt. Illusory attacks: Detectability matters in adversarial attacks on sequential decision-makers. arXiv preprint arXiv:2207.10170v2, 2022.
Haobo Fu, Weiming Liu, Shuang Wu, Yijia Wang, Tao Yang, Kai Li, Junliang Xing, Bin Li, Bo Ma, Qiang Fu, and Yang Wei. Actor-critic policy optimization in a large-scale imperfect-information game. In Proceedings of the Tenth International Conference on Learning Representations (ICLR), 2022.
Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437-1480, 2015.
Chris Gaskett. Reinforcement learning under circumstances beyond its control. In International Conference on Computational Intelligence for Modelling Control and Automation, 2003.
Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, and Stuart Russell. Adversarial policies: Attacking deep reinforcement learning. In International Conference on Learning Representations, 2020.
Vineet Goyal and Julien Grand-Clement. Robust Markov decision processes: Beyond rectangularity. Mathematics of Operations Research, 48(1):203-226, 2023.
Jonathan Gray, Adam Lerer, Anton Bakhtin, and Noam Brown. Human-level performance in no-press diplomacy via equilibrium search. In International Conference on Learning Representations, 2020.
Audrūnas Gruslys, Marc Lanctot, Rémi Munos, Finbarr Timbers, Martin Schmid, Julien Perolat, Dustin Morrill, Vinicius Zambaldi, Jean-Baptiste Lespiau, John Schultz, et al. The advantage regret-matching actor-critic. arXiv preprint arXiv:2008.12234, 2020.
Matthias Heger. Consideration of risk in reinforcement learning. In International Conference on Machine Learning, 1994.
Johannes Heinrich and David Silver. Deep reinforcement learning from self-play in imperfect-information games. arXiv preprint arXiv:1603.01121, 2016.
Daniel Hennes, Dustin Morrill, Shayegan Omidshafiei, Rémi Munos, Julien Perolat, Marc Lanctot, Audrunas Gruslys, Jean-Baptiste Lespiau, Paavo Parmas, Edgar Duéñez-Guzmán, et al. Neural replicator dynamics: Multiagent learning via hedging policy gradients. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, pages 492-501, 2020.
Sandy H. Huang, Nicolas Papernot, Ian J. Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. In International Conference on Learning Representations (Workshop), 2017.
Ezgi Korkmaz. Investigating vulnerabilities of deep neural policies. In Uncertainty in Artificial Intelligence, pages 1661-1670. PMLR, 2021.
Jernej Kos and Dawn Song. Delving into adversarial attacks on deep policies. In International Conference on Learning Representations (Workshop), 2017.
Aounon Kumar, Alexander Levine, and Soheil Feizi. Policy smoothing for provably robust reinforcement learning. In International Conference on Learning Representations, 2022.
Marc Lanctot, Vinicius Zambaldi, Audrunas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Pérolat, David Silver, and Thore Graepel. A unified game-theoretic approach to multiagent reinforcement learning. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
JB Lanier, Stephen McAleer, Pierre Baldi, and Roy Fox. Feasible adversarial robust reinforcement learning for underspecified environments. arXiv preprint arXiv:2207.09597, 2022.
Xian Yeow Lee, Yasaman Esfandiari, Kai Liang Tan, and Soumik Sarkar. Query-based targeted action-space adversarial policies on deep reinforcement learning agents. In Proceedings of the ACM/IEEE 12th International Conference on Cyber-Physical Systems (ICCPS '21), pages 87-97, New York, NY, USA, 2021. Association for Computing Machinery.
Junjie Li, Sotetsu Koyamada, Qiwei Ye, Guoqing Liu, Chao Wang, Ruihan Yang, Li Zhao, Tao Qin, Tie-Yan Liu, and Hsiao-Wuen Hon. Suphx: Mastering mahjong with deep reinforcement learning. arXiv preprint arXiv:2003.13590, 2020.
Yongyuan Liang, Yanchao Sun, Ruijie Zheng, and Furong Huang. Efficient adversarial training without attacking: Worst-case-aware robust reinforcement learning. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.
Shiau Hong Lim, Huan Xu, and Shie Mannor. Reinforcement learning in robust Markov decision processes. Advances in Neural Information Processing Systems, 26:701-709, 2013.
Weiming Liu, Bin Li, and Julian Togelius. Model-free neural counterfactual regret minimization with bootstrap learning. IEEE Transactions on Games, 2022.
Björn Lütjens, Michael Everett, and Jonathan P. How. Certified adversarial robustness for deep reinforcement learning. In Conference on Robot Learning, pages 1328-1337. PMLR, 2020.
Ajay Mandlekar, Yuke Zhu, Animesh Garg, Li Fei-Fei, and Silvio Savarese. Adversarially robust policy learning: Active construction of physically-plausible perturbations. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3932-3939. IEEE, 2017.
Daniel J. Mankowitz, Nir Levine, Rae Jeong, Abbas Abdolmaleki, Jost Tobias Springenberg, Yuanyuan Shi, Jackie Kay, Todd Hester, Timothy Mann, and Martin Riedmiller. Robust reinforcement learning for continuous control with model misspecification. In International Conference on Learning Representations, 2020.
Shie Mannor, Ofir Mebel, and Huan Xu. Lightning does not strike twice: Robust MDPs with coupled uncertainty. arXiv preprint arXiv:1206.4643, 2012.
Shie Mannor, Ofir Mebel, and Huan Xu. Robust MDPs with k-rectangular uncertainty. Mathematics of Operations Research, 41(4):1484-1509, 2016.
Stephen McAleer, Gabriele Farina, Marc Lanctot, and Tuomas Sandholm. ESCHER: Eschewing importance sampling in games by computing a history value function to estimate regret. In International Conference on Learning Representations, 2023.
Stephen McAleer, JB Lanier, Kevin Wang, Pierre Baldi, Roy Fox, and Tuomas Sandholm. Self-play PSRO: Toward optimal populations in two-player zero-sum games. arXiv preprint arXiv:2207.06541, 2022.
Stephen McAleer, John Lanier, Pierre Baldi, and Roy Fox. XDO: A double oracle algorithm for extensive-form games. Advances in Neural Information Processing Systems (NeurIPS), 2021.
Stephen McAleer, John Lanier, Roy Fox, and Pierre Baldi. Pipeline PSRO: A scalable approach for finding approximate Nash equilibria in large games. In Advances in Neural Information Processing Systems, 2020.
Stephen McAleer, Kevin Wang, Marc Lanctot, John Lanier, Pierre Baldi, and Roy Fox. Anytime optimal PSRO for two-player zero-sum games. arXiv preprint arXiv:2201.07700, 2022.
H. Brendan McMahan, Geoffrey J. Gordon, and Avrim Blum. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the 20th International Conference on Machine Learning (ICML), 2003.
Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337):508-513, 2017.
Matej Moravčík, Martin Schmid, Karel Ha, Milan Hladík, and Stephen Gaukrodger. Refining subgames in large imperfect information games. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
A generalized training approach for multiagent learning. Paul Muller, Shayegan Omidshafiei, Mark Rowland, Karl Tuyls, Julien Perolat, Siqi Liu, Daniel Hennes, Luke Marris, Marc Lanctot, Edward Hughes, International Conference on Learning Representations. Paul Muller, Shayegan Omidshafiei, Mark Rowland, Karl Tuyls, Julien Perolat, Siqi Liu, Daniel Hennes, Luke Marris, Marc Lanctot, Edward Hughes, et al. A generalized training approach for multiagent learning. In International Conference on Learning Representations, 2019.
Robust deep reinforcement learning through adversarial loss. Tuomas Oikarinen, Wang Zhang, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng, Advances in Neural Information Processing Systems. A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman VaughanTuomas Oikarinen, Wang Zhang, Alexandre Megretski, Luca Daniel, and Tsui-Wei Weng. Robust deep reinforcement learning through adversarial loss. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021.
Characterizing attacks on deep reinforcement learning. Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, Mingjie Sun, Mingyan Liu, Bo Li, Dawn Song, International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS). 2022AAMASXinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, Mingjie Sun, Mingyan Liu, Bo Li, and Dawn Song. Characterizing attacks on deep reinforcement learning. In AAMAS, pages 1010-1018. International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2022.
Robust deep reinforcement learning with adversarial attacks. Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, Girish Chowdhary, International Foundation for Autonomous Agents and Multiagent Systems. Richland, SC, USAACMAAMASAnay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, and Girish Chowdhary. Robust deep reinforcement learning with adversarial attacks. In AAMAS, pages 2040-2042. International Foundation for Autonomous Agents and Multiagent Systems Richland, SC, USA / ACM, 2018.
Mastering the game of stratego with model-free multiagent reinforcement learning. Julien Perolat, Daniel Bart De Vylder, Eugene Hennes, Florian Tarassov, Strub, Paul Vincent De Boer, Jerome T Muller, Neil Connor, Thomas Burch, Anthony, arXiv:2206.15378arXiv preprintJulien Perolat, Bart de Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, Vincent de Boer, Paul Muller, Jerome T Connor, Neil Burch, Thomas Anthony, et al. Mastering the game of stratego with model-free multiagent reinforcement learning. arXiv preprint arXiv:2206.15378, 2022.
From Poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization. Julien Perolat, Remi Munos, Jean-Baptiste Lespiau, Shayegan Omidshafiei, Mark Rowland, Pedro Ortega, Neil Burch, Thomas Anthony, David Balduzzi, Bart De Vylder, International Conference on Machine Learning. PMLRJulien Perolat, Remi Munos, Jean-Baptiste Lespiau, Shayegan Omidshafiei, Mark Rowland, Pedro Ortega, Neil Burch, Thomas Anthony, David Balduzzi, Bart De Vylder, et al. From Poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization. In International Conference on Machine Learning, pages 8525-8535. PMLR, 2021.
Robust adversarial reinforcement learning. Lerrel Pinto, James Davidson, Rahul Sukthankar, Abhinav Gupta, International Conference on Machine Learning. PMLRLerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. In International Conference on Machine Learning, pages 2817-2826. PMLR, 2017.
. Martin Schmid, Matej Moravcik, Neil Burch, Rudolf Kadlec, Josh Davidson, Kevin Waugh, Nolan Bard, Finbarr Timbers, Marc Lanctot, Zach Holland, arXiv:2112.03178Player of games. arXiv preprintMartin Schmid, Matej Moravcik, Neil Burch, Rudolf Kadlec, Josh Davidson, Kevin Waugh, Nolan Bard, Finbarr Timbers, Marc Lanctot, Zach Holland, et al. Player of games. arXiv preprint arXiv:2112.03178, 2021.
Finding friend and foe in multi-agent games. Jack Serrino, Max Kleiman-Weiner, C David, Josh Parkes, Tenenbaum, Advances in Neural Information Processing Systems. 32Jack Serrino, Max Kleiman-Weiner, David C Parkes, and Josh Tenenbaum. Finding friend and foe in multi-agent games. Advances in Neural Information Processing Systems, 32, 2019.
Deep reinforcement learning with robust and smooth policy. Qianli Shen, Yan Li, Haoming Jiang, Zhaoran Wang, Tuo Zhao, International Conference on Machine Learning. PMLRQianli Shen, Yan Li, Haoming Jiang, Zhaoran Wang, and Tuo Zhao. Deep reinforcement learning with robust and smooth policy. In International Conference on Machine Learning, pages 8707-8718. PMLR, 2020.
Mastering the game of go without human knowledge. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, nature. 5507676David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. nature, 550(7676):354-359, 2017.
Ioannis Mitliagkas, Noam Brown, and Christian Kroer. A unified approach to reinforcement learning, quantal response equilibria, and two-player zero-sum games. Samuel Sokota, D' Ryan, Zico Orazio, Nicolas Kolter, Marc Loizou, Lanctot, arXiv:2206.05825arXiv preprintSamuel Sokota, Ryan D'Orazio, J Zico Kolter, Nicolas Loizou, Marc Lanctot, Ioannis Mitliagkas, Noam Brown, and Christian Kroer. A unified approach to reinforcement learning, quantal response equilibria, and two-player zero-sum games. arXiv preprint arXiv:2206.05825, 2022.
Actor-critic policy optimization in partially observable multiagent environments. Sriram Srinivasan, Marc Lanctot, Vinicius Zambaldi, Julien Pérolat, Karl Tuyls, Rémi Munos, Michael Bowling, Advances in neural information processing systems. 31Sriram Srinivasan, Marc Lanctot, Vinicius Zambaldi, Julien Pérolat, Karl Tuyls, Rémi Munos, and Michael Bowling. Actor-critic policy optimization in partially observable multiagent environments. Advances in neural information processing systems, 31, 2018.
Eric Steinberger, arXiv:1901.07621Single deep counterfactual regret minimization. arXiv preprintEric Steinberger. Single deep counterfactual regret minimization. arXiv preprint arXiv:1901.07621, 2019.
DREAM: Deep regret minimization with advantage baselines and model-free learning. Eric Steinberger, Adam Lerer, Noam Brown, arXiv:2006.10410arXiv preprintEric Steinberger, Adam Lerer, and Noam Brown. DREAM: Deep regret minimization with advantage baselines and model-free learning. arXiv preprint arXiv:2006.10410, 2020.
Certifiably robust policy learning against adversarial multi-agent communication. Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang, The Eleventh International Conference on Learning Representations. Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, and Furong Huang. Certifiably robust policy learning against adversarial multi-agent communication. In The Eleventh International Conference on Learning Representations, 2023.
Who is the strongest enemy? towards optimal and efficient evasion attacks in deep RL. Yanchao Sun, Ruijie Zheng, Yongyuan Liang, Furong Huang, International Conference on Learning Representations. Yanchao Sun, Ruijie Zheng, Yongyuan Liang, and Furong Huang. Who is the strongest enemy? towards optimal and efficient evasion attacks in deep RL. In International Conference on Learning Representations, 2022.
Robustifying reinforcement learning agents via action space adversarial training. Kai Liang Tan, Yasaman Esfandiari, Yeow Xian, Soumik Lee, Sarkar, 2020 American control conference (ACC). IEEEKai Liang Tan, Yasaman Esfandiari, Xian Yeow Lee, Soumik Sarkar, et al. Robustifying reinforcement learning agents via action space adversarial training. In 2020 American control conference (ACC), pages 3959-3964. IEEE, 2020.
Action robust reinforcement learning and applications in continuous control. Chen Tessler, Yonathan Efroni, Shie Mannor, International Conference on Machine Learning. PMLRChen Tessler, Yonathan Efroni, and Shie Mannor. Action robust reinforcement learning and applications in continuous control. In International Conference on Machine Learning, pages 6215-6224. PMLR, 2019.
Safe reinforcement learning by imagining the near future. Garrett Thomas, Yuping Luo, Tengyu Ma, Advances in Neural Information Processing Systems. A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman VaughanGarrett Thomas, Yuping Luo, and Tengyu Ma. Safe reinforcement learning by imagining the near future. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021.
Eugene Vinitsky, Yuqing Du, Kanaad Parvate, Kathy Jang, arXiv:2008.01825Pieter Abbeel, and Alexandre Bayen. Robust reinforcement learning using adversarial populations. arXiv preprintEugene Vinitsky, Yuqing Du, Kanaad Parvate, Kathy Jang, Pieter Abbeel, and Alexandre Bayen. Robust reinforcement learning using adversarial populations. arXiv preprint arXiv:2008.01825, 2020.
Grandmaster level in StarCraft II using multi-agent reinforcement learning. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, H David, Richard Choi, Timo Powell, Petko Ewalds, Georgiev, Nature. 5757782Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Jun- young Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.
CROP: certifying robust policies for reinforcement learning through functional smoothing. Fan Wu, Linyi Li, Zijian Huang, Yevgeniy Vorobeychik, Ding Zhao, Bo Li, International Conference on Learning Representations. Fan Wu, Linyi Li, Zijian Huang, Yevgeniy Vorobeychik, Ding Zhao, and Bo Li. CROP: certify- ing robust policies for reinforcement learning through functional smoothing. In International Conference on Learning Representations, 2022.
Outracing champion gran turismo drivers with deep reinforcement learning. Samuel Peter R Wurman, Kenta Barrett, James Kawamoto, Kaushik Macglashan, Subramanian, J Thomas, Roberto Walsh, Alisa Capobianco, Franziska Devlic, Florian Eckert, Fuchs, Nature. 6027896Peter R Wurman, Samuel Barrett, Kenta Kawamoto, James MacGlashan, Kaushik Subrama- nian, Thomas J Walsh, Roberto Capobianco, Alisa Devlic, Franziska Eckert, Florian Fuchs, et al. Outracing champion gran turismo drivers with deep reinforcement learning. Nature, 602(7896):223-228, 2022.
Douzero: Mastering doudizhu with self-play deep reinforcement learning. Daochen Zha, Jingru Xie, Wenye Ma, Sheng Zhang, Xiangru Lian, Xia Hu, Ji Liu, International Conference on Machine Learning. PMLRDaochen Zha, Jingru Xie, Wenye Ma, Sheng Zhang, Xiangru Lian, Xia Hu, and Ji Liu. Douzero: Mastering doudizhu with self-play deep reinforcement learning. In International Conference on Machine Learning, pages 12333-12344. PMLR, 2021.
Robust reinforcement learning on state observations with learned optimal adversary. Huan Zhang, Hongge Chen, Duane S Boning, Cho-Jui Hsieh, International Conference on Learning Representations. Huan Zhang, Hongge Chen, Duane S Boning, and Cho-Jui Hsieh. Robust reinforcement learning on state observations with learned optimal adversary. In International Conference on Learning Representations, 2021.
Robust deep reinforcement learning against adversarial perturbations on state observations. Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh, Advances in Neural Information Processing Systems. H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. LinCurran Associates, Inc33Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, and Cho- Jui Hsieh. Robust deep reinforcement learning against adversarial perturbations on state observations. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 21024-21037. Curran Associates, Inc., 2020. |
3,524,564 | THE KANERVA MACHINE: A GENERATIVE DISTRIBUTED MEMORY | We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them. Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism. The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule. We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution. Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation. Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets. Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train. | [] | THE KANERVA MACHINE: A GENERATIVE DISTRIBUTED MEMORY
Yan Wu [email protected]
Greg Wayne [email protected]
Alex Graves [email protected]
Timothy Lillicrap, DeepMind
THE KANERVA MACHINE: A GENERATIVE DISTRIBUTED MEMORY
Published as a conference paper at ICLR 2018
We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them. Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism. The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule. We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution. Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation. Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets. Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train.
INTRODUCTION
Recent work in machine learning has examined a variety of novel ways to augment neural networks with fast memory stores. However, the basic problem of how to most efficiently use memory remains an open question. For instance, the slot-based external memory in models like Differentiable Neural Computers (DNCs; Graves et al., 2016) often collapses reading and writing into single slots, even though the neural network controller can in principle learn more distributed strategies. As a result, information is not shared across memory slots, and additional slots have to be recruited for new inputs, even if they are redundant with existing memories. Similarly, Matching Networks (Vinyals et al., 2016; Bartunov & Vetrov, 2016) and the Neural Episodic Controller (Pritzel et al., 2017) directly store embeddings of data. They therefore require the volume of memory to increase with the number of samples stored. In contrast, the Neural Statistician (Edwards & Storkey, 2016) summarises a dataset by averaging over the embeddings of its samples. The resulting "statistics" are conveniently small, but a large amount of information may be dropped by the averaging process, which is at odds with the desire to have large memories that can capture details of past experience.
Historically developed associative memory architectures provide insight into how to design efficient memory structures that store data in overlapping representations. For example, the Hopfield Net (Hopfield, 1982) pioneered the idea of storing patterns in low-energy states in a dynamic system. This type of model is robust, but its capacity is limited by the number of recurrent connections, which is in turn constrained by the dimensionality of the input patterns. The Boltzmann Machine (Ackley et al., 1985) lifts this constraint by introducing latent variables, but at the cost of requiring slow reading and writing mechanisms (i.e. via Gibbs sampling). This issue is resolved by Kanerva's sparse distributed memory model (Kanerva, 1988), which affords fast reads and writes and dissociates capacity from the dimensionality of input by introducing addressing into a distributed memory store whose size is independent of the dimension of the data¹.
In this paper, we present a conditional generative memory model inspired by Kanerva's sparse distributed memory. We generalise Kanerva's original model through learnable addresses and reparametrised latent variables (Rezende et al., 2014; Kingma & Welling, 2013; Bornschein et al., 2017). We solve the challenging problem of learning an effective memory writing operation by exploiting the analytic tractability of our memory model: we derive a Bayesian memory update rule that optimally trades off preserving old content and storing new content. The resulting hierarchical generative model has a memory-dependent prior that quickly adapts to new data, providing top-down knowledge in addition to bottom-up perception from the encoder to form the latent code representing data. As a generative model, our proposal provides a novel way of enriching the often over-simplified priors in VAE-like models (Rezende et al., 2016) through an adaptive memory. As a memory system, our proposal offers an effective way to learn online distributed writing, which provides effective compression and storage of complex data.
BACKGROUND: VARIATIONAL AUTOENCODERS
Our memory architecture can be viewed as an extension of the variational autoencoder (VAE) (Rezende et al., 2014;Kingma & Welling, 2013), where the prior is derived from an adaptive memory store. A VAE has an observable variable x and a latent variable z. Its generative model is specified by a prior distribution p θ (z) and the conditional distribution p θ (x|z). The intractable posterior p θ (z|x) is approximated by a parameterised inference model q φ (z|x). Throughout this paper, we use θ to represent the generative model's parameters, and φ to represent the inference model's parameters. All parameterised distributions are implemented as multivariate Gaussian distributions with diagonal covariance matrices, whose means and variances are outputs from neural networks as in (Rezende et al., 2014;Kingma & Welling, 2013).
We assume a dataset with independently and identically distributed (iid) samples D = {x 1 , . . . , x n , . . . , x N }. The objective of training a VAE is to maximise its log-likelihood E x∼D [ln p θ (x)]. This can be achieved by jointly optimising θ and φ for a variational lower-bound of the likelihood (omitting the expectation over all x for simplicity):
L = E_{q_φ(z|x)}[ln p_θ(x|z)] − D_KL(q_φ(z|x) ‖ p_θ(z))    (1)
where the first term can be interpreted as the negative reconstruction loss for reconstructing x using its approximated posterior sample from q φ (z|x), and the second term as a regulariser that encourages the approximated posterior to be near the prior of z.
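To make the two terms of eq. 1 concrete, here is a minimal NumPy sketch (our illustration, not the paper's code) of the ELBO for a diagonal Gaussian posterior, a standard normal prior, and a Bernoulli decoder; the toy sizes and the fixed linear decoder are assumptions.

import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def elbo(x, mu_q, var_q, decode_probs, n_samples=100, seed=0):
    """Monte-Carlo estimate of eq. 1 with a standard normal prior p(z)."""
    rng = np.random.default_rng(seed)
    recon = 0.0
    for _ in range(n_samples):
        z = mu_q + np.sqrt(var_q) * rng.standard_normal(mu_q.shape)  # reparameterised sample
        p = decode_probs(z)                                          # Bernoulli means
        recon += np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))     # ln p(x|z)
    recon /= n_samples
    return recon - kl_diag_gaussians(mu_q, var_q,
                                     np.zeros_like(mu_q), np.ones_like(var_q))

# toy usage: a fixed linear "decoder" from a 2-d code to 4 pixel probabilities
W = np.array([[0.5, -0.2, 0.1, 0.3],
              [0.2, 0.4, -0.3, 0.1]])
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
x = np.array([1.0, 0.0, 1.0, 1.0])
print(elbo(x, mu_q=np.array([0.1, -0.2]), var_q=np.array([0.5, 0.8]),
           decode_probs=lambda z: sigmoid(z @ W)))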
THE KANERVA MACHINE
To introduce our model, we use the concept of an exchangeable episode: X = {x 1 , . . . , x t , . . . , x T } ⊂ D is a subset of the entire dataset whose order does not matter. The objective of training is the expected conditional log-likelihood (Bornschein et al., 2017),
J = ∫ p(X, M) ln p_θ(X|M) dM dX = ∫ p(X) p(M|X) Σ_{t=1}^{T} ln p_θ(x_t|M) dM dX    (2)
The equality utilises the conditional independence of x t given the memory M , which is equivalent to the assumption of an exchangeable episode X (Aldous, 1985). We factorise the joint distribution of p(X, M ) into the marginal distribution p(X) and the posterior p(M |X), so that computing p(M |X) can be naturally interpreted as writing X into the memory.
We propose this scenario as a general and principled way of formulating memory-based generative models, since J is directly related to the mutual information I(X; M) through

I(X; M) = H(X) − H(X|M) = H(X) + ∫ p(X, M) ln p_θ(X|M) dX dM = H(X) + J.

As the entropy of the data H(X) is a constant, maximising J is equivalent to maximising I(X; M), the mutual information between the memory and the episode to store.
THE GENERATIVE MODEL
We write the collection of latent variables corresponding to the observed episode X as Y = {y 1 , . . . , y t , . . . , y T } and Z = {z 1 , . . . , z t , . . . , z T }. As illustrated in Fig. 1 (left), the joint distribution of the generative model can be factorised as
p_θ(X, Y, Z|M) = ∏_{t=1}^{T} p_θ(x_t, y_t, z_t|M) = ∏_{t=1}^{T} p_θ(x_t|z_t) p_θ(z_t|y_t, M) p_θ(y_t)    (3)
The first equality uses the conditional independence of z_t, y_t, x_t given M, shown by the "plates" in Fig. 1 (left). The memory M is a K × C random matrix with the matrix variate Gaussian distribution (Gupta & Nagar, 1999):

p(M) = MN(R, U, V)    (4)

where R is a K × C matrix giving the mean of M, U is a K × K matrix providing the covariance between rows of M, and V is a C × C matrix providing the covariance between columns of M. This distribution is equivalent to the multivariate Gaussian distribution of the vectorised M: p(vec(M)) = N(vec(M) | vec(R), V ⊗ U), where vec(·) is the vectorisation operator and ⊗ denotes the Kronecker product. We assume independence between the columns but not the rows of M, by fixing V to the identity matrix I_C and allowing full degrees of freedom for U. Since our experiments suggest the covariance between rows is useful for coordinating memory access, this setting balances simplicity and performance (Fig. 10).
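As a numerical illustration of this equivalence (our addition, with arbitrary sizes), the sketch below samples vec(M) with covariance V ⊗ U, where vec stacks columns, and checks two standard matrix-variate Gaussian identities, E[M M^⊤] = tr(V) U and E[M^⊤ M] = tr(U) V.

import numpy as np

rng = np.random.default_rng(0)
K, C, n = 3, 4, 200_000

# symmetric positive-definite row covariance U (K x K) and column covariance V (C x C)
G = rng.standard_normal((K, K)); U = G @ G.T + K * np.eye(K)
H = rng.standard_normal((C, C)); V = H @ H.T + C * np.eye(C)

# sample vec(M) ~ N(0, V kron U); column-stacking vec corresponds to order='F'
L = np.linalg.cholesky(np.kron(V, U))
vecM = L @ rng.standard_normal((K * C, n))        # (KC, n)
M = vecM.T.reshape(n, C, K).transpose(0, 2, 1)    # undo the column-stacking per sample

# matrix-variate Gaussian identities: E[M M^T] = tr(V) U and E[M^T M] = tr(U) V
E_MMt = np.einsum('nkc,njc->kj', M, M) / n
E_MtM = np.einsum('nkc,nkj->cj', M, M) / n
print(np.max(np.abs(E_MMt - np.trace(V) * U)) / np.max(np.trace(V) * U))  # small
print(np.max(np.abs(E_MtM - np.trace(U) * V)) / np.max(np.trace(U) * V))  # small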
Accompanying M are the addresses A, a K × S real-valued matrix that is randomly initialised and optimised through back-propagation. To avoid degeneracy, the rows of A are normalised to have L2 norms of 1. The addressing variable y_t is used to compute the weights controlling memory access. As in VAEs, the prior p_θ(y_t) is an isotropic Gaussian distribution N(0, 1). A learned projection b_t = f(y_t) then transforms y_t into an S × 1 key vector. The K × 1 vector w_t, giving weights across the rows of M, is computed via the product:

w_t = A · b_t = A · f(y_t)    (5)
The projection f is implemented as a multi-layer perceptron (MLP), which transforms the distribution of y_t, and hence of w_t, to potentially non-Gaussian distributions that may better suit addressing.
The code z_t is a learned representation that generates samples of x_t through the parametrised conditional distribution p_θ(x_t|z_t). This distribution is tied for all t ∈ {1, ..., T}. Importantly, instead of the isotropic Gaussian prior, z_t has a memory-dependent prior:

p_θ(z_t|y_t, M) = N(z_t | w_t · M, σ² I_C)    (6)

whose mean is a linear combination of memory rows, with the noise covariance fixed as an identity matrix by setting σ² = 1. This prior results in a much richer marginal distribution, because of its dependence on the memory and the addressing variable y_t through p_θ(z_t|M) = ∫ p_θ(z_t|y_t, M) p_θ(y_t) dy_t.
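A minimal NumPy sketch of the addressing path (eq. 5) and the memory-dependent prior (eq. 6); the sizes, the dimensionality of y_t, and the one-hidden-layer random stand-in for the trained f are our assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
K, S, C, H = 64, 50, 100, 128       # memory rows, address width, code size, MLP width

A = rng.standard_normal((K, S))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # rows of A normalised to unit L2 norm

M = rng.standard_normal((K, C))     # a memory sample (or the mean R under mean-field reading)

# learned projection f: here a tiny random MLP standing in for the trained one
W1 = 0.1 * rng.standard_normal((S, H))
W2 = 0.1 * rng.standard_normal((H, S))
f = lambda y: np.maximum(y @ W1, 0.0) @ W2

y_t = rng.standard_normal(S)        # y_t ~ p(y_t) = N(0, I)
b_t = f(y_t)                        # key vector (S,)
w_t = A @ b_t                       # weights over the K memory rows (eq. 5)
prior_mean = w_t @ M                # mean of p(z_t | y_t, M) (eq. 6)
z_t = prior_mean + rng.standard_normal(C)   # one prior sample, with sigma^2 = 1
print(w_t.shape, z_t.shape)         # (64,) (100,)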
In our hierarchical model, M is a global latent variable for an episode that captures statistics of the entire episode (Bartunov & Vetrov, 2016;Edwards & Storkey, 2016), while the local latent variables y t and z t capture local statistics for data x t within an episode. To generate an episode of length T , we first sample M once, then sample y t , z t , and x t sequentially for each of the T samples.
THE READING INFERENCE MODEL
As illustrated in Fig. 1 (central), the approximated posterior distribution is factorised using the conditional independence:
q_φ(Y, Z|X, M) = ∏_{t=1}^{T} q_φ(y_t, z_t|x_t, M) = ∏_{t=1}^{T} q_φ(z_t|x_t, y_t, M) q_φ(y_t|x_t)    (7)
where q φ (y t |x t ) is a parameterised approximate posterior distribution. The posterior distribution q φ (z t |x t , y t , M ) refines the (conditional) prior distribution p θ (z t |y t , M ) with additional evidence from x t . This parameterised posterior takes the concatenation of x t and the mean of p θ (z t |y t , M ) (eq. 6) as input. The constant variance of p θ (z t |y t , M ) is omitted. Similar to the generative model,
q φ (y t |x t ) is shared for all t ∈ {1 . . . T }.
THE WRITING INFERENCE MODEL
A central difficulty in updating memory is the trade-off between preserving old information and writing new information. It is well known that this trade-off can be balanced optimally through Bayes' rule (MacKay, 2003). From the generative model perspective (eq. 2), it is natural to interpret memory writing as inference: computing the posterior distribution of memory p(M|X). This section considers both batch inference, directly computing p(M|X), and on-line inference, sequentially accumulating evidence from x_1, ..., x_T.
Following Fig. 1 (right), the approximated posterior distribution of memory can be written as
q_φ(M|X) = ∫ p_θ(M, Y, Z|X) dZ dY
         = ∫ p_θ(M|{y_1, ..., y_T}, {z_1, ..., z_T}) ∏_{t=1}^{T} q_φ(z_t|x_t) q_φ(y_t|x_t) dz_t dy_t
         ≈ p_θ(M|{y_1, ..., y_T}, {z_1, ..., z_T}),   with y_t ∼ q_φ(y_t|x_t), z_t ∼ q_φ(z_t|x_t)    (8)
The last line uses one sample of y_t and z_t to approximate the intractable integral. The posterior of the addressing variable q_φ(y_t|x_t) is the same as in section 3.2, and the posterior of the code q_φ(z_t|x_t) is a parameterised distribution. We use the short-hand p_θ(M|Y, Z) for p_θ(M|{y_1, ..., y_T}, {z_1, ..., z_T}) when Y, Z are sampled as described here. We abuse notation in this section and use Z = (z_1; ...; z_T) as a T × C matrix collecting all the observations in an episode, and W = (w_1; ...; w_T) as a T × K matrix with all the corresponding addressing weights.
Given the linear Gaussian model (eq. 6), the posterior of memory p θ (M |Y, Z) is analytically tractable, and its parameters R and U can be updated as follows:
Δ ← Z − W R    (9)
Σ_c ← W U,    Σ_z ← W U W^⊤ + Σ_ξ    (10)
R ← R + Σ_c^⊤ Σ_z^{−1} Δ,    U ← U − Σ_c^⊤ Σ_z^{−1} Σ_c    (11)
where ∆ is the prediction error before updating the memory, Σ c is a T × K matrix providing the cross-covariance between Z and M , Σ ξ is a T × T diagonal matrix whose diagonal elements are the noise variance σ 2 and Σ z is a T × T matrix that encodes the covariance for z 1 , . . . , z T . This update rule is derived from applying Bayes' rule to the linear Gaussian model (Appendix E). The prior parameters of p(M ), R 0 and U 0 are trained through back-propagation. Therefore, the prior of M can learn the general structure of the entire dataset, while the posterior is left to adapt to features presented in a subset of data observed within a given episode.
The main cost of the update rule comes from inverting Σ z , which has a complexity of O(T 3 ). One may reduce the per-step cost via on-line updating, by performing the update rule using one sample at a time -when X = x t , Σ z is a scalar which can be inverted trivially. According to Bayes' rule, updating using the entire episode at once is equivalent to performing the one-sample/on-line update iteratively for all observations in the episode. Similarly, one can perform intermediate updates using mini-batch with size between 1 and T .
Another major cost in the update rule is the storage and multiplication of the memory's row-covariance matrix U, with a complexity of O(K²). Although restricting this covariance to be diagonal would reduce the cost to O(K), our experiments suggest this covariance is useful for coordinating memory access (Fig. 10). Moreover, the O(K²) cost is usually small, since the model's parameters are dominated by the encoder and decoder. Nevertheless, a future direction is to investigate low-rank approximations of U that better balance cost and performance.
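The update rule is easy to implement; below is a NumPy sketch (ours, not the released code) of eqs. 9 to 11, with a check that a single batch write over T observations matches T sequential one-sample writes, as Bayes' rule implies. The sizes are arbitrary and only σ² = 1 follows the text.

import numpy as np

def write(R, U, W, Z, sigma2=1.0):
    """One Bayesian write: update memory mean R (K x C) and row covariance
    U (K x K) from weights W (T x K) and codes Z (T x C), eqs. 9-11."""
    T = W.shape[0]
    delta = Z - W @ R                                   # prediction error (eq. 9)
    Sigma_c = W @ U                                     # cross-covariance (T x K)
    Sigma_z = W @ U @ W.T + sigma2 * np.eye(T)          # (T x T)
    gain = Sigma_c.T @ np.linalg.inv(Sigma_z)           # (K x T)
    return R + gain @ delta, U - gain @ Sigma_c         # eq. 11

rng = np.random.default_rng(0)
K, C, T = 8, 5, 6
R0 = rng.standard_normal((K, C))                        # trainable prior mean R_0
U0 = np.eye(K)                                          # trainable prior covariance U_0
W = rng.standard_normal((T, K))
Z = rng.standard_normal((T, C))

R_batch, U_batch = write(R0, U0, W, Z)                  # one O(T^3) batch write
R_on, U_on = R0, U0
for t in range(T):                                      # T cheap on-line writes
    R_on, U_on = write(R_on, U_on, W[t:t+1], Z[t:t+1])
print(np.allclose(R_batch, R_on), np.allclose(U_batch, U_on))   # True True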
TRAINING
To train this model, we optimise a variational lower-bound of the conditional likelihood J (eq. 2), which can be derived in a fashion similar to standard VAEs:
L = E_{p(X) q_φ(M|X)} [ Σ_{t=1}^{T} { E_{q_φ(y_t, z_t|x_t, M)}[ln p_θ(x_t|z_t)]
    − D_KL(q_φ(y_t|x_t) ‖ p_θ(y_t)) − D_KL(q_φ(z_t|x_t, y_t, M) ‖ p_θ(z_t|y_t, M)) } ]    (12)
To maximise this lower bound, we sample y_t, z_t from q_φ(y_t, z_t|x_t, M) to approximate the inner expectation. For computational efficiency, we use a mean-field approximation for the memory, using the mean R in place of memory samples (since directly sampling M requires an expensive Cholesky decomposition of the non-diagonal matrix U). Alternatively, we can further exploit the analytic tractability of the Gaussian distribution to obtain distribution-based reading and writing operations (Appendix F).
Inside the bracket, the first term is the usual VAE reconstruction error. The first KL-divergence penalises complex addresses, and the second term penalises deviation of the code z t from the memory-based prior. In this way, the memory learns useful representations that do not rely on complex addresses, and the bottom-up evidence only corrects top-down memory reading when necessary.
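Schematically, the per-sample loss of eq. 12 combines these three terms; the sketch below wires them together with random placeholders for the network outputs (everything here except the term structure, the mean-field use of R, and σ² = 1 is an illustrative assumption).

import numpy as np

def kl_diag(mu_q, var_q, mu_p, var_p):
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

rng = np.random.default_rng(0)
K, C, S = 64, 100, 50
R = rng.standard_normal((K, C))             # mean-field reading: use the memory mean R

# placeholder inference-network outputs for one observation x_t
mu_y, var_y = 0.1 * rng.standard_normal(S), np.full(S, 0.5)       # q(y_t | x_t)
w_t = 0.05 * rng.standard_normal(K)                               # stand-in for A . f(y_t)
prior_mu = w_t @ R                                                # mean of p(z_t | y_t, M)
mu_z, var_z = prior_mu + 0.1 * rng.standard_normal(C), np.full(C, 0.8)  # q(z_t | x_t, y_t, M)
z_t = mu_z + np.sqrt(var_z) * rng.standard_normal(C)

log_lik = -0.5 * np.sum(z_t ** 2)           # stand-in for ln p(x_t | z_t)
neg_elbo = -(log_lik
             - kl_diag(mu_y, var_y, np.zeros(S), np.ones(S))      # KL(q(y_t) || p(y_t))
             - kl_diag(mu_z, var_z, prior_mu, np.ones(C)))        # KL(q(z_t) || p(z_t|y_t,M))
print(neg_elbo)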
3.5 ITERATIVE SAMPLING

An important feature of Kanerva's sparse distributed memory is its iterative reading mechanism, by which output from the model is fed back as input for several iterations. Kanerva proved that the dynamics of iterative reading will decrease errors when the initial error is within a generous range, converging to a stored memory (Kanerva, 1988). A similar iterative process is also available in our model, by repeatedly feeding back the reconstruction x̂_t. This Gibbs-like sampling follows the loop in Fig. 1 (central). While we cannot prove convergence, in our experiments iterative reading reliably improves denoising and sampling.
To understand this process, notice that knowledge about memory is helpful in reading, which suggests using q φ (y t |x t , M ) instead of q φ (y t |x t ) for addressing (section 3.2). Unfortunately, training a parameterised model with the whole matrix M as input can be prohibitively costly. Nevertheless, it is well-known in the coding literature that such intractable posteriors that usually arise in non-tree graphs (as in Fig. 1) can be approximated efficiently by loopy belief-propagation, as has been used in algorithms like Turbo coding (Frey & MacKay, 1998). Similarly, we believe iterative reading works in our model because q φ (y t |x t ) models the local coupling between x t and y t well enough, so iterative sampling with the rest of the model is likely to converge to the true posterior q φ (y t |x t , M ). Future research will seek to better understand this process.
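In code, the loop is just repeated address-read-decode; the sketch below uses untrained linear stand-ins for all the networks purely to show the control flow (Algorithm 1 in Appendix G gives the full procedure).

import numpy as np

rng = np.random.default_rng(0)
D, S, K, C = 20, 10, 16, 8                  # toy pattern, key, memory, and code sizes

A = rng.standard_normal((K, S))
A /= np.linalg.norm(A, axis=1, keepdims=True)
M = rng.standard_normal((K, C))

# linear stand-ins for the trained networks (assumptions, not the learned model)
Wy = 0.1 * rng.standard_normal((D, S))      # mean of q(y_t | x_t)
Wf = 0.1 * rng.standard_normal((S, S))      # projection f
Wd = 0.1 * rng.standard_normal((C, D))      # decoder mean

x = rng.standard_normal(D)                  # a (potentially noisy) query
for _ in range(3):                          # Gibbs-like iterative reading
    y = x @ Wy                              # address from the current estimate
    w = A @ (y @ Wf)                        # weights over memory rows
    z = w @ M                               # mean of p(z_t | y_t, M)
    x = z @ Wd                              # feed the reconstruction back as the query
print(x[:4])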
EXPERIMENTS
Details of our model implementation are described in Appendix C. We use straightforward encoder and decoder models in order to focus on evaluating the improvements provided by an adaptive memory. In particular, we use the same model architecture for all experiments with both the Omniglot and CIFAR datasets, changing only the number of filters in the convolutional layers, the memory size, and the code size. We always use the on-line version of the update rule (section 3.3). The Adam optimiser was used for all training and required minimal tuning for our model (Kingma & Ba, 2014).
In all experiments, we report the value of variational lower bound (eq. 12) L divided by the length of episode T , so the per-sample value can be compared with the likelihood from existing models.
We first used the Omniglot dataset to test our model. This dataset contains images of hand-written characters with 1623 different classes and 20 examples in each class (Lake et al., 2015). This large variation creates challenges for models trying to capture the entire complex distribution. We use a 64 × 100 memory M , and a smaller 64 × 50 address matrix A. For simplicity, we always randomly sample 32 images from the entire training set to form an "episode", and ignore the class labels. This represents a worst case scenario since the images in an episode will tend to have relatively little redundant information for compression. We use a mini-batch size of 16, and optimise the variational lower-bound (eq. 12) using Adam with learning rate 1 × 10 −4 .
We also tested our model with the CIFAR dataset, in which each 32 × 32 × 3 real-valued colour image contains much more information than a binary Omniglot pattern. Again, we discard all the label information and test our model in the unsupervised setting. Fig. 2 (central) and (right) further show that the Kanerva Machine achieved better reconstruction and KL-divergence. In particular, the KL-divergence of our model "dips" sharply from about the 2000th step, implying that our model learned to use the memory to induce a more informative prior. Fig. 11 confirms this: the KL-divergence for z_t has collapsed to near zero, showing that the top-down prior from memory p_θ(z_t|y_t, M) provides most of the information for the code. This rich prior is achieved at the cost of an additional KL-divergence for y_t (Fig. 11, right), which is still much lower than the KL-divergence for z_t in a VAE. Similar training curves are observed for CIFAR training (Fig. 12). Gemici et al. (2017) also observed such KL-divergence dips with a memory model. They report that the reduction in KL-divergence, rather than the reduction in reconstruction loss, was particularly important for improving sample quality, which we also observed in our experiments with Omniglot and CIFAR.
At the end of training, our VAE reached a negative log-likelihood (NLL) of at most 112.7 (the negative of the variational lower bound), which is worse than the state-of-the-art unconditional generation achieved by rolling out 80 steps of a DRAW model (NLL of 95.5, Rezende et al., 2016), but comparable to results with IWAE training (NLL of 103.4, Burda et al., 2015). In contrast, with the same encoder and decoder, the Kanerva Machine achieves a conditional NLL of 68.3. It is not fair to directly compare our results with unconditional generative models, since our model has the advantage of its memory contents. Nevertheless, the dramatic improvement in NLL demonstrates the power of incorporating an adaptive memory into generative models. Fig. 3 (left) shows examples of reconstruction at the end of training; as a signature of our model, the weights were well distributed over the memory, illustrating that patterns written into the memory were superimposed on others.
Figure 3: Left: reconstruction of inputs and the weights used in reconstruction, where each bin represents the weight over one memory slot. Weights are widely distributed across memory slots. Right: denoising through iterative reading. In each panel, the first column shows the original pattern, the second column (in boxes) shows the corrupted pattern, and the following columns show the reconstruction after 1, 2 and 3 iterations.
ONE-SHOT GENERATION
We generalise "one-shot" generation from a single image (Rezende et al., 2016), or a few sample images from a limited set of classes (Edwards & Storkey, 2016; Bartunov & Vetrov, 2016), to a batch of images with many classes and samples. To better illustrate how samples are shaped by the conditioning data, in this section we use the same trained models, but test them using episodes with samples from only 2, 4 or 12 classes (Omniglot characters)². Fig. 4 compares samples from the VAE and the Kanerva Machine. While the initial samples from our model (left-most columns) are visually about as good as those from the VAE, the sample quality improves over consecutive iterations, and the final samples clearly reflect the statistics of the conditioning patterns. Most samples did not change much after the 6th iteration, suggesting that the iterative sampling had converged. Similar conditional samples from CIFAR are shown in Fig. 5. Notice that this approach does not apply to VAEs, since VAEs do not have the structure we discussed in section 3.5. This is illustrated in Figure 8, which feeds the output of the VAE back as input to the next iteration and shows that sample quality did not improve over iterations.
DENOISING AND INTERPOLATION
To further examine generalisation, we input images corrupted by randomly positioned 12 × 12 blocks, and tested whether our model can recover the original image through iterative reading. Our model was not trained on this task, but Fig. 3 (right) shows that, over several iterations, input images can be recovered. Due to high ambiguity, some cases (e.g., the second and last) ended up producing incorrect but still reasonable patterns.
The structure of our model affords interpretability of the internal representations in memory. Since representations of data x are obtained from a linear combination of memory slots (eq. 6), we expect linear interpolations between address weights to be meaningful. We examined interpolations by computing the weight vectors for two random input images, and then linearly interpolating between these two vectors. The interpolated vectors were then used to read z_t from memory (eq. 6), which is decoded to produce the interpolated images. Fig. 7 in Appendix A shows that interpolating between these access weights indeed produces meaningful and smoothly changing images.
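The procedure amounts to interpolating in weight space and decoding; in this sketch (our illustration), the two weight vectors are random stand-ins for the weights inferred from two real images.

import numpy as np

rng = np.random.default_rng(0)
K, C = 64, 100
M = rng.standard_normal((K, C))                       # memory mean
w_a = 0.1 * rng.standard_normal(K)                    # weights inferred from image a
w_b = 0.1 * rng.standard_normal(K)                    # weights inferred from image b

for alpha in np.linspace(0.0, 1.0, 6):
    w = (1.0 - alpha) * w_a + alpha * w_b             # interpolate the access weights
    z = w @ M                                         # read the code from memory (eq. 6)
    # z would then be decoded by p(x|z) to render the interpolated image
    print(f"alpha={alpha:.1f}  ||z|| = {np.linalg.norm(z):.3f}")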
COMPARISON WITH DIFFERENTIABLE NEURAL COMPUTERS

This section compares our model with the Differentiable Neural Computer (DNC; Graves et al., 2016) and a variant of it, the Least Recently Used Architecture (LRUA; Santoro et al., 2016). We test these using the same episode storage and retrieval task as in the previous experiments with Omniglot data. For a fair comparison, we fit the DNC models into the same framework, as detailed in Appendix D. Fig. 6 (left) illustrates the process of training the DNC and the Kanerva Machine. The LRUA did not pass the loss level of 150, so we did not include it in the figure. The DNC reached a test loss close to 100, but was very sensitive to hyper-parameters and random initialisation: only 2 out of 6 instances with the best hyper-parameter configuration (batch size = 16, learning rate = 3 × 10⁻⁴) found by grid search reached this level. On the other hand, the Kanerva Machine was robust to these hyper-parameters, and worked well with batch sizes between 8 and 64 and learning rates between 3 × 10⁻⁵ and 3 × 10⁻⁴. The Kanerva Machine trained fastest with batch size 16 and learning rate 1 × 10⁻⁴, and eventually converged below a test loss of 70 with all tested configurations. Therefore, the Kanerva Machine is significantly easier to train, thanks to principled reading and writing operations that do not depend on any model parameter.
We next analysed the capacity of our model versus the DNC by examining the lower bound of the likelihood when storing and then retrieving patterns from increasingly large episodes. As above, these models are still trained with episodes containing 32 samples, but are tested on much larger episodes. We tested our model with episodes containing different numbers of classes and thus varying amounts of redundancy. Fig. 6 (right) shows that both models are able to exploit this redundancy, since episodes with fewer classes (but the same number of images) have lower reconstruction losses. Overall, the Kanerva Machine generalises well to larger episodes and maintains a clear advantage over the DNC (as measured by the variational lower-bound).
DISCUSSION
In this paper, we present the Kanerva Machine, a novel memory model that combines slow-learning neural networks and a fast-adapting linear Gaussian model as memory. While our architecture is inspired by Kanerva's seminal model, we have removed the assumption of a uniform data distribution by training a generative model that flexibly learns the observed data distribution. By implementing memory as a generative model, we can retrieve unseen patterns from the memory through sampling. This phenomenon is consistent with observations from neuroscience experiments on constructive memory (Hassabis et al., 2007).
Probabilistic interpretations of Kanerva's model have been developed in previous works: Anderson (1989) explored a conditional probability interpretation of Kanerva's sparse distributed memory, and generalised binary data to discrete data with more than two values. Abbott et al. (2013) provides an approximate Bayesian interpretation based on importance sampling. To our knowledge, our model is the first to generalise Kanerva's memory model to continuous, non-uniform data while maintaining an analytic form of Bayesian inference. Moreover, we demonstrate its potential in modern machine learning through integration with deep neural networks.
Other models have combined memory mechanisms with neural networks in a generative setting. For example, Li et al. (2016) used attention to retrieve information from a set of trainable parameters in a memory matrix. Notably, the memory in this model is not updated following learning. As a result, the memory does not quickly adapt to new data as in our model, and so is not suited to the kind of episode-based learning explored here. Bornschein et al. (2017) used discrete (categorical) random variables to address an external memory, and trained the addressing mechanism, together with the rest of the generative model, through a variational objective. However, the memory in their model is populated by storing images in the form of raw pixels. Although this provides a mechanism for fast adaptation, the cost of storing raw pixels may be overwhelming for large datasets. Our model learns to store information in a compressed form by taking advantage of statistical regularity in the images via the encoder at the perceptual level, the learned addresses, and Bayes' rule for memory updates.
Central to an effective memory model is the efficient updating of memory. While various approaches to learning such updating mechanisms have been examined recently (Graves et al., 2016;Edwards & Storkey, 2016;Santoro et al., 2016), we designed our model to employ an exact Bayes' update-rule without compromising the flexibility and expressive power of neural networks. The compelling performance of our model and its scalable architecture suggests combining classical statistical models and neural networks may be a promising direction for novel memory models in machine learning.
APPENDIX A EXTRA FIGURES
B SPARSE DISTRIBUTED MEMORY
This section reviews Kanerva's sparse distributed memory (Kanerva, 1988). For consistency with the rest of this paper, many of the notations differ from Kanerva's description. In contrast to many recent models, Kanerva's memory model is characterised by its distributed reading and writing operations. The model has two main components: a fixed table of addresses A pointing to a modifiable memory M. Both A and M have the same size of K × D, where K is the number of addresses and D is the input dimensionality. Kanerva assumes all the inputs are uniform random vectors x ∈ {−1, 1}^D. Therefore, the fixed addresses A_k are uniformly randomly sampled from {−1, 1}^D to reflect the input statistics.

An input x is compared with each address A_k in A through the Hamming distance. For binary vectors a, b ∈ {−1, 1}^D, the Hamming distance can be written as h(a, b) = (D − a · b)/2, where · represents the inner product between two vectors. An address k is selected when the Hamming distance between x and A_k is below a threshold τ, so the selection can be summarised by the binary weight vector:

w_k = 1 if h(x, A_k) ≤ τ, and w_k = 0 otherwise.    (13)
During writing, a pattern x is stored into M by adding M_k ← M_k + w_k x. For reading, the memory contents pointed to by all the selected addresses are summed together and passed through a threshold at 0 to produce a read-out:

x̂ = 1 where Σ_{k=1}^{K} w_k M_k > 0, and −1 otherwise (applied element-wise).    (14)
This reading process can be iterated several times by repeatedly feeding back the output x̂ as input.
It has been shown analytically by Kanerva that when both K and D are large enough, a small portion of the addresses will always be selected, thus the operations are sparse and distributed. Although an address' content may be over-written many times, the stored vectors can be retrieved correctly. Moreover, Kanerva proved that even a significantly corrupted query can be discovered from the memory through iterative reading. However, the application of Kanerva's model is restricted by the assumption of a uniform and binary data distribution, on which Kanerva's analyses and bounds of performance rely (Kanerva, 1988). Unfortunately, this assumption is rarely true in practice, since real-world data typically lie on low-dimensional manifolds, and binary representation of data is less efficient in high-level neural network implementations that are heavily optimised for floating-point numbers.
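The whole scheme fits in a few lines of NumPy; the sketch below (our reconstruction, with K, D, and τ chosen arbitrarily) stores 20 random patterns and recovers one of them from a corrupted query by iterative reading.

import numpy as np

rng = np.random.default_rng(0)
K, D, tau = 2000, 256, 111           # addresses, pattern width, Hamming threshold

A = rng.choice([-1, 1], size=(K, D)).astype(float)   # fixed random addresses
M = np.zeros((K, D))                                 # modifiable memory

def select(x):                       # eq. 13: select rows with h(x, A_k) <= tau
    hamming = 0.5 * (D - A @ x)
    return hamming <= tau

def write(x):
    M[select(x)] += x                # superimpose the pattern on all selected rows

def read(x, n_iter=3):               # eq. 14, iterated by feeding the output back
    for _ in range(n_iter):
        s = M[select(x)].sum(axis=0)
        x = np.where(s > 0, 1.0, -1.0)
    return x

patterns = rng.choice([-1, 1], size=(20, D)).astype(float)
for p in patterns:
    write(p)

noisy = patterns[0].copy()
flipped = rng.choice(D, size=40, replace=False)
noisy[flipped] *= -1                 # corrupt 40 of the 256 bits
print(np.mean(read(noisy) == patterns[0]))   # close to 1.0 after iterative reading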
C MODEL DETAILS

Figure 9 shows the architecture of our model compared with a standard VAE. For all experiments, we use a convolutional encoder to convert input images into 2C embedding vectors e(x_t), where C is the code size (the dimension of z_t). The convolutional encoder has 3 consecutive blocks, where each block is a convolutional layer with a 4 × 4 filter and stride 2, which reduces the input dimension, followed by a basic ResNet block without bottleneck (He et al., 2016). All the convolutional layers have the same number of filters, which is either 16 or 32 depending on the dataset. The output from the blocks is flattened and linearly projected to a 2C-dimensional vector. The convolutional decoder mirrors this structure with transposed convolutional layers. All the "MLP" boxes in Fig. 9 are 2-layer multi-layer perceptrons with ReLU non-linearity in between. We found that adding noise to the input into q_φ(y_t|x_t) helped stabilise training, possibly by restricting the information in the addresses. The exact magnitude of the added noise matters little, and we use Gaussian noise with zero mean and standard deviation of 0.2 for all experiments. We use a Bernoulli likelihood function for the Omniglot dataset, and a Gaussian likelihood function for CIFAR. To avoid the Gaussian likelihood collapsing, we added uniform noise U(0, 1/256) to CIFAR images during training.
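A minimal PyTorch sketch of this encoder as we read the description (our reconstruction, not the released code; the padding choices and the 28 × 28 single-channel input are assumptions):

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Basic ResNet block without bottleneck: two 3x3 convs and a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        h = torch.relu(self.conv1(x))
        return torch.relu(x + self.conv2(h))

class Encoder(nn.Module):
    """Three (4x4 stride-2 conv -> ResBlock) stages, then a linear map to 2C."""
    def __init__(self, in_ch=1, ch=16, code_size=100, in_res=28):
        super().__init__()
        layers, c = [], in_ch
        for _ in range(3):
            layers += [nn.Conv2d(c, ch, 4, stride=2, padding=1), nn.ReLU(), ResBlock(ch)]
            c = ch
        self.conv = nn.Sequential(*layers)
        self.proj = nn.Linear(ch * (in_res // 8) ** 2, 2 * code_size)

    def forward(self, x):
        return self.proj(self.conv(x).flatten(1))   # split into mean / log-var downstream

print(Encoder()(torch.zeros(4, 1, 28, 28)).shape)   # torch.Size([4, 200])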
D DNC DETAILS
For a fair comparison, we wrap the differentiable neural computer (DNC) with the same interface as the Kanerva memory so that it can simply replace the memory M in Fig. 9. More specifically, the DNC receives the addressing variable y t with the same size and sampled the same ways as described in the main text in reading and writing stages. During writing it also receives z t sampled from q φ (z t |x t ) as input, by concatenating y t and z t together as input into the memory controller.
Since DNCs do not have separated reading and writing stages, we separated these two processes in our experiments: during writing, we discard the read-out from the DNC, and only keep its state as the memory; during reading, we discard the state at each step so it cannot be used for storing new information. In addition, we use a 2-layer MLP with 200 hidden neurons and ReLU nonlinearity as the controller instead of the commonly used LSTM, to avoid the recurrent state being used as memory and interfering with the DNC's external memory. Another issue with the off-the-shelf DNC (Graves et al., 2016; Santoro et al., 2016) is that controllers may generate output bypassing the memory, which can be particularly confusing in our auto-encoding setting: the controller may simply ignore the memory and function as a skip connection. We avoid this situation by removing this controller output and ensuring that the DNC only reads out from its memory. Further, to focus on the memory performance, we remove the bottom-up stream in our model that compensates for the memory. This means directly sampling z_t from p_θ(z_t|y_t, M), instead of p_θ(z_t|x_t, y_t, M), for the decoder p_θ(x_t|z_t), forcing the model to reconstruct solely using read-outs from the memory.

Figure 10: Covariance between memory rows is important. The two curves show the test loss (negative variational lower bound) as a function of iterations. Four models using the full K × K covariance matrix U are shown by red curves, and four models using a diagonal covariance matrix are shown in blue. All other settings for these 8 models are the same (as described in section 4), and the 8 models were trained on machines with similar setups. The models using full covariance matrices were slightly slower per iteration, but their test loss decreased far more quickly.

Figure 11: The KL-divergence for y_t (left) and z_t (right) during training.

Figure 12: The negative variational lower bound, reconstruction loss, and total KL-divergence during CIFAR training. Although the difference in the lower-bound objective is smaller than during Omniglot training, the general patterns of these curves are similar to those in Fig. 2. The relatively small difference in KL-divergence significantly influences sample quality. Notice that at the time of our submission, training was continuing and the advantage of the Kanerva Machine over the VAE was increasing.
E DERIVATION OF THE ONLINE UPDATE RULE
Eq. 6 defines a linear Gaussian model. Using the notation of the main paper, we can write the joint distribution p(vec(Z), vec(M)) = N(vec(Z), vec(M); μ_j, Σ_j), where
μ_j = ( vec(W R) ; vec(R) )    (15)

Σ_j = [ Σ_z ⊗ I_C     Σ_c ⊗ I_C
        Σ_c^⊤ ⊗ I_C   U ⊗ I_C ]    (16)
We can then use the conditional formula for the Gaussian to derive the posterior distribution p(vec(M)|vec(Z)) = N(vec(M); μ_p, Σ_p), using properties of the Kronecker product:
μ_p = vec(R) + (Σ_c^⊤ Σ_z^{−1} ⊗ I_C)(vec(Z) − vec(W R))    (17)
Σ_p = U ⊗ I_C − (Σ_c^⊤ Σ_z^{−1} Σ_c) ⊗ I_C    (18)
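A quick NumPy check (our addition) that eq. 17 agrees with the matrix-form update of eqs. 9 to 11; vec here stacks rows, matching the block layout of eq. 16.

import numpy as np

rng = np.random.default_rng(0)
K, C, T, sigma2 = 4, 3, 5, 1.0
R = rng.standard_normal((K, C))
U = 0.7 * np.eye(K)
W = rng.standard_normal((T, K))
Z = rng.standard_normal((T, C))

Sigma_c = W @ U
Sigma_z = W @ U @ W.T + sigma2 * np.eye(T)
gain = Sigma_c.T @ np.linalg.inv(Sigma_z)
R_new = R + gain @ (Z - W @ R)                     # matrix form, eq. 11

vec = lambda X: X.flatten()                        # vec stacking rows
mu_p = vec(R) + np.kron(gain, np.eye(C)) @ (vec(Z) - vec(W @ R))   # eq. 17
print(np.allclose(mu_p, vec(R_new)))               # True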
From properties of the matrix variate Gaussian distribution, eqs. 17 and 18 can be re-arranged into the update rule of eqs. 9 to 11.

F DISTRIBUTION-BASED READING AND WRITING

While the model we described in this paper works well using samples from q_φ(z_t|x_t) for writing to the memory (section 3.3) and the mean-field approximation during reading (section 3.4), here we describe an alternative that fully exploits the analytic tractability of the Gaussian distribution. To simplify notation, we use ψ = {R, U, V} for all parameters of the memory.
Figure 1: The probabilistic graphical model for the Kanerva Machine. Left: the generative model; Central: the reading inference model; Right: the writing inference model. Dotted lines show approximate inference and dashed lines represent exact inference.
Figure 4: One-shot generation given a batch of examples. The first panel shows reference samples from the matched VAE. The remaining panels show samples from our model conditioned on 12 random examples from the specified number of classes; conditioning examples are shown above the samples.

Figure 5: Comparison of samples from CIFAR. The 24 conditioning images (top-right) are randomly sampled from the entire CIFAR dataset, so they contain a mix of many classes. Samples from the matched VAE are blurred and lack meaningful local structure. On the other hand, samples from the Kanerva Machine have clear local structures, despite using the same encoder and decoder as the VAE. The 5 columns show samples after 0, 2, 4, 6, and 8 iterations.
Figure 6: Left: the training curves of the DNC and the Kanerva Machine, each showing 6 instances with the best hyper-parameter configuration for that model found via grid search. DNCs were more sensitive to random initialisation, slower, and plateaued with larger error. Right: the test variational lower-bounds of a DNC (dashed lines) and a Kanerva Machine as a function of different episode sizes and different numbers of sample classes.
Figure 7: Interpolation for Omniglot and CIFAR images. The first and last columns show 2 random images from the data. Between them are linear interpolations in the space of memory-access weights w_t.
Figure 8: Iteratively sampled priors from the VAE, for both Omniglot (left) and CIFAR (right). In both panels, the columns show samples after 0, 2, 4, 6, 8 and 10 iterations, mirroring the procedure producing figures 4 and 5.
Figure 9: The architecture of the VAE and the Kanerva Machine used in our experiments. conv/deconv: convolutional and transposed convolutional neural networks. MLP: multi-layer perceptron. concat: vector concatenation. The blue arrows show memory writing as exact inference.
VAE model using the exact same encoder and decoder. Note that there is only a modest increase of parameters in the Kanerva Machine compared the VAE since the encoder and decoder dominates the model parameters.To accommodate the increased
complexity of CIFAR, we use convolutional coders with 32 features at each layer, use a code size
of 200, and a 128 × 200 memory with 128 × 50 address matrix. All other settings are identical to
experiments with Omniglot.
4.1 COMPARISON WITH VAES
We first use the 28 × 28 binary Omniglot from Burda et al. (2015) and follow the same split of 24,345
training and 8,070 test examples. We then compare the training process of our model with a baseline
Figure 2: The negative variational lower bound (left), reconstruction loss (central), and KL-Divergence
(right) during learning. The dip in the KL-divergence suggests that our model has learned to use the
memory.
Fig. 2 shows learning curves for our model along with those for the VAE trained on the Omniglot
dataset. We plot 4 randomly initialised instances for each model. The training is stable and insensitive
to initialisation. Fig. 2 (left) shows that our model reached a significantly lower negative variational
lower-bound versus the VAE.
For readers interested in the historical connection, we briefly review Kanerva's sparse distributed memory in Appendix B.
The Omniglot data from Burda et al. (2015) does not have label information, so for this experiment we produced our own labelled dataset by down-sampling the original Omniglot images (Lake et al., 2015) to 28 × 28 using the Python Image Library and then binarizing by thresholding at 20.
ACKNOWLEDGMENTS

We would like to thank Sergey Bartunov, Charles Blundell, Jörg Bornschein, Karol Gregor, Shakir Mohamed, and Benigno Uria for helpful discussions, and to thank Dillon Graham and Jascha Sohl-Dickstein for pointing out mistakes in earlier manuscripts.

For reading, the memory can be marginalised out analytically:

$$p_\theta(z_t \mid y_t, \psi) = \int p_\theta(z_t \mid y_t, M)\, p(M)\, \mathrm{d}M = \mathcal{N}\!\left(z_t \mid w_t R,\; w_t U w_t^{\top} + \sigma^2 I_C\right)$$

For writing, the distribution q_φ(Z|X) = ∏_{t=1}^{T} q_φ(z_t|x_t) = N(µ_Q, Σ_Q) (where µ_Q and Σ_Q are functions of X) can be incorporated into the Bayes update rule by analytically marginalising out Z: one applies Bayes' rule, drops the normalising constant p_θ(Z|Y), and then replaces the equality with proportional-to accordingly. The last integral evaluates to a Gaussian likelihood term in which Σ'_z = W U Wᵀ + Σ_ξ + Σ_Q, and p'(Z) is a distribution over Z whose exact form is unimportant. Therefore, eq. 20 shows that the posterior distribution of M is proportional to the product between the prior p_θ(M) and the above likelihood term. From inspection, we can see that the update rule (eqs. 9-11) needs to be modified by replacing Σ_z with Σ'_z, adding the bottom-up uncertainty Σ_Q.

G DESCRIPTION OF THE ALGORITHM

Algorithm 1 Iterative Reading
Input: memory M, a (potentially noisy) query x_t, the number of iterations n
Output: an estimate of the noiseless x_t
initialise i = 0
while i < n do
    sample y_t ∼ q_φ(y_t | x_t)
    compute the key b_t ← f(y_t)
    compute the weights w_t ← b_t · A
    read out the mean µ_z ← w_t · M
    sample z_t ∼ q_φ(z_t | x_t, y_t, M), which takes µ_z and x_t as inputs
    sample the new query x_t ∼ p_θ(x_t | z_t)
    increment i ← i + 1
end while
return x̂ ← x_t

Algorithm 2 Writing
Input: images {x_t}_{t=1}^{T}, memory M with parameters R and U
Output: the updated memory M
for each y_t do
    update R and U via the posterior update rule (eqs. 9-11)
end for
return M with the updated parameters R and U
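As an illustration only, here is a schematic Python rendering of Algorithm 1; `encode`, `key_fn`, `sample_z`, and `decode` are hypothetical stand-ins for q_φ(y_t|x_t), f(·), q_φ(z_t|x_t, y_t, M), and p_θ(x_t|z_t), and the array shapes are whatever the caller supplies.

```python
def iterative_read(M, A, x, n, encode, key_fn, sample_z, decode):
    """Schematic sketch of Algorithm 1 (iterative reading); not the paper's code."""
    for _ in range(n):
        y = encode(x)          # sample y_t ~ q(y_t | x_t)
        b = key_fn(y)          # key b_t = f(y_t)
        w = b @ A              # weights w_t <- b_t . A
        mu_z = w @ M           # read-out mean from the memory
        z = sample_z(x, mu_z)  # sample z_t ~ q(z_t | x_t, y_t, M)
        x = decode(z)          # new, denoised query x_t ~ p(x_t | z_t)
    return x                   # estimate of the noiseless query
```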
Joshua T. Abbott, Jessica B. Hamrick, Thomas L. Griffiths, et al. Approximating Bayesian inference with a sparse distributed memory system. In Proceedings of the 35th Annual Conference of the Cognitive Science Society, pp. 1686-1691, 2013.

David H. Ackley, Geoffrey E. Hinton, and Terrence J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147-169, 1985.

David J. Aldous. Exchangeability and related topics. In École d'Été de Probabilités de Saint-Flour XIII-1983, pp. 1-198. Springer, 1985.

Charles H. Anderson. A conditional probability interpretation of Kanerva's sparse distributed memory. Jet Propulsion, 1000:23-100, 1989.

Sergey Bartunov and Dmitry P. Vetrov. Fast adaptation in generative models with generative matching networks. arXiv preprint arXiv:1612.02192, 2016.

Jörg Bornschein, Andriy Mnih, Daniel Zoran, and Danilo J. Rezende. Variational memory addressing in generative models. arXiv preprint arXiv:1709.07116, 2017.

Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.

Harrison Edwards and Amos Storkey. Towards a neural statistician. arXiv preprint arXiv:1606.02185, 2016.

Brendan J. Frey and David J. C. MacKay. A revolution: Belief propagation in graphs with cycles. In Advances in Neural Information Processing Systems, pp. 479-485, 1998.

Mevlana Gemici, Chia-Chun Hung, Adam Santoro, Greg Wayne, Shakir Mohamed, Danilo J. Rezende, David Amos, and Timothy Lillicrap. Generative temporal models with memory. arXiv preprint arXiv:1702.04649, 2017.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476, 2016.

Arjun K. Gupta and Daya K. Nagar. Matrix Variate Distributions, volume 104. CRC Press, 1999.

Demis Hassabis, Dharshan Kumaran, Seralynne D. Vann, and Eleanor A. Maguire. Patients with hippocampal amnesia cannot imagine new experiences. Proceedings of the National Academy of Sciences, 104(5):1726-1731, 2007.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.

John J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554-2558, 1982.

Pentti Kanerva. Sparse Distributed Memory. MIT Press, 1988.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2013.

Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.

Chongxuan Li, Jun Zhu, and Bo Zhang. Learning to generate with memory. In International Conference on Machine Learning, pp. 1177-1186, 2016.

David J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.

Alexander Pritzel, Benigno Uria, Sriram Srinivasan, Adrià Puigdomènech, Oriol Vinyals, Demis Hassabis, Daan Wierstra, and Charles Blundell. Neural episodic control. arXiv preprint arXiv:1703.01988, 2017.

Danilo J. Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. In Proceedings of the 33rd International Conference on Machine Learning (ICML'16), volume 48, pp. 1521-1529. JMLR.org, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045551.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In The 31st International Conference on Machine Learning (ICML), 2014.

Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In Maria Florina Balcan and Kilian Q. Weinberger (eds.), Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 1842-1850, New York, New York, USA, 2016. PMLR.

Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630-3638, 2016. |
226,237,047 | SUPERVISED CONTRASTIVE LEARNING FOR PRE-TRAINED LANGUAGE MODEL FINE-TUNING | State-of-the-art natural language understanding classification models follow two stages: pre-training a large language model on an auxiliary task, and then fine-tuning the model on a task-specific labeled dataset using cross-entropy loss. Cross-entropy loss has several shortcomings that can lead to sub-optimal generalization and instability. Driven by the intuition that good generalization requires capturing the similarity between examples in one class and contrasting them with examples in other classes, we propose a supervised contrastive learning (SCL) objective for the fine-tuning stage. Combined with cross-entropy, the SCL loss we propose obtains improvements over a strong RoBERTa-Large baseline on multiple datasets of the GLUE benchmark in both the high-data and low-data regimes, and it does not require any specialized architecture, data augmentation of any kind, memory banks, or additional unsupervised data. We also demonstrate that the new objective leads to models that are more robust to different levels of noise in the training data, and can generalize better to related tasks with limited labeled task data. | [
9586240,
3162051,
40100965,
91184134,
52967399,
5034059,
207847598,
6458072,
52113461,
53295957,
56657912,
52055130
] | SUPERVISED CONTRASTIVE LEARNING FOR PRE-TRAINED LANGUAGE MODEL FINE-TUNING
Beliz Gunel
Jingfei Du
Alexis Conneau
Ves Stoyanov
Stanford University; Facebook AI
Preprint. Under review.
State-of-the-art natural language understanding classification models follow two stages: pre-training a large language model on an auxiliary task, and then fine-tuning the model on a task-specific labeled dataset using cross-entropy loss. Cross-entropy loss has several shortcomings that can lead to sub-optimal generalization and instability. Driven by the intuition that good generalization requires capturing the similarity between examples in one class and contrasting them with examples in other classes, we propose a supervised contrastive learning (SCL) objective for the fine-tuning stage. Combined with cross-entropy, the SCL loss we propose obtains improvements over a strong RoBERTa-Large baseline on multiple datasets of the GLUE benchmark in both the high-data and low-data regimes, and it does not require any specialized architecture, data augmentation of any kind, memory banks, or additional unsupervised data. We also demonstrate that the new objective leads to models that are more robust to different levels of noise in the training data, and can generalize better to related tasks with limited labeled task data.
INTRODUCTION
State-of-the-art for most existing natural language processing (NLP) classification tasks is currently achieved by systems that are first pre-trained on auxiliary language modeling tasks and then fine-tuned on the task of interest with cross-entropy loss (Radford et al., 2019; Howard & Ruder, 2018; Liu et al., 2019; Devlin et al., 2019). Although commonly used, cross-entropy loss (the KL-divergence between the one-hot label vectors and the distribution of the model's output logits) has several shortcomings. Cross-entropy loss leads to poor generalization performance due to poor margins (Liu et al., 2016; Cao et al., 2019), and it lacks robustness to noisy labels (Zhang & Sabuncu, 2018; Sukhbaatar et al., 2015) or adversarial examples (Elsayed et al., 2018; Nar et al., 2019). Effective alternatives have been proposed to change the reference label distributions through label smoothing (Szegedy et al., 2016; Müller et al., 2019), Mixup, CutMix (Yun et al., 2019), knowledge distillation (Hinton et al., 2015), or self-training (Yalniz et al., 2019).
Additionally, it has recently been demonstrated in NLP that fine-tuning with cross-entropy loss tends to be unstable (Zhang et al., 2020; Dodge et al., 2020), especially when supervised data is limited, a scenario in which pre-training is particularly helpful. To tackle the issue of unstable fine-tuning, recent work proposes local smoothness-inducing regularizers (Jiang et al., 2020) and regularization methods inspired by trust-region theory (Aghajanyan et al., 2020) to prevent the representation collapse that leads to poor generalization performance. Empirical analysis suggests that fine-tuning for longer, reinitializing the top few layers (Zhang et al., 2020), and using a debiased Adam optimizer during fine-tuning (Mosbach et al., 2020) can make the fine-tuning procedure more stable.
We are inspired by the learning strategy that humans deploy when given a few examples: try to find the commonalities between the examples of each class and contrast them with examples from other classes. We hypothesize that a similarity-based loss will be able to hone in on the important dimensions of the multidimensional hidden representations, lead to better few-shot learning results, and be more stable while fine-tuning pre-trained models. We propose a novel objective for fine-tuning pre-trained language models that includes a supervised contrastive learning term that pushes examples from the same class close and examples of different classes further apart. The new
term is similar to the contrastive objective used for self-supervised representation learning in various domains such as the image, speech, and video domains (Sohn, 2016; Oord et al., 2018; Wu et al., 2018; Bachman et al., 2019; Hénaff et al., 2019; Conneau et al., 2020; Misra & Maaten, 2020; Chen et al., 2020a;b). In contrast to these methods, however, we use a contrastive objective for supervised learning of the final task, instead of contrasting different augmented views of examples.
Adding the supervised contrastive learning (SCL) term to the fine-tuning objective improves performance on several natural language understanding tasks from the GLUE benchmark (Wang et al., 2019), including SST-2, CoLA, MRPC, RTE, and QNLI, over state-of-the-art models fine-tuned with cross-entropy loss. The improvements are particularly strong in few-shot learning settings (20, 100, 1000 labeled examples), and models trained with SCL are not only robust to noise in the training data, but also have better generalization ability to related tasks with limited labeled data. Our approach does not require any specialized architectures (Bachman et al., 2019; Hénaff et al., 2019), memory banks (Wu et al., 2018; Misra & Maaten, 2020), data augmentation of any kind, or additional unsupervised data. To the best of our knowledge, our work is the first to successfully integrate a supervised contrastive learning objective for fine-tuning pre-trained language models.
• We propose a novel objective for fine-tuning of pre-trained language models that includes a supervised contrastive learning term, as described in Section 2.
• We show that our proposed objective improves over cross-entropy loss on several natural language classification tasks of the GLUE benchmark (Wang et al., 2019), including SST-2, CoLA, MRPC, RTE and QNLI, as shown in Table 2, leading up to 1.2% improvement.
• We obtain strong improvements on few-shot learning settings (20, 100, 1000 labeled examples) as shown in Table 4, leading up to 10.7% improvement for 20 labeled examples.
• We demonstrate that our proposed objective is more robust across augmented training datasets with varying noise levels as shown in Table 5, leading to 7% average improvement on MNLI across augmented training sets.
• We show that task models fine-tuned with our proposed objective have better generalization ability to a related task with limited labeled data, as shown in Table 7, leading to a 2.9% improvement on Amazon-2 along with a significant reduction in variance across few-shot training samples, when transferred from the source SST-2 task model.
APPROACH
We propose a novel objective that includes a supervised contrastive learning term for fine-tuning pre-trained language models. The loss is meant to capture similarities between examples of the same class and contrast them with examples from other classes.
We work with a batch of N training examples, {x_i, y_i}_{i=1,…,N}. Φ(·) ∈ R^d denotes the l2-normalized embedding of the final encoder hidden layer before the softmax projection; N_{y_i} is the total number of examples in the batch that have the same label as y_i; τ > 0 is an adjustable scalar temperature parameter that controls the separation of classes; and λ is a scalar weighting hyperparameter that we tune for each downstream task. The loss is given by the following formulas:
$$\mathcal{L} = (1 - \lambda) \, \mathcal{L}_{CE} + \lambda \, \mathcal{L}_{SCL} \tag{1}$$

$$\mathcal{L}_{CE} = -\frac{1}{N} \sum_{i=1}^{N} \big[ y_i \cdot \log(\hat{y}_i) + (1 - y_i) \cdot \log(1 - \hat{y}_i) \big] \tag{2}$$

$$\mathcal{L}_{SCL} = \sum_{i=1}^{N} -\frac{1}{N_{y_i} - 1} \sum_{j=1}^{N} \mathbb{1}_{i \neq j} \, \mathbb{1}_{y_i = y_j} \log \frac{\exp\left(\Phi(x_i) \cdot \Phi(x_j)/\tau\right)}{\sum_{k=1}^{N} \mathbb{1}_{i \neq k} \exp\left(\Phi(x_i) \cdot \Phi(x_k)/\tau\right)} \tag{3}$$
The overall loss is a weighted average of CE and the SCL loss, as given in equation (1). The canonical definition of CE that we use is given in equation (2). The novel SCL loss is given in equation (3).
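To make eqs. (1)-(3) concrete, below is a minimal PyTorch sketch of the SCL term and the combined objective; it is our own illustrative rendering under the paper's definitions, not the authors' released code, and `embeddings` is assumed to hold the final-layer [CLS] representations.

```python
import torch
import torch.nn.functional as F

def scl_loss(embeddings: torch.Tensor, labels: torch.Tensor, tau: float = 0.3) -> torch.Tensor:
    """Supervised contrastive term of eq. (3) for a batch of N examples."""
    phi = F.normalize(embeddings, dim=1)                 # l2-normalize Phi(x_i)
    sim = phi @ phi.t() / tau                            # Phi(x_i)·Phi(x_j) / tau
    n = sim.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(diag, float('-inf'))           # exclude k = i in the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~diag   # j != i with y_j = y_i
    n_pos = pos.sum(dim=1)                               # N_{y_i} - 1
    per_anchor = -torch.where(pos, log_prob, torch.zeros_like(log_prob)).sum(dim=1)
    per_anchor = per_anchor / n_pos.clamp(min=1)
    return per_anchor[n_pos > 0].sum()                   # outer sum over i in eq. (3)

def combined_loss(logits, embeddings, labels, lam=0.9, tau=0.3):
    """Weighted objective of eq. (1); lambda is tuned per downstream task."""
    ce = F.cross_entropy(logits, labels)
    return (1 - lam) * ce + lam * scl_loss(embeddings, labels, tau)
```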
This loss can be applied using a variety of encoders Φ(·) ∈ R^d; for example, a ResNet for a computer vision application, or a pre-trained large language model such as BERT for an NLP application. In this work, we focus on fine-tuning pre-trained language models for single sentence and sentence-pair classification. For single sentence classification, each example x_i consists of a sequence of tokens prepended with the special [CLS] token:

x_i = [[CLS], t_1, t_2, …, t_L, [EOS]].

The length of the sequence L is constrained such that L < L_max. Similarly, for sentence-pair classification tasks, each example x_i is a concatenation of two sequences of tokens [t_1, t_2, …, t_L] and [s_1, s_2, …, s_M] corresponding to the sentences, with special tokens delimiting them:

x_i = [[CLS], t_1, t_2, …, t_L, [SEP], s_1, s_2, …, s_M, [EOS]].

The length of the concatenated sequences is constrained such that L + M < L_max. In both cases, Φ(x_i) ∈ R^d uses the embedding of the [CLS] token as the representation for example x_i. These settings follow standard practices for fine-tuning pre-trained language models for classification (Devlin et al., 2019; Liu et al., 2019). We show examples from the SST-2 sentiment analysis dataset from the GLUE benchmark, where class A (shown in red) is negative movie reviews and class B (shown in blue) is positive movie reviews. Although we show a binary classification case for simplicity, the loss is generally applicable to any multi-class classification setting.
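A small sketch of the input construction just described; the token strings and length constraint follow the paper's notation, while the helper function itself is hypothetical.

```python
def build_input(tokens_a, tokens_b=None, max_len=512):
    # single sentence: [CLS] t_1 ... t_L [EOS]
    # sentence pair:   [CLS] t_1 ... t_L [SEP] s_1 ... s_M [EOS]
    seq = ["[CLS]"] + list(tokens_a)
    if tokens_b is not None:
        seq += ["[SEP]"] + list(tokens_b)
    seq += ["[EOS]"]
    if len(seq) > max_len:
        raise ValueError("sequence exceeds L_max")
    return seq
```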
Empirical observations show that both l2 normalization of the encoded embedding representations and an adjustable scalar temperature parameter τ improve performance. A lower temperature increases the influence of examples that are harder to separate, effectively creating harder negatives. Using hard negatives has previously been shown to improve performance in the context of margin-based loss formulations such as the triplet loss (Schroff et al., 2015). The empirical behavior of the adjustable temperature parameter is consistent with the observations of previous work related to supervised contrastive learning (Chen et al., 2020a; Khosla et al., 2020).

Relationship to Self-Supervised Contrastive Learning

For a batch of size N, the self-supervised contrastive loss is defined as:

$$\mathcal{L}_{self} = \sum_{i=1}^{2N} -\log \frac{\exp\left(\Phi(x_{2i-1}) \cdot \Phi(x_{2i})/\tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{i \neq k} \exp\left(\Phi(x_i) \cdot \Phi(x_k)/\tau\right)} \tag{4}$$
where Φ(·) ∈ R^d is the l2-normalized embedding from the encoder before the final classification softmax layer, and τ > 0 is a scalar temperature parameter. A is defined as a data augmentation block that generates two randomly augmented examples, x̃_{2i} and x̃_{2i−1}, from each original example x_i: A({x_i, y_i}_{i=1,…,N}) = {x̃_i, ỹ_i}_{i=1,…,2N}.
As an example, A can be RandAugment for a computer vision application; or it could be a back-translation model for an NLP application.
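For contrast with eq. (3), here is a sketch of the self-supervised loss in eq. (4) under the same conventions; we assume the 2N rows of `phi` are already l2-normalized and ordered so that adjacent rows (2i−1, 2i) are the two augmented views of example i.

```python
import torch
import torch.nn.functional as F

def self_contrastive_loss(phi: torch.Tensor, tau: float = 0.3) -> torch.Tensor:
    """Self-supervised contrastive loss of eq. (4) over 2N augmented examples."""
    n2 = phi.size(0)                                      # 2N rows
    sim = phi @ phi.t() / tau
    diag = torch.eye(n2, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(diag, float('-inf'))            # exclude k = i in the denominator
    target = torch.arange(n2, device=sim.device) ^ 1      # partner view: 0<->1, 2<->3, ...
    return F.cross_entropy(sim, target, reduction='sum')  # sum of -log softmax at the positive
```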
RELATED WORK
Traditional Machine Learning and Theoretical Understanding Several works have analyzed the shortcomings of the widely adopted cross-entropy loss, demonstrating that it leads to poor generalization performance due to poor margins (Liu et al., 2016;Cao et al., 2019), and lack of robustness to noisy labels (Zhang & Sabuncu, 2018;Sukhbaatar et al., 2015) or adversarial examples (Elsayed et al., 2018;Nar et al., 2019). On the other hand, there has been a body of work that has explored the performance difference for classifiers trained with discriminative (i.e., optimizing for p(y|x), where y is the label and x is the input) losses such as cross-entropy loss and generative losses (i.e. optimizing for p(x|y)). Ng & Jordan (2001) show that classifiers trained with generative losses can outperform their counterparts trained with discriminative losses in the context of Logistic Regression and Naive Bayes. Raina et al. (2003) show that a hybrid discriminative and generative objective outperforms both solely discriminative and generative approaches. In the context of contrastive learning, Saunshi et al. (2019) propose a theoretical framework for analyzing contrastive learning algorithms through hypothesizing that semantically similar points are sampled from the same latent class, which allows showing formal guarantees on the quality of learned representations.
Contrastive Learning There have been several investigations into the use of contrastive loss formulations for self-supervised, semi-supervised, and supervised learning methods, primarily in the computer vision domain. Chen et al. (2020a) propose a framework for contrastive learning of visual representations without specialized architectures or a memory bank and show state-of-the-art results on ImageNet ILSVRC-2012 (Russakovsky et al., 2015), outperforming previous methods for self-supervised, semi-supervised, and transfer learning. Similarly, Khosla et al. (2020) propose a supervised contrastive loss that outperforms cross-entropy loss and achieves state-of-the-art results on ImageNet on both ResNet-50 and ResNet-200 (He et al., 2016) with AutoAugment (Cubuk et al., 2019) data augmentation. They also show increased robustness on the ImageNet-C dataset (Hendrycks & Dietterich, 2019), and demonstrate that supervised contrastive loss is less sensitive to hyperparameter settings such as optimizers or data augmentations compared to cross-entropy loss. Liu & Abbeel (2020) propose hybrid discriminative-generative training of energy-based models, where they approximate the generative term with a contrastive loss using large batch sizes, and show improved classification accuracy of WideResNet-28-10 (Zagoruyko & Komodakis, 2016) on the CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) datasets, outperforming state-of-the-art discriminative and generative classifiers. They also demonstrate improved performance of WideResNet-28-10 on robustness, out-of-distribution detection, and calibration, compared to other state-of-the-art generative and hybrid models. Finally, Fang & Xie (2020) propose pre-training language models using a self-supervised contrastive learning objective at the sentence level, using back-translation as the augmentation method, followed by fine-tuning by predicting whether two augmented sentences originate from the same sentence, showing improvements over fine-tuning BERT on a subset of GLUE tasks.
Stability and Robustness of Fine-tuning Language Models There have been several works analyzing the robustness of fine-tuning large pre-trained language models, since they tend to overfit to the labeled task data and fail to generalize to unseen data when there is limited labeled data for the downstream task. To improve the generalization performance, Jiang et al. (2020) propose a local smoothness-inducing regularizer to manage the complexity of the model and a Bregman proximal point optimization method, an instance of trust-region methods, to prevent aggressive updating of the model during fine-tuning. They show state-of-the-art performance on the GLUE, SNLI (Bowman et al., 2015), SciTail (Khot et al., 2018), and ANLI (Nie et al., 2020) natural language understanding benchmarks. Similarly, Aghajanyan et al. (2020) propose a regularized fine-tuning procedure inspired by trust-region theory that replaces adversarial objectives with parametric noise sampled from a normal or uniform distribution in order to prevent representation collapse during fine-tuning for better generalization performance, without hurting performance. They show improved performance on a range of natural language understanding and generation tasks including DailyMail/CNN (Hermann et al., 2015), Gigaword (Napoles et al., 2012), Reddit TIFU (Kim et al., 2019), and the GLUE benchmark. There has also been some empirical analysis suggesting that fine-tuning for more epochs, reinitializing the top few layers instead of only the classification head (Zhang et al., 2020), and using a debiased Adam optimizer instead of BERTAdam (Devlin et al., 2019) during fine-tuning (Mosbach et al., 2020) make the fine-tuning procedure more stable across different runs.

In all of our experiments (full dataset and few-shot learning), we sample half of the original validation set of the GLUE benchmark and use it as our test set, and sample ∼500 examples for our validation set from the original validation set, in both cases taking the label distribution of the original validation set into account. For each task, we want the validation set to be small enough to avoid easy overfitting on the validation set, and big enough to avoid high variance when early-stopping at various epochs for few-shot learning experiments. We keep the same smaller validation set for full dataset experiments in order to allow easy comparison between the low-data and high-data regimes. For full dataset experiments such as the ones shown in Table 2 and Table 3, we use the full training sets of the GLUE benchmark.
We run each experiment with 10 different seeds, pick the top model out of the 10 seeds based on validation accuracy, and report its corresponding test accuracy. We pick the best hyperparameter combination based on the average validation accuracy across the 10 seeds. For few-shot learning experiments such as the ones shown in Table 4 and Table 5, we sample 10 different training set samples based on the total number of examples N specified, drawn from the original training set of the GLUE benchmark while taking the label distribution of the original training set into account. We report the average and the standard deviation of the test accuracies of the top 3 models based on their validation accuracies out of the 10 random training set samples. The best hyperparameter combination is picked based on the average validation accuracy of the top 3 models. The reason why we focus on the top 3 models for this setting is that we would like to reduce the variance across training set samples. We use the fairseq (Ott et al., 2019) library and the open-source RoBERTa-Large model for all of our experiments. During all the fine-tuning runs, we use the Adam optimizer with a learning rate of 1e-5, a batch size of 16 (unless specified otherwise), and a dropout rate of 0.1.
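The few-shot reporting protocol can be summarized in a few lines; `train_and_eval` is a hypothetical stand-in that fine-tunes on one sampled training set and returns a (validation accuracy, test accuracy) pair.

```python
import numpy as np

def few_shot_report(train_and_eval, train_samples, k=3):
    """Mean/std of test accuracies of the top-k models ranked by validation accuracy."""
    runs = sorted((train_and_eval(s) for s in train_samples),
                  key=lambda r: r[0], reverse=True)   # rank runs by validation accuracy
    top_test = np.array([test for _, test in runs[:k]])
    return top_test.mean(), top_test.std()
```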
Table 1: Summary of the datasets used from the GLUE benchmark: main task, domain, number of training examples, and number of classes.
CONSTRUCTING AUGMENTED NOISY TRAINING DATASETS
Machine learning researchers or practitioners often do not know how noisy their datasets are, as input examples might be corrupted or the ground-truth labeling might not be perfect. Therefore, it is preferable to use robust training objectives that can extract more information out of datasets of different noise levels, even when there is a limited amount of labeled data. We simulate augmented training datasets of different noise levels using a back-translation model (Edunov et al., 2018), where we increase the temperature parameter to create noisier examples. Back-translation refers to the procedure of translating an example in language A into language B and then translating it back to language A; it is a commonly used data augmentation procedure for NLP applications, as the new examples obtained through back-translation provide targeted inductive bias to the model while preserving the meaning of the original example. Specifically, we use WMT'18 English-German and German-English translation models, use random sampling to get more diverse examples, and employ an augmentation ratio of 1:3 for supervised examples:augmented examples. We observe that employing random sampling with a tunable temperature parameter is critical for getting diverse paraphrases of the supervised examples, consistent with previous work (Edunov et al., 2018), since the commonly used beam search results in very regular sentences that do not provide diversity beyond the existing data distribution. We keep the validation and test sets the same as in the experiments shown in Table 2 and Table 4.
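As an illustration of the procedure, here is a sketch built on fairseq's torch.hub translation models. The paper uses WMT'18 EN-DE/DE-EN checkpoints, whereas the hub identifiers below are assumptions (WMT'19 single models), and the sampling arguments follow fairseq's generation options.

```python
import torch

# hypothetical hub identifiers; the paper's exact WMT'18 checkpoints may differ
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model',
                       tokenizer='moses', bpe='fastbpe')
de2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.de-en.single_model',
                       tokenizer='moses', bpe='fastbpe')

def back_translate(sentence: str, temperature: float = 0.7) -> str:
    # random sampling (rather than beam search) yields diverse paraphrases;
    # a higher temperature produces noisier augmented examples
    de = en2de.sample(sentence, sampling=True, temperature=temperature)
    return de2en.sample(de, sampling=True, temperature=temperature)
```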
ANALYSIS AND RESULTS
GLUE BENCHMARK FULL DATASET RESULTS
In Table 2, we report results using our proposed objective on six downstream tasks from the GLUE benchmark. We use a very strong baseline of fine-tuning RoBERTa-Large with cross-entropy loss, which is currently the standard practice for the state-of-the-art NLP classification models. Details of the experimental setup are explained in Section 4.
We observe that adding the supervised contrastive learning (SCL) term to the objective improves the performance over the strong RoBERTa-Large baseline across 5 out of 6 datasets, leading to a 1.2% improvement on SST-2, a 0.9% improvement on CoLA, and a 0.9% improvement on QNLI. This shows that our proposed objective is effective both for binary single sentence classification, such as sentiment analysis and grammatical correctness, and for sentence-pair classification tasks, such as textual entailment and paraphrasing. On the other hand, we observe that our proposed method does not lead to an improvement on MNLI, which is a three-way classification textual entailment task. We believe this is due to the fact that the number of positive example pairs is quite small when we fine-tune our RoBERTa-Large models with batch size 16 due to memory constraints. We leave experiments with larger batch sizes, which require additional engineering effort, for future work. We show evidence for this hypothesis in the ablation study in Table 3, where we conduct the full dataset experiments for CE+SCL with the same experimental setup described here for Table 2 on SST-2, CoLA, and QNLI for batch sizes 16, 64, and 256 using RoBERTa-Base. We observe that as we increase the batch size, performance improves significantly across all datasets. Specifically, we observe a 0.4% improvement on SST-2, a 0.5% improvement on CoLA, and a 0.8% improvement on QNLI, when we increase the batch size from 16 to 256.
GLUE BENCHMARK FEW-SHOT LEARNING RESULTS
We proposed adding the SCL term inspired by the learning strategy of humans when they are given few examples. In Table 4, we report our few-shot learning results on SST-2, QNLI, and MNLI from the GLUE benchmark with 20, 100, and 1000 labeled training examples. Details of the experimental setup are explained in Section 4. We use a very strong baseline of fine-tuning RoBERTa-Large with cross-entropy loss. We observe that the SCL term improves performance over the baseline significantly across all datasets and data regimes, leading to a 10.7% improvement on QNLI and a 3.4% improvement on the other tasks; tSNE plots of the learned CLS embeddings are shown in Figure 3 in the Appendix.

Table 4: Few-shot learning results on the GLUE benchmark where we have N=20, 100, 1000 labeled examples for training. Reported results are the mean and the standard deviation of the test accuracies of the top 3 models based on validation accuracy out of 10 random training set samples.
ROBUSTNESS ACROSS AUGMENTED NOISY TRAINING DATASETS
In Table 5, we report our results on augmented training sets with varying levels of noise. We have 100 labeled examples for training for each task, and we augment their training sets with noisy examples using a back-translation model, as described in detail in Section 4.2. Note that we use the backtranslation model to simulate training datasets of varying noise levels and not as a method to boost model performance. Experimental setup follows what is described in Section 4 for few-shot learning experiments. T is the temperature for the back-translation model used to augment the training sets, and higher temperature corresponds to more noise in the augmented training set.
We observe consistent improvements over the RoBERTa-Large baseline with our proposed objective across all datasets and all noise levels, with a 0.4% improvement on SST-2, a 2.5% improvement on QNLI, and a 7% improvement on MNLI on average across augmented training sets. The improvement is particularly significant for inference tasks (QNLI, MNLI) when the noise levels are higher (higher temperature), leading to a 7.7% improvement on MNLI when T=0.7, and a 4.2% improvement on QNLI when T=0.9. We show some samples of the augmented examples used in this robustness experiment in Table 6. For T=0.3, examples mostly stay the same with minor changes in their phrasing, while for T=0.9, some grammatical mistakes and factual errors are introduced.
GENERALIZATION ABILITY OF TASK MODELS
In this experiment, we first fine-tune RoBERTa-Large on SST-2 using its full training set and obtain a task model with and without the SCL term. Then, we transfer this task model to two related single sentence sentiment analysis binary classification tasks in the movie reviews domain: Amazon-2 and Yelp-2 (Zhang et al., 2015). For both, we sample 20 labeled examples for each class and follow the few-shot learning experimental setup described in Section 4. In Table 7, we demonstrate that using the SCL term for both the source (SST-2) and target domains (Amazon-2, Yelp-2) leads to better generalization ability, with a 2.9% improvement on Amazon-2 and a 0.4% improvement on Yelp-2, along with a significant reduction in variance across training set samples.
CONCLUSION
We propose a supervised contrastive learning objective for fine-tuning pre-trained language models and demonstrate improvements over a strong RoBERTa-Large baseline on multiple datasets of the GLUE benchmark in both high-data and low-data regimes. We also show that our proposed objective leads to models that are more robust to different levels of noise in the training data and can generalize better to related tasks with limited labeled task data. Currently, data augmentation methods in NLP and their effects on the downstream tasks are neither as effective nor as well understood as their counterparts in the computer vision domain. In future work, we plan to study principled and automated data augmentation techniques for NLP that would allow extending our supervised contrastive learning objective to both semi-supervised and self-supervised learning settings.
Figure 1: Our proposed objective includes a cross-entropy (CE) term and a supervised contrastive learning (SCL) term, and it is formulated to push examples from the same class close together and examples of different classes further apart.
Self-supervised contrastive learning has shown success in learning powerful representations, particularly in the computer vision domain (Chen et al., 2020a; Mnih & Kavukcuoglu, 2013; Gutmann & Hyvärinen, 2012; Kolesnikov et al., 2019). Self-supervised learning methods do not require any labeled data; instead, they sample a mini-batch from unsupervised data and create positive and negative examples from these samples using strong data augmentation techniques such as AutoAugment (Cubuk et al., 2019) or RandAugment (Cubuk et al., 2020) for computer vision. Positive examples are constructed by applying data augmentation to the same example (cropping, flipping, etc. for an image), and negative examples are simply all the other examples in the sampled mini-batch. Intuitively, self-supervised contrastive objectives learn representations that are invariant to different views of positive pairs, while maximizing the distance between negative pairs. The distance metric used is often the inner product or the Euclidean distance between vector representations of the examples.
Figure 3: tSNE plots of the learned CLS embedding on the SST-2 test set with 20 labeled examples, 100 labeled examples, and the full dataset, respectively, comparing CE with and without the SCL term. Blue: positive examples; red: negative examples.
4 EXPERIMENTAL SETUP
4.1 DATASETS AND TRAINING DETAILS
We use datasets from the GLUE natural language understanding benchmark (Wang et al., 2019) for
evaluation. We include both single sentence classification tasks and sentence-pair classification tasks
to test whether our hypothesis is generally applicable across tasks. We summarize each dataset based
on their main task, domain, number of training examples and number of classes in Table 1.
Table 2: Results on the GLUE benchmark. We compare fine-tuning RoBERTa-Large with CE, with and without SCL, using the full training set of each task. (Columns: Model, Loss, SST-2, CoLA, MRPC, RTE, QNLI, MNLI, Avg.)
Table 3: Ablation study fine-tuning RoBERTa-Base with CE+SCL using the full training set of each task, increasing the batch size (Bsz).

Model | Loss | Bsz | SST-2 | CoLA | QNLI
RoBERTa-Base | CE + SCL | 16 | 93.9 | 83.4 | 92.1
RoBERTa-Base | CE + SCL | 64 | 94.2 | 84.8 | 92.7
RoBERTa-Base | CE + SCL | 256 | 94.3 | 84.9 | 92.9
Table 5: Results on the GLUE benchmark for robustness across noisy augmented training sets. Average shows the average performance across augmented training sets.

Dataset | Loss | Original | T=0.3 | T=0.5 | T=0.7 | T=0.9 | Average
SST-2 | CE | 91.1±1.3 | 92.0±1.3 | 91.4±1.0 | 91.7±1.3 | 90.0±0.5 | 91.3±1.2
SST-2 | CE + SCL | 92.8±1.3 | 92.6±0.9 | 91.5±1.0 | 91.2±0.6 | 91.5±1.0 | 91.7±1.0
QNLI | CE | 81.9±0.4 | 81.1±2.3 | 80.0±2.9 | 78.9±3.7 | 75.9±4.0 | 79.0±3.5
QNLI | CE + SCL | 82.5±0.4 | 82.7±1.9 | 81.9±2.5 | 81.3±0.6 | 80.1±2.5 | 81.5±2.0
MNLI | CE | 59.2±2.1 | 54.0±1.1 | 55.3±2.4 | 54.6±2.2 | 47.0±1.8 | 52.7±3.9
MNLI | CE + SCL | 61.1±3.0 | 61.2±2.3 | 62.1±0.9 | 62.3±1.1 | 53.0±2.1 | 59.7±4.3
Table 6: Sample of augmented examples with different noise levels for the robustness experiment shown in Table 5. Higher temperature (T) corresponds to more noise in the augmented training set.

Dataset | Type | Sentence
SST-2 | Original | As possibly the best actor working in movies today.
SST-2 | Augmented (T=0.3) | As perhaps the best actor who now stars in films.
SST-2 | Original | The young stars are too cute; the story and ensuing complications are too manipulative.
SST-2 | Augmented (T=0.9) | The babies are too cute, the image and complications that follow too manipulative.
QNLI | Original | Brain tissue is naturally soft, but can be stiffened with what liquid?
QNLI | Augmented (T=0.3) | Brain tissue is omitted naturally, but with what fluid it can be stiffened?
QNLI | Original | In March 1968, CBS and Sony formed CBS/Sony Records, a Japanese business joint venture.
QNLI | Augmented (T=0.9) | CBS was founded by CBS and Sony Records in March 1962, a Japanese company.
MNLI | Original | However, the link did not transfer the user to a comment box particular to the rule at issue.
MNLI | Augmented (T=0.3) | However, the link did not send the user to a comment field specifically for the rule.
MNLI | Original | Tenants could not enter the apartment complex due to a dangerous chemical spill.
MNLI | Augmented (T=0.9) | Tenants were banned from entering the medical property because of a blood positive substance.
Table 7: Generalization of the SST-2 task model (fine-tuned using the full training set) to related tasks (Amazon-2, Yelp-2) where there are 20 labeled examples for each class.
Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. Better fine-tuning by reducing representational collapse. ArXiv, abs/2008.03156, 2020.

Philip Bachman, R. Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In NeurIPS, 2019.

Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. In NeurIPS, 2020.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In EMNLP, 2015.

Kaidi Cao, Colin Wei, Adrien Gaidon, N. Aréchiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. In NeurIPS, 2019.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In ICML, 2020a.

Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E. Hinton. Big self-supervised models are strong semi-supervised learners. In NeurIPS, 2020b.

Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. Unsupervised cross-lingual representation learning for speech recognition. arXiv preprint arXiv:2006.13979, 2020.

E. Cubuk, Barret Zoph, Dandelion Mané, V. Vasudevan, and Quoc V. Le. AutoAugment: Learning augmentation strategies from data. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 113-123, 2019.

E. D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. RandAugment: Practical automated data augmentation with a reduced search space. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 3008-3017, 2020.

J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.

Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah A. Smith. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. ArXiv, abs/2002.06305, 2020.

Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back-translation at scale. In EMNLP, 2018.

Gamaleldin F. Elsayed, Dilip Krishnan, Hossein Mobahi, Kevin Regan, and Samy Bengio. Large margin deep networks for classification. In NeurIPS, 2018.

Hongchao Fang and Pengtao Xie. CERT: Contrastive self-supervised learning for language understanding. ArXiv, abs/2005.12766, 2020.

M. Gutmann and A. Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. J. Mach. Learn. Res., 13:307-361, 2012.

Tengda Han, Weidi Xie, and Andrew Zisserman. Video representation learning by dense predictive coding. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 1483-1492, 2019.

Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum contrast for unsupervised visual representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9726-9735, 2020.

Olivier J. Hénaff, A. Srinivas, J. Fauw, Ali Razavi, C. Doersch, S. Eslami, and A. Oord. Data-efficient image recognition with contrastive predictive coding. ArXiv, abs/1905.09272, 2019.

Dan Hendrycks and Thomas G. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019.

K. Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, W. Kay, Mustafa Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In NeurIPS, 2015.

Geoffrey E. Hinton, Oriol Vinyals, and J. Dean. Distilling the knowledge in a neural network. In NeurIPS Deep Learning and Representation Learning Workshop, 2015.

R. Devon Hjelm, A. Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In ICLR, 2019.

Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 328-339, 2018.

Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In ACL, 2020.

Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. ArXiv, abs/2004.11362, 2020.

Tushar Khot, A. Sabharwal, and Peter Clark. SciTail: A textual entailment dataset from science question answering. In AAAI, 2018.

Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. Abstractive summarization of Reddit posts with multi-level memory networks. In NAACL-HLT, 2019.

A. Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual representation learning. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1920-1929, 2019.

A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Hao Liu and P. Abbeel. Hybrid discriminative-generative training via contrastive learning. ArXiv, abs/2007.09070, 2020.

Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. Large-margin softmax loss for convolutional neural networks. In ICML, 2016.

Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692, 2019.

I. Misra and L. v. d. Maaten. Self-supervised learning of pretext-invariant representations. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6706-6716, 2020.

A. Mnih and K. Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In NeurIPS, 2013.

Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. ArXiv, abs/2006.04884, 2020.

R. Müller, Simon Kornblith, and Geoffrey E. Hinton. When does label smoothing help? In NeurIPS, 2019.

Courtney Napoles, Matthew R. Gormley, and Benjamin Van Durme. Annotated Gigaword. In AKBC-WEKEX@NAACL-HLT, 2012.

K. Nar, O. Ocal, S. Sastry, and K. Ramchandran. Cross-entropy loss and low-rank features have responsibility for adversarial examples. ArXiv, abs/1901.08360, 2019.

Andrew Y. Ng and Michael I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In NeurIPS, 2001.

Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, J. Weston, and Douwe Kiela. Adversarial NLI: A new benchmark for natural language understanding. 2020.

A. Oord, Y. Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. ArXiv, abs/1807.03748, 2018.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.

A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.

Rajat Raina, Yirong Shen, Andrew Y. Ng, and Andrew McCallum. Classification with hybrid generative/discriminative models. In NeurIPS, 2003.

Olga Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Zhiheng Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115:211-252, 2015.

Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar. A theoretical analysis of contrastive unsupervised representation learning. In Proceedings of Machine Learning Research, volume 97, pp. 5628-5637, 2019. PMLR. URL http://proceedings.mlr.press/v97/saunshi19a.html.

Florian Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815-823, 2015.

Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In NeurIPS, 2016.

Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir D. Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. In ICLR, 2015.

Christian Szegedy, V. Vanhoucke, S. Ioffe, Jon Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818-2826, 2016.

Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. In ECCV, 2020.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR, 2019. URL https://openreview.net/forum?id=rJ4km2R5t7.
Unsupervised feature learning via non-parametric instance discrimination. Zhirong Wu, Yuanjun Xiong, S Yu, D Lin, IEEE/CVF Conference on Computer Vision and Pattern Recognition. Zhirong Wu, Yuanjun Xiong, S. Yu, and D. Lin. Unsupervised feature learning via non-parametric instance discrimination. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3733-3742, 2018.
Unsupervised data augmentation for consistency training. Qizhe Xie, Zihang Dai, Eduard H Hovy, Minh-Thang Luong, Quoc V Le, arXiv: LearningQizhe Xie, Zihang Dai, Eduard H. Hovy, Minh-Thang Luong, and Quoc V. Le. Unsupervised data augmentation for consistency training. arXiv: Learning, 2019.
Self-training with noisy student improves imagenet classification. Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V Le, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionQizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10687-10698, 2020.
Billion-scale semisupervised learning for image classification. Hervé I Zeki Yalniz, Kan Jégou, Manohar Chen, Dhruv Paluri, Mahajan, arXiv:1905.00546arXiv preprintI Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi- supervised learning for image classification. arXiv preprint arXiv:1905.00546, 2019.
Cutmix: Regularization strategy to train strong classifiers with localizable features. Sangdoo Yun, Dongyoon Han, Sanghyuk Seong Joon Oh, Junsuk Chun, Youngjoon Choe, Yoo, IEEE/CVF International Conference on Computer Vision (ICCV). Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 6022-6031, 2019.
Wide residual networks. ArXiv. Sergey Zagoruyko, Nikos Komodakis, abs/1605.07146Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. ArXiv, abs/1605.07146, 2016.
mixup: Beyond empirical risk minimization. Hongyi Zhang, M Cissé, Yann Dauphin, David Lopez-Paz, In ICLR. Hongyi Zhang, M. Cissé, Yann Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018.
Revisiting few-sample bert fine-tuning. Tianyi Zhang, Felix Wu, Arzoo Katiyar, Q Kilian, Yoav Weinberger, Artzi, ArXiv, absTianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, and Yoav Artzi. Revisiting few-sample bert fine-tuning. ArXiv, abs/2006.05987, 2020.
Character-level convolutional networks for text classification. X Zhang, J Zhao, Y Lecun, NeurIPS. X. Zhang, J. Zhao, and Y. LeCun. Character-level convolutional networks for text classification. In NeurIPS, 2015.
Generalized cross entropy loss for training deep neural networks with noisy labels. Zhilu Zhang, Mert R Sabuncu, In NeurIPS. Zhilu Zhang and Mert R. Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In NeurIPS, 2018. |
52,900,371 | FLUCTUATION-DISSIPATION RELATIONS FOR STOCHASTIC GRADIENT DESCENT | The notion of the stationary equilibrium ensemble has played a central role in statistical mechanics. In machine learning as well, training serves as generalized equilibration that drives the probability distribution of model parameters toward stationarity. Here, we derive stationary fluctuation-dissipation relations that link measurable quantities and hyperparameters in the stochastic gradient descent algorithm. These relations hold exactly for any stationary state and can in particular be used to adaptively set training schedule. We can further use the relations to efficiently extract information pertaining to a loss-function landscape such as the magnitudes of its Hessian and anharmonicity. Our claims are empirically verified. | [ 65455367 ] | FLUCTUATION-DISSIPATION RELATIONS FOR STOCHASTIC GRADIENT DESCENT
Sho Yaida [email protected]
Facebook AI Research, Facebook Inc.
Menlo Park, California 94025, USA
FLUCTUATION-DISSIPATION RELATIONS FOR STOCHASTIC GRADIENT DESCENT
The notion of the stationary equilibrium ensemble has played a central role in statistical mechanics. In machine learning as well, training serves as generalized equilibration that drives the probability distribution of model parameters toward stationarity. Here, we derive stationary fluctuation-dissipation relations that link measurable quantities and hyperparameters in the stochastic gradient descent algorithm. These relations hold exactly for any stationary state and can in particular be used to adaptively set training schedule. We can further use the relations to efficiently extract information pertaining to a loss-function landscape such as the magnitudes of its Hessian and anharmonicity. Our claims are empirically verified.
INTRODUCTION
Equilibration rules the long-term fate of many macroscopic dynamical systems. For instance, as we pour water into a glass and let it be, the stationary state of tranquility is eventually attained. Zooming into the tranquil water with a microscope would reveal, however, a turmoil of stochastic fluctuations that maintain the apparent stationarity in balance. This is vividly exemplified by the Brownian motion (Brown, 1828): a pollen immersed in water is constantly bombarded by jittery molecular movements, resulting in the macroscopically observable diffusive motion of the solute. Out of the effort in bridging microscopic and macroscopic realms through the Brownian movement came a prototype of fluctuation-dissipation relations (Einstein, 1905;Von Smoluchowski, 1906). These relations quantitatively link degrees of noisy microscopic fluctuations to smooth macroscopic dissipative phenomena and have since been codified in the linear response theory for physical systems (Onsager, 1931;Green, 1954;Kubo, 1957), a cornerstone of statistical mechanics.
Machine learning begets another form of equilibration. As a model learns patterns in data, its performance first improves and then plateaus, again reaching apparent stationarity. This dynamical process naturally comes equipped with stochastic fluctuations as well: often given data too gigantic to consume at once, training proceeds in small batches and random selections of these mini-batches consequently give rise to the noisy dynamical excursion of the model parameters in the loss-function landscape, reminiscent of the Brownian motion. It is thus natural to wonder if there exist analogous fluctuation-dissipation relations that quantitatively link the noise in mini-batched data to the observable evolution of the model performance and that in turn facilitate the learning process.
Here, we derive such fluctuation-dissipation relations for the stochastic gradient descent algorithm. The only assumption made is stationarity of the probability distribution that governs the model parameters at sufficiently long time. Our results thus apply to generic cases with non-Gaussian mini-batch noises and nonconvex loss-function landscapes. Practically, the first relation (FDR1) offers the metric for assessing equilibration and yields an adaptive algorithm that sets learning-rate schedule on the fly. The second relation (FDR2) further helps us determine the properties of the lossfunction landscape, including the strength of its Hessian and the degree of anharmonicity, i.e., the deviation from the idealized harmonic limit of a quadratic loss surface and a constant noise matrix.
Our approach should be contrasted with recent attempts to import the machinery of stochastic differential calculus into the study of the stochastic gradient descent algorithm (Mandt et al., 2015; Li et al., 2015; Mandt et al., 2017; Li et al., 2017; Smith & Le, 2018; Chaudhari & Soatto, 2017; Jastrzebski et al., 2017; Zhu et al., 2018; An et al., 2018). This line of work all assumes Gaussian noises and sometimes additionally employs the quadratic harmonic approximation for loss-function landscapes. The more severe drawback, however, is the usage of the analogy with continuous-time stochastic differential equations, which is inconsistent in general (see Section 2.3.3). Instead, the stochastic gradient descent algorithm can be properly treated within the framework of the Kramers-Moyal expansion (Van Kampen, 1992; Gardiner, 2009; Risken, 1984; Radons et al., 1990; Leen & Moody, 1993).
The paper is organized as follows. In Section 2, after setting up notations and deriving a stationary fluctuation-dissipation theorem (FDT), we derive two specific fluctuation-dissipation relations. The first relation (FDR1) can be used to check stationarity and the second relation (FDR2) to delineate the shape of the loss-function landscape, as empirically borne out in Section 3. An adaptive scheduling method is proposed and tested in Section 3.3. We conclude in Section 4 with future outlooks.
FLUCTUATION-DISSIPATION RELATIONS
A model is parametrized by a weight coordinate, θ = {θ_i}_{i=1,...,P}. The training set of N_s examples is utilized by the model to learn patterns in the data and the model's overall performance is evaluated by a full-batch loss function, $f(\theta) \equiv \frac{1}{N_s}\sum_{\alpha=1}^{N_s} f_{\alpha}(\theta)$, with f_α(θ) quantifying the performance of the model on a particular sample α: the smaller the loss is, the better the model is expected to perform. The learning process can thus be cast as an optimization problem of minimizing the loss function. One of the most commonly used optimization schemes is the stochastic gradient descent (SGD) algorithm (Robbins & Monro, 1951) in which a mini-batch B ⊂ {1, 2, ..., N_s} of size |B| is stochastically chosen for training at each time step. Specifically, the update equation is given by
$$\theta(t+1) = \theta(t) - \eta\, \nabla f^{\mathcal{B}}[\theta(t)] \,, \tag{1}$$
where η > 0 is a learning rate and a mini-batch loss $f^{\mathcal{B}}(\theta) \equiv \frac{1}{|\mathcal{B}|}\sum_{\alpha \in \mathcal{B}} f_{\alpha}(\theta)$. Note that
$$\left\langle\!\left\langle \nabla f^{\mathcal{B}}(\theta) \right\rangle\!\right\rangle_{\mathrm{m.b.}} = \nabla f(\theta) \,, \tag{2}$$
with ⟨⟨...⟩⟩_m.b. denoting the average over mini-batch realizations. For later purposes, it is convenient to define a full two-point noise matrix C through¹
$$C_{i,j}(\theta) \equiv \left\langle\!\left\langle \partial_i f^{\mathcal{B}}(\theta)\, \partial_j f^{\mathcal{B}}(\theta) \right\rangle\!\right\rangle_{\mathrm{m.b.}} \tag{3}$$
and, more generally, higher-point noise tensors
$$C_{i_1, i_2, \ldots, i_k}(\theta) \equiv \left\langle\!\left\langle \partial_{i_1} f^{\mathcal{B}}(\theta)\, \partial_{i_2} f^{\mathcal{B}}(\theta) \cdots \partial_{i_k} f^{\mathcal{B}}(\theta) \right\rangle\!\right\rangle_{\mathrm{m.b.}} \,. \tag{4}$$
Below, we shall not make any assumptions on the distribution of the noise vector ∇f B -other than that a mini-batch is independent and identically distributed from the N s training samples at each time step -and the noise distribution is therefore allowed to have nontrivial higher connected moments indicative of non-Gaussianity.
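For intuition, these mini-batch averages can be estimated by Monte Carlo for small toy models, resampling mini-batches at a fixed θ. Below is a sketch of such an estimator for the two-point matrix (our illustration, not code from the paper; `loss_grad` and `dataset` are hypothetical stand-ins, and the O(P²) storage limits this to small P):

```python
import numpy as np

def estimate_noise_matrix(theta, loss_grad, dataset, batch_size, n_draws=1000):
    """Monte Carlo estimate of C_ij(theta) = <<d_i f^B  d_j f^B>>_m.b.

    loss_grad(theta, batch) -> mini-batch gradient of f^B at theta (assumed API).
    """
    P = theta.size
    C = np.zeros((P, P))
    for _ in range(n_draws):
        # draw a mini-batch i.i.d. from the N_s training samples
        idx = np.random.choice(len(dataset), size=batch_size, replace=False)
        g = loss_grad(theta, dataset[idx])  # noise vector for this realization
        C += np.outer(g, g)
    return C / n_draws
```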
It is empirically often observed that the performance of the model plateaus after some training through SGD. It is thus natural to hypothesize the existence of a stationary-state distribution, p_ss(θ), that dictates the SGD sampling at long time (see Section 2.3.4 for discussion on this assumption). For any observable quantity, O(θ) - something that can be measured during training such as θ² and f(θ) - its stationary-state average is then defined as
$$\langle \mathcal{O}(\theta) \rangle \equiv \int \mathrm{d}\theta\; p_{\mathrm{ss}}(\theta)\, \mathcal{O}(\theta) \,. \tag{5}$$
In general the probability distribution of the model parameters evolves as $p(\theta, t+1) = \int \mathrm{d}\theta' \left\langle\!\left\langle p(\theta', t)\, \delta\!\left[\theta - \left\{\theta' - \eta \nabla f^{\mathcal{B}}(\theta')\right\}\right] \right\rangle\!\right\rangle_{\mathrm{m.b.}}$ and in particular for the stationary state
$$\int \mathrm{d}\theta\; p_{\mathrm{ss}}(\theta)\, \mathcal{O}(\theta) = \int \mathrm{d}\theta\; \int \mathrm{d}\theta' \left\langle\!\left\langle p_{\mathrm{ss}}(\theta')\, \delta\!\left[\theta - \left\{\theta' - \eta \nabla f^{\mathcal{B}}(\theta')\right\}\right] \mathcal{O}(\theta) \right\rangle\!\right\rangle_{\mathrm{m.b.}} = \int \mathrm{d}\theta'\; p_{\mathrm{ss}}(\theta') \left\langle\!\left\langle \mathcal{O}\!\left[\theta' - \eta \nabla f^{\mathcal{B}}(\theta')\right] \right\rangle\!\right\rangle_{\mathrm{m.b.}} \,. \tag{6}$$
Thus follows the master equation
$$\langle \mathcal{O}(\theta) \rangle = \left\langle \left\langle\!\left\langle \mathcal{O}\!\left[\theta - \eta \nabla f^{\mathcal{B}}(\theta)\right] \right\rangle\!\right\rangle_{\mathrm{m.b.}} \right\rangle \,. \tag{FDT}$$
In the next two subsections, we apply this general formula to simple observables in order to derive various stationary fluctuation-dissipation relations. Incidentally, the discrete version of the Fokker-Planck equation can be derived through the Kramers-Moyal expansion, considering the more general nonstationary version of the above equation and performing the Taylor expansion in η and repeated integrations by parts (Van Kampen, 1992;Gardiner, 2009;Risken, 1984;Radons et al., 1990;Leen & Moody, 1993).
FIRST FLUCTUATION-DISSIPATION RELATION
Applying the master equation (FDT) to the linear observable θ yields
$$\langle \theta \rangle = \left\langle \left\langle\!\left\langle \theta - \eta \nabla f^{\mathcal{B}}(\theta) \right\rangle\!\right\rangle_{\mathrm{m.b.}} \right\rangle = \langle \theta \rangle - \eta \left\langle \nabla f(\theta) \right\rangle \,. \tag{7}$$
We thus have
$$\langle \nabla f \rangle = 0 \,. \tag{8}$$
This is natural because there is no particular direction that the gradient picks on average as the model parameter stochastically bounces around the local minimum or, more generally, wanders around the loss-function landscape according to the stationary distribution.
Performing similar algebra for the quadratic observable θ_i θ_j yields
$$\langle \theta_i (\partial_j f) \rangle + \langle (\partial_i f)\, \theta_j \rangle = \eta \langle C_{i,j} \rangle \,. \tag{9}$$
In particular, taking the trace of this matrix-form relation, we obtain
$$\langle \theta \cdot (\nabla f) \rangle = \frac{\eta}{2} \langle \operatorname{Tr} C \rangle \,. \tag{FDR1}$$
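As a quick numerical sanity check of the relation (FDR1), consider SGD on a one-dimensional quadratic loss f(θ) = hθ²/2 with additive Gaussian gradient noise (a toy noise model of our choosing, not from the paper); the long-time averages of θ·∇f and (η/2) Tr C should then agree:

```python
import numpy as np

rng = np.random.default_rng(0)
h, eta, sigma = 1.0, 0.1, 0.5          # curvature, learning rate, noise scale (toy choices)
theta = 1.0
T, burn = 200_000, 100_000
lhs_samples, trc_samples = [], []
for t in range(T):
    g = h * theta + sigma * rng.standard_normal()  # mini-batch gradient surrogate
    if t >= burn:                                  # discard transient
        lhs_samples.append(theta * (h * theta))    # theta . (full-batch gradient)
        trc_samples.append(g * g)                  # one draw of Tr C = <<g^2>>_m.b.
    theta -= eta * g                               # plain SGD update, eq. (1)
# both printed numbers should be approximately eta*sigma^2/(2 - eta*h) ~ 0.0132
print(np.mean(lhs_samples), 0.5 * eta * np.mean(trc_samples))
```

For this toy the relation in fact holds exactly in the stationary state, which can be verified analytically from the linear update θ → (1−ηh)θ − ησξ.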
More generally, in the case of SGD with momentum µ and dampening ν, whose update equation is given by
$$v(t+1) = \mu\, v(t) - (1-\nu)\, \nabla f^{\mathcal{B}}[\theta(t)] \,, \tag{10}$$
$$\theta(t+1) = \theta(t) + \eta\, v(t+1) \,, \tag{11}$$
a similar derivation yields (see Appendix A)
$$\langle \theta \cdot (\nabla f) \rangle = \frac{1+\mu}{2(1-\nu)}\, \eta \left\langle v^2 \right\rangle \,. \tag{FDR1'}$$
The last equation reduces to the equation (FDR1) when µ = ν = 0 with v = −∇f^B. Also note that ⟨θ·(∇f)⟩ = ⟨(θ − θ_c)·(∇f)⟩ for an arbitrary constant vector θ_c because of the equation (8).
This first fluctuation-dissipation relation is easy to evaluate on the fly during training, exactly holds without any approximation if sampled well from the stationary distribution, and can thus be used as the standard metric to check if learning has plateaued, just as similar relations can be used to check equilibration in Monte Carlo simulations of physical systems (Santen & Krauth, 2000). [It should be cautioned, however, that the fluctuation-dissipation relations are necessary but not sufficient to ensure stationarity (Odriozola & Berthier, 2011).] Such a metric can in turn be used to schedule changes in hyperparameters, as shall be demonstrated in Section 3.3.
SECOND FLUCTUATION-DISSIPATION RELATION
Applying the master equation (FDT) on the full-batch loss function and Taylor-expanding it in the learning rate η yields the closed-form expression
$$\langle f(\theta) \rangle = \left\langle \left\langle\!\left\langle f\!\left[\theta - \eta \nabla f^{\mathcal{B}}(\theta)\right] \right\rangle\!\right\rangle_{\mathrm{m.b.}} \right\rangle \tag{12}$$
$$= \langle f \rangle + \sum_{k=1}^{\infty} \frac{(-\eta)^k}{k!} \sum_{i_1, i_2, \ldots, i_k = 1}^{P} \left\langle \left(\partial_{i_1} \partial_{i_2} \cdots \partial_{i_k} f\right) \left\langle\!\left\langle \partial_{i_1} f^{\mathcal{B}}\, \partial_{i_2} f^{\mathcal{B}} \cdots \partial_{i_k} f^{\mathcal{B}} \right\rangle\!\right\rangle_{\mathrm{m.b.}} \right\rangle$$
$$= \langle f \rangle - \eta \left\langle (\nabla f)^2 \right\rangle + \sum_{k=2}^{\infty} \frac{(-\eta)^k}{k!} \sum_{i_1, i_2, \ldots, i_k = 1}^{P} \left\langle F_{i_1, i_2, \ldots, i_k}\, C_{i_1, i_2, \ldots, i_k} \right\rangle \,,$$
where we recalled the equation (4) and introduced
$$F_{i_1, i_2, \ldots, i_k}(\theta) \equiv \partial_{i_1} \partial_{i_2} \cdots \partial_{i_k} f(\theta) \,. \tag{13}$$
In particular, H_{i,j}(θ) ≡ F_{i,j}(θ) is the Hessian matrix. Reorganizing terms, we obtain
$$\left\langle (\nabla f)^2 \right\rangle = \frac{\eta}{2} \left\langle \operatorname{Tr}\, H C \right\rangle - \eta^2 \sum_{k=3}^{\infty} \frac{(-\eta)^{k-3}}{k!} \sum_{i_1, i_2, \ldots, i_k = 1}^{P} \left\langle F_{i_1, i_2, \ldots, i_k}\, C_{i_1, i_2, \ldots, i_k} \right\rangle \,. \tag{FDR2}$$
In the case of SGD with momentum and dampening, the left-hand side is replaced by (1−ν)⟨(∇f)²⟩ − µ⟨v·∇f⟩ and C_{i_1, i_2, ..., i_k} by more hideous expressions (see Appendix A).
We can extract at least two types of information on the loss-function landscape by evaluating the dependence of the left-hand side, G(η) ≡ ⟨(∇f)²⟩, on the learning rate η. First, in the small learning rate regime, the value of 2G(η)/η approximates ⟨Tr HC⟩ around a local ravine. Second, nonlinearity of G(η) at higher η indicates discernible effects of anharmonicity. In such a regime, the Hessian matrix H cannot be approximated as constant (which also implies that {F_{i_1,...,i_k}}_{k>2} are nontrivial) and/or the noise two-point matrix C cannot be regarded as constant. Such nonlinearity especially indicates the breakdown of the harmonic approximation, that is, the quadratic truncation of the loss-function landscape, often used to analyze the regime explored at small learning rates.
REMARKS
INTUITION WITHIN THE HARMONIC APPROXIMATION
In order to gain some intuition about the fluctuation-dissipation relations, let us momentarily employ the harmonic approximation, i.e., assume that there is a local minimum of the loss function at θ = θ* and retain only up to quadratic terms of the Taylor expansions around it: $f(\theta) \approx f_0 + \frac{1}{2}\sum_{i,j=1}^{P} h_{i,j}\,(\theta_i - \theta^*_i)(\theta_j - \theta^*_j)$. Within this approximation, ⟨θ·(∇f)⟩ = ⟨(θ − θ*)·(∇f)⟩ ≈ 2⟨f − f_0⟩. The relation (FDR1) then becomes
$$\langle f \rangle - f_0 \approx \frac{1}{4}\, \eta\, \langle \operatorname{Tr} C \rangle \,,$$
linking the height of the noise ball to the noise amplitude. This is in line with, for instance, the theorem 4.6 of the reference Bottou et al. (2018) and substantiates the analogy between SGD and simulated annealing, with the learning rate η - multiplied by ⟨Tr C⟩ - playing the role of temperature (Bottou, 1991).
HIGHER-ORDER RELATIONS
Additional relations can be derived by repeating similar calculations for higher-order observables. For example, at the cubic order,
$$\left\langle \theta_i \theta_j (\partial_k f) + \theta_i (\partial_j f)\, \theta_k + (\partial_i f)\, \theta_j \theta_k \right\rangle = \eta \left\langle \theta_i C_{j,k} + \theta_j C_{k,i} + \theta_k C_{i,j} \right\rangle - \eta^2 \left\langle C_{i,j,k} \right\rangle \,. \tag{14}$$
The systematic investigation of higher-order relations is relegated to future work.
SGD ≠ SDE
There is no limit in which SGD asymptotically reduces to the stochastic differential equation (SDE). In order to take such a limit with continuous time differential dt → 0⁺, each SGD update must become infinitesimal. One may thus try dt ≡ η → 0⁺, as in recent work adapting the view that SGD=SDE (Mandt et al., 2015; Li et al., 2015; Mandt et al., 2017; Li et al., 2017; Smith & Le, 2018; Chaudhari & Soatto, 2017; Jastrzebski et al., 2017; Zhu et al., 2018; An et al., 2018). But this in turn forces the noise vector with zero mean, ∇f^B − ∇f, to be multiplied by dt. This is in contrast to the scaling √dt needed for the standard machinery of SDE - Itô-Stratonovich calculus and all that - to apply; the additional factor of dt^{1/2} makes the effective noise covariance be suppressed by dt and the resulting equation in the continuous-time limit, if anything, would just be an ordinary differential equation without noise² [unless noise with the proper scaling is explicitly added as in stochastic gradient Langevin dynamics (Welling & Teh, 2011; Teh et al., 2016) and natural Langevin dynamics (Marceau-Caron & Ollivier, 2017; Nado et al., 2018)].
In short, the recent work views η = √η·√dt and sends dt → 0⁺ while pretending that η is finite, which is inconsistent. This is not just a technical subtlety. When unjustifiably passing onto the continuous-time Fokker-Planck equation, the diffusive term is incorrectly governed by the connected two-point noise matrix C̃_{i,j}(θ) ≡ C_{i,j}(θ) − [∂_i f(θ)][∂_j f(θ)] rather than the full two-point noise matrix C_{i,j}(θ) that appears herein.³ We must instead employ the discrete-time version of the Fokker-Planck equation derived in references Van Kampen (1992); Gardiner (2009); Risken (1984); Radons et al. (1990); Leen & Moody (1993), as has been followed in the equation (6).
ON STATIONARITY
In contrast to statistical mechanics where an equilibrium state is dictated by a handful of thermodynamic variables, in machine learning a stationary state generically depends not only on hyperparameters but also on a part of its learning history. The stationarity assumption made herein, which is codified in the equation (6), is weaker than the typicality assumption underlying statistical mechanics and can hold even in the presence of lingering memory. In the full-batch limit |B| = N s , for instance, any distribution delta-peaked at a local minimum is stationary. For sufficiently small learning rates η as well, it is natural to expect multiple stationary distributions that form disconnected ponds around these minima, which merge upon increasing η and fragment upon decreasing η.
It is beyond the scope of the present paper to formulate conditions under which stationary distributions exist. Indeed, if the formulation were too generic, there could be counterexamples to such a putative existence statement. A case in point is a model with the unregularized cross entropy loss, whose model parameters keep cascading toward infinity in order to sharpen its softmax output (Neyshabur et al., 2014; 2017), with logarithmically diverging θ² (Soudry et al., 2018). It would be interesting to see if there are any other nontrivial caveats.
EMPIRICAL TESTS
In this section we empirically bear out our theoretical claims in the last section. To this end, two simple models of supervised learning are used (see Appendix B for full specifications): a multilayer perceptron (MLP) learning patterns in the MNIST training data (LeCun et al., 1998) through SGD without momentum and a convolutional neural network (CNN) learning patterns in the CIFAR-10 training data (Krizhevsky & Hinton, 2009) through SGD with momentum µ = 0.9. For both models, the mini-batch size is set to be |B| = 100, and the training data are shuffled at each epoch $t = \frac{N_s}{|\mathcal{B}|}\, \hat{t}_{\mathrm{epoch}}$ with $\hat{t}_{\mathrm{epoch}} \in \mathbb{N}$. In order to avoid the overfitting cascade mentioned in Section 2.3.4, the L²-regularization term (1/2)λθ² with the weight decay λ = 0.01 is included in the loss function f.
Before proceeding further, let us define the half-running average of an observable O as
$$\overline{\mathcal{O}}(t) \equiv \frac{1}{t - t_0} \sum_{t' = t_0 + 1}^{t} \mathcal{O}(t') \quad \text{with} \quad t_0 = \lfloor t/2 \rfloor \,. \tag{15}$$
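In code, this half-running average amounts to averaging the recorded history over its second half; a minimal sketch (our illustration):

```python
def half_running_average(history):
    """Average of O(t') over t' = t0+1, ..., t with t0 = floor(t/2).

    history: list of per-step values [O(1), ..., O(t)].
    """
    t = len(history)
    t0 = t // 2                      # discard the initial half as transient
    return sum(history[t0:]) / (t - t0)
```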
This is the average of the observable up to the time step t, with the initial half discarded as containing transient. If SGD drives the distribution of the model parameters to stationarity at long time, then
$$\lim_{t \to \infty} \overline{\mathcal{O}}(t) = \langle \mathcal{O} \rangle \,. \tag{16}$$
FIRST FLUCTUATION-DISSIPATION RELATION AND EQUILIBRATION
In order to assess the proximity to stationarity, define
$$\mathcal{O}_{\mathrm{L}} \equiv \theta \cdot \nabla f^{\mathcal{B}} \quad \text{and} \quad \mathcal{O}_{\mathrm{R}} \equiv \frac{1+\mu}{2(1-\nu)}\, \eta\, v^2 \tag{17}$$
(with v replaced by −∇f^B for SGD without momentum).⁴ Both of these observables can easily be measured on the fly at each time step during training and, according to the relation (FDR1'), the running averages of these two observables should converge to each other upon equilibration.
Figure 1: Approaches toward stationarity during the initial trainings for the MLP on the MNIST data (a) and for the CNN on the CIFAR-10 data (b). Top panels depict the half-running average f̄^B(t) (dark green) and the instantaneous value f^B(t) (light green) of the mini-batch loss. Bottom panels depict the convergence of the half-running averages of the observables O_L = θ·∇f^B and O_R = [(1+µ)/(2(1−ν))] ηv², whose stationary-state averages should agree according to the relation (FDR1').
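The on-the-fly measurement is straightforward to implement. The following PyTorch-style sketch is our illustration, not code from the paper; it assumes a `model` whose gradients have just been populated by `loss.backward()`, and it handles the momentum-free case by using v = −∇f^B (for SGD with momentum one would instead read the optimizer's velocity buffers):

```python
import torch

def fdr1_observables(model, eta, mu=0.0, nu=0.0):
    """O_L = theta . grad f^B and O_R = (1+mu)/(2(1-nu)) * eta * ||v||^2
    for the current mini-batch; call right after loss.backward()."""
    o_l, v_sq = 0.0, 0.0
    for p in model.parameters():
        if p.grad is not None:
            o_l += torch.sum(p.detach() * p.grad).item()
            v_sq += torch.sum(p.grad ** 2).item()  # v = -grad f^B when mu = nu = 0
    o_r = (1.0 + mu) / (2.0 * (1.0 - nu)) * eta * v_sq
    return o_l, o_r
```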
In order to verify this claim, we first train the model with the learning rate η = 0.1 for $\hat{t}^{\,\mathrm{total}}_{\mathrm{epoch}} = 100$ epochs, that is, for $t_{\mathrm{total}} = \frac{N_s}{|\mathcal{B}|}\, \hat{t}^{\,\mathrm{total}}_{\mathrm{epoch}} = 100\, \frac{N_s}{|\mathcal{B}|}$ time steps. As shown in the figure 1, the observables Ō_L(t) and Ō_R(t) converge to each other. We then take the model at the end of the initial 100-epoch training and sequentially train it further at various learning rates η (see Appendix B). The observables Ō_L(t) and Ō_R(t) again converge to each other, as plotted in the figure 2. Note that the smaller the learning rate is, the longer it takes to equilibrate.
SECOND FLUCTUATION-DISSIPATION RELATION AND SHAPE OF LOSS-FUNCTION LANDSCAPE
In order to assess the loss-function landscape information from the relation (FDR2), define
$$\mathcal{O}_{\mathrm{FB}} \equiv (1-\nu)\, (\nabla f)^2 - \mu\, v \cdot \nabla f^{\mathcal{B}} \tag{18}$$
(with the second term nonexistent for SGD without momentum).⁵ Note that (∇f)² is a full-batch - not mini-batch - quantity. Given its computational cost, here we measure this first term only at the end of each epoch and take the half-running average over these sparse sample points, discarding the initial half of the run.
The half-running average of the full-batch observable O_FB at the end of sufficiently long training, which is a good proxy for ⟨O_FB⟩, is plotted in the figure 3 as a function of the learning rate η. As predicted by the relation (FDR2), at small learning rates η, the observable Ō_FB approaches zero; its slope - divided by ⟨Tr C⟩ if preferred - measures the magnitude of the Hessian matrix, component-wise averaged over directions in which the noise preferentially fluctuates. Meanwhile, nonlinearity at higher learning rates η measures the degree of anharmonicity experienced over the distribution p_ss(θ). We see that anharmonic effects are pronounced especially for the CNN on the CIFAR-10 data even at moderately small learning rates. This invalidates the use of the quadratic harmonic approximation for the loss-function landscape and/or the assumption of the constant noise matrix for this model except at very small learning rates.
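Given such measurements, the small-η slope can be extracted by a least-squares fit through the origin, since FDR2 gives ⟨O_FB⟩ ≈ (η/2)⟨Tr HC⟩ as η → 0. A minimal sketch (our illustration; `etas` and `o_fb` are assumed to hold the learning rates and the corresponding long-time averages Ō_FB, restricted to the linear regime):

```python
import numpy as np

def trace_hc_from_slope(etas, o_fb):
    """Estimate <Tr H C> from <O_FB> measured at several small learning rates:
    least-squares slope of a line through the origin, times 2."""
    etas, o_fb = np.asarray(etas), np.asarray(o_fb)
    slope = np.sum(etas * o_fb) / np.sum(etas ** 2)
    return 2.0 * slope
```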
FIRST FLUCTUATION-DISSIPATION RELATION AND LEARNING-RATE SCHEDULES
Saturation of the relation (FDR1) suggests the learning stationarity, at which point it might be wise to decrease the learning rate η. Such scheduling is often carried out in an ad hoc manner but we can now algorithmize this procedure as follows:
1. Evaluate the half-running averages Ō_L(t) and Ō_R(t) at the end of each epoch.
2. If |Ō_L(t)/Ō_R(t) − 1| < X, then decrease the learning rate as η → (1 − Y)η and also set t = 0 for the purpose of evaluating half-running averages.
Here, two scheduling hyperparameters X and Y are introduced, which control the threshold for saturation of the relation (FDR1) and the amount of decrease in the learning rate, respectively. A minimal code sketch of this scheduling loop is given below.
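The following sketch reuses the `half_running_average` helper sketched earlier (our illustration, not the authors' released code); it is meant to be called at the end of each epoch with the per-step histories of O_L and O_R accumulated since the last reset:

```python
def adaptive_fdr1_schedule(eta, o_l_history, o_r_history, X=0.01, Y=0.1):
    """Decrease the learning rate once the half-running averages of O_L and
    O_R agree to within a fraction X, signalling approximate stationarity
    via the relation (FDR1'). Returns the (possibly updated) learning rate
    and whether the histories should be reset (i.e., t = 0)."""
    o_l_bar = half_running_average(o_l_history)
    o_r_bar = half_running_average(o_r_history)
    if abs(o_l_bar / o_r_bar - 1.0) < X:
        return (1.0 - Y) * eta, True   # decay the learning rate, reset averages
    return eta, False
```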
Plotted in the figure 4 are results for SGD without momentum, with the Xavier initialization (Glorot & Bengio, 2010) and training through (i) preset training schedule with decrease of the learning rate by a factor of 10 for each 100 epochs, (ii) an adaptive scheduler with X = 0.01 (1% threshold) and Y = 0.1 (10% decrease), and (iii) the AMSGrad algorithm (J. Reddi et al., 2018) with the default hyperparameters. The adaptive scheduler attains comparable accuracies with the preset scheduling at long time and outperforms AMSGrad (see Appendix C for additional simulations). These two scheduling methods span different subspaces of all the possible schedules. The adaptive scheduling method proposed herein has a theoretical grounding and in practice much less dimensionality for tuning of scheduling hyperparameters than the presetting method, thus ameliorating the optimization of scheduling hyperparameters. The systematic comparison between the two scheduling methods for state-of-the-art architectures, and also the comparison with the AMSGrad algorithm for natural language processing tasks, could be a worthwhile avenue to pursue in the future.
CONCLUSION
In this paper, we have derived the fluctuation-dissipation relations with no assumptions other than stationarity of the probability distribution. These relations hold exactly even when the noise is non-Gaussian and the loss function is nonconvex. The relations have been empirically verified and used to probe the properties of the loss-function landscapes for the simple models. The relations further have resulted in the algorithm to adaptively set learning-rate schedule on the fly rather than presetting it in an ad hoc manner. In addition to systematically testing the performance of this adaptive scheduling algorithm, it would be interesting to investigate non-Gaussianity and nonconvexity in more detail through higher-point observables, both analytically and numerically. It would also be interesting to further elucidate the physics of machine learning by extending our formalism to incorporate nonstationary dynamics, linearly away from stationarity (Onsager, 1931; Green, 1954; Kubo, 1957) and beyond (Jarzynski, 1997; Crooks, 1999), so that it can in particular properly treat overfitting cascading dynamics and time-dependent sample distributions.
ACKNOWLEDGMENTS
The author thanks Ludovic Berthier, Léon Bottou, Guy Gur-Ari, Kunihiko Kaneko, Ari Morcos, Dheevatsa Mudigere, Yann Ollivier, Yuandong Tian, and Mark Tygert for discussions. Special thanks go to Daniel Adam Roberts who prompted the practical application of the fluctuationdissipation relations, leading to the adaptive method in Section 3.3.
A SGD WITH MOMENTUM AND DAMPENING
For SGD with momentum µ and dampening ν, the update equation is given by
$$v(t+1) = \mu\, v(t) - (1-\nu)\, \nabla f^{\mathcal{B}}[\theta(t)] \,, \tag{19}$$
$$\theta(t+1) = \theta(t) + \eta\, v(t+1) \,. \tag{20}$$
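For concreteness, one step of this update can be sketched as follows (our illustration; `grad_fB` is an assumed callable returning the mini-batch gradient at θ):

```python
import numpy as np

def sgd_momentum_step(theta, v, grad_fB, eta, mu=0.9, nu=0.0):
    """One update of equations (19)-(20):
    v <- mu*v - (1-nu)*grad f^B(theta);  theta <- theta + eta*v."""
    v = mu * v - (1.0 - nu) * grad_fB(theta)
    theta = theta + eta * v
    return theta, v
```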
Here v = {v_i}_{i=1,...,P} is the velocity and η > 0 the learning rate; SGD without momentum is the special case with µ = 0. Again hypothesizing the existence of a stationary-state distribution p_ss(θ, v), the stationary-state average of an observable O(θ, v) is defined as
$$\langle \mathcal{O}(\theta, v) \rangle \equiv \int \mathrm{d}\theta\, \mathrm{d}v\; p_{\mathrm{ss}}(\theta, v)\, \mathcal{O}(\theta, v) \,. \tag{21}$$
Just as in the main text, from the assumed stationarity follows the master equation for SGD with momentum and dampening
$$\langle \mathcal{O}(\theta, v) \rangle = \left\langle \left\langle\!\left\langle \mathcal{O}\!\left[\theta + \eta \left\{\mu v - (1-\nu)\nabla f^{\mathcal{B}}(\theta)\right\},\; \mu v - (1-\nu)\nabla f^{\mathcal{B}}(\theta)\right] \right\rangle\!\right\rangle_{\mathrm{m.b.}} \right\rangle \,. \tag{22}$$
For the linear observables,
$$\langle v \rangle = \mu \langle v \rangle - (1-\nu) \langle \nabla f(\theta) \rangle \tag{23}$$
and
$$\langle \theta \rangle = \langle \theta \rangle + \eta \left[\mu \langle v \rangle - (1-\nu) \langle \nabla f(\theta) \rangle\right] = \langle \theta \rangle + \eta \langle v \rangle \,, \tag{24}$$
thus
$$\langle v \rangle = 0 \quad \text{and} \quad \langle \nabla f \rangle = 0 \,. \tag{25}$$
For the quadratic observables,
$$\langle v_i v_j \rangle = \mu^2 \langle v_i v_j \rangle + (1-\nu)^2 \langle C_{i,j} \rangle - (1-\nu)\mu \left[\langle v_i (\partial_j f) \rangle + \langle (\partial_i f)\, v_j \rangle\right] \,, \tag{26}$$
$$\langle v_i \theta_j \rangle - \eta \langle v_i v_j \rangle = \mu \langle v_i \theta_j \rangle - (1-\nu) \langle (\partial_i f)\, \theta_j \rangle \,, \tag{27}$$
and
$$(1-\nu) \left[\langle \theta_i (\partial_j f) \rangle + \langle (\partial_i f)\, \theta_j \rangle\right] - \mu \left[\langle \theta_i v_j \rangle + \langle v_i \theta_j \rangle\right] = \eta \langle v_i v_j \rangle \,. \tag{28}$$
Note that the relations (26) and (27) are trivially satisfied at each time step if the left-hand side observables are evaluated at one step ahead and thus their being satisfied for running averages has nothing to do with equilibration [the same can be said about the relation (23)]; the only nontrivial relation is the equation (28), which is a consequence of setting ⟨θ_i θ_j⟩ constant of time. After taking traces and some rearrangement, we obtain the relation (FDR1') in the main text.
For the full-batch loss function, the algebra similar to the one in the main text yields
$$(1-\nu) \left\langle (\nabla f)^2 \right\rangle - \mu \langle v \cdot \nabla f \rangle = \frac{\eta}{2} \sum_{i,j=1}^{P} \left\langle H_{i,j} \left\{ (1-\nu)^2 C_{i,j} - \mu(1-\nu)\left[v_i (\partial_j f) + (\partial_i f)\, v_j\right] + \mu^2 v_i v_j \right\} \right\rangle + O(\eta^2) \,. \tag{29}$$
B MODELS AND SIMULATION PROTOCOLS
B.1 MLP ON MNIST THROUGH SGD WITHOUT MOMENTUM
The MNIST training data consist of N s = 60000 black-white images of hand-written digits with 28-by-28 pixels (LeCun et al., 1998). We preprocess the data through an affine transformation such that their mean and variance (over both the training data and pixels) are zero and one, respectively.
Our multilayer perceptron (MLP) consists of a 784-dimensional input layer followed by a hidden layer of 200 neurons with ReLU activations, another hidden layer of 200 neurons with ReLU activations, and a 10-dimensional output layer with the softmax activation. The model performance is evaluated by the cross-entropy loss supplemented by the L²-regularization term (1/2)λθ² with the weight decay λ = 0.01.
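A minimal PyTorch sketch of this architecture (our illustration, not the authors' code; the Xavier initialization and the L² term described above would be applied separately, e.g. via `nn.init.xavier_uniform_` on each weight and via `weight_decay=0.01` in `torch.optim.SGD`, which adds λθ to the gradient and matches the (1/2)λθ² loss term):

```python
import torch.nn as nn

mlp = nn.Sequential(                  # 784 -> 200 -> 200 -> 10, as described
    nn.Flatten(),
    nn.Linear(784, 200), nn.ReLU(),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, 10),               # softmax is folded into the loss below
)
criterion = nn.CrossEntropyLoss()     # cross-entropy on the logits
```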
Throughout the paper, the MLP is trained on the MNIST data through SGD without momentum. The data are shuffled at each epoch with the mini-batch size |B| = 100.
The MLP is initialized through the Xavier method (Glorot & Bengio, 2010) and trained for $\hat{t}^{\,\mathrm{total}}_{\mathrm{epoch}} = 100$ epochs with the learning rate η = 0.1. We then sequentially train it with $(\eta, \hat{t}^{\,\mathrm{total}}_{\mathrm{epoch}})$ = (0.05, 500) → (0.02, 500) → (0.01, 500) → (0.005, 1000) → (0.003, 1000). This sequential-run protocol is carried out with 4 distinct seeds for the random-number generator used in data shuffling, all starting from the common model parameter attained at the end of the initial 100-epoch run. The figure 2 depicts trajectories for one particular seed, while the figure 3 plots means and error bars over these distinct seeds.
B.2 CNN ON CIFAR-10 THROUGH SGD WITH MOMENTUM
The CIFAR-10 training data consist of N s = 50000 color images of objects -divided into ten categories -with 32-by-32 pixels in each of 3 color channels, each pixel ranging in [0, 1] (Krizhevsky & Hinton, 2009). We preprocess the data through uniformly subtracting 0.5 and multiplying by 2 so that each pixel ranges in [−1, 1].
In order to describe the architecture of our convolutional neural network (CNN) in detail, let us associate a tuple [F, C, S, P ; M ] to a convolutional layer with filter width F , a number of channels C, stride S, and padding P , followed by ReLU activations and a max-pooling layer of width M . Then, as in the demo at Karpathy (2014), our CNN consists of a (32, 32, 3) input layer followed by a convolutional layer with [5, 16, 1, 2; 2], another convolutional layer with [5, 20, 1, 2; 2], yet another convolutional layer with [5, 20, 1, 2; 2], and finally a fully-connected 10-dimensional output layer with the softmax activation. The model performance is evaluated by the cross-entropy loss supplemented by the L 2 -regularization term 1 2 λθ 2 with the weight decay λ = 0.01. Throughout the paper (except in Section 3.3 where the adaptive scheduling method is tested for SGD without momentum), the CNN is trained on the CIFAR-10 data through SGD with momentum µ = 0.9 and dampening ν = 0. The data are shuffled at each epoch with the mini-batch size |B| = 100.
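Under the tuple notation above, a minimal PyTorch sketch of this CNN (our illustration, not the authors' code; initialization and the L² term are again handled separately, as for the MLP):

```python
import torch.nn as nn

def conv_block(in_c, out_c):
    """[F=5, C=out_c, S=1, P=2; M=2] in the paper's tuple notation."""
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, kernel_size=5, stride=1, padding=2),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

cnn = nn.Sequential(
    conv_block(3, 16),                # 32x32 -> 16x16
    conv_block(16, 20),               # 16x16 -> 8x8
    conv_block(20, 20),               # 8x8   -> 4x4
    nn.Flatten(),
    nn.Linear(20 * 4 * 4, 10),        # softmax folded into the cross-entropy loss
)
```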
The CNN is initialized through the Xavier method (Glorot & Bengio, 2010) and trained for $\hat{t}^{\,\mathrm{total}}_{\mathrm{epoch}} = 100$ epochs with the learning rate η = 0.1. We then sequentially train it with $(\eta, \hat{t}^{\,\mathrm{total}}_{\mathrm{epoch}})$ = (0.05, 200) → (0.02, 200) → (0.01, 200) → (0.005, 400) → (0.003, 400) → (0.002, 400) → (0.0015, 400) → (0.001, 400) → (0.0005, 800) → (0.00025, 800) → (0.0001, 800). At each junction of the sequence, the velocity v is zeroed. This sequential-run protocol is carried out with 16 distinct seeds for the random-number generator used in data shuffling, all starting from the common model parameter attained at the end of the initial 100-epoch run. The figure 2 depicts trajectories for one particular seed, while the figure 3 plots means and error bars over these distinct seeds.
C ADDITIONAL SIMULATIONS
C.1 ADAM VERSUS AMSGRAD
Plotted in the figure S1 are the comparisons between Adam (Kingma & Ba, 2014) and AMSGrad (J. Reddi et al., 2018) algorithms with the default hyperparameters α = 10⁻³, (β₁, β₂) = (0.9, 0.999), and ε = 10⁻⁸. The AMSGrad algorithm marginally outperforms the Adam algorithm for the tasks at hand and thus the results with the AMSGrad are presented in the main text.
C.2 INITIAL ACCURACY GAIN WITH DIFFERENT SCHEDULING HYPERPARAMETERS
In the figure 4(a) for the MNIST classification task with the MLP, the proposed adaptive method with the scheduling hyperparameters X = 0.01 and Y = 0.1 outperforms the AMSGrad algorithm in terms of accuracy attained at long time and also exhibits a quick initial convergence. In the figure 4(b) for the CIFAR-10 classification task with the CNN, however, while the proposed adaptive method attains better accuracy at long time, its initial accuracy gain is visibly slower than the AMSGrad algorithm. This lag in initial accuracy gain can be ameliorated by choosing another combination of the scheduling hyperparameters, e.g., X = 0.1 and Y = 0.3, at the expense of degradation in generalization accuracy with respect to the original choice X = 0.01 and Y = 0.1. See the figure S2.
Figure S2: Comparison of preset training schedule (black) and adaptive training schedule (purple) - now with the scheduling hyperparameters X = 0.1 and Y = 0.3 - employing SGD without momentum, and the AMSGrad algorithm (green), for the CNN on the CIFAR-10 data with the same initial seed as in the main text (a) and three different initial seeds (b-d). From top to bottom, plotted are the learning rate η, the full-batch training loss f, and prediction accuracies on the training-set images (solid) and the 10000 test-set images (dashed).
¹ A connected noise covariant matrix, C̃_{i,j}(θ) ≡ C_{i,j}(θ) − [∂_i f(θ)][∂_j f(θ)], will not appear in fluctuation-dissipation relations below but scales nicely with mini-batch sizes as ∝ (1/|B|)(1 − |B|/N_s) (Li et al., 2017).
Figure 2: Approaches toward stationarity during the sequential runs for various learning rates η, seen through the half-running averages of the observables O_L = θ·∇f^B (solid) and O_R = [(1+µ)/(2(1−ν))] ηv² (dotted light-colored). They agree at sufficiently long times but the relaxation time to reach such a stationary regime increases as the learning rate η decreases.
Figure 3: The stationary-state average of the full-batch observable O_FB as a function of the learning rate η, estimated through half-running averages. Dots and error bars denote mean values and 95% confidence intervals over several distinct runs, respectively. The straight red line connects the origin and the point with the smallest η explored. (a) For the MLP on the MNIST data, linear dependence on η for η ≲ 0.01 supports the validity of the harmonic approximation there. (b) For the CNN on the CIFAR-10 data, anharmonicity is pronounced even down to η ∼ 0.001.
Figure 4: Comparison of preset training schedule (black) and adaptive training schedule (blue), employing SGD without momentum both for the MLP on the MNIST data (a) and the CNN on the CIFAR-10 data (b), along with the AMSGrad algorithm (green). From top to bottom, plotted are the learning rate η, the full-batch training loss f, and prediction accuracies on the training-set images (solid) and the 10000 test-set images (dashed).
Figure S1: Comparison of AMSGrad (green) and Adam (orange) algorithms for the MLP on the MNIST data (a) and the CNN on the CIFAR-10 data (b). Top rows plot the full-batch training loss f while bottom rows plot prediction accuracies on the training-set images (solid) and the 10000 test-set images (dashed).
² One may try to evade this by employing the 1/|B|-scaling of the connected noise covariant matrix, but that would then enforce |B| → 0⁺ as dt → 0⁺, which is unphysical.
³ Heuristically, (∇f)² ∼ ηHC for small η due to the relation (FDR2), and one may thus neglect the difference between C̃ and C, and hence justify the naive use of SDE, when ηH ≪ 1 and the Gaussian-noise assumption holds. In the similar vein, the reference Li et al. (2015) proves faster convergence between SGD and SDE when the term proportional to η∇(∇f)² is added to the gradient.
⁴ If the model parameter θ happens to fluctuate around large values, for numerical accuracy, one may want to replace O_L = θ·∇f^B by (θ − θ_c)·∇f^B where a constant vector θ_c approximates the vector around which θ fluctuates at long time.
⁵ For the second term, in order to ensure that $\lim_{t \to \infty} \overline{v \cdot \nabla f^{\mathcal{B}}}(t) = \lim_{t \to \infty} \overline{v \cdot \nabla f}(t)$, we measure the half-running average of v(t)·∇f^B[θ(t)] and not v(t+1)·∇f^B[θ(t)].
REFERENCES
Jing An, Jianfeng Lu, and Lexing Ying. Stochastic modified equations for the asynchronous stochastic gradient descent. arXiv preprint arXiv:1805.08244, 2018.
Léon Bottou. Stochastic gradient learning in neural networks. Proceedings of Neuro-Nîmes, 91(8):12, 1991.
Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223-311, 2018.
Robert Brown. XXVII. A brief account of microscopical observations made in the months of June, July and August 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. The Philosophical Magazine, 4(21):161-173, 1828.
Pratik Chaudhari and Stefano Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. arXiv preprint arXiv:1710.11029, 2017.
Gavin E Crooks. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Physical Review E, 60(3):2721-2726, 1999.
Albert Einstein. Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Annalen der Physik, 322(8):549-560, 1905.
Crispin Gardiner. Stochastic methods, volume 4. Springer Berlin, 2009.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249-256, 2010.
Melville S Green. Markoff random processes and the statistical mechanics of time-dependent phenomena. II. Irreversible processes in fluids. The Journal of Chemical Physics, 22(3):398-413, 1954.
Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations, 2018.
Christopher Jarzynski. Nonequilibrium equality for free energy differences. Physical Review Letters, 78(14):2690-2693, 1997.
Stanislaw Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in SGD. arXiv preprint arXiv:1711.04623, 2017.
Andrej Karpathy. ConvNetJS CIFAR-10 demo. https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html, 2014. Last accessed on 2018-09-25.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
Ryogo Kubo. Statistical-mechanical theory of irreversible processes. I. General theory and simple applications to magnetic and conduction problems. Journal of the Physical Society of Japan, 12(6):570-586, 1957.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Todd K Leen and John E Moody. Weight space probability densities in stochastic learning: I. Dynamics and equilibria. In Advances in Neural Information Processing Systems, pp. 451-458, 1993.
Chris Junchi Li, Lei Li, Junyang Qian, and Jian-Guo Liu. Batch size matters: A diffusion approximation framework on nonconvex stochastic gradient descent. arXiv preprint arXiv:1705.07562v1, 2017.
Qianxiao Li, Cheng Tai, and Weinan E. Stochastic modified equations and adaptive stochastic gradient algorithms. arXiv preprint arXiv:1511.06251, 2015.
Stephan Mandt, Matthew D Hoffman, and David M Blei. Continuous-time limit of stochastic gradient descent revisited. In 8th NIPS Workshop on Optimization for Machine Learning, 2015.
Stephan Mandt, Matthew D Hoffman, and David M Blei. Stochastic gradient descent as approximate Bayesian inference. The Journal of Machine Learning Research, 18(1):4873-4907, 2017.
Gaétan Marceau-Caron and Yann Ollivier. Natural Langevin dynamics for neural networks. In International Conference on Geometric Science of Information, pp. 451-459. Springer, 2017.
Zachary Nado, Jasper Snoek, Roger Grosse, David Duvenaud, Bowen Xu, and James Martens. Stochastic gradient Langevin dynamics that exploit neural network structure, 2018. URL https://openreview.net/forum?id=ry-Se9kvG.
Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614, 2014.
Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pp. 5947-5956, 2017.
Gerardo Odriozola and Ludovic Berthier. Equilibrium equation of state of a hard sphere binary mixture at very large densities using replica exchange Monte Carlo simulations. The Journal of Chemical Physics, 134(5):054504, 2011.
Lars Onsager. Reciprocal relations in irreversible processes. I. Physical Review, 37(4):405-426, 1931.
Günter Radons, Heinz Georg Schuster, and D Werner. Fokker-Planck description of learning in backpropagation networks. In International Neural Network Conference, pp. 993-996. Springer, 1990.
Hannes Risken. The Fokker-Planck equation: methods of solution and applications. Springer, 1984.
Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400-407, 1951.
Ludger Santen and Werner Krauth. Absence of thermodynamic phase transition in a model glass former. Nature, 405(6786):550-551, 2000.
Samuel L Smith and Quoc V Le. A Bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451, 2018.
Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, and Nathan Srebro. The implicit bias of gradient descent on separable data. In International Conference on Learning Representations, 2018.
Yee Whye Teh, Alexandre H Thiery, and Sebastian J Vollmer. Consistency and fluctuations for stochastic gradient Langevin dynamics. The Journal of Machine Learning Research, 17(1):193-225, 2016.
Nicolaas Godfried Van Kampen. Stochastic processes in physics and chemistry, volume 1. Elsevier, 1992.
Marian Von Smoluchowski. Zur kinetischen Theorie der Brownschen Molekularbewegung und der Suspensionen. Annalen der Physik, 326(14):756-780, 1906.
Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, pp. 681-688, 2011.
Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, and Jinwen Ma. The anisotropic noise in stochastic gradient descent: Its behavior of escaping from minima and regularization effects. arXiv preprint arXiv:1803.00195, 2018.
226,254,365 | Teaching with Commentaries | Effective training of deep neural networks can be challenging, and there remain many open questions on how to best learn these models. Recently developed methods to improve neural network training examine teaching: providing learned information during the training process to improve downstream model performance. In this paper, we take steps towards extending the scope of teaching. We propose a flexible teaching framework using commentaries, meta-learned information helpful for training on a particular task or dataset. We present an efficient and scalable gradient-based method to learn commentaries, leveraging recent work on implicit differentiation. We explore diverse applications of commentaries, from learning weights for individual training examples, to parameterizing label-dependent data augmentation policies, to representing attention masks that highlight salient image regions. In these settings, we find that commentaries can improve training speed and/or performance and also provide fundamental insights about the dataset and training process. * Correspondence to: [email protected]. Work done while interning at Google. 4. We investigate label-dependent commentaries to define a data augmentation policy, and obtain insights into the design of effective augmentation strategies and improved performance on benchmark tasks as compared to baselines. 5. We parameterize commentaries as attention masks to find important regions of images. Through qualitative and quantitative evaluation, we show these masks identify salient image regions and can be used to improve the robustness of neural networks to spurious background correlations. Teaching with Commentaries. Definition: We define a commentary to be learned information helpful for (i) training a model on a task or (ii) providing insights on the learning process. Formally, let t(x, y, i; φ) denote a commentary that is a function of a data point x, prediction target y, and iteration of training i, with parameters φ. The commentary may be represented in a tabular fashion for every combination of input arguments, or using a neural network that takes these arguments as inputs. The commentary is used in the training of a student network with parameters θ, denoted n(x; θ). | [ 59551640, 202712906, 170078603, 3162051 ] | Teaching with Commentaries
Aniruddh Raghu
Maithra Raghu
Google Research
Simon Kornblith
Google Research
David Duvenaud
Google Research
University of Toronto
Geoffrey Hinton
Google Research
University of Toronto
Teaching with Commentaries
Effective training of deep neural networks can be challenging, and there remain many open questions on how to best learn these models. Recently developed methods to improve neural network training examine teaching: providing learned information during the training process to improve downstream model performance. In this paper, we take steps towards extending the scope of teaching. We propose a flexible teaching framework using commentaries, meta-learned information helpful for training on a particular task or dataset. We present an efficient and scalable gradient-based method to learn commentaries, leveraging recent work on implicit differentiation. We explore diverse applications of commentaries, from learning weights for individual training examples, to parameterizing label-dependent data augmentation policies, to representing attention masks that highlight salient image regions. In these settings, we find that commentaries can improve training speed and/or performance and also provide fundamental insights about the dataset and training process. * Correspondence to: [email protected]. Work done while interning at Google. arXiv:2011.03037v1 [cs.LG] 5 Nov 2020
Introduction
Training, regularizing, and understanding complex neural network models is challenging. There remain central open questions on making training faster and more data-efficient [11,23,24], ensuring better generalisation [32] and improving transparency and robustness [2,20]. A promising approach for addressing these questions is learning to teach [35], in which learned auxiliary information about a task is provided to a neural network to inform the training process and help downstream objectives. Examples include providing auxiliary training targets [13,21,22] and reweighting training examples to emphasize important datapoints [5,9,25,27].
Learning to teach approaches have achieved promising results in vision and language applications [9, 25, 27] using a handful of specific modifications to the training process. In this paper, we take steps towards generalizing these approaches, introducing a flexible and effective learning-to-teach framework using commentaries. Commentaries represent meta-learned information helpful for training a model on a task. We demonstrate that commentaries can be used effectively for applications ranging from speeding up training to gaining insights into the neural network model. Specifically, our contributions are:
1. We formalize the notion of commentaries to help provide a unifying framework for learning meta-information to improve network training and examine model learning.
2. We present a gradient-based method to learn commentaries by optimizing a student network's validation loss, leveraging recent work in implicit differentiation to scale to larger models.
3. We use commentaries to define example-weighting curricula, a common method of teaching neural networks. We show that these learned commentaries hold interpretable insights, lead to speedups in training, and improve performance on few-shot learning tasks.
4. We investigate label-dependent commentaries to define a data augmentation policy, and obtain insights into the design of effective augmentation strategies and improved performance on benchmark tasks as compared to baselines.
5. We parameterize commentaries as attention masks to find important regions of images. Through qualitative and quantitative evaluation, we show these masks identify salient image regions and can be used to improve the robustness of neural networks to spurious background correlations.

Teaching with Commentaries

Definition: We define a commentary to be learned information helpful for (i) training a model on a task or (ii) providing insights on the learning process. Formally, let t(x, y, i; φ) denote a commentary that is a function of a data point x, prediction target y, and iteration of training i, with parameters φ. The commentary may be represented in a tabular fashion for every combination of input arguments, or using a neural network that takes these arguments as inputs. The commentary is used in the training of a student network with parameters θ, denoted n(x; θ).
Learning Commentaries
We begin by introducing algorithms for learning commentaries. Throughout, we denote the training set as D T , the validation set as D V and the loss function (e.g. cross-entropy) as L. With θ as the parameters of the student network and φ the commentary parameters, we letθ,φ be the respective optimized parameters. We seek to findφ such that the student network's validation loss, L V , is minimized. As the commentary is used during the training of the student network, L V implicitly depends on φ, enabling the use of gradient-based optimization algorithms to findφ.
Algorithm 1: Backpropagation Through Training. When student network training has a small memory footprint, we obtain gradients for the commentary parameters by backpropagating through the training process:
(1) Initialize commentary parameters φ_0.
(2) For T steps of meta-training:
(i) Initialize student network n(x; θ) with parameters θ_0.
(ii) Train the student network with N steps of gradient descent to optimize

L_T(θ, φ) = E_{x,y∼D_T}[ L̃(n(x; θ), t(·; φ), y) ],    (1)

where L̃ is a loss function adjusted from L to incorporate the commentary, and L_T(θ, φ) is the expected adjusted loss over the training data.
(iii) Output: θ̂, the optimized parameters of the student network. (Note this is implicitly a function of φ, i.e., θ̂(φ).)
(iv) Compute the validation loss

L_V(φ) = E_{x,y∼D_V}[ L(n(x; θ̂(φ)), y) ].    (2)

(v) Compute ∂L_V(φ)/∂φ by backpropagating through the N steps of student training (inner optimization), and update φ.
(3) Output: φ̂, the optimized parameters of the commentary.

We refer to meta-training as outer optimization, and training the student network as inner optimization.
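To make Algorithm 1 concrete, the sketch below shows one way to implement it in PyTorch with the higher library, which the paper uses for differentiable inner loops. This is an illustrative reconstruction rather than the authors' released code: the commentary network's (x, step) input signature, the optimizers, and the loader handling are our assumptions.

import torch
import torch.nn.functional as F
import higher

def learn_commentary(commentary_net, make_student, train_loader, val_loader,
                     T=20, N=500):
    # Outer optimizer updates the commentary parameters phi.
    meta_opt = torch.optim.Adam(commentary_net.parameters(), lr=1e-3)
    for _ in range(T):
        student = make_student()                    # fresh theta_0 each outer step
        inner_opt = torch.optim.Adam(student.parameters(), lr=1e-4)
        meta_opt.zero_grad()
        # higher makes the N inner updates differentiable w.r.t. phi.
        with higher.innerloop_ctx(student, inner_opt) as (fstudent, diffopt):
            for step, (x, y) in zip(range(N), train_loader):
                w = commentary_net(x, step)         # per-example weights t(x, i; phi)
                loss = (w * F.cross_entropy(fstudent(x), y,
                                            reduction='none')).mean()
                diffopt.step(loss)                  # theta_{k+1}, a function of phi
            # Unweighted validation loss; backprop through all N inner steps.
            x_v, y_v = next(iter(val_loader))
            val_loss = F.cross_entropy(fstudent(x_v), y_v)
            val_loss.backward()                     # gradients reach phi
        meta_opt.step()
    return commentary_net

The example-weighting form of L̃ is used here for concreteness; other commentary types swap in a different adjusted loss.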
Algorithm 2: Large-Scale Commentary Learning with Implicit Differentiation. When training the student model has a large memory footprint, backpropagating through student training to obtain exact gradients for commentary parameters is too memory-expensive. We therefore leverage the Implicit Function Theorem (IFT) and efficient inverse-Hessian approximation to obtain approximate gradients for commentary parameters, following Lorraine et al. [17].
The gradient of the validation loss w.r.t. the commentary parameters can be expressed as:

∂L_V/∂φ = ∂L_V/∂θ̂ × ∂θ̂/∂φ.    (3)

The first term on the right-hand side of equation 3 is simple to compute, but the second term is expensive. Under fixed-point and regularity assumptions on the student and commentary parameters θ̂(φ), φ, the IFT allows expressing this second term ∂θ̂/∂φ as the following product:

∂θ̂/∂φ = −[ ∂²L_T/(∂θ ∂θᵀ) ]⁻¹ × ∂²L_T/(∂θ ∂φᵀ) |_{θ̂(φ)},    (4)
i.e., a product of an inverse Hessian and a matrix of mixed partial derivatives. Following Lorraine et al. [17], we efficiently approximate this product using a truncated Neumann series and implicit vector-Jacobian products. This yields the following algorithm for learning the commentary:
(1) Initialize commentary parameters φ and student network parameters θ.
(2) For M steps:
(i) Compute the student network's training loss, L_T(θ, φ).
(ii) Compute the gradient of this loss w.r.t. the student parameters θ.
(iii) Perform a single gradient descent update on the parameters to obtain θ̂. (Note this is implicitly a function of φ, i.e., θ̂(φ).)
(iv) Compute the student network's validation loss, L_V(φ).
(v) Compute ∂L_V/∂θ̂.
(vi) Approximately compute ∂θ̂/∂φ with equation 4, using a truncated Neumann series with a single term and implicit vector-Jacobian products [17].
(vii) Compute the overall derivative ∂L_V/∂φ using (v) and (vi), and update φ.
(viii) Set θ ← θ̂.
(3) Output: φ̂, the optimized parameters of the commentary.
Since a single term in the Neumann series is sufficient for learning, this algorithm has similar time complexity to a single iteration of training. This approach scales to millions of commentary parameters, making gradient-based commentary learning practical on large models.
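The core of Algorithm 2 is the approximate hypergradient of equations (3)-(4). The sketch below shows the truncated Neumann approximation to the inverse-Hessian-vector product, composed with a mixed-partial vector-Jacobian product, in the spirit of Lorraine et al. [17]; the function name, argument conventions, and the use of the inner learning rate as the Neumann scaling are our assumptions.

import torch

def approx_hypergradient(train_loss, val_loss, theta, phi,
                         neumann_steps=1, alpha=1e-2):
    # v = dL_V / dtheta at the current student parameters.
    v = torch.autograd.grad(val_loss, theta, retain_graph=True)

    # Truncated Neumann series: H^{-1} v ~= alpha * sum_k (I - alpha*H)^k v,
    # evaluated with Hessian-vector products (no explicit Hessian).
    dtrain_dtheta = torch.autograd.grad(train_loss, theta, create_graph=True)
    p = [vi.clone() for vi in v]
    acc = [vi.clone() for vi in v]                  # k = 0 term
    for _ in range(neumann_steps):
        hvp = torch.autograd.grad(dtrain_dtheta, theta,
                                  grad_outputs=p, retain_graph=True)
        p = [pi - alpha * hi for pi, hi in zip(p, hvp)]
        acc = [ai + pi for ai, pi in zip(acc, p)]

    # Mixed-partials term of equation (4) via a vector-Jacobian product:
    # (d^2 L_T / dphi dtheta) applied to the Neumann-approximated vector.
    mixed = torch.autograd.grad(dtrain_dtheta, phi, grad_outputs=acc)
    # Overall hypergradient dL_V/dphi, with the minus sign from equation (4).
    return [-alpha * m for m in mixed]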
Commentaries for Example Weighting Curricula
Having outlined methods to learn commentaries, in this section, we now explore our first application: commentaries that encode a separate weight for each training example at each training iteration.
We specify these weights using a commentary neural network (or teacher network) t(x, i; φ) → [0, 1] that produces a weight for every training example at every iteration of training of the student network. When training a student network, using the notation of §2.1, the commentary is incorporated in the training loss as L̃ = t(x, i; φ) · L(n(x; θ), y), where L(·) is the original loss function for the task. The validation loss is unweighted.
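As a concrete illustration, the weighted training loss can be computed as below; this is a sketch, and the commentary network's encoding of the iteration i is an assumption.

import torch
import torch.nn.functional as F

def weighted_training_loss(student, commentary_net, x, y, iteration):
    # t(x, i; phi) in [0, 1]: one weight per example at this iteration.
    weights = commentary_net(x, iteration)              # shape: (batch,)
    per_example = F.cross_entropy(student(x), y, reduction='none')
    # L~ = t(x, i; phi) * L(n(x; theta), y); the validation loss stays unweighted.
    return (weights * per_example).mean()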
Synthetic Example: Rotated MNIST Digits
We first learn example weight curriculum commentaries on a synthetic MNIST binary classification problem. Each example in the dataset is a rotated MNIST digit '1', with a variable rotation angle that defines the class. We generate two datasets: the non-overlapping dataset and the overlapping dataset. In the non-overlapping dataset, the rotation angles for examples from class 1 and class 0 are drawn from the non-overlapping distributions Uniform[15, 45] and Uniform[−45, −15] respectively. In the overlapping dataset, the rotation angles are drawn from the overlapping distributions Uniform[−5, 30] and Uniform[−30, 5] respectively (Figure 1). We use two-block CNNs as both the commentary network and the student network. The commentary network takes as input the image and the iteration of student training, and outputs a weight for each example in the batch. We learn commentary parameters by backpropagating through student training (Algorithm 1, §2.1), and use 500 gradient steps for inner optimisation (i.e., N = 500). Implementation is with the higher library [7]. Further details in Appendix B.1.
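A sketch of how such a dataset could be generated is given below; this is our reconstruction for illustration, and the exact class assignment and preprocessing used in the paper may differ.

import torch
from torchvision import datasets
import torchvision.transforms.functional as TF

def make_rotated_ones(overlapping=False):
    mnist = datasets.MNIST('data', train=True, download=True)   # PIL images
    xs, ys = [], []
    for img, label in mnist:
        if label != 1:                   # keep only digit '1' images
            continue
        cls = len(ys) % 2                # alternate class assignment
        if overlapping:
            lo, hi = (-5, 30) if cls == 1 else (-30, 5)
        else:
            lo, hi = (15, 45) if cls == 1 else (-45, -15)
        angle = float(torch.empty(1).uniform_(lo, hi))
        xs.append(TF.to_tensor(TF.rotate(img, angle)))
        ys.append(cls)
    return torch.stack(xs), torch.tensor(ys)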
Results: Figure 1 visualises the two datasets and plots the learned example weights as a function of rotation at iteration 500 of the student training.
Commentaries for CIFAR10 and CIFAR100
We now learn example weighting curriculum commentaries on CIFAR10 and CIFAR100. The commentary network is again the two-block CNN architecture, and when training the commentary network, the student network is also a two-block CNN. We use Algorithm 1, §2.1 once more, with 1500 gradient steps in the inner optimization (N = 1500). For evaluation, the trained commentary network is used to produce example weights when training new student networks.

Example weighting commentaries improve learning speed. Figure 2 shows accuracy curves on the test sets of CIFAR10/100 for the two-block CNN student with example weight curricula (orange line), a baseline (green line, no example weights), and an ablation (blue line, example weights without curriculum structure). On both datasets, the networks trained using the curriculum commentaries obtain better performance than the baseline and ablation over approximately 25000 steps of training (10 epochs), and have superior learning curves.
Example weighting commentaries generalise to longer training times and across architectures. At training time, the commentary network was learned to produce example weights for the two-block CNN student over 1500 inner loop update steps. We see in Figure 2 that the learned example weights lead to student network learning speedups well beyond this point, suggesting generalizability of the commentaries to longer training times. In addition, when the same commentary network is used to produce example weights for ResNet-18/34 students (Figure 3), we also observe a learning speedup, suggesting that the commentaries are generalizable across architectures.
Commentaries for Few-Shot Learning
Finally, we use example weight commentaries for few-shot learning. We start with the MAML algorithm [6], which learns a student parameter initialization that allows fast adaptation on a new learning task using a support set of examples for that task. To incorporate example weighting in MAML, at training time, we jointly learn the MAML initialisation and a commentary network to provide per-example weights for each example in the support set, as a function of the inner loop step number. At test time, we follow the standard MAML procedure, and also incorporate the example weights when computing the support set loss and the resulting updates. Both networks use the 4-conv backbone structure from the original MAML paper. Details are in Appendix B.3.

We evaluate a standard MAML baseline and our commentary variant on standard few-shot learning benchmarks: (i) training/testing on MiniImageNet (MIN); and (ii) training on MIN and testing on CUB-200-2011 (CUB). Results are shown in Table 1, with each row specifying the experimental setting (N-way K-shot) and the dataset used for training/testing. In all experiments, incorporating example weighting can improve on the MAML baseline, suggesting the utility of these commentaries in few-shot learning. Further experiments on other benchmark datasets (CIFAR-FS/SVHN) showing similar trends are in the appendix (Table B.1).
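A sketch of the weighted inner-loop adaptation is shown below; it is our reconstruction on top of a generic MAML step with the higher library, and the commentary network's (x, y, step) signature is an assumption.

import torch
import torch.nn.functional as F
import higher

def weighted_maml_adaptation(student, commentary_net, support_x, support_y,
                             query_x, query_y, inner_steps=5, inner_lr=1e-2):
    inner_opt = torch.optim.SGD(student.parameters(), lr=inner_lr)
    with higher.innerloop_ctx(student, inner_opt) as (fstudent, diffopt):
        for step in range(inner_steps):
            # Per-example support weights, normalized to mean 1 for stability.
            w = commentary_net(support_x, support_y, step)
            w = w / w.mean()
            loss = (w * F.cross_entropy(fstudent(support_x), support_y,
                                        reduction='none')).mean()
            diffopt.step(loss)
        # The query loss trains both the initialization theta_0 and phi.
        return F.cross_entropy(fstudent(query_x), query_y)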
Figure 4: The learned blending augmentation is related to the class error rate. On digit 1s, a class with low average error rate, the learned augmentation blends in large amounts of other digits so as to maximize the learning signal. On digits with higher average error rates, such as 7s and 8s, the level of blending is very low so as to focus on classifying the class alone (rather than making the task more complex by adding another digit).
Learning to Blend Training Examples
We first consider an augmentation scheme where pairs of images are blended together with a proportion dependent on the classes of the two examples. At each training iteration, we:
• Sample two examples and their labels, (x 1 , y 1 ) and (x 2 , y 2 ) from the training set.
• Obtain the blending proportion λ = t(y_1, y_2; φ), and form a new image x_m = λx_1 + (1 − λ)x_2, with the class y_m formed equivalently.
• Use this blended example-label pair (x_m, y_m) when computing the training loss. To compute the validation loss, use only unblended examples from the validation set.

For classification problems with N classes, the teacher t(y_1, y_2; φ) outputs an N × N matrix. This augmentation scheme is inspired by mixup [33]. However, we blend with a deterministic proportion, depending on pairs of labels, rather than drawing a blending factor from a Beta distribution. In doing so, we more finely control the augmentation policy.

Augmentation Commentaries on MNIST: We learn an augmentation commentary model t on MNIST by directly backpropagating through the inner optimisation of a 2-block CNN student network (Algorithm 1, §2.1). In the learned augmentation, the error rate on class i is correlated (Pearson correlation = −0.54) with the degree of blending of other digits into an example of class i: a lower error rate on class i implies that other digits are blended more heavily into it. On MNIST, this means that the class with the lowest average error rate (class 1) has other digits blended into it significantly (Figure 4, left); classes with higher average error rates (e.g., class 7, class 8) rarely have other digits blended in (Figure 4, right). Further details in Appendix C.1.
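The blending scheme can be sketched as follows; the N × N grid parameterization mirrors the λ = 1 − 0.5 · sigmoid(φ_{i,j}) form described in Appendix C.1, and the soft-target handling is an assumption.

import torch
import torch.nn.functional as F

class BlendCommentary(torch.nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # One logit per ordered class pair; phi_{i,j} = 0 gives lambda = 0.75.
        self.phi = torch.nn.Parameter(torch.zeros(num_classes, num_classes))

    def forward(self, y1, y2):
        # lambda in (0.5, 1): the blend always favours the first image.
        return 1.0 - 0.5 * torch.sigmoid(self.phi[y1, y2])

def blend_batch(commentary, x1, y1, x2, y2, num_classes):
    lam = commentary(y1, y2).view(-1, 1, 1, 1)
    x_m = lam * x1 + (1.0 - lam) * x2
    y_m = (lam.view(-1, 1) * F.one_hot(y1, num_classes)
           + (1.0 - lam.view(-1, 1)) * F.one_hot(y2, num_classes)).float()
    return x_m, y_m   # train on (x_m, y_m); validate on unblended examples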
Augmentation Commentaries for CIFAR10 and CIFAR100
We next learn and evaluate augmentation commentaries on CIFAR10 and CIFAR100. We evaluate the effect of these augmentation commentaries on improving generalization performance for a standard student network architecture. Since this is a memory-intensive process, we use the implicit differentiation method (Algorithm 2, §2.1) to ensure computational tractability. We learn the commentaries jointly with a ResNet-18 student network. Once the commentary is learned, we fix it and train three new students with different random initializations to evaluate the commentary's efficacy.
Results: Table 2 shows test accuracy for different augmentation policies on CIFAR10 and 100. We compare the learned commentary to using only standard data augmentations for CIFAR10/100 (No commentary) and a random initialization for the commentary matrix (Random commentary). We also compare to mixup [33]. We observe that the learned commentary is competitive with mixup and improves on other baselines across both tasks. In the appendix, we compare to an ablation that destroys the structure of the learned commentary grid by shuffling it, and find that the unshuffled commentary does better (Table C.1).
Further Analysis: In Figure 5, we visualize: (i) for two CIFAR10 classes, the blending fractions (defined as (1 − λ)) associated with the other classes (left); and (ii) for two CIFAR100 classes, the blending fractions associated with the five most highly blended classes. For CIFAR10, we see that other classes that are visually similar and therefore sources of misclassification are blended more significantly. On CIFAR100, we see that within the top 5 blended classes, there are classes that are visually similar, but there are also blended classes that have far less visual similarity, which could be blended in to obtain more learning signal per-example. Further analysis in Appendix C.2.
Attention Mask Commentaries for Insights and Robustness
We now turn to a challenging task: studying whether commentaries can learn to identify key (robust) features in the data. We formalize this problem as one of learning commentaries which act as attention masks. We perform a qualitative empirical study, learning commentary attention masks on a variety of image datasets: an MNIST variant, CIFAR10/100, medical chest X-rays, and Caltech-UCSD Birds (CUB)-200-2011, where we find that the learned commentaries identify salient image regions. We perform quantitative experiments, which illustrate the effectiveness of attention mask commentaries over baselines and in ensuring robustness to spurious correlations. Formally, we learn a commentary network t(x; φ) → [i, j] to output the centre of a 2D Gaussian that is then computed and used (with predefined standard deviation depending on the input image size, see Appendix D) as a pixelwise mask for the input image before feeding it through a student network. We denote the mask based on t(x; φ) as m(x, t). Our goal is to learn masks that highlight the most important regions of the image for training and testing, so the masks are used both at train time and test time. We therefore have that L̃ = L = x-ent(n(x ⊙ m(x, t); θ), y). The commentary network is a UNet [26] with an output layer from KeypointNet [28]. Commentary parameters are learned using implicit differentiation and Algorithm 2, §2.1, and the student network is a ResNet-18.
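A sketch of the masking operation is given below; the Gaussian form follows the description above, while the exact normalization and broadcasting are our assumptions.

import torch

def gaussian_mask(centers, height, width, sigma):
    # centers: (batch, 2) predicted (row, col) mask centres from t(x; phi).
    rows = torch.arange(height, dtype=torch.float32).view(1, height, 1)
    cols = torch.arange(width, dtype=torch.float32).view(1, 1, width)
    ci = centers[:, 0].view(-1, 1, 1)
    cj = centers[:, 1].view(-1, 1, 1)
    d2 = (rows - ci) ** 2 + (cols - cj) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))   # (batch, H, W), peak value 1

def apply_mask(x, mask):
    # Pixelwise multiply; the same masking is used at train and test time.
    return x * mask.unsqueeze(1)                  # broadcast over channels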
Qualitative and Quantitative Analysis On Image Datasets
Masks for Coloured MNIST: We learn masks on a dataset where each image has two MNIST digits, coloured red and blue, with the red digit determining the label of the image. As seen in Figure 6 (left), the commentary selectively attends to the red digit and not the blue digit.

Masks for Chest X-rays: Using a dataset of chest X-rays [8], we train a student network to detect cardiomegaly, a condition where an individual's heart is enlarged. Learned masks are centered on the chest cavity, around the location of the heart (Figure 6), which is a clinically relevant region. These masks could be used in medical imaging problems to prevent models from relying on spurious background features [1, 30].
Masks for CIFAR10/100: The learned masks on CIFAR10/100 (Figure 6) attend to important image regions that define the class, such as the faces of animals/the baby, the wheels/body of vehicles, and the humps of the camel. In the appendix (Table D.1) we show quantitatively that the learned masks are superior to other masking baselines, and also provide further qualitative examples.
Mask Commentaries for Robustness to Background Correlations
Using the task introduced in Koh et al. [10], we now demonstrate that mask commentaries can provide robustness to spurious background correlations. We take the CUB-200-2011 dataset [29] and the associated fine-grained classification problem to classify the image as one of 200 bird species. Using the provided segmentation masks with the dataset, the background of each image is replaced with a background from the Places dataset [34]. For the training and validation sets, there is a specific mapping between the CUB class and the Places class for the background, but for the testing set, this mapping is permuted, so that the background features are now spuriously correlated with the actual image class.
We first learn an attention mask commentary network, and for evaluation, use this learned network when training a ResNet-18 student (pretrained on ImageNet, as in Koh et al. [10]). We assess student performance on the validation and test sets (with the same and different background mappings as the training set, respectively), considering three random seeds.

Results: Learned masks are shown in Figure 7; the masks are mostly focused on the bird in the image (though some also contain parts of the background). In a quantitative evaluation (Table 3), we see that using the masks helps model performance significantly on the test set as compared to a baseline that does not use masks. This suggests that the masks are indeed helping networks to rely on more robust features in the images. Note that the accuracy drop on the validation set is expected, as we are limiting the region of the image that the model gets as input.
Related Work
Learning to Teach: Concepts around neural network teaching (and the use of curriculum learning) have been proposed as early as Bengio et al. [3] and Zhu [35]. More recent work examining neural network teaching includes approaches to selecting and weighting training examples (Fan et al. [4], Liu et al. [14], Fan et al. [5], Jiang et al. [9], Ren et al. [25], Shu et al. [27]), adjusting loss functions [31], and meta-learning auxiliary tasks/labels for use in training [13, 21, 22]. In contrast to these approaches (some of which are particularly focused on model performance), our work on commentaries provides a more general learning framework for teaching. This enables applications both in more standard settings such as example weighting, and in novel use cases (beyond model performance) such as attention masks for interpretability and robustness. These diverse applications also provide insights into the training process of neural networks.
Learning with Hypergradients: Our algorithm for learning commentaries uses hypergradients -derivatives of a model's validation loss with respect to training hyperparameters. Prior work has proposed different approaches to compute hypergradients, including memory-efficient exact computation in Maclaurin et al. [19], and approximate computation in Lorraine et al. [17], Lorraine and Duvenaud [16], MacKay et al. [18]. Our algorithm to learn commentaries builds on Lorraine et al. [17], which utilises the implicit function theorem (IFT) and approximate Hessian matrix inversion for efficiency, enabling us to scale commentary learning to larger models.
Conclusion
In this paper, we considered a general framing for teaching neural networks using commentaries, defined as meta-information learned from the dataset/task. We described two gradient-based methods to learn commentaries and three distinct methods of applying commentaries to assist learning: example weight curricula, data augmentation, and attention masks. Empirically, we show that the commentaries can provide insights and result in improved learning speed and/or performance on a variety of datasets. Teaching with commentaries is a proof-of-concept idea, and we hope that this approach will inspire related ways of automatically re-using training insights across tasks and datasets.
A Illustrative Example: Vector Commentaries on Celeb-A
As an illustrative example, we describe how to learn real vector-valued commentaries on the CelebA dataset, and then analyse the resulting commentaries. The CelebA dataset contains images of faces, with each image accompanied by 40-dimensional attribute labels. We learn a commentary neural network t(x; φ) → [−1, 1]^40 to produce a real vector-valued commentary for each training example. We train a student network n(x; θ) → (ŷ, t̂) to predict both the attributes and regress the commentary. The training and validation losses are as follows:
L̃ = ‖t̂ − t(x; φ)‖² + x-ent(n(x; θ), y),    (5)

L = x-ent(n(x; θ), y).
For both commentary and student networks, we use a 4-block CNN with 3 × 3 filters (similar to the CNN architectures from [6]). We learn commentary parameters using backpropagation through student training (Algorithm 1, §2.1). The inner loop optimization consists of 100 iterations of Adam optimizer updates on the training loss using a batch size of 4. We use the higher library [7]. We use 50 outer optimization iterations to train the commentary network.
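A sketch of these two losses follows; the student's two-headed output and the use of a multi-label cross-entropy (binary cross-entropy over the 40 attributes) are our assumptions.

import torch
import torch.nn.functional as F

def celeba_losses(student, commentary_net, x, y):
    # Student predicts the 40 attributes and regresses the commentary vector.
    attr_logits, t_hat = student(x)
    t = commentary_net(x)                         # t(x; phi) in [-1, 1]^40
    train_loss = ((t_hat - t) ** 2).sum(dim=1).mean() \
        + F.binary_cross_entropy_with_logits(attr_logits, y.float())
    val_loss = F.binary_cross_entropy_with_logits(attr_logits, y.float())
    return train_loss, val_loss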
Results: Figure A.1 visualizes the examples that are most positively/negatively correlated with certain dimensions of the vector valued commentaries. We see clear correspondence between interpretable attributes and the commentary dimensions. Attributes such as hair colour, wearing glasses, and wearing hats are wellencoded by the commentaries. This suggests that such commentaries could function as an auxiliary task to give a network more signal during the learning process.
B Example Weighting Curricula
We provide further details about the experiments using commentaries to define example weighting curricula.

B.1 Rotated MNIST

Dataset details: Both the overlapping and non-overlapping datasets are generated to have 10000 training examples, 5000 validation examples, and 5000 test examples.

Network architectures: CNNs with two blocks of convolution, batch normalization, ReLU, and max pooling are used for both the student network and the commentary network. The commentary network additionally takes in the iteration of student training, which is normalized and passed in as an extra channel in the input image (i.e., the MNIST image has two channels instead of a single channel). The commentary network uses a sigmoid activation to produce an example weight for every data point in the batch.

Training details: We train both networks using the Adam optimizer, with a learning rate of 1e-4 for the student and 1e-3 for the commentary network. The student network is trained for 500 inner optimisation steps, with a batch size of 10. We train for 20 commentary network iterations. Training is implemented using the higher library [7].
Results: Figure B.1 visualizes, for both datasets, the relation between the learned example weight and the rotation of the digit at the end of student training (iteration 500), following binning. Considering these final example weights, for the non-overlapping case, examples with higher weight are those with smaller rotation and thus closer to the decision boundary (i.e., resembling support vectors); these provide the most information to separate the classes. Lower-weighted examples are those with greater rotation. When classes overlap, the weights are maximised for examples that better characterize each class and are not in the overlap region; these are most informative of the class label. Now considering the learned curriculum (Figure B.2), for the non-overlapping dataset, we observe that the rank correlation between the example weight and the rotation magnitude decreases over the course of student training. This is intuitively sensible, since the example weights first prioritise the easy examples (large rotation), and then learn to focus on the harder ones (small rotation) that provide the most information to separate the classes. By contrast, in the overlapping case, the curriculum has examples that are further from the boundary (larger rotation magnitude) consistently weighted most highly, which is sensible as these are the most informative of the class label.
Overall, the results on this synthetic case demonstrate that the learned commentaries capture interesting and intuitive structure.
B.2 CIFAR10/100
Network architectures: We use the two-block CNN from the MNIST experiments for the commentary network, and as the student network when training the commentary network. We employ the same strategy as before for encoding the iteration of training. At testing time, we evaluate this commentary network by teaching three different student network architectures: the two-block CNN, ResNet-18, and ResNet-34.

Training details: We train both networks using the Adam optimizer, with a learning rate of 1e-4 for the student and 1e-3 for the commentary network. During commentary network learning, the student network is trained for 1500 inner optimization steps with a batch size of 8, and is reset to a random initialisation after each commentary network gradient step. We train for 100 commentary network iterations. Training is implemented using the higher library [7]. At testing time, we use a batch size of 64; the small batch size at training time is due to GPU memory constraints.
B.3 Few-Shot Learning (FSL)
Setup: The MAML algorithm finds a network parameter initialization that can then be used for adaptation to new tasks using a support set of examples. The commentary network here is trained to provide example weights for each example in the support set, at each iteration of inner loop adaptation (i.e., the example weights are not used at meta-testing time). We jointly learn the MAML initialization and the commentary network parameters; intuitively, this represents learning an initialization that is able to adapt to new tasks given examples and associated weights.
Dataset details: We use standard datasets used to evaluate FSL methods, and the associated splits between training and testing tasks from prior work [12,15]. We evaluate on two out-of-distribution settings, namely: training the few-shot learner on CIFAR-FS and testing on SVHN; and training the few-shot learner on MiniImageNet and testing on CUB-200.
Network architectures: Both the commentary and the student networks use a 4-block CNN architecture commonly seen in prior work [6]. The student network takes a given image as input. The commentary network takes as input the support set image and the class label. The one-hot labels are converted into a 64-dimensional vector using a fully connected layer, concatenated with input image representations, and then passed through two more fully connected layers to produce the output. This output is passed through a sigmoid to ensure example weights lie in the range [0, 1]. These weights are normalized to ensure a mean weight of 1 across the entire support set, which helped stability.
Training details: We use Adam with a learning rate of 1e-3 to learn both the commentary network parameters and the student network initialization point for MAML. A meta-batch size of 4 is used for meta-training. We use SGD with a learning rate of 1e-2 for the inner loop adaptation. At meta-training time, we use 5 inner loop updates. At meta-test time, we use 15 inner loop updates (similar to some other methods, to allow for more finetuning). For evaluation, we create 1000 different test-time tasks (randomly generated) and compute mean/95% CI accuracy on this set of tasks. We use the higher library [7].

Results: We evaluate a standard MAML baseline and our commentary variant on standard few-shot learning benchmarks: (i) training/testing on MiniImageNet (MIN) and CIFAR-FS (in-distribution testing); and (ii) training on MIN and CIFAR-FS and testing on CUB-200-2011 (CUB) and SVHN (out-of-distribution testing). Results are shown in Table B.1. Each row specifies the experimental setting (N-way K-shot), the dataset used for training, and the dataset used for testing. In all experiments, incorporating example weighting can improve on the MAML baseline, suggesting the utility of these commentaries in few-shot learning.
C Data Augmentation
We provide further details about the experiments using commentaries to define data augmentation policies.
C.1 MNIST
Network and training details: The 2-block CNN is used as the student network. Denoting each entry of the commentary grid as φ_{i,j}, we initialised each entry to 0. The blending proportion is formed as λ_{i,j} = 1 − 0.5 × sigmoid(φ_{i,j}). This ensures that the blending proportion is between 0.5 and 1, which implies that the blended image contains more of the first image (class i) than the second (class j). Without this restriction, certain blending combinations could 'flip', making the results harder to interpret. The inner optimization uses SGD with a learning rate of 1e-3 and 500 gradient steps. We used 50 outer optimization steps to learn the commentary parameters, using Adam with a learning rate of 1e-1. The commentary parameters were learned with the higher library [7].

Further detail on MNIST augmentation commentaries: We learn an augmentation commentary model t on MNIST, represented as a 10×10 matrix. This commentary is learned by backpropagating through the inner optimization, using a 2-block CNN student network. For each outer optimization update, we use 500 steps of inner loop training with the augmented dataset, and then compute the validation loss on the unblended data to obtain commentary parameter gradients.

We find a trend in the learned augmentation relating the error rates and blending proportions. Consider a single image x_i with label i, and other images x_j with label j for all j ≠ i. Averaging the blending proportions (computed as 0.5 × sigmoid(φ_{i,j})) over j, the error rate on class i is correlated (Pearson correlation = −0.54) with the degree of blending of other digits into an example of class i; that is, a lower error rate on class i implies that other digits are blended more heavily into it. On MNIST, this means that the class that has on average the lowest error rate (class 1) has other digits blended into it most significantly (seen in Figure 4, left). On the other hand, classes that have on average a higher error rate (e.g., class 7, class 8) rarely have other digits blended in (Figure 4, right).
C.2 CIFAR10/100
Network and training details: The student network is a ResNet18. We use the method from [17] to learn the commentary parameters. The commentary parameters are initialized in the same way as for the MNIST experiments. These parameters are learned jointly with a student, and we alternate updates to the commentary parameters and the student parameters. We use 1 Neumann step to approximate the inverse Hessian when using the IFT. For commentary learning, we use Adam with a LR of 1e-3 as the inner optimiser, and Adam with a LR of 1e-2 as the outer optimiser. For evaluation, we train three randomly initialized students using the fixed commentary parameters. This training uses SGD with common settings for CIFAR (starting LR 1e-1, weight decay of 5e-4, decaying LR after 30, 60, and 80 epochs by a factor of 10). We use standard CIFAR augmentations in addition to the learned augmentations at this evaluation phase.
Baselines: We compare to using no commentary (just the standard CIFAR augmentation policy, random crops and flips), a random commentary (where an augmentation grid is constructed by uniformly sampling blending proportions in the range [0.5, 1]), mixup [33] with blending proportion drawn from Beta(1,1), and a shuffled version of our method where the commentary grid is shuffled at the start of evaluation (destroying the structure, but preserving the scale of augmentation).
Results: Table C.1 shows model accuracy for different augmentation policies on CIFAR10 and 100. We compare the learned commentary to using only standard data augmentations for CIFAR10/100 (No commentary), shuffling the learned commentary grid, using a random initialization for the commentary tensor, and mixup [33]. We observe that the learned commentary is competitive with mixup and improves on other baselines. The fact that the learned commentary does better than the shuffled grid implies that the structure of the grid is also important, not just the scale of augmentations learned.
D Attention Masks
Datasets and Mask Information: We use a number of datasets to evaluate the learned masks, including: Coloured MNIST (a synthetic MNIST variant), CheXpert (a dataset of chest X-ray images from [8]), CIFAR10/100, and CUB-200-2011. More details:
• The Coloured MNIST dataset is formed by randomly sampling two MNIST digits from the dataset, and choosing one to be red and one to be blue. The red digit determines the image label. The two digits are randomly placed on two different quadrants of a 56 × 56 grid. The standard deviation of the mask is set to be 15 pixels.
• The CheXpert dataset has large X-ray radiograph images. Each image is resized to be 320 × 200. The mask standard deviation is 50 pixels.
• CIFAR10/100 masks are set to be 15 pixels standard deviation.
• CUB-200-2011 images are resized to be 224 × 224, and the mask standard deviation is 50 pixels.
Network architectures: The student network was a ResNet-18, and the commentary network was a UNet [26] with an output layer from KeypointNet [28]. This takes a probability mass function defined spatially, and the (x, y) centre of the mask is computed as the mean in both spatial dimensions. Producing the centre in this manner significantly helped stability compared to directly regressing a real value.
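A sketch of this spatial-expectation output is shown below; it follows the KeypointNet-style head described above, with the exact head shape assumed.

import torch

def spatial_expectation(logits):
    # logits: (batch, H, W) from the UNet head; softmax over all pixels
    # gives a spatial probability mass function.
    b, h, w = logits.shape
    p = torch.softmax(logits.view(b, -1), dim=1).view(b, h, w)
    rows = torch.arange(h, dtype=torch.float32).view(1, h, 1)
    cols = torch.arange(w, dtype=torch.float32).view(1, 1, w)
    ci = (p * rows).sum(dim=(1, 2))               # expected row index
    cj = (p * cols).sum(dim=(1, 2))               # expected column index
    return torch.stack([ci, cj], dim=1)           # differentiable mask centre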
Training details: We use the method from [17] to learn the commentary parameters. These parameters are learned jointly with a student, and we alternate updates to the commentary parameters and the student parameters. We use 1 Neumann step to approximate the inverse Hessian when using the IFT. When the commentary network is learned, we use Adam with LR 1e-4 for both inner and outer optimisations. We found balancing this learning rate to be important for the resulting stability of optimization and quality of learned masks.
When evaluating the learned masks, we trained three new ResNet-18 students with different random seeds, fixing the commentary network. For CIFAR10/100 evaluation, we use SGD with common settings for CIFAR (starting LR 1e-1, weight decay of 5e-4, decaying LR after 30, 60, and 80 epochs by a factor of 10). We use standard CIFAR augmentations for this also.

Baselines for CIFAR experiments: For the random mask baseline, for each example in a batch, we select a centre point randomized over the whole image, then form the mask by considering a Gaussian centered at that point with standard deviation 15 (the same size as masks from the commentary network). This resembles very aggressive random cropping. For the permuted learned mask, we use the learned commentary network to predict masks for all images. Then, we permute the mask-image pairs so that they no longer match up. We then train with these permuted pairs and assess performance. Our goal is to understand what happens when we have random masks with a similar overall spatial distribution to the real masks.
CIFAR Masking Quantitative Analysis: We compare masks from the learned commentary network to two baselines: randomly chosen mask regions for each image (of the same scale as the learned masks, but with the centre point randomised over the input image), and permuted masks (where we shuffle the learned masks across all the data points). Table D.1 shows the results. Especially on CIFAR100, the learned mask improves noticeably on the other masking methods in both test accuracy and loss. This suggests that overall, the masks are highlighting more informative image regions. We do not expect using the masks on standard problems to result in improved held-out performance, because the backgrounds of images may contain information relevant to the classification decision.

Further details on robustness study: The dataset was generated using the open source code from [10].
The student network for this study was pretrained on ImageNet, as in [10]. To train student models at the evaluation stage, we used SGD with a learning rate of 0.01, decayed by a factor of 10 after 50 and 90 epochs. We used nesterov momentum (0.9) and weight decay of 5e-5.
Figure 1: Learned per-example training weights on Rotated MNIST. We visualize rotated digits and rotation distributions from both datasets. We plot the learned example weights at iteration 500 of student training as a function of rotation, after discretising based on digit rotation. In the non-overlapping case (left), examples closer to the decision boundary are weighted most. When classes overlap (right), the more representative examples with greater rotation are weighted highly and examples in the overlap region are downweighted, which is sensible as examples in the overlap region are less indicative of the class label.
Figure 2: Example weight curricula can speed up training. Test-set accuracy curves on CIFAR10/100 when using curriculum commentaries, non-curriculum commentaries, and no commentaries during student network training. The learned curriculum commentary network, which generates per-iteration example weights, results in learning speed improvements. This improvement holds when the student network is trained for many more steps than the number of inner loop update steps used during commentary network training (1500 steps), demonstrating that the commentaries generalise to longer training times.
Figure 3: Example weight curricula generalise across network architectures. Using a learned curriculum commentary network trained with a simple 2-block CNN student, we apply these example weights to train two ResNet architectures. This gives improved test-set accuracy curves for ResNet students also, indicating that the commentaries generalise across architectures.
Figure 5: Blending proportions for selected CIFAR10 and CIFAR100 classes reveal interesting insights. Visualizing the learned blending proportions on two CIFAR10 classes (left), we see that both classes are most blended with others that are visually similar (truck-automobile, and cat-dog), which may help the network differentiate between them. On CIFAR100 (right), considering the top 5 blended classes in two cases, we observe again the presence of visually similar classes that may be confused (seal-otter, and squirrel-mouse), but also visually unrelated classes, which may provide extra learning signal with each example.
Figure 6: Learned attention masks highlight salient image regions for classification. We learn a commentary network to produce image attention masks on several image datasets, and the masks are qualitatively sensible in all cases. On Coloured MNIST, where the image label is determined by the red digit, the masks focus on the red digit. On a dataset of chest X-rays, the masks focus on the chest cavity, which is the appropriate region for detecting the condition in question (cardiomegaly). On CIFAR10/100, the masks are focused on important regions, such as the faces of animals, the hump of the camel, and the bodies of vehicles.
Figure 7: Learned masks on the CUB dataset are primarily focused on the bird, rather than the background.
Figure A.1: Dimensions of the meta-learned commentaries show correlations with CelebA attributes.
Figure B.1: Learned per-example training weights on Rotated MNIST. Visualizing the relationship between the example weights and rotations at the end of training (left) reveals that for the non-overlapping dataset, examples with the largest weights are located close to the decision boundary, which is expected, as these provide the most learning signal. In the overlapping case, weights for examples within the overlap region are lower, again expected as these examples are ambiguous and do not assist in learning.
Figure B.2: A learned curriculum of per-example training weights. Example rotated digits and the two classes are shown on the left. We plot the rank correlation between the magnitude of digit rotation and the example weight over the course of student training for both datasets. When classes are non-overlapping, the curriculum weights examples which are closer to the decision boundary more as training goes on. This resembles teaching the student to focus on easy examples at first, and then moving to progressively harder examples as training progresses. When classes overlap, the curriculum weights more representative examples (near class centroids) more highly as training progresses.
Visualising the policy: For CIFAR10, we visualize the full augmentation policy in the form of a blending grid, shown in Figure C.1. Each entry represents how much those two classes are blended, with the scale on the left. This corresponds to 0.5 × sigmoid(φ_{i,j}), with φ_{i,j} representing an entry in the commentary grid.
Figure C.1: Augmentation commentary policy on CIFAR10. We observe interesting emergent patterns in the learned blending proportions; classes that are visually similar and potentially confused (cat/dog, automobile/truck) are blended relatively significantly.
Figure D.1: Learned attention masks highlight salient image regions for classification. We learn a commentary network to produce image attention masks on four datasets, and the masks are qualitatively effective in all cases. On Coloured MNIST, where the image label is determined by the red digit (blue is a distractor), the masks focus on the red digit. On a dataset of chest X-rays, the masks focus on the chest cavity, which is the appropriate region for detecting the condition in question (cardiomegaly). On CIFAR10/100, the masks are focused on important regions of the object in the image, such as the faces of animals/babies, and the bodies of vehicles.
Visualizing Masks: Figure D.1 shows masks from the main text and further additional examples.
Table 1: Mean accuracy and 95% confidence interval across 1000 test-time tasks. On common few-shot learning benchmarks, using example weighting for the support set improves the MAML baseline's accuracy. Improvements are also observed when evaluating on out-of-distribution datasets.

Mode: Train Data → Test Data    MAML Baseline    Example weighting + MAML
5-way 1-shot: MIN→MIN           43.64 ± 0.61     45.07 ± 0.61
5-way 1-shot: MIN→CUB           37.45 ± 0.55     38.00 ± 0.54
5-way 5-shot: MIN→MIN           61.85 ± 0.52     62.99 ± 0.52
5-way 5-shot: MIN→CUB           57.75 ± 0.54     59.06 ± 0.55
Table 2: Learned augmentation commentaries result in competitive or improved model accuracy and loss on CIFAR10/100. Using the learned, label-dependent augmentation commentary during training results in models that are competitive with mixup and superior to other baselines. We show mean/standard deviation over three different initializations.
Table 3: Learned attention masks offer improved robustness to spurious background correlation. We train a commentary network to produce attention masks on a version of the CUB-200-2011 dataset that contains spurious background correlations, with the correlations present in the training/validation set not present in the test set (the test set has a distribution shift). On both top 1 and top 5 accuracy, using the masks results in improved model performance on the shifted test set.

                        Validation Set Accuracy        Test Set Accuracy (Distribution Shift)
                        Top 1           Top 5          Top 1           Top 5
Baseline (No masks)     78.82 ± 1.19    93.95 ± 0.28   25.81 ± 0.60    56.98 ± 0.69
With Masking Network    74.73 ± 0.76    92.45 ± 0.23   30.46 ± 0.27    61.57 ± 0.70
Table C.1: Learned augmentation commentaries result in competitive or improved model accuracy and loss on CIFAR10 and 100. Compared to not using a commentary and using a randomised label-dependent commentary, the original learned commentary results in models that are competitive with mixup and improved compared to other baselines. We show mean/standard deviation over three different initializations.

                                 CIFAR10                         CIFAR100
                                 Test Accuracy   Test Loss       Test Accuracy   Test Loss
No commentary                    93.84 ± 0.22    0.239 ± 0.005   74.79 ± 0.39    1.05 ± 0.03
Random commentary                93.84 ± 0.30    0.385 ± 0.007   73.39 ± 0.58    1.14 ± 0.01
mixup [33]                       94.42 ± 0.12    0.31 ± 0.01     75.89 ± 0.55    1.01 ± 0.02
Learned commentary (shuffled)    94.01 ± 0.19    0.235 ± 0.005   75.95 ± 0.27    0.99 ± 0.01
Learned commentary (original)    94.12 ± 0.19    0.225 ± 0.007   76.25 ± 0.11    0.97 ± 0.01
Table D.1: Performance of different masking strategies on CIFAR10 and 100. Using the appropriate per-image learned masks improves (in both loss and accuracy) on permuting the masks across the entire dataset and on randomly selecting mask regions of the appropriate scale.
[1] M. A. Badgeley, J. R. Zech, L. Oakden-Rayner, B. S. Glicksberg, M. Liu, W. Gale, M. V. McConnell, B. Percha, T. M. Snyder, and J. T. Dudley. Deep learning predicts hip fracture using confounding patient and healthcare variables. NPJ Digital Medicine, 2(1):1-10, 2019.
[2] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network dissection: Quantifying interpretability of deep visual representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6541-6549, 2017.
[3] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41-48, 2009.
[4] Y. Fan, F. Tian, T. Qin, X.-Y. Li, and T.-Y. Liu. Learning to teach. arXiv preprint arXiv:1805.03643, 2018.
[5] Y. Fan, Y. Xia, L. Wu, S. Xie, W. Liu, J. Bian, T. Qin, X.-Y. Li, and T.-Y. Liu. Learning to teach with deep interactions. arXiv preprint arXiv:2007.04649, 2020.
[6] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
[7] E. Grefenstette, B. Amos, D. Yarats, P. M. Htut, A. Molchanov, F. Meier, D. Kiela, K. Cho, and S. Chintala. Generalized inner loop meta-learning. arXiv preprint arXiv:1910.01727, 2019.
[8] J. Irvin, P. Rajpurkar, M. Ko, Y. Yu, S. Ciurea-Ilcus, C. Chute, H. Marklund, B. Haghgoo, R. Ball, K. Shpanskaya, et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 590-597, 2019.
[9] L. Jiang, Z. Zhou, T. Leung, L.-J. Li, and L. Fei-Fei. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In International Conference on Machine Learning, pages 2304-2313, 2018.
[10] P. W. Koh, T. Nguyen, Y. S. Tang, S. Mussmann, E. Pierson, B. Kim, and P. Liang. Concept bottleneck models. arXiv preprint arXiv:2007.04612, 2020.
[11] S. Kornblith, J. Shlens, and Q. V. Le. Do better ImageNet models transfer better? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2661-2671, 2019.
[12] H. B. Lee, H. Lee, D. Na, S. Kim, M. Park, E. Yang, and S. J. Hwang. Learning to balance: Bayesian meta-learning for imbalanced and out-of-distribution tasks. In International Conference on Learning Representations, 2019.
[13] S. Liu, A. Davison, and E. Johns. Self-supervised generalisation with meta auxiliary learning. In Advances in Neural Information Processing Systems, pages 1679-1689, 2019.
[14] W. Liu, B. Dai, A. Humayun, C. Tay, C. Yu, L. B. Smith, J. M. Rehg, and L. Song. Iterative machine teaching. In International Conference on Machine Learning, pages 2149-2158, 2017.
[15] L. Long. MAML-Pytorch implementation. https://github.com/dragen1860/MAML-Pytorch, 2018.
[16] J. Lorraine and D. Duvenaud. Stochastic hyperparameter optimization through hypernetworks. arXiv preprint arXiv:1802.09419, 2018.
[17] J. Lorraine, P. Vicol, and D. Duvenaud. Optimizing millions of hyperparameters by implicit differentiation. In International Conference on Artificial Intelligence and Statistics, pages 1540-1552, 2020.
[18] M. MacKay, P. Vicol, J. Lorraine, D. Duvenaud, and R. Grosse. Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions. In International Conference on Learning Representations (ICLR), 2019.
[19] D. Maclaurin, D. Duvenaud, and R. Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pages 2113-2122, 2015.
[20] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[21] A. Navon, I. Achituve, H. Maron, G. Chechik, and E. Fetaya. Auxiliary learning by implicit differentiation. 2020.
[22] H. Pham, Q. Xie, Z. Dai, and Q. V. Le. Meta pseudo labels. arXiv preprint arXiv:2003.10580, 2020.
[23] A. Raghu, M. Raghu, S. Bengio, and O. Vinyals. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML. In International Conference on Learning Representations, 2019.
[24] M. Raghu, C. Zhang, J. Kleinberg, and S. Bengio. Transfusion: Understanding transfer learning for medical imaging. In Advances in Neural Information Processing Systems, pages 3347-3357, 2019.
[25] M. Ren, W. Zeng, B. Yang, and R. Urtasun. Learning to reweight examples for robust deep learning. arXiv preprint arXiv:1803.09050, 2018.
[26] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234-241. Springer, 2015.
[27] J. Shu, Q. Xie, L. Yi, Q. Zhao, S. Zhou, Z. Xu, and D. Meng. Meta-weight-net: Learning an explicit mapping for sample weighting. In Advances in Neural Information Processing Systems, pages 1919-1930, 2019.
[28] S. Suwajanakorn, N. Snavely, J. J. Tompson, and M. Norouzi. Discovery of latent 3D keypoints via end-to-end geometric reasoning. In Advances in Neural Information Processing Systems, pages 2059-2070, 2018.
[29] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. 2010.
[30] J. K. Winkler, C. Fink, F. Toberer, A. Enk, T. Deinlein, R. Hofmann-Wellenhof, L. Thomas, A. Lallas, A. Blum, W. Stolz, et al. Association between surgical skin markings in dermoscopic images and diagnostic performance of a deep learning convolutional neural network for melanoma recognition. JAMA Dermatology, 155(10):1135-1141, 2019.
[31] L. Wu, F. Tian, Y. Xia, Y. Fan, T. Qin, L. Jian-Huang, and T.-Y. Liu. Learning to teach with dynamic loss functions. In Advances in Neural Information Processing Systems, pages 6466-6477, 2018.
[32] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
[33] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018.
[34] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1452-1464, 2017.
[35] X. Zhu. Machine teaching: An inverse problem to machine learning and an approach toward optimal education. 2015.
245,836,975 | LANGUAGE-DRIVEN SEMANTIC SEGMENTATION | We present LSeg, a novel model for language-driven semantic image segmentation. LSeg uses a text encoder to compute embeddings of descriptive input labels (e.g., "grass" or "building") together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. The image encoder is trained with a contrastive objective to align pixel embeddings to the text embedding of the corresponding semantic class. The text embeddings provide a flexible label representation in which semantically similar labels map to similar regions in the embedding space (e.g., "cat" and "furry"). This allows LSeg to generalize to previously unseen categories at test time, without retraining or even requiring a single additional training sample. We demonstrate that our approach achieves highly competitive zero-shot performance compared to existing zero-and few-shot semantic segmentation methods, and even matches the accuracy of traditional segmentation algorithms when a fixed label set is provided. Code and demo are available at https://github.com/isl-org/lang-seg. | [
225039882,
5959482
] | LANGUAGE-DRIVEN SEMANTIC SEGMENTATION
Boyi Li (Cornell University, Cornell Tech), Kilian Q. Weinberger (Cornell University), Serge Belongie (University of Copenhagen), Vladlen Koltun (Apple), René Ranftl (Intel Labs)
Published as a conference paper at ICLR 2022
We present LSeg, a novel model for language-driven semantic image segmentation. LSeg uses a text encoder to compute embeddings of descriptive input labels (e.g., "grass" or "building") together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. The image encoder is trained with a contrastive objective to align pixel embeddings to the text embedding of the corresponding semantic class. The text embeddings provide a flexible label representation in which semantically similar labels map to similar regions in the embedding space (e.g., "cat" and "furry"). This allows LSeg to generalize to previously unseen categories at test time, without retraining or even requiring a single additional training sample. We demonstrate that our approach achieves highly competitive zero-shot performance compared to existing zero-and few-shot semantic segmentation methods, and even matches the accuracy of traditional segmentation algorithms when a fixed label set is provided. Code and demo are available at https://github.com/isl-org/lang-seg.
INTRODUCTION
Semantic segmentation is a core problem in computer vision, with the aim of partitioning an image into coherent regions with their respective semantic class labels. Most existing methods for semantic segmentation assume a limited set of semantic class labels that can potentially be assigned to a pixel. The number of class labels is dictated by the training dataset and typically ranges from tens (Everingham et al., 2015) to hundreds (Zhou et al., 2019; Mottaghi et al., 2014) of distinct categories. As the English language defines several hundred thousand nouns (Li et al., 2020c), it is likely that the limited size of the label set severely hinders the potential recognition performance of existing semantic segmentation models.
The main reason for the restricted label sets in existing methods is the cost of annotating images to produce sufficient training data. To create training datasets, human annotators must associate every single pixel in thousands of images with a semantic class label -a task that is extremely labor intensive and costly even with small label sets. The complexity of the annotation rises significantly as the number of labels increases since the human annotator has to be aware of the fine-grained candidate labels. Additionally, inter-annotator consistency becomes an issue when objects are present in an image that could fit multiple different descriptions or are subject to a hierarchy of labels.
Zero- and few-shot semantic segmentation methods have been proposed as a potential remedy for this problem. Few-shot approaches (Shaban et al., 2017; Rakelly et al., 2018; Siam et al., 2019; Zhang et al., 2019; Nguyen & Todorovic, 2019; Liu et al., 2020b; Tian et al., 2020; Boudiaf et al., 2021; Min et al., 2021) offer ways to learn to segment novel classes based on only a few labeled images. However, these approaches still require labeled data that includes the novel classes in order to facilitate transfer. Zero-shot methods, on the other hand, commonly leverage word embeddings to discover or generate related features between seen and unseen classes (Bucher et al., 2019; Gu et al., 2020) without the need for additional annotations. Existing works in this space use standard word embeddings (Mikolov et al., 2013) and focus on the image encoder.
Figure 1: Example results. LSeg is able to handle unseen labels as well as label sets of arbitrary length and order. This enables flexible synthesis of zero-shot semantic segmentation models on the fly. From left to right, labels that are removed between runs are underlined, whereas labels that are added are marked in bold red.

In this work, we present a simple approach to leveraging modern language models to increase the flexibility and generality of semantic segmentation models. Our work is inspired by the CLIP model
for image classification (Radford et al., 2021), which pairs high-capacity image and text encoders to produce robust zero-shot classifiers. We propose to use state-of-the-art text encoders that have been co-trained on visual data, such as CLIP, to embed labels from the training set into an embedding space and to train a visual encoder to produce per-pixel embeddings from an input image that are close to the corresponding label embeddings. Since the text encoder is trained to embed closely related concepts near one another (for example, "dog" is closer to "pet" than to "vehicle"), we can transfer the flexibility of the text encoder to the visual recognition module while only training on the restricted label sets that are provided by existing semantic segmentation datasets. An example is shown in Figure 1 (top row), where the model can successfully label pixels belonging to the class "pet" although the training set did not contain this label.
Our approach enables the synthesis of zero-shot semantic segmentation models on the fly. That is, a user can arbitrarily expand, shrink, or reorder the label set for any image at test time. We further introduce an output module that can spatially regularize the predictions while maintaining this flexibility. We demonstrate several examples of the flexibility of our model in Figure 1. LSeg is able to output different segmentation maps based on the provided label set. For instance, in the last row, output (a) recognizes the chair and identifies all non-chair objects as "other" since these are the only two labels provided to the model. When labels are added, as in (b) and (c), the model is able to successfully segment other objects with the expanded label set.
We conduct quantitative evaluation on a variety of zero-and few-shot semantic segmentation tasks. Our approach outperforms existing methods in zero-shot settings and is competitive across multiple few-shot benchmarks. Unlike the state-of-the-art baselines we compare to, our approach does not require additional training samples. Our experiments also show that introducing the text embeddings incurs only a negligible loss in performance when compared to standard fixed-label segmentation methods.
RELATED WORK
Generalized semantic segmentation. The majority of existing semantic segmentation models are restricted to a fixed label set that is defined by the labels present in the training dataset (Minaee et al., 2021). Few-shot semantic segmentation methods aim to relax the restriction of a fixed label set when one or a few annotated examples of novel classes are available at test time. These approaches learn to find reliable visual correspondences between a query image that is to be labeled and labeled support images that may contain novel semantic classes (Shaban et al., 2017; Rakelly et al., 2018; Siam et al., 2019; Zhang et al., 2019; Nguyen & Todorovic, 2019; Liu et al., 2020b; Tian et al., 2020; Boudiaf et al., 2021; Min et al., 2021). While this strategy can significantly enhance the generality of the resulting model, it requires the availability of at least one labeled example image with the target label set, which is not always practical.
Zero-shot semantic segmentation approaches aim to segment unseen objects without any additional samples of novel classes. Text embeddings of class labels play a central role in these works. Bucher et al. (2019) and Gu et al. (2020) propose to leverage word embeddings together with a generative model to generate visual features of unseen categories, while Xian et al. (2019) propose to project visual features into a simple word embedding space and to correlate the resulting embeddings to assign a label to a pixel. Hu et al. (2020) propose to use uncertainty-aware learning to better handle noisy labels of seen classes, while Li et al. (2020b) introduce a structured learning approach to better exploit the relations between seen and unseen categories. While all of these works leverage text embeddings, our paper is, to the best of our knowledge, the first to show that it is possible to synthesize zero-shot semantic segmentation models that perform on par with fixed-label and few-shot semantic segmentation methods.
A variety of solutions have been proposed (Zhang et al., 2020b; Liu et al., 2020a; Perera et al., 2020; Zhou et al., 2021) for open-set recognition (Scheirer et al., 2012; Geng et al., 2020). These aim to provide a binary decision about whether or not a given sample falls outside the training distribution, but do not aim to predict the labels of entirely new classes.
Finally, a different line of work explores cross-domain adaptation methods for semantic segmentation by using feature alignment, self-training, and information propagation strategies (Yang et al., 2021;Wang et al., 2021). The target of these works is to enhance the transferability of models to novel visual domains, but they do not address the issue of a restricted label set. As such they are orthogonal to our work.
Language-driven recognition. Language-driven recognition is an active area of research. Common tasks in this space include visual question answering (Antol et al., 2015), image captioning (Vinyals et al., 2014), and image-text retrieval (Li et al., 2020a). CLIP (Radford et al., 2021) demonstrated that classic recognition tasks that are not commonly associated with language can strongly benefit from language assistance. CLIP uses contrastive learning together with high-capacity language models and visual feature encoders to synthesize extremely robust models for zero-shot image classification. Recent works have extended this basic paradigm to perform flexible object detection. ViLD (Gu et al., 2021) introduces an advanced zero-shot object detection method that leverages CLIP, whereas MDETR (Kamath et al., 2021) proposes an end-to-end approach that modulates a transformer-based baseline detector with text features obtained from a state-of-the-art language model. Like CLIP, these works have shown that the robustness and generality of object detection models can be strongly improved by language assistance. Our work is inspired by these approaches and presents, to the best of our knowledge, the first approach to flexibly synthesize zero-shot semantic segmentation models by leveraging high-capacity language models.
LANGUAGE-DRIVEN SEMANTIC SEGMENTATION
Our approach, Language-driven Semantic segmentation (LSeg), embeds text labels and image pixels into a common space, and assigns the closest label to each pixel. We illustrate the framework in Figure 2 and describe each part in detail below.
Text encoder. The text encoder embeds the set of N potential labels into a continuous vector space R^C, producing N vectors T_1, ..., T_N ∈ R^C as outputs (blue vectors in Figure 2). Multiple network architectures are possible, and we use the pretrained Contrastive Language-Image Pre-training (CLIP) text encoder throughout (Radford et al., 2021). By design, the set of output vectors is invariant to the ordering of the input labels and allows their number, N, to vary freely.
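For concreteness, label embeddings of this form can be computed with the open-source CLIP package. This is a minimal sketch rather than the exact pipeline; in particular, the unit-normalization step is an assumption, as the text above does not specify it:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

# Load a CLIP model; only the text tower is used here.
model, _ = clip.load("ViT-B/32")

labels = ["grass", "building", "sky", "other"]  # any number of labels, any order
tokens = clip.tokenize(labels)                  # (N, 77) token ids

with torch.no_grad():
    T = model.encode_text(tokens)               # (N, C) label embeddings; C = 512 here

# Unit-normalization (as CLIP does before computing similarities) is an
# assumption; the text above does not state whether this step is applied.
T = T / T.norm(dim=-1, keepdim=True)
```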
Figure 2: Overview. A text encoder embeds labels into a vector space. An image encoder extracts per-pixel embeddings from the image and correlates the feature of each pixel to all label embeddings. The image encoder is trained to maximize the correlation between the text embedding and the image pixel embedding of the ground-truth class of the pixel. A final spatial regularization block spatially regularizes and cleans up the predictions.

Image encoder. Similar to the text encoder, the image encoder produces an embedding vector for every input pixel (after downsampling). We leverage dense prediction transformers (DPT) (Ranftl et al., 2021) as the underlying architecture. Assume H × W is the input image size and s is a user-defined downsampling factor (s = 2 in our implementation). We define H̃ = H/s and W̃ = W/s. The output is a dense embedding I ∈ R^(H̃ × W̃ × C) (green tensor in Figure 2). We refer to the embedding of pixel (i, j) as I_ij.
Word-pixel correlation tensor. After the image and the labels are embedded, we correlate them by the inner product, creating a tensor of size H̃ × W̃ × N (orange tensor in Figure 2), defined as
$$f_{ijk} = I_{ij} \cdot T_k. \qquad (1)$$
We refer to the N-dimensional vector of inner products between the embedding of pixel (i, j) and all N words as F_ij ∈ R^N, where F_ij = (f_ij1, f_ij2, ..., f_ijN)^T. During training, we encourage the image encoder to provide pixel embeddings that are close to the text embedding of the corresponding ground-truth class. Specifically, given the text embeddings T_k ∈ R^C of the N labels and the image embedding I_ij ∈ R^C of pixel (i, j), we aim to maximize the dot product of the entry f_ijk that corresponds to the ground-truth label k = y_ij of pixel (i, j). We achieve this by defining a pixelwise softmax objective over the whole image:
$$\sum_{i,j=1}^{H,W} \mathrm{softmax}_{y_{ij}}\!\left(\frac{F_{ij}}{t}\right), \qquad (2)$$
where t is a user-defined temperature parameter that we set to t = 0.07 (Wu et al., 2018; Radford et al., 2021). During training, we minimize a per-pixel softmax with cross-entropy loss (with temperature scaling), as is standard in semantic segmentation.¹
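The objective can be sketched in a few lines of PyTorch. The tensor layout and function name below are illustrative conventions, not code from the released implementation:

```python
import torch
import torch.nn.functional as F

def lseg_loss(I, T, y, t=0.07):
    """Word-pixel correlation (Eq. 1) and per-pixel cross-entropy (Eq. 2).

    I: (B, H', W', C) pixel embeddings from the image encoder.
    T: (N, C) label embeddings from the text encoder.
    y: (B, H', W') ground-truth label indices in [0, N).
    """
    # f_ijk = I_ij . T_k, scaled by the temperature t.
    logits = torch.einsum("bhwc,nc->bhwn", I, T) / t        # (B, H', W', N)
    # nn.CrossEntropyLoss / F.cross_entropy expects the class dim second.
    return F.cross_entropy(logits.permute(0, 3, 1, 2), y)
```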
Spatial regularization. Due to memory constraints, the image encoder predicts pixel embeddings at a lower resolution than the input image. We use an additional post-processing module that spatially regularizes and upsamples the predictions to the original input resolution. During this process, we have to ensure that all operations stay equivariant with respect to the labels. In other words, there should be no interactions between the input channels, whose order is defined by the order of the words and can thus be arbitrary. We evaluate two functions that fulfill this property: a simple cascade of depthwise convolutions (Chollet, 2017) followed by non-linear activations (DepthwiseBlock), and another block that additionally augments the depthwise convolutions with the result of a max-pooling operation over the set of labels (BottleneckBlock). In a final step we use bilinear interpolation to recover predictions at the original resolution. We refer to these functions as "spatial regularization blocks" and illustrate them in Figure 3.

Training details. We initialize the backbone of the image encoder with the official ImageNet-pretrained weights from ViT (Dosovitskiy et al., 2021) or ResNet (He et al., 2016)² and initialize the decoder of DPT randomly. During training we freeze the text encoder and only update the weights of the image encoder. We provide the full label set that is defined by each training set to the text encoder for each image.
Our model can be trained on any semantic segmentation dataset and supports flexible mixing of multiple datasets through the text encoder. Existing semantic segmentation models assign a fixed channel in the output to represent the probability of a pixel being the corresponding semantic class. In contrast, our approach can dynamically handle label sets with varying length, content, and order. This property allows synthesizing arbitrary zero-shot semantic segmentation models by simply changing the labels that are fed to the text encoder.
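To illustrate, synthesizing a new zero-shot segmenter at test time amounts to re-encoding the label list. In this hedged sketch, `image_encoder` stands in for the trained DPT-based pixel embedder and is a hypothetical name; the spatial regularization blocks are omitted for brevity:

```python
import torch
import clip

@torch.no_grad()
def synthesize_and_segment(image, labels, image_encoder, clip_model, t=0.07):
    """Segment `image` with an arbitrary, possibly never-seen label set."""
    T = clip_model.encode_text(clip.tokenize(labels)).float()  # (N, C)
    I = image_encoder(image).float()                           # (1, H', W', C)
    logits = torch.einsum("bhwc,nc->bhwn", I, T) / t           # (1, H', W', N)
    return logits.argmax(dim=-1)                               # (1, H', W') indices into `labels`

# Changing the label set requires no retraining, e.g.:
# seg_a = synthesize_and_segment(img, ["sky", "road", "house", "plant"], enc, clip_model)
# seg_b = synthesize_and_segment(img, ["sky", "road", "building", "greenery"], enc, clip_model)
```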
EXPERIMENTS
We designed LSeg primarily for the zero-shot setting, where labels that are used for inference have never been seen during training. However, due to the lack of a standardized protocol, sufficient datasets, and baselines for the zero-shot setting, we compare LSeg to zero- and few-shot semantic segmentation models on few-shot benchmarks. Note that few-shot methods have access to more information and are thus expected to yield higher accuracy. However, the need for labeled samples severely restricts their flexibility compared to our approach.
EXPERIMENTAL SETUP
We follow the protocol of the recent state-of-the-art few-shot method HSNet (Min et al., 2021) and evaluate on three widely-used few-shot semantic segmentation benchmarks: PASCAL-5^i (Everingham et al., 2015), COCO-20^i (Lin et al., 2014), and FSS-1000 (Li et al., 2020c). Following a standard protocol for few-shot segmentation, we use the mean intersection over union (mIoU) and the foreground-background intersection over union (FB-IoU) as evaluation metrics. mIoU averages the IoU over all classes, while FB-IoU computes the mean of the foreground and background IoUs in fold i, ignoring the individual object classes.
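For concreteness, the two metrics can be sketched as follows, under our reading of the protocol; the official benchmark code may differ in details such as the handling of empty classes:

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """IoU of two boolean masks; NaN when the union is empty."""
    union = np.logical_or(pred_mask, gt_mask).sum()
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else np.nan

def miou(pred, gt, class_ids):
    """Mean IoU over the given class ids."""
    return np.nanmean([iou(pred == c, gt == c) for c in class_ids])

def fb_iou(pred, gt, class_ids):
    """Foreground-background IoU: all fold classes form a single foreground."""
    pred_fg, gt_fg = np.isin(pred, class_ids), np.isin(gt, class_ids)
    return 0.5 * (iou(pred_fg, gt_fg) + iou(~pred_fg, ~gt_fg))
```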
When not stated otherwise, we use an LSeg model with the text encoder provided by CLIP ViT-B/32 and leverage DPT with a ViT-L/16 backbone as the image encoder. For datasets that provide a background or unknown class, we set the corresponding background label to "other". We use SGD with momentum 0.9 and a polynomial learning rate scheduler with decay rate 0.9. We train with a batch size of 6 on six Quadro RTX 6000 GPUs.

PASCAL-5^i AND COCO-20^i

PASCAL-5^i and COCO-20^i are few-shot segmentation datasets that have been created from PASCAL VOC 2012 (Everingham et al., 2015) and the COCO dataset (Lin et al., 2014), respectively. PASCAL-5^i is composed of 20 object classes with corresponding mask annotations and has been evenly divided into 4 folds of 5 classes each. We denote different folds by 5^i, where i ∈ {0, 1, 2, 3}. Similarly, COCO-20^i is composed of 4 folds of 20 classes each.

We compare LSeg to various state-of-the-art few-shot models: OSLSM (Shaban et al., 2017), Co-FCN (Rakelly et al., 2018), AMP-2 (Siam et al., 2019), PANet (Wang et al., 2019), PGNet (Zhang et al., 2019), FWB (Nguyen & Todorovic, 2019), PPNet (Liu et al., 2020b), DAN (Wang et al., 2020), PFENet (Tian et al., 2020), RePRI (Boudiaf et al., 2021), and HSNet (Min et al., 2021). These few-shot methods propose strategies to segment unseen objects based on pretraining on seen categories and finetuning with a few images from the target class. In addition, we also compare to the competitive zero-shot baseline ZS3Net (Bucher et al., 2019), which adopts the DeepLabv3+ framework, and to Xian et al. (2019), which leverages DeepLabv2. We follow their official code, training settings, and training steps on the basis of their provided models pretrained on ImageNet (Deng et al., 2009). We follow the common experimental setup (Nguyen & Todorovic, 2019) and conduct cross-validation over all folds: assuming that n_i is the number of classes in fold i, for each fold i we use the images of the other folds for training and 1000 randomly sampled images of the target fold i for evaluation.

We show PASCAL-5^i and COCO-20^i results in Tables 1 and 2. Our model (with the same ResNet101 backbone) outperforms the zero-shot baseline by a considerable margin across folds and datasets and is even competitive with several few-shot methods. We also observe a clear edge for LSeg when using a larger backbone (ViT-L/16).
FSS-1000

FSS-1000 (Li et al., 2020c) is a recent benchmark dataset for few-shot segmentation. It consists of 1000 object classes with pixelwise annotated segmentation masks and contains a significant number of unseen or unannotated objects in comparison to previous datasets such as PASCAL and COCO. Following the standard protocol, we split the 1000 classes into 520 training, 240 validation, and 240 test classes. We use a base learning rate of 0.05 and train the model for 60 epochs.

Table 3 compares our approach to state-of-the-art few-shot models. Notably, with the same ResNet101 backbone, LSeg achieves results comparable to the state-of-the-art one-shot method. With the larger ViT-L/16 backbone, LSeg even outperforms the state-of-the-art one-shot method (87.8 mIoU for ours vs. 86.5 mIoU for HSNet), indicating that LSeg generalizes very well to unseen categories. Figure 4 shows examples of segmentation results on unseen categories.
EXPLORATION AND DISCUSSION
ABLATION STUDIES
We further empirically explore various properties of LSeg. We conduct experiments on the ADE20K dataset (Zhou et al., 2019), a standard semantic segmentation dataset that includes a diversity of images and provides pixelwise segmentation of 150 different categories. We set the base learning rate to 0.004 and train the model for 240 iterations. We use SGD with momentum 0.9 and a polynomial learning rate scheduler with decay rate 0.9. We use LSeg with DPT and a smaller ViT-B/32 backbone together with the CLIP ViT-B/32 text encoder unless stated otherwise.
Spatial regularization blocks. We first conduct an ablation study on the two variants of the spatial regularization blocks for cleaning up the output. We ablate the different types of blocks as well as stacking various numbers of blocks (N ∈ [0, 1, 2, 4]). The results are shown in Table 4. We notice that a consistent improvement can be achieved by adding a few regularization blocks. The strongest improvement is achieved by stacking two BottleneckBlocks, an addition to the architecture that incurs little overhead.
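For illustration, the two blocks can be sketched as follows. Sharing a single-channel depthwise filter across all label channels is our reading of the equivariance requirement, and the way BottleneckBlock fuses the max-pooled map back in is an assumption:

```python
import torch
import torch.nn as nn

class DepthwiseBlock(nn.Module):
    """One shared single-channel convolution applied to every label activation
    map, so the block is equivariant to the number and order of labels."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2)
        self.act = nn.ReLU()

    def forward(self, x):                                   # x: (B, N, H, W)
        b, n, h, w = x.shape
        y = self.conv(x.reshape(b * n, 1, h, w)).reshape(b, n, h, w)
        return self.act(y)

class BottleneckBlock(DepthwiseBlock):
    """Additionally folds in a max-pool over the label set; broadcasting the
    pooled map back onto every channel is an assumed fusion scheme."""
    def forward(self, x):
        y = super().forward(x)
        return y + y.max(dim=1, keepdim=True).values        # (B, 1, H, W) broadcast
```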
Text encoders. LSeg supports arbitrary text encoders in principle. We show the influence of using different text encoders in Table 5, where we ablate various encoders that are provided by the CLIP zero-shot image classification model (Radford et al., 2021). Note that all text encoders feature the same transformer-based architecture that purely operates on text. The main difference between the encoders is the image encoder that was paired during CLIP pretraining (for example, the text encoder denoted by "ViT-B/32" was trained in conjunction with a ViT-B/32 image encoder) and the size of the embedding dimension.
We observe that using RN50×16 achieves the best performance among all text encoders and surpasses the weakest ViT-B/32 text encoder by 2.5%. We conjecture that this is because of the larger size of the embedding that is provided by this encoder.
Comparison on a fixed label set. Language assistance helps boost the recognition performance on unannotated or unseen classes. However, there might be a concern that this flexibility hurts the performance on tasks that have a fixed label set. To test this, we train LSeg on ADE20K using the standard protocol on this dataset, where the training and test labels are fixed (that is, there are no unseen class labels at test time). We compare the results to highly competitive standard semantic segmentation models, including OCNet (Yuan et al., 2020), ACNet (Fu et al., 2019), DeeplabV3 (Chen et al., 2017; Zhang et al., 2020a), and DPT (Ranftl et al., 2021). The results are listed in Table 6. We find that LSeg performs competitively when using the RN50×16 text encoder and incurs only a negligible loss in performance when compared to the closest fixed-label segmentation method (DPT).
QUALITATIVE FINDINGS
We finally train LSeg on a mix of 7 different datasets (Lambert et al., 2020), including ADE20K (Zhou et al., 2019), BDD (Yu et al., 2020), Cityscapes (Cordts et al., 2016), COCO-Panoptic (Lin et al., 2014; Caesar et al., 2018), IDD (Varma et al., 2019), Mapillary Vistas (Neuhold et al., 2017), and SUN RGBD (Song et al., 2015). Note that we train our model on the original label sets that are provided by these datasets without any preprocessing or relabeling. We follow the same training protocol as on ADE20K and train LSeg with a ViT-L/16 backbone and a ViT-B/32 text encoder for 200 epochs with a base learning rate of 0.004. If there are multiple labels for one class, we only use the first label that is provided during training. We select images from the web and show the results in Figure 5 to illustrate the use of the resulting model.
Related but previously unseen labels. We illustrate some salient examples of the capabilities of LSeg to generalize to new classes in Figure 5(a). In the first row, on the left we first start with the label set "sky", "road", "house", and "plant", and observe that the model is capable of segmenting the image into the provided classes. We then change the label "house" to "building" and the label "plant" to "greenery". The model produces a similar segmentation as before on this different but semantically related label set. This is despite the fact that the label "greenery" or even "green" was not present in any of the training images. A similar effect is shown in the second row, where LSeg successfully segments the image and correctly assigns the labels "cake" or "dessert" (again, the label "dessert" was not seen during training), while successfully suppressing the label "bread" which is both visually and semantically related.
Hierarchical unseen labels. Figure 5(b) demonstrates that LSeg can implicitly provide correct segmentation maps for hierarchies of labels. In the first row, the model is able to recognize the "cat", "plant" and "grass" segments of the image, as expected since these labels are present in the training set. When replacing "cat" with the label "furry", we notice that the model is able to successfully recognize this parent category (that is, most cats are furry, but not all furry objects are cats). Similarly, when removing the label "grass", we notice that the original "grass" region is merged into "plant", again an indication of an implicit hierarchy that is afforded by the flexibility of the text embeddings. The second row illustrates a similar scenario, where LSeg recognizes the sofa and other objects.
However, the small shelf on the left is segmented as the unknown category "other". When we change "sofa" to "furniture", LSeg successfully identifies both the sofa and the small shelf as "furniture". Note that "furniture" never appeared in the training label set.

Failure cases. While LSeg generally achieves very promising results, we also observe some failure cases, as illustrated in Figure 6. The left image illustrates that LSeg is trained only with positive samples from a class. When the test-time input labels do not contain any of the true labels for the corresponding pixel, the model assigns the highest probability to the closest label in the text embedding space. In this specific example, the model assigns the label "toy" since the visual features of the dog are apparently closer to "toy" than to "grass" in the embedding space, and there is no other label that can explain the visual features. A second failure case is shown on the right, where the model focuses on a single most likely object when multiple explanations are consistent with the label set. In this specific example, the windows of the house are labeled as "house" instead of "window", even though the label "window" is available as a choice. We hope that these failure cases can inform future work, which could involve augmenting training with negative samples or building fine-grained language-driven semantic segmentation models that can potentially assign multiple labels when multiple explanations fit the data well.
CONCLUSION
We introduced LSeg, a novel method and architecture for training language-driven semantic segmentation models. LSeg enables a flexible label representation that maps semantically similar labels to similar regions in an embedding space and learns to correlate visual concepts in this space to produce semantic segmentations. Our formulation enables the synthesis of zero-shot semantic segmentation models with arbitrary label sets on the fly. Our empirical results show that the resulting models are strong baselines for zero-shot semantic segmentation and can even rival few-shot segmentation models while not sacrificing accuracy on existing fixed label sets.
ACKNOWLEDGEMENT
This work was supported in part by the Pioneer Centre for AI, DNRF grant number P1. KQW is supported by grants from the National Science Foundation NSF (IIS-2107161, III-1526012, IIS-1149882, and IIS-1724282), the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875), and SAP America.
ETHICS STATEMENT
We proposed a novel approach to solve the generalized semantic segmentation problem. We use public computer vision datasets and leverage pretrained language models for our experiments. We do not believe that our code or method are inherently subject to concerns of discrimination / bias / fairness, inappropriate potential applications, impact, privacy and security issues, legal compliance, research integrity or research practice issues. However, image datasets and language models may be subject to bias that may be inherited by models trained with our approach.
REPRODUCIBILITY
Our code is reproducible, and our method can be implemented based on the description in Section 3 as well as the training details in Sections 4 and 5. We provide an interactive demo for people to try with input images of their choosing.
Figure 3: Illustration of BottleneckBlock and DepthwiseBlock.
Figure 4: LSeg zero-shot semantic segmentation results on unseen categories of the FSS-1000 dataset.
Figure 5: LSeg examples with (a) related but previously unseen labels and (b) hierarchical unseen labels. Going from left to right, labels that are removed between runs are underlined, whereas labels that are added are marked in bold red.
Figure 6: Failure cases.
Table 2: Comparison of mIoU and FB-IoU (higher is better) on COCO-20^i.
Table 3: Comparison of mIoU on FSS-1000.
Table 4: Ablation study on the depth of BottleneckBlock and DepthwiseBlock before the last layer. For both pixel accuracy (pixAcc) and mIoU, higher is better. For depth=1, we directly feed the output to reshape without activation.

Block Type | Metric | depth=0 | depth=1 | depth=2 | depth=4
DepthwiseBlock | pixAcc [%] | 79.70 | 79.72 | 79.78 | 7.67
DepthwiseBlock | mIoU [%] | 37.83 | 39.19 | 39.45 | 0.18
BottleneckBlock | pixAcc [%] | 79.70 | 79.64 | 79.70 | 79.68
BottleneckBlock | mIoU [%] | 37.83 | 39.16 | 39.79 | 38.78
Table 5: Ablation study on LSeg with fixed text encoders of different CLIP pretrained models.

Method | Backbone | Text Encoder (fixed) | Embedding dimension | pixAcc [%] | mIoU [%]
LSeg | ViT-B/32 | ViT-B/32 | 512 | 79.70 | 37.83
LSeg | ViT-B/32 | ViT-B/16 | 512 | 79.77 | 38.69
LSeg | ViT-B/32 | RN50×4 | 640 | 79.85 | 38.93
LSeg | ViT-B/32 | RN50×16 | 768 | 80.26 | 40.36
Table 6: Comparison of semantic segmentation results on the ADE20K validation set. For LSeg, we conduct experiments with fixed text encoders of ViT-B/32 and RN50×16 CLIP pretrained models.

Method | Backbone | Text Encoder | pixAcc [%] | mIoU [%]
OCNet | ResNet101 | - | - | 45.45
ACNet | ResNet101 | - | 81.96 | 45.90
DeeplabV3 | ResNeSt101 | - | 82.07 | 46.91
DPT | ViT-L/16 | - | 82.70 | 47.63
LSeg | ViT-L/16 | ViT-B/32 | 82.46 | 46.28
LSeg | ViT-L/16 | RN50×16 | 82.78 | 47.25
¹ In practice we implement this using the standard nn.CrossEntropyLoss from PyTorch.
² We also evaluated a model initialized with the CLIP image encoder, with the same setup and hyperparameters, but observed worse performance than with the ViT initialization.
REFERENCES

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In International Conference on Computer Vision, 2015.
Malik Boudiaf, Hoel Kervadec, Ziko Imtiaz Masud, Pablo Piantanida, Ismail Ben Ayed, and Jose Dolz. Few-shot segmentation without meta-learning: A good transductive inference is all you need? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13979-13988, 2021.
Maxime Bucher, Tuan-Hung Vu, Matthieu Cord, and Patrick Pérez. Zero-shot semantic segmentation. Advances in Neural Information Processing Systems, 32:468-479, 2019.
Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. Coco-stuff: Thing and stuff classes in context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1209-1218, 2018.
Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251-1258, 2017.
Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3213-3223, 2016.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255. IEEE, 2009.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98-136, January 2015.
Jun Fu, Jing Liu, Yuhang Wang, Yong Li, Yongjun Bao, Jinhui Tang, and Hanqing Lu. Adaptive context network for scene parsing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6748-6757, 2019.
Chuanxing Geng, Sheng-jun Huang, and Songcan Chen. Recent advances in open set recognition: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. Zero-shot detection via vision and language knowledge distillation. arXiv preprint arXiv:2104.13921, 2021.
Zhangxuan Gu, Siyuan Zhou, Li Niu, Zihan Zhao, and Liqing Zhang. Context-aware feature generation for zero-shot semantic segmentation. In ACM International Conference on Multimedia, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Ping Hu, Stan Sclaroff, and Kate Saenko. Uncertainty-aware learning for zero-shot semantic segmentation. In Advances in Neural Information Processing Systems, 2020.
Aishwarya Kamath, Mannat Singh, Yann LeCun, Ishan Misra, Gabriel Synnaeve, and Nicolas Carion. MDETR - modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
John Lambert, Zhuang Liu, Ozan Sener, James Hays, and Vladlen Koltun. Mseg: A composite dataset for multi-domain semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2879-2888, 2020.
Boyi Li, Felix Wu, Kilian Q. Weinberger, and Serge Belongie. Positional normalization. In Advances in Neural Information Processing Systems, pp. 1620-1632, 2019.
Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In AAAI, 2020a.
Peike Li, Yunchao Wei, and Yi Yang. Consistent structural relation learning for zero-shot segmentation. Advances in Neural Information Processing Systems, 33, 2020b.
Xiang Li, Tianhan Wei, Yau Pun Chen, Yu-Wing Tai, and Chi-Keung Tang. Fss-1000: A 1000-class dataset for few-shot segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2869-2878, 2020c.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pp. 740-755. Springer, 2014.
Bo Liu, Hao Kang, Haoxiang Li, Gang Hua, and Nuno Vasconcelos. Few-shot open-set recognition using meta-learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8798-8807, 2020a.
Yongfei Liu, Xiangyi Zhang, Songyang Zhang, and Xuming He. Part-aware prototype network for few-shot semantic segmentation. In European Conference on Computer Vision, pp. 142-158. Springer, 2020b.
Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In Yoshua Bengio and Yann LeCun (eds.), 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings, 2013. URL http://arxiv.org/abs/1301.3781.
Juhong Min, Dahyun Kang, and Minsu Cho. Hypercorrelation squeeze for few-shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
Shervin Minaee, Yuri Y. Boykov, Fatih Porikli, Antonio J. Plaza, Nasser Kehtarnavaz, and Demetri Terzopoulos. Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
Roozbeh Mottaghi, Xianjie Chen, Xiaobai Liu, Nam-Gyu Cho, Seong-Whan Lee, Sanja Fidler, Raquel Urtasun, and Alan Yuille. The role of context for object detection and semantic segmentation in the wild. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4990-4999, 2017.
Khoi Nguyen and Sinisa Todorovic. Feature weighting and boosting for few-shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 622-631, 2019.
Pramuditha Perera, Vlad I. Morariu, Rajiv Jain, Varun Manjunatha, Curtis Wigington, Vicente Ordonez, and Vishal M. Patel. Generative-discriminative feature representations for open-set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11814-11823, 2020.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, pp. 8748-8763, 2021.
Kate Rakelly, Evan Shelhamer, Trevor Darrell, Alyosha Efros, and Sergey Levine. Conditional networks for few-shot semantic segmentation. In International Conference on Learning Representations Workshop, 2018.
René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In The IEEE International Conference on Computer Vision, 2021.
Walter J. Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E. Boult. Toward open set recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7):1757-1772, 2012.
Amirreza Shaban, Shray Bansal, Zhen Liu, Irfan Essa, and Byron Boots. One-shot learning for semantic segmentation. In British Machine Vision Conference, 2017.
Mennatullah Siam, Boris N. Oreshkin, and Martin Jagersand. Amp: Adaptive masked proxies for few-shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5249-5258, 2019.
Shuran Song, Samuel P. Lichtenberg, and Jianxiong Xiao. Sun rgb-d: A rgb-d scene understanding benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 567-576, 2015.
Zhuotao Tian, Hengshuang Zhao, Michelle Shu, Zhicheng Yang, Ruiyu Li, and Jiaya Jia. Prior guided feature enrichment network for few-shot segmentation. IEEE Transactions on Pattern Analysis & Machine Intelligence, (01):1-1, 2020.
Girish Varma, Anbumani Subramanian, Anoop Namboodiri, Manmohan Chandraker, and C.V. Jawahar. Idd: A dataset for exploring problems of autonomous navigation in unconstrained environments. In 2019 IEEE Winter Conference on Applications of Computer Vision, pp. 1743-1751. IEEE, 2019.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. CoRR, abs/1411.4555, 2014. URL http://arxiv.org/abs/1411.4555.
Haochen Wang, Xudong Zhang, Yutao Hu, Yandan Yang, Xianbin Cao, and Xiantong Zhen. Few-shot semantic segmentation with democratic attention networks. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIII 16, pp. 730-746. Springer, 2020.
Kaixin Wang, Jun Hao Liew, Yingtian Zou, Daquan Zhou, and Jiashi Feng. Panet: Few-shot image semantic segmentation with prototype alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9197-9206, 2019.
Yuxi Wang, Jian Liang, and Zhaoxiang Zhang. Give me your trained model: Domain adaptive semantic segmentation without source data. arXiv preprint arXiv:2106.11653, 2021.
Zhirong Wu, Yuanjun Xiong, Stella Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance-level discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3733-3742, 2018.
Yongqin Xian, Subhabrata Choudhury, Yang He, Bernt Schiele, and Zeynep Akata. Semantic projection network for zero- and few-label semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8256-8265, 2019.
Jinyu Yang, Chunyuan Li, Weizhi An, Hehuan Ma, Yuzhi Guo, Yu Rong, Peilin Zhao, and Junzhou Huang. Exploring robustness of unsupervised domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2636-2645, 2020.
Yuhui Yuan, Xilin Chen, and Jingdong Wang. Object-contextual representations for semantic segmentation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VI 16, pp. 173-190. Springer, 2020.
Chi Zhang, Guosheng Lin, Fayao Liu, Jiushuang Guo, Qingyao Wu, and Rui Yao. Pyramid graph networks with connection attentions for region-based one-shot semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9587-9595, 2019.
Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Muller, R. Manmatha, Mu Li, and Alexander Smola. Resnest: Split-attention networks. arXiv preprint arXiv:2004.08955, 2020a.
Hongjie Zhang, Ang Li, Jie Guo, and Yanwen Guo. Hybrid models for open set recognition. In European Conference on Computer Vision, pp. 102-117. Springer, 2020b.
Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. International Journal of Computer Vision, 127(3):302-321, 2019.
Learning placeholders for open-set recognition. Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionDa-Wei Zhou, Han-Jia Ye, and De-Chuan Zhan. Learning placeholders for open-set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4401-4410, 2021. |
263,829,725 | TOPOMLP: A SIMPLE YET STRONG PIPELINE FOR DRIVING TOPOLOGY REASONING | Topology reasoning aims to comprehensively understand road scenes and present drivable routes in autonomous driving. It requires detecting road centerlines (lane) and traffic elements, further reasoning their topology relationship, i.e., lane-lane topology and lane-traffic topology. In this work, we first present that the topology score relies heavily on detection performance on lane and traffic elements. Therefore, we introduce a powerful 3D lane detector and an improved 2D traffic element detector to extend the upper limit of topology performance. Further, we propose TopoMLP, a simple yet high-performance pipeline for driving topology reasoning. Based on the impressive detection performance, we develop two simple MLP-based heads for topology generation. TopoMLP achieves state-of-the-art performance on the OpenLane-V2 benchmark, i.e., 41.2% OLS with a ResNet-50 backbone. It is also the 1st solution for the 1st OpenLane Topology in Autonomous Driving Challenge. We hope such a simple and strong pipeline can provide some new insights to the community. Code is at https://github.com/wudongming97/TopoMLP. | [] | TOPOMLP: A SIMPLE YET STRONG PIPELINE FOR DRIVING TOPOLOGY REASONING
1 Nov 2023
Dongming Wu [email protected]
Beijing Institute of Technology
Jiahao Chang
University of Science and Technology of China
Fan Jia
MEGVII Technology
Yingfei Liu
MEGVII Technology
Tiancai Wang [email protected]
MEGVII Technology
Jianbing Shen [email protected]
SKL-IOTSC
University of Macau
TOPOMLP: A SIMPLE YET STRONG PIPELINE FOR DRIVING TOPOLOGY REASONING
arXiv:2310.06753v2 [cs.CV]
Topology reasoning aims to comprehensively understand road scenes and present drivable routes in autonomous driving. It requires detecting road centerlines (lane) and traffic elements, further reasoning their topology relationship, i.e., lane-lane topology and lane-traffic topology. In this work, we first present that the topology score relies heavily on detection performance on lane and traffic elements. Therefore, we introduce a powerful 3D lane detector and an improved 2D traffic element detector to extend the upper limit of topology performance. Further, we propose TopoMLP, a simple yet high-performance pipeline for driving topology reasoning. Based on the impressive detection performance, we develop two simple MLP-based heads for topology generation. TopoMLP achieves state-of-the-art performance on the OpenLane-V2 benchmark, i.e., 41.2% OLS with a ResNet-50 backbone. It is also the 1st solution for the 1st OpenLane Topology in Autonomous Driving Challenge. We hope such a simple and strong pipeline can provide some new insights to the community. Code is at https://github.com/wudongming97/TopoMLP.
INTRODUCTION
Understanding the topology in road scenes is an important task for autonomous driving, since it provides information about the drivable region as well as the traffic signals. Recently, the topology reasoning task has raised great attention in the community thanks to its crucial application in ego planning (Chai et al., 2020; Casas et al., 2021; Hu et al., 2023). In specific, given multi-view images, topology reasoning aims to learn vectorized road graphs between the centerlines and traffic elements (Li et al., 2023; Wang et al., 2023). It consists of four primary tasks: centerline detection, traffic element detection, lane-lane topology reasoning, and lane-traffic topology reasoning.
Different from conventional perception pipelines that include multiple independent tasks (Li et al., 2022b; Liu et al., 2023b), these four tasks naturally have a logical order, i.e., first-detect-then-reason. If some lane and traffic instances are not detected, the corresponding topology connections will be missed, as illustrated in the right of Fig. 1. This naturally leads to a question: what is the extent of the quantitative effect of basic detection on topology reasoning? To answer this question, we conduct detailed ablation studies on detection performance by varying the backbones. They show that the topology performance is consistently improved with stronger detection. When the basic detection is frozen, we find that replacing the topology prediction with the ground truth (GT) introduces only minor improvements. For example, when using the Swin-B backbone, the TOP_ll and TOP_lt scores with topology GT are 10.0% and 30.9%, which are only higher than using topology prediction by 0.5% and 2.6%, respectively. This phenomenon encourages us to prioritize the design of the two detectors.
In specific, we employ two query-based detection branches: one (Liu et al., 2023b) dedicated to the detection of 3D centerlines, and another one (Zhu et al., 2021) for 2D traffic detection. The 3D lane detector utilizes a smooth lane representation and interprets each lane query as a set of control points of a Bézier curve. Inspired by MOTRv2 (Zhang et al., 2023), the performance of the 2D traffic detector can be further enhanced by adding an additional (optional) YOLOv8 object detector thanks to its advantage on detecting small objects, such as traffic lights. Despite the basic detection, another challenge in driving topology reasoning is how to effectively model the connection between lanes and traffic elements. Prior works (Langenberg et al., 2019; Can et al., 2021; 2022) employ a straightforward method, which uses a multi-layer perceptron (MLP) to predict the topology relationship. However, they mainly focus on associating different lanes in the image domain. To cope with the 3D space, some follow-up methods (Li et al., 2023; Xu et al., 2023) tend to utilize graph-based modeling to predict the topology structure.
In this paper, we develop a simple yet effective framework, termed TopoMLP, for topology reasoning. Our work is inspired by the pairwise representation in human-object interaction detection (Gao et al., 2018; Chao et al., 2018; Wang et al., 2019), which is similar to topology reasoning. The pairwise representation is constructed by encoding the human/object pair boxes into two mask embeddings. These embeddings are concatenated together and further used to perform action classification by a simple MLP. We wonder if it is possible to develop a simple MLP-based framework for sufficiently understanding the relationships in driving topology reasoning. Taking lane-lane topology as an example, if the lanes are accurately predicted, the intersection points (see Fig. 2) between lanes can be easily reasoned to be overlapped. As for the lane-traffic topology, the traffic elements can be easily matched with the corresponding centerlines by the relative location between traffic bounding boxes and lane points. Therefore, a simple MLP seems enough for efficient topology reasoning. Specifically, we convert the query representations of both traffic elements and centerlines into two embeddings and concatenate them together for topology classification by an appended MLP.
Moreover, we notice that the topology metrics of OpenLane-V2 have some drawbacks. They use a graph-based mAP, which focuses more on the order of predictions. Some false positives from unmatched lanes or traffic elements are defaulted to a high confidence score, i.e., 1.0. Accordingly, manually decreasing the priority of these false positive predictions (or increasing the priority of true positive predictions) improves the overall mAP score by a large margin. To tackle this problem, we suggest including a correctness factor in the existing topology metric to correct the drawback.
Our contributions are summarized as four-fold. First, we provide an in-depth analysis of the nature of driving topology reasoning. It requires following a "first-detect-then-reason" philosophy for better topology prediction. Second, we propose a simple but strong model, named TopoMLP. It includes two well-designed high-performance detectors and two elegant MLP networks with position embedding for topology reasoning. Third, we claim that the current topology reasoning evaluation possesses a significant loophole. To rectify this, we enhance the topology metric by incorporating a correctness factor. Fourth, all experiments are conducted on the popular driving topology reasoning benchmark, OpenLane-V2, showing that TopoMLP achieves state-of-the-art performance. Besides, TopoMLP ranks 1st in the 1st OpenLane Topology in Autonomous Driving Challenge. We hope such a simple and strong pipeline can provide some new insights to the community.
RELATED WORKS
LANE DETECTION METHOD
For a long time, detecting lane markings has been one of the most important topics in autonomous driving. Prior works usually use appearance and geometric cues to detect the road (Tan et al., 2006; Alvarez & López, 2010; Paz et al., 2015). With the advancement of deep learning, the development of lane detection has made great progress. Among them, some methods attempt to use a segmentation map to describe road lanes (Batra et al., 2019; Can et al., 2022; He & Balakrishnan, 2022). Currently, vector-based methods have become mainstream because they can deal well with 3D lane detection (Garnett et al., 2019; Guo et al., 2020; Yan et al., 2022; Chen et al., 2022). However, these methods base the query on a set of predefined Y-axis points to predict 3D lanes, which restricts the 3D lane prediction to varying only along the Y axis. More recently, TopoNet (Li et al., 2023) models each lane as an anchor query, but it misses the lane prior of a smoothed curve. In our study, we make full use of this prior to provide a smoother representation.
LANE TOPOLOGY LEARNING
Learning lane topology plays an important role in scene understanding for autonomous driving.
Earlier works (Chu et al., 2019; Homayounfar et al., 2019; He et al., 2020; Bandara et al., 2022) focus on generating road graphs from aerial images. However, using aerial images is unreasonable for ongoing vehicles. Therefore, directly using vehicle-mounted sensors to detect lane topology has become popular due to its valuable application. STSU (Can et al., 2021) uses a Transformer-based model to detect centerlines and objects together, and then predicts centerline associations, formatted as a directed graph, by an MLP. TopoRoad (Can et al., 2022) further introduces additional minimal cycle queries to ensure the preservation of the order of intersections. Can et al. (2023) also provide additional supervision of the relationship by considering the centerlines as cluster centers to assign objects, greatly improving the lane graph estimation. LaneGAP (Liao et al., 2023) designs a heuristic-based algorithm to recover the graph from a set of lanes. CenterLineDet (Xu et al., 2023) and TopoNet (Li et al., 2023) regard centerlines as vertices and design a graph model to update centerline topology. In this work, we focus on the nature of lane topology and employ a simple and elegant position embedding to enhance topology modeling.
HD MAP PERCEPTION
HD map perception aims to comprehend the layout of the driving scene, such as lanelines, pedestrian crossings, and drivable areas, mirroring the concept of driving scene reasoning. Recent research focuses on learning HD maps using segmentation and vectorization techniques to meet low-cost requirements. HDMapNet (Li et al., 2022a) explores grouping and vectorizing the segmented map with complicated post-processing. VectorMapNet (Liu et al., 2023a) directly uses a sequence of points to represent each map element, further decoding laneline locations. Some follow-up methods propose different modeling strategies to represent the sequence of points, such as permutation-based modeling (Liao et al., 2022), the piecewise Bézier curve (Qiao et al., 2023), and the pivot-based map (Ding et al., 2023). Different from the aforementioned approaches, our method employs simple and elegant modeling, with each query referring to a lane.
METHOD
In this section, we elaborate on TopoMLP, a unified query-based framework for driving topology reasoning. It is able to effectively accomplish four different tasks in a single framework, including lane detection, traffic element detection, lane-lane topology prediction, and lane-traffic topology prediction.
The overall pipeline of TopoMLP is shown in Fig. 2. More details are described as follows.
LANE DETECTOR
Our lane detector is inspired by the advanced 3D multi-view object detector PETR (Liu et al., 2022; 2023b), which first introduces 3D position embedding (3D PE) into the query-based framework DETR (Carion et al., 2020; Zhu et al., 2021). In this work, we represent each centerline as a smooth Bézier curve with M control points within 3D space, and each curve refers to a lane query. Our lane detector performs direct interaction between lane queries and multi-view visual features in the transformer decoder and outputs control points, which are further transformed to lane coordinates.
Formally, given multi-view images from camera sensors, we first employ a backbone (e.g., ResNet-50 (He et al., 2016)) to generate feature maps $F \in \mathbb{R}^{V \times C \times H \times W}$, where $V$, $C$, $H$, and $W$ represent the view number, channel, height, and width of the features, respectively. The 3D PE is encoded into the visual features to generate position-aware features following (Liu et al., 2022). Then we initialize $N_L$ learnable 3D lane anchor points, denoted as $Q_L \in \mathbb{R}^{N_L \times 3}$. After projecting the feature dimension of the anchor points from 3 to $C$ using a position encoding and a linear layer, we feed them into the transformer decoder to update the lane query features $\hat{Q}_L$:

$$\hat{Q}_L = \mathrm{LaneDecoder}(F, \mathrm{Linear}(Q_L)) \in \mathbb{R}^{N_L \times C}, \tag{1}$$

where LaneDecoder is a stack of Transformer decoder layers. On top of the transformer decoder, we adopt two independent MLPs to predict the offsets of the control points and the classification scores, respectively. The final control point outputs are ordered and obtained by adding the basic anchor points to the relative offsets. The control points are transformed into lane points for training and testing.
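To make the control-point-to-lane-point transformation concrete, the sketch below evaluates a cubic Bézier curve (M = 4 control points, matching our setting) at 11 uniformly spaced parameters, the number of lane points used in the loss. The NumPy implementation and function name are illustrative assumptions rather than the released code.

```python
import numpy as np
from math import comb

def bezier_to_lane_points(ctrl_pts: np.ndarray, num_points: int = 11) -> np.ndarray:
    """Evaluate a Bezier curve at uniformly spaced parameters.

    ctrl_pts: (M, 3) control points predicted by the lane head.
    Returns (num_points, 3) ordered 3D lane points.
    """
    m = ctrl_pts.shape[0] - 1                     # curve degree (M - 1)
    t = np.linspace(0.0, 1.0, num_points)         # (num_points,)
    # Bernstein basis: B_{i,m}(t) = C(m, i) * t^i * (1 - t)^(m - i)
    basis = np.stack(
        [comb(m, i) * t**i * (1.0 - t)**(m - i) for i in range(m + 1)], axis=1
    )                                             # (num_points, M)
    return basis @ ctrl_pts                       # (num_points, 3)

# Example: 4 control points -> 11 lane points, the setting used for the lane L1 loss.
lane_pts = bezier_to_lane_points(np.random.randn(4, 3))
```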
TRAFFIC ELEMENT DETECTOR
The prevalent approaches for traffic element detection in driving topology reasoning are mainly query-based and deployed end-to-end (Li et al., 2023; Kalfaoglu et al., 2023; Lu et al., 2023). Although such a straightforward end-to-end implementation is appealing, the detection performance is much inferior to specialized 2D detectors, such as the YOLO series, due to small objects and class imbalance problems. To address these limitations, we propose to optionally improve the query-based detector by elegantly incorporating an extra YOLOv8 object detector.
Our traffic element detector typically follows the head design in Deformable DETR (Zhu et al., 2021) to predict bounding boxes and classification scores. It adopts query embeddings to generate a set of reference points as anchors. We modify the reference format into reference boxes with center points, height, and width. As an alternative, the high-quality proposals from YOLOv8 can serve as an anchor box initialization, providing better local priors. This greatly eases the trade-off between topology reasoning and traffic detection.
Specifically, we first collect the multi-scale feature maps of the front view from the multi-view features $F$, denoted as $F_0$. YOLOv8 takes $F_0$ as input and generates multiple proposals, which are concatenated with a set of reference boxes produced from randomized queries, denoted as $R_T$. The boxes generated by YOLOv8 are encoded by sine-cosine embedding to generate query features, which are concatenated with the randomized queries, denoted as $Q_T$. The query features as well as the reference boxes are fed into the deformable decoder:

$$\hat{Q}_T = \mathrm{TrafficDecoder}(F_0, Q_T, R_T) \in \mathbb{R}^{N_T \times C}, \tag{2}$$

where TrafficDecoder is a stack of Deformable decoder layers. Based on the decoded traffic features $\hat{Q}_T$, we implement two independent MLPs for bounding box classification and regression.
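As a rough illustration of how stored proposal boxes can be turned into decoder queries, the following sketch sine-cosine-encodes each (cx, cy, w, h) box and concatenates the result with randomly initialized queries; the dimensions and helper names are our own assumptions rather than the released implementation.

```python
import torch

def sine_cosine_box_embed(boxes: torch.Tensor, dim: int = 256) -> torch.Tensor:
    """Encode (N, 4) normalized boxes (cx, cy, w, h) into (N, dim) query features."""
    half = dim // 8  # frequencies per box coordinate (4 coords x sin/cos)
    freqs = 10000 ** (torch.arange(half, dtype=torch.float32) / half)
    x = boxes.unsqueeze(-1) / freqs              # (N, 4, half)
    emb = torch.cat([x.sin(), x.cos()], dim=-1)  # (N, 4, 2 * half)
    return emb.flatten(1)                        # (N, dim)

# Hypothetical usage: 20 YOLOv8 proposals + 100 random queries -> 120 decoder queries.
proposals = torch.rand(20, 4)
proposal_queries = sine_cosine_box_embed(proposals)                     # (20, 256)
random_queries = torch.randn(100, 256)
decoder_queries = torch.cat([proposal_queries, random_queries], dim=0)  # (120, 256)
```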
LANE-LANE TOPOLOGY REASONING
The lane-lane topology reasoning branch aims to predict the lane-lane connection relationship. To incorporate discriminative lane information, we integrate the predicted lane points into the lane query features. In specific, we implement an MLP to embed the lane coordinates and then add them into the decoded lane query features $\hat{Q}_L \in \mathbb{R}^{N_L \times C}$. For notational simplicity, we still use $\hat{Q}_L$ to represent the integrated query features. They are repeated $N_L$ times, generating two features with sizes $N_L \times (N_L) \times C$ and $(N_L) \times N_L \times C$, where $(N_L)$ denotes different repeating directions. After a concatenation operation generating $\hat{Q}_{LL} \in \mathbb{R}^{N_L \times N_L \times 2C}$, we apply an MLP to perform binary classification:

$$G^{LL} = \mathrm{MLP}(\hat{Q}_{LL}) \in \mathbb{R}^{N_L \times N_L}, \tag{3}$$

where $G^{LL}$ is the lane-lane topology prediction.
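A minimal sketch of this pairwise construction is given below: lane features are tiled along two axes, concatenated into N_L x N_L pairs, and scored by a small MLP. The layer widths and depth are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

class LaneLaneTopologyHead(nn.Module):
    """Pairwise lane-lane topology head: tile, concatenate, classify."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, 1),
        )

    def forward(self, lane_feats: torch.Tensor) -> torch.Tensor:
        # lane_feats: (N_L, C) decoded lane queries (with lane-coordinate embedding added).
        n = lane_feats.size(0)
        a = lane_feats.unsqueeze(1).expand(n, n, -1)  # (N_L, N_L, C), varies over rows
        b = lane_feats.unsqueeze(0).expand(n, n, -1)  # (N_L, N_L, C), varies over columns
        pair = torch.cat([a, b], dim=-1)              # (N_L, N_L, 2C)
        return self.mlp(pair).squeeze(-1)             # (N_L, N_L) topology logits

logits = LaneLaneTopologyHead()(torch.randn(300, 256))  # e.g. 300 lane queries
```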
LANE-TRAFFIC TOPOLOGY REASONING
The key idea of our lane-traffic topology reasoning is to project the two kinds of features into the same space. Given the lane query embedding $\hat{Q}_L \in \mathbb{R}^{N_L \times C}$ from 3D space, we sum the view transformation matrix $A \in \mathbb{R}^{3 \times 3}$ from 3D to perspective view with it, i.e., $\hat{Q}_L + \mathrm{MLP}(A)$. Here, the view transformation matrix $A$ is formulated in terms of the camera intrinsics and extrinsics. Similar to lane-lane topology, the transformed lane query features and the traffic query embedding $\hat{Q}_T \in \mathbb{R}^{N_T \times C}$ are transformed into $\hat{Q}_{LT} \in \mathbb{R}^{N_L \times N_T \times 2C}$ through repeating and concatenating operations. An MLP network is used to generate the lane-traffic topology prediction $G^{LT}$:

$$G^{LT} = \mathrm{MLP}(\hat{Q}_{LT}) \in \mathbb{R}^{N_L \times N_T}. \tag{4}$$
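Under the same illustrative assumptions as above, the following sketch flattens the 3x3 view transformation matrix, embeds it with an MLP, adds it to every lane query, and then scores lane-traffic pairs.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: 300 lane queries, 100 traffic queries, C = 256.
lane_feats, traffic_feats = torch.randn(300, 256), torch.randn(100, 256)
view_matrix = torch.randn(3, 3)  # 3D-to-perspective map from intrinsics/extrinsics

view_embed = nn.Sequential(nn.Linear(9, 256), nn.ReLU(), nn.Linear(256, 256))
lane_feats = lane_feats + view_embed(view_matrix.flatten())  # broadcast over lanes

pair = torch.cat(
    [
        lane_feats.unsqueeze(1).expand(-1, 100, -1),     # (300, 100, 256)
        traffic_feats.unsqueeze(0).expand(300, -1, -1),  # (300, 100, 256)
    ],
    dim=-1,
)  # (300, 100, 512)
topology_logits = nn.Linear(512, 1)(pair).squeeze(-1)  # (300, 100)
```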
LOSS FUNCTION
Our final loss function is defined as follows:

$$\mathcal{L} = \mathcal{L}_{det_l} + \mathcal{L}_{det_t} + \mathcal{L}_{top_{ll}} + \mathcal{L}_{top_{lt}}, \tag{5}$$

where $\mathcal{L}_{det_l}$ is the lane detection loss, which includes a focal loss (Lin et al., 2017) supervising classification and an L1 loss for lane regression. $\mathcal{L}_{det_t}$ is the traffic element detection loss, which has a focal loss for classification, and an L1 loss and a GIoU loss for bounding box regression. The lane-lane topology loss $\mathcal{L}_{top_{ll}}$ contains a focal loss for binary classification and an L1 loss between the matched lane points in terms of the topology ground truth. The lane-traffic topology loss $\mathcal{L}_{top_{lt}}$ is a focal loss for binary classification. Since TopoMLP is a query-based method, it requires matching between the predictions and the ground truth. In this work, we only use bipartite matching on the basic lane and traffic element detection. The matching is directly reused in the topology reasoning losses as well.
EXPERIMENTS
DATASET AND METRIC
Dataset. The experiments are conducted on OpenLane-V2 (Wang et al., 2023). OpenLane-V2 is a large-scale perception and reasoning dataset for scene structure in autonomous driving. It has two subsets, i.e., subset A and subset B, developed from Argoverse 2 (Wilson et al., 2021) and nuScenes (Caesar et al., 2020), respectively. Each subset comprises 1,000 scenes with annotations at 2 Hz. Note that subset A contains seven views and subset B contains six views.
Evaluation Metric. The two basic detection tasks require measuring instance-level performance. Therefore, the perception metrics, DET_l and DET_t, are mean average precision (mAP) following (Wang et al., 2023). Specifically, DET_l employs the Fréchet distance for quantifying similarity and is averaged over match thresholds set at {1.0, 2.0, 3.0}. On the other hand, DET_t employs Intersection over Union (IoU) as the similarity measure, with averages calculated over the traffic categories. For the topology metrics, the TOP score is also an mAP metric, designed specifically for graph data. To summarize the overall effect of primary detection and topology reasoning, the OpenLane-V2 Score (OLS) is computed as:

$$\mathrm{OLS} = \frac{1}{4}\left[\mathrm{DET}_l + \mathrm{DET}_t + f(\mathrm{TOP}_{ll}) + f(\mathrm{TOP}_{lt})\right], \tag{6}$$

where $f$ is the square root function.
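As a quick sanity check of Eq. (6), the snippet below computes OLS from the four sub-scores given as percentages, applying the square root to the fractional TOP values; this reading of the scaling is our assumption, and it reproduces the TopoMLP row of Table 1.

```python
def ols(det_l: float, det_t: float, top_ll: float, top_lt: float) -> float:
    """OpenLane-V2 Score from percentage-valued sub-metrics."""
    f = lambda top: 100.0 * (top / 100.0) ** 0.5  # square root on the fraction
    return (det_l + det_t + f(top_ll) + f(top_lt)) / 4.0

# TopoMLP (ResNet-50) row of Table 1: expect roughly 38.2.
print(round(ols(28.3, 50.0, 7.2, 22.8), 1))
```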
IMPLEMENTATION DETAILS
Feature Extractor. All images are resized to the same resolution of 1550×2048 and are downsampled with a ratio of 0.5. We implement different backbones, i.e., ResNet-50 (He et al., 2016), VOV (Lee et al., 2019), and Swin-B (Liu et al., 2021), for feature extraction. The number of output channels is set to $C = 256$. For lane detection, the C5 feature is upsampled and fused with the C4 feature using FPN. For traffic detection, the C3, C4, and C5 features are used as the feature pyramid.
Lane Detector. The lane query number is set to $N_L = 300$, and the number of control points is 4. During training, the control points are transformed into 11 lane points for calculating the loss. We set the region to [−51.2m, 51.2m] on the X-axis, [−25.6m, 25.6m] on the Y-axis, and [−8m, 4m] on the Z-axis. The lane detection head is composed of 6 transformer decoder layers. The MLP heads contain two fully connected layers with ReLU activation. For the lane detection loss $\mathcal{L}_{det_l}$, the weight of the classification part is 1.5, and the weight of the regression part is 0.2.
Traffic Detector. The decoder architecture follows the original design of Deformable DETR (Zhu et al., 2021). The number of random queries in the traffic decoder is 100. The detection results from YOLOv8 are stored in advance. The weight of the classification loss is 1.0, the weight of the L1 loss is 2.5, and the weight of the GIoU loss is 1.0.
Topology Head. The MLP network used in the two topology heads consists of three linear layers with ReLU activation. We formulate the lane-lane topology loss with an L1 loss and a classification loss as $\mathcal{L}_{top_{ll}} = \lambda_{L1} \mathcal{L}_{L1} + \lambda_{cls} \mathcal{L}_{cls}$, where $\lambda_{L1} = 0.1$ and $\lambda_{cls} = 5$. The loss coefficient of the lane-traffic topology loss $\mathcal{L}_{top_{lt}}$ is 0.5.
Training and Inference. The overall model TopoMLP is trained with the AdamW optimizer (Loshchilov & Hutter, 2017) with a weight decay of 0.01. The learning rate is initialized to $2.0\times10^{-4}$ and decayed with a cosine annealing policy (Loshchilov & Hutter, 2016). We adopt HSV augmentation and the grid mask strategy for training. All experiments are trained for 24 epochs on 8 Tesla A100 GPUs with a batch size of 8 if not specified. During inference, our model outputs at most 300 lanes for evaluation. Other post-processing techniques are not implemented.
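A minimal sketch of this optimization setup, assuming the standard PyTorch AdamW and cosine annealing APIs; the model and the number of iterations are placeholders:

```python
import torch

model = torch.nn.Linear(256, 256)  # placeholder for the full TopoMLP model
optimizer = torch.optim.AdamW(model.parameters(), lr=2.0e-4, weight_decay=0.01)
# Cosine annealing over 24 epochs' worth of iterations (iteration count assumed).
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=24 * 3500)

for step in range(10):  # stand-in training loop
    loss = model(torch.randn(8, 256)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```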
STATE-OF-THE-ART COMPARISON
We compare TopoMLP with state-of-the-art approaches, such as STSU (Can et al., 2021), VectorMapNet (Liu et al., 2023a), MapTR (Liao et al., 2022), and TopoNet (Li et al., 2023). Table 1 shows the results on subset A of OpenLane-V2. Without bells and whistles, our method achieves 38.2 OLS using the ResNet-50 backbone, surpassing other state-of-the-art methods. Compared to TopoNet, our approach shows much better topology reasoning accuracy (7.2 vs. 4.1 on TOP_ll, 22.8 vs. 20.8 on TOP_lt) while also achieving decent detection accuracy (28.3 vs. 28.5 on DET_l, 50.0 vs. 48.1 on DET_t). For better performance, we apply a more powerful backbone and longer training: when using Swin-B for 48 epochs, the OLS score rises to 43.7.
Table 2 shows the performance comparison on OpenLane-V2 subset B. The proposed TopoMLP exceeds other models in all metrics when using the same ResNet-50 backbone. Particularly in terms of topology performance, it surpasses TopoNet by a large margin (7.6 vs. 2.5 on TOP_ll, 17.8 vs. 14.2 on TOP_lt). Moreover, a performance boost is also observed when integrating more powerful backbones. Overall, these results significantly highlight the efficacy of our TopoMLP model.
ABLATION STUDY
In this section, we study several important components of our method and conduct ablation experiments on OpenLane-V2 subset A.
Analysis on Lane Detection. We investigate the effect of different settings in lane detection. i) In Table 3 (a), the improvement in lane detection and lane-lane topology performance is clear when the number of lane queries increases from 200 to 300. However, it is observed that any further increase does not contribute additional improvement. To balance model efficiency and performance, the number of lane queries is set to 300. ii) Table 3 (b) illustrates the influence of the number of control points in Bézier modeling. Empirically, we choose 4 control points for better performance.
YOLOv8 Proposal on Traffic Detection. To study the benefit of using YOLOv8 proposals, we test their effect under two settings: using the ResNet-50 and the Swin-B backbone. The main results are shown in Table 1, marked by "*". It is well seen that using YOLOv8 predictions as proposal queries consistently improves the detection performance, indicating the effectiveness of YOLOv8 proposals. Moreover, it is worth noticing that TopoMLP without YOLOv8 still achieves higher traffic detection scores than the other counterparts.
Representation Way in Topology Reasoning. It is also of interest to analyze the impact of different lane and traffic element representation methods in topology reasoning. i) We first analyze the lane representation in lane-lane topology, which additionally integrates lane coordinates. As shown in Table 3 (c), removing it leads to a minor performance degradation on TOP_ll, which indicates that the explicit lane position is useful for topology reasoning. Moreover, abandoning the L1 loss for intersection point supervision also causes a score decrease. ii) Table 3 (d) explores the impact of incorporating the view transformation matrix into the lane feature for lane-traffic topology. It suggests that the integration of this matrix into the lane representation improves the reasoning of lane-traffic topology (22.8 vs. 21.4 on TOP_lt). iii) Using the bounding boxes of traffic elements to replace traffic features results in a drop on TOP_lt (22.8 vs. 22.0), as shown in the last row of Table 3 (d). This is because only adopting boxes lacks category information. Despite the advantages of position embedding, a single MLP network proves sufficient for achieving high-performance topology reasoning.
VISUALIZATION
We visualize the lane detection and lane-lane topology reasoning results in Fig. 3 by projecting 3D lanes into the images. Despite potential challenges like intricate intersections or occluded centerlines, TopoMLP predicts the centerlines well and constructs a lane graph in BEV. Fig. 4 displays the results of traffic detection as well as lane-traffic topology reasoning. As clearly shown, TopoMLP identifies the majority of traffic elements, even small objects, and allocates them to the appropriate lanes.
MORE DISCUSSION
Before stepping into our thorough analysis, let us revisit the definition of the topology metric. Given the ground-truth graph $G = (V, E)$ and the predicted one $\hat{G} = (\hat{V}, \hat{E})$, we establish a projection on the vertices such that $V = V' \subseteq \hat{V}$. This projection utilizes the Fréchet and IoU distances to measure similarity among lane centerlines and traffic elements, respectively. Within the predicted $V'$, we consider two vertices as connected if the confidence of the edge surpasses 0.5. Subsequently, the TOP score is derived by averaging the vertex mAP between $(V, E)$ and $(V', E')$ over all vertices:

$$\mathrm{TOP} = \frac{1}{|V|} \sum_{v \in V} \frac{\sum_{\hat{n}' \in \hat{N}'(v)} P(\hat{n}')\, \mathbb{1}_{(\hat{n}' \in N(v))}}{|N(v)|}, \tag{7}$$

Table 4: Comparison of the original TOP metric and our adjusted TOP (marked by †) when using the enhanced prediction or not. TopoNet is reimplemented by us using the same ResNet-50 backbone as our TopoMLP. The experiments are conducted on OpenLane-V2 subset A.
where $N(v)$ is the ordered list of neighbors of vertex $v$ ranked by confidence, and $P(v)$ is the precision of the $i$-th vertex $v$ in the ordered list. We provide a toy example of the important loophole in Fig. 5. A crucial point hinges on the precision of the ordered list. For those unmatched instances that our detector cannot identify, their confidence scores are defaulted to 1.0. That is, there are many false positives with high confidence. Suppose we push the prediction confidences to 1.0/0.0 by thresholding at 0.5; the predictions that are true positives will then rank with higher confidence, leading to enhanced precision. The quantitative results are shown in the first two rows of Table 4. Using this strategy to enhance the prediction leads to consistent performance improvement, for both TopoNet and our TopoMLP.
To tackle this issue, we suggest a new TOP metric incorporating a correctness factor. Let us denote the enhanced precision as $P(v)^{\dagger}$. The adjusted TOP metric $\mathrm{TOP}^{\dagger}$ is formulated as:

$$\mathrm{TOP}^{\dagger} = \frac{1}{|V|} \sum_{v \in V} \frac{\sum_{\hat{n}' \in \hat{N}'(v)} P(\hat{n}')\, \mathbb{1}_{(\hat{n}' \in N(v))} \cdot \frac{N_{TP}}{N_{TP} + N_{FP}}}{|N(v)|}, \tag{8}$$

where $N_{TP}$ is the number of true positives and $N_{FP}$ is the number of false positives. In the last two rows of Table 4, we evaluate TopoNet and TopoMLP on the adjusted topology metric, demonstrating its capability to effectively shield against the "attack". Despite the alteration of the metric, TopoMLP still surpasses other methods such as TopoNet in performance.
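The sketch below illustrates Eq. (8) for a single ground-truth vertex: the ranked-list average precision is scaled by the correctness factor N_TP / (N_TP + N_FP), so padding the prediction list with confident false positives no longer pays off. The data structures are simplified assumptions.

```python
def top_dagger_for_vertex(pred_neighbors, gt_neighbors) -> float:
    """pred_neighbors: neighbor ids sorted by descending edge confidence.
    gt_neighbors: set of ground-truth neighbor ids."""
    if not gt_neighbors:
        return 0.0
    tp, score = 0, 0.0
    for rank, n in enumerate(pred_neighbors, start=1):
        if n in gt_neighbors:
            tp += 1
            score += tp / rank  # precision at this true-positive hit
    fp = len(pred_neighbors) - tp
    correctness = tp / (tp + fp) if pred_neighbors else 0.0
    return score * correctness / len(gt_neighbors)

# Adding confident false positives lowers the score via the correctness factor.
print(top_dagger_for_vertex([1, 2], {1, 2}))        # clean prediction -> 1.0
print(top_dagger_for_vertex([1, 2, 9, 8], {1, 2}))  # padded with false positives -> 0.5
```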
CONCLUSION
In this paper, we propose a simple yet strong pipeline for driving topology reasoning, named TopoMLP. It starts from the significant observation that the reasoning performance is limited by the detection scores. Therefore, we first focus on designing two powerful detectors for 3D lane detection and 2D traffic detection, respectively. As for topology reasoning, combining the appreciated position embedding with an elegant MLP network is enough to achieve impressive performance. TopoMLP is the 1st solution for the 1st OpenLane Topology in Autonomous Driving Challenge. We hope our work opens up new insights into exploring driving topology reasoning.
Figure 1: Left: The evaluation of topology when using ground-truth topology (represented as "Topo GT") to replace predicted topology while retaining the detection results. Right: The illustration of how missing detections (represented by dashed lines) influence topology reasoning. They uncover an essential truth: the fundamental detections are paramount to the topology reasoning.
Figure 2: The overall architecture of TopoMLP. The lane decoder depicts each centerline as a Bézier curve for a smooth representation. The traffic decoder is optionally enhanced by additional YOLOv8 proposals. The prediction of lane-traffic (LT) and lane-lane (LL) topology is accomplished by an MLP with position embedding. "Topology" means an operation in § 3.3.
Figure 3: Qualitative results for lane detection and lane-lane topology of TopoMLP. Given the multi-view images, our method can predict most lanes and connect them correctly under various challenges, like occluded lanes and complicated intersections. The green lanes are ground truth and the red lanes are predictions, which are projected into the images and the BEV map.
Figure 4: Traffic detection and lane-traffic topology from TopoMLP. Our method can detect the traffic elements in the front view and associate them with lanes. Green represents GT, while red means our prediction. Traffic predictions are marked with different colors in terms of category.
Figure 5: The illustration of the loophole in the TOP metric. Enhancing the prediction scores ranks true positives ahead of some false positives from unmatched instances, further improving precision.
Table 1: Performance comparison with state-of-the-art methods on the OpenLane-V2 subset A set. Results for existing methods are from TopoNet. TopoMLP is trained end-to-end, while '*' indicates using extra YOLOv8 proposals. The best is in bold.

Method | Backbone | Epoch | DET_l | DET_t | TOP_ll | TOP_lt | OLS
STSU (Can et al., 2021) | ResNet-50 | 24 | 12.7 | 43.0 | 0.5 | 15.1 | 25.4
VectorMapNet (Liu et al., 2023a) | ResNet-50 | 24 | 11.1 | 41.7 | 0.4 | 5.9 | 20.8
MapTR (Liao et al., 2022) | ResNet-50 | 24 | 17.7 | 43.5 | 1.1 | 10.4 | 26.0
TopoNet (Li et al., 2023) | ResNet-50 | 24 | 28.5 | 48.1 | 4.1 | 20.8 | 35.6
TopoMLP | ResNet-50 | 24 | 28.3 | 50.0 | 7.2 | 22.8 | 38.2
TopoMLP* | ResNet-50 | 24 | 28.8 | 53.3 | 7.8 | 30.1 | 41.2
TopoMLP | VOV | 24 | 29.7 | 52.1 | 7.9 | 25.6 | 40.1
TopoMLP | Swin-B | 24 | 30.7 | 54.3 | 9.5 | 28.3 | 42.2
TopoMLP* | Swin-B | 24 | 30.0 | 55.8 | 9.4 | 31.7 | 43.3
TopoMLP | Swin-B | 48 | 32.5 | 53.5 | 11.9 | 29.4 | 43.7

Table 2: Performance comparison with state-of-the-art methods on the OpenLane-V2 subset B set. Results for existing methods are from TopoNet.

Method | Backbone | Epoch | DET_l | DET_t | TOP_ll | TOP_lt | OLS
STSU (Can et al., 2021) | ResNet-50 | 24 | 8.2 | 43.9 | 0.0 | 9.4 | 21.2
VectorMapNet (Liu et al., 2023a) | ResNet-50 | 24 | 3.5 | 49.1 | 0.0 | 1.4 | 16.3
MapTR (Liao et al., 2022) | ResNet-50 | 24 | 15.2 | 54.0 | 0.5 | 6.1 | 25.2
TopoNet (Li et al., 2023) | ResNet-50 | 24 | 24.3 | 55.0 | 2.5 | 14.2 | 33.2
TopoMLP | ResNet-50 | 24 | 26.6 | 58.3 | 7.6 | 17.8 | 38.7
TopoMLP | VOV | 24 | 29.6 | 62.2 | 8.9 | 20.5 | 41.7
TopoMLP | Swin-B | 24 | 32.3 | 65.5 | 10.5 | 23.2 | 44.6
Table 3: The ablation studies of different components in the proposed TopoMLP. The experiments are conducted on OpenLane-V2 subset A. We bold the best scores.

(a) Different number of lane queries:
Lane Queries | DET_l | DET_t | TOP_ll | TOP_lt | OLS
200 | 28.2 | 49.9 | 6.1 | 20.2 | 36.9
300 | 28.3 | 50.0 | 7.2 | 22.8 | 38.2
500 | 27.9 | 49.6 | 7.3 | 22.4 | 38.0

(b) Different number of control points:
Control Points | DET_l | DET_t | TOP_ll | TOP_lt | OLS
3 | 26.6 | 49.9 | 7.0 | 21.5 | 37.3
4 | 28.3 | 50.0 | 7.2 | 22.8 | 38.2
5 | 27.8 | 48.5 | 6.6 | 21.5 | 37.1

(c) LL Topo is short for lane-lane topology. "w/o position" means removing the lane coordinate embedding; "w/o L1 loss" means removing the supervision of intersection points:
LL Topo | DET_l | DET_t | TOP_ll | TOP_lt | OLS
Ours | 28.3 | 50.0 | 7.2 | 22.8 | 38.2
w/o position | 27.9 | 50.9 | 6.9 | 21.6 | 37.9
w/o L1 loss | 26.6 | 50.9 | 6.5 | 22.1 | 37.5

(d) LT Topo is short for lane-traffic topology. "w/o transform" means removing the view transformation matrix on the lane feature; "only box" means using the bounding box as the traffic representation:
LT Topo | DET_l | DET_t | TOP_ll | TOP_lt | OLS
Ours | 28.3 | 50.0 | 7.2 | 22.8 | 38.2
w/o transform | 28.4 | 49.3 | 7.2 | 21.4 | 37.7
w/ only box | 28.2 | 49.6 | 7.1 | 22.0 | 37.8
The official codes we adopt are available at https://github.com/ultralytics/ultralytics.
José M. Álvarez and Antonio M. López. Road detection based on illuminant invariance. IEEE T-ITS, 2010.
Wele Gedara Chaminda Bandara, Jeya Maria Jose Valanarasu, and Vishal M. Patel. Spin road mapper: Extracting roads from aerial images via spatial and interaction space graph reasoning for autonomous driving. In ICRA, 2022.
Anil Batra, Suriya Singh, Guan Pang, Saikat Basu, C. V. Jawahar, and Manohar Paluri. Improved road connectivity by joint learning of orientation and segmentation. In CVPR, 2019.
Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuScenes: A multimodal dataset for autonomous driving. In CVPR, 2020.
Yigit Baran Can, Alexander Liniger, Danda Pani Paudel, and Luc Van Gool. Structured bird's-eye-view traffic scene understanding from onboard images. In ICCV, 2021.
Yigit Baran Can, Alexander Liniger, Danda Pani Paudel, and Luc Van Gool. Topology preserving local road network estimation from single onboard camera image. In CVPR, 2022.
Yigit Baran Can, Alexander Liniger, Danda Pani Paudel, and Luc Van Gool. Improving online lane graph extraction by object-lane clustering. In ICCV, 2023.
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020.
Sergio Casas, Abbas Sadat, and Raquel Urtasun. MP3: A unified model to map, perceive, predict and plan. In CVPR, 2021.
Yuning Chai, Benjamin Sapp, Mayank Bansal, and Dragomir Anguelov. MultiPath: Multiple probabilistic anchor trajectory hypotheses for behavior prediction. In CoRL, 2020.
Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng. Learning to detect human-object interactions. In WACV, 2018.
Li Chen, Chonghao Sima, Yang Li, Zehan Zheng, Jiajie Xu, Xiangwei Geng, Hongyang Li, Conghui He, Jianping Shi, and Yu Qiao. PersFormer: 3D lane detection via perspective transformer and the OpenLane benchmark. In ECCV, 2022.
Hang Chu, Daiqing Li, David Acuna, Amlan Kar, Maria Shugrina, Xinkai Wei, Ming-Yu Liu, Antonio Torralba, and Sanja Fidler. Neural turtle graphics for modeling city road layouts. In ICCV, 2019.
Wenjie Ding, Limeng Qiao, Xi Qiu, and Chi Zhang. PivotNet: Vectorized pivot learning for end-to-end HD map construction. In ICCV, 2023.
Chen Gao, Yuliang Zou, and Jia-Bin Huang. iCAN: Instance-centric attention network for human-object interaction detection. In BMVC, 2018.
Noa Garnett, Rafi Cohen, Tomer Pe'er, Roee Lahav, and Dan Levi. 3D-LaneNet: End-to-end 3D multiple lane detection. In ICCV, 2019.
Yuliang Guo, Guang Chen, Peitao Zhao, Weide Zhang, Jinghao Miao, Jingao Wang, and Tae Eun Choe. Gen-LaneNet: A generalized and scalable approach for 3D lane detection. In ECCV, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
Songtao He and Hari Balakrishnan. Lane-level street map extraction from aerial imagery. In WACV, 2022.
Songtao He, Favyen Bastani, Satvat Jagwani, Mohammad Alizadeh, Hari Balakrishnan, Sanjay Chawla, Mohamed M. Elshrif, Samuel Madden, and Mohammad Amin Sadeghi. Sat2Graph: Road graph extraction through graph-tensor encoding. In ECCV, 2020.
Namdar Homayounfar, Wei-Chiu Ma, Justin Liang, Xinyu Wu, Jack Fan, and Raquel Urtasun. DAGMapper: Learning to map by discovering lane topology. In ICCV, 2019.
Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, and Wenhai Wang. Planning-oriented autonomous driving. In CVPR, 2023.
M. Esat Kalfaoglu, Halil Ibrahim Ozturk, Ozsel Kilinc, and Alptekin Temizel. TopoMask: Instance-mask-based formulation for the road topology problem via transformer-based architecture. arXiv preprint arXiv:2306.05419, 2023.
Tristan Langenberg, Timo Lüddecke, and Florentin Wörgötter. Deep metadata fusion for traffic light to lane assignment. IEEE RA-L, 2019.
Youngwan Lee, Joong-Won Hwang, Sangrok Lee, Yuseok Bae, and Jongyoul Park. An energy and GPU-computation efficient backbone network for real-time object detection. In CVPRW, 2019.
Qi Li, Yue Wang, Yilun Wang, and Hang Zhao. HDMapNet: An online HD map construction and evaluation framework. In ICRA, 2022a.
Tianyu Li, Li Chen, Huijie Wang, Yang Li, Jiazhi Yang, Xiangwei Geng, Shengyin Jiang, Yuting Wang, Hang Xu, Chunjing Xu, Junchi Yan, Ping Luo, and Hongyang Li. Graph-based topology reasoning for driving scenes. arXiv preprint arXiv:2304.05277, 2023.
Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Yu Qiao, and Jifeng Dai. BEVFormer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In ECCV, 2022b.
Bencheng Liao, Shaoyu Chen, Xinggang Wang, Tianheng Cheng, Qian Zhang, Wenyu Liu, and Chang Huang. MapTR: Structured modeling and learning for online vectorized HD map construction. arXiv preprint arXiv:2208.14437, 2022.
Bencheng Liao, Shaoyu Chen, Bo Jiang, Tianheng Cheng, Qian Zhang, Wenyu Liu, Chang Huang, and Xinggang Wang. Lane graph as path: Continuity-preserving path-wise modeling for online lane graph construction. arXiv preprint arXiv:2303.08815, 2023.
Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, 2017.
Yicheng Liu, Tianyuan Yuan, Yue Wang, Yilun Wang, and Hang Zhao. VectorMapNet: End-to-end vectorized HD map learning. In ICML, 2023a.
Yingfei Liu, Tiancai Wang, Xiangyu Zhang, and Jian Sun. PETR: Position embedding transformation for multi-view 3D object detection. In ECCV, 2022.
Yingfei Liu, Junjie Yan, Fan Jia, Shuailin Li, Aqi Gao, Tiancai Wang, Xiangyu Zhang, and Jian Sun. PETRv2: A unified framework for 3D perception from multi-camera images. In ICCV, 2023b.
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Mingjie Lu, Yuanxian Huang, Ji Liu, Jinzhang Peng, Lu Tian, and Ashish Sirasao. Separated RoadTopoFormer. arXiv preprint arXiv:2307.01557, 2023.
Lina Maria Paz, Pedro Piniés, and Paul Newman. A variational approach to online road and path segmentation with monocular vision. In ICRA, 2015.
Limeng Qiao, Wenjie Ding, Xi Qiu, and Chi Zhang. End-to-end vectorized HD-map construction with piecewise Bézier curve. In CVPR, 2023.
Ceryen Tan, Tsai Hong, Tommy Chang, and Michael Shneier. Color model-based real-time learning for road following. IEEE ITS, 2006.
Huijie Wang, Tianyu Li, Yang Li, Li Chen, Chonghao Sima, Zhenbo Liu, Yuting Wang, Shengyin Jiang, Peijin Jia, Bangjun Wang, Feng Wen, Hang Xu, Ping Luo, Junchi Yan, Wei Zhang, and Hongyang Li. OpenLane-V2: A topology reasoning benchmark for scene understanding in autonomous driving. arXiv preprint arXiv:2304.10440, 2023.
Tiancai Wang, Rao Muhammad Anwer, Muhammad Haris Khan, Fahad Shahbaz Khan, Yanwei Pang, Ling Shao, and Jorma Laaksonen. Deep contextual attention for human-object interaction detection. In ICCV, 2019.
Benjamin Wilson, William Qi, Tanmay Agarwal, John Lambert, Jagjeet Singh, Siddhesh Khandelwal, Bowen Pan, Ratnesh Kumar, Andrew Hartnett, and Jhony Kaesemodel Pontes. Argoverse 2: Next generation datasets for self-driving perception and forecasting. In NeurIPS, 2021.
Zhenhua Xu, Yuxuan Liu, Yuxiang Sun, Ming Liu, and Lujia Wang. CenterLineDet: Road lane centerline graph detection with vehicle-mounted sensors by transformer for high-definition map creation. In ICRA, 2023.
Fan Yan, Ming Nie, Xinyue Cai, Jianhua Han, Hang Xu, Zhen Yang, Chaoqiang Ye, Yanwei Fu, Michael Bi Mi, and Li Zhang. ONCE-3DLanes: Building monocular 3D lane detection. In CVPR, 2022.
Yuang Zhang, Tiancai Wang, and Xiangyu Zhang. MOTRv2: Bootstrapping end-to-end multi-object tracking by pretrained object detectors. In CVPR, 2023.
Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable DETR: Deformable transformers for end-to-end object detection. In ICLR, 2021. |
52,890,982 | ADVERSARIAL AUDIO SYNTHESIS | While Generative Adversarial Networks (GANs) have seen wide success at the problem of synthesizing realistic images, they have seen little application to audio generation. Unlike for images, a barrier to success is that the best discriminative representations for audio tend to be non-invertible, and thus cannot be used to synthesize listenable outputs. In this paper we introduce WaveGAN, a first attempt at applying GANs to unsupervised synthesis of raw-waveform audio. Our experiments demonstrate that WaveGAN can produce intelligible words from a small vocabulary of speech, and can also synthesize audio from other domains such as drums, bird vocalizations, and piano. Qualitatively, we find that human judges prefer the sound quality of generated examples from WaveGAN over those from a method which naïvely apply GANs on image-like audio feature representations. | [
3338083,
26100519,
3568073,
11758569,
2187805
] | ADVERSARIAL AUDIO SYNTHESIS
Chris Donahue
Computer Science
Julian Mcauley
Computer Science
UC San Diego Departments of Music
ADVERSARIAL AUDIO SYNTHESIS
While Generative Adversarial Networks (GANs) have seen wide success at the problem of synthesizing realistic images, they have seen little application to audio generation. Unlike for images, a barrier to success is that the best discriminative representations for audio tend to be non-invertible, and thus cannot be used to synthesize listenable outputs. In this paper we introduce WaveGAN, a first attempt at applying GANs to unsupervised synthesis of raw-waveform audio. Our experiments demonstrate that WaveGAN can produce intelligible words from a small vocabulary of speech, and can also synthesize audio from other domains such as drums, bird vocalizations, and piano. Qualitatively, we find that human judges prefer the sound quality of generated examples from WaveGAN over those from a method which naïvely applies GANs to image-like audio feature representations.
INTRODUCTION
Synthesizing audio for specific domains has many practical applications in music and film production. Musicians and Foley artists scour large databases of sound effects to find particular audio recordings suitable for specific scenarios. This strategy is painstaking and may result in a negative outcome if the ideal sound effect does not exist in the library. A better approach might allow a sound artist to explore a compact latent space of audio, taking broad steps to find the types of sounds they are looking for (e.g. footsteps) and making small adjustments to latent variables to fine-tune (e.g. a large boot lands on a gravel path). However, audio signals have high temporal resolution, and strategies that learn such a latent representation must perform effectively in high dimensions.
Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are one such unsupervised strategy for mapping low-dimensional latent vectors to high-dimensional data. The potential advantages of GAN-based approaches to audio synthesis are numerous. Firstly, GANs could be useful for data augmentation (Shrivastava et al., 2017) in data-hungry speech recognition systems. Secondly, GANs could enable rapid and straightforward sampling of large amounts of audio. Furthermore, while the usefulness of generating static images with GANs is arguable, there are many applications (e.g. Foley) for which generating sound effects is immediately useful. But despite their increasing fidelity at synthesizing images (Berthelot et al., 2017; Karras et al., 2018), GANs have yet to be demonstrated capable of synthesizing audio in an unsupervised setting.
A naïve solution for applying image-generating GANs to audio would be to operate them on image-like spectrograms, i.e., time-frequency representations of audio. This practice of bootstrapping image recognition algorithms for audio tasks is commonplace in the discriminative setting (Hershey et al., 2017). In the generative setting, however, this approach is problematic as the most perceptually-informed spectrograms are non-invertible, and hence cannot be listened to without lossy estimations (Griffin & Lim, 1984) or learned inversion models (Shen et al., 2018).
Recent work (Oord et al., 2016) has shown that neural networks can be trained with autoregression to operate on raw audio. Such approaches are attractive as they dispense with engineered acoustic feature representations. However, unlike with GANs, the autoregressive setting results in slow generation, as output audio samples must be fed back into the model one at a time.
In this work, we investigate both waveform and spectrogram strategies for generating slices of audio with GANs. For our spectrogram approach (SpecGAN), we first design an appropriate spectrogram representation that allows for approximate inversion, and bootstrap the two-dimensional deep convolutional GAN (DCGAN) method to operate on these spectrograms. In WaveGAN, our waveform approach, we flatten the SpecGAN architecture to operate in one dimension, resulting in a model with the same number of parameters and numerical operations as its two-dimensional analog. With WaveGAN, we provide both a starting point for practical audio synthesis with GANs and a recipe for modifying other image generation methods to operate on waveforms.
We primarily envisage our method being applied to the generation of short sound effects suitable for use in music and film. For example, we train a WaveGAN on drums, resulting in a procedural drum machine designed to assist electronic musicians (web demo bit.ly/2xL0wJQ).
While our objective is sound effect generation (e.g. generating drum sounds), human evaluation for these tasks would require expert listeners. Therefore, we also consider a speech benchmark, facilitating straightforward assessment by human annotators. Specifically, we explore a task where success can easily be judged by any English speaker: generating spoken digits "zero" through "nine".
Though our evaluation focuses on a speech generation task, we note that it is not our goal to develop a text-to-speech synthesizer. Instead, our investigation concerns whether unsupervised strategies can learn the semantic modes (e.g. words in speech data) implicit in high-dimensional audio signals, rather than being conditioned on them. Our experiments on speech demonstrate that both WaveGAN and SpecGAN can generate spoken digits that are intelligible to humans. On criteria of sound quality and speaker diversity, human judges indicate a preference for the audio generated by WaveGAN compared to that from SpecGAN.
GAN PRELIMINARIES
GANs learn mappings from low-dimensional latent vectors $z \in \mathcal{Z}$, i.i.d. samples from a known prior $P_Z$, to points in the space of natural data $\mathcal{X}$. In their original formulation (Goodfellow et al., 2014), a generator $G : \mathcal{Z} \rightarrow \mathcal{X}$ is pitted against a discriminator $D : \mathcal{X} \rightarrow [0, 1]$ in a two-player minimax game. $G$ is trained to minimize the following value function, while $D$ is trained to maximize it:

$$V(D, G) = \mathbb{E}_{x \sim P_X}[\log D(x)] + \mathbb{E}_{z \sim P_Z}[\log(1 - D(G(z)))]. \tag{1}$$

In other words, $D$ is trained to determine if an example is real or fake, and $G$ is trained to fool the discriminator into thinking its output is real. Goodfellow et al. (2014) demonstrate that their proposed training algorithm for Equation 1 equates to minimizing the Jensen-Shannon divergence between $P_X$, the data distribution, and $P_G$, the implicit distribution of the generator when $z \sim P_Z$.
In their original formulation, GANs are notoriously difficult to train, and prone to catastrophic failure cases such as mode collapse, in which the generator outputs a single result for all $z$. Arjovsky et al. (2017) demonstrate that, under certain conditions, minimizing the Jensen-Shannon divergence between $P_X$ and $P_G$ with gradient descent is ill-posed, as it is discontinuous and provides no usable gradient. As a smoother alternative, they suggest minimizing the Wasserstein-1 distance between distributions
$$W(P_X, P_G) = \sup_{\|f\|_L \leq 1} \mathbb{E}_{x \sim P_X}[f(x)] - \mathbb{E}_{x \sim P_G}[f(x)] \tag{2}$$

where $\|f\|_L \leq 1$ denotes the family of functions $f : \mathcal{X} \rightarrow \mathbb{R}$ that are 1-Lipschitz.
To minimize Wasserstein distance, they suggest a GAN training algorithm (WGAN), similar to that of Goodfellow et al. (2014), for the following value function:

$$V_{\mathrm{WGAN}}(D_w, G) = \mathbb{E}_{x \sim P_X}[D_w(x)] - \mathbb{E}_{z \sim P_Z}[D_w(G(z))]. \tag{3}$$
With this formulation, $D_w : \mathcal{X} \rightarrow \mathbb{R}$ is not trained to identify examples as real or fake, but instead is trained as a function that assists in computing the Wasserstein distance. Arjovsky et al. (2017) suggest weight clipping as a means of enforcing that $D_w$ is 1-Lipschitz. As an alternative strategy, Gulrajani et al. (2017) replace weight clipping with a gradient penalty (WGAN-GP) that also enforces the constraint. They demonstrate that their WGAN-GP strategy can successfully train a variety of model configurations where other GAN losses fail.
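A compact sketch of the WGAN-GP penalty term follows: the critic's gradient norm is evaluated at random interpolates between real and fake batches and pushed toward 1. The penalty weight of 10 is the value commonly used with WGAN-GP and is an assumption here, as is the audio tensor layout.

```python
import torch

def gradient_penalty(critic, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """WGAN-GP penalty: ((||grad critic(x_hat)||_2 - 1) ** 2) at interpolates x_hat.

    real, fake: (B, C, T) waveform batches.
    """
    eps = torch.rand(real.size(0), 1, 1, device=real.device)  # per-example mix factor
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(
        outputs=scores.sum(), inputs=x_hat, create_graph=True
    )[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

# Usage inside the critic update (lambda = 10 assumed):
# d_loss = fake_scores.mean() - real_scores.mean() + 10.0 * gradient_penalty(D, real, fake)
```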
WAVEGAN
We motivate our design choices for WaveGAN by first highlighting the different types of structure found in audio versus images.
INTRINSIC DIFFERENCES BETWEEN AUDIO AND IMAGES
One way to illustrate the differences between audio and images is by examining the axes along which these types of data vary most substantially, i.e. by principal component analysis. In Figure 1, we show the first eight principal components for patches from natural images and slices from speech. While the principal components of images generally capture intensity, gradient, and edge characteristics, those from audio form a periodic basis that decompose the audio into constituent frequency bands. In general, natural audio signals are more likely to exhibit periodicity than natural images.
As a consequence, correlations across large windows are commonplace in audio. For example, in a waveform sampled at 16 kHz, a 440 Hz sinusoid (the musical note A4) takes over 36 samples to complete a single cycle. This suggests that filters with larger receptive fields are needed to process raw audio. This same intuition motivated Oord et al. (2016) in their design of WaveNet, which uses dilated convolutions to exponentially increase the model's effective receptive field with layer depth.
WAVEGAN ARCHITECTURE
We base our WaveGAN architecture on DCGAN (Radford et al., 2016), which popularized the usage of GANs for image synthesis. The DCGAN generator uses the transposed convolution operation (Figure 2) to iteratively upsample low-resolution feature maps into a high-resolution image. Motivated by our above discussion, we modify this transposed convolution operation to widen its receptive field. Specifically, we use longer one-dimensional filters of length 25 instead of two-dimensional filters of size 5x5, and we upsample by a factor of 4 instead of 2 at each layer (Figure 2). We modify the discriminator in a similar way, using length-25 filters in one dimension and increasing stride from 2 to 4. These changes result in WaveGAN having the same number of parameters, numerical operations, and output dimensionality as DCGAN.
Because DCGAN outputs 64x64 pixel images (equivalent to just 4096 audio samples), we add one additional layer to the model, resulting in 16384 samples, slightly more than one second of audio at 16 kHz. This length is already sufficient for certain sound domains (e.g. sound effects, voice commands), and future work adapting megapixel image generation techniques (Karras et al., 2018) could expand the output length to more than a minute. We requantize the real data from its 16-bit integer representation (linear pulse code modulation) to 32-bit floating point, and our generator similarly outputs floating-point waveforms. A complete description of our model is in Appendix D.
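To make the flattened architecture concrete, here is a rough PyTorch sketch of a WaveGAN-style generator: each layer is a one-dimensional transposed convolution with length-25 filters and stride 4, upsampling a latent vector to 16384 samples. The channel widths follow the DCGAN doubling convention and are our assumption, not a verbatim copy of the model in Appendix D.

```python
import torch
import torch.nn as nn

class WaveGANGenerator(nn.Module):
    """Latent z (100,) -> waveform (1, 16384) via five stride-4 transposed convs."""

    def __init__(self, z_dim: int = 100, model_dim: int = 64):
        super().__init__()
        d = model_dim
        self.project = nn.Linear(z_dim, 16 * d * 16)  # seed a length-16 feature map
        layers = []
        channels = [16 * d, 8 * d, 4 * d, 2 * d, d]
        for c_in, c_out in zip(channels, channels[1:] + [1]):
            # Length-25 filters, stride 4: each layer upsamples 4x (16 -> ... -> 16384).
            layers += [nn.ConvTranspose1d(c_in, c_out, kernel_size=25, stride=4,
                                          padding=11, output_padding=1)]
            layers += [nn.ReLU()] if c_out != 1 else [nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = self.project(z).view(z.size(0), -1, 16)  # (B, 16*d, 16)
        return self.net(x)                           # (B, 1, 16384)

wave = WaveGANGenerator()(torch.randn(2, 100))  # (2, 1, 16384)
```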
In summary, we outline our modifications to the DCGAN method which result in WaveGAN. This straightforward recipe already produces reasonable audio, and further contributions outlined below serve to refine results.
1. Flatten 2D convolutions into 1D (e.g. 5x5 2D convolution becomes length-25 1D).
2. Increase the stride factor for all convolutions (e.g. stride 2x2 becomes stride 4).
3. Train using the WGAN-GP strategy.

Generative image models that upsample by transposed convolution (such as DCGAN) are known to produce characteristic "checkerboard" artifacts in images (Odena et al., 2016). Periodic patterns are less common in images (Section 3.1), and thus the discriminator can learn to reject images that contain them. For audio, analogous artifacts are perceived as pitched noise which may overlap with frequencies commonplace in the real data, making the discriminator's objective more challenging. However, the artifact frequencies will always occur at a particular phase, allowing the discriminator to learn a trivial policy to reject generated examples. This may inhibit the overall optimization problem.
PHASE SHUFFLE
To prevent the discriminator from learning such a solution, we propose the phase shuffle operation with hyperparameter n. Phase shuffle randomly perturbs the phase of each layer's activations by −n to n samples before input to the next layer ( Figure 3). We apply phase shuffle only to the discriminator, as the latent vector already provides the generator a mechanism to manipulate the phase of a resultant waveform. Intuitively speaking, we want the discriminator to be invariant to the phase of the waveform it is classifying.
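A minimal NumPy sketch of phase shuffle is given below, following the reflection-based filling depicted in Figure 3. This is our own rendering of the operation; whether the random shift is shared across feature maps within a layer is a design detail, and this sketch applies one shift per example per layer.

```python
import numpy as np

def phase_shuffle(x, n):
    """Randomly perturb the phase of a batch of 1D feature maps by
    Uniform[-n, n] samples, filling missing samples by reflection.

    x: array of shape (batch, channels, time)
    """
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        k = np.random.randint(-n, n + 1)  # one shift per example per layer
        # Reflect-pad by |k| on both sides, then crop a window of the
        # original length, offset according to the shift direction.
        padded = np.pad(x[i], ((0, 0), (abs(k), abs(k))), mode="reflect")
        start = abs(k) - k  # 0 if shifting right, 2|k| if shifting left
        out[i] = padded[:, start:start + x.shape[2]]
    return out
```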
SPECGAN: GENERATING SEMI-INVERTIBLE SPECTROGRAMS
While a minority of recent research on discriminative audio classification has used raw audio input (Sainath et al., 2015; Lee et al., 2017), most approaches operate on spectrogram representations of audio. A generative model may also benefit from operating in such a time-frequency space. However, commonly-used representations in the discriminative setting discard phase information, rendering them uninvertible.
With SpecGAN, our frequency-domain audio generation model, we design a spectrogram representation that is both well-suited to GANs designed for image generation and can be approximately inverted. Additionally, to facilitate direct comparison, our representation is designed to use the same dimensionality per unit of time as WaveGAN (16384 samples yield a 128x128 spectrogram).
To process audio into suitable spectrograms, we first perform the short-time Fourier transform with 16 ms windows and 8 ms stride, resulting in 128 frequency bins linearly spaced from 0 to 8 kHz.
We take the magnitude of the resultant spectra and scale amplitude values logarithmically to better align with human perception. We then normalize each frequency bin to have zero mean and unit variance, and discard the highest frequency bin. Finally, we clip the spectra to 3 standard deviations and rescale to [−1, 1]. Once our dataset has been processed into this format, we operate the DCGAN algorithm on the resultant spectra. To render the resultant generated spectrograms as waveforms, we first invert the steps of spectrogram preprocessing described above, resulting in linear-amplitude magnitude spectra. We then employ the iterative Griffin-Lim algorithm (Griffin & Lim, 1984) with 16 iterations to estimate phase and produce 16384 audio samples.
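The preprocessing steps above can be sketched as follows with SciPy. This is our own rendering of the pipeline; note that the per-bin normalization statistics are estimated over the whole training set in the procedure described, whereas this sketch normalizes a single clip for brevity.

```python
import numpy as np
from scipy.signal import stft

def audio_to_specgan_input(x, fs=16000):
    """Convert a 16384-sample waveform into a 128x128 SpecGAN spectrogram."""
    # Short-time Fourier transform: 16 ms windows (256 samples), 8 ms stride
    _, _, Z = stft(x, fs=fs, nperseg=256, noverlap=128)  # 129 bins x ~129 frames
    mag = np.log(np.abs(Z) + 1e-6)                       # log-scaled magnitude

    # Per-bin normalization (per clip here; the described procedure uses
    # statistics estimated over the entire training set)
    mag = (mag - mag.mean(axis=1, keepdims=True)) / (mag.std(axis=1, keepdims=True) + 1e-6)

    mag = mag[:128, :128]             # drop top frequency bin, crop to 128 frames
    return np.clip(mag, -3, 3) / 3.0  # clip to 3 standard deviations, rescale to [-1, 1]
```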
EXPERIMENTAL PROTOCOL
To facilitate human evaluation, our experimentation focuses on the Speech Commands Dataset (Warden, 2018). This dataset consists of many speakers recording individual words in uncontrolled recording conditions. We explore a subset consisting of the spoken digits "zero" through "nine" and refer to this subset as the Speech Commands Zero Through Nine (SC09) dataset. These ten words encompass many phonemes and two consist of multiple syllables. Each recording is one second in length, and we do not attempt to align the words in time. There are 1850 utterances of each word in the training set, resulting in 5.3 hours of speech. The heterogeneity of alignments, speakers, and recording conditions make this a challenging dataset for generative modeling.
Our baseline configuration for WaveGAN excludes phase shuffle. We compare this to the performance of WaveGAN with phase shuffle (n ∈ {2, 4}) and a variant of WaveGAN which uses nearest-neighbor upsampling rather than transposed convolution (Odena et al., 2016). Hoping to reduce noisy artifacts, we also experiment with adding a wide (length-512) post-processing filter to the output of the generator and learning its parameters with the rest of the generator variables (details in Appendix B). We use the WGAN-GP algorithm for all experiments, finding it to produce reasonable results where other GAN losses (e.g., Mao et al., 2017) failed. We compare the performance of these configurations to that of SpecGAN.
We also perform experiments on four other datasets with different characteristics (Figure 4):

1. Drum sound effects (0.7 hours): drum samples for kicks, snares, toms, and cymbals
2. Bird vocalizations (12.2 hours): in-the-wild recordings of many species (Boesman, 2018)
3. Piano (0.3 hours): a professional performer playing a variety of Bach compositions
4. Large-vocabulary speech (TIMIT) (2.4 hours): multiple speakers, clean recordings (Garofolo et al., 1993)

We train our networks using batches of size 64 on a single NVIDIA P100 GPU. During our quantitative evaluation of SC09 (discussed below), our WaveGAN networks converge by their early stopping criterion (inception score) within four days (200k iterations, equivalent to 700 epochs), and produce speech-like audio within the first hour of training. Our SpecGAN networks converge more quickly, within two days (175 epochs). On the other four datasets, we train WaveGAN for 200k iterations, representing nearly 300 epochs for the largest dataset. Unlike with autoregressive methods (Oord et al., 2016; Mehri et al., 2017), generation with WaveGAN is fully parallel and can produce an hour of audio in less than two seconds. We list all hyperparameters in Appendix E.
EVALUATION METHODOLOGY
Evaluation of generative models is a fraught topic. Theis et al. (2016) demonstrate that quantitative measures of sample quality are poorly correlated with each other and human judgement. Accordingly, we use several quantitative evaluation metrics for hyperparameter validation and discussion, and also evaluate our most promising models with human judges.

INCEPTION SCORE

Salimans et al. (2016) propose the inception score, which uses a pre-trained Inception classifier (Szegedy et al., 2016) to measure both the diversity and semantic discriminability of generated images, finding that the measure correlates well with human judgement.
Given model scores $P(y \mid x)$ with marginal $P(y)$, the inception score is defined as $\exp(\mathbb{E}_x D_{KL}(P(y \mid x) \| P(y)))$, and is estimated over a large number of samples (e.g. 50k). For $n$ labels, this measure ranges from 1 to $n$, and is maximized when the model is completely confident about each prediction but predicts each label equally often. We will use this measure as our primary quantitative evaluation method and early stopping criterion.
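Computed from classifier posteriors, the score reduces to a few lines; the sketch below assumes `probs` is a (samples x classes) array of P(y|x) values from a trained classifier.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (num_samples, num_classes) array of classifier posteriors P(y|x)."""
    p_y = probs.mean(axis=0)  # marginal P(y), estimated over the sample
    # exp of the mean KL divergence D_KL(P(y|x) || P(y))
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return np.exp(kl.mean())
```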
To measure inception score, we train an audio classifier on SC09. Our classifier first computes a short-time Fourier transform of the input audio with 64 ms windows and 8 ms stride. This representation is projected to 128 frequency bins equally spaced on the Mel scale (Stevens et al., 1937) from 40 Hz to 7800 Hz. Amplitudes are scaled logarithmically and normalized so that each bin has zero mean and unit variance. We process this perceptually-informed representation with four layers of convolution and pooling, projecting the result to a softmax layer with 10 classes. We perform early stopping on the minimum negative log-likelihood of the validation set; the resultant model achieves 93% accuracy on the test set. Because this classifier observes spectrograms, our spectrogram-generating models may have a representational advantage over our waveform-generating models.
NEAREST NEIGHBOR COMPARISONS
Inception score has two trivial failure cases in which a poor generative model can achieve a high score. Firstly, a generative model that outputs a single example of each class with uniform probability will be assigned a high score. Secondly, a generative model that overfits the training data will achieve a high score simply by outputting examples on which the classifier was trained.
We use two indicator metrics to determine if a high inception score has been caused by either of these two undesirable cases. Our first indicator, |D|self, measures the average Euclidean distance of a set of 1k examples to their nearest neighbor within the set (other than itself). A higher |D|self indicates higher diversity amongst samples. Because measuring Euclidean distance in time-domain audio poorly represents human perception, we evaluate distances in the same frequency-domain representation as our classifier from Section 6.1.
Our second indicator, |D|train, measures the average Euclidean distance of 1k examples to their nearest neighbor in the real training data. If the generative model simply produces examples from the training set, this measure will be 0. We report |D|train and |D|self relative to those of the test set.
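Both indicators can be computed with a brute-force nearest-neighbor search, sketched below; `feats`, `gen_feats`, and `train_feats` are our placeholder names for the frequency-domain feature arrays described above.

```python
import numpy as np
from scipy.spatial.distance import cdist

def d_self(feats):
    """Mean distance of each example to its nearest neighbor within the set."""
    d = cdist(feats, feats)      # pairwise Euclidean distances
    np.fill_diagonal(d, np.inf)  # exclude trivial self-matches
    return d.min(axis=1).mean()

def d_train(gen_feats, train_feats):
    """Mean distance of generated examples to their nearest training example."""
    return cdist(gen_feats, train_feats).min(axis=1).mean()
```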
QUALITATIVE HUMAN JUDGEMENTS
While inception score is a useful metric for hyperparameter validation, our ultimate goal is to produce examples that are intelligible to humans. To this end, we measure the ability of human annotators on Amazon Mechanical Turk to label the generated audio. Using our best WaveGAN and SpecGAN models as measured by inception score, we generate random examples until we have 300 for each digit (as labeled by our classifier from Section 6.1). In batches of ten random examples, we ask annotators to label which digit they perceive in each example, and compute their accuracy with respect to the classifier's labels (random accuracy would be 10%). After the ten questions, each annotator is asked to assign subjective values of 1 through 5 for criteria of sound quality, ease of intelligibility, and speaker diversity. We report the accuracy and mean opinion scores in Table 1.
RESULTS AND DISCUSSION
Results for our evaluation appear in Table 1. We also evaluate our metrics on the real training data, the real test data, and a version of SC09 generated by a parametric speech synthesizer (Buchner, 2017). We also compare to SampleRNN (Mehri et al., 2017) and two public implementations of WaveNet (Oord et al., 2016), but neither method produced competitive results (none were stronger than our weakest baseline), and we excluded them from further evaluation. These autoregressive models have not previously been examined on small-vocabulary speech data, and their success at generating full words has only been demonstrated when conditioning on rich linguistic features. Sound examples for all experiments can be found at bit.ly/2xUNCIx.
While the maximum inception score for SC09 is 10, any score higher than the test set score of 8 should be seen as evidence that a generative model has overfit. Our best WaveGAN model uses phase shuffle with n = 2 and achieves an inception score of 4.7. To determine if phase shuffle was improving the learning procedure simply by slowing down training, we also tried using 50% dropout in the discriminator's activations with fixed masks across timesteps. Dropout resulted in a lower inception score compared to the baseline model.
Most experiments produced |D|self (diversity) values higher than that of the test data, and all experiments produced |D|train (distance from training data) values higher than that of the test data. While these measures indicate that our generative models produce examples with statistics that deviate from those of the real data, neither metric indicates that the models achieve high inception scores by the trivial solutions outlined in Section 6.2.
While examples from SpecGAN achieve a higher inception score (6.0) than those from our best WaveGAN model (4.7), human judges are able to label examples from the two models with similar accuracy (58% for WaveGAN vs. 66% for SpecGAN). However, on subjective criteria of sound quality and speaker diversity, humans indicate a preference for examples from WaveGAN. It is possible that the poor qualitative ratings for examples from SpecGAN are primarily caused by the noisy Griffin-Lim inversion procedure (Griffin & Lim, 1984) and not the generative process itself; investigation of more sophisticated inversion strategies is an avenue for future work. We see promise in both waveform and spectrogram audio generation with GANs; our study does not suggest a decisive winner.
Finally, we train our best WaveGAN and SpecGAN models (as measured by inception score on SC09) on the four other domains listed in Section 5. Results from these experiments appear in Figure 4. Somewhat surprisingly, we find that the frequency-domain spectra produced by WaveGAN (a time-domain method) are visually more consistent with the training data (e.g. in terms of sharpness) than those produced by SpecGAN. For drum sound effects, WaveGAN captures semantic modes such as kick and snare drums. On bird vocalizations, WaveGAN generates a variety of bird sounds but with more noise than the other domains. On piano, WaveGAN produces musically-consonant motifs that, as with the training data, represent a variety of key signatures and rhythmic patterns. For TIMIT, a large-vocabulary speech dataset with many speakers, WaveGAN produces speech-like babbling similar to results from unconditional autoregressive models (Oord et al., 2016).
RELATED WORK
Much of the work within generative modeling of audio is within the context of text-to-speech. Text-to-speech systems are primarily either concatenative or parametric. In concatenative systems, audio is generated by sequencing small, prerecorded portions of speech from a phonetically-indexed dictionary (Moulines & Charpentier, 1990; Hunt & Black, 1996). Parametric systems map text to salient parameters of speech, which are then synthesized by a vocoder (Dudley, 1939); see (Zen et al., 2009) for a comprehensive review. Some of these systems use learning-based approaches such as hidden Markov models (Yoshimura, 2002; Tokuda et al., 2013), and separately-trained neural network pipelines (Ling et al., 2015) to estimate speech parameters.
Recently, several researchers have investigated parametric speech synthesis with end-to-end neural network approaches that learn to produce vocoder features directly from text or phonetic embeddings (Arik et al., 2017; Ping et al., 2018; Shen et al., 2018). These vocoder features are synthesized to raw audio using off-the-shelf methods such as WORLD (Morise et al., 2016) and Griffin-Lim (Griffin & Lim, 1984), or trained neural vocoders (Shen et al., 2018; Ping et al., 2018). All of these methods are supervised: they are trained to map linguistic features to audio outputs.
Several approaches have explored unsupervised generation of raw audio. Oord et al. (2016) propose WaveNet, a convolutional model which learns to predict raw audio samples by autoregressive modeling. WaveNets conditioned on rich linguistic features have widely been deployed in text-to-speech systems, though they have not been demonstrated capable of generating cohesive words in the unconditional setting. Engel et al. (2017) pose WaveNet as an autoencoder to generate musical instrument sounds. Chung et al. (2014) and Mehri et al. (2017) both train recurrent autoregressive models which learn to predict raw audio samples. While autoregressive methods generally produce higher audio fidelity than WaveGAN, synthesis with WaveGAN is orders of magnitude faster.
The application of GANs (Goodfellow et al., 2014) to audio has so far been limited to supervised learning problems in combination with traditional loss functions. Pascual et al. (2017) apply GANs to raw audio speech enhancement. Their encoder-decoder approach combines the GAN objective with an L2 loss. Fan et al. (2017); Michelsanti & Tan (2017); Donahue et al. (2018) all use GANs in combination with unstructured losses to map spectrograms in one domain to spectrograms in another. Chen et al. (2017) use GANs to map musical performance images into spectrograms.
CONCLUSION
We present WaveGAN, the first application of GANs to unsupervised audio generation. WaveGAN is fully parallelizable and can generate hours of audio in only a few seconds. In its current form, WaveGAN can be used to augment sound libraries for multimedia production. In our future work we plan to extend WaveGAN to operate on variable-length audio and also explore a variety of label conditioning strategies. By providing a template for modifying image generation models to operate on audio, we hope that this work catalyzes future investigation of GANs for audio synthesis.
A CHECKERBOARD ARTIFACTS IN AUDIO VERSUS IMAGES
Generative models that upsample by transposed convolution are known to produce characteristic "checkerboard" artifacts in images (Odena et al., 2016), artifacts with particular spatial periodicities. The discriminator of image-generating GANs can learn to reject images with these artifacts because they are uncommon in real data (as discussed in Section 3.1). However, in the audio domain, the discriminator might not have such luxury as these artifacts correspond to frequencies which might rightfully appear in the real data.
To characterize analogous artifacts in WaveGAN, we measure its impulse response by randomly initializing it 1000 times and passing unit impulses to its first convolutional layer. In Figure 5, we plot the average of these responses in the frequency domain. The response has sharp peaks at linear multiples of the sample rates of each convolutional layer (250 Hz, 1 kHz, 4 kHz, etc.). This is in agreement with our informal observation of results from WaveGAN, which often have a pitched noise close to the musical note B (247 × 2^n Hz).
B LEARNED POST-PROCESSING FILTERS
We experiment with adding a post-processing filter to the generator, giving WaveGAN a simple mechanism to filter out undesirable frequencies created by the generative process. This filter has a long window (512 samples) allowing it to represent intricate transfer functions, and the weights of the filter are learned as part of the generator's parameters. In Figure 5, we compare the post-processing filters that WaveGAN learns for human speech and bird vocalizations. The filters boost signal in regions of the frequency spectrum that are most prominent in the real data domain, and introduce notches at bands that are artifacts of the generative procedure as discussed in the previous section.
B.1 UPSAMPLING PROCEDURE
Transposed convolution upsamples signals by inserting zeros in between samples and applying a learned filterbank. This operation introduces aliased frequencies, copies of pre-existing frequencies shifted by multiples of the new Nyquist rate, into the upsampled signal. While aliased frequencies are usually seen as undesirable artifacts of a bad upsampling procedure, in the generative setting their existence may be crucial for producing fine-grained details in the output. We experiment with three other upsampling strategies in WaveGAN: nearest-neighbor, linear and cubic interpolation, all of which attenuate aliased frequencies. In Figure 6, we compare these strategies visually. While nearest-neighbor upsampling resulted in similar audio output to transposed convolution, linear and cubic interpolation strategies resulted in qualitatively poor audio output (sound examples: bit.ly/2xUNCIx). We hypothesize that the aliased frequencies produced by upsampling convolutions may be more critical to audio generation than image generation.
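The zero-insertion step implicit in transposed convolution, and a simple interpolation-based alternative, can be sketched as follows (NumPy; in the actual layer a learned filterbank follows the zero insertion).

```python
import numpy as np

def zero_insertion_upsample(x, factor=4):
    """Insert zeros between samples (the implicit first step of transposed
    convolution); introduces aliased copies of existing frequencies."""
    up = np.zeros(len(x) * factor)
    up[::factor] = x
    return up

def linear_upsample(x, factor=4):
    """Linear interpolation; attenuates aliased frequencies."""
    t_out = np.arange(len(x) * factor) / factor
    return np.interp(t_out, np.arange(len(x)), x)
```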
C FELINE TURING TEST
As our results improved throughout the course of this research, our cats became quite intrigued by the synthetic bird vocalizations produced by WaveGAN (Figure 7). We found this to be encouraging evidence that our method was producing reasonably convincing audio.
D ARCHITECTURE DESCRIPTION
In Tables 2 and 3, we list the full architectures for our WaveGAN generator and discriminator respectively. In Tables 4 and 5, we list the same for SpecGAN. In these tables, n is the batch size, d modifies model size, and c is the number of channels in the examples. In all of our experiments in this paper, c = 1. All dense and convolutional layers include biases. No batch normalization is used in WaveGAN or SpecGAN.
E TRAINING HYPERPARAMETERS
In Table 6, we list the values of these and all other hyperparameters for our experiments, which constitute our out-of-the-box recommendations for WaveGAN and SpecGAN.
Figure 1: First eight principal components for 5x5 patches from natural images (left) versus those of length-25 audio slices from speech (right). Periodic patterns are unusual in natural images but a fundamental structure in audio.

Figure 2: Depiction of the transposed convolution operation for the first layers of the DCGAN (Radford et al., 2016) (left) and WaveGAN (right) generators. DCGAN uses small (5x5), two-dimensional filters while WaveGAN uses longer (length-25), one-dimensional filters and a larger upsampling factor. Both strategies have the same number of parameters and numerical operations.

Figure 3: At each layer of the WaveGAN discriminator, the phase shuffle operation perturbs the phase of each feature map by Uniform ∼ [−n, n] samples, filling in the missing samples (dashed outlines) by reflection. Here we depict all possible outcomes for a layer with four feature maps (n = 1).

Figure 4: Top: Random samples from each of the five datasets used in this study, illustrating the wide variety of spectral characteristics. Middle: Random samples generated by WaveGAN for each domain. WaveGAN operates in the time domain but results are displayed here in the frequency domain for visual comparison. Bottom: Random samples generated by SpecGAN for each domain.

Figure 5: (Top): Average impulse response for 1000 random initializations of the WaveGAN generator. (Bottom): Response of learned post-processing filters for speech and bird vocalizations. Post-processing filters reject frequencies corresponding to noise byproducts created by the generative procedure (top). The filter for speech boosts signal in prominent speech bands, while the filter for bird vocalizations (which are more uniformly-distributed in frequency) simply reduces noise presence.

Figure 6: Depiction of the upsampling strategy used by transposed convolution (zero insertion) and other strategies which mitigate aliasing: nearest neighbor, linear and cubic interpolation.

Figure 7: Compared to resting state, this cat's level of alertness increased when presented with bird vocalizations from WaveGAN and SpecGAN.
Table 1: Quantitative and qualitative (human study) results for SC09 experiments comparing real and generated data. A higher inception score suggests that semantic modes of the real data distribution have been captured. |D|self indicates the intra-dataset diversity relative to that of the real test data. |D|train indicates the distance between the dataset and the training set relative to that of the test data; a low value indicates a generative model that is overfit to the training data. Acc. is the overall accuracy of humans on the task of labeling class-balanced digits (random chance is 0.1). Sound quality, ease of intelligibility and speaker diversity are mean opinion scores (1-5); higher is better. The first three columns are quantitative metrics; Acc., Quality, Ease, and Diversity come from human judges.

Experiment               Inception score   |D|self   |D|train   Acc.   Quality   Ease   Diversity
Real (train)             9.18 ± 0.04       1.1       0.0        -      -         -      -
Real (test)              8.01 ± 0.24       1.0       1.0        0.95   3.9       3.9    3.5
Parametric               5.02 ± 0.06       0.7       1.1        -      -         -      -
WaveGAN                  4.12 ± 0.03       1.4       2.0        -      -         -      -
+ Phase shuffle n = 2    4.67 ± 0.01       0.8       2.3        0.58   2.3       2.8    3.2
+ Phase shuffle n = 4    4.54 ± 0.03       1.0       2.3        -      -         -      -
+ Nearest neighbor       3.77 ± 0.02       1.8       2.6        -      -         -      -
+ Post-processing        3.92 ± 0.03       1.4       2.9        -      -         -      -
+ Dropout                3.93 ± 0.03       1.0       2.6        -      -         -      -
SpecGAN                  6.03 ± 0.04       1.1       1.4        0.66   1.9       2.8    2.6
+ Phase shuffle n = 1    3.71 ± 0.03       0.8       1.6        -      -         -      -
Table 3: WaveGAN discriminator architecture

Operation               Kernel Size     Output Shape
Input x or G(z)         —               (n, 16384, c)
Conv1D (Stride=4)       (25, c, d)      (n, 4096, d)
LReLU (α = 0.2)         —               (n, 4096, d)
Phase Shuffle (n = 2)   —               (n, 4096, d)
Conv1D (Stride=4)       (25, d, 2d)     (n, 1024, 2d)
LReLU (α = 0.2)         —               (n, 1024, 2d)
Phase Shuffle (n = 2)   —               (n, 1024, 2d)
Conv1D (Stride=4)       (25, 2d, 4d)    (n, 256, 4d)
LReLU (α = 0.2)         —               (n, 256, 4d)
Phase Shuffle (n = 2)   —               (n, 256, 4d)
Conv1D (Stride=4)       (25, 4d, 8d)    (n, 64, 8d)
LReLU (α = 0.2)         —               (n, 64, 8d)
Phase Shuffle (n = 2)   —               (n, 64, 8d)
Conv1D (Stride=4)       (25, 8d, 16d)   (n, 16, 16d)
LReLU (α = 0.2)         —               (n, 16, 16d)
Reshape                 —               (n, 256d)
Dense                   (256d, 1)       (n, 1)
Table 5: SpecGAN discriminator architecture

Operation               Kernel Size     Output Shape
Input x or G(z)         —               (n, 128, 128, c)
Conv2D (Stride=2)       (5, 5, c, d)    (n, 64, 64, d)
LReLU (α = 0.2)         —               (n, 64, 64, d)
Conv2D (Stride=2)       (5, 5, d, 2d)   (n, 32, 32, 2d)
Table 6: WaveGAN hyperparameters

Name                        Value
Input data type             16-bit PCM (requantized to 32-bit float)
Model data type             32-bit floating point
Num channels (c)            1
Batch size (b)              64
Model dimensionality (d)    64
Phase shuffle (WaveGAN)     2
Phase shuffle (SpecGAN)     0
Loss                        WGAN-GP (Gulrajani et al., 2017)
WGAN-GP λ                   10
D updates per G update      5
Optimizer                   Adam (α = 1e−4, β1 = 0.5, β2 = 0.9)
Sound examples: bit.ly/2xUNCIx, Drum demo: bit.ly/2xL0wJQ, Interactive notebook: bit.ly/2NIEjWW, Code: github.com/chrisdonahue/wavegan
REFERENCES

Sercan Arik, Gregory Diamos, Andrew Gibiansky, John Miller, Kainan Peng, Wei Ping, Jonathan Raiman, and Yanqi Zhou. Deep Voice 2: Multi-speaker neural text-to-speech. In NIPS, 2017.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. In ICML, 2017.

David Berthelot, Tom Schumm, and Luke Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv:1703.10717, 2017.

Peter Boesman. Bird recordings. https://www.xeno-canto.org/contributor/OOECIWCSWV, 2018. Accessed: 2018-01-08.

Johannes Buchner. Synthetic speech commands dataset. https://www.kaggle.com/jbuchner/synthetic-speech-commands-dataset, 2017. Accessed: 2017-01-15.

Lele Chen, Sudhanshu Srivastava, Zhiyao Duan, and Chenliang Xu. Deep cross-modal audio-visual generation. In Proceedings of the Thematic Workshops of ACM Multimedia, 2017.

Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning and Representation Learning Workshop, 2014.

Chris Donahue, Bo Li, and Rohit Prabhavalkar. Exploring speech enhancement with generative adversarial networks for robust speech recognition. In ICASSP, 2018.

Homer Dudley. Remaking speech. The Journal of the Acoustical Society of America, 1939.

Jesse Engel, Cinjon Resnick, Adam Roberts, Sander Dieleman, Douglas Eck, Karen Simonyan, and Mohammad Norouzi. Neural audio synthesis of musical notes with WaveNet autoencoders. In ICML, 2017.

Zhe-Cheng Fan, Yen-Lin Lai, and Jyh-Shing Roger Jang. SVSGAN: Singing voice separation via generative adversarial network. arXiv:1710.11428, 2017.

John S Garofolo, Lori F Lamel, William M Fisher, Jonathan G Fiscus, David S Pallett, Nancy L Dahlgren, and Victor Zue. TIMIT acoustic-phonetic continuous speech corpus. Linguistic Data Consortium, 1993.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In NIPS, 2014.

Daniel Griffin and Jae Lim. Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1984.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. In NIPS, 2017.

Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. CNN architectures for large-scale audio classification. In ICASSP, 2017.

Andrew J Hunt and Alan W Black. Unit selection in a concatenative speech synthesis system using a large speech database. In ICASSP, 1996.

Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In ICLR, 2018.

Jongpil Lee, Jiyoung Park, Keunhyoung Luke Kim, and Juhan Nam. Sample-level deep convolutional neural networks for music auto-tagging using raw waveforms. In Sound and Music Computing Conference, 2017.

Zhen-Hua Ling, Shi-Yin Kang, Heiga Zen, Andrew Senior, Mike Schuster, Xiao-Jun Qian, Helen M Meng, and Li Deng. Deep learning for acoustic modeling in parametric speech generation: A systematic review of existing techniques and future trends. IEEE Signal Processing Magazine, 2015.

Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In ICCV, 2017.

Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, and Yoshua Bengio. SampleRNN: An unconditional end-to-end neural audio generation model. In ICLR, 2017.

Daniel Michelsanti and Zheng-Hua Tan. Conditional generative adversarial networks for speech enhancement and noise-robust speaker verification. In INTERSPEECH, 2017.

Masanori Morise, Fumiya Yokomori, and Kenji Ozawa. WORLD: A vocoder-based high-quality speech synthesis system for real-time applications. IEICE Transactions on Information and Systems, 2016.

Eric Moulines and Francis Charpentier. Pitch-synchronous waveform processing techniques for text-to-speech synthesis using diphones. Speech Communication, 1990.

Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv:1609.03499, 2016.

Santiago Pascual, Antonio Bonafonte, and Joan Serrà. SEGAN: Speech enhancement generative adversarial network. In INTERSPEECH, 2017.

Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan O Arik, Ajay Kannan, Sharan Narang, Jonathan Raiman, and John Miller. Deep Voice 3: 2000-speaker neural text-to-speech. In ICLR, 2018.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.

Tara N Sainath, Ron J Weiss, Andrew Senior, Kevin W Wilson, and Oriol Vinyals. Learning the speech front-end with raw waveform CLDNNs. In INTERSPEECH, 2015.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.

Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, et al. Natural TTS synthesis by conditioning WaveNet on Mel spectrogram predictions. In ICASSP, 2018.

Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Joshua Susskind, Wenda Wang, and Russell Webb. Learning from simulated and unsupervised images through adversarial training. In CVPR, 2017.

Jose Sotelo, Soroush Mehri, Kundan Kumar, Joao Felipe Santos, Kyle Kastner, Aaron Courville, and Yoshua Bengio. Char2Wav: End-to-end speech synthesis. In ICLR Workshops, 2017.

Stanley Smith Stevens, John Volkmann, and Edwin B Newman. A scale for the measurement of the psychological magnitude pitch. The Journal of the Acoustical Society of America, 1937.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.

Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In ICLR, 2016.

Keiichi Tokuda, Yoshihiko Nankaku, Tomoki Toda, Heiga Zen, Junichi Yamagishi, and Keiichiro Oura. Speech synthesis based on hidden Markov models. Proceedings of the IEEE, 2013.

Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al. Tacotron: Towards end-to-end speech synthesis. arXiv:1703.10135, 2017.

Pete Warden. Speech Commands: A dataset for limited-vocabulary speech recognition. arXiv:1804.03209, 2018.

Takayoshi Yoshimura. Simultaneous modeling of phonetic and prosodic parameters, and characteristic conversion for HMM-based text-to-speech systems. 2002.

Heiga Zen, Keiichi Tokuda, and Alan W Black. Statistical parametric speech synthesis. Speech Communication, 2009.
85,459,724 | VISCERAL MACHINES: RISK-AVERSION IN REINFORCEMENT LEARNING WITH INTRINSIC PHYSIOLOGICAL REWARDS | As people learn to navigate the world, autonomic nervous system (e.g., "fight or flight") responses provide intrinsic feedback about the potential consequence of action choices (e.g., becoming nervous when close to a cliff edge or driving fast around a bend.) Physiological changes are correlated with these biological preparations to protect one-self from danger. We present a novel approach to reinforcement learning that leverages a task-independent intrinsic reward function trained on peripheral pulse measurements that are correlated with human autonomic nervous system responses. Our hypothesis is that such reward functions can circumvent the challenges associated with sparse and skewed rewards in reinforcement learning settings and can help improve sample efficiency. We test this in a simulated driving environment and show that it can increase the speed of learning and reduce the number of collisions during the learning stage. | [] | VISCERAL MACHINES: RISK-AVERSION IN REINFORCEMENT LEARNING WITH INTRINSIC PHYSIOLOGICAL REWARDS
Daniel McDuff [email protected]
Microsoft Research Redmond
WA
Ashish Kapoor [email protected]
Microsoft Research Redmond
WA
VISCERAL MACHINES: RISK-AVERSION IN REINFORCEMENT LEARNING WITH INTRINSIC PHYSIOLOGICAL REWARDS
Published as a conference paper at ICLR 2019
As people learn to navigate the world, autonomic nervous system (e.g., "fight or flight") responses provide intrinsic feedback about the potential consequence of action choices (e.g., becoming nervous when close to a cliff edge or driving fast around a bend.) Physiological changes are correlated with these biological preparations to protect one-self from danger. We present a novel approach to reinforcement learning that leverages a task-independent intrinsic reward function trained on peripheral pulse measurements that are correlated with human autonomic nervous system responses. Our hypothesis is that such reward functions can circumvent the challenges associated with sparse and skewed rewards in reinforcement learning settings and can help improve sample efficiency. We test this in a simulated driving environment and show that it can increase the speed of learning and reduce the number of collisions during the learning stage.
INTRODUCTION
The human autonomic nervous system (ANS) is composed of two branches. One of these, the sympathetic nervous system (SNS), is "hard-wired" to respond to potentially dangerous situations, often reducing, or by-passing, the need for conscious processing. The ability to make rapid decisions and respond to immediate threats is one way of protecting oneself from danger, whether one is in the African savanna or driving in Boston traffic.
The SNS regulates a range of visceral functions from the cardiovascular system to the adrenal system (Jansen et al., 1995). The anticipatory response in humans to a threatening situation is for the heart rate to increase, heart rate variability to decrease, blood to be diverted from the extremities and the sweat glands to dilate. This is the body's "fight or flight" response.
While the primary role of these anticipatory responses is to help one prepare for action, they also play a part in our appraisal of a situation. The combination of sensory inputs, physiological responses and cognitive evaluation form emotions that influence how humans learn, plan and make decisions (Loewenstein & Lerner, 2003). Intrinsic motivation refers to being moved to act based on the way it makes one feel. For example, it is generally undesirable to be in a situation that causes fear and thus we might choose to take actions that help avoid these types of contexts in future. This is contrasted with extrinsic motivation that involves explicit goals (Chentanez et al., 2005).
Driving is an everyday example of a task in which we commonly rely on both intrinsic and extrinsic motivations and experience significant physiological changes. When traveling in a car at high speed one may experience a heightened state of arousal. This automatic response is correlated with the body's reaction to the greater threats posed by the situation (e.g., the need to adjust steering more rapidly to avoid a pedestrian that might step into the road). Visceral responses are likely to preempt accidents or other events (e.g., a person will become nervous before losing control and hitting someone). Therefore, these signals potentially offer an advantage as a reward mechanism compared to extrinsic rewards based on events that occur in the environment, such as a collision. This paper provides a reinforcement learning (RL) framework that incorporates reward functions for achieving task-specific goals and also minimizes a cost trained on physiological responses to the environment that are correlated with stress. We ask if such a reward function with extrinsic and intrinsic components is useful in a reinforcement learning setting. We test our approach by training a model on real visceral human responses in a driving task.

Figure 1: We present a novel approach to reinforcement learning that leverages an artificial network trained on physiological signals correlated with autonomic nervous system responses.
The key challenges of applying RL in the real-world include the amount of training data required and the high-cost associated with failure cases. For example, when using RL in autonomous driving, rewards are often sparse and skewed. Furthermore, bad actions can lead to states that are both catastrophic and expensive to recover from. While much of the work in RL focuses on mechanisms that are task or goal dependent, it is clear that humans also consider the feedback from the body's nervous system for action selection. For example, increased arousal can help signal imminent danger or failure to achieve a goal. Such mechanisms in an RL agent could help reduce the sample complexity as the rewards are continually available and could signal success or failure before the end of the episode. Furthermore, these visceral signals provide a warning mechanism that in turn could lead to safer explorations.
Our work is most closely related to that in intrinsically motivated learning (Chentanez et al., 2005;Zheng et al., 2018;Haber et al., 2018;Pathak et al., 2017) that uses a combination of intrinsic and extrinsic rewards and shows benefits compared to using extrinsic rewards alone. The key distinction in our work is that we specifically aim to build intrinsic reward mechanisms that are visceral and trained on signals correlated with human affective responses. Our approach could also be considered a form of imitation learning (Ross et al., 2011;Ross & Bagnell, 2014;Ho & Ermon, 2016;Chang et al., 2015) as we use the signal from a human expert for training. However, a difference is that our signal is an implicit response from the driver versus an explicit instruction or action which might commonly be the case in imitation learning.
The structural credit assignment problem, or generalization problem, aims to address the challenge posed by large parameter spaces in RL and the need to give the agent the ability to guess, or have some intuition about new situations based on experience (Lin, 1992). A significant advantage of our proposed method is the reduced sparsity of the reward signal. This makes learning more practical in a large parameter space. We conduct experiments to provide empirical evidence that this can help reduce the number of epochs required in learning. In a sense, the physiological response could be considered as an informed guess about new scenarios before the explicit outcome is known. The challenge with traditional search-based structured prediction is the assumptions that must be made in the search algorithms that are required (Daumé et al., 2009). By training a classifier using a loss based on the human physiological response this problem can potentially be simplified.
The core contributions of this paper are to: (1) present a novel approach to learning in which the reward function is augmented with a model learned directly from human nervous system responses, (2) show how this model can be incorporated into a reinforcement learning paradigm and (3) report the results of experiments that show the model can improve both safety (reducing the number of mistakes) and efficiency (reducing the sample complexity) of learning.
In summary, we argue that a function trained on physiological responses could be used as an intrinsic reward or value function for artificially intelligent systems, or perhaps more aptly artificially emotionally intelligent systems. We hypothesize that incorporating intrinsic rewards with extrinsic rewards in an RL framework (as shown in Fig 1) will both improve learning efficiency as well as reduce catastrophic failure cases that occur during the training.
BACKGROUND
SYMPATHETIC NERVOUS SYSTEM
The SNS is activated globally in response to fear and threats. Typically, when threats in an environment are associated with a "fight or flight" response the result is an increase in heart rate and perspiration and a release of adrenaline and cortisol into the circulatory system. These physiological changes act to help us physically avoid danger but also play a role in our appraisal of emotions and ultimately our decision-making. A large volume of research has found that purely rational decision-making is sub-optimal (Lerner et al., 2015). This research could be interpreted as indicating that intrinsic rewards (e.g., physiological responses and the appraisal of an emotion) serve a valuable purpose in decision-making. Thus, automatic responses both help people act quickly and in some cases help them make better decisions. While these automatic responses can be prone to mistakes, they are vital for keeping us safe. Logical evaluation of a situation and the threat it presents is also important. Ultimately, a combination of intrinsic emotional rewards and extrinsic rational rewards, based on the goals one has, is likely to lead to optimal results.
REINFORCEMENT LEARNING
We consider the standard RL framework, where an agent interacts with the environment (described by a set of states $S$) through a set of actions ($A$). An action $a_t$ at a time-step $t$ leads to a distribution over the possible future states $p(s_{t+1} \mid s_t, a_t)$, and a reward $r : S \times A \to \mathbb{R}$. In addition, we start with a distribution of initial states $p(s_0)$ and the goal of the agent is to maximize the discounted sum of future rewards $R_t = \sum_{i=t}^{\infty} \gamma^{i-t} r_i$, where $\gamma$ is the discount factor. Algorithms such as Deep Q-Networks (DQN) learn a Neural-Network representation of a deterministic policy $\pi : S \to A$ that approximates an optimal Q-function:
$$Q^*(s, a) = \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\left[ r(s, a) + \gamma \max_{a' \in A} Q^*(s', a') \right].$$
The application of RL techniques to real-world scenarios, such as autonomous driving, is challenging due to the high sample complexity of the methods. High sample complexity arises due to the credit-assignment problem: it is difficult to identify which specific action from a sequence was responsible for a success or failure. This issue is further exacerbated in scenarios where the rewards are sparse. Reward shaping (Ng et al., 1999; Russell, 1998) is one way to deal with the sample complexity problem, in which heuristics are used to boost the likelihood of determining the responsible action.
We contrast sparse episodic reward signals in RL agents with physiological responses in humans. We conjecture that the sympathetic nervous system (SNS) responses of a driver are just as informative and useful, and provide a more continuous form of feedback. An example of one such SNS response is the volumetric change in blood in the periphery of the skin, controlled in part through vasomodulation. We propose to use a reward signal that is trained on a physiological signal that captures sympathetic nervous system activity. The key insight is that physiological responses in humans indicate adverse and risky situations much before the actual end-of-episode event (e.g., an accident) and even if the event never occurs. By utilizing such a reward function, not only is the system able to get a more continuous and dense reward but it also allows us to reason about the credit assignment problem. This is due to the fact that an SNS response is often tied causally to the set of actions responsible for the eventual success or failure of the episode.

Figure 2: A zoomed-in section of the pulse wave with frames from the view of the driver. Note how the pulse wave pinches between seconds 285 and 300; during this period the driver collided with a wall while turning sharply to avoid another obstacle. The pinching begins before the collision occurs as the driver's anticipatory response is activated.
Our work is related, in spirit, to a recent study that used facial expressions as implicit feedback to help train a machine learning systems for image generation (Jaques et al., 2018). The model produced sketches that led to significantly more positive facial expressions when trained with input of smile responses from an independent group. However, this work was based on the idea of Social Learning Theory (Bandura & Walters, 1977) and that humans learn from observing the behaviors of others, rather than using their own nervous system response as a reward function.
THE PROPOSED FRAMEWORK
Our proposal is to consider a reward function that has both an extrinsic component $r$ and an intrinsic component $\bar{r}$. The extrinsic component rewards behaviors that are task specific, whereas the intrinsic component specifically aims to predict a human physiological response to SNS activity and reward actions that lead to states that reduce stress and anxiety. The final reward $\hat{r}$ is then a function that considers both the extrinsic and intrinsic components: $\hat{r} = f(r, \bar{r})$. Theoretically, the function $f(\cdot, \cdot)$ can be fairly complex and one possibility would be to parameterize it as a neural network. For simplicity, we consider linear combinations of the extrinsic and intrinsic rewards in this paper. Formally, let us consider an RL framework based on a DQN with reward $r$. We propose to use a modified reward $\hat{r}$ that is a convex combination of the original reward $r$ and a component that mirrors human physiological responses $\bar{r}$:

$$\hat{r} = \lambda r + (1 - \lambda)\bar{r} \qquad (1)$$
Here λ is a weighting parameter that provides a trade-off between the desire for task completion (extrinsic motivation) and physiological response (intrinsic motivation). For example, in an autonomous driving scenario the task-dependent reward $r$ can be the velocity, while $\bar{r}$ can correspond to physiological responses associated with safety. The goal of the system then is to complete the task while minimizing the physiological arousal response. The key challenge now is to build a computational model of the intrinsic reward $\bar{r}$ given the state of the agent. In the rest of the paper we focus on the autonomous driving scenario as a canonical example and discuss how we can model the appropriate physiological responses and utilize them effectively in this framework.

One of the greatest challenges in building a predictive model of SNS responses is the collection of realistic ground truth data. In this work, we use high-fidelity simulations (Shah et al., 2018) to collect physiological responses of humans and then train a deep neural network to predict SNS responses that will ultimately be used during the reinforcement learning process. In particular, we rely on the photoplethysmographic (PPG) signal to capture the volumetric change in blood in the periphery of the skin (Allen, 2007). The blood volume pulse waveform envelope pinches when a person is startled, fearful or anxious, which is the result of the body diverting blood from the extremities to the vital organs and working muscles to prepare them for action, the "fight or flight" response. Use of this phenomenon in affective computing applications is well established and has been leveraged to capture emotional responses in marketing/media testing (Wilson & Sasse, 2000), computer tasks (Scheirer et al., 2002) and many other psychological studies (Fredrickson & Levenson, 1998; Gross, 2002). The peripheral pulse can be measured unobtrusively and even without contact (Poh et al., 2010; Chen & McDuff, 2018), making it a good candidate signal for scalable measurement. We leverage the pulse signal to capture aspects of the nervous system response and our core idea is to train an artificial network to mimic the pulse amplitude variations based on the visual input from the perspective of the driver.
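A minimal sketch of the combined reward of Equation 1 is shown below. Here `cnn_reward` stands for the trained intrinsic-reward network described in the next section, `extrinsic_r` for the task reward (e.g., instantaneous velocity), and the default value of λ is purely illustrative.

```python
def combined_reward(extrinsic_r, frame, cnn_reward, lam=0.25):
    """r_hat = lambda * r + (1 - lambda) * r_bar (Equation 1).

    extrinsic_r: task-specific reward, e.g. instantaneous velocity
    frame:       84x84 grayscale observation fed to the intrinsic reward CNN
    cnn_reward:  trained network predicting normalized pulse amplitude in [0, 1]
    lam:         trade-off between task completion and low predicted arousal
    """
    intrinsic_r = cnn_reward(frame)  # high value = low "fight or flight" response
    return lam * extrinsic_r + (1.0 - lam) * intrinsic_r
```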
To design a reward function based on the nervous system response of the driver in the simulated environment, we collected a data set of physiological recordings and synchronized first-person video frames from the car. Using this data we trained a convolutional neural network (CNN) to mimic the physiological response based on the input images. Fig 2 shows a section of the recorded blood volume pulse signal with pulse peaks highlighted; notice how the waveform envelope changes.
Reinforcement Learning Environments:
We performed our experiments in AirSim (Shah et al., 2018) where we instantiated an autonomous car in a maze. The car was equipped with an RGB camera and the goal for the agent was to learn a policy that maps the camera input to a set of controls (discrete set of steering angles). The agent's extrinsic reward can be based on various driving related tasks, such as keeping up the velocity, making progress towards a goal, traveling large distances, and can be penalized heavily for collisions. Fig 2 shows example frames captured from the environment. The maze consisted of walls and ramps and was designed to be non-trivial to navigate for the driver.
Intrinsic Reward Architecture: We used a CNN to predict the normalized pulse amplitude derived from the physiological response of the driver. The image frames from the camera sensor in the environment served as an input to the network. The input frames were downsampled to 84 × 84 pixels and converted to grayscale format. They were normalized by subtracting the mean pixel value (calculated on the training set). The network architecture is illustrated in Fig 3. A dense layer of 128 hidden units preceded the final layer that had linear activation units and a mean square error (MSE) loss, so the output formed a continuous signal from 0 to 1.
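A PyTorch sketch consistent with this description is given below (84 × 84 grayscale input, a 128-unit dense layer, dropout, and a linear output trained with MSE). The exact convolutional stack is specified in Fig 3, so the layer widths here are illustrative assumptions.

```python
import torch.nn as nn

class VisceralRewardCNN(nn.Module):
    """Predicts normalized pulse amplitude (0-1) from an 84x84 grayscale frame.
    Layer widths are illustrative; the exact stack is given in Fig 3."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                           # 84 -> 42
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                           # 42 -> 21
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                           # 21 -> 10
            nn.Dropout(0.5),                           # reduce overfitting
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 10 * 10, 128), nn.ReLU(),   # 128-unit dense layer
            nn.Linear(128, 1),                         # linear output, MSE loss
        )

    def forward(self, x):
        return self.head(self.features(x))
```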
Training the Reward Network: We recruited four participants (2 male, 2 female) to drive a vehicle around the maze and to find the exit point. All participants were licensed drivers and had at least seven years driving experience. For each participant we collected approximately 20 minutes (∼24,000 frames at a resolution of 256 × 144 pixels and frame rate of 20 frames-per-second) of continuous driving in the virtual environment (for a summary of the data see Table 1). In addition, the PPG signal was recorded from the index finger of the non-dominant hand using a Shimmer3 [1] GSR+ with an optical pulse sensor. The signal was recorded at 51.6 Hz. The physiological signals were synchronized with the frames from the virtual environment using the same computer clock. A custom peak detection algorithm (McDuff et al., 2014) was used to recover the systolic peaks from the pulse waveform. The amplitudes of the peaks were normalized to a range of 0 to 1 across the entire recording. Following the experiment the participants reported how stressful they found the task (Not at all, A little, A moderate amount, A lot, A great deal). The participants all reported experiencing some stress during the driving task. The frames and pulse amplitude measures were then used to train the CNN (details in the next section). The output of the resulting trained CNN (the visceral machine) was used as the intrinsic reward (r̄ = CNN output) in the proposed framework.

Figure 4: Frames from the environment ordered by the predicted pulse amplitude from our CNN intrinsic reward model. A lower value indicates a higher SNS/"fight or flight" response. This is associated with more dangerous situations (e.g., driving close to walls and turning in tight spaces).
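A rough sketch of the peak-detection and normalization step described above, using scipy's find_peaks as a stand-in for the detector of McDuff et al. (2014); the minimum peak spacing is an assumption consistent with the 51.6 Hz recording:

```python
import numpy as np
from scipy.signal import find_peaks

FS = 51.6  # PPG sampling rate in Hz

def normalized_pulse_amplitudes(ppg):
    """Locate systolic peaks and normalize their amplitudes to [0, 1]."""
    ppg = np.asarray(ppg, dtype=float)
    # Require peaks to be at least ~0.4 s apart (heart rate below ~150 bpm).
    peaks, _ = find_peaks(ppg, distance=int(0.4 * FS))
    amps = ppg[peaks]
    amps = (amps - amps.min()) / (amps.max() - amps.min() + 1e-8)
    return peaks, amps
```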
EXPERIMENTS AND RESULTS
We conducted experiments to answer two questions: (1) whether we can build a deep predictive model that estimates a peripheral physiological response associated with SNS activity, and (2) whether using such predicted responses leads to sample efficiency in the RL framework. We use DQN as a base-level approach and build our proposed changes on top of it. We consider three different tasks in the domain of autonomous driving: (a) keeping the velocity high (r is the instantaneous velocity), (b) traveling long straight-line distances from the origin (r is the absolute distance from the origin) without any mishaps, and (c) driving towards a goal (r = 10 if the goal is achieved). While the velocity and distance tasks provide dense rewards, the goal-directed task is an example where the rewards are sparse and episodic. Note that in all three cases we terminate the episode with a high negative reward (r = -10) if a collision happens.
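As a minimal sketch of these extrinsic reward variants (the state accessors below are hypothetical placeholders; only the reward structure, the sparse goal reward of 10, and the collision penalty of -10 come from the text):

```python
COLLISION_PENALTY = -10.0
GOAL_REWARD = 10.0

def extrinsic_reward(task, state):
    """Task-dependent reward r for the three driving tasks."""
    if state.collided:  # every task terminates the episode on a collision
        return COLLISION_PENALTY
    if task == "velocity":  # dense reward
        return state.speed
    if task == "distance":  # dense reward
        return state.distance_from_origin
    if task == "goal":  # sparse, episodic reward
        return GOAL_REWARD if state.reached_goal else 0.0
    raise ValueError(f"unknown task: {task}")
```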
HOW WELL CAN WE PREDICT BVP AMPLITUDE?
We trained five models, one for each of the four participants independently and one for all the participants combined. In each case, the first 75% of frames from the experimental recordings were taken as training examples and the latter 25% as testing examples. The data in the training split was randomized and a batch size of 128 examples was used. Max pooling was inserted between layers 2 and 3, layers 4 and 5, and layers 7 and 8. To overcome overfitting, a dropout layer (Srivastava et al., 2014) was added after layer 7 with rate d_1 = 0.5. The loss during training of the reward model was the mean squared error. Each model was trained for 50 epochs, after which the training root mean squared error (RMSE) loss was under 0.1 for all models. The RMSE was then calculated on the independent test set and was between 0.10 and 0.19 for all participants (see Table 1). As a naive baseline, the testing loss for a random prediction was on average 0.210 greater. In all cases the CNN model loss was significantly lower than the random prediction loss (based on unpaired T-tests). Fig 4 illustrates how a trained CNN associates different rewards to various situations. Specifically, we show different examples of the predicted pulse amplitudes on an independent set of the frames from the simulated environment. A lower value indicates a higher stress response. Quantitatively and qualitatively these results show that we could predict the pulse amplitude, and that pinching in the peripheral pulse wave, and increased SNS response, was associated with approaching (but not necessarily contacting) obstacles. The remaining results were calculated using the model from P1; however, similar data were obtained from the other models, indicating that the performance generalized across participants.
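A PyTorch sketch of this reward network is given below. The text fixes the layer count, the pooling and dropout placement, the 84 × 84 grayscale input, the 128-unit dense layer, the linear output, and the MSE loss; the filter sizes and channel widths are assumptions:

```python
import torch.nn as nn

class VisceralRewardCNN(nn.Module):
    """Seven conv layers + one dense head predicting pulse amplitude."""

    def __init__(self):
        super().__init__()
        chans = [1, 16, 16, 32, 32, 64, 64, 64]  # assumed channel widths
        layers = []
        for i in range(7):
            layers += [nn.Conv2d(chans[i], chans[i + 1], 3, padding=1), nn.ReLU()]
            if i in (1, 3, 6):  # max pooling between layers 2-3, 4-5 and 7-8
                layers.append(nn.MaxPool2d(2))
        layers.append(nn.Dropout(0.5))  # dropout after layer 7, rate 0.5
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 10 * 10, 128),  # 84x84 input -> 10x10 after 3 pools
            nn.ReLU(),
            nn.Linear(128, 1),  # linear output unit, trained with nn.MSELoss()
        )

    def forward(self, x):  # x: (batch, 1, 84, 84) mean-subtracted grayscale
        return self.head(self.features(x))
```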
DOES THE VISCERAL REWARD COMPONENT IMPROVE PERFORMANCE?
We then used the trained CNN as the visceral reward component in a DQN framework and used various values of λ to control its relative weight compared to the task-dependent reward component. Fig 5 shows the mean extrinsic reward per episode as a function of training time. The plots are averaged over 10 different RL runs and we show plots for different values of λ. When λ = 1 the RL agent is executing vanilla DQN, whereas λ = 0 means that there is no extrinsic reward signal. For all three tasks, we observe that the learning rate improves significantly when λ is neither 0 nor 1. The exact value of the optimal λ varies from task to task, due to the slightly different final reward structures in the different tasks. One of the main reasons is that the rewards are non-sparse, with the visceral reward component contributing effectively to the learning. Low values of λ promote a risk-averse behavior in the agent and higher values of λ train an agent with better task-specific behavior, but require longer periods of training. It is the mid-range values of λ (e.g. 0.25) that lead to optimal behavior both in terms of the learning rate and the desire to accomplish the mission.

DOES THE VISCERAL REWARD COMPONENT HELP REDUCE COLLISIONS?

Fig 6 plots how the average length of an episode changes with training time for different values of λ. Note that we consider an episode terminated when the agent experiences a collision, so the length of the episode is a surrogate measure of how cautious an agent is. We observed that a low value of λ leads to longer episodes sooner while high values do not lead to much improvement overall. Essentially, a low value of λ leads to risk aversion without having the desire to accomplish the task. This results in a behavior where the agent is happy to make minimal movements while staying safe.
Fig 6 (Distance) shows that the average length of the episodes does not increase with the number of episodes. This is because with increasing numbers of episodes the car travels further but also faster. These two factors cancel one another out resulting in episodes with similar lengths (durations).
WHAT IS THE EFFECT OF A DECAYING VISCERAL REWARD?
What happens if we introduce a time varying intrinsic reward that decays over time? We also ran experiments with varying λ:
λ = 1 − 1/N_Episode   (2)
where N_Episode is the current episode number. As before, the reward was calculated as in Eqn. 1. Therefore, during the first episode λ is equal to zero and the reward is composed entirely of the intrinsic component. As the number of episodes increases, the contribution of the intrinsic reward decreases. By episode 95 the intrinsic reward contributes less than 2% of the total reward.

Figure 7: Average velocity (left) and average length (right) per episode as the system evolves over time. Blue) Performance with a time decaying contribution from the intrinsic reward (decaying at 1/(No. of Episodes)). Red) Performance of the best λ at each episode from the previous velocity experiments (see Figs. 5 and 6). The episode length is superior with the time decaying intrinsic reward because we are directly optimizing for safety initially and the agent quickly learns not to crash.

HOW DOES THE PERFORMANCE COMPARE TO REWARD SHAPING?

A question we consider is whether the CNN predicting the SNS responses is doing more than predicting distances to the wall, and whether there are ways in which the original reward can be shaped to include that information. We did RL experiments where we compared the proposed architecture (λ = 0.25) with an agent that replaced the intrinsic reward component with the reward 1 − exp[−|distance to wall|]. Note that such distance measures are often available through sensors (such as sonar, radar etc.); however, given the luxury of the simulation we chose to use the exact distance for simplicity. Fig 8 shows both the average reward per episode as well as the average length per episode, for the velocity task, as a function of training time. We observed that the agent that used the CNN for the intrinsic reward component performs better than the heuristic. We believe that the trained CNN is far richer than the simple distance-based measure and is able to capture the context around the task of driving the car in confined spaces (e.g., avoiding turning at high speeds and rolling the car).
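Both reward variants above are simple to express; a short sketch combining the decaying schedule of Eqn. 2 with the distance-based shaping heuristic (the distance argument would come from the simulator):

```python
import math

def decayed_lambda(n_episode):
    """Eqn. 2: the intrinsic contribution shrinks as training proceeds."""
    return 1.0 - 1.0 / n_episode  # episode numbering starts at 1

def shaping_heuristic(distance_to_wall):
    """Distance-based stand-in for the intrinsic reward: 1 - exp(-|d|)."""
    return 1.0 - math.exp(-abs(distance_to_wall))
```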
CONCLUSION AND FUTURE WORK
Heightened arousal is a key part of the "fight or flight" response we experience when faced with risks to our safety. We have presented a novel reinforcement learning paradigm using an intrinsic reward function trained on peripheral physiological responses and extrinsic rewards based on mission goals. First, we trained a neural architecture to predict a driver's peripheral blood flow modulation based on the first-person video from the vehicle. This architecture acted as the intrinsic reward in our reinforcement learning step. A major advantage of training a reward on a signal correlated with sympathetic nervous system responses is that the rewards are non-sparse: the negative reward starts to show up well before the car collides. This leads to efficiency in training and, with proper design, can lead to policies that are also aligned with the desired mission. While emotions are important for decision-making (Lerner et al., 2015), they can also detrimentally affect decisions in certain contexts. Future work will consider how to balance intrinsic and extrinsic rewards and include extensions to representations that include multiple intrinsic drives (such as hunger, fear and pain).
We must emphasize that in this work we were not attempting to mimic biological processes or model them explicitly. We were using a prediction of the peripheral blood volume pulse as an indicator of situations that are correlated with high arousal.
Figure 2: An example of the blood volume pulse wave during driving in the simulated environment.
Figure 3: We used an eight-layer CNN (seven convolutional layers and a fully connected layer) to estimate the normalized pulse amplitude derived from the physiological response of the driver. The inputs were the frames from the virtual environment, AirSim.
Figure 5: The graph plots average extrinsic reward per episode as the system evolves over time for different values of λ. For all three tasks we observe that using appropriately balanced visceral rewards with the extrinsic reward leads to better learning rates when compared to either vanilla DQN (magenta triangle, λ = 1) or DQN that only has the visceral component (red circle, λ = 0). The error bars correspond to standard error; non-overlapping bars indicate significant differences (p < 0.05).
Figure 6: The graph plots average length per episode as the system evolves over time. For all three tasks we observe that using visceral reward components leads to longer episodes when compared to vanilla DQN (λ = 1). This implies that the agent with the visceral reward component becomes more cautious about collisions sooner. The error bars correspond to standard error; non-overlapping bars indicate significant differences (p < 0.05).
Fig 7 plots the average velocity (left) and average length (right) per episode as the system evolves over time. The blue lines show the performance with a time decaying contribution from the intrinsic reward. These are compared with the best λ (= 0.25) (red lines) from the previous velocity experiments (see Figs 5 and 6). The episode length is quite superior with the time decaying intrinsic reward. This is because we are directly optimizing for safety initially and the agent quickly learns not to crash. This highlights the value of the intrinsic reward in increasing the safety of the vehicle and extending the length of episodes, especially initially, when the vehicle has little knowledge of how to behave.

Table 1: Summary of the Data and Testing Loss of our Pulse Amplitude Prediction Algorithm. We Compare the Testing Loss from the CNN Model with a Random Baseline. In All Cases the CNN Gave a Significantly Lower RMSE.

Part. | Gender | Age (Yrs) | Driving Exp. (Yrs) | Was the Task Stressful? | # Frames | Testing Loss (RMSE) | Testing Loss Improve. over Random (RMSE)
P1 | M | 31 | 8 | A lot | 28,968 | .189 | .150
P2 | F | 37 | 20 | A lot | 23,005 | .100 | .270
P3 | F | 33 | 7 | A little | 23,789 | .102 | .234
P4 | M | 31 | 15 | A little | 25,972 | .116 | .194
All P. | - | - | - | - | 101,734 | .115 | .210
Figure 8: Comparison of the CNN based intrinsic reward (CNN with λ = 0.25) with a reward shaping mechanism (heuristic with λ = 0.25). The plots are (left) average extrinsic reward per episode and (right) length of episode as the system evolves, and show the advantages of the CNN based approach in both cases.
30
50
70
90
110
130
150
Average
Velocity
2.3
2.4
2.5
2.6
2.7
2.8
Velocity
CNN with 6 = 0.25
Heuristic with 6 = 0.25
Number of Episodes
30
50
70
90
110
130
150
Average
Length
of
Episode
20
22
24
26
28
30
32
34
Velocity
CNN with 6 = 0.25
Heuristic with 6 = 0.25
4.5
[1] http://www.shimmersensing.com/
Photoplethysmography and its application in clinical physiological measurement. John Allen, Physiological measurement. 2831Allen, John. Photoplethysmography and its application in clinical physiological measurement. Physiological measurement, 28(3):R1, 2007.
Social learning theory. Albert Bandura, Richard H Walters, Prentice-hall1Englewood Cliffs, NJBandura, Albert and Walters, Richard H. Social learning theory, volume 1. Prentice-hall Englewood Cliffs, NJ, 1977.
Learning to search better than your teacher. Kai-Wei Chang, Krishnamurthy, Akshay, Alekh Agarwal, Iii Daumé, Hal Langford, John , Proceedings of the 32nd International Conference on International Conference on Machine Learning. the 32nd International Conference on International Conference on Machine Learning37Chang, Kai-Wei, Krishnamurthy, Akshay, Agarwal, Alekh, Daumé III, Hal, and Langford, John. Learning to search better than your teacher. In Proceedings of the 32nd International Conference on International Conference on Machine Learning-Volume 37, pp. 2058-2066. JMLR. org, 2015.
Deepphys: Video-based physiological measurement using convolutional attention networks. Weixuan Chen, Daniel Mcduff, arXiv:1805.07888arXiv preprintChen, Weixuan and McDuff, Daniel. Deepphys: Video-based physiological measurement using convolutional attention networks. arXiv preprint arXiv:1805.07888, 2018.
Intrinsically motivated reinforcement learning. Nuttapong Chentanez, Barto, G Andrew, Singh, P Satinder, Advances in neural information processing systems. Chentanez, Nuttapong, Barto, Andrew G, and Singh, Satinder P. Intrinsically motivated reinforcement learning. In Advances in neural information processing systems, pp. 1281-1288, 2005.
Search-based structured prediction. Hal Daumé, John Langford, Daniel Marcu, Machine learning. 753Daumé, Hal, Langford, John, and Marcu, Daniel. Search-based structured prediction. Machine learning, 75(3): 297-325, 2009.
Emotion regulation: Affective, cognitive, and social consequences. James J Gross, Psychophysiology. 393Gross, James J. Emotion regulation: Affective, cognitive, and social consequences. Psychophysiology, 39(3): 281-291, 2002.
Learning to play with intrinsicallymotivated self-aware agents. Nick Haber, Damian Mrowca, Li Fei-Fei, Yamins, L K Daniel, arXiv:1802.07442arXiv preprintHaber, Nick, Mrowca, Damian, Fei-Fei, Li, and Yamins, Daniel L. K. Learning to play with intrinsically- motivated self-aware agents. arXiv preprint arXiv:1802.07442, 2018.
Generative adversarial imitation learning. Jonathan Ho, Stefano Ermon, Advances in Neural Information Processing Systems. Ho, Jonathan and Ermon, Stefano. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pp. 4565-4573, 2016.
Central command neurons of the sympathetic nervous system: basis of the fight-or-flight response. Arthur Jansen, Van Sp, Nguyen, Xay, Karpitskiy, Vladimir, Mettenleiter, C Thomas, Loewy , Arthur D , Science. 2705236Jansen, Arthur SP, Van Nguyen, Xay, Karpitskiy, Vladimir, Mettenleiter, Thomas C, and Loewy, Arthur D. Central command neurons of the sympathetic nervous system: basis of the fight-or-flight response. Science, 270(5236):644-646, 1995.
Natasha Jaques, Jesse Engel, Ha, David, Bertsch, Fred, Rosalind Picard, Douglas Eck, arXiv:1802.04877Learning via social awareness: improving sketch representations with facial feedback. arXiv preprintJaques, Natasha, Engel, Jesse, Ha, David, Bertsch, Fred, Picard, Rosalind, and Eck, Douglas. Learning via social awareness: improving sketch representations with facial feedback. arXiv preprint arXiv:1802.04877, 2018.
Positive emotions speed recovery from the cardiovascular sequelae of negative emotions. L Fredrickson, Barbara Levenson, Robert W , Cognition & emotion. 122L. Fredrickson, Barbara and Levenson, Robert W. Positive emotions speed recovery from the cardiovascular sequelae of negative emotions. Cognition & emotion, 12(2):191-220, 1998.
Emotion and decision making. Jennifer S Lerner, Li, Ye, Piercarlo Valdesolo, Karim S Kassam, Annual Review of Psychology. 66Lerner, Jennifer S, Li, Ye, Valdesolo, Piercarlo, and Kassam, Karim S. Emotion and decision making. Annual Review of Psychology, 66, 2015.
Self-improving reactive agents based on reinforcement learning, planning and teaching. Long-Ji Lin, Machine learning. 83-4Lin, Long-Ji. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3-4):293-321, 1992.
The role of affect in decision making. Handbook of affective science. George Loewenstein, Jennifer S Lerner, 6193Loewenstein, George and Lerner, Jennifer S. The role of affect in decision making. Handbook of affective science, 619(642):3, 2003.
Remote detection of photoplethysmographic systolic and diastolic peaks using a digital camera. Daniel Mcduff, Sarah Gontarek, Rosalind W Picard, IEEE Transactions on Biomedical Engineering. 6112McDuff, Daniel, Gontarek, Sarah, and Picard, Rosalind W. Remote detection of photoplethysmographic systolic and diastolic peaks using a digital camera. IEEE Transactions on Biomedical Engineering, 61(12):2948-2954, 2014.
Policy invariance under reward transformations: Theory and application to reward shaping. Andrew Y Ng, Daishi Harada, Stuart Russell, ICML. 99Ng, Andrew Y, Harada, Daishi, and Russell, Stuart. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278-287, 1999.
Curiosity-driven exploration by self-supervised prediction. Deepak Pathak, Pulkit, Alexei A Efros, Trevor Darrell, International Conference on Machine Learning. Pathak, Deepak, Agrawal, Pulkit, Efros, Alexei A., and Darrell, Trevor. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning, pp. 2778-2787, 2017.
Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Ming - Poh, Zher, Mcduff, J Daniel, Rosalind W Picard, Optics express. 1810Poh, Ming-Zher, McDuff, Daniel J, and Picard, Rosalind W. Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Optics express, 18(10):10762-10774, 2010.
Reinforcement and imitation learning via interactive no-regret learning. Stephane Ross, Bagnell, Andrew, arXiv:1406.5979arXiv preprintRoss, Stephane and Bagnell, J Andrew. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979, 2014.
A reduction of imitation learning and structured prediction to no-regret online learning. Stéphane Ross, Geoffrey Gordon, Drew Bagnell, Proceedings of the fourteenth international conference on artificial intelligence and statistics. the fourteenth international conference on artificial intelligence and statisticsRoss, Stéphane, Gordon, Geoffrey, and Bagnell, Drew. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627-635, 2011.
Learning agents for uncertain environments. Stuart Russell, Proceedings of the eleventh annual conference on Computational learning theory. the eleventh annual conference on Computational learning theoryACMRussell, Stuart. Learning agents for uncertain environments. In Proceedings of the eleventh annual conference on Computational learning theory, pp. 101-103. ACM, 1998.
Frustrating the user on purpose: a step toward building an affective computer. Jocelyn Scheirer, Raul Fernandez, Jonathan Klein, Rosalind W Picard, Interacting with computers. 142Scheirer, Jocelyn, Fernandez, Raul, Klein, Jonathan, and Picard, Rosalind W. Frustrating the user on purpose: a step toward building an affective computer. Interacting with computers, 14(2):93-118, 2002.
Airsim: High-fidelity visual and physical simulation for autonomous vehicles. Shah, Shital, Dey, Debadeepta, Chris Lovett, Ashish Kapoor, Field and Service Robotics. SpringerShah, Shital, Dey, Debadeepta, Lovett, Chris, and Kapoor, Ashish. Airsim: High-fidelity visual and physical simulation for autonomous vehicles. In Field and Service Robotics, pp. 621-635. Springer, 2018.
Dropout: a simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, Journal of machine learning research. 151Srivastava, Nitish, Hinton, Geoffrey E, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 15(1): 1929-1958, 2014.
Do users always know what's good for them? utilising physiological responses to assess media quality. Gillian M Wilson, Sasse, Angela, People and computers XIV-Usability or else!. SpringerWilson, Gillian M and Sasse, M Angela. Do users always know what's good for them? utilising physiological responses to assess media quality. In People and computers XIV-Usability or else!, pp. 327-339. Springer, 2000.
On learning intrinsic rewards for policy gradient methods. Zheng, Zeyu, Junhyuk Oh, Singh, Satinder, arXiv:1804.06459arXiv preprintZheng, Zeyu, Oh, Junhyuk, and Singh, Satinder. On learning intrinsic rewards for policy gradient methods. arXiv preprint arXiv:1804.06459, 2018. |
256,416,405 | Interpreting Robustness Proofs of Deep Neural Networks | In recent years numerous methods have been developed to formally verify the robustness of deep neural networks (DNNs). Though the proposed techniques are effective in providing mathematical guarantees about the DNNs behavior, it is not clear whether the proofs generated by these methods are human-interpretable. In this paper, we bridge this gap by developing new concepts, algorithms, and representations to generate human understandable interpretations of the proofs. Leveraging the proposed method, we show that the robustness proofs of standard DNNs rely on spurious input features, while the proofs of DNNs trained to be provably robust filter out even the semantically meaningful features. The proofs for the DNNs combining adversarial and provably robust training are the most effective at selectively filtering out spurious features as well as relying on human-understandable input features. | [
248496160,
189856873,
211082795,
3488815,
1450294,
52962648,
214107001
] | Interpreting Robustness Proofs of Deep Neural Networks
Debangshu Banerjee
Avaljot Singh
Gagandeep Singh
Interpreting Robustness Proofs of Deep Neural Networks
In recent years numerous methods have been developed to formally verify the robustness of deep neural networks (DNNs). Though the proposed techniques are effective in providing mathematical guarantees about the DNNs behavior, it is not clear whether the proofs generated by these methods are human-interpretable. In this paper, we bridge this gap by developing new concepts, algorithms, and representations to generate human understandable interpretations of the proofs. Leveraging the proposed method, we show that the robustness proofs of standard DNNs rely on spurious input features, while the proofs of DNNs trained to be provably robust filter out even the semantically meaningful features. The proofs for the DNNs combining adversarial and provably robust training are the most effective at selectively filtering out spurious features as well as relying on human-understandable input features.
Introduction
The black box construction and lack of robustness of deep neural networks (DNNs) are major obstacles to their real-world deployment in safety-critical applications like autonomous driving (Bojarski et al., 2016) or medical diagnosis (Amato et al., 2013). To mitigate the lack of trust caused by black-box behaviors, there has been a large amount of work on interpreting individual DNN predictions to gain insights into their internal workings. Orthogonally, the field of DNN verification has emerged to formally prove or disprove the robustness of neural networks in a particular region capturing an infinite set of inputs. Verification can be leveraged during training for constructing more robust models.
We argue that while these methods do improve trust to a certain degree, the insights and guarantees derived from their independent applications are not enough to build sufficient confidence for enabling the reliable real-world deployment of DNNs. Existing DNN interpretation methods (Sundararajan et al., 2017) explain the model behavior on individual inputs, but they often do not provide human-understandable insights into the workings of the model on an infinite set of inputs handled by verifiers. Similarly, the DNN verifiers (Singh et al., 2019c;Zhang et al., 2018) can generate formal proofs capturing complex invariants sufficient to prove network robustness, but it is not clear whether these proofs are based on any meaningful input features learned by the DNN that are necessary for correct classification. This is in contrast to standard program verification tasks, where proofs capture the semantics of the program and property. In this work, to improve trust, we propose for the first time the problem of interpreting the invariants captured by DNN robustness proofs.

(Affiliation footnote: 1 University of Illinois Urbana-Champaign, 2 VMware Research. Correspondence to: Debangshu Banerjee <[email protected]>.)
Key Challenges. The proofs generated by state-of-the-art DNN verifiers encode high-dimensional complex convex shapes defined over thousands of neurons in the DNN. It is not exactly clear how to map these shapes to human understandable interpretations. Further, certain parts of the proof may be more important for it to hold than the rest. Thus we need to define a notion of importance for different parts of the proof and develop methods to identify them.
Our Contributions. We make the following contributions to overcome these challenges and develop a new method for interpreting DNN robustness proofs.
• We introduce a novel concept of proof features that can be analyzed independently by generating the corresponding interpretations. A priority function is then associated with the proof features that signifies their importance in the complete proof.
• We design a general algorithm called SuPFEx (Sufficient Proof Feature Extraction) that extracts a set of proof features that retain only the more important parts of the proof while still proving the property.
• We compare interpretations of the proof features for standard DNNs and state-of-the-art robustly trained DNNs for the MNIST and CIFAR10 datasets. We observe that the proof features corresponding to the standard networks rely on spurious input features while the proofs of adversarially trained DNNs (Madry et al., 2018) filter out some of the spurious features. In contrast, the networks trained with certifiable training produce proofs that do not rely on any spurious features but they also miss out on some meaningful features. Proofs for training methods that combine both empirical robustness and certified robustness (Balunovic & Vechev, 2020) provide a common ground. They not only rely on human interpretable features but also selectively filter out the spurious ones. We also empirically show that these observations are not contingent on any specific DNN verifier.
Related Work
We discuss prior works related to ours.

DNN interpretability. There has been an extensive effort to develop interpretability tools for investigating the internal workings of DNNs. These include feature attribution techniques like saliency maps (Sundararajan et al., 2017;Smilkov et al., 2017), using surrogate models to interpret local decision boundaries (Ribeiro et al., 2016), finding influential (Koh & Liang, 2017), prototypical (Kim et al., 2016), or counterfactual inputs (Goyal et al., 2019), training sparse decision layers (Wong et al., 2021), and utilizing robustness analysis (Hsieh et al., 2021). Most of these interpretability tools focus on generating local explanations that investigate how DNNs work on individual inputs. Another line of work, rather than explaining individual inputs, tries to identify specific concepts associated with a particular neuron (Simonyan et al., 2014;Bau et al., 2020). However, to the best of our knowledge, there is no existing work that allows us to interpret DNN robustness proofs.

DNN verification. Unlike DNN interpretability methods, prior works in DNN verification focus on formally proving whether the given DNN satisfies desirable properties like robustness (Singh et al., 2019c;Wang et al., 2021b), fairness (Mazzucato & Urban, 2021), etc. The DNN verifiers are broadly categorized into three main categories: (i) sound but incomplete verifiers which may not always prove the property even if it holds (Singh et al., 2018;2019b;a;Zhang et al., 2018;Xu et al., 2020;Salman et al., 2019), (ii) complete verifiers that can always prove the property if it holds (Wang et al., 2018;Gehr et al., 2018;Bunel et al., 2020a;Bak et al., 2020;Ehlers, 2017;Ferrari et al., 2022;Fromherz et al., 2021;Wang et al., 2021a;Palma et al., 2021;Anderson et al., 2020;Zhang et al., 2022), and (iii) verifiers with probabilistic guarantees (Cohen et al., 2019).

Robustness and interpretability. Existing works (Madry et al., 2018;Balunovic & Vechev, 2020;Zhang et al., 2020) in developing robustness training methods for neural networks provide a framework to produce networks that are inherently immune to adversarial perturbations in the input. Recent works (Tsipras et al., 2019;Zhang et al., 2019) also show that there may be a robustness-accuracy tradeoff that prevents highly robust models from achieving high accuracy. Further, in (Tsipras et al., 2019) the authors show that networks trained with adversarial training methods learn fundamentally different input feature representations than standard networks, where the adversarially trained networks capture more human-aligned data characteristics.
Preliminaries
In this section, we provide the necessary background on DNN verification and existing works on traditional DNN interpretability with sparse decision layers. While our method is applicable to general architectures, for simplicity we focus on an l-layer feedforward DNN N : R^{d_0} → R^{d_l} for the rest of this paper. Each layer i except the final one applies the transformation X_i = σ_i(W_i · X_{i−1} + B_i), where W_i ∈ R^{d_i × d_{i−1}} and B_i ∈ R^{d_i} are respectively the weights and biases of the affine transformation, and σ_i is a non-linear activation like ReLU, Sigmoid, etc. corresponding to layer i. The final layer only applies the affine transformation, and the network output is a vector Y = W_l · X_{l−1} + B_l.

DNN verification. At a high level, DNN verification involves proving that the network outputs Y = N(X) corresponding to all inputs X from an input region specified by φ satisfy a logical specification ψ. A common property is local robustness, where the output specification ψ is defined as a linear inequality over the elements of the output vector of the neural network. The output specification in this case is written as ψ(Y) = (C^T Y ≥ 0), where C ∈ R^{d_l} defines the linear inequality encoding the robustness property. For the rest of the paper, we refer to the input region φ and output specification ψ together as the property (φ, ψ). Next, we briefly discuss how DNN robustness verifiers work. A DNN verifier V symbolically computes a possibly over-approximated output region A ⊆ R^{d_l} containing all possible outputs of N corresponding to φ. Let Λ(A) = min_{Y ∈ A} C^T Y denote the minimum value of C^T Y over Y ∈ A. Then N satisfies the property (φ, ψ) if Λ(A) ≥ 0.
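For instance, when the verifier abstracts the output region A with per-dimension intervals (as interval-based verifiers such as IBP do), a sound lower bound on Λ(A) can be computed in closed form; a minimal sketch (the function name is ours):

```python
import numpy as np

def lambda_lower_bound(c, lb, ub):
    """Sound lower bound on min_{Y in A} c^T Y for the box A = [lb, ub]:
    each term c_j * y_j is minimized independently over [lb_j, ub_j]."""
    return np.sum(np.minimum(c * lb, c * ub))

# The property C^T Y >= 0 is verified if lambda_lower_bound(C, lb, ub) >= 0.
```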
Most existing DNN verifiers (Singh et al., 2018;2019b;Zhang et al., 2018) are exact for affine transformations. However, for non-linear activation functions, these verifiers compute convex regions that over-approximate the output of the activation function. Note that, due to the over-approximations, DNN verifiers are sound but not complete: the verifier may not always prove a property even if it holds. For piecewise linear activation functions like ReLU, complete verifiers exist that handle the activation exactly and in theory always prove a property if it holds. Nevertheless, complete verification in the worst case takes exponential time, making it practically infeasible. In the rest of the paper, we focus on deterministic, sound, and incomplete verifiers, which are more scalable than complete verifiers.

DNN interpretation with sparse decision layer. DNNs considered in this paper have complex multi-layer structures, making them harder to interpret. Instead of interpreting what each layer of the network is doing, recent works (Wong et al., 2021;Liao & Cheung, 2022) treat DNNs as the composition of a deep feature extractor and an affine decision layer. The output of each neuron of the penultimate layer represents a single deep feature, and the final affine layer linearly combines these deep features to compute the network output. This perspective enables us to identify the set of features used by the network to compute its output and to investigate their semantic meaning using existing feature visualization techniques (Ribeiro et al., 2016;Simonyan et al., 2014). However, visualizing each feature is practically infeasible for large DNNs where the penultimate layer can contain hundreds of neurons. To address this, the work of (Wong et al., 2021) tries to identify a smaller set of features that are sufficient to maintain the performance of the network. This smaller but sufficient feature set retains only the most important features corresponding to a given input. It is shown empirically (Wong et al., 2021) that a subset of these features of size less than 10 is sufficient to maintain the accuracy of state-of-the-art models.
Interpreting DNN Proofs
Next, we describe our approach for interpreting DNN robustness proofs.
Proof features. Similar to the traditional DNN interpretation described above, for proof interpretation we propose to segregate the final decision layer from the network and look at the features extracted at the penultimate layer. However, DNN verifiers work on an input region (φ) consisting of infinitely many inputs instead of a single input as handled by existing work. In this case, for a given input region φ, we look at the symbolic shape (for example, intervals, zonotopes, polytopes, etc.) computed by the verifier at the penultimate layer and then compute its projection on each dimension of the penultimate layer. These projections yield an interval [l_n, u_n] which contains all possible output values of the corresponding neuron n with respect to φ.

Definition 1 (Proof features). Given a network N, input region φ and neural network verifier V, for each neuron n_i at the penultimate layer of N, the proof feature F_{n_i} extracted at neuron n_i is an interval [l_{n_i}, u_{n_i}] such that for all X ∈ φ, the output of n_i always lies in the range [l_{n_i}, u_{n_i}].

Note that the computation of the proof features is verifier-dependent, i.e., for the same network and input region, different verifiers may compute different values l_n and u_n for a particular neuron n. For any input region φ, the first (l − 1) layers of N along with the verifier V act as the proof feature extractor. For the rest of this paper, we use F to denote the set of all proof features at the penultimate layer and F_S to denote the proof features corresponding to S ⊆ [d_{l−1}]:

F_S = {F_{n_i} | i ∈ S}
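A sketch of proof-feature extraction is shown below; penultimate_bounds stands in for whatever bound-propagation routine the chosen verifier exposes, so its exact API is an assumption:

```python
from dataclasses import dataclass

@dataclass
class ProofFeature:
    lower: float  # l_{n_i}
    upper: float  # u_{n_i}

def extract_proof_features(verifier, network, phi):
    """Project the verifier's symbolic shape at the penultimate layer
    onto each neuron, yielding one interval per proof feature."""
    lbs, ubs = verifier.penultimate_bounds(network, phi)  # assumed API
    return [ProofFeature(l, u) for l, u in zip(lbs, ubs)]
```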
Suppose N is formally verified by the verifier V to satisfy the property (φ, ψ). Then in order to gain insights about the proof generated by V, we can directly investigate (described in section 4.3) the extracted proof features F. However, the number of proof features for contemporary networks can be very large (in hundreds). Many of these features may be spurious and not important for the proof. Similar to how network interpretations are generated when classifying individual inputs, we want to identify a smaller set of proof features that are more important for the proof of the property (φ, ψ). The key challenge here is defining the most important set of proof features w.r.t the property (φ, ψ).
Sufficient Proof Features
We argue that a minimum set of proof features F_{S0} ⊆ F that can prove the property (φ, ψ) with verifier V contains an important set of proof features w.r.t. (φ, ψ). The minimality of F_{S0} enforces that it can only retain the proof features that are essential to prove the property. Otherwise, it would be possible to construct a smaller set of proof features that preserves the property, violating the minimality of F_{S0}. Leveraging this hypothesis, we can model extracting a set of important proof features as computing a minimum proof feature set capable of preserving the property (φ, ψ) with V. To identify a minimum proof feature set, we introduce the novel concepts of proof feature pruning and sufficient proof features below.

Definition 2 (Proof feature pruning). Pruning any proof feature F_{n_i} ∈ F corresponding to neuron n_i in the penultimate layer involves setting the weights of all its outgoing connections to 0, so that given any input X ∈ φ the final output of N no longer depends on the output of n_i.

Once a proof feature F_{n_i} is pruned, the verifier V no longer uses F_{n_i} to prove the property (φ, ψ).

Definition 3 (Sufficient proof features). For the proof of property (φ, ψ) on DNN N with verifier V, a nonempty set F_S ⊆ F of proof features is sufficient if the property still holds with verifier V even if all the proof features not in F_S are pruned.

Definition 4 (Minimum proof features). A minimum proof feature set F_{S0} ⊆ F for a network N verified with V on (φ, ψ) is a sufficient proof feature set containing the minimum number of proof features.
Extracting a minimum set of proof features F_{S0} from F is equivalent to pruning the maximum number of proof features from F without violating the property (φ, ψ). Let W_l[:, i] ∈ R^{d_l} denote the i-th column of the weight matrix W_l of the final layer N_l. Pruning any proof feature F_{n_i} results in setting all weights in W_l[:, i] to 0. Therefore, to compute F_{S0}, it is sufficient to devise an algorithm that can prune the maximum number of columns from W_l while still preserving the property (φ, ψ). For any proof feature set F_S ⊆ F, let W_l(S) ∈ R^{d_l × d_{l−1}} be the weight matrix of the pruned final layer that only retains the proof features corresponding to F_S. The columns of W_l(S) are then defined as follows, where 0 ∈ R^{d_l} denotes the constant all-zero vector:

W_l(S)[:, i] = W_l[:, i] if i ∈ S, and 0 otherwise.   (1)
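In matrix terms, Eqn. 1 is simple column masking; a minimal numpy sketch (names ours):

```python
import numpy as np

def prune_final_layer(W_l, S):
    """Zero out the columns of the decision layer whose proof features
    are pruned, keeping only those indexed by S."""
    W_pruned = np.zeros_like(W_l)
    kept = sorted(S)
    W_pruned[:, kept] = W_l[:, kept]
    return W_pruned
```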
The proof feature set F_S is sufficient iff the property (φ, ψ) can be verified by V on N with the pruned weight matrix W_l(S). As described in Section 3, for property verification the verifier V computes an over-approximated output region A of N over the input region φ. Given that we never change the input region φ or the proof feature extractor composed of the first l − 1 layers of N and the verifier V, the output region A only depends on the pruning done at the final layer. Now let A(W_l, S) denote the over-approximated output region corresponding to W_l(S). The neural network N can be verified by V on the property (φ, ψ) with W_l(S) iff the lower bound Λ(A(W_l, S)) ≥ 0. Therefore, finding S_0 corresponding to a minimum proof feature set F_{S0} can be formulated as below, where for any S ⊆ [d_{l−1}], |S| denotes the number of elements in S:
argmin_{S ≠ ∅, S ⊆ [d_{l−1}]} |S|   s.t.   Λ(A(W_l, S)) ≥ 0   (2)
Approximate Minimum Proof Feature Extraction
The search space for finding F_{S0} is prohibitively large and contains 2^{d_{l−1}} possible candidates. So, computing a minimum solution with an exhaustive search is infeasible. Even checking the sufficiency of any arbitrary proof feature set F_S (Definition 3) is not trivial and requires expensive verifier invocations. We note that even making O(d_{l−1}) verifier calls is too expensive for the network sizes considered in our evaluation. Given the large DNN size, exponential search space, and high verifier cost, efficiently computing a minimum sufficient proof feature set is computationally intractable. We design a practically efficient approximation algorithm based on a greedy heuristic that can generate a smaller (may not always be minimum) sufficient feature set with only O(log(d_{l−1})) verifier calls. At a high level, for each proof feature F_{n_i} contained in a sufficient feature set, the heuristic tries to estimate whether pruning F_{n_i} violates the property (φ, ψ) or not. Subsequently, we prioritize pruning of those proof features F_{n_i} that, if pruned, will likely preserve the proof of the property (φ, ψ) with the verifier V.
For any proof feature F_{n_i} ∈ F_S, where F_S is sufficient and proves the property (φ, ψ), we estimate the change ∆(F_{n_i}, F_S) that occurs to Λ(A(W_l, S)) if F_{n_i} is pruned from F_S. Let the over-approximated output region computed by verifier V corresponding to F_S \ {F_{n_i}} be A(W_l, S \ {i}); then ∆(F_{n_i}, F_S) is defined as:

∆(F_{n_i}, F_S) = |Λ(A(W_l, S)) − Λ(A(W_l, S \ {i}))|
Intuitively, proof features F_{n_i} with higher values of ∆(F_{n_i}, F_S) for some sufficient feature set F_S ⊆ F are responsible for large changes to Λ(A(W_l, S)) and are likely to break the proof if pruned. Note, ∆(F_{n_i}, F_S) depends on the particular sufficient proof set F_S and does not estimate the global importance of F_{n_i} independent of the choice of F_S. To mitigate this issue, while defining the priority P(F_{n_i}) of a proof feature F_{n_i}, we take the maximum of ∆(F_{n_i}, F_S) across all sufficient feature sets F_S containing F_{n_i}. Let S(F_{n_i}) denote the set of all sufficient F_S containing F_{n_i}. Then P(F_{n_i}) can be formally defined as:

P(F_{n_i}) = max_{F_S ∈ S(F_{n_i})} ∆(F_{n_i}, F_S)   (3)
Given that the set S(F_{n_i}) can be exponentially large, finding the maximum value of ∆(F_{n_i}, F_S) over S(F_{n_i}) is practically infeasible. Instead, we compute a reasonably tight upper bound P_ub(F_{n_i}) on P(F_{n_i}) by estimating a global upper bound on ∆(F_{n_i}, F_S) that holds for all F_S ∈ S(F_{n_i}). The proposed upper bound is independent of the choice of F_S ∈ S(F_{n_i}) and therefore removes the need to iterate over S(F_{n_i}), enabling efficient computation. For the network N and input region φ, let A_{l−1} denote the over-approximate symbolic region computed by V at the penultimate layer. Then, for all F_S ∈ S(F_{n_i}), the global upper bound of ∆(F_{n_i}, F_S) can be computed as follows, where for any vector X ∈ R^{d_{l−1}}, x_i denotes its i-th coordinate:

∆(F_{n_i}, F_S) ≤ max_{X ∈ A_{l−1}} |C^T W_l(S) X − C^T W_l(S \ {i}) X| = max_{X ∈ A_{l−1}} |(C^T W_l[:, i]) · x_i|

P(F_{n_i}) ≤ max_{X ∈ A_{l−1}} |(C^T W_l[:, i]) · x_i|

Now, any proof feature F_{n_i} = [l_{n_i}, u_{n_i}] computed by V contains all possible values of x_i where X ∈ A_{l−1}. Leveraging this observation, we can further simplify the upper bound P_ub(F_{n_i}) of P(F_{n_i}) as shown below:

P(F_{n_i}) ≤ max_{x_i ∈ [l_{n_i}, u_{n_i}]} |C^T W_l[:, i]| · |x_i|

P_ub(F_{n_i}) = |C^T W_l[:, i]| · max(|l_{n_i}|, |u_{n_i}|)   (4)
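Eqn. 4 makes the priorities cheap to compute once the intervals are known; a numpy sketch over all features at once (names ours):

```python
import numpy as np

def feature_priorities(C, W_l, lbs, ubs):
    """P_ub for every feature i: |C^T W_l[:, i]| * max(|l_i|, |u_i|)."""
    coeff = np.abs(C @ W_l)  # |C^T W_l[:, i]| for each column i
    return coeff * np.maximum(np.abs(lbs), np.abs(ubs))
```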
This simplification ensures that P_ub(F_{n_i}) for all F_{n_i} can be computed with O(d_{l−1}) elementary vector operations and a single verifier call that computes the intervals [l_{n_i}, u_{n_i}]. Next, we describe how we compute an approximate feature set using the feature priorities P_ub(F_{n_i}). For any feature F_{n_i}, P_ub(F_{n_i}) estimates the importance of F_{n_i} in preserving the proof. So, a trivial step is to just prune all the proof features from F whose P_ub is 0. These features do not have any contribution to the proof of the property (φ, ψ) by the verifier V. This step forms a trivial algorithm. However, this is not enough: we can further prune some more proof features, leading to a yet smaller set. For this, we propose an iterative algorithm, SuPFEx, shown in Algorithm 1, which maintains two sets, F^{(A)}_{S0} and F^{(A)}_S, initialized as {} (empty set) and F respectively. A naive approach would iteratively prune the single feature F_{n_i} with the lowest value of P_ub(F_{n_i}) from F^{(A)}_S; however, removing one feature per iteration and checking the sufficiency of the remaining features in the worst case leads to O(d_{l−1}) verifier calls, which is infeasible. Instead, at each step, from F^{(A)}_S our algorithm greedily picks the top-|S|/2 features F^{(A)}_{S1} based on their priority and invokes the verifier V to check the sufficiency of F^{(A)}_{S0} ∪ F^{(A)}_{S1}. The algorithm terminates after all features in F^{(A)}_S are exhausted. Since at every step the algorithm halves the size of F^{(A)}_S, it always terminates within O(log(d_{l−1})) verifier calls.

Limitations. We note that the scalability of our method is ultimately limited by the scalability of the existing verifiers. Therefore, SuPFEx currently cannot handle networks for larger datasets like ImageNet. Nonetheless, SuPFEx is general and compatible with any verification algorithm. Therefore, SuPFEx will benefit from any future advances that enable neural network verifiers to scale to larger datasets.
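A compact sketch of the greedy halving loop described above (check_sufficient would wrap one verifier call on the pruned decision layer; tie-breaking and other details of Algorithm 1 are omitted):

```python
def supfex(features, priorities, check_sufficient):
    """Greedy approximation of a minimum sufficient proof-feature set.

    features:         indices of proof features with nonzero priority
    priorities:       dict mapping index -> P_ub
    check_sufficient: set(indices) -> bool, one verifier call per query
    """
    kept = set()  # F^(A)_{S0}: features confirmed necessary so far
    remaining = sorted(features, key=lambda i: -priorities[i])  # F^(A)_S
    while remaining:
        if len(remaining) == 1:
            # Base case: keep the last feature unless it is already redundant.
            if not check_sufficient(kept):
                kept |= set(remaining)
            break
        half = len(remaining) // 2
        top, bottom = remaining[:half], remaining[half:]
        if check_sufficient(kept | set(top)):
            remaining = top       # the lower-priority half is pruned
        else:
            kept |= set(top)      # the top half is needed in the proof
            remaining = bottom
    return kept
```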
Next, we derive mathematical guarantees about the correctness and efficacy of Algorithm 1. For correctness, we prove that the feature set F^{(A)}_{S0} computed by SuPFEx is always sufficient. This follows from the fact that the SuPFEx algorithm ensures at each step that F^{(A)}_{S0} ∪ F^{(A)}_S remains sufficient. For efficacy, let Z(F) denote the set of the proof features F_{n_i} ∈ F with P_ub(F_{n_i}) = 0. Note, any proof feature F_{n_i} ∈ Z(F) can be trivially removed without breaking the proof. Further, we show that some additional proof features will be filtered out from the original proof feature set. So, the size of the proof feature set F^{(A)}_{S0} extracted by SuPFEx is guaranteed to be less than the value computed in Theorem 2.
Theorem 2. Let P_max denote the maximum of all priorities P_ub(F_{n_i}) over F. Given any network N verified on (φ, ψ) with verifier V,

|F^{(A)}_{S0}| ≤ d_{l−1} − |Z(F)| − ⌊Λ(A)/P_max⌋
The exact proof for Theorem 2 can be found in Appendix A.
Interpreting proof features
For interpreting proofs of DNN robustness, we now develop methods to analyze the semantic meaning of the extracted proof features. There exists a plethora of works that compute local DNN explanations (Sundararajan et al., 2017;Smilkov et al., 2017). However, these techniques are insufficient to generate an explanation w.r.t. an input region. To mitigate this, we adapt the existing local visualization techniques for visualizing the extracted proof features. Given a proof feature F_{n_i}, we intend to compute G(F_{n_i}, φ) = E_{X∼φ}[G(n_i, X)], which is the mean gradient of the output of n_i w.r.t. the inputs from φ. For each input dimension (pixel in the case of images) j ∈ [d_0], the j-th component of G(F_{n_i}, φ) estimates its relevance w.r.t. the proof feature F_{n_i}: the higher the gradient value, the higher its relevance. Considering that the input region φ contains infinitely many inputs, exactly computing G(F_{n_i}, φ) is impossible. Rather, we statistically estimate G(F_{n_i}, φ) with a reasonably large sample X_S drawn uniformly from φ.
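A PyTorch sketch of this Monte Carlo estimate for an L∞ region is given below (the feature-extractor handle is ours; the default of 100 samples follows the description in Section 5.3):

```python
import torch

def mean_gradient_map(feature_extractor, x_center, eps, i, n_samples=100):
    """Estimate G(F_{n_i}, phi) = E_{X ~ phi}[dG(n_i, X)/dX] for the
    region phi = [x_center - eps, x_center + eps] by uniform sampling."""
    grads = torch.zeros_like(x_center)
    for _ in range(n_samples):
        x = x_center + eps * (2 * torch.rand_like(x_center) - 1)
        x.requires_grad_(True)
        out = feature_extractor(x.unsqueeze(0))[0, i]  # output of neuron n_i
        out.backward()
        grads += x.grad
    return grads / n_samples
```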
Experimental Evaluation
Experimental setup
For evaluation we use convolutional networks trained on two popular datasets, MNIST (LeCun et al., 1989) and CIFAR-10 (Krizhevsky, 2009), shown in Table 1. The networks are trained with standard training and three state-of-the-art robust training methodologies: adversarial training (PGD training) (Madry et al., 2018), certified robust training (CROWN-IBP), and a combination of both adversarial and certified training (COLT) (Balunovic & Vechev, 2020). For our experiments, we use pre-trained publicly available networks: the standard and PGD-trained networks are taken from the ERAN project (Singh et al., 2019c), COLT-trained networks from the COLT website (Balunovic & Vechev, 2020), and CROWN-IBP trained networks from the CROWN-IBP repository. Similar to most of the existing works on neural network verification (Carlini & Wagner, 2017;Singh et al., 2019c), we use L∞-based local robustness properties. Here, the input region φ contains all images obtained by perturbing the intensity of each pixel in the input image independently within a bound ε ∈ R. ψ specifies a region where the network output for the correct class is higher than all other classes. We use ε_train = 0.3 for all robustly trained MNIST networks and ε_train = 8/255 for all robustly trained CIFAR-10 networks. Unless specified otherwise, the proofs are generated by running the popular DeepZ (Singh et al., 2019c) verifier.
We perform all our experiments on a 16-core 12th-gen i7 machine with 16 GB of RAM.
Efficacy of SuPFEx Algorithm
In this section, we experimentally evaluate the efficacy of SuPFEx based on the size of the output sufficient feature sets. Given that there is no existing work for pruning proof feature sets, we use the upper bound computed in Theorem 2 as the baseline. Note that this bound is better than the size of the proof feature set extracted by the trivial algorithm, one that removes only "zero" features, i.e., the proof features [l, u] where both l = u = 0 (Definition 5). For each network, we use 500 randomly picked images from their corresponding test sets. The ε used for MNIST networks is 0.02 and that for CIFAR-10 networks is 0.2/255. We note that although the robustly trained networks can be verified robust for higher values of ε, it is not possible to verify standard networks with such high values. To achieve common ground, we use small ε values for experiments involving standard networks and conduct separate experiments on only robustly trained networks with higher values of ε (0.1 for MNIST, 2/255 for CIFAR-10 networks). As shown in Table 1, we do not observe any significant change in the performance of SuPFEx w.r.t. different ε-values.
In Table 1, we show the value of ε used to define the region φ in column 3, and the total number of properties proved out of 500 in column 4. The size of the original proof feature set corresponding to each network is shown in column 5, the mean and median of the proof feature set size computed using Theorem 2 in columns 6 and 7 respectively, and the mean and median of the proof feature set size computed using SuPFEx in columns 8 and 9 respectively. We note that the feature sets obtained by SuPFEx are significantly smaller than the upper bound provided by Theorem 2. For example, in the case of the PGD-trained MNIST network with 1000 neurons in the penultimate layer, the average size computed from Theorem 2 is 218.02, while that obtained using SuPFEx is only 5.57. In the last two columns of Table 1, we summarise the percentage of cases where we are able to achieve a proof feature set of size less than or equal to 5 and 10 respectively. Figures 1a and 1b display a histogram where the x-axis is the size of the extracted proof feature set using SuPFEx and the y-axis is the number of local robustness properties for COLT-trained DNNs. Histograms for other DNNs are presented in Appendix B. These histograms are skewed towards the left, which means that for most of the local properties, we are able to generate a small set of proof features using SuPFEx.
Qualitative comparison of robustness proofs
It has been observed in (Tsipras et al., 2019) that standardly trained networks rely on some of the spurious features in the input in order to gain higher accuracy and, as a result, are not very robust against adversarial attacks. On the other hand, the empirically robustly trained networks rely more on human-understandable features and are, therefore, more robust against attacks. This empirical robustness comes at the cost of reduced accuracy. So, there is an inherent dissimilarity between the types of input features that the standard and adversarially trained networks rely on while classifying a single input. Also, certified robust trained networks are even more robust than the empirically trained ones; however, they report even less accuracy (Müller et al., 2021). In this section, we interpret proof features obtained with SuPFEx and use these interpretations to qualitatively check whether the dissimilarities are also evident in the invariants captured by the different proofs of the same robustness property on standard and robustly trained networks. We also study the effect of certified robust training methods like CROWN-IBP, empirically robust training methods like PGD (Madry et al., 2018), and training methods that combine both adversarial and certified training like COLT (Balunovic & Vechev, 2020) on the proof features. For a local input region φ, we say that a robustness proof is semantically meaningful if it focuses on the relevant features of the output class for images contained inside φ and not on the spurious features. In the case of MNIST or CIFAR-10 images, spurious features are the pixels that form a part of the background of the image, whereas important features are the pixels that are a part of the actual object being identified by the network. The gradient map of the extracted proof features w.r.t. the input region φ gives us an idea of the input pixels that the network focuses on. We obtain the gradient maps by calculating the mean gradient over 100 uniformly drawn samples from φ, as described in Section 4.3. As done in (Tsipras et al., 2019), to avoid introducing any inherent bias in proof feature visualization, no preprocessing (other than scaling and clipping for visualization) is applied to the gradients obtained for each individual sample. In Fig. 2, we compare the gradient maps corresponding to the top proof feature (the one having the highest priority P_ub(F_{n_i})) on networks from Table 1 on representative images of different output classes in the MNIST and CIFAR-10 test sets. The experiments lead us to interesting observations: even if some property is verified for both the standard network and the robustly trained network, there is a difference in the human interpretability of the types of input features that the proofs rely on. The standard networks and the provably robust trained networks like CROWN-IBP are the two extremes of the spectrum. For the networks obtained with standard training, we observe that although the top proof feature does depend on some of the semantically meaningful regions of the input image, the gradient at several spurious features is also non-zero. On the other hand, the top proof feature corresponding to the state-of-the-art provably robust training method CROWN-IBP filters out most of the spurious features, but it also misses out on some meaningful features. The proofs of PGD-trained networks filter out the spurious features and are, therefore, more semantically aligned than the standard networks.
The proofs of the training methods that combine both empirical robustness and provable robustness, like COLT, in a way provide the best of both worlds by not only selectively filtering out the spurious features but also highlighting the more human interpretable features, unlike the certifiably trained networks. So, as the training methods tend to regularize more for robustness, their proofs become more conservative in relying on the input features. To further support our observation, we show additional plots for the top proof feature visualization in Appendix B.2 and visualization for multiple proof features in Appendix B.4. We also conduct experiments for different values of ε used for defining φ. The extracted proof feature sets w.r.t. high ε values (ε = 0.1 for MNIST and ε = 2/255 for CIFAR-10) are similar to those generated with smaller ε. The gradient maps corresponding to the top feature for higher ε values are also similar, as shown in Appendix B.3. For COLT-trained MNIST networks, in B.5 we compare the gradients of the top proof features retained by SuPFEx with those of the pruned proof features with low priority. As expected, gradients of the pruned proof features with low priority contain spurious input features.
Sensitivity analysis on training parameters
It is expected that DNNs trained with larger ε_train values are more robust. So, we analyze the sensitivity of the extracted proof features to ε_train. We use the DNNs trained with PGD and COLT and ε_train ∈ {0.1, 0.3} on the MNIST dataset. Fig 3 visualizes proof features for these DNNs; additional plots are available in Appendix B.6. We observe that by increasing the value of ε_train, the top proof feature filters out more input features. This is aligned with our observation in Section 5.3 that more robustly trained neural networks are more conservative in using the input features.
Comparing proofs of different verifiers
The proof features extracted by SuPFEx are specific to the proof generated by the verifier. In this experiment, we compare proof features generated by two popular verifiers IBP and DeepZ on networks shown in Table 1 for the same properties as before. Note that, although IBP is computationally efficient, it is less precise than DeepZ. For standard DNNs, most of the properties cannot be proved by IBP. Hence, in this experiment, we omit standard DNNs and also, consider only the properties that can be verified by both DeepZ and IBP.
Conclusion
In this work, we develop a novel method called SuPFEx to interpret neural network robustness proofs. We empirically establish that even if a property holds for a DNN, the proof for the property may rely on spurious or semantically meaningful features depending on the training method used to train the DNNs. We believe that SuPFEx can be applied for diagnosing the trustworthiness of DNNs inside their development pipeline.
So, $F^{(A)'}_{S_0} = F^{(A)}_{S_0} \cup F^{(A)}_{S_1}$ and $F^{(A)'}_{S} = F^{(A)}_{S_2}$. So, $F^{(A)'}_{S_0} \cup F^{(A)'}_{S} = F^{(A)}_{S_0} \cup F^{(A)}_{S_1} \cup F^{(A)}_{S_2}$. Also, $F^{(A)}_{S_1} \cup F^{(A)}_{S_2} = F^{(A)}_{S}$. So, $F^{(A)'}_{S_0} \cup F^{(A)'}_{S} = F^{(A)}_{S_0} \cup F^{(A)}_{S}$. So, from the induction hypothesis, $F^{(A)'}_{S_0} \cup F^{(A)'}_{S}$ is sufficient.

Lemma 1. $\forall F_S \subseteq F$, $\delta(F_S) \le \sum_{F_{n_i} \in F \setminus F_S} P_{ub}(F_{n_i})$, where $P_{ub}(F_{n_i})$ is defined in (4).

Proof.

$$\delta(F_S) = |\Lambda(A) - \Lambda(A(W^l, S))| = \max_{X \in A^{l-1}} \Big| \sum_{F_{n_i} \in F \setminus F_S} C^T W[:i] X \Big| \le \max_{X \in A^{l-1}} \sum_{F_{n_i} \in F \setminus F_S} |C^T W[:i] X| \le \sum_{F_{n_i} \in F \setminus F_S} \max_{X \in A^{l-1}} |C^T W[:i] X| = \sum_{F_{n_i} \in F \setminus F_S} P_{ub}(F_{n_i}) \quad \text{[from (4)]}$$

Lemma 2. A feature set $F_S \subseteq F$ with $\delta(F_S) \le \Lambda(A)$ is sufficient provided $\Lambda(A) \ge 0$.
Proof. $\delta(F_S) = |\Lambda(A) - \Lambda(A(W^l, S))|$. So, there can be two cases:
Proof. $\forall F_{n_i} \in F$, $P_{ub}(F_{n_i}) \le P_{max}$. From Lemma 1, $\delta(F^c_S) \le |F_S| \times P_{max}$. Also, $|F_S| \le \frac{\Lambda(A)}{P_{max}}$. So, $\delta(F^c_S) \le \Lambda(A)$. From Lemma 2, $F^c_S$ is sufficient.
Theorem 2: Given any network N verified on (φ, ψ) with verifier V, then $|F^{(A)}_{S_0}| \le d_{l-1} - |Z(F)| - \frac{\Lambda(A)}{P_{max}}$.
Proof. Algorithm 1 arranges the elements of the proof feature set F in decreasing order according to the priority defined by $P_{ub}$. Let $\overline{F}$ be the ordered set corresponding to F. So, $\overline{F} = F_{n_1} :: \cdots :: F_{n_m}$, where :: is list concatenation. The elements of Z(F) will be at the end of this ordering. So, $\overline{F}$ can be written as $\overline{F}' :: Z(F)$ where $Z(F) = F_{n_{k+1}} :: \cdots :: F_{n_m}$ and $\overline{F}' = F_{n_1} :: \cdots :: F_{n_k}$, and let p be some of the last elements of $\overline{F}'$ s.t. the sum of their priorities is just less than $\frac{\Lambda(A)}{P_{max}}$, i.e., $p = F_{n_j} :: \cdots :: F_{n_k}$ with

$$\sum_{i=j}^{k} P_{ub}(F_{n_i}) \le \frac{\Lambda(A)}{P_{max}}, \qquad \sum_{i=j-1}^{k} P_{ub}(F_{n_i}) \ge \frac{\Lambda(A)}{P_{max}}.$$

Further, let $p' = p :: Z(F)$, i.e., $p' = F_{n_j} :: \cdots :: F_{n_m}$. Since $P_{ub}$ is 0 for all elements of Z(F),
B.7. Comparing proofs of different verifiers
We note that for high values of ε, i.e., ε = 0.1 for MNIST and ε = 2/255 for CIFAR-10, most of the properties don't get verified by IBP even for robustly trained networks. Hence, we omitted them from this analysis.
A naive algorithm iteratively prunes the feature $F_{n_i}$ with the lowest value of $P_{ub}(F_{n_i})$ from F, with $F^{(A)}_{S_0}$ and $F^{(A)}_{S}$ initialized as {} (the empty set) and F respectively. Removing a single feature in each iteration and checking the sufficiency of the remaining features in the worst case leads to $O(d_{l-1})$ verifier calls, which is infeasible. Instead, at each step, from $F^{(A)}_{S}$ our algorithm greedily picks the top-$|S|/2$ features $F^{(A)}_{S_1}$ (line 10) based on their priority and invokes the verifier V to check their sufficiency (line 17). The algorithm terminates after all features in $F^{(A)}_{S}$ are exhausted. Since at every step the algorithm reduces the size of $F^{(A)}_{S}$ by half, it always terminates within $O(\log(d_{l-1}))$ verifier calls.
The computed feature set $F^{(A)}_{S_0}$ is always sufficient (Definition 3). For efficacy, we theoretically find a non-trivial upper bound on the size of $F^{(A)}_{S_0}$.

Theorem 1. If the verifier V can prove the property (φ, ψ) on the network N, then $F^{(A)}_{S_0}$ computed by Algorithm 1 is sufficient. Hence, at termination the feature set $F^{(A)}_{S_0}$ is sufficient. The complete proof of Theorem 1 is in Appendix A. Next, we find a non-trivial upper bound on the size of $F^{(A)}_{S_0}$ computed by the algorithm.

Definition 5. For F, the zero proof features set Z(F) denotes the proof features $F_{n_i} \in F$ with $P_{ub}(F_{n_i}) = 0$.

[Algorithm 1 (Approx. minimum proof feature computation): Input, network N, property (φ, ψ), verifier V; Output, approximate minimal proof features $F^{(A)}_{S_0}$. The algorithm extracts all proof features for the input region φ, calculates the priority $P_{ub}(F_{n_i})$ of all proof features $F_{n_i}$ at initialization, and iteratively selects the top-$|S|/2$ features based on $P_{ub}(F_{n_i})$.]
Figure 1. Distribution of the size of the proof feature set computed by the SuPFEx algorithm on COLT-trained networks.

Figure 2. (a) Gradient maps generated on MNIST networks. (b) Gradient maps generated on CIFAR-10 networks. The top proof feature corresponding to DNNs trained using different methods relies on different input features.

Figure 3. Visualization of gradients of the top proof feature for PGD and COLT networks trained using different values of $\epsilon_{train}$.
Proof. By induction on the number of steps of the while loop. Induction Hypothesis: at each step of the loop, $F^{(A)}_{S_0} \cup F^{(A)}_{S}$ is sufficient. Base Case: initially $F^{(A)}_{S_0} \cup F^{(A)}_{S} = F$. Given that V proves the property (φ, ψ) on N, from Definition 3, F is sufficient. Induction Case: let $F^{(A)}_{S_0} \cup F^{(A)}_{S}$ be sufficient at the n-th step of the loop. Consider the following cases for the (n+1)-th step of the loop. Case 1: let $F^{(A)}_{S_0} \cup F^{(A)}_{S_1}$ be sufficient at line 12. In this case, $F^{(A)}_{S}$ is updated by $F^{(A)}_{S_1}$
Thus, $|p'| = |Z(F)| + \frac{\Lambda(A)}{P_{max}}$. Now, we prove by induction on the number of steps of the while loop in Algorithm 1 that the set $F^{(A)}_{S_0}$ never contains any elements from $p'$. Induction Hypothesis: $F^{(A)}_{S_0} \cap p' = \{\}$. Base Case: at initialization, $F^{(A)}_{S_0} = \{\}$, so the induction hypothesis holds trivially. Induction Step: let the induction hypothesis be true for the n-th step of Algorithm 1. For the (n+1)-th step, let the new $F^{(A)}_{S_1}$ be sufficient at line 12. In this case, $F^{(A)'}_{S_0} = F^{(A)}_{S_0}$. So, the induction hypothesis holds.

Table 3. Distribution of the size of the proof feature set for standardly trained networks on MNIST.

Table 4. Distribution of the size of the proof feature set for standardly trained networks on CIFAR-10.

Figure 9. Additional plots for visualizing gradients of the top proof feature for PGD and COLT networks trained using different values of $\epsilon_{train} \in \{0.1, 0.3\}$. The gradient maps corresponding to the networks trained with the higher value of $\epsilon_{train}$ filter out more input features than the ones trained with the smaller $\epsilon_{train}$ value.
Table 1. SuPFEx Efficacy Analysis

| Dataset | Training Method | Input Region (φ) eps (ε) | No. of proved properties | Original Feature Count | Proof Feature Count (Baseline) Mean | Median | Proof Feature Count (SuPFEx) Mean | Median | No. of proofs with ≤ 5 proof features (SuPFEx) | No. of proofs with ≤ 10 proof features (SuPFEx) |
|---|---|---|---|---|---|---|---|---|---|---|
| MNIST | Standard | 0.02 | 297 | 256 | 23.19 | 19 | 2.23 | 2 | 291 | 297 |
| MNIST | PGD Trained | 0.02 | 410 | 1000 | 218.02 | 218 | 5.57 | 3 | 317 | 365 |
| MNIST | COLT | 0.02 | 447 | 250 | 44.43 | 45 | 7.37 | 6 | 217 | 351 |
| MNIST | CROWN-IBP | 0.02 | 482 | 128 | 42.38 | 43 | 5.84 | 4 | 331 | 400 |
| MNIST | PGD Trained | 0.1 | 163 | 1000 | 279.31 | 278 | 5.29 | 3 | 131 | 149 |
| MNIST | COLT | 0.1 | 215 | 250 | 51.01 | 51 | 5.97 | 5 | 133 | 203 |
| MNIST | CROWN-IBP | 0.1 | 410 | 128 | 47.92 | 48 | 5.86 | 4 | 267 | 343 |
| CIFAR-10 | Standard | 0.2/255 | 255 | 100 | 52.93 | 53 | 10.38 | 7 | 120 | 164 |
| CIFAR-10 | PGD Trained | 0.2/255 | 235 | 100 | 54.29 | 54 | 8.04 | 3 | 155 | 177 |
| CIFAR-10 | COLT | 0.2/255 | 265 | 250 | 77.71 | 78 | 9.12 | 4 | 148 | 192 |
| CIFAR-10 | CROWN-IBP | 0.2/255 | 265 | 256 | 20.23 | 21 | 5.30 | 3 | 179 | 222 |
| CIFAR-10 | PGD Trained | 2/255 | 133 | 100 | 108 | 65 | 7.06 | 3 | 108 | 118 |
| CIFAR-10 | COLT | 2/255 | 228 | 250 | 86.62 | 86 | 8.65 | 4 | 127 | 160 |
| CIFAR-10 | CROWN-IBP | 2/255 | 188 | 256 | 23.03 | 23 | 4.31 | 3 | 140 | 173 |
Table 2 presents the % cases where the top proof feature computed by both verifiers is the same (column 2), the % cases where the top-5 proof features computed by both verifiers are the same, and the % cases where the complete proof feature sets computed by both verifiers are the same. We observe that for the MNIST dataset, in 100% of the cases for PGD-trained and COLT-trained networks and in 99.79% of the cases for the CROWN-IBP trained networks, the top feature computed by both verifiers is the same. A detailed table is available in Appendix B.7.
Table 2. Comparing proofs of IBP & DeepZ

| Training Method | Same top feature (MNIST) | Same top feature (CIFAR10) | Same top-5 features (MNIST) | Same top-5 features (CIFAR10) | Same feature set (MNIST) | Same feature set (CIFAR10) |
|---|---|---|---|---|---|---|
| PGD Trained | 100% | 100% | 92.0% | 98.31% | 92.0% | 96.87% |
| COLT | 100% | 97.87% | 87.17% | 92.53% | 82.05% | 89.36% |
| CROWN-IBP | 99.79% | 100% | 96.26% | 97.92% | 93.15% | 95.89% |

Theorem 1: If the verifier V can prove the property (φ, ψ) on the network N, then $F^{(A)}_{S_0}$ computed by Algorithm 1 is sufficient (Definition 3).
Table 7. Comparing proofs of IBP & DeepZ

| Dataset | Training Method | Input Region (φ) eps (ε) | % properties proved by IBP | % properties proved by DeepZ | % proofs with the same top feature | % proofs with the same top-5 features | % proofs with the same feature set |
|---|---|---|---|---|---|---|---|
| MNIST | PGD Trained | 0.02 | 26.0% | 82.0% | 100% | 92.0% | 92.0% |
| MNIST | COLT | 0.02 | 49.8% | 89.4% | 100.0% | 87.17% | 82.05% |
| MNIST | CROWN-IBP | 0.02 | 93.4% | 96.4% | 99.79% | 96.26% | 93.15% |
| CIFAR-10 | PGD Trained | 0.2/255 | 10.2% | 47.0% | 100% | 98.31% | 96.87% |
| CIFAR-10 | COLT | 0.2/255 | 17.2% | 53.0% | 97.87% | 92.53% | 89.36% |
| CIFAR-10 | CROWN-IBP | 0.2/255 | 21.8% | 53.0% | 100% | 97.92% | 95.89% |
Artificial neural networks in medical diagnosis. F Amato, A López, E M Peña-Méndez, P Vaňhara, A Hampl, J Havel, Journal of Applied Biomedicine, 11(2), 2013. Amato, F., López, A., Peña-Méndez, E. M., Vaňhara, P., Hampl, A., and Havel, J. Artificial neural networks in medical diagnosis. Journal of Applied Biomedicine, 11(2), 2013.
On the effectiveness of interval bound propagation for training verifiably robust models. Gowal, S., et al. ArXiv preprint, abs/1810.12715, 2018.
Strong mixed-integer programming formulations for trained neural networks. R Anderson, J Huchette, W Ma, C Tjandraatmadja, J P Vielma, Mathematical Programming. Anderson, R., Huchette, J., Ma, W., Tjandraatmadja, C., and Vielma, J. P. Strong mixed-integer programming formulations for trained neural networks. Mathematical Programming, 2020.
Improved geometric path enumeration for verifying relu neural networks. S Bak, H Tran, K Hobbs, Johnson , T T , 10.1007/978-3-030-53288-8_4Computer Aided Verification -32nd International Conference, CAV 2020. Lahiri, S. K. and Wang, C.Los Angeles, CA, USASpringer122242020Proceedings, Part IBak, S., Tran, H., Hobbs, K., and Johnson, T. T. Im- proved geometric path enumeration for verifying relu neural networks. In Lahiri, S. K. and Wang, C. (eds.), Computer Aided Verification -32nd International Con- ference, CAV 2020, Los Angeles, CA, USA, July 21-24, 2020, Proceedings, Part I, volume 12224 of Lecture Notes in Computer Science, pp. 66-96. Springer, 2020. doi: 10.1007/978-3-030-53288-8\ 4. URL https://doi. org/10.1007/978-3-030-53288-8_4.
Adversarial training and provable defenses: Bridging the gap. M Balunovic, M T Vechev, 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa. EthiopiaBalunovic, M. and Vechev, M. T. Adversarial training and provable defenses: Bridging the gap. In 8th International Conference on Learning Representations, ICLR 2020, Ad- dis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020.
Understanding the role of individual units in a deep neural network. D Bau, J.-Y Zhu, H Strobelt, A Lapedriza, B Zhou, A Torralba, Proceedings of the National Academy of Sciences. the National Academy of Sciences117Bau, D., Zhu, J.-Y., Strobelt, H., Lapedriza, A., Zhou, B., and Torralba, A. Understanding the role of individual units in a deep neural network. Proceedings of the Na- tional Academy of Sciences, 117(48):30071-30078, 2020.
End to end learning for self-driving cars. M Bojarski, D Del Testa, D Dworakowski, B Firner, B Flepp, P Goyal, L D Jackel, M Monfort, U Muller, J Zhang, arXiv:1604.07316arXiv preprintBojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L. D., Monfort, M., Muller, U., Zhang, J., et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
Branch and bound for piecewise linear neural network verification. R Bunel, J Lu, I Turkaslan, P Kohli, P Torr, P Mudigonda, Journal of Machine Learning Research. 212020Bunel, R., Lu, J., Turkaslan, I., Kohli, P., Torr, P., and Mudigonda, P. Branch and bound for piecewise linear neural network verification. Journal of Machine Learning Research, 21(2020), 2020a.
An efficient nonconvex reformulation of stagewise convex optimization problems. R R Bunel, O Hinder, S Bhojanapalli, K Dvijotham, Advances in Neural Information Processing Systems. 33Bunel, R. R., Hinder, O., Bhojanapalli, S., and Dvijotham, K. An efficient nonconvex reformulation of stagewise convex optimization problems. Advances in Neural Information Processing Systems, 33, 2020b.
Towards evaluating the robustness of neural networks. N Carlini, D Wagner, 2017 ieee symposium on security and privacy (sp). IeeeCarlini, N. and Wagner, D. Towards evaluating the robust- ness of neural networks. In 2017 ieee symposium on security and privacy (sp), pp. 39-57. Ieee, 2017.
Certified adversarial robustness via randomized smoothing. J Cohen, E Rosenfeld, Z Kolter, PMLRProceedings of the 36th International Conference on Machine Learning. Chaudhuri, K. and Salakhutdinov, R.the 36th International Conference on Machine Learning97Cohen, J., Rosenfeld, E., and Kolter, Z. Certified ad- versarial robustness via randomized smoothing. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceed- ings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learn- ing Research, pp. 1310-1320. PMLR, 09-15 Jun 2019. URL https://proceedings.mlr.press/v97/ cohen19c.html.
Formal verification of piece-wise linear feedforward neural networks. R Ehlers, International Symposium on Automated Technology for Verification and Analysis. Ehlers, R. Formal verification of piece-wise linear feed- forward neural networks. In International Symposium on Automated Technology for Verification and Analysis, 2017.
Complete verification via multi-neuron relaxation guided branch-and-bound. C Ferrari, M N Mueller, N Jovanović, M Vechev, International Conference on Learning Representations. Ferrari, C., Mueller, M. N., Jovanović, N., and Vechev, M. Complete verification via multi-neuron relaxation guided branch-and-bound. In International Conference on Learning Representations, 2022. URL https:// openreview.net/forum?id=l_amHf1oaK.
Fast geometric projections for local robustness certification. A Fromherz, K Leino, M Fredrikson, B Parno, C Pasareanu, International Conference on Learning Representations. Fromherz, A., Leino, K., Fredrikson, M., Parno, B., and Pasareanu, C. Fast geometric projections for local robustness certification. In International Conference on Learning Representations, 2021. URL https:// openreview.net/forum?id=zWy1uxjDdZJ.
Ai2: Safety and robustness certification of neural networks with abstract interpretation. T Gehr, M Mirman, D Drachsler-Cohen, P Tsankov, S Chaudhuri, M Vechev, 2018 IEEE Symposium on Security and Privacy (SP). Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., and Vechev, M. Ai2: Safety and robustness certification of neural networks with abstract interpreta- tion. In 2018 IEEE Symposium on Security and Privacy (SP), 2018.
Counterfactual visual explanations. Y Goyal, Z Wu, J Ernst, D Batra, D Parikh, S Lee, International Conference on Machine Learning. PMLRGoyal, Y., Wu, Z., Ernst, J., Batra, D., Parikh, D., and Lee, S. Counterfactual visual explanations. In International Con- ference on Machine Learning, pp. 2376-2384. PMLR, 2019.
Evaluations and methods for explanation through robustness analysis. C.-Y Hsieh, C.-K Yeh, X Liu, P K Ravikumar, S Kim, S Kumar, C.-J Hsieh, International Conference on Learning Representations. Hsieh, C.-Y., Yeh, C.-K., Liu, X., Ravikumar, P. K., Kim, S., Kumar, S., and Hsieh, C.-J. Evaluations and meth- ods for explanation through robustness analysis. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum? id=4dXmpCDGNp7.
Examples are not enough, learn to criticize! criticism for interpretability. B Kim, R Khanna, O O Koyejo, Advances in neural information processing systems. 29Kim, B., Khanna, R., and Koyejo, O. O. Examples are not enough, learn to criticize! criticism for interpretability. Advances in neural information processing systems, 29, 2016.
Understanding black-box predictions via influence functions. P W Koh, P Liang, International conference on machine learning. PMLRKoh, P. W. and Liang, P. Understanding black-box predic- tions via influence functions. In International conference on machine learning, pp. 1885-1894. PMLR, 2017.
Learning multiple layers of features from tiny images. A Krizhevsky, Krizhevsky, A. Learning multiple layers of features from tiny images. 2009.
Handwritten digit recognition with a back-propagation network. Y Lecun, B E Boser, J S Denker, D Henderson, R E Howard, W E Hubbard, L D Jackel, NIPS. LeCun, Y., Boser, B. E., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W. E., and Jackel, L. D. Hand- written digit recognition with a back-propagation network. In NIPS, pp. 396-404, 1989.
Automated invariance testing for machine learning models using sparse linear layers. Z Liao, M Cheung, ICML 2022: Workshop on Spurious Correlations, Invariance and Stability. Liao, Z. and Cheung, M. Automated invariance testing for machine learning models using sparse linear layers. In ICML 2022: Workshop on Spurious Correlations, Invariance and Stability, 2022. URL https://openreview.net/forum?id=VP8ATzLGyQx.
Towards deep learning models resistant to adversarial attacks. A Madry, A Makelov, L Schmidt, D Tsipras, A Vladu, Proc. International Conference on Learning Representations (ICLR). International Conference on Learning Representations (ICLR)Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In Proc. International Conference on Learning Representations (ICLR), 2018.
Reduced products of abstract domains for fairness certification of neural networks. D Mazzucato, C Urban, 10.1007/978-3-030-88806-0_15doi: 10.1007/ 978-3-030-88806-0\ 15Static Analysis -28th International Symposium, SAS 2021. Dragoi, C., Mukherjee, S., and Namjoshi, K. S.Chicago, IL, USASpringer129132021ProceedingsMazzucato, D. and Urban, C. Reduced products of ab- stract domains for fairness certification of neural net- works. In Dragoi, C., Mukherjee, S., and Namjoshi, K. S. (eds.), Static Analysis -28th International Symposium, SAS 2021, Chicago, IL, USA, October 17-19, 2021, Pro- ceedings, volume 12913 of Lecture Notes in Computer Science, pp. 308-322. Springer, 2021. doi: 10.1007/ 978-3-030-88806-0\ 15. URL https://doi.org/ 10.1007/978-3-030-88806-0_15.
Scaling polyhedral neural network verification on gpus. C Müller, F Serre, G Singh, M Püschel, M Vechev, Proceedings of Machine Learning and Systems. Smola, A., Dimakis, A., and Stoica, I.Machine Learning and Systems3Müller, C., Serre, F., Singh, G., Püschel, M., and Vechev, M. Scaling polyhedral neural network verification on gpus. In Smola, A., Dimakis, A., and Stoica, I. (eds.), Proceedings of Machine Learning and Systems, volume 3, pp. 733-746, 2021. URL https://proceedings. mlsys.org/paper/2021/file/ ca46c1b9512a7a8315fa3c5a946e8265-Paper. pdf.
Scaling the convex barrier with active sets. A D Palma, H S Behl, R R Bunel, P H S Torr, M P Kumar, 9th International Conference on Learning Representations. AustriaICLR 2021, Virtual EventPalma, A. D., Behl, H. S., Bunel, R. R., Torr, P. H. S., and Kumar, M. P. Scaling the convex barrier with active sets. In 9th International Conference on Learning Representa- tions, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, 2021.
why should i trust you?" explaining the predictions of any classifier. M T Ribeiro, S Singh, C Guestrin, Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. the 22nd ACM SIGKDD international conference on knowledge discovery and data miningRibeiro, M. T., Singh, S., and Guestrin, C. " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135-1144, 2016.
A convex relaxation barrier to tight robustness verification of neural networks. H Salman, G Yang, H Zhang, C Hsieh, P Zhang, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems. NeurIPS; Vancouver, BC, CanadaSalman, H., Yang, G., Zhang, H., Hsieh, C., and Zhang, P. A convex relaxation barrier to tight robustness verification of neural networks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, 2019.
Deep inside convolutional networks: Visualising image classification models and saliency maps. K Simonyan, A Vedaldi, A Zisserman, 2nd International Conference on Learning Representations. Bengio, Y. and Le-Cun, Y.Banff, AB, CanadaWorkshop Track ProceedingsSimonyan, K., Vedaldi, A., and Zisserman, A. Deep in- side convolutional networks: Visualising image classifi- cation models and saliency maps. In Bengio, Y. and Le- Cun, Y. (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Workshop Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6034.
Fast and effective robustness certification. G Singh, T Gehr, M Mirman, M Püschel, M Vechev, Advances in Neural Information Processing Systems. 31Singh, G., Gehr, T., Mirman, M., Püschel, M., and Vechev, M. Fast and effective robustness certification. Advances in Neural Information Processing Systems, 31, 2018.
Beyond the single neuron convex barrier for neural network certification. G Singh, R Ganvir, M Püschel, M Vechev, Advances in Neural Information Processing Systems. Singh, G., Ganvir, R., Püschel, M., and Vechev, M. Beyond the single neuron convex barrier for neural network certi- fication. In Advances in Neural Information Processing Systems, 2019a.
An abstract domain for certifying neural networks. G Singh, T Gehr, M Püschel, M Vechev, Proceedings of the ACM on Programming Languages. 3Singh, G., Gehr, T., Püschel, M., and Vechev, M. An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages, 3(POPL), 2019b.
Robustness certification with refinement. G Singh, T Gehr, M Püschel, M Vechev, International Conference on Learning Representations. Singh, G., Gehr, T., Püschel, M., and Vechev, M. Robustness certification with refinement. In International Conference on Learning Representations, 2019c. URL https:// openreview.net/forum?id=HJgeEh09KQ.
Smoothgrad: removing noise by adding noise. CoRR, abs/1706.03825. D Smilkov, N Thorat, B Kim, F B Viégas, M Wattenberg, Smilkov, D., Thorat, N., Kim, B., Viégas, F. B., and Wat- tenberg, M. Smoothgrad: removing noise by adding noise. CoRR, abs/1706.03825, 2017. URL http: //arxiv.org/abs/1706.03825.
Axiomatic attribution for deep networks. M Sundararajan, A Taly, Yan , Q , International conference on machine learning. PMLRSundararajan, M., Taly, A., and Yan, Q. Axiomatic attribu- tion for deep networks. In International conference on machine learning, pp. 3319-3328. PMLR, 2017.
Robustness may be at odds with accuracy. D Tsipras, S Santurkar, L Engstrom, A Turner, A Madry, International Conference on Learning Representations. Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. Robustness may be at odds with accuracy. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum? id=SyxAb30cY7.
Efficient formal safety analysis of neural networks. S Wang, K Pei, J Whitehouse, J Yang, Jana , S , Advances in Neural Information Processing Systems. Wang, S., Pei, K., Whitehouse, J., Yang, J., and Jana, S. Efficient formal safety analysis of neural networks. In Ad- vances in Neural Information Processing Systems, 2018.
Beta-crown: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification. S Wang, H Zhang, K Xu, X Lin, S Jana, C.-J Hsieh, J Z Kolter, arXiv:2103.06624arXiv preprintWang, S., Zhang, H., Xu, K., Lin, X., Jana, S., Hsieh, C.-J., and Kolter, J. Z. Beta-crown: Efficient bound propaga- tion with per-neuron split constraints for complete and incomplete neural network verification. arXiv preprint arXiv:2103.06624, 2021a.
Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification. S Wang, H Zhang, K Xu, X Lin, S Jana, C.-J Hsieh, J Z Kolter, Advances in Neural Information Processing Systems. Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W.Wang, S., Zhang, H., Xu, K., Lin, X., Jana, S., Hsieh, C.-J., and Kolter, J. Z. Beta-CROWN: Efficient bound propaga- tion with per-neuron split constraints for neural network robustness verification. In Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, 2021b. URL https: //openreview.net/forum?id=ahYIlRBeCFw.
Leveraging sparse linear layers for debuggable deep networks. E Wong, S Santurkar, A Madry, International Conference on Machine Learning. PMLRWong, E., Santurkar, S., and Madry, A. Leveraging sparse linear layers for debuggable deep networks. In Inter- national Conference on Machine Learning, pp. 11205- 11216. PMLR, 2021.
Automatic perturbation analysis for scalable certified robustness and beyond. K Xu, Z Shi, H Zhang, Y Wang, K Chang, M Huang, B Kailkhura, X Lin, C Hsieh, Xu, K., Shi, Z., Zhang, H., Wang, Y., Chang, K., Huang, M., Kailkhura, B., Lin, X., and Hsieh, C. Automatic perturbation analysis for scalable certified robustness and beyond. 2020.
Efficient neural network robustness certification with general activation functions. H Zhang, T.-W Weng, P.-Y Chen, C.-J Hsieh, L Daniel, Advances in neural information processing systems. Zhang, H., Weng, T.-W., Chen, P.-Y., Hsieh, C.-J., and Daniel, L. Efficient neural network robustness certifica- tion with general activation functions. In Advances in neural information processing systems, 2018.
Theoretically principled trade-off between robustness and accuracy. H Zhang, Y Yu, J Jiao, E Xing, L E Ghaoui, Jordan , M , PMLRProceedings of the 36th International Conference on Machine Learning. Chaudhuri, K. and Salakhutdinov, R.the 36th International Conference on Machine Learning97Zhang, H., Yu, Y., Jiao, J., Xing, E., Ghaoui, L. E., and Jor- dan, M. Theoretically principled trade-off between robust- ness and accuracy. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Confer- ence on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 7472-7482. PMLR, 09- 15 Jun 2019. URL https://proceedings.mlr. press/v97/zhang19p.html.
Towards stable and efficient training of verifiably robust neural networks. H Zhang, H Chen, C Xiao, S Gowal, R Stanforth, B Li, D S Boning, C Hsieh, Proc. International Conference on Learning Representations. International Conference on Learning RepresentationsICLRZhang, H., Chen, H., Xiao, C., Gowal, S., Stanforth, R., Li, B., Boning, D. S., and Hsieh, C. Towards stable and efficient training of verifiably robust neural networks. In Proc. International Conference on Learning Representa- tions, ICLR, 2020.
General cutting planes for boundpropagation-based neural network verification. H Zhang, S Wang, K Xu, L Li, B Li, S Jana, C.-J Hsieh, J Z Kolter, Advances in Neural Information Processing Systems. Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K.Zhang, H., Wang, S., Xu, K., Li, L., Li, B., Jana, S., Hsieh, C.-J., and Kolter, J. Z. General cutting planes for bound- propagation-based neural network verification. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum? id=5haAJAcofjc.
So, $\Lambda(A(W^l, S)) \ge 0$. So, $F_S$ is sufficient.

Lemma 3. Let $P_{max}$ denote the maximum of all priorities $P_{ub}(F_{n_i})$ over F. Let $F_S \subseteq F$. If $|F_S| \le \frac{\Lambda(A)}{P_{max}}$, then the proof feature set $F^c_S = F \setminus F_S$ is sufficient provided $\Lambda(A) \ge 0$.
249,097,375 | Mitigating Memorization of Noisy Labels via Regularization between Representations | Designing robust loss functions is popular in learning with noisy labels while existing designs did not explicitly consider the overfitting property of deep neural networks (DNNs). As a result, applying these losses may still suffer from overfitting/memorizing noisy labels as training proceeds. In this paper, we first theoretically analyze the memorization effect and show that a lower-capacity model may perform better on noisy datasets. However, it is non-trivial to design a neural network with the best capacity given an arbitrary task. To circumvent this dilemma, instead of changing the model architecture, we decouple DNNs into an encoder followed by a linear classifier and propose to restrict the function space of a DNN by a representation regularizer. Particularly, we require the distance between two self-supervised features to be positively related to the distance between the corresponding two supervised model outputs. Our proposed framework is easily extendable and can incorporate many other robust loss functions to further improve performance. Extensive experiments and theoretical analyses support our claims. Code is available at github.com/UCSC-REAL/SelfSup_NoisyLabel. | [
211146562,
226282381,
203737303
] | Mitigating Memorization of Noisy Labels via Regularization between Representations
Hao Cheng
Computer Science and Engineering
University of California
Santa Cruz
Zhaowei Zhu
Computer Science and Engineering
University of California
Santa Cruz
Xing Sun
Tencent YouTu Lab
Yang Liu
Computer Science and Engineering
University of California
Santa Cruz
Mitigating Memorization of Noisy Labels via Regularization between Representations
Designing robust loss functions is popular in learning with noisy labels while existing designs did not explicitly consider the overfitting property of deep neural networks (DNNs). As a result, applying these losses may still suffer from overfitting/memorizing noisy labels as training proceeds. In this paper, we first theoretically analyze the memorization effect and show that a lower-capacity model may perform better on noisy datasets. However, it is non-trivial to design a neural network with the best capacity given an arbitrary task. To circumvent this dilemma, instead of changing the model architecture, we decouple DNNs into an encoder followed by a linear classifier and propose to restrict the function space of a DNN by a representation regularizer. Particularly, we require the distance between two self-supervised features to be positively related to the distance between the corresponding two supervised model outputs. Our proposed framework is easily extendable and can incorporate many other robust loss functions to further improve performance. Extensive experiments and theoretical analyses support our claims. Code is available at github.com/UCSC-REAL/SelfSup_NoisyLabel.
Introduction
Deep Neural Networks (DNNs) have achieved remarkable performance in many areas including speech recognition [Graves et al., 2013], computer vision [Krizhevsky et al., 2012, Lotter et al., 2016], natural language processing [Zhang and LeCun, 2015], etc. The high-achieving performance often builds on the availability of quality-annotated datasets. In a real-world scenario, data annotation inevitably brings in label noise Wei et al. [2021b], which degrades the performance of the network, primarily due to DNNs' capability of "memorizing" noisy labels [Zhang et al., 2016].
In the past few years, a number of methods have been proposed to tackle the problem of learning with noisy labels. Notable achievements include robust loss design [Ghosh et al., 2017, Liu and Guo, 2020, Zhang and Sabuncu, 2018], sample selection [Cheng et al., 2021, Han et al., 2018, Yu et al., 2019], and loss correction/reweighting based on the noise transition matrix [Liu and Tao, 2015, Natarajan et al., 2013, Patrini et al., 2017, Wei et al., 2022, Zhu et al., 2021b]. However, these methods still suffer from limitations because they are agnostic to the model complexity and do not explicitly take the over-fitting property of DNNs into consideration when designing these methods Liu et al. [2022], Wei et al. [2021a]. In the context of representation learning, a DNN is prone to fit/memorize noisy labels as training proceeds Wei et al. [2021b], Zhang et al. [2016], i.e., the memorization effect. Thus when the noise rate is high, even though the robust losses have some theoretical guarantees in expectation, they are still unstable during training Cheng et al. [2021]. It has been shown that early stopping helps mitigate memorizing noisy labels Li et al. [2020b], Rolnick et al. [2017], Xia et al. [2020]. But intuitively, early stopping will handle overfitting wrong labels at the cost of underfitting clean samples if not tuned properly. An alternative approach is using regularizers to punish/avoid overfitting Cheng et al. [2021], Liu and Guo [2020], which mainly build regularizers by editing labels. In this paper, we study the effectiveness of a representation regularizer.
To fully understand the memorization effect on learning with noisy labels, we decouple the generalization error into estimation error and approximation error. By analyzing these two errors, we find that DNNs behave differently on various label noise types and the key to preventing over-fitting is to control model complexity. However, specifically designing the model structure for learning with noisy labels is hard. One tractable solution is to use representation regularizers to cut off some redundant function space without hurting the optima. Therefore, we propose a unified framework utilizing DNN representations to mitigate the memorization of noisy labels. We summarize our main contributions below:
• We first theoretically analyze the memorization effect by decomposing the generalization error into estimation error and approximation error in the context of learning with noisy labels and show that a lower-capacity model may perform better on noisy datasets.
• Due to the fact that designing a neural network with the best capacity given an arbitrary task requires formidable effort, instead of changing the model architecture, we decouple DNNs into an encoder followed by a linear classifier and propose to restrict the function space of DNNs by the structural information between representations. Particularly, we require the distance between two self-supervised features to be positively related to the distance between the corresponding two supervised model outputs.
• The effectiveness of the proposed regularizer is demonstrated by both theoretical analyses and numerical experiments. Our framework can incorporate many current robust losses and help them further improve performance.
Related Works
Learning with Noisy Labels Many works design robust losses to improve the robustness of neural networks when learning with noisy labels [Feng et al., 2021, Ghosh et al., 2017, Liu and Guo, 2020, Xu et al., 2019, Zhang and Sabuncu, 2018]. [Ghosh et al., 2017] proves MAE is inherently robust to label noise. However, MAE has a severe under-fitting problem. [Zhang and Sabuncu, 2018] proposes the GCE loss, which combines the advantages of MAE and CE, exhibiting good performance on noisy datasets. [Liu and Guo, 2020] introduces peer loss, which is proven statistically robust to label noise without knowing the noise rate. The extension of peer loss also shows good performance on instance-dependent label noise [Cheng et al., 2021, Zhu et al., 2021a]. Another efficient approach to combat label noise is sample selection [Han et al., 2018, Jiang et al., 2018, Northcutt et al., 2021, Wei et al., 2020, Yao et al., 2020, Yu et al., 2019, Zhang et al., 2020]. These methods regard "small loss" examples as clean ones and usually involve training multiple networks to select clean samples. Semi-supervised learning has also been popular and effective for learning with noisy labels in recent years. Some works [Li et al., 2020a, Nguyen et al., 2020] first perform clustering on the sample loss and divide the samples into clean ones and noisy ones, then drop the labels of the "noisy samples" and perform semi-supervised learning on all the samples.
Knowledge Distillation Our proposed learning framework is related to the research field of knowledge distillation (KD). The original idea of KD can be traced back to model compression [Buciluǎ et al., 2006], where the authors demonstrate that the knowledge acquired by a large ensemble of models can be transferred to a single small model. [Hinton et al., 2015] generalize this idea to neural networks and show a small, shallow network can be improved through a teacher-student framework. Due to its great applicability, KD has gained more and more attention in recent years and numerous methods have been proposed to perform efficient distillation [Mirzadeh et al., 2020, Zhang et al., 2019, 2018b]. However, the dataset used in KD is assumed to be clean, so it is hard to connect KD with learning with noisy labels. In this paper, we theoretically and experimentally show that a regularizer generally used in KD [Park et al., 2019] can alleviate the over-fitting problem on noisy data by using DNN features, which offers a new alternative for dealing with label noise.
Preliminary
We introduce preliminaries and notations including definitions and problem formulation.
Problem formulation Consider a K-class classification problem on a set of N training examples denoted by $D := \{(x_n, y_n)\}_{n \in [N]}$, where $[N] := \{1, 2, \cdots, N\}$ is the set of example indices. Examples $(x_n, y_n)$ are drawn according to random variables $(X, Y)$ from a joint distribution $\mathcal{D}$. The classification task aims to identify a classifier C that maps X to Y accurately. In real-world applications, the learner can only observe noisy labels $\tilde{y}$ drawn from $\widetilde{Y}|X$ Wei et al. [2021b], e.g., human annotators may wrongly label some images containing cats as ones that contain dogs accidentally or irresponsibly. The corresponding noisy dataset and distribution are denoted by $\widetilde{D} := \{(x_n, \tilde{y}_n)\}_{n \in [N]}$ and $\widetilde{\mathcal{D}}$. Define the expected risk of a classifier C as $R(C) = \mathbb{E}_{\mathcal{D}}[\mathbb{1}(C(X) \neq Y)]$. The goal is to learn a classifier C from the noisy distribution $\widetilde{\mathcal{D}}$ which also minimizes R(C), i.e., learn the Bayes optimal classifier such that $C^*(x) = \arg\max_{i \in [K]} \mathbb{P}(Y = i \mid X = x)$.
Noise transition matrix The label noise of each instance is characterized by $T_{ij}(X) = \mathbb{P}(\widetilde{Y} = j \mid X, Y = i)$, where T(X) is called the (instance-dependent) noise transition matrix Zhu et al. [2021b]. There are two special noise regimes Han et al. [2018] for the simplicity of theoretical analyses: symmetric noise and asymmetric noise. In symmetric noise, each clean label is randomly flipped to the other labels uniformly w.p. $\epsilon$, where $\epsilon$ is the noise rate. Therefore, $T_{ii} = 1 - \epsilon$ and $T_{ij} = \frac{\epsilon}{K-1}$, $i \neq j$, $i, j \in [K]$. In asymmetric noise, each clean label is randomly flipped to its adjacent label, i.e., $T_{ii} = 1 - \epsilon$, $T_{ii} + T_{i,(i+1)_K} = 1$, where $(i+1)_K := i \bmod K + 1$.
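For concreteness, the small NumPy sketch below (ours, not from the paper) builds these two transition matrices and draws noisy labels accordingly:

```python
import numpy as np

def synthesize_noisy_labels(y, eps, K, kind="symmetric", seed=0):
    """Flip clean labels y according to the two special noise regimes.

    symmetric:  T_ii = 1 - eps, T_ij = eps / (K - 1) for j != i.
    asymmetric: T_ii = 1 - eps, T_i,(i+1 mod K) = eps.
    """
    rng = np.random.default_rng(seed)
    if kind == "symmetric":
        T = np.full((K, K), eps / (K - 1))
        np.fill_diagonal(T, 1 - eps)
    else:  # asymmetric: mass eps moves to the adjacent class
        T = np.zeros((K, K))
        np.fill_diagonal(T, 1 - eps)
        for i in range(K):
            T[i, (i + 1) % K] = eps
    return np.array([rng.choice(K, p=T[yi]) for yi in y])

# e.g., y_noisy = synthesize_noisy_labels(y_clean, eps=0.4, K=10)
```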
Empirical risk minimization
The empirical risk on a noisy dataset with classifier C writes as

$$\frac{1}{N} \sum_{n \in [N]} \ell(C(x_n), \tilde{y}_n),$$

where $\ell$ is usually the cross-entropy (CE) loss. Existing works adapt $\ell$ to make it robust to label noise, e.g., loss correction Natarajan et al. [2013], Patrini et al. [2017], loss reweighting Liu and Tao [2015], generalized cross-entropy (GCE) Zhang and Sabuncu [2018], peer loss Liu and Guo [2020], f-divergence Wei and Liu [2021]. To distinguish their optimization from the vanilla empirical risk minimization (ERM), we call them the adapted ERM.
Memorization effects of DNNs Without special treatments, minimizing the empirical risk on noisy distributions makes the model overfit the noisy labels. As a result, the corrupted labels will be memorized Han et al. [2020], Wei et al. [2021b], Xia et al. [2020] and the test accuracy on clean data will drop in the late stage of training even though the training accuracy is consistently increasing. See Figure 1 for an illustration. Therefore, it is important to study robust methods to mitigate memorizing noisy labels.
Figure 2: Illustration of different learning paths (distinguished by colors): Path-1 (traditional learning), Path-2 (unfixed encoder), and Path-3 (fixed encoder), each going from a random initialization of the encoder/classifier to a converged model. The curve with an arrow between two green dots indicates the effort (e.g., number of training instances) of training a model from one state to another state.

Outline The rest of the paper is organized as follows. In Section 3, we theoretically understand the memorization effect by analyzing the relationship among noise rates, sample size, and model capacity, which motivates us to design a regularizer in Section 4 to alleviate the memorization effect by restricting model capacity. Section 5 empirically validates our analyses and proposal.
Understanding the Memorization Effect
We quantify the harmfulness of memorizing noisy labels by analyzing the generalization errors on clean data when learning on a noisy dataset $\widetilde{D}$ and optimizing over function space $\mathcal{C}$.
Theoretical Tools
Denote by $C_{\mathcal{D}} := \arg\min_{C \in \mathcal{C}} \mathbb{E}_{\mathcal{D}}[\ell(C(X), Y)]$ the optimal clean classifier, by $C_{\widetilde{\mathcal{D}}} := \arg\min_{C \in \mathcal{C}} \mathbb{E}_{\widetilde{\mathcal{D}}}[\ell(C(X), \widetilde{Y})]$ the optimal noisy classifier, and by $\hat{C}_{\widetilde{D}} := \arg\min_{C \in \mathcal{C}} \sum_{n \in [N]} \ell(C(x_n), \tilde{y}_n)$ the learned classifier on the noisy dataset. The expected risk w.r.t. the Bayes optimal classifier $C^*$ can be decomposed into two parts:

$$\mathbb{E}[\ell(\hat{C}_{\widetilde{D}}(X), Y)] - \mathbb{E}[\ell(C^*(X), Y)] = \text{Error}_E(\hat{C}_{\widetilde{D}}, C_{\widetilde{\mathcal{D}}}) + \text{Error}_A(C_{\widetilde{\mathcal{D}}}, C^*),$$

where the estimation error $\text{Error}_E$ and the approximation error $\text{Error}_A$ can be written as

$$\text{Error}_E(\hat{C}_{\widetilde{D}}, C_{\widetilde{\mathcal{D}}}) = \mathbb{E}[\ell(\hat{C}_{\widetilde{D}}(X), Y)] - \mathbb{E}[\ell(C_{\widetilde{\mathcal{D}}}(X), Y)], \qquad \text{Error}_A(C_{\widetilde{\mathcal{D}}}, C^*) = \mathbb{E}[\ell(C_{\widetilde{\mathcal{D}}}(X), Y)] - \mathbb{E}[\ell(C^*(X), Y)].$$
We analyze each part respectively.
Estimation error We first study the noise consistency from the aspect of expected loss.
Definition 1 (Noise consistency). One label noise regime satisfies noise consistency under loss $\ell$ if the following affine relationship holds:

$$\mathbb{E}_{\widetilde{\mathcal{D}}}[\ell(C(X), \widetilde{Y})] = \gamma_1 \mathbb{E}_{\mathcal{D}}[\ell(C(X), Y)] + \gamma_2,$$

where $\gamma_1$ and $\gamma_2$ are constants in a fixed noise setting.
To study whether popular noise regimes satisfy noise consistency, we need the following lemma:
Lemma 1. A general noise regime with noise transitions $T_{ij}(X) := \mathbb{P}(\widetilde{Y} = j \mid Y = i, X)$ can be decoupled into the following form:

$$\mathbb{E}_{\widetilde{\mathcal{D}}}[\ell(C(X), \widetilde{Y})] = T \, \mathbb{E}_{\mathcal{D}}[\ell(C(X), Y)] + \sum_{j \in [K]} \sum_{i \in [K]} \mathbb{P}(Y = i) \, \mathbb{E}_{\mathcal{D}|Y=i}[U_{ij}(X) \, \ell(C(X), j)],$$

where $T := \min_{X, i} T_{ii}(X)$, $U_{ij}(X) = T_{ij}(X), \forall i \neq j$, and $U_{jj}(X) = T_{jj}(X) - T$.
Lemma 1 shows that general instance-dependent label noise is hard to make consistent, since the second term is not a constant unless we add more restrictions to T(X). Specifically, in Lemma 2, we consider two typical noise regimes for multi-class classification: symmetric noise and asymmetric noise.

Lemma 2. Symmetric noise is consistent with the 0-1 loss:

$$\mathbb{E}_{\widetilde{\mathcal{D}}}[\ell(C(X), \widetilde{Y})] = \gamma_1 \mathbb{E}_{\mathcal{D}}[\ell(C(X), Y)] + \gamma_2, \quad \text{where } \gamma_1 = 1 - \frac{K\epsilon}{K-1}, \ \gamma_2 = \frac{\epsilon}{K-1}.$$

Asymmetric noise is not consistent:

$$\mathbb{E}_{\widetilde{\mathcal{D}}}[\ell(C(X), \widetilde{Y})] = (1 - \epsilon) \, \mathbb{E}_{\mathcal{D}}[\ell(C(X), Y)] + \epsilon \sum_{i \in [K]} \mathbb{P}(Y = i) \, \mathbb{E}_{\mathcal{D}|Y=i}[\ell(C(X), (i+1)_K)].$$
With Lemma 2, we can upper bound the estimation errors in Theorem 1.

Theorem 1. With probability at least $1 - \delta$, learning with symmetric/asymmetric noise has the following estimation error:

$$\text{Error}_E(\hat{C}_{\widetilde{D}}, C_{\widetilde{\mathcal{D}}}) \le \Delta_E(\mathcal{C}, \varepsilon, \delta) := 16 \sqrt{\frac{|\mathcal{C}| \log(Ne/|\mathcal{C}|) + \log(8/\delta)}{2N(1-\varepsilon)^2}} + \text{Bias}(\hat{C}_{\widetilde{D}}, C_{\widetilde{\mathcal{D}}}),$$

where $|\mathcal{C}|$ is the VC-dimension of function class $\mathcal{C}$ Bousquet et al. [2003], Devroye et al. [2013]. The noise rate parameter $\varepsilon$ satisfies $\varepsilon = \frac{K\epsilon}{K-1}$ for symmetric noise and $\varepsilon = \epsilon$ for asymmetric noise. The bias satisfies $\text{Bias}(\hat{C}_{\widetilde{D}}, C_{\widetilde{\mathcal{D}}}) = 0$ for symmetric noise and

$$\text{Bias}(\hat{C}_{\widetilde{D}}, C_{\widetilde{\mathcal{D}}}) = \frac{\epsilon}{1-\epsilon} \sum_{i \in [K]} \mathbb{P}(Y = i) \, \mathbb{E}_{\mathcal{D}|Y=i}\big[\ell(C_{\widetilde{\mathcal{D}}}(X), (i+1)_K) - \ell(\hat{C}_{\widetilde{D}}(X), (i+1)_K)\big]$$

for asymmetric noise.
Approximation error Analyzing the approximation error for an arbitrary DNN is an open problem and beyond our scope. For a clean presentation, we borrow the following result from the literature.

Lemma 3 (Approximation error Barron [1994]). For an $M_{\mathcal{C}}$-node neural network with one layer of sigmoidal nonlinearities, there exists a constant $\alpha_{C^*}$ such that the approximation error is upper bounded by

$$\text{Error}_A(C_{\widetilde{\mathcal{D}}}, C^*) \le \Delta_A(\mathcal{C}) := \frac{\alpha_{C^*}}{\sqrt{M_{\mathcal{C}}}}.$$
With bounds for the estimation error and the approximation error, for any two function spaces $\mathcal{C}_1$ and $\mathcal{C}_2$, we are ready to reveal which one induces a better performance under different noise rates in Theorem 2.

Theorem 2. Assume $|\mathcal{C}_1| > |\mathcal{C}_2|$ and symmetric noise. The larger function class $\mathcal{C}_1$ is worse than $\mathcal{C}_2$ in terms of the upper bound of the expected generalization error $(\mathbb{E}_\delta |\Delta_E(\mathcal{C}, \varepsilon, \delta)| + \Delta_A(\mathcal{C}))$ when

$$1 - \frac{K\epsilon}{K-1} \le \beta(\mathcal{C}_1, \mathcal{C}_2) := \frac{16}{\sqrt{2N}} \cdot \frac{\sqrt{|\mathcal{C}_1| \log(4Ne/|\mathcal{C}_1|)} - \sqrt{|\mathcal{C}_2| \log(4Ne/|\mathcal{C}_2|)}}{\alpha_{C^*}/\sqrt{M_{\mathcal{C}_2}} - \alpha_{C^*}/\sqrt{M_{\mathcal{C}_1}}}. \tag{1}$$
Note $\beta(\mathcal{C}_1, \mathcal{C}_2) > 0$ due to $|\mathcal{C}_1| > |\mathcal{C}_2|$. Consider a scenario where N is sufficiently large such that $0 < \beta(\mathcal{C}_1, \mathcal{C}_2) < 1$. Theorem 2 informs us that, in the clean case, $\mathcal{C}_1$ is always better since the inequality does not hold when $\epsilon = 0$. However, there always exists a feasible noise rate $\epsilon_\bullet \in [0, (K-1)/K)$ such that the larger function class $\mathcal{C}_1$ is worse than the smaller one when $\epsilon > \epsilon_\bullet$. This observation demonstrates that memorizing clean labels with a larger model is always beneficial when N is sufficiently large, while memorizing noisy labels with a larger model could be harmful given the same sample size N.
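The threshold noise rate can be made explicit by solving the boundary case of Eq. (1) for $\epsilon$; the rearrangement below is a direct consequence of the theorem, stated under the assumption $0 < \beta(\mathcal{C}_1, \mathcal{C}_2) < 1$:

```latex
1 - \frac{K\epsilon_\bullet}{K-1} = \beta(\mathcal{C}_1, \mathcal{C}_2)
\quad\Longleftrightarrow\quad
\epsilon_\bullet = \frac{(K-1)\,\big(1 - \beta(\mathcal{C}_1, \mathcal{C}_2)\big)}{K},
```

so $\epsilon_\bullet \in (0, (K-1)/K)$ whenever $0 < \beta < 1$, matching the feasible range above.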
As indicated by Theorem 2, one solution to reduce the error caused by memorizing noisy labels is to restrict the function space by adopting a lower-capacity model. However, it is non-trivial to find the best function space or design the best neural network given an arbitrary task. We will introduce a tractable solution in the following sections.
Decoupled Classifiers: From Function Spaces to Representations
One tractable way to restrict the function space is to fix some layers of a given DNN model. Particularly, we can decouple C into two parts, an encoder f and a linear classifier g with $C(X) = g(f(X))$, where the encoder f extracts representations from raw features and the linear classifier g maps representations to label classes. Clearly, the function space can be reduced significantly if we only optimize the linear classifier g, but the performance of the classifier then depends heavily on the encoder f. By this decomposition, we transform the problem of finding good function spaces into finding good representations. Now we analyze the effectiveness of such a decomposition. Figure 2 illustrates three learning paths. Path-1 is the traditional learning path that learns both the encoder f and the linear classifier g at the same time Patrini et al. [2017]. In Path-2, a pre-trained encoder f is adopted as an initialization of the DNN and both f and g are fine-tuned on the noisy data distribution $\widetilde{\mathcal{D}}$ [Ghosh and Lan, 2021]. The pre-trained encoder f is also adopted in Path-3, but the encoder f is fixed/frozen throughout the later training procedure and only the linear classifier g is updated with $\widetilde{\mathcal{D}}$. We compare the generalization errors of different paths to provide insights into the effects of representations on learning with noisy labels. Now we instantiate function spaces $\mathcal{C}_1$ and $\mathcal{C}_2$ with different representations. With traditional training or an unfixed encoder (Path-1 or Path-2), the classifier C is optimized over the function space $\mathcal{C}_1 = \mathcal{G} \circ \mathcal{F}$ with raw data. With a fixed encoder (Path-3), the classifier C is optimized over the function space $\mathcal{G}$ given representations f(X).
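In code, the difference between Path-2 and Path-3 is only which parameters the optimizer sees. Below is a minimal PyTorch sketch of ours; `feat_dim`, the `nn.Flatten` glue, and the SGD settings are placeholder assumptions:

```python
import torch.nn as nn
import torch.optim as optim

def build_path(encoder, num_classes, feat_dim, fix_encoder=True):
    """Path-2 vs Path-3: fine-tune everything, or freeze the pre-trained
    encoder f and train only the linear classifier g on noisy data."""
    classifier = nn.Linear(feat_dim, num_classes)
    if fix_encoder:                      # Path-3: restrict C to G | f
        for p in encoder.parameters():
            p.requires_grad = False
        params = list(classifier.parameters())
    else:                                # Path-2: optimize over G . F
        params = list(encoder.parameters()) + list(classifier.parameters())
    model = nn.Sequential(encoder, nn.Flatten(), classifier)
    return model, optim.SGD(params, lr=0.1, momentum=0.9)
```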
Symmetric noise Let $\mathcal{C}_1 = \mathcal{G} \circ \mathcal{F}$ and $\mathcal{C}_2 = \mathcal{G}|f$. The approximation error difference $(\alpha_{C^*}/\sqrt{M_{\mathcal{C}_2}} - \alpha_{C^*}/\sqrt{M_{\mathcal{C}_1}})$ in Theorem 2 becomes $(\alpha'_{C^*}/\sqrt{M_{\mathcal{G}}} - \alpha_{C^*}/\sqrt{M_{\mathcal{G} \circ \mathcal{F}}})$. Note the constant $\alpha'_{C^*}$ is different from $\alpha_{C^*}$ since the model inputs in Path-3 are representations f(X) instead of the raw X. In this setting, due to $\mathcal{C}_2 := \mathcal{G}|f \subseteq \mathcal{C}_1 := \mathcal{G} \circ \mathcal{F}$, we have $|\mathcal{C}_1| > |\mathcal{C}_2|$ and $\alpha'_{C^*}/\sqrt{M_{\mathcal{G}}} > \alpha_{C^*}/\sqrt{M_{\mathcal{G} \circ \mathcal{F}}}$. Therefore, the findings in Theorem 2 also hold in this setting. See Corollary 1.

Corollary 1. A fixed encoder could be better than the unfixed one in terms of the upper bound of the expected generalization error when

$$1 - \frac{K\epsilon}{K-1} \le \beta'(\mathcal{G} \circ \mathcal{F}, \mathcal{G}|f) := \frac{16}{\sqrt{2N}} \cdot \frac{\sqrt{|\mathcal{G} \circ \mathcal{F}| \log(4Ne/|\mathcal{G} \circ \mathcal{F}|)} - \sqrt{|\mathcal{G}| \log(4Ne/|\mathcal{G}|)}}{\alpha'_{C^*}/\sqrt{M_{\mathcal{G}}} - \alpha_{C^*}/\sqrt{M_{\mathcal{G} \circ \mathcal{F}}}}.$$
Corollary 1 implies that, for the symmetric noise, a fixed encoder is better in high-noise settings.
Other noise Based on Theorem 1, for asymmetric label noise, the noise consistency is broken and the bias term makes the learning error hard to bound. As a result, the relationship between the noise rate and the generalization error is not clear, and simply fixing the encoder may induce a larger generalization error. For general instance-dependent label noise, the bias term is more complicated, thus the benefit of fixing the encoder is less clear.
Insights and Takeaways
With the above analyses, we know learning with an unfixed encoder is not stable, which may overfit noisy patterns and converge to a poor local optimum. Restricting the search space makes the convergence stable (reducing the estimation error) at the cost of increasing the approximation error. This motivates us to find a way to compromise between a fixed and an unfixed encoder. We explore this direction in the next section.
Combating Memorization Effect by Representation Regularization
Our understandings in Section 3 motivate us to use the information from representations to regularize the model predictions. Intuitively, as long as the encoder is not fixed, the approximation error could be low enough. If the ERM is properly regularized, the search space and the corresponding estimation error could be reduced.
Training Framework
The training framework is shown in Figure 3, where a new learning path (self-supervised learning, SSL) f → h is added in parallel to Path-2's f → g (SL training) in Figure 2. The newly added projection head h is a one-hidden-layer MLP (Multi-Layer Perceptron) whose output represents SSL features (after dimension reduction). Its output is employed to regularize the output of the linear classifier g. Given an example $(x_n, \tilde{y}_n)$ and a random batch of features B ($x_n \in B$), the loss is defined as:

$$L((x_n, \tilde{y}_n); f, g, h) = \underbrace{\ell(g(f(x_n)), \tilde{y}_n)}_{\text{SL Training}} + \underbrace{\ell_{\text{Info}}(h(f(x_n)), B)}_{\text{SSL Training}} + \lambda \underbrace{\ell_{\text{Reg}}(h(f(x_n)), g(f(x_n)), B)}_{\text{Representation Regularizer}}, \tag{2}$$
where λ controls the scale of the regularizer. The loss $\ell$ for SL training could be either the traditional CE loss or a recent robust loss such as loss correction/reweighting Liu and Tao [2015]. For SSL training, $\ell_{\text{Info}}$ is the InfoNCE loss:

$$\ell_{\text{Info}}(h(f(x_n)), B) := -\log \frac{\exp(\text{sim}(h(f(x_n)), h(f(\bar{x}_n))))}{\sum_{x_{n'} \in B, n' \neq n} \exp(\text{sim}(h(f(x_n)), h(f(x_{n'}))))},$$

where $\bar{x}_n$ denotes an augmented view of $x_n$ and sim(·, ·) is a similarity measure.
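A minimal PyTorch sketch of this term for a single anchor follows; since the formula above leaves sim(·, ·) unspecified, the cosine similarity and the temperature `tau` are our own illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def info_nce(z_anchor, z_positive, z_batch, tau=0.5):
    """InfoNCE for one anchor: pull its augmented view close, push the
    rest of the batch away. All inputs are projection-head outputs h(f(.)).

    z_anchor:   (d,)   feature of x_n
    z_positive: (d,)   feature of an augmented view of x_n
    z_batch:    (B, d) features of the other samples in the batch
    """
    sim_pos = F.cosine_similarity(z_anchor, z_positive, dim=0) / tau
    sim_neg = F.cosine_similarity(z_anchor.unsqueeze(0), z_batch, dim=1) / tau
    # -log( exp(sim_pos) / sum_j exp(sim_neg_j) )
    return -(sim_pos - torch.logsumexp(sim_neg, dim=0))
```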
Note that InfoNCE and CE share a common encoder, inspired by the design of self-distillation [Zhang et al., 2019]. The regularization loss $\ell_{\text{Reg}}$ writes as:

$$\ell_{\text{Reg}}(h(f(x_n)), g(f(x_n)), B) = \frac{1}{|B| - 1} \sum_{x_{n'} \in B, n' \neq n} d\big(\phi_w(t_n, t_{n'}), \phi_w(s_n, s_{n'})\big),$$

where d(·, ·) is a distance measure for two inputs, e.g., the $l_1$, $l_2$, or squared $l_2$ distance, $t_n = h(f(x_n))$, $s_n = g(f(x_n))$, and $\phi_w(t_n, t_{n'}) = \frac{1}{m} \|t_n - t_{n'}\|_w$, where $w \in \{1, 2\}$ and m normalizes the distance over a batch:

$$m = \frac{1}{|B|(|B| - 1)} \sum_{x_n, x_{n'} \in B, n \neq n'} \|t_n - t_{n'}\|_w. \tag{3}$$
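A batched PyTorch sketch of $\ell_{\text{Reg}}$ follows; this is our own illustration of the distance-matching idea, and details such as the distance orders may differ from the paper's released code:

```python
import torch

def representation_regularizer(t, s, w_t=2, w_s=1):
    """RKD-style distance matching between SSL features t = h(f(x)) and
    SL outputs s = g(f(x)) over one batch (sketch of the formulas above).

    t: (B, d1) projection-head features   s: (B, K) classifier outputs
    """
    def normalized_pdist(z, w):
        d = torch.cdist(z, z, p=w)               # pairwise ||z_i - z_j||_w
        off = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
        m = d[off].mean()                        # batch normalizer m, Eq. (3)
        return d / (m + 1e-12), off
    dt, off = normalized_pdist(t, w_t)           # phi_2 on SSL features
    ds, _ = normalized_pdist(s, w_s)             # phi_1 on SL outputs
    return ((dt[off] - ds[off]) ** 2).mean()     # squared l2 distance d(.)
```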
The design of $\ell_{\text{Reg}}$ follows the idea of clusterability [Zhu et al., 2021b] and is inspired by relational knowledge distillation [Park et al., 2019]: instances with similar SSL features should have the same true label and instances with different SSL features should have different true labels, which is our motivation for designing $\ell_{\text{Reg}}$. Since SSL features are learned from the raw feature X and are independent of the noisy label $\widetilde{Y}$, using SSL features to regularize SL features is supposed to mitigate memorizing noisy labels. We provide more theoretical understanding in the following subsection to show the effectiveness of this design.
Theoretical Understanding
We theoretically analyze how $\ell_{\text{Reg}}$ mitigates memorizing noisy labels in this subsection. As discussed previously, SSL features are supposed to pull the model away from memorizing wrong labels due to clusterability Zhu et al. [2021b]. However, since the SL training is performed on the noisy data, when it achieves zero loss, the minimizer should be either memorizing each instance (for CE loss) or the claimed optimum (for other robust loss functions). Therefore, the global optimum should be affected by both SL training and representation regularization, where the scale is controlled by λ. For a clear presentation, we focus on analyzing the effect of $\ell_{\text{Reg}}$ in binary classification, whose minimizer approximates the global minimizer when λ is sufficiently large.
Consider a randomly sampled batch B. Denote by $\mathcal{X}^2 := \{(x_i, x_j) \mid x_i \in B, x_j \in B, i \neq j\}$ the set of data pairs, and $d_{i,j} = d(\phi_w(t_i, t_j), \phi_w(s_i, s_j))$. The regularization loss of batch B is decomposed as:

$$\frac{1}{|B|} \sum_{n \mid x_n \in B} \ell_{\text{Reg}}(h(f(x_n)), g(f(x_n)), B) = \frac{1}{|\mathcal{X}^2|} \Bigg( \underbrace{\sum_{(x_i, x_j) \in \mathcal{X}^2_T} d_{i,j}}_{\text{Term-1}} + \underbrace{\sum_{(x_i, x_j) \in \mathcal{X}^2_F} d_{i,j}}_{\text{Term-2}} + \underbrace{\sum_{x_i \in X_T, x_j \in X_F} 2 d_{i,j}}_{\text{Term-3}} \Bigg), \tag{4}$$
where $X = X_T \cup X_F$, and $X_T$/$X_F$ denotes the set of instances whose labels are true/false. Note the regularizer mainly works when SSL features "disagree" with SL features, i.e., Term-3. Denote by $X_+ = X|Y = 1$, $X_- = X|Y = 0$, $X_T = X|\widetilde{Y} = Y$, $X_F = X|\widetilde{Y} \neq Y$.
For further analyses, we write Term-3 in the form of an expectation with d chosen as the squared $l_2$ distance, i.e., the MSE loss:

$$L_c = \mathbb{E}_{X_T, X_F} \Bigg[ \bigg( \frac{\|g(f(X_T)) - g(f(X_F))\|_1}{m_1} - \frac{\|h(f(X_T)) - h(f(X_F))\|_2}{m_2} \bigg)^{2} \Bigg], \tag{5}$$
where $m_1$ and $m_2$ are normalization terms as in Eqn. (3). Note that in $L_c$, we use $w = 1$ for SL features and $w = 2$ for SSL features. Denote the variance by var(·). In the setting of binary classification, define the notations $X_{F+} := X|(\widetilde{Y} = 1, Y = 0)$ and $X_{F-} := X|(\widetilde{Y} = 0, Y = 1)$.
To find a tractable way to analytically measure and quantify how feature correction relates to network robustness, we make three assumptions as follows:
Assumption 1 (Memorize clean instances). $\forall n \in \{n \mid \tilde{y}_n = y_n\}$, $\ell(g(f(x_n)), y_n) = 0$.

Assumption 2 (Same overfitting). $\text{var}(g(f(X_{F+}))) = 0$ and $\text{var}(g(f(X_{F-}))) = 0$.

Assumption 3 (Gaussian-distributed SSL features). The SSL features follow Gaussian distributions, i.e., $h(f(X_+)) \sim \mathcal{N}(\mu_1, \Sigma)$ and $h(f(X_-)) \sim \mathcal{N}(\mu_2, \Sigma)$.
Assumption 1 implies that a DNN has confident predictions on clean samples. Assumption 2 implies that a DNN has the same degree of overfitting for each noisy sample. For example, an over-parameterized DNN can memorize all the noisy labels [Liu, 2021, Zhang et al., 2016]. Thus these two assumptions are reasonable. Assumption 3 assumes that SSL features follow a Gaussian distribution. Note that other distribution forms can also be assumed; we use the Gaussian due to its simplicity and because it provides a closed-form solution. Further, some works also observe that SSL features tend to follow Gaussians [Wang and Isola, 2020]. Note that in Figure 3, SSL features are from h(f(X)) rather than f(X).
Based on Assumptions 1-3, we present Theorem 3 to analyze the effect of $L_c$. Let $e_+ = \mathbb{P}(\widetilde{Y} = 0 \mid Y = 1)$ and $e_- = \mathbb{P}(\widetilde{Y} = 1 \mid Y = 0)$. We have:

Theorem 3. When $e_- = e_+ = e$ and $\mathbb{P}(Y = 1) = \mathbb{P}(Y = 0)$, minimizing $L_c$ on the DNN results in the following solution:

$$\mathbb{E}_{\mathcal{D}}[\mathbb{1}(g(f(X)) \neq Y)] = e \cdot \bigg( \frac{1}{2} - \frac{1}{2 + \Delta(\Sigma, \mu_1, \mu_2)} \bigg),$$

where $\Delta(\Sigma, \mu_1, \mu_2) := 8 \cdot \text{tr}(\Sigma) / \|\mu_1 - \mu_2\|^2$ and tr(·) denotes the matrix trace.
Theorem 3 reveals a clean relationship between the quality of SSL features (given by h(f(X))) and the network robustness on noisy samples. When $\text{tr}(\Sigma) \to 0$ or $\|\mu_1 - \mu_2\| \to \infty$, the expected risk of the model $\mathbb{E}_{\mathcal{D}}[\mathbb{1}(g(f(X)) \neq Y)]$ approaches 0, i.e., for any sample x, the model will predict its clean label. Note that the proof of Theorem 3 does not rely on any SSL training process, which makes it possible to use pre-trained encoders from other tasks. In the Appendix, we also provide a theoretical understanding of the regularizer from the perspective of information theory.
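To see this dependence numerically, the short Python snippet below plugs hypothetical SSL feature statistics into the expression from Theorem 3 (as reconstructed above); the specific numbers are illustrative only:

```python
# Hypothetical SSL feature statistics illustrating Theorem 3:
# tighter clusters (small tr(Sigma)) or better-separated means
# drive the predicted risk on corrupted samples toward zero.
e = 0.4                                   # symmetric flip rate
for tr_sigma, gap in [(4.0, 2.0), (1.0, 2.0), (1.0, 8.0)]:
    delta = 8.0 * tr_sigma / gap ** 2     # Delta(Sigma, mu1, mu2)
    risk = e * (0.5 - 1.0 / (2.0 + delta))
    print(f"tr(Sigma)={tr_sigma}, ||mu1-mu2||={gap} -> risk={risk:.3f}")
# Prints risks 0.160, 0.100, 0.012: the risk vanishes as Delta -> 0.
```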
Empirical Evidences
The Effect of Representations
We perform experiments to study the effect of representations on learning with noisy labels. Figure 5 shows the learning dynamics on symmetric label noise, while Figure 4 shows the learning dynamics on asymmetric and instance-dependent label noise. From these two figures, given a good representation, we have some key observations. These four observations are consistent with our analyses in Section 3. We also find an interesting phenomenon in Figure 4 (b): downsampling (making $\mathbb{P}(\widetilde{Y} = i) = \mathbb{P}(\widetilde{Y} = j)$ in the noisy dataset) is very helpful for instance-dependent label noise, since down-sampling can reduce noise rate imbalance (we provide an illustration of the binary case in the Appendix), which could lower the estimation error. Ideally, if down-sampling could make the noise-rate pattern symmetric, we could achieve noise consistency (Definition 1). In the next subsection, we perform experiments to show that our proposed framework can well compromise between a fixed and unfixed encoder to alleviate the over-fitting problem.
The Performance of Using Representations as a Regularizer
Experiments on synthetic label noise. We first show that the regularizer can alleviate the over-fitting problem when $\ell$ in Equation (2) is simply chosen to be the Cross-Entropy (CE) loss. The experiments are shown in Figure 6. The regularizer is added at the very beginning, since recent studies show that a randomly initialized network tends to fit clean labels first [Arpit et al., 2017], and we hope the regularizer can improve the network robustness when the DNN begins to fit noisy labels. From Figure 6 (c) (d), for CE training, the performance first increases and then decreases since the network over-fits noisy labels as training proceeds. However, for CE with the regularizer, the performance is more stable after it reaches the peak; for the 60% noise rate, the peak is also much higher than for vanilla CE training. For Figure 6 (a) (b), since the network is not randomly initialized, it over-fits noisy labels at the very beginning and the performance gradually decreases. However, CE with the regularizer helps the network gradually increase the performance after it reaches the lowest point (over-fitting state). This observation supports Theorem 3 that the regularizer can prevent the DNN from over-fitting. Next, we show the regularizer can complement other loss functions to further improve performance on learning with noisy labels, i.e., we choose $\ell$ in Equation (2) to be other robust losses.
The overall experiments are shown in Table 1. It can be observed that our regularizer complements other loss functions or methods and improves their performance, especially the last-epoch accuracy. Note that we do not apply any tricks when incorporating other losses, since we mainly want to observe the effect of the regularizer. It is possible to use other techniques to further improve performance, such as multi-model training [Li et al., 2020a] or mixup [Zhang et al., 2018a].
Experiments on real-world label noise. We also test our regularizer on datasets with real-world label noise: CIFAR10N and CIFAR100N [Wei et al., 2021b] and Clothing1M [Xiao et al., 2015]. The results are shown in Table 2 and Table 3. We find that our regularizer is also effective on datasets with real-world label noise, even when $\ell$ in Equation (2) is simply chosen to be Cross Entropy. More experiments, analyses, and ablation studies can be found in the Appendix.

Proof for Lemma 1

Let $T := \min_{X,i} T_{ii}(X)$.
Considering a general instance-dependent label noise where $T_{ij}(X) = \mathbb{P}(\tilde{Y} = j \mid Y = i, X)$, we have [Cheng et al., 2021]:
$$
\begin{aligned}
\mathbb{E}_{\tilde{D}}[\ell(C(X), \tilde{Y})] &= \sum_{j\in[K]}\int_x \mathbb{P}(\tilde{Y}=j, X=x)\,\ell(C(x), j)\,dx \\
&= \sum_{i\in[K]}\sum_{j\in[K]}\int_x \mathbb{P}(\tilde{Y}=j, Y=i, X=x)\,\ell(C(x), j)\,dx \\
&= \sum_{i\in[K]}\sum_{j\in[K]}\mathbb{P}(Y=i)\int_x \mathbb{P}(\tilde{Y}=j \mid Y=i, X=x)\,\mathbb{P}(X=x \mid Y=i)\,\ell(C(x), j)\,dx \\
&= \sum_{i\in[K]}\sum_{j\in[K]}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}\big[T_{ij}(X)\,\ell(C(X), j)\big] \\
&= \sum_{i\in[K]}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}\big[T_{ii}(X)\,\ell(C(X), i)\big] + \sum_{i\in[K]}\sum_{j\in[K], j\neq i}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}\big[T_{ij}(X)\,\ell(C(X), j)\big] \\
&= T\sum_{i\in[K]}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}\big[\ell(C(X), i)\big] + \sum_{i\in[K]}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}\big[(T_{ii}(X)-T)\,\ell(C(X), i)\big] \\
&\quad + \sum_{i\in[K]}\sum_{j\in[K], j\neq i}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}\big[T_{ij}(X)\,\ell(C(X), j)\big] \\
&= T\,\mathbb{E}_D[\ell(C(X), Y)] + \sum_{j\in[K]}\sum_{i\in[K]}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}\big[U_{ij}(X)\,\ell(C(X), j)\big],
\end{aligned}
$$
where $U_{ij}(X) = T_{ij}(X)$ for all $i \neq j$, and $U_{jj}(X) = T_{jj}(X) - T$.
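Since the decomposition above is an algebraic identity, it can be verified numerically; the following sketch does so exactly for a discrete toy distribution with the 0-1 loss (all names and sizes are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 3, 5                                   # number of classes and of discrete inputs
pX = rng.dirichlet(np.ones(M))                # P(X = x)
pYgX = rng.dirichlet(np.ones(K), size=M)      # P(Y = i | X = x)
T = rng.dirichlet(np.ones(K), size=(M, K))    # T[x, i, j] = P(Ytilde = j | Y = i, X = x)
C = rng.integers(K, size=M)                   # an arbitrary classifier C(x)
loss = 1.0 - np.eye(K)[C]                     # loss[x, j] = 0-1 loss l(C(x), j)

pXY = pX[:, None] * pYgX                      # joint P(X = x, Y = i)
lhs = np.einsum('xi,xij,xj->', pXY, T, loss)  # noisy risk E_Dtilde[l(C(X), Ytilde)]

Tmin = T[:, np.arange(K), np.arange(K)].min()         # T := min_{X,i} T_ii(X)
U = T.copy()
U[:, np.arange(K), np.arange(K)] -= Tmin              # U_jj = T_jj - T; U_ij = T_ij otherwise
rhs = Tmin * np.einsum('xi,xi->', pXY, loss) \
      + np.einsum('xi,xij,xj->', pXY, U, loss)        # right-hand side of Lemma 1
assert np.isclose(lhs, rhs)                           # the identity holds exactly
```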
Proof for Lemma 2
Consider the symmetric label noise. Let $T(X) \equiv T$ for all $X$, where $T_{ii} = 1-\varepsilon$ and $T_{ij} = \frac{\varepsilon}{K-1}$ for all $i \neq j$. The general form in Lemma 1 can be simplified as
$$
\begin{aligned}
\mathbb{E}_{\tilde{D}}[\ell(C(X), \tilde{Y})] &= (1-\varepsilon)\,\mathbb{E}_D[\ell(C(X), Y)] + \frac{\varepsilon}{K-1}\sum_{j\in[K]}\sum_{i\in[K], i\neq j}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}[\ell(C(X), j)] \\
&= \Big(1-\varepsilon-\frac{\varepsilon}{K-1}\Big)\,\mathbb{E}_D[\ell(C(X), Y)] + \frac{\varepsilon}{K-1}\sum_{j\in[K]}\sum_{i\in[K]}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}[\ell(C(X), j)].
\end{aligned}
$$
When $\ell$ is the 0-1 loss, we have
$$\sum_{j\in[K]}\sum_{i\in[K]}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}[\ell(C(X), j)] = 1 \quad\text{and}\quad \mathbb{E}_{\tilde{D}}[\ell(C(X), \tilde{Y})] = \Big(1-\frac{K\varepsilon}{K-1}\Big)\,\mathbb{E}_D[\ell(C(X), Y)] + \frac{\varepsilon}{K-1}.$$
Consider the asymmetric label noise. Let $T(X) \equiv T$ for all $X$, where $T_{ii} = 1-\varepsilon$ and $T_{i,(i+1)_K} = \varepsilon$.
The general form in Lemma 1 can be simplified as
$$\mathbb{E}_{\tilde{D}}[\ell(C(X), \tilde{Y})] = (1-\varepsilon)\,\mathbb{E}_D[\ell(C(X), Y)] + \varepsilon\sum_{i\in[K]}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}\big[\ell(C(X), (i+1)_K)\big].$$
Proof for Theorem 1
For symmetric noise, we have:
$$\mathbb{E}_D\big[\ell(\tilde{C}_{\tilde{D}}(X), Y)\big] = \frac{\mathbb{E}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big]}{1 - K\varepsilon/(K-1)} - \frac{\varepsilon/(K-1)}{1 - K\varepsilon/(K-1)}.$$
Thus the learning error is
$$\mathbb{E}_D\big[\ell(\tilde{C}_{\tilde{D}}(X), Y)\big] - \mathbb{E}_D\big[\ell(C_D(X), Y)\big] = \frac{1}{1 - K\varepsilon/(K-1)}\Big(\mathbb{E}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big] - \mathbb{E}_{\tilde{D}}\big[\ell(C_D(X), \tilde{Y})\big]\Big).$$
Let $\hat{\mathbb{E}}_{\tilde{D}}\big[\ell(C(X), \tilde{Y})\big] := \frac{1}{N}\sum_{n\in[N]} \ell(C(x_n), \tilde{y}_n)$.
Noting that $\hat{\mathbb{E}}_{\tilde{D}}\big[\ell(C_D(X), \tilde{Y})\big] - \hat{\mathbb{E}}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big] \ge 0,$
we have the following upper bound:
$$
\begin{aligned}
&\mathbb{E}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big] - \mathbb{E}_{\tilde{D}}\big[\ell(C_D(X), \tilde{Y})\big] \\
&\le \mathbb{E}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big] - \hat{\mathbb{E}}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big] + \hat{\mathbb{E}}_{\tilde{D}}\big[\ell(C_D(X), \tilde{Y})\big] - \mathbb{E}_{\tilde{D}}\big[\ell(C_D(X), \tilde{Y})\big] \\
&\le \big|\mathbb{E}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big] - \hat{\mathbb{E}}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big]\big| + \big|\hat{\mathbb{E}}_{\tilde{D}}\big[\ell(C_D(X), \tilde{Y})\big] - \mathbb{E}_{\tilde{D}}\big[\ell(C_D(X), \tilde{Y})\big]\big|.
\end{aligned}
$$
Recall $C \in \mathcal{C}$. Denote the VC-dimension of $\mathcal{C}$ by $|\mathcal{C}|$ [Bousquet et al., 2003; Devroye et al., 2013]. By the Hoeffding inequality over the function space $\mathcal{C}$, with probability at least $1 - \delta$, we have
$$\big|\mathbb{E}_{\tilde{D}}[\ell(\tilde{C}_{\tilde{D}})] - \hat{\mathbb{E}}_{\tilde{D}}[\ell(\tilde{C}_{\tilde{D}})]\big| + \big|\hat{\mathbb{E}}_{\tilde{D}}[\ell(C_D)] - \mathbb{E}_{\tilde{D}}[\ell(C_D)]\big| \le 2\max_{C\in\mathcal{C}}\big|\mathbb{E}_{\tilde{D}}[\ell(C)] - \hat{\mathbb{E}}_{\tilde{D}}[\ell(C)]\big| \le 16\sqrt{\frac{|\mathcal{C}|\log(Ne/|\mathcal{C}|) + \log(8/\delta)}{2N}}.$$
Thus
$$\mathbb{E}_D\big[\ell(\tilde{C}_{\tilde{D}}(X), Y)\big] - \mathbb{E}_D\big[\ell(C_D(X), Y)\big] \le 16\sqrt{\frac{|\mathcal{C}|\log(Ne/|\mathcal{C}|) + \log(8/\delta)}{2N\big(1 - \frac{K\varepsilon}{K-1}\big)^2}}.$$
Similarly, for asymmetric noise, we have:
$$\mathbb{E}_D\big[\ell(\tilde{C}_{\tilde{D}}(X), Y)\big] = \frac{\mathbb{E}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big]}{1-\varepsilon} - \mathrm{Bias}(\tilde{C}_{\tilde{D}}),$$
where
$$\mathrm{Bias}(\tilde{C}_{\tilde{D}}) = \frac{\varepsilon}{1-\varepsilon}\sum_{i\in[K]}\mathbb{P}(Y=i)\,\mathbb{E}_{D|Y=i}\big[\ell(\tilde{C}_{\tilde{D}}(X), (i+1)_K)\big].$$
Thus the learning error is
$$\mathbb{E}_D\big[\ell(\tilde{C}_{\tilde{D}}(X), Y)\big] - \mathbb{E}_D\big[\ell(C_D(X), Y)\big] = \frac{1}{1-\varepsilon}\Big(\mathbb{E}_{\tilde{D}}\big[\ell(\tilde{C}_{\tilde{D}}(X), \tilde{Y})\big] - \mathbb{E}_{\tilde{D}}\big[\ell(C_D(X), \tilde{Y})\big]\Big) + \mathrm{Bias}(C_D) - \mathrm{Bias}(\tilde{C}_{\tilde{D}}).$$
By repeating the derivation for the symmetric noise, we have
$$\mathbb{E}_D\big[\ell(\tilde{C}_{\tilde{D}}(X), Y)\big] - \mathbb{E}_D\big[\ell(C_D(X), Y)\big] \le \frac{16}{1-\varepsilon}\sqrt{\frac{|\mathcal{C}|\log(Ne/|\mathcal{C}|) + \log(8/\delta)}{2N}} + \mathrm{Bias}(C_D) - \mathrm{Bias}(\tilde{C}_{\tilde{D}}).$$
Proof for Theorem 2
From Lemma A.4 in Shalev-Shwartz and Ben-David [2014] and our Theorem 1, we know
$$\mathbb{E}\big|\mathrm{Error}_E(C_D, \tilde{C}_{\tilde{D}})\big| \le 16\sqrt{\frac{|\mathcal{C}|\log(4Ne/|\mathcal{C}|) + 2}{2N}}.$$
Therefore,
$$
\begin{aligned}
&\mathbb{E}_\delta\big|\Delta_E(\mathcal{C}_1, \varepsilon, \delta)\big| + \Delta_A(\mathcal{C}_1) - \Big(\mathbb{E}_\delta\big|\Delta_E(\mathcal{C}_2, \varepsilon, \delta)\big| + \Delta_A(\mathcal{C}_2)\Big) \ge 0 \\
\Leftrightarrow\ & 16\sqrt{\frac{|\mathcal{C}_1|\log(4Ne/|\mathcal{C}_1|) + 2}{2N\big(1-\frac{K\varepsilon}{K-1}\big)^2}} - 16\sqrt{\frac{|\mathcal{C}_2|\log(4Ne/|\mathcal{C}_2|) + 2}{2N\big(1-\frac{K\varepsilon}{K-1}\big)^2}} + \frac{\alpha_{\mathcal{C}^*}}{\mathcal{M}_{\mathcal{C}_1}} - \frac{\alpha_{\mathcal{C}^*}}{\mathcal{M}_{\mathcal{C}_2}} \ge 0 \\
\Leftrightarrow\ & 1 - \frac{K\varepsilon}{K-1} \le \frac{16}{\sqrt{2N}}\cdot\frac{\sqrt{|\mathcal{C}_1|\log(4Ne/|\mathcal{C}_1|)} - \sqrt{|\mathcal{C}_2|\log(4Ne/|\mathcal{C}_2|)}}{\alpha_{\mathcal{C}^*}/\mathcal{M}_{\mathcal{C}_2} - \alpha_{\mathcal{C}^*}/\mathcal{M}_{\mathcal{C}_1}}.
\end{aligned}
$$
Proof for Theorems in Section 4
Lemma 4. If $X$ and $Y$ are independent and Gaussian, $X \sim \mathcal{N}(\mu_X, \Sigma_X)$ and $Y \sim \mathcal{N}(\mu_Y, \Sigma_Y)$, then $\mathbb{E}_{X,Y}(\|X - Y\|^2) = \|\mu_X - \mu_Y\|^2 + \mathrm{tr}(\Sigma_X + \Sigma_Y)$.
Proof for Theorem 3
Before the derivation, we define some notation for better presentation. Following the notation in Section 4, define the labels of $X^T$ as $Y^T$ and the labels of $X^F$ as $Y^F$. Under the label noise, it is easy to verify that
$$\mathbb{P}(Y^T = 1) = \frac{\mathbb{P}(Y=1)(1-e_+)}{\mathbb{P}(Y=1)(1-e_+) + \mathbb{P}(Y=0)(1-e_-)}, \qquad \mathbb{P}(Y^F = 1) = \frac{\mathbb{P}(Y=0)\,e_-}{\mathbb{P}(Y=0)\,e_- + \mathbb{P}(Y=1)\,e_+}.$$
Let $p_1 = \mathbb{P}(Y^T = 1)$ and $p_2 = \mathbb{P}(Y^F = 1)$, and abbreviate $g(f(X))$ and $h(f(X))$ as $gf(X)$ and $hf(X)$.
In the case of binary classification, $gf(x)$ is a one-dimensional value denoting the network's prediction that $x$ belongs to $Y = 1$. $\mathcal{L}_c$ can be written as:
$$
\begin{aligned}
\mathcal{L}_c &= \mathbb{E}_{X^T, X^F}\underbrace{\Big(\frac{\|gf(X^T) - gf(X^F)\|_1}{m_1} - \frac{\|hf(X^T) - hf(X^F)\|_2^2}{m_2}\Big)^2}_{\text{denoted as } \Psi(X^T, X^F)} \\
&\overset{(a)}{=} \mathbb{E}_{(X^T, Y^T), (X^F, Y^F)}\,\Psi(X^T, X^F) \\
&= p_1 p_2\,\mathbb{E}_{X^T_+, X^F_+}\Psi(X^T_+, X^F_+) + (1-p_1)\,p_2\,\mathbb{E}_{X^T_-, X^F_+}\Psi(X^T_-, X^F_+) \\
&\quad + p_1(1-p_2)\,\mathbb{E}_{X^T_+, X^F_-}\Psi(X^T_+, X^F_-) + (1-p_1)(1-p_2)\,\mathbb{E}_{X^T_-, X^F_-}\Psi(X^T_-, X^F_-)
\end{aligned}
$$
where $m_1$ and $m_2$ are the normalization terms from Equation (3). Step (a) holds because $\Psi(X^T, X^F)$ does not depend on the labels. We derive $\mathbb{E}\,\Psi(X^T_+, X^F_+)$ as follows:
$$
\begin{aligned}
\mathbb{E}_{X^T_+, X^F_+}\,\Psi(X^T_+, X^F_+) &\overset{(b)}{=} \mathbb{E}_{X^T_+, X^F_+}\Big(\frac{\|1-gf(X^F_+)\|_1}{m_1} - \frac{\|hf(X^T_+)-hf(X^F_+)\|_2^2}{m_2}\Big)^2 \\
&\overset{(c)}{=} \mathbb{E}_{X^T_+, X^F_+}\Big(\frac{1-gf(X^F_+)}{m_1} - \frac{\|hf(X^T_+)-hf(X^F_+)\|_2^2}{m_2}\Big)^2 \\
&\overset{(d)}{=} \mathbb{E}_{X^T_+, X^F_+}\Big(\frac{gf(X^F_+)}{m_1} - \Big(\frac{1}{m_1} - \frac{\|hf(X^T_+)-hf(X^F_+)\|_2^2}{m_2}\Big)\Big)^2
\end{aligned}
$$
(b) is satisfied because, from Assumption 1, the DNN has confident predictions on clean samples. (c) is satisfied because $gf(X)$ is a one-dimensional value ranging from 0 to 1. From Assumption 3, $hf(X_+)$ and $hf(X_-)$ follow Gaussian distributions with parameters $(\mu_1, \Sigma)$ and $(\mu_2, \Sigma)$. Thus, according to Lemma 4, we have
$\mathbb{E}_{X^T_+, X^F_+}\|hf(X^T_+)-hf(X^F_+)\|_2^2 = \|\mu_1-\mu_2\|^2 + 2\,\mathrm{tr}(\Sigma)$. Similarly, one can calculate $\mathbb{E}_{X^T_-, X^F_+}\|hf(X^T_-)-hf(X^F_+)\|_2^2 = 2\,\mathrm{tr}(\Sigma)$. It can be seen that (d) is a function of $gf(X^F_+)$. Similarly, $\Psi(X^T_-, X^F_+)$ is also a function of $gf(X^F_+)$, while $\Psi(X^T_+, X^F_-)$ and $\Psi(X^T_-, X^F_-)$ are functions of $gf(X^F_-)$. Denote $d(+,+) = \mathbb{E}_{X^T_+, X^F_+}\|hf(X^T_+)-hf(X^F_+)\|_2^2$, and analogously for $d(-,+)$, $d(+,-)$, and $d(-,-)$.
After organizing $\Psi(X^T_+, X^F_+)$ and $\Psi(X^T_-, X^F_+)$, we have:
$$
\begin{aligned}
&\min_{gf(X^F_+)}\ p_1 p_2\,\mathbb{E}_{X^T_+, X^F_+}\Psi(X^T_+, X^F_+) + (1-p_1)\,p_2\,\mathbb{E}_{X^T_-, X^F_+}\Psi(X^T_-, X^F_+) \\
\Rightarrow\ &\min_{gf(X^F_+)}\ \big(\mathbb{E}_{X^F_+} gf(X^F_+)\big)^2 - \Big(2p_1\Big(1-\frac{m_1\, d(+,+)}{m_2}\Big) + 2(1-p_1)\,\frac{m_1\, d(-,+)}{m_2}\Big)\,\mathbb{E}_{X^F_+} gf(X^F_+) \\
&\qquad + \text{constant with respect to } gf(X^F_+) \tag{6}
\end{aligned}
$$
Note that in Equation (6) we use $(\mathbb{E}_{X^F_+} gf(X^F_+))^2$ to approximate $\mathbb{E}_{X^F_+} gf(X^F_+)^2$, since from Assumption 2, $\mathrm{var}(g(f(X^F_+))) \to 0$. Now we calculate $m_1$ and $m_2$ from Equation (3):
$$
\begin{aligned}
m_1 &= p_1 p_2\,\big(1-\mathbb{E}_{X^F_+} gf(X^F_+)\big) + (1-p_1)\,p_2\,\mathbb{E}_{X^F_+} gf(X^F_+) \\
&\quad + p_1(1-p_2)\,\big(1-\mathbb{E}_{X^F_-} gf(X^F_-)\big) + (1-p_1)(1-p_2)\,\mathbb{E}_{X^F_-} gf(X^F_-) \\
m_2 &= p_1 p_2\, d(+,+) + (1-p_1)\,p_2\, d(-,+) + p_1(1-p_2)\, d(+,-) + (1-p_1)(1-p_2)\, d(-,-)
\end{aligned}\tag{7}
$$
Under the condition $\mathbb{P}(Y=1) = \mathbb{P}(Y=0)$ and $e_- = e_+$, we have $p_1 = p_2 = \frac{1}{2}$, $m_2 = \frac{4\,\mathrm{tr}(\Sigma) + \|\mu_1-\mu_2\|^2}{2}$, and $m_1 = \frac{1}{2}$, which are constant with respect to $\mathbb{E}_{X^F_+} gf(X^F_+)$ and $\mathbb{E}_{X^F_-} gf(X^F_-)$ in Equation (7). Thus Equation (6) is a quadratic equation with respect to $\mathbb{E}_{X^F_+} gf(X^F_+)$. Then, when Equation (6) achieves its global minimum, we have:
$$\mathbb{E}_{X^F_+} gf(X^F_+) = p_1 - \frac{m_1}{m_2}\big(p_1\, d(+,+) - (1-p_1)\, d(-,+)\big) = \frac{1}{2} - \frac{1}{2 + \frac{8\,\mathrm{tr}(\Sigma)}{\|\mu_1-\mu_2\|^2}} \tag{8}$$
Similarly, organizing $\Psi(X^T_+, X^F_-)$ and $\Psi(X^T_-, X^F_-)$ gives the solution for $\mathbb{E}_{X^F_-} gf(X^F_-)$:
$$\mathbb{E}_{X^F_-} gf(X^F_-) = p_1 + \frac{m_1}{m_2}\big(p_1\, d(-,-) - (1-p_1)\, d(+,-)\big) = \frac{1}{2} + \frac{1}{2 + \frac{8\,\mathrm{tr}(\Sigma)}{\|\mu_1-\mu_2\|^2}} \tag{9}$$
Denote $\Delta(\Sigma, \mu_1, \mu_2) = 8\,\mathrm{tr}(\Sigma)/\|\mu_1-\mu_2\|^2$. Now we can write the expected risk as:
$$
\begin{aligned}
\mathbb{E}_D\big[\mathbb{1}(g(f(X)), Y)\big] &= (1-e)\,\mathbb{E}_{X^T, Y}\,\mathbb{1}\big(g(f(X^T)), Y\big) + e\,\mathbb{E}_{X^F, Y}\,\mathbb{1}\big(g(f(X^F)), Y\big) \\
&\overset{(a)}{=} e\,\mathbb{E}_{X^F, Y}\,\mathbb{1}\big(g(f(X^F)), Y\big) \\
&\overset{(b)}{=} e\cdot\Big(\frac{1}{2}\,\mathbb{E}_{X^F_+, Y=0}\,\mathbb{1}\big(g(f(X^F_+)), 0\big) + \frac{1}{2}\,\mathbb{E}_{X^F_-, Y=1}\,\mathbb{1}\big(g(f(X^F_-)), 1\big)\Big) \\
&\overset{(c)}{=} e\cdot\Big(\frac{1}{2} - \frac{1}{2 + \Delta(\Sigma, \mu_1, \mu_2)}\Big) \tag{10}
\end{aligned}
$$
(a) is satisfied because of Assumption 1, i.e., the model can perfectly memorize clean samples. (b) is satisfied because of the balanced label and error rate assumption. (c) is satisfied by taking the solutions from Equations (8) and (9).
High-level understanding of the regularizer
Even though we have established Theorem 3 to show that SL features can benefit from the structure of SSL features through regularization, there is still a lack of high-level understanding of what the regularization is exactly doing. Here we provide an insight in Theorem 4, which shows that the regularization implicitly maximizes the mutual information between SL features and SSL features.
Theorem 4. Suppose there exists a function $\xi$ such that $C(X) = \xi(h(f(X)))$. The mutual information $I(h(f(X)), C(X))$ achieves its maximum when $\mathcal{L}_c = 0$ in Eqn. (5).
The above result facilitates a better understanding of what the regularizer is exactly doing. Note that Mutual Information itself has several popular estimators [Belghazi et al., 2018; Hjelm et al., 2018]. It is a very interesting future direction to develop regularizers based on MI to perform regularization by utilizing SSL features.
Proof for Theorem 4: We first refer to a property of Mutual Information:
$$I(X; Y) = I(\psi(X); \phi(Y)) \tag{11}$$
where ψ and φ are any invertible functions. This property shows that mutual information is invariant to invertible transformations [Cover, 1999]. Thus to prove the theorem, we only need to prove that ξ in Theorem 4 must be an invertible function when Equation (5) is minimized to 0. Since when ξ is invertible, I(h(f (X)), C(X)) = I(h(f (X)), ξ(h(f (X)))) = I(h(f (X)), h(f (X))).
We prove this by contradiction.
Let $t_i = h(f(x_i))$ and $s_i = g(f(x_i))$. Suppose $\xi$ is not invertible; then there must exist $t_i \neq t_j$ with $\xi(t_i) = \xi(t_j)$, i.e., $s_i = s_j$. However, under this condition, $\|t_i - t_j\| \neq 0$ while $\|s_i - s_j\| = 0$, so Equation (5) cannot be minimized to 0. Thus, when Equation (5) is minimized to 0, $\xi$ must be an invertible function.
Proof done.
Proof for Lemma 4
By the independence condition, $Z = X - Y$ also follows a Gaussian distribution with parameters $(\mu_X - \mu_Y,\ \Sigma_X + \Sigma_Y)$.
Write $Z$ as $Z = \mu + LU$, where $U$ is standard Gaussian, $\mu = \mu_X - \mu_Y$, and $LL^T = \Sigma_X + \Sigma_Y$. Thus
$$\|Z\|^2 = Z^T Z = \mu^T\mu + \mu^T L U + U^T L^T \mu + U^T L^T L U \tag{12}$$
Since $U$ is standard Gaussian, $\mathbb{E}(U) = 0$. We have
$$\mathbb{E}(\|Z\|^2) = \mu^T\mu + \mathbb{E}(U^T L^T L U) = \mu^T\mu + \mathbb{E}\Big(\sum_{k,l}(L^T L)_{k,l}\,U_k U_l\Big) \overset{(a)}{=} \mu^T\mu + \sum_{k}(L^T L)_{k,k} = \mu^T\mu + \mathrm{tr}(L^T L) = \|\mu_X - \mu_Y\|^2 + \mathrm{tr}(\Sigma_X + \Sigma_Y) \tag{13}$$
(a) is satisfied because $U$ is standard Gaussian, thus $\mathbb{E}(U_k^2) = 1$ and $\mathbb{E}(U_k U_l) = 0$ for $k \neq l$. Proof done.
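As a quick sanity check, Lemma 4 can be confirmed by Monte Carlo simulation; the dimensions and sample size below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
muX, muY = rng.normal(size=d), rng.normal(size=d)
A, B = rng.normal(size=(d, d)), rng.normal(size=(d, d))
SigmaX, SigmaY = A @ A.T, B @ B.T              # random PSD covariances

X = rng.multivariate_normal(muX, SigmaX, size=200_000)
Y = rng.multivariate_normal(muY, SigmaY, size=200_000)
mc = np.mean(np.sum((X - Y) ** 2, axis=1))     # Monte Carlo E||X - Y||^2
closed = np.sum((muX - muY) ** 2) + np.trace(SigmaX + SigmaY)
print(mc, closed)                              # the two agree up to sampling error
```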
Illustrating down-sampling strategy
We illustrate in the case of binary classification with $e_+ + e_- < 1$. Suppose the dataset is balanced and, at the initial state, $e_+ > e_-$. After down-sampling, the noise rates become $e^*_+$ and $e^*_-$. We aim to prove two propositions:

Proposition 1. If $e_+$ and $e_-$ are known, the optimal down-sampling rate can be calculated from $e_+$ and $e_-$ to make $e^*_+ = e^*_-$.

Proposition 2. If $e_+$ and $e_-$ are not known and the down-sampling strategy is to make $\mathbb{P}(\tilde{Y}=1) = \mathbb{P}(\tilde{Y}=0)$, then $0 < e^*_+ - e^*_- < e_+ - e_-$.
Proof for Proposition 1: Since the dataset is balanced with initial $e_+ > e_-$, we have $\mathbb{P}(\tilde{Y}=1) < \mathbb{P}(\tilde{Y}=0)$. Thus down-sampling is conducted on samples whose observed label is 0. Suppose the random down-sampling rate is $r$; then $e^*_+ = \frac{r\,e_+}{1-e_+ + r\,e_+}$ and $e^*_- = \frac{e_-}{r(1-e_-) + e_-}$. If $e^*_+ = e^*_-$, we have:
$$\frac{r\,e_+}{1-e_+ + r\,e_+} = \frac{e_-}{r(1-e_-) + e_-} \tag{14}$$
Thus the optimal down-sampling rate is $r = \sqrt{\frac{e_-(1-e_+)}{e_+(1-e_-)}}$, which can be calculated if $e_-$ and $e_+$ are known.
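The optimal rate can be checked numerically with a small sketch (the square root above follows from solving Equation (14), and the values below are illustrative):

```python
import numpy as np

def post_rates(e_plus, e_minus, r):
    """Noise rates after down-sampling the observed class Ytilde = 0 at rate r."""
    e_plus_star = r * e_plus / (1 - e_plus + r * e_plus)
    e_minus_star = e_minus / (e_minus + r * (1 - e_minus))
    return e_plus_star, e_minus_star

e_plus, e_minus = 0.4, 0.1
r_opt = np.sqrt(e_minus * (1 - e_plus) / (e_plus * (1 - e_minus)))  # Proposition 1
print(post_rates(e_plus, e_minus, r_opt))  # both rates coincide (~0.214, ~0.214)
```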
Proof for Proposition 2: If the down-sampling strategy is to make $\mathbb{P}(\tilde{Y}=1) = \mathbb{P}(\tilde{Y}=0)$, then $r\,(e_+ + 1 - e_-) = 1 - e_+ + e_-$, so $r = \frac{1-e_+ + e_-}{1-e_- + e_+}$. Thus $e^*_+$ can be calculated as:
$$e^*_+ = \frac{r\,e_+}{1-e_+ + r\,e_+} = \frac{(1-e_+ + e_-)\,e_+}{(1-e_+)(1-e_- + e_+) + e_+(1-e_+ + e_-)}$$
Denote $\alpha = \frac{1-e_+ + e_-}{(1-e_+)(1-e_- + e_+) + e_+(1-e_+ + e_-)}$, so that $e^*_+ = \alpha\, e_+$. Since $e_+ > e_-$, we have $1-e_- + e_+ > 1-e_+ + e_-$, hence
$$\alpha = \frac{1-e_+ + e_-}{(1-e_+)(1-e_- + e_+) + e_+(1-e_+ + e_-)} < \frac{1-e_+ + e_-}{(1-e_+)(1-e_+ + e_-) + e_+(1-e_+ + e_-)} = 1.$$
Similarly, e * − can be calculated as:
$$e^*_- = \frac{e_-}{e_- + r(1-e_-)} = \frac{(1-e_- + e_+)\,e_-}{e_-(1-e_- + e_+) + (1-e_-)(1-e_+ + e_-)}$$
Denote $\beta = \frac{1-e_- + e_+}{e_-(1-e_- + e_+) + (1-e_-)(1-e_+ + e_-)}$, so that $e^*_- = \beta\, e_-$. Since $e_+ > e_-$, we have $1-e_- + e_+ > 1-e_+ + e_-$, hence
$$\beta = \frac{1-e_- + e_+}{e_-(1-e_- + e_+) + (1-e_-)(1-e_+ + e_-)} > \frac{1-e_- + e_+}{e_-(1-e_- + e_+) + (1-e_-)(1-e_- + e_+)} = 1.$$
Since $\alpha\, e_+ < e_+$ and $\beta\, e_- > e_-$, we have $e^*_+ - e^*_- = \alpha\, e_+ - \beta\, e_- < e_+ - e_-$. Next, we prove $e^*_+ > e^*_-$, following the derivation below:
$$
\begin{aligned}
e^*_+ > e^*_- &\Longleftrightarrow \frac{r\,e_+}{1-e_+ + r\,e_+} > \frac{e_-}{e_- + r(1-e_-)} \Longleftrightarrow r > \sqrt{\frac{e_-(1-e_+)}{e_+(1-e_-)}} \\
&\Longleftrightarrow \frac{1-e_+ + e_-}{1-e_- + e_+} > \sqrt{\frac{e_-(1-e_+)}{e_+(1-e_-)}} \\
&\Longleftrightarrow e_+(1-e_+) + \frac{e_+\, e_-^2}{1-e_+} > e_-(1-e_-) + \frac{e_-\, e_+^2}{1-e_-}
\end{aligned}\tag{15}
$$
Let $f(e_+) = e_+(1-e_+) + \frac{e_+\, e_-^2}{1-e_+} - e_-(1-e_-) - \frac{e_-\, e_+^2}{1-e_-}$. Since we have assumed $e_- < e_+$ and $e_- + e_+ < 1$, proving $e^*_+ > e^*_-$ is identical to proving $f(e_+) > 0$ when $e_- < e_+ < 1-e_-$. Firstly, it is easy to verify that when $e_+ = e_-$ or $e_+ = 1-e_-$, $f(e_+) = 0$. From the Mean Value Theorem, there must exist a point $e_0$ satisfying $f'(e_0) = 0$ with $e_- < e_0 < 1-e_-$. Next, we differentiate $f(e_+)$ as follows:
$$f'(e_+) = \frac{(1-e_+)^2(1-e_-) + e_-^2(1-e_-) - 2e_+(1-e_+)^2}{(1-e_+)^2(1-e_-)} \tag{16}$$
It can be verified that $f'(e_-) = \frac{(1-2e_-)^2}{(1-e_-)^2} > 0$ and $f'(1-e_-) = \frac{0}{e_-^2(1-e_-)} = 0$. Further differentiating $f'(e_+)$, we get that when $e_+ < 1 - \big((1-e_-)\,e_-^2\big)^{1/3}$, $f''(e_+) < 0$; combined with $f(e_-) = f(1-e_-) = 0$, this implies $f$ first increases and then decreases on the interval, so $f(e_+) > 0$ for $e_- < e_+ < 1-e_-$.
We depict a figure in Figure 7 to better show the effect of the down-sampling strategy. It can be seen that the curves in the figure well support our proposition and proof. When $e_+ - e_-$ is large, the down-sampling strategy that makes $\mathbb{P}(\tilde{Y}=1) = \mathbb{P}(\tilde{Y}=0)$ can markedly decrease the gap, even when we do not know the true values of $e_-$ and $e_+$.
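The shrinking gap of Proposition 2 can likewise be reproduced with a few lines; the noise-rate pairs below are illustrative only.

```python
import numpy as np

# Equal-prior down-sampling (Proposition 2): r = (1 - e+ + e-)/(1 - e- + e+).
for e_plus, e_minus in [(0.3, 0.1), (0.5, 0.1), (0.6, 0.2)]:
    r = (1 - e_plus + e_minus) / (1 - e_minus + e_plus)
    ep = r * e_plus / (1 - e_plus + r * e_plus)       # e+* after down-sampling
    em = e_minus / (e_minus + r * (1 - e_minus))      # e-* after down-sampling
    print(f"gap before: {e_plus - e_minus:.3f}  after: {ep - em:.3f}")
# In every case 0 < e+* - e-* < e+ - e-, so the noise-rate imbalance shrinks.
```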
10 More Discussions and Experiments

10.1 The effect of distance measure in Eqn (3)

In this paper and its experiments, we use the l2 norm to calculate the feature distance between SL features and the squared l2 norm to calculate the distance between SSL features. This choice can lead to good performance, as suggested by Theorem 3 and Figure 6. Practically, since the structural regularization mainly captures relations, a different choice does not have a big effect on performance. We perform an experiment in Figure 8, which shows that the performance of both types is quite close.
10.2 Ablation study
In Figure 3, the role of SSL training is to provide SSL features to regularize the output of the linear classifier g. However, SSL training itself may have a positive effect on the DNN. To show that the robustness mainly comes from the regularizer rather than from SSL training, we perform an ablation study in Figure 9. From the experiments, it is the regularizer that alleviates the over-fitting problem of the DNN.
10.3 The effect of different SSL-pretrained methods
Our experiments are not restricted to any specific SSL method; experimentally, other SSL methods can also be adopted to pre-train SSL encoders. In Figure 5, SimCLR [Chen et al., 2020] is adopted to pre-train the SSL encoder. For comparison, we pre-train an encoder with MoCo on CIFAR10 and fine-tune the linear classifier on noisy labels in Table 4.
11 Detailed setting of experiments
Datasets: We use DogCat, CIFAR10, CIFAR100, CIFAR10N, CIFAR100N and Clothing1M for the experiments. DogCat has 25000 images; we randomly choose 24000 images for training and 1000 images for testing. For CIFAR10 and CIFAR100, we follow the standard setting, using 50000 images for training and 10000 images for testing. CIFAR10N and CIFAR100N have the same images as CIFAR10 and CIFAR100, except that the labels are annotated by real humans via Amazon MTurk and thus contain real-world human noise. For Clothing1M, we use the noisy data for training and the clean data for testing.
Setting in Section 5.1: SimCLR is deployed for SSL pre-training, with ResNet50 for DogCat and ResNet34 for CIFAR10 and CIFAR100. Each model is pre-trained for 1000 epochs with the Adam optimizer (lr = 1e-3) and a batch size of 512. During fine-tuning, we fine-tune the classifier on the noisy dataset with Adam (lr = 1e-3) for 100 epochs and a batch size of 256.
Setting in Section 5.2: For Table 1, all the methods are trained from scratch with the learning rate set to 0.1 at the initial state and decayed by 0.1 at 50 epochs. For Table 2 and Table 3, the encoder is pre-trained by SimCLR and we fine-tune the encoder on the noisy dataset with CE + Regularizer. The optimizer is Adam with learning rate 1e-3 and batch size 256. Note that in Eqn (5), we use the MSE loss for measuring the relations between SL features and SSL features. However, since the MSE loss may cause gradient explosion when the prediction is far from the ground truth, we use the smooth l1 loss instead. Smooth l1 is an enhanced version of MSE: when the prediction is not very far from the ground truth, smooth l1 behaves like MSE, and like MAE when it is far.
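For concreteness, a hedged sketch of this substitution using PyTorch's built-in smooth l1 loss; the function name and the assumption that the distance matrices are already normalized are hypothetical.

```python
import torch.nn.functional as F

def structural_regularizer_smooth(d_sl, d_ssl):
    """Variant of Eqn (5) with smooth l1 in place of the squared error:
    behaves like MSE when the two (already m1/m2-normalized) relation
    structures are close, and like MAE when they disagree strongly,
    which avoids exploding gradients."""
    return F.smooth_l1_loss(d_sl, d_ssl)
```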
The code with running guideline has been attached in the supplementary material.
Figure 1: Training and test accuracies on CIFAR-10 with symmetric noise with noise rates 0.4 (blue curves) and 0.6 (red curves).

Figure 3: The training framework of using representations (SSL features) to regularize learning with noisy labels (SL features).
Figure 5: (a) (b) (c): Performance of CE on DogCat, CIFAR10 and CIFAR100 under symmetric noise rates. For each noise rate, the best-epoch test accuracy is recorded; the blue line represents training with a fixed encoder and the red line training with an unfixed encoder. (d): test accuracy on CIFAR10 at each training epoch under a symmetric 0.6 noise rate. We use ResNet50 [He et al., 2016] for DogCat and ResNet34 for CIFAR10 and CIFAR100; the encoder is pre-trained by SimCLR [Chen et al., 2020]. Detailed settings are reported in the Appendix.

Figure 4: (a) performance of CE on asymmetric label noise; (b) performance of CE on instance-dependent label noise. The generation of instance-dependent label noise follows CORES [Cheng et al., 2021]. Legends: CE (fixed encoder), CE (unfixed encoder), CE (unfixed encoder with sampling), CE (fixed encoder with sampling).
Figure 6: Experiments w.r.t. the regularizer (λ = 1) on CIFAR10. ResNet34 is deployed for the experiments. (a) (b): the encoder is pre-trained by SimCLR; symmetric noise rates are 20% and 40%, respectively. (c) (d): the encoder is randomly initialized, with noise rates 40% and 60%, respectively.
Figure 7: Visualizing the decreased gap achieved by the down-sampling strategy.

Figure 8: Comparing different choices of distance measure in Equation (3). Type 1 denotes using the l2 norm for the distance between SL features and the squared l2 norm for the distance between SSL features, which is adopted in our paper. Type 2 denotes using the l2 norm for both SL and SSL features.
Figure 9: Ablation study of using the regularizer to train a DNN on the noisy dataset.
… [2015], Patrini et al. [2017], GCE [Zhang and Sabuncu, 2018], and peer loss [Liu and Guo, 2020]. The SSL features are learned by InfoNCE [Van den Oord et al., 2018].
Table 1: Comparison of test accuracies with each method on CIFAR10. The model is learned from scratch for all methods with λ = 1. Best and last epoch accuracies are reported: best/last.

Method                          | Symm. CIFAR10, ε = 0.6 | Symm. CIFAR10, ε = 0.8 | Asymm. CIFAR10, ε = 0.4
CE                              | 61.29/32.83            | 38.46/15.05            | 67.28/56.6
CE + Regularizer                | 69.02/65.13            | 61.94/56.78            | 73.38/58.51
GCE [Zhang and Sabuncu, 2018]   | 72.56/62.84            | 40.71/20.53            | 69.19/53.24
GCE + Regularizer               | 72.61/68.38            | 63.63/63.05            | 69.79/61.32
FW [Patrini et al., 2017]       | 65.95/60.13            | 40.08/26.7             | 68.62/58.01
FW + Regularizer                | 68.73/65.90            | 60.94/59.88            | 75.64/67.66
HOC [Zhu et al., 2021b]         | 62.53/46.17            | 39.61/16.90            | 85.88/78.89
HOC + Regularizer               | 70.07/66.94            | 60.9/34.90             | 83.53/82.56
Peer Loss [Liu and Guo, 2020]   | 77.52/76.07            | 15.60/10.00            | 84.47/68.93
Peer Loss + Regularizer         | 77.61/73.26            | 61.64/53.52            | 81.58/75.38
Table 2: Test accuracy for each method on CIFAR10N and CIFAR100N.

                  | CE    | GCE   | Co-Teaching+ | Peer Loss | JoCoR | ELR   | CE + Regularizer
CIFAR10N (Worst)  | 77.69 | 80.66 | 83.26        | 82.53     | 83.37 | 83.58 | 88.74
CIFAR100N         | 55.50 | 56.73 | 57.88        | 57.59     | 59.97 | 58.94 | 60.81
Table 3: Test accuracy for each method on the Clothing1M dataset. All the methods use ResNet50 backbones. DS: Down-Sampling. Reg: with the structural regularizer.

            | Forward-T | Co-teaching | CORES+DS | ELR+DS   | CE     | CE + DS | CE + DS + Reg
Initializer | ImageNet  | ImageNet    | ImageNet | ImageNet | SimCLR | SimCLR  | SimCLR
Accuracy    | 70.83     | 69.21       | 73.24    | 72.87    | 70.90  | 72.95   | 73.48

6 Conclusions
6 Conclusions
In this paper, we theoretically analyze the memorization effect by showing the relationship among noise rates, sample size, and model capacity. By decoupling DNNs into an encoder followed by a linear classifier, our analyses help reveal the tradeoff between fixing or unfixing the encoder during training, which inspires a new solution to restrict overfitting via representation regularization. Our observations and experiments can serve as guidance for further research to utilize DNN representations to solve noisy label problems.
Table 4: Comparing different SSL methods on CIFAR10 with symmetric label noise. It can be observed that different SSL methods have very similar results.

Method                              | ratio 0.2 | ratio 0.4 | ratio 0.6 | ratio 0.8
CE (fixed encoder with SimCLR init) | 91.06     | 90.73     | 90.2      | 88.24
CE (fixed encoder with MoCo init)   | 91.55     | 91.12     | 90.45     | 88.51
Practically, different choices of distance measure have negligible effects on performance; see more details in the Appendix.
Appendix

Outline. The Appendix is arranged as follows: Section 7 proves the Lemmas and Theorems in Section 3. Section 8 proves Theorem 3 in Section 4 and provides a high-level understanding of the regularizer from the perspective of Information Theory. Section 9 illustrates why down-sampling can decrease the gap of noise rates. Section 10 provides the effect of the distance measure in Eqn (3) (w = 1 or 2), the ablation study for Section 4, and the effect of different SSL pre-trained methods. Section 11 elaborates the detailed experimental settings of all the experiments in the paper.
References

D. Arpit, S. Jastrzebski, N. Ballas, D. Krueger, E. Bengio, M. S. Kanwal, T. Maharaj, A. Fischer, A. Courville, Y. Bengio, et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, pages 233-242. PMLR, 2017.
T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR, 2020.
H. Cheng, Z. Zhu, X. Li, Y. Gong, X. Sun, and Y. Liu. Learning with instance-dependent label noise: A sample sieve approach. In International Conference on Learning Representations, 2021.
T. M. Cover. Elements of information theory. John Wiley & Sons, 1999.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler, and Y. Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.
L. Jiang, Z. Zhou, T. Leung, L.-J. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In International Conference on Machine Learning, pages 2304-2313. PMLR, 2018.
Z. Jiang, K. Zhou, Z. Liu, L. Li, R. Chen, S.-H. Choi, and X. Hu. An information fusion approach to learning with instance-dependent label noise. In International Conference on Learning Representations, 2021.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
J. Li, R. Socher, and S. C. Hoi. Dividemix: Learning with noisy labels as semi-supervised learning. In International Conference on Learning Representations, 2020a. URL https://openreview.net/forum?id=HJgExaVtwr.
M. Li, M. Soltanolkotabi, and S. Oymak. Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks. In International Conference on Artificial Intelligence and Statistics, pages 4313-4324. PMLR, 2020b.
S. Liu, J. Niles-Weed, N. Razavian, and C. Fernandez-Granda. Early-learning regularization prevents memorization of noisy labels. Advances in Neural Information Processing Systems, 33:20331-20342, 2020.
S. Liu, Z. Zhu, Q. Qu, and C. You. Robust training under label noise by over-parameterization. arXiv preprint arXiv:2202.14026, 2022.
T. Liu and D. Tao. Classification with noisy labels by importance reweighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(3):447-461, 2015.
Y. Liu. Understanding instance-level label noise: Disparate impacts and treatments. In International Conference on Machine Learning, pages 6725-6735. PMLR, 2021.
Y. Liu and H. Guo. Peer loss functions: Learning from noisy labels without knowing noise rates. In International Conference on Machine Learning, pages 6226-6236. PMLR, 2020.
W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning. arXiv preprint arXiv:1605.08104, 2016.
S. I. Mirzadeh, M. Farajtabar, A. Li, N. Levine, A. Matsukawa, and H. Ghasemzadeh. Improved knowledge distillation via teacher assistant. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 5191-5198, 2020.
N. Natarajan, I. S. Dhillon, P. K. Ravikumar, and A. Tewari. Learning with noisy labels. In Advances in Neural Information Processing Systems, pages 1196-1204, 2013.
D. T. Nguyen, C. K. Mummadi, T. P. N. Ngo, T. H. P. Nguyen, L. Beggel, and T. Brox. Self: Learning to filter noisy labels with self-ensembling. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HkgsPhNYPS.
C. Northcutt, L. Jiang, and I. Chuang. Confident learning: Estimating uncertainty in dataset labels. Journal of Artificial Intelligence Research, 70:1373-1411, 2021.
W. Park, D. Kim, Y. Lu, and M. Cho. Relational knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3967-3976, 2019.
G. Patrini, A. Rozza, A. Krishna Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1944-1952, 2017.
D. Rolnick, A. Veit, S. Belongie, and N. Shavit. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017.
S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.
A. Van den Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
T. Wang and P. Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929-9939. PMLR, 2020.
H. Wei, L. Feng, X. Chen, and B. An. Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13726-13735, 2020.
H. Wei, L. Tao, R. Xie, and B. An. Open-set label noise can improve robustness against inherent label noise. Advances in Neural Information Processing Systems, 34, 2021a.
H. Wei, R. Xie, L. Feng, B. Han, and B. An. Deep learning from multiple noisy annotators as a union. IEEE Transactions on Neural Networks and Learning Systems, 2022.
J. Wei and Y. Liu. When optimizing f-divergence is robust with label noise. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=WesiCoRVQ15.
J. Wei, Z. Zhu, H. Cheng, T. Liu, G. Niu, and Y. Liu. Learning with noisy labels revisited: A study using real-world human annotations. arXiv preprint arXiv:2110.12088, 2021b.
X. Xia, T. Liu, B. Han, C. Gong, N. Wang, Z. Ge, and Y. Chang. Robust early-learning: Hindering the memorization of noisy labels. In International Conference on Learning Representations, 2020.
T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang. Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2691-2699, 2015.
Y. Xu, P. Cao, Y. Kong, and Y. Wang. L_dmi: A novel information-theoretic loss function for training deep nets robust to label noise. In Advances in Neural Information Processing Systems, volume 32, 2019.
X. Zhang, X. Wu, F. Chen, L. Zhao, and C.-T. Lu. Self-paced robust learning for leveraging clean labels in noisy data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6853-6860, 2020.
Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu. Deep mutual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4320-4328, 2018b.
Z. Zhang and M. Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in Neural Information Processing Systems, pages 8778-8788, 2018.
Z. Zhu, T. Liu, and Y. Liu. A second-order approach to learning with instance-dependent label noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10113-10123, 2021a.
Z. Zhu, Y. Song, and Y. Liu. Clusterability as an alternative to anchor points when learning with noisy labels. arXiv preprint arXiv:2102.05291, 2021b.
189,898,449 | A SIGNAL PROPAGATION PERSPECTIVE FOR PRUNING NEURAL NETWORKS AT INITIALIZATION | Network pruning is a promising avenue for compressing deep neural networks. A typical approach to pruning starts by training a model and then removing redundant parameters while minimizing the impact on what is learned. Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a saliency criterion called connection sensitivity. However, it remains unclear exactly why pruning an untrained, randomly initialized neural network is effective. In this work, by noting connection sensitivity as a form of gradient, we formally characterize initialization conditions to ensure reliable connection sensitivity measurements, which in turn yields effective pruning results. Moreover, we analyze the signal propagation properties of the resulting pruned networks and introduce a simple, data-free method to improve their trainability. Our modifications to the existing pruning at initialization method lead to improved results on all tested network models for image classification tasks. Furthermore, we empirically study the effect of supervision for pruning and demonstrate that our signal propagation perspective, combined with unsupervised pruning, can be useful in various scenarios where pruning is applied to non-standard arbitrarily-designed architectures. | [] | A SIGNAL PROPAGATION PERSPECTIVE FOR PRUNING NEURAL NETWORKS AT INITIALIZATION
16 Feb 2020
Namhoon Lee
University of Oxford
Thalaiyasingam Ajanthan
Australian National University
Stephen Gould [email protected]
Australian National University
Philip H S Torr
University of Oxford
A SIGNAL PROPAGATION PERSPECTIVE FOR PRUNING NEURAL NETWORKS AT INITIALIZATION
16 Feb 2020Published as a conference paper at ICLR 2020
Network pruning is a promising avenue for compressing deep neural networks. A typical approach to pruning starts by training a model and then removing redundant parameters while minimizing the impact on what is learned. Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a saliency criterion called connection sensitivity. However, it remains unclear exactly why pruning an untrained, randomly initialized neural network is effective. In this work, by noting connection sensitivity as a form of gradient, we formally characterize initialization conditions to ensure reliable connection sensitivity measurements, which in turn yields effective pruning results. Moreover, we analyze the signal propagation properties of the resulting pruned networks and introduce a simple, data-free method to improve their trainability. Our modifications to the existing pruning at initialization method lead to improved results on all tested network models for image classification tasks. Furthermore, we empirically study the effect of supervision for pruning and demonstrate that our signal propagation perspective, combined with unsupervised pruning, can be useful in various scenarios where pruning is applied to non-standard arbitrarily-designed architectures.
INTRODUCTION
Deep learning has made great strides in machine learning and been applied to various fields from computer vision and natural language processing, to health care and playing games (LeCun et al., 2015). Despite the immense success, however, it remains challenging to deal with the excessive computational and memory requirements of large neural network models. To this end, lightweight models are often preferred, and network pruning, a technique to reduce parameters in a network, has been widely employed to compress deep neural networks (Han et al., 2016). Nonetheless, designing pruning algorithms has been often purely based on ad-hoc intuition lacking rigorous underpinning, partly because pruning was typically carried out after training the model as a post-processing step or interwoven with the training procedure, without adequate tools to analyze.
Recently, Lee et al. (2019) have shown that pruning can be done on randomly initialized neural networks in a single-shot prior to training (i.e., pruning at initialization). They empirically showed that as long as the initial random weights are drawn from appropriately scaled Gaussians (e.g., Glorot & Bengio (2010)), their pruning criterion called connection sensitivity can be used to prune deep neural networks, often to an extreme level of sparsity while maintaining good accuracy once trained. However, it remains unclear as to why pruning at initialization is effective, how it should be understood theoretically and whether it can be extended further.
In this work, we first look into the effect of initialization on pruning, and find that initial weights have critical impact on connection sensitivity, and therefore, pruning results. Deeper investigation shows that connection sensitivity is determined by an interplay between gradients and weights. Therefore when the initial weights are not chosen appropriately, the propagation of input signals into layers of these random weights can result in saturating error signals (i.e., gradients) under backpropagation, and hence unreliable connection sensitivity, potentially leading to a catastrophic pruning failure.
This result leads us to develop a signal propagation perspective for pruning at initialization, and to provide a formal characterization of how a network needs to be initialized for reliable connection sensitivity measurements and in turn effective pruning. Precisely, we show that a sufficient condition to ensure faithful 1 connection sensitivity is layerwise dynamical isometry, which is defined as all singular values of the layerwise Jacobians being concentrated around 1. Our signal propagation perspective is inspired by the recent literature on dynamical isometry and mean field theory (Saxe et al., 2014;Poole et al., 2016;Pennington et al., 2017), in which the general signal propagation in neural networks is studied. We extend this result to understanding and improving pruning at initialization.
Moreover, we study signal propagation in the pruned sparse networks and its effect on trainability. We find that pruning neural networks can indeed break dynamical isometry, and hence, hinders signal propagation and degrades the training performance of the resulting sparse network. In order to address this issue, we propose a simple, yet effective data-free method to recover the layerwise orthogonality given the sparse topology, which in turn improves the training performance of the compressed network significantly. Our analysis further reveals that in addition to signal propagation, the choice of pruning method and sparsity level can influence trainability in sparse neural networks.
Perfect layerwise dynamical isometry cannot always be ensured in the modern networks that have components such as ReLU nonlinearities (Pennington et al., 2017) and/or batch normalization (Yang et al., 2019). Even in such cases, however, our experiments on various modern architectures (including convolutional and residual neural networks) indicate that connection sensitivity computed based on layerwise dynamical isometry is robust and consistently outperforms pruning based on other initialization schemes. This indicates that the signal propagation perspective is not only important to theoretically understand pruning at initialization, but also it improves the results of pruning for a range of networks of practical interest.
Furthermore, this signal propagation perspective for pruning poses another important question: how informative is the error signal computed on randomly initialized networks, or can we prune neural networks even without supervision? To understand this, we compute connection sensitivity scores with different unsupervised surrogate losses and evaluate the pruning results. Interestingly, our results indicate that we can in fact prune networks in an unsupervised manner to extreme sparsity levels without compromising accuracy, and it often compares competitively to pruning with supervision. Moreover, we test if pruning at initialization can be extended to obtain architectures that yield better performance than standard pre-designed architectures with the same number of parameters. In fact, this process, which we call neural architecture sculpting, compares favorably against hand-designed architectures, taking network pruning one step further towards neural architecture search.
PRELIMINARIES
Pruning at initialization. The principle behind conventional approaches for network pruning is to find unnecessary parameters, such that by eliminating them the complexity of the model is reduced while minimizing the impact on what is learned (Reed, 1993). Naturally, a typical pruning algorithm starts after convergence to a minimum or training to some degree. This pretraining requirement has been left unattended until Lee et al. (2019) recently showed that pruning can be performed on untrained networks at initiailzation prior to training. They proposed a method called SNIP which relies on a new saliency criterion, namely connection sensitivity, defined as follows:
s j (w; D) = |g j (w; D)| m k=1 |g k (w; D)| , where g j (w; D) = ∂L(c ⊙ w; D) ∂c j c=1 .(1)
Here, s j is the saliency of the parameter j, w ∈ R m is the network parameters, c ∈ {0, 1} m is the auxiliary indicator variables representing the connectivity of network parameters, m is the total number of parameters in the network, and D is a given dataset. Also, g j is the derivative of the loss L with respect to c j , which turns out to be an infinitesimal approximation of the change in the loss with respect to removing the parameter j. Designed to be computed at initialization, pruning is performed by keeping top-κ (where κ denotes a desired sparsity level) salient parameters based on the above sensitivity scores.
Dynamical isometry and mean field theory. The success of training deep neural networks is due in large part to the initial weights (Hinton & Salakhutdinov, 2006;Glorot & Bengio, 2010;Pascanu et al., 2013). In essence, the principle behind these random weight initializations is to have the mean squared singular value of a network's input-output Jacobian close to 1, so that on average, an error vector will preserve its norm under backpropagation; however, this is not sufficient to prevent amplification or attenuation of an error vector on worst case. A stronger condition that having as many singular values as possible near 1 is called dynamical isometry (Saxe et al., 2014). Under this condition, error signals backpropagate isometrically through the network, approximately preserving its norm and all angles between error vectors. Alongside dynamical isometry, mean field theory is used to develop a theoretical understanding of signal propagation in neural networks with random parameters (Poole et al., 2016). Precisely, the mean field approximation states that preactivations of wide, untrained neural networks can be captured as a Gaussian distribution. Recent works revealed a maximum depth through which signals can propagate at initialization, and verified that networks are trainable when signals can travel all the way through them Yang & Schoenholz, 2017;Xiao et al., 2018).
SIGNAL PROPAGATION PERSPECTIVE TO PRUNING RANDOM NETWORKS
Problem setup. Consider a fully-connected, feed-forward neural network with weight matrices W l ∈ R N ×N , biases b l ∈ R N , pre-activations h l ∈ R N , and post-activations x l ∈ R N , for l ∈ {1 . . . K} up to K layers. Now, the feed-forward dynamics of a network can be written as,
x l = φ(h l ) , h l = W l x l−1 + b l ,(2)
where φ : R → R is an elementwise nonlinearity, and the input is denoted by x 0 . Given the network configuration, the parameters are initialized by sampling from a probability distribution, typically a zero mean Gaussian with scaled variance (LeCun et al., 1998;Glorot & Bengio, 2010).
EFFECT OF INITIALIZATION ON PRUNING
It is observed in Lee et al. (2019) that pruning results tend to improve when initial weights are drawn from a scaled Gaussian, or so-called variance scaling initialization (LeCun et al., 1998;Glorot & Bengio, 2010;He et al., 2015). As we wish to better understand the role of these random initial weights in pruning, we will examine the effect of varying initialization on the pruning results.
In essence, variance scaling schemes introduce normalization factors to adjust the variance σ of the weight sampling distribution, which can be summarized as σ → α ψ l σ, where ψ l is a layerwise scalar that depends on an architecture specification such as the number of output neurons in the previous layer (e.g., fan-in), and α is a global scalar throughout the network. Notice in case of a network with layers of the same width, the variance can be controlled by a single scalar γ = α ψ as ψ l = ψ for all layers l. In particular, we take both linear and tanh multilayer perceptron networks (MLP) of layers K = 7 and width N = 100 on MNIST with σ = 1 as the default, similar to Saxe et al. (2014). We initialize these networks with different γ, compute the connection sensitivity, prune it, and then visualize layerwise the resulting sparsity patterns c as well as the corresponding connection sensitivity used for pruning in Figure 1.
It is seen in the sparsity patterns that for the tanh network, unlike the linear case, more parameters tend to be pruned in the later layers than the earlier layers. As a result, this limits the learning capability of the subnetwork critically when a high sparsity level is requested; e.g., forκ = 90%, only a few parameters in later layers are retained after pruning. This is explained by the connection sensitivity plot. The sensitivity of parameters in the nonlinear network tends to decrease towards the later layers, and therefore, choosing the top-κ parameters globally based on the sensitivity scores results in a subnetwork in which retained parameters are distributed highly non-uniformly and sparsely towards the end of the network. This result implies that the initial weights have a crucial effect on the connection sensitivity, and from there, the pruning results. Here, black(0)/white(1) pixels refer to pruned/retained parameters; (right) connection sensitivities (CS) measured for the parameters in each layer. All networks are initialized with γ = 1.0. Unlike the linear case, the sparsity pattern for the tanh network is nonuniform over different layers. When pruning for a high sparsity level (e.g.,κ = 90%), this becomes critical and leads to poor learning capability as there are only a few parameters left in later layers. This is explained by the connection sensitivity plot which shows that for the nonlinear network parameters in later layers have saturating, lower connection sensitivities than those in earlier layers.
GRADIENT SIGNAL IN CONNECTION SENSITIVITY
We posit that the unreliability of connection sensitivity observed in Figure 1 is due to poor signal propagation: an initialization that projects the input signal to be strongly amplified or attenuated in the forward pass will saturate the error signal under backpropagation (i.e., gradients), and hence will result in poorly calibrated connection sensitivity scores across layers, which will eventually lead to poor pruning results, potentially with complete disconnection of signal paths (e.g., entire layer).
Precisely, we give the relationship between the connection sensitivity and the gradients as follows. From Equation 1, connection sensitivity is a normalized magnitude of gradients with respect to the connectivity parameters c. Here, we use the vectorized notation where w denotes all learnable parameters and c denotes the corresponding connectivity parameters. From chain rule, we can write:
∂L(c ⊙ w; D) ∂c c=1 = ∂L(c ⊙ w; D) ∂(c ⊙ w) c=1 ⊙ w = ∂L(w; D) ∂w ⊙ w .(3)
Therefore, ∂L/∂c is the gradients ∂L/∂w amplified (or attenuated) by the corresponding weights w, i.e., ∂L/∂c j = ∂L/∂w j w j for all j ∈ {1 . . . m}. Considering ∂L/∂c j for a given j, since w j does not depend on any other layers or signal propagation, the only term that depends on signal propagation in the network is the gradient term ∂L/∂w j . Hence, a necessary condition to ensure faithful ∂L/∂c (and connection sensitivity) is that the gradients ∂L/∂w need to be faithful. In the following section, we formalize this from a signal propagation perspective, and characterize an initial condition that ensures reliable connection sensitivity measurement.
LAYERWISE DYNAMICAL ISOMETRY
GRADIENTS IN TERMS OF JACOBIANS
From the feed-forward dynamics of a network in Equation 2, the network's input-output Jacobian corresponding to a given input x 0 can be written, by the chain rule of differentiation, as:
J 0,K = ∂x K ∂x 0 = K l=1 D l W l ,(4)
where D l ∈ R N ×N is a diagonal matrix with entries D l ij = φ ′ (h l i )δ ij , with φ ′ denoting the derivative of nonlinearity φ, and δ ij = ½[i = j] is the Kronecker delta. Here, we will use J k,l to denote the Jacobian from layer k to layer l. Now, we give the relationship between gradients and Jacobians:
Proposition 1. Let ǫ = ∂L/∂x K denote the error signal and x 0 denote the input signal. Then, 1. the gradients satisfy:
g T w l = ǫ J l,K D l ⊗ x l−1 ,(5)
where J l,K = ∂x K /∂x l is the Jacobian from layer l to the output and ⊗ is the Kronecker product. 2. additionally, for linear networks, i.e., when φ is the identity:
g T w l = ǫ J l,K ⊗ J 0,l−1 x 0 + a ,(6)
where J 0,l−1 = ∂x l−1 /∂x 0 is the Jacobian from the input to layer l − 1 and a ∈ R N is a constant term that does not depend on x 0 .
Proof. This can be proved by an algebraic manipulation of the chain rule while using the feedforward dynamics in Equation 2. We provide the full derivation in Appendix A.
Notice that the gradient at layer l constitutes both the backward propagation of the error signal ǫ up to layer l and the forward propagation of the input signal x 0 up to layer l−1. Moreover, especially in the linear case, the signal propagation in both directions is governed by the corresponding Jacobians. We believe that this interpretation of gradients is useful as it sheds light on how signal propagation affects the gradients. To this end, we next analyze the conditions on the Jacobians, which would guarantee faithful signal propagation in the network, and consequently, faithful gradients.
ENSURING FAITHFUL GRADIENTS
Here, we first consider the layerwise signal propagation which would be useful to derive properties on the initialization to ensure faithful gradients. To this end, let us consider the layerwise Jacobian:
J l−1,l = ∂x l ∂x l−1 = D l W l .(7)
Note that it is sufficient to have layerwise dynamical isometry in order to ensure faithful signal propagation in the network.
Definition 1. (Layerwise dynamical isometry) Let J l−1,l = ∂x l ∂x l−1 ∈ R N l ×N l−1 be the Jacobian matrix of layer l. The network is said to satisfy layerwise dynamical isometry if the singular values of J l−1,l are concentrated near 1 for all layers, i.e., for a given ǫ > 0, the singular value σ j satisfies |1 − σ j | ≤ ǫ for all j.
This would guarantee that the signal from layer l − 1 to l (or vice versa) is propagated without amplification or attenuation in any of its dimensions. From Proposition 1 and Equation 7, by induction, it is easy to show that if the layerwise signal propagation is faithful, the error and input signals will faithfully propagate throughout the network, resulting in faithful gradients.
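The following is a minimal numerical check of Definition 1, assuming a tanh MLP; the helper name and the toy dimensions are our own. It illustrates that an orthogonally initialised linear layer attains exact dynamical isometry, and that a tanh layer stays close to an isometry while the pre-activations remain in its linear region.

```python
import torch

def layerwise_jacobian_svals(W, h_pre, phi_prime):
    """Singular values of the layerwise Jacobian J^{l-1,l} = D^l W^l, where
    D^l = diag(phi'(h^l)) for the pre-activations h^l at layer l (Equation 7)."""
    return torch.linalg.svdvals(torch.diag(phi_prime(h_pre)) @ W)

# For a linear layer, J^{l-1,l} = W^l; orthogonal initialisation puts every
# singular value at exactly 1 (exact dynamical isometry):
W = torch.nn.init.orthogonal_(torch.empty(128, 128))
print(torch.linalg.svdvals(W))  # ~ tensor([1., 1., ..., 1.])

# For tanh, phi'(h) = 1 - tanh(h)^2; near h = 0 the layer is close to an isometry.
h = 0.1 * torch.randn(128)
print(layerwise_jacobian_svals(W, h, lambda u: 1 - torch.tanh(u) ** 2))
```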
For linear networks, J^{l-1,l} = W^l. Therefore, one can initialize the weight matrix to be orthogonal such that (W^l)^T W^l = I, where I is the identity matrix of dimension N. In this case, all singular values of W^l are exactly 1 (i.e., exact dynamical isometry), and such an initialization guarantees faithful gradients. While a linear network is of little practical use, we note that it helps to develop theoretical analysis and provides intuition as to why dynamical isometry is a useful measure.
For nonlinear networks, the diagonal matrix D^l needs to be accounted for as it depends on the pre-activations h^l at layer l. In this case, it is important to have the pre-activations h^l fall into the linear region of the nonlinear function φ. Precisely, mean-field theory assumes that in the large-N limit, the empirical distribution of the pre-activations h^l converges to a Gaussian with zero mean and variance q^l, where the variance follows a recursion relation (Poole et al., 2016). Therefore, to achieve layerwise dynamical isometry, the idea becomes to find a fixed point q* such that h^l ∼ N(0, q*) for all l ∈ {1 . . . K}. Such a fixed point makes D^l = D for all layers, and therefore, the pre-activations are placed in the linear region of the nonlinearity.² Then, given the nonlinearity, one can find a rescaling such that (DW^l)^T (DW^l) = (W^l)^T W^l / σ_w² = I. The procedure for finding the rescaling σ_w² for various nonlinearities is discussed in Pennington et al. (2017; 2018). Also, this easily extends to convolutional neural networks using the initialization method in Xiao et al. (2018).

Table 1: Jacobian singular values and resulting sparse networks for the 7-layer tanh MLP network considered in Section 3.1. SG, CN, and Sparsity refer to Scaled Gaussian, Condition Number (i.e., s_max/s_min, where s_max and s_min are the maximum and minimum Jacobian singular values), and the ratio of pruned parameters to the total number of parameters, respectively. SG (γ = 10⁻²) is equivalent to the variance scaling initialization as in LeCun et al. (1998); Glorot & Bengio (2010). The failure cases correspond to unreliable connection sensitivity resulting from poorly conditioned initial Jacobians.

We note that dynamical isometry is in fact a weaker condition than layerwise dynamical isometry. However, in practice, the initializations suggested in the existing works (Pennington et al., 2017; Xiao et al., 2018), i.e., orthogonal initialization for weight matrices in each layer with rescaling based on mean-field theory, satisfy layerwise dynamical isometry, even though this term was not mentioned.
Now, recall from Section 3.1 that a network is pruned with a global threshold based on connection sensitivity, and from Section 3.2 that the connection sensitivity is the gradients scaled by the weights. This in turn implies that the connection sensitivity scores across layers are required to be of the same scale. To this end, we require the gradients to be faithful and the weights to be in the same scale for all the layers. Notice, this condition is trivially satisfied when the layerwise dynamical isometry is ensured, as each layer is initialized identically (i.e., orthogonal initialization) and the gradients are guaranteed to be faithful.
Finally, we verify the failure cases of pruning presented in Section 3.1 based on the signal propagation perspective. Specifically, we measure the singular value distribution of the input-output Jacobian (J^{0,K}) for the 7-layer tanh MLP network, and the results are reported in Table 1. Note that while connection sensitivity based pruning is robust to moderate changes in the Jacobian singular values, it failed catastrophically when the condition number of the Jacobian is very large (> 1e+11).
In fact, these failure cases correspond to completely disconnected networks, as a consequence of pruning with unreliable connection sensitivity resulting from poorly conditioned initial Jacobians. As we will show subsequently, these findings extend to modern architectures, and layerwise dynamical isometry yields well-conditioned Jacobians and in turn the best pruning results.
SIGNAL PROPAGATION IN SPARSE NEURAL NETWORKS
So far, we have shown empirically and theoretically that layerwise dynamical isometry can improve the process of pruning at initialization. One remaining question to address is the following: how well do signals propagate in the pruned sparse networks? In this section, we first examine the effect of sparsity on signal propagation after pruning. We find that pruning can indeed break dynamical isometry, degrading the trainability of sparse networks. We then present a simple but effective data-free method to recover approximate dynamical isometry on sparse networks.
Setup. The overall process is summarized as follows:
Step 1. Initialize a network with a variance scaling (VS) or layerwise dynamical isometry (LDI) satisfying orthogonal initialization.
Step 2. Prune at initialization for a sparsity level κ based on connection sensitivity (CS); we also test random (Rand) and magnitude (Mag) based pruning for comparison.
Step 3. (optional) Enforce approximate dynamical isometry, if specified.
Step 4. Train the pruned sparse network using SGD. We measure signal propagation (e.g., Jacobian singular values) on the sparse network right before Step 4, and observe training behavior during Step 4. Different methods are named as {A}-{B}-{C}, where A, B, C stand for the initialization scheme, pruning method, and (optional) approximate isometry, respectively. We perform this on 7-layer linear and tanh MLP networks as before.³ We train using SGD with an initial learning rate of 0.1 decayed by 1/10 at every 20k iterations. All results are the average over 10 runs. We provide other singular value statistics (max, min, std), accuracy plots, and extended training results for random and magnitude pruning in Appendix C.
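As a sketch of the measurement taken right before Step 4, the following computes the singular values of the input-output Jacobian of a pruned (masked) MLP; biases are omitted and the helper names are hypothetical.

```python
import torch

def masked_forward(x, weights, masks, phi=torch.tanh):
    """MLP forward pass with pruned weights: x^l = phi((c^l * W^l) x^{l-1})."""
    for W, c in zip(weights, masks):
        x = phi((c * W) @ x)
    return x

def io_jacobian_svals(x0, weights, masks):
    """Singular values of the input-output Jacobian J^{0,K} at input x0,
    measured on the sparse network prior to training."""
    J = torch.autograd.functional.jacobian(
        lambda x: masked_forward(x, weights, masks), x0)
    return torch.linalg.svdvals(J)
```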
Effect of pruning on signal propagation and trainability. Let us first check signal propagation measurements in the pruned networks (see Figure 2a). In general, Jacobian singular values decrease continuously as the sparsity level increases (except for {·}-{·}-AI, which we will explain later), indicating that the more parameters are removed, the less faithful a network is likely to be with regard to propagating signals. Also, notice that the singular values drop more rapidly with random pruning compared to connection sensitivity based pruning methods (i.e., {·}-Rand vs. {·}-CS). This means that pruning using connection sensitivity is more robust to the destruction of dynamical isometry and preserves better signal propagation in the sparse network than random pruning. We further note that, albeit marginal, layerwise dynamical isometry allows better signal propagation than variance scaling initialization, with relatively higher mean singular values and much lower standard deviations, especially in the low sparsity regime (see Appendix C). Now, we look into the relation between signal propagation and trainability of the sparse networks. Figure 2b shows the training behavior of the pruned networks (κ = 90%) obtained by different methods.
We can see a clear correlation between the signal propagation capability of a network and its training performance; i.e., the better a network propagates signals, the faster it converges during training. For instance, compare the trainability of a network before and after pruning. That is, compared to LDI-Dense (κ = 0), LDI-{CS, Mag, Rand} decrease the loss much more slowly; random pruning starts to decrease the loss around 4k iterations, and finally reaches close to zero loss around 10k iterations (see Appendix C), which is more than an order of magnitude slower than a network pruned by connection sensitivity. Recall that the pruned networks have much smaller singular values.
Enforcing approximate dynamical isometry. The observation above indicates that the better signal propagation is ensured on sparse networks, the better their training performs. This motivates us to think of the following: what if we can repair the broken isometry, before we start training the pruned network, such that we can achieve trainability comparable to that of the dense network? Precisely, we consider the following:
$$\min_{W^{l}} \left\| (C^{l} \odot W^{l})^{T} (C^{l} \odot W^{l}) - I^{l} \right\|_{F}\,, \tag{8}$$
where C^l, W^l, I^l are the sparse mask obtained by pruning, the corresponding weights, and the identity matrix at layer l, respectively, and ‖·‖_F is the Frobenius norm. We optimize this for all layers identically using gradient descent; a minimal sketch of this optimization follows the list below. Given the sparsity topology C^l and initial weights W^l, this data-free method attempts to find an optimal W* such that the combination of the sparse topology and the weights is layerwise orthogonal, potentially to the full rank capacity. This simple method (i.e., {·}-{·}-AI, where AI is named for Approximate Isometry) turns out to be highly effective. The results are provided in Figure 2, and we summarize our key findings below:
• Signal propagation (LDI-{CS, Rand} vs. LDI-{CS, Rand}-AI). The decreased singular values (by pruning, κ > 0) bounce up dramatically and become close to the level before pruning. This means that the orthogonality enforced by Equation 8 is achieved in the sparse topology of the pruned network (i.e., approximate dynamical isometry), and therefore, signal propagation on the sparse network is likely to behave similarly to the dense network. As expected, the training performance increased significantly (e.g., compare LDI-CS with LDI-CS-AI for trainability). This works more dramatically for random pruning; i.e., even for randomly pruned sparse networks, training speed increases significantly, implying the benefit of ensuring signal propagation.
• Structure (LDI-Rand-AI vs. LDI-CS-AI). Even if the approximate dynamical isometry is enforced identically, the network pruned using connection sensitivity shows better trainability than the randomly pruned network. This potentially means that the sparse topology obtained by different pruning methods also matters, in addition to signal propagation characteristics.
• Overparameterization (LDI-Dense vs. LDI-{CS, Rand}-AI). Even though the singular values are restored to a level close to before pruning with approximate isometry, the non-pruned dense network converges faster than pruned networks. We hypothesize that in addition to signal propagation, overparameterization helps in optimization, taking less time to find a minimum.
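Here is a minimal sketch of the data-free optimization in Equation 8, run per layer with the sparse mask held fixed; the learning rate and step count mirror those stated in Appendix B, but the function name is ours.

```python
import torch

def enforce_approx_isometry(W, C, steps=10_000, lr=0.1):
    """Data-free optimization of Equation 8: with the sparse mask C fixed,
    adjust W so that (C * W)^T (C * W) is as close as possible to I."""
    W = W.detach().clone().requires_grad_(True)
    I = torch.eye(W.shape[1])
    optimizer = torch.optim.SGD([W], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        A = C * W  # pruned entries receive zero gradient through the mask
        loss = torch.linalg.norm(A.T @ A - I, ord="fro")
        loss.backward()
        optimizer.step()
    return (C * W).detach()
```

Since the loss depends on W only through C ⊙ W, pruned entries receive no gradient, so the sparse topology found by pruning is untouched by this step.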
While being simple and data-free (and thus fast), our signal propagation perspective can be used not only to improve the trainability of sparse neural networks, but also to complement a common explanation for the decreased trainability of compressed networks, which is often attributed merely to a reduced capacity. Our results also extend to the case of convolutional neural networks (see Figure 8 in Appendix C).
VALIDATION AND EXTENSIONS
In this section, we aim to demonstrate the efficacy of our signal propagation perspective on a wide variety of settings. We first evaluate the idea of employing layerwise dynamical isometry on various modern neural networks. In addition, we further study the role of supervision under the pruning at initialization regime, extending it to unsupervised pruning. Our results show that indeed, pruning can be approached from the signal propagation perspective at varying scale, bringing forth the notion of neural architecture sculpting. The experiment settings used to generate the presented results are detailed in Appendix B. The code can be found here: https://github.com/namhoonlee/spp-public.
EVALUATION ON VARIOUS NEURAL NETWORKS AND DATASETS
Here, we verify that our signal propagation perspective for pruning neural networks at initialization is indeed valid, by evaluating further on various modern neural networks and datasets. To this end, we provide orthogonality scores (OS) and generalization errors of the sparse networks obtained by different methods and show that layerwise dynamical isometry with enforced approximate isometry results in the best performance; here, we define OS as (1/L) Σ_l ‖(C^l ⊙ W^l)^T (C^l ⊙ W^l) − I^l‖_F, which indicates how close the weight matrices in each layer of the pruned network are to being orthogonal (a minimal sketch of this computation follows below). All results are the average of 5 runs, and we do not optimize anything specific for a particular case (see Appendix B for experiment settings). The results are presented in Table 2.
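A sketch of the orthogonality score as defined above; `weights` and `masks` are per-layer lists, as in the earlier sketches.

```python
import torch

def orthogonality_score(weights, masks):
    """OS = (1/L) sum_l ||(C^l * W^l)^T (C^l * W^l) - I^l||_F over L pruned layers."""
    score = 0.0
    for W, C in zip(weights, masks):
        A = C * W
        score += torch.linalg.norm(A.T @ A - torch.eye(W.shape[1]), ord="fro").item()
    return score / len(weights)
```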
The best pruning results are achieved when the approximate dynamical isometry is enforced on the pruned sparse network (i.e., LDI-AI), across all tested architectures. Also, the second best results are achieved with the orthogonal initialization that satisfies layerwise dynamical isometry (i.e., LDI). Looking closely, it is evident that there exists a high correlation between the orthogonality scores and the performance of pruned networks; i.e., the network initialized to have the lowest orthogonality scores achieves the best generalization errors after training. Note that the orthogonality scores being close to 0, by definition, states how faithful a network will be with regard to letting signals propagate without being amplified or attenuated. Therefore, the fact that a pruned network with the lowest orthogonality scores tends to yield good generalization errors further validates that our signal propagation perspective is indeed effective for pruning at initialization. Moreover, we test for other nonlinear activation functions (tanh, leaky-relu, selu), and find that the orthogonal initialization consistently outperforms variance scaling methods (see Table 3).
PRUNING WITHOUT SUPERVISION
So far, we have shown that pruning random networks can be approached from a signal propagation perspective by ensuring faithful connection sensitivity. Another factor that constitutes connection sensitivity is the loss term. At a glance, it is not obvious how informative the supervised loss measured on a random network will be for connection sensitivity. In this section, we look into the effect of supervision, by simply replacing the loss computed using ground-truth labels with different unsupervised surrogate losses as follows: replacing the target distribution using ground-truth labels with the uniform distribution (Unif.), and using the averaged output prediction of the network (Pred.; softmax/raw). The results for MLP networks are in Table 4. Even though unsupervised pruning results are not as good as the supervised case, the results are still interesting, especially for the uniform case, in that there was no supervision given. We thus experiment further for the uniform case on other networks, and obtain the following results: 8.25, 11.69, 11.01, 8.82 errors (%) for VGG16, ResNet32, ResNet56, ResNet110, respectively. Surprisingly, the results are often competitive with those of pruning with supervision (i.e., compare to the LDI results in Table 2). Notably, previous pruning algorithms assume the existence of supervision a priori. Being the first demonstration, along with the signal propagation perspective, this unsupervised pruning strategy can be useful in scenarios where there are no labels or only weak supervision is available.

To demonstrate further, we also conducted transfer of sparsity experiments, such as transferring a pruned network from one task to another (MNIST ↔ Fashion-MNIST). Table 5 shows that, while pruning results may degrade if sparsity is transferred, or done without supervision, less impact is caused for unsupervised pruning when transferred to a different task (i.e., 0.52 to 0.14 on MNIST, and 1.11 to −0.78 on F-MNIST). This indicates that inductive bias exists in data, affecting transfer and unsupervised pruning, and potentially, that a "universal" sparse topology might be obtainable if a universal data distribution is known (e.g., an extremely large dataset in practice). This may help in situations where different tasks from unknown data distributions are to be performed (e.g., continual learning). We also tested two other unsupervised losses, but none performed as well as the uniform loss (e.g., Jacobian norms J_1: 5.03, J_2: 3.00 vs. Unif.: 2.94), implying that if pruning is to be unsupervised, the uniform loss should be used, because other unsupervised losses depend on the input data (and thus can suffer from inductive bias). Random pruning degrades significantly at high sparsity in all cases.
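A sketch of the "Unif." surrogate described above: the cross-entropy of the network's predictions against the uniform target distribution, which can be substituted for the supervised loss when computing connection sensitivity; the function name is ours.

```python
import torch.nn.functional as F

def uniform_surrogate_loss(logits):
    """Cross-entropy against the uniform target u_k = 1/K, i.e.
    -(1/K) sum_k log softmax(logits)_k, averaged over the batch.
    Requires no labels, so connection sensitivity can be computed
    entirely without supervision."""
    return -F.log_softmax(logits, dim=-1).mean()
```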
NEURAL ARCHITECTURE SCULPTING
We have shown that pruning at initialization, even when no supervision is provided, can be effective based on the signal propagation perspective. This begs the question of whether pruning needs to be limited to pre-shaped architectures or not. In other words, what if pruning is applied to an arbitrarily bulky network and is treated as sculpting an architecture? In order to find out, we conduct the following experiments: we take a popular pre-designed architecture (ResNet20 in He et al. (2016)) as a base network, and consider a range of variants that are originally bigger than the base model, but pruned to have the same number of parameters as the base dense network. Specifically, we consider the following equivalents: (1) the same number of residual blocks, but with larger widths; (2) a reduced number of residual blocks with larger widths; (3) a larger residual block and the same width (see Table 6 in Appendix B for details). The results are presented in Figure 3.

Figure 3: Neural architecture sculpting results on CIFAR-10 (Reference vs. Equivalents 1-3). We report generalization errors (avg. over 5 runs). All networks have the same number of parameters (269k) and are trained identically.
Overall, the sparse equivalents record lower errors than the dense base model. Notice that some models are extremely sparse (e.g., Equivalent 1 pruned for κ = 98.4%). While all networks have the same number of parameters, the discovered sparse equivalents outperform the dense reference network. This result is well aligned with recent findings in Kalchbrenner et al. (2018): large sparse networks can outperform their small dense counterparts, while enjoying increased computational and memory efficiency via a dedicated implementation for sparsity in practice. Also, it seems that pruning wider networks tends to be more effective in producing a better model than pruning deeper ones (e.g., Equivalent 1 vs. Equivalent 3). We further note that unlike existing prior works, the sparse networks are discovered by sculpting an arbitrarily-designed architecture, without pretraining nor supervision.
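As a small worked example of the parameter matching, the sparsity an equivalent must be pruned for is fixed by the ratio of parameter counts; the 16.8M figure below is back-calculated from the quoted 98.4% sparsity and is only illustrative.

```python
def required_sparsity(big_params, base_params=269_000):
    """Sparsity needed so a bigger network matches the base parameter budget."""
    return 1.0 - base_params / big_params

# E.g. Equivalent 1 is quoted as pruned for sparsity ~98.4%, which corresponds
# to a dense model of roughly 269k / (1 - 0.984) ~ 16.8M parameters.
print(required_sparsity(16_800_000))  # ~0.984
```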
DISCUSSION AND FUTURE WORK
In this work, we have approached the problem of pruning neural networks at initialization from a signal propagation perspective. Based on observing the effect of varying the initialization, we found that initial weights have a critical impact on connection sensitivity measurements and hence pruning results. This led us to conduct theoretical analysis based on dynamical isometry and a mean field theory, and formally characterize a sufficient condition to ensure faithful signal propagation in a given network. Moreover, our analysis on compressed neural networks revealed that the signal propagation characteristics of a sparse network highly correlate with its trainability, and also that pruning can break dynamical isometry ensured on a network at initialization, resulting in degradation of trainability of the compressed network. To address this, we introduced a simple, yet effective data-free method to recover the orthogonality and enhance trainability of the compressed network. Finally, throughout a range of validation and extension experiments, we verified that our signal propagation perspective is effective for understanding, improving, and extending the task of pruning at initialization across various settings. We believe that our results on the increased trainability of compressed networks can take us one step towards finding a "winning lottery ticket" (i.e., a set of initial weights that, given a sparse topology, can quickly reach a generalization performance comparable to the uncompressed network, once trained) suggested in Frankle & Carbin (2019).
We point out, however, that there remains several aspects to consider. While pruning on enforced isometry produces trainable sparse networks, the two-stage orthogonalization process (i.e., prune first and enforce the orthogonality later) can be suboptimal especially at a high sparsity level. Also, network weights change during training, which can affect signal propagation characteristics, and therefore, dynamical isometry may not continue to hold over the course of training. We hypothesize that a potential key to successful neural network compression is to address the complex interplay between optimization and signal propagation, and it might be immensely beneficial if an optimization naturally takes place in the space of isometry. We believe that our signal propagation perspective provides a means to formulate this as an optimization problem by maximizing the trainability of sparse networks while pruning, and we intend to explore this direction as a future work.
Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks. ICML, 2018.
Ge Yang and Samuel Schoenholz. Mean field residual networks: On the edge of chaos. NeurIPS, 2017.
Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S Schoenholz. A mean field theory of batch normalization. ICLR, 2019.
A GRADIENTS IN TERMS OF JACOBIANS
Proposition 1. Let ε = ∂L/∂x^K denote the error signal and x^0 denote the input signal. Then,

1. the gradients satisfy
$$g_{w^{l}}^{T} = \big(\varepsilon\, J^{l,K} D^{l}\big) \otimes x^{l-1}\,, \tag{9}$$
where J^{l,K} = ∂x^K/∂x^l is the Jacobian from layer l to the output and ⊗ is the Kronecker product;

2. additionally, for linear networks, i.e., when φ is the identity,
$$g_{w^{l}}^{T} = \big(\varepsilon\, J^{l,K}\big) \otimes \big(J^{0,l-1} x^{0} + a\big)\,, \tag{10}$$
where J^{0,l-1} = ∂x^{l-1}/∂x^0 is the Jacobian from the input to layer l − 1 and a ∈ R^N is the constant term that does not depend on x^0.
Proof. The proof is based on a simple algebraic manipulation of the chain rule. The gradient of the loss with respect to the weight matrix W l can be written as:
$$g_{w^{l}} = \frac{\partial L}{\partial W^{l}} = \frac{\partial L}{\partial x^{K}} \frac{\partial x^{K}}{\partial x^{l}} \frac{\partial x^{l}}{\partial W^{l}}\,. \tag{11}$$
Here, the gradient ∂y/∂x is represented as a matrix of dimension y-size × x-size. For gradients with respect to matrices, their vectorized form is used. Notice,
$$\frac{\partial x^{l}}{\partial W^{l}} = \frac{\partial x^{l}}{\partial h^{l}} \frac{\partial h^{l}}{\partial W^{l}} = D^{l} \frac{\partial h^{l}}{\partial W^{l}}\,. \tag{12}$$
Considering the feed-forward dynamics for a particular neuron i,
$$h^{l}_{i} = \sum_{j} W^{l}_{ij} x^{l-1}_{j} + b^{l}_{i}\,, \qquad \frac{\partial h^{l}_{i}}{\partial W^{l}_{ij}} = x^{l-1}_{j}\,. \tag{13}$$
Therefore, using the Kronecker product, we can compactly write:
$$\frac{\partial x^{l}}{\partial W^{l}} = (D^{l})^{T} \otimes (x^{l-1})^{T}\,. \tag{14}$$
Now, Equation 11 can be written as:
$$g_{w^{l}} = \big(\varepsilon\, J^{l,K} D^{l}\big)^{T} \otimes (x^{l-1})^{T}\,, \qquad g_{w^{l}}^{T} = \big(\varepsilon\, J^{l,K} D^{l}\big) \otimes x^{l-1}\,. \tag{15}$$

Here, A^T ⊗ B^T = (A ⊗ B)^T is used.
Moreover, for linear networks D^l = I and x^l = h^l for all l ∈ {1 . . . K}. Therefore, x^{l-1} can be written as:
$$\begin{aligned}
x^{l-1} &= \phi\big(W^{l-1}\phi(W^{l-2}\cdots\phi(W^{1}x^{0}+b^{1})\cdots+b^{l-2})+b^{l-1}\big) \\
&= W^{l-1}\big(W^{l-2}\cdots(W^{1}x^{0}+b^{1})\cdots+b^{l-2}\big)+b^{l-1} \\
&= \Big(\prod_{k=1}^{l-1}W^{k}\Big)x^{0} + \Big(\prod_{k=2}^{l-1}W^{k}\Big)b^{1} + \cdots + b^{l-1} \\
&= J^{0,l-1}x^{0} + a\,,
\end{aligned} \tag{16}$$
where a is the constant term that does not depend on x 0 . Hence, the proof is complete.
B EXPERIMENT SETTINGS
Pruning at initialization. By default, we perform pruning at initialization based on connection sensitivity scores as in Lee et al. (2019). When computing connection sensitivity, we always use all examples in the training set to prevent stochasticity by a particular mini-batch. Unless stated otherwise, we set the default sparsity level to be κ = 90% (i.e., 90% of the entire parameters in a network are pruned away). For all tested architectures, pruning for such a level of sparsity does not lead to a large accuracy drop. Additionally, we perform either random pruning (at initialization) or magnitude based pruning (on a pretrained model) for comparison purposes. Random pruning refers to pruning parameters randomly and globally for a given sparsity level. For the magnitude based pruning, we first train a model and simply prune parameters globally in a single shot based on the magnitude of the pretrained parameters (i.e., keep the large weights while pruning small ones). For initialization methods, we follow either variance scaling initialization schemes (i.e., VS-L, VS-G, VS-H, as in LeCun et al. (1998); Glorot & Bengio (2010); He et al. (2015), respectively) or (convolutional) orthogonal initialization schemes (Saxe et al., 2014; Xiao et al., 2018).
Training and evaluation. Throughout the experiments, we evaluate pruning results on MNIST, CIFAR-10, and Tiny-ImageNet image classification tasks. For training of the pruned sparse networks, we use SGD with momentum and train up to 80k (for MNIST) or 100k (for CIFAR-10 and Tiny-ImageNet) iterations. The initial learning rate is set to be 0.1 and is decayed by 1/10 at every 20k (MNIST) or 25k (CIFAR-10 and Tiny-ImageNet) iterations. The mini-batch size is set to be 100, 128, 200 for MNIST, CIFAR-10, Tiny-ImageNet, respectively. We do not optimize anything specific for a particular case, and follow the standard training procedure. For all experiments, we use 10% of the training set for the validation set, which corresponds to 5400, 5000, 9000 images for MNIST, CIFAR-10, Tiny-ImageNet, respectively. We evaluate at every 1k iterations, and record the lowest test error. All results are the average of either 10 (for MNIST) or 5 (for CIFAR-10 and Tiny-ImageNet) runs.
Signal propagation and approximate dynamical isometry. We use the entire training set when computing Jacobian singular values of a network. In order to enforce approximate dynamical isometry when specified, given a pruned sparse network, we optimize for the objective in Equation 8
using gradient descent. The learning rate is set to be 0.1 and we perform 10k gradient update steps (although it usually reaches to convergence far before). This process is data-free and thus fast; e.g., depending on the size of the network and the number of update steps, it can take less than a few seconds on a modern computer.
Neural architecture sculpting. We provide the model details in Table 6.

Table 6: All models (Equivalents 1, 2, 3) are initially bigger than the base network (ResNet20), by either being wider or deeper, but pruned to have the same number of parameters as the base network (269k). The widening factor (k) refers to the filter multiplier; e.g., for the basic filter size of 16, the widening factor of k=2 will result in 32 filters. The block size refers to the number of residual blocks in each block layer; all models have three block layers. More/fewer residual blocks mean the network is deeper/shallower. The reported generalization errors are averages over 5 runs. We find that the technique of architecture sculpting, pruning randomly initialized neural networks based on our signal propagation perspective even in the absence of ground-truth supervision, can be used to find models of superior performance under the same parameter budget.

We also plot results of LDI-Mag and LDI-Dense on the tanh case for trainability; the training results of non-pruned (LDI-Dense) and magnitude (LDI-Mag) pruning are only reported for the tanh case, because the learning rate had to be lowered for the linear case (otherwise it explodes), which makes the comparison not entirely fair. We provide the singular value statistics for the magnitude pruning in Figure 6 to avoid clutter. Also, extended training logs for random and magnitude based pruning are provided separately in Figure 5 to illustrate the difference in convergence speed.

To examine the effect of initialization in isolation on the trainability of sparse neural networks, we remove batch normalization (BN) layers for this experiment, as BN tends to improve training speed as well as generalization performance. As a result, enforcing approximate isometry (LDI-CS-AIF) improves the training speed quite dramatically compared to the pruned network without isometry (LDI-CS). We also find that even compared to the non-pruned dense network (LDI-Dense), which is ensured layerwise dynamical isometry, LDI-CS-AIF trains faster in the early training phase.
This result is quite promising and more encouraging than the previous case of MLP (see Figures 2 and 7), as it potentially indicates that an underparameterized network (by connection sensitivity pruning) can even outperform an overparameterized network, at least in the early phase of neural network training. Furthermore, we add results of using the spectral norm in enforcing approximate isometry in Equation 8 (LDI-CS-AIS), and find that it also trains faster than the case of broken isometry (LDI-CS), yet not as much as the case of using the Frobenius norm (LDI-CS-AIF).
Figure 1: (Left) Layerwise sparsity patterns c ∈ {0, 1}^{100×100} obtained as a result of pruning for the sparsity levels κ = {10, .., 90}%.

Figure 2: (a) Signal propagation (mean Jacobian singular values) in sparse networks pruned for varying sparsity levels κ, and (b) training behavior of the sparse network at κ = 90%. Signal propagation, pruning scheme, and overparameterization affect trainability of sparse neural networks.

Figure 4: Full results for (a) signal propagation (all singular value statistics), and (b) training behavior (including accuracy) for 7-layer linear and tanh MLP networks. We provide results of LDI-Rand, LDI-Rand-AI, VS-CS, LDI-CS, LDI-CS-AI on the linear case for both singular value statistics and training log.

Figure 5: Extended training log (i.e., loss and accuracy) for random (Rand) and magnitude (Mag) pruning. The sparse networks obtained by random or magnitude pruning take a much longer time to train than those obtained by pruning based on connection sensitivity. All methods are pruned at the layerwise orthogonal initialization, and trained the same way as before.

Figure 6: Signal propagation measurements (all singular value statistics) for the magnitude based pruning (Mag) on the 7-layer linear and tanh MLP networks. As described in the experiment settings in Appendix B, the magnitude based pruning is performed on a pretrained model. Notice that, unlike other cases where pruning is done at initialization (i.e., using either random or connection sensitivity based pruning methods), the singular value distribution changes abruptly when pruned (note the sharp change of singular values from 0 to 10% sparsity). Also, the singular values are not concentrated (note the high standard deviations), which explains the rather inferior trainability compared to other methods. We conjecture that naively pruning based on the magnitude of parameters in a single shot, without pruning gradually or employing some sophisticated tricks such as layerwise thresholding, can lead to a failure of training compressed networks.

Figure 7: Signal propagation and training behavior for ReLU and Leaky-ReLU activation functions. They resemble those of the tanh case as in Figure 2, and hence the conclusion holds about the same.

Figure 8: Training performance (loss and accuracy) by different methods for VGG16 on CIFAR-10.
Table 2: Pruning results for various neural networks on different datasets. All networks are pruned at initialization for the sparsity κ = 90% based on connection sensitivity scores as in Lee et al. (2019). We report orthogonality scores (OS) and generalization errors (Error) on CIFAR-10 (VGG16, ResNets) and Tiny-ImageNet (WRN16); all results are the average over 5 runs. The first and second best results are highlighted in each column of errors. The orthogonal initialization with enforced approximate isometry method (i.e., LDI-AI) achieves the best results across all tested architectures.

| Initialization | VGG16 OS / Error | ResNet32 OS / Error | ResNet56 OS / Error | ResNet110 OS / Error | WRN16 OS / Error |
|---|---|---|---|---|---|
| VS-L   | 13.72 / 8.16 | 4.50 / 11.96 | 4.64 / 10.43 | 4.65 / 9.13 | 11.99 / 45.08 |
| VS-G   | 13.60 / 8.18 | 4.55 / 11.89 | 4.67 / 10.60 | 4.67 / 9.17 | 11.50 / 44.56 |
| VS-H   | 15.44 / 8.36 | 4.41 / 12.21 | 4.44 / 10.63 | 4.39 / 9.08 | 13.49 / 46.62 |
| LDI    | 13.33 / 8.11 | 4.43 / 11.55 | 4.51 / 10.08 | 4.57 / 8.88 | 11.28 / 44.20 |
| LDI-AI |  6.43 / 7.99 | 2.62 / 11.47 | 2.79 /  9.85 | 2.92 / 8.78 |  6.62 / 44.12 |
Table 3: Pruning results for VGG16 and ResNet32 with different activation functions on CIFAR-10. We report generalization errors (avg. over 5 runs), and the first and second best results are highlighted.

| Initialization | VGG16 tanh | VGG16 l-relu | VGG16 selu | ResNet32 tanh | ResNet32 l-relu | ResNet32 selu |
|---|---|---|---|---|---|---|
| VS-L   | 9.07 | 7.78 | 8.70 | 13.41 | 12.04 | 12.26 |
| VS-G   | 9.06 | 7.84 | 8.82 | 13.44 | 12.02 | 12.32 |
| VS-H   | 9.99 | 8.43 | 9.09 | 13.12 | 11.66 | 12.21 |
| LDI    | 8.76 | 7.53 | 8.21 | 13.22 | 11.58 | 11.98 |
| LDI-AI | 8.72 | 7.47 | 8.20 | 13.14 | 11.51 | 11.68 |
Table 4: Unsupervised pruning results for K-layer MLP networks on MNIST. All networks are pruned for sparsity κ = 90% at orthogonal initialization. We report generalization errors (avg. over 10 runs).

| Loss | Superv. | K=3 | K=5 | K=7 |
|---|---|---|---|---|
| GT              | ✓ | 2.46 | 2.43 | 2.61 |
| Pred. (raw)     | ✗ | 3.31 | 3.38 | 3.60 |
| Pred. (softmax) | ✗ | 3.11 | 3.37 | 3.56 |
| Unif.           | ✗ | 2.77 | 2.77 | 2.94 |
Table 5: Transfer of sparsity experiment results for LeNet. We prune for κ = 97% at orthogonal initialization, and report generalization errors (average over 10 runs).

| Category | Dataset (prune) | Dataset (train&test) | Error sup. → unsup. | (∆) | Error rand |
|---|---|---|---|---|---|
| Standard | MNIST   | MNIST   |  2.42 →  2.94 | +0.52 | 15.56 |
| Transfer | F-MNIST | MNIST   |  2.66 →  2.80 | +0.14 | 18.03 |
| Standard | F-MNIST | F-MNIST | 11.90 → 13.01 | +1.11 | 24.72 |
| Transfer | MNIST   | F-MNIST | 14.17 → 13.39 | −0.78 | 24.89 |
¹ The term faithful is used to describe signals propagating in a network isometrically with minimal amplification or attenuation, and is borrowed from Saxe et al. (2014), the first work to introduce dynamical isometry.
² Dynamical isometry can hold for antisymmetric sigmoidal activation functions (e.g., tanh) as shown in Pennington et al. (2017). A recent work by Tarnowski et al. (2019) has also shown that dynamical isometry is achievable irrespective of the activation function in ResNets.
³ We conduct the same experiments for ReLU and Leaky-ReLU activation functions (see Appendix C).
ACKNOWLEDGMENTS
This work was supported by the ERC grant ERC-2012-AdG 321162-HELIOS, EPSRC grant Seebibyte EP/M013774/1, EPSRC/MURI grant EP/N019474/1 and the Australian Research Council Centre of Excellence for Robotic Vision (project number CE140100016). We would also like to acknowledge the Royal Academy of Engineering and FiveAI, and thank Richard Hartley, Puneet Dokania and Amartya Sanyal for helpful discussions.
REFERENCES
Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. ICLR, 2019.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. AISTATS, 2010.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. ICLR, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. ICCV, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, 2016.
Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.
Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron van den Oord, Sander Dieleman, and Koray Kavukcuoglu. Efficient neural audio synthesis. ICML, 2018.
Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient backprop. Neural Networks: Tricks of the Trade, 1998.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 2015.
Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. SNIP: Single-shot network pruning based on connection sensitivity. ICLR, 2019.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML, 2013.
Jeffrey Pennington, Samuel Schoenholz, and Surya Ganguli. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice. NeurIPS, 2017.
Jeffrey Pennington, Samuel S Schoenholz, and Surya Ganguli. The emergence of spectral universality in deep networks. AISTATS, 2018.
Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. NeurIPS, 2016.
Russell Reed. Pruning algorithms - a survey. Neural Networks, 1993.
Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. ICLR, 2014.
Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. ICLR, 2017.
Wojciech Tarnowski, Piotr Warchoł, Stanisław Jastrzębski, Jacek Tabor, and Maciej A Nowak. Dynamical isometry is achieved in residual networks in a universal way for any activation function. AISTATS, 2019.
246,442,105 | IMBEDDING DEEP NEURAL NETWORKS | Continuous-depth neural networks, such as Neural ODEs, have refashioned the understanding of residual neural networks in terms of non-linear vector-valued optimal control problems. The common solution is to use the adjoint sensitivity method to replicate a forward-backward pass optimisation problem. We propose a new approach which explicates the network's 'depth' as a fundamental variable, thus reducing the problem to a system of forward-facing initial value problems. This new method is based on the principle of 'Invariant Imbedding' for which we prove a general solution, applicable to all non-linear, vector-valued optimal control problems with both running and terminal loss. Our new architectures provide a tangible tool for inspecting the theoretical-and to a great extent unexplainedproperties of network depth. They also constitute a resource of discrete implementations of Neural ODEs comparable to classes of imbedded residual neural networks. Through a series of experiments, we show the competitive performance of the proposed architectures for supervised learning and time series prediction. Accompanying code is made available at github.com/andrw3000/inimnet. * Equal Contribution.Published as a conference paper at ICLR 2022 perceptrons. But following the advent of residual neural networks(He et al., 2015)which use 'Euler-step' internal updates between layers, DNN evolution is seen to emulate a continuous dynamical system(Lu et al., 2018;Ruthotto & Haber, 2020). Thus was formed the notion of a 'Neural ODE'(Chen et al., 2018)in which the hidden network state vector z(t) ∈ R N , instead of being defined at fixed layers t ∈ N, is allowed to vary continuously over an interval [p, q] ⊂ R for a system of dimension N ≥ 1. Its evolution is governed by an Ordinary Differential Equation (ODE)where the training function f is controlled by a parameter vector θ(t) ∈ R M for t ∈ [p, q] of length M ≥ 1. Network outputs are retrieved at z(q) = y after fixing the endpoint t = q. As such, the enigmatic 'depth' of a Neural ODE is controlled by varying t = p, at which point we insert the initial condition z(p) = x. This dynamical description has given the theory of DNNs a new home: the mathematical framework of optimal control (Massaroli et al., 2020;Bensoussan et al., 2020). In this framework, whether discrete or continuous, solution networks are sought after that: (A) satisfying their update law (1) over a fixed 'depth' interval [p, q]; (B) minimise a loss function subject to terminal success and internal regulation (see §2.1). As initiated by Euler and Lagrange (Euler, 1766), this mathematical framework determines networks given by (A) satisfying condition (B) using, amongst other techniques, the adjoint method: here a new state, which we call λ(t) or the 'Lagrange multiplier', is introduced containing the system losses with respect to both t and the parameters θ(t); see §2.3.The connection between the traditional method of 'backpropagation-by-chain rule' and the rigorous 'adjoint method' is quite brilliantly explicated in proof byChen et al. (2018), in that they directly deduce the adjoint method using infinitesimal backpropagation-far from the train of thought of Lagrange's multiplier method (Liberzon, 2012, Ch. 
2); see also §D and §E.2 in the appendix for extended discussion and derivation.Even within the above theory, the initial condition, z(p) = x, and location, t = p, remain implicit constraints; a clear understanding of network depth remains illusive.Our new class of InImNet architectures may be obtained by imbedding networks of varying depth p whilst keeping the inputs, x, invariant. Explicating these two variables throughout the network, writing z(t) = z(t; p, x), has exciting conceptual consequences: | [] | IMBEDDING DEEP NEURAL NETWORKS
Andrew Corbett [email protected]
University of Exeter
Etcembly Ltd
Dmitry Kangin [email protected]
University of Exeter
Etcembly Ltd
Published as a conference paper at ICLR 2022
Continuous-depth neural networks, such as Neural ODEs, have refashioned the understanding of residual neural networks in terms of non-linear vector-valued optimal control problems. The common solution is to use the adjoint sensitivity method to replicate a forward-backward pass optimisation problem. We propose a new approach which explicates the network's 'depth' as a fundamental variable, thus reducing the problem to a system of forward-facing initial value problems. This new method is based on the principle of 'Invariant Imbedding' for which we prove a general solution, applicable to all non-linear, vector-valued optimal control problems with both running and terminal loss. Our new architectures provide a tangible tool for inspecting the theoretical, and to a great extent unexplained, properties of network depth. They also constitute a resource of discrete implementations of Neural ODEs comparable to classes of imbedded residual neural networks. Through a series of experiments, we show the competitive performance of the proposed architectures for supervised learning and time series prediction. Accompanying code is made available at github.com/andrw3000/inimnet.

* Equal Contribution.
UNPACKING CONTINUOUS-DEPTH NEURAL NETWORKS
The long-standing enigma surrounding machine learning still remains paramount today: What is it that machines are learning and how may we extract meaningful knowledge from trained algorithms? Deep Neural Networks (DNNs), whilst undeniably successful, are notorious black-box secret keepers. To solve a supervised learning process, mapping vector inputs x to their targets y, parameters are stored and updated in ever deepening layers with no facility to access the physical significance of the internal function approximating the global mapping x → y.

Figure 1: Lengths t of projectile curves initiated with identical initial velocities x at a range of points p_i along the t-axis. The red curve depicts a regression fit to a t-varying sample set; contrast with the blue InImNet training paradigm which learns the endpoints (diamonds) from varying the input position p_i.
We propose a new class of DNNs obtained by imbedding multiple networks of varying depth whilst keeping the inputs, x, invariant; we call these 'Invariant Imbedding Networks' (InImNets).
To illustrate the concept, Figure 1 depicts a system of projectiles fired from a range of positions p_1 < p_2 < · · · < p_n with the same initial velocity conditions x. The red curve (initiated at p_1) is fit to a sample (circles) along a single trajectory, representing a traditional regression problem. InImNet architectures are trained on the output values y = y(p_i, x) at p_n (the diamonds) as the depth p_i of the system varies. This analogy applies to DNN classifiers where increasing the depth from p_i to p_{i-1} outputs a classification decision for each of the i-steps.
As a machine learning tool, the use of deep hidden layers, whilst successful, was first considered ad hoc in implementations, such as in multilayer perceptrons. But following the advent of residual neural networks (He et al., 2015), which use 'Euler-step' internal updates between layers, DNN evolution is seen to emulate a continuous dynamical system (Lu et al., 2018; Ruthotto & Haber, 2020). Thus was formed the notion of a 'Neural ODE' (Chen et al., 2018) in which the hidden network state vector z(t) ∈ R^N, instead of being defined at fixed layers t ∈ N, is allowed to vary continuously over an interval [p, q] ⊂ R for a system of dimension N ≥ 1. Its evolution is governed by an Ordinary Differential Equation (ODE)

$$\frac{dz}{dt}(t) = f(t, z(t), \theta(t))\,, \tag{1}$$

where the training function f is controlled by a parameter vector θ(t) ∈ R^M for t ∈ [p, q] of length M ≥ 1. Network outputs are retrieved at z(q) = y after fixing the endpoint t = q. As such, the enigmatic 'depth' of a Neural ODE is controlled by varying t = p, at which point we insert the initial condition z(p) = x. This dynamical description has given the theory of DNNs a new home: the mathematical framework of optimal control (Massaroli et al., 2020; Bensoussan et al., 2020). In this framework, whether discrete or continuous, solution networks are sought that: (A) satisfy their update law (1) over a fixed 'depth' interval [p, q]; (B) minimise a loss function subject to terminal success and internal regulation (see §2.1). As initiated by Euler and Lagrange (Euler, 1766), this mathematical framework determines networks given by (A) satisfying condition (B) using, amongst other techniques, the adjoint method: here a new state, which we call λ(t) or the 'Lagrange multiplier', is introduced containing the system losses with respect to both t and the parameters θ(t); see §2.3. The connection between the traditional method of 'backpropagation-by-chain rule' and the rigorous 'adjoint method' is quite brilliantly explicated in proof by Chen et al. (2018), in that they directly deduce the adjoint method using infinitesimal backpropagation, far from the train of thought of Lagrange's multiplier method (Liberzon, 2012, Ch. 2); see also §D and §E.2 in the appendix for extended discussion and derivation.

Even within the above theory, the initial condition, z(p) = x, and location, t = p, remain implicit constraints; a clear understanding of network depth remains elusive. Our new class of InImNet architectures may be obtained by imbedding networks of varying depth p whilst keeping the inputs, x, invariant. Explicating these two variables throughout the network, writing z(t) = z(t; p, x), has exciting conceptual consequences:

1. Forward pass to construct multiple networks: InImNet state vectors z(t; p, x) are computed with respect to the depth variable p rather than t ∈ [p, q], which is considered fixed (in practice at t = q). We build from the bottom up: initiate at p = q with the trivial network z(q; q, x) = x and unwind the p-varying dynamics, as described in Theorem 1, by integrating

$$\nabla_p z(q; p, x) = -\nabla_x z(q; p, x) \cdot f(p, x, \theta(p)) \tag{2}$$

from p = q to a greater depth p. Note that at depth p an InImNet returns an external output z(q; p, x) ∼ y, subject to training. This contrasts with convention, where one would obtain z(q; p, x) by integrating from t = p to t = q, where t < q states are considered internal. A general algorithm to implement the forward pass is described in Algorithm 1; a minimal Euler-step sketch follows this list. The gradient operator ∇ denotes the usual vector, or Jacobian, of partial derivatives.

2. Backpropagate independently from the forward pass: We generalise the adjoint method of Chen et al. (2018), who was able to do away with the backpropagation-by-chain rule method in favour of a continuous approach with at most bounded memory demand. With our bottom-up formulation, we are able to go one step further and do away with the initial forward pass altogether by initiating our 'imbedded' adjoint Λ(p, x), generalising λ(t), with loss gradients for the trivial network z(q; q, x) = x and computing to depth p via
$$\nabla_p \Lambda(p, x) = -\left[ \nabla_x \Lambda(p, x) \cdot f(p, x, \theta(p)) + \nabla_x f(p, x, \theta(p))^T \cdot \Lambda(p, x) \right]. \tag{3}$$
See Theorem 2 for a precise, more general explication. Backward passes may be made independently of forward passing altogether; see Theorem 2 and Algorithm 1.

3. Pre-imposed optimality: Working in the framework of optimal control theory, we consider both running and terminal losses, that is, a general 'Bolza problem'; see §2.1. We give a necessary first-order criterion for optimal control (Theorem 3). In this way, we account for t-varying parameter controls θ(t) = θ(t; p, x), omitted from the original Neural ODEs, permitting future compatibility with the recent particle-shooting models of Vialard et al. (2020).

Figure 2: Top: a Neural ODE evolves the state by ż(t) = f(t, z, θ) from z(p) = x to z(q) and backpropagates the adjoint by λ̇(t) = −∇_z f(t, z, θ)^T · λ(t) from λ(q) = [∇_z T](z(q)) back to λ(p). Bottom: An InImNet separates the forward and backward passes into separate initial value problems along the depth variable p, initiated at z(q; q, x) = x and Λ(q, x) = [∇_z T](x) and stepped from p + ∆ to p via ∇_p z(q; p, x) = −∇_x z(q; p, x) · Φ(p, x) and ∇_p Λ(p, x) = −[∇_x Λ(p, x) · f(p, x, θ(p)) + ∇_x f(p, x, θ(p))^T · Λ(p, x)].
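Below is a naive Euler-step sketch of the bottom-up forward pass in item 1, not a reproduction of the paper's Algorithm 1: the toy dynamics function f, the parameter dictionaries, and the use of torch.func.jacrev (PyTorch 2.x) to obtain ∇_x z are all our own assumptions.

```python
import torch
from torch.func import jacrev  # assumes PyTorch >= 2.0

def f(p, x, theta):
    """Hypothetical toy dynamics function; the paper leaves f general."""
    return torch.tanh(theta["W"] @ x + theta["b"])

def inimnet_forward(x, thetas, q=1.0, dp=0.05):
    """Naive Euler discretisation of Equation 2, built bottom-up from the
    trivial network z(q; q, x) = x. One step deepens the network from p to
    p - dp via
        z(q; p - dp, x) ~ z(q; p, x) + dp * grad_x z(q; p, x) @ f(p, x, theta(p)).
    Each level re-evaluates the shallower network inside jacrev, so the cost
    grows exponentially with depth; fine only as a small illustration.
    """
    z_fn, p, outputs = (lambda u: u), q, []
    for theta in thetas:
        def deeper(u, prev=z_fn, p=p, theta=theta):
            return prev(u) + dp * jacrev(prev)(u) @ f(p, u, theta)
        z_fn, p = deeper, p - dp
        outputs.append(z_fn(x))  # external output of the depth-p network at input x
    return outputs

thetas = [{"W": 0.1 * torch.randn(4, 4), "b": torch.zeros(4)} for _ in range(4)]
print(inimnet_forward(torch.randn(4), thetas)[-1])
```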
Our contribution. We prove that it is possible to reduce the non-linear, vector-valued optimal control problem to a system of forward-facing initial value problems. We introduce a new theoretical leap for the understanding of depth in neural networks, in particular with respect to viewing DNNs as dynamical systems (Chen et al., 2018;Massaroli et al., 2020;Vialard et al., 2020). We demonstrate that this approach may be used in creating discrete and continuous numerical DNN schemes. We verify such algorithms by successfully applying our method to benchmark experiments, which perform well on a range of complex tasks (high-dimensional rotating MNIST and bouncing balls) in comparison to state-of-the-art techniques.
Architecture. In §3 we give novel 'InImNet' architectures implementing the above results. These may be applied to regression and classification problems using discrete or continuous algorithms.
Experimental performance. In §4 we present some promising results for their practical functionality. These are designed to support the theoretical development of InImNets in §2 & 3 and support the proof of concept in application. We derive various successful discrete implementations, based on computing the state evolutions (2) and (3) with an Euler-step method. We also identify operational aspects in InImNets which we expect to encourage ongoing research and development. Crucially, our architectures are compatible with the various technical advancements since made to Neural ODEs, including those by Dupont et al. (2019); Davis et al. (2020); Yıldız et al. (2019); see also a study of effective ODE solvers for the continuous models (Hopkins & Furber, 2015).
Broader impact for optimal control theory. Further afield, our result applies more generally as a new tool in the theory of optimal control. The mathematical technique that we apply to (1), deriving (2) and (3), is known as the Invariant Imbedding Method. The key output of the method is the reformulation of a two-point boundary value problem as a system of initial value problems, given as functions of initial value x and input location p alone (Meyer, 1973). This stands in the literature as an alternative to applying the calculus of variations (Liberzon, 2012;Vialard et al., 2020). For linear systems, the technique was first developed by Ambarzumian (1943) to study deep stellar atmospheres. It has since found widespread applications in engineering (Bellman & Wing, 1975), optical oceanography (Mobley, 1994), PDEs (Maynard & Scott, 1971), ODEs (Agarwal & Saraf, 1979) and control theory (Bellman et al., 1966;Kalaba & Sridhar, 1969), to name a few. The non-linear case is only touched on in the literature for scalar systems of zero terminal loss (Bellman et al., 1966;Kalaba & Sridhar, 1969)-including some numerical computations to support its efficiency (Spingarn, 1972). In this work we derive a complete invariant imbedding solution, fit for applications such as those mentioned above, for a non-linear, vector-valued Bolza problem.
Overview of the paper. In §2 we give the main theorems used for InImNet architectures and more widely in the field of optimal control. Detailed derivations and proofs may be found in the appendix, §E. In §3 we put forward various architectures to illustrate how InImNets may be utilised in learning paradigms. In §4 we describe our supporting experimental work for such architectures.
OPTIMISATION VIA INVARIANT IMBEDDING
Solutions to (1) depend implicitly on both an input datum x and the input position p at which the input is cast. The (p, x) relationship is at the heart of the invariant imbedding method, which explicates these arguments, written into the notation as z(t) = z(t; p, x). In this section we begin by introducing the optimisation problem before giving our invariant imbedding solution. Fix integers M, N ≥ 1 to denote the dimensions of the instantaneous parameter space θ(t) ∈ R^M and the dynamical system z(t), f(t, z, θ) ∈ R^N.
THE BOLZA OPTIMISATION PROBLEM
The key advantage of studying continuous DNNs is to redress the learning problem as an optimal control problem. Here we consider the most general control problem, a Bolza problem, which is subject to two forms of loss: a terminal loss, the discrepancy between the system outputs z(q) and the true outputs y, measured by a loss function T on R^N; and a running loss, which regulates both the state vector z(t) itself as it evolves over t ∈ [p, q] and the control θ(t) over [p, q], measured by a square-integrable functional R on [p, q] × R^N × R^M. Together, the minimisation problem is to find a control θ(t) and solution z(t) to (1) whilst minimising the total loss
$$ \mathcal{J}(\theta; p, x) := \int_p^q R(t, z(t; p, x), \theta(t; p, x))\,dt + T(z(q; p, x)) \tag{4} $$
for each known datum pair (x, y). Applying the calculus of variations, one considers small perturbations about an optimal control θ which minimises J(θ; p, x) whilst determining a solution z to (1). The well-known first-order Euler-Lagrange optimality equations thus derived (see §E.3) constitute a constrained, two-point boundary value problem, as depicted in Figure 2. By contrast, the invariant imbedding method restructures the first-order optimal system as an initial value problem. The t-dependence is brushed implicitly under the rug, with numerical integration performed instead on the depth variable p.
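To make the objective concrete, the following minimal sketch evaluates (4) for a discretised trajectory with a left Riemann sum. The function name and the quadratic running and terminal losses are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def bolza_loss(ts, zs, thetas, R, T):
    """Left-Riemann-sum estimate of the Bolza objective (4).

    ts:     time points p = t_0 < ... < t_K = q
    zs:     states z(t_k; p, x), shape (K+1, N)
    thetas: controls theta(t_k; p, x), shape (K+1, M)
    R:      running loss R(t, z, theta) -> float
    T:      terminal loss T(z) -> float
    """
    running = sum(
        R(ts[k], zs[k], thetas[k]) * (ts[k + 1] - ts[k])
        for k in range(len(ts) - 1)
    )
    return running + T(zs[-1])

# Illustrative quadratic losses (hypothetical choices).
R = lambda t, z, th: 0.5 * np.sum(z ** 2) + 0.5 * np.sum(th ** 2)
T = lambda z: 0.5 * np.sum((z - 1.0) ** 2)

ts = np.linspace(-1.0, 0.0, 11)   # p = -1, q = 0
zs = np.ones((11, 2))             # placeholder trajectory
thetas = np.zeros((11, 3))        # placeholder control
print(bolza_loss(ts, zs, thetas, R, T))
```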
2.2 THE INVARIANT IMBEDDING SOLUTION
The fundamental principle we follow is to observe the imbedding of intervals [p + ∆, q] ⊂ [p, q], for 0 ≤ ∆ ≤ q − p, which carry solutions z(t; p + ∆, x) to (1), whilst keeping the input, x, to each invariant. In limiting terms as ∆ → 0, the partial rate of change in depth ∇_p = ∂/∂p is directly related to the vector gradient ∇_x for the initial value x at p. This is controlled by the coefficient
$$ \Phi(p, x) := f(p, x, \theta(p)). \tag{5} $$
Theorem 1. Let θ(t) be an admissible control, by which we mean θ is piecewise continuous on t ∈ [p, q]. Suppose the dynamics function f : [p, q] × R N × R M → R N is continuous on (t, θ) ∈ [p, q] × R M and continuously differentiable on z ∈ R N . Let t ∈ [p, q] and suppose z(t; p, x) and θ(t; p, x) satisfy (1) for each x ∈ R N . Then we have the invariant imbedding relation
$$ \nabla_p z(t; p, x) = -\nabla_x z(t; p, x) \cdot \Phi(p, x). \tag{6} $$
The assumptions in Theorem 1 could be relaxed further. For instance, one could expand the class of admissible controls θ(t) to those which are measurable and locally bounded. The dynamics function could relax the existence assumption on ∇_z f in favour of a Lipschitz property. The assumptions stated are permissible for a wide range of applications. One may read more about possible weaker hypotheses in (Liberzon, 2012, §3.3.1).
We use this result given by (6) as a model to address the following learning problem. Consider a collection of input values x ∈ R N corresponding to known output values y ∈ R N . We seek to extend an approximation of x → y to larger subsets of R N . We proceed by choosing an interval [p, q] ⊂ R of arbitrary depth (fixing q and varying p) and postulate a state vector z(t; p, x), subject to (1), that approximates y at t = q given the input z(p; p, x) = x.
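As a sanity check of Theorem 1, the relation (6) can be verified numerically on a toy autonomous linear system, for which z(t; p, x) = expm(A(t − p))x is available in closed form; the matrix A below is an arbitrary example, not one taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Toy linear dynamics f(t, z) = A z, so z(t; p, x) = expm(A (t - p)) x.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
x = np.array([1.0, 2.0])
p, t = -1.0, 0.0

z = lambda p_: expm(A * (t - p_)) @ x

# Left-hand side of (6): d/dp z(t; p, x), by central differences.
eps = 1e-5
lhs = (z(p + eps) - z(p - eps)) / (2 * eps)

# Right-hand side of (6): -grad_x z(t; p, x) . Phi(p, x), with Phi(p, x) = A x.
grad_x_z = expm(A * (t - p))   # z is linear in x, so its Jacobian is expm(A (t - p))
rhs = -grad_x_z @ (A @ x)

print(np.max(np.abs(lhs - rhs)))  # small: finite-difference error only
```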
The parameter control, whilst commonly restricted in applications, is a priori subject to the same dependencies θ(t) = θ(t; p, x). We denote a second coefficient related to its endpoint by
$$ \Psi(p, x) := \theta(p; p, x) \tag{7} $$
which, for p ≤ t ≤ q, also satisfies the invariant imbedding relation in Theorem 1:
$$ \nabla_p \theta(t; p, x) = -\nabla_x \theta(t; p, x) \cdot \Phi(p, x). \tag{8} $$
Whilst this observation may be made of the underlying dynamical system alone, the consequences extend to the control problem in §2.1 and form the basis of our solution.
2.3 BACKWARD LOSS PROPAGATION
The calculus of variations, introduced by Euler and Lagrange (Euler, 1766), provides a mechanism by which to find an optimal control θ (see Liberzon, 2012, Ch. 2). The key trick is to invoke a function called the Lagrange multiplier λ(t) = λ(t; p, x), also known as the "adjoint state" (Chen et al., 2018) or "Hamiltonian momentum" (Vialard et al., 2020), which encodes the backward-propagated losses. Indeed, λ(q; p, x), the initial value at the endpoint t = q, is defined to be the gradient of the terminal loss T(z(q; p, x)) with respect to z(q; p, x). To optimise the network, whether directly using (12) or by stochastic gradient descent on the weights, one must propagate λ(t; p, x) back to t = p. To this end we introduce the 'imbedded adjoint' coefficient
$$ \Lambda(p, x) = \lambda(p; p, x) \tag{9} $$
which is the subject of our main backward result. Theorem 2. Suppose that θ, z and f satisfy the hypotheses of Theorem 1 for each p ≤ q. Suppose too that the terminal loss, T , and running loss, R, are defined as in Equation (4), subject to the hypotheses of §2.1. Then the imbedded adjoint Λ(p, x) satisfies the reverse initial value problem
$$ -\nabla_p \Lambda(p, x) = \nabla_x \Lambda(p, x) \cdot \Phi(p, x) + [\nabla_z f](p, x, \Psi(p, x))^T \cdot \Lambda(p, x) + [\nabla_z R](p, x, \Psi(p, x)), \tag{10} $$
initiated at p = q by the value Λ(q, x) = [∇_z T](x).
The bracketed use of [∇ z f ] etc. is to stress that the differential operator acts before evaluation.
We contrast our approach of fixed t = q and varying depth p with the adjoint method of Chen et al. (2018), who fix p and vary p ≤ t ≤ q. Our derivation provides a new proof of the standard Euler-Lagrange equations which give the adjoint method, manifesting in our account as
$$ \lambda(p; p, x) = [\nabla_z T](z(q; p, x)) - \int_q^p \big( [\nabla_z f](t, z, \theta)^T \cdot \lambda(t; p, x) + [\nabla_z R](t, z, \theta) \big)\, dt \tag{11} $$
with initial value λ(q; p, x) = [∇_z T](z(q; p, x)) at t = q. Observe that in Theorem 2 the initial loss at p = q is given by [∇_z T](z(q; q, x)) = [∇_z T](x) for the trivial network. Back-integrating this term thus does not require a forward pass of the z-state, at the cost of computing the input gradients ∇_x Λ(p, x). Optimising this latter process opens a new window of efficient DNNs.
2.4 A FIRST ORDER OPTIMALITY CONDITION
With the insights gleaned from optimal control theory, Vialard et al. (2020) take a different approach facilitating t-varying parameter controls θ(t; p, x). This is based on assuming that θ is optimal from the outset. This is achieved through specifying θ by the t-varying constraint
$$ [\nabla_\theta R](t, z, \theta) + [\nabla_\theta f](t, z, \theta)^T \cdot \lambda(t; p, x) = 0. \tag{12} $$
We obtain a corresponding condition for the coefficients that constitute an InImNet. Theorem 3. Suppose that θ, z and f satisfy the hypotheses of Theorem 1 for each p ≤ q. Suppose the losses T , R and J are defined as in (4) subject to the hypotheses of §2.1. Then the first-order optimality condition for the total loss J (θ; p, x) to be minimised is given by
$$ [\nabla_\theta R](p, x, \Psi(p, x)) + [\nabla_\theta f](p, x, \Psi(p, x))^T \cdot \Lambda(p, x) = 0. \tag{13} $$
Making the (p, x)-dependency explicit for optimal controls, this identity provides a mechanism by which the depth p itself is accessible to optimise. In practice, Ψ(p, x), and hence Φ(p, x), are derived from Λ(p, x); these are connected by Theorem 3 above. Altogether, Equations (6), (8), (10) and (13) constitute the invariant imbedding solution to the general Bolza problem as advertised.
3 INIMNET ARCHITECTURES
The architectures presented here are based upon the results of Theorems 1, 2 & 3. Whilst obtaining the p-th network state z(q; p, x) and the backward adjoint Λ(p, x) are independent processes, we nevertheless describe their general form together in Algorithm 1.
Algorithm 1 Independent forward and backward pass with InImNet
Require: Training set of input/output pairs (x, y); evaluation points p_1 < · · · < p_n; loss function T (see §2.1); training function f(t, z, θ).
Ensure: Track x-operations for auto-differentiation, or substitute a numerical derivative (see §3.2).
Inputs: z(q; p_n, x) = x; Λ(q, x) = ∇_x T(x)
for i = n − 1, . . . , 1 do
    z(q; p_i, x) = z(q; p_{i+1}, x) + ∫_{p_{i+1}}^{p_i} ∇_p z(q; p, x) dp   ▷ Use Theorem 1.
    Λ(p_i, x) = Λ(p_{i+1}, x) + ∫_{p_{i+1}}^{p_i} ∇_p Λ(p, x) dp   ▷ Use Theorem 2.
end for
Returns: Tuple of outputs z(q; p_i, x) corresponding to networks of varying depths p_i.
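A schematic Python rendering of Algorithm 1 might look as follows; the routines grad_p_z and grad_p_Lambda, which realise Theorems 1 and 2, and the function names themselves are assumptions supplied for illustration, and the integrals are approximated by single forward-Euler steps.

```python
import numpy as np

def inimnet_pass(x, ps, grad_T, grad_p_z, grad_p_Lambda):
    """Schematic rendering of Algorithm 1 with forward-Euler quadrature.

    x:             input datum, shape (N,)
    ps:            evaluation points p_1 < ... < p_n (with p_n = q)
    grad_T:        gradient of the terminal loss T
    grad_p_z:      callable (p, z) -> d/dp z(q; p, x), via Theorem 1
    grad_p_Lambda: callable (p, Lam) -> d/dp Lambda(p, x), via Theorem 2
    """
    n = len(ps)
    zs = {n - 1: np.array(x, dtype=float)}   # z(q; p_n, x) = x
    Lams = {n - 1: grad_T(x)}                # Lambda(q, x) = grad_z T(x)
    for i in range(n - 2, -1, -1):           # i = n-1, ..., 1 in the paper's indexing
        dp = ps[i] - ps[i + 1]               # negative step: integrate toward smaller p
        zs[i] = zs[i + 1] + dp * grad_p_z(ps[i + 1], zs[i + 1])
        Lams[i] = Lams[i + 1] + dp * grad_p_Lambda(ps[i + 1], Lams[i + 1])
    return zs, Lams                          # one output per simulated depth p_i
```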
In the remainder of this section we describe various discrete models to implement variants of Algorithm 1. Continuous architectures using black-box auto-differentiable ODE solvers, such as those considered by Chen et al. (2018), may be readily implemented. This approach poses interesting new avenues of research based on implementing accurate numerical alternatives to the computation of nested Jacobians. Simultaneously, the stability of DNN dynamics functions becomes another crucial question to address.
For our experiments we seek to show a first-principle implementation of InImNets, and we do so by describing a discrete architecture, executing the minimum computation time whilst demonstrating high performance; see §3.1.
Finally, time-series data, or running losses, are not considered by Algorithm 1 but may be handled by InImNets, whose dynamical structure offers a natural formulation for such problems. We consider such an architecture in §C of the appendix, as well as its application to regression problems in §F.2.
3.1 EULER-METHOD EXPERIMENTAL ARCHITECTURE
For implementation, we pass through the underlying ODEs with a proposed architecture based on a simple forward-Euler solution to the integrals in Algorithm 1. This is comparable to the original ResNet architectures (He et al., 2015). To do this we divide up the interval into a collection of layers
$$ [p, q] = \bigcup_{i=1}^{n-1} [p_i, p_{i+1}] $$
and rewrite the invariant imbedding equation of Theorem 1 as
$$ z(t; p_i, x) = z(t; p_{i+1}, x) - (p_i - p_{i+1})\, \nabla_x z(t; p_{i+1}, x) \cdot \Phi(p_i, x), \tag{14} $$
subject to z(p n ; p n , x) = x. Backpropagation may then be executed by either differentiating through the system, as is the standard approach, or by implementing Theorem 2 through the forward-Euler formula
$$ \Lambda(p_i, x) = \Lambda(p_{i+1}, x) - (p_i - p_{i+1})\big[ \nabla_x \Lambda(p_i, x) \cdot \Phi(p_i, x) + \nabla_x f(p_i, x, \theta(p_i))^T \cdot \Lambda(p_{i+1}, x) \big] \tag{15} $$
with the initial condition Λ(p n , x) = ∇ x T (x). For our experimental applications, we also apply a technique to approximate the inherent Jacobians within the InImNet architecture; see Equation (21) in §3.2.
3.2 NUMERICAL APPROXIMATION OF INPUT GRADIENTS
An implicit computational speed bump is the computation of the gradients ∇_x in (2), (6) and (10). The immediate solution is to track the gradient graphs of these terms with respect to x and implement automatic differentiation. Indeed, this approach does yield successful models, if one has time on their hands, but the drawback is that a high memory cost is incurred for deep or high-dimensional networks.
We offer some surrogate numerical solutions. For the sake of example, suppose we wish to compute z(q; p, x) ∈ R N by integrating
$$ \nabla_p z(q; p, x) = -\nabla_x z(q; p, x) \cdot \Phi(p, x) \tag{16} $$
with respect to p. To compute the derivatives ∇ x we consider perturbations of the input vector x ∈ R N of the form x ± ∆ i e i for appropriately small ∆ i > 0 and e i := (δ ij ) 1≤j≤N for i = 1, . . . , N . We then solve for the 2N + 1 states z(q; p, x ± ∆ i e i ) by simultaneously integrating (16)
alongside
$$ \nabla_p z(q; p, x \pm \Delta_i e_i) = -\nabla_x z(q; p, x \pm \Delta_i e_i) \cdot \Phi(p, x \pm \Delta_i e_i), \tag{17} $$
where the gradients ∇_x z(q; p, x_0) are modelled by
$$ \nabla_x z(q; p, x_0) \approx \left( \frac{z(q; p, x_0 + \Delta_i e_i) - z(q; p, x_0 - \Delta_i e_i)}{2\Delta_i} \right)_i \tag{18} $$
for x_0 = x, x ± ∆_i e_i, respectively. This method is known as the symmetric difference quotient.
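A minimal implementation of the symmetric difference quotient (18) as a standalone Jacobian estimator might read as follows; the test function at the end is an arbitrary choice with a known Jacobian, used only as a sanity check.

```python
import numpy as np

def sym_diff_jacobian(F, x, deltas):
    """Symmetric-difference estimate of grad_x F(x), as in (18).

    F:      callable R^N -> R^K (e.g. x -> z(q; p, x))
    x:      base point, shape (N,)
    deltas: per-coordinate step sizes Delta_i, shape (N,)
    """
    N = len(x)
    J = np.empty((len(F(x)), N))
    for i in range(N):
        e = np.zeros(N); e[i] = deltas[i]
        J[:, i] = (F(x + e) - F(x - e)) / (2 * deltas[i])
    return J

# Sanity check against a map with known Jacobian [[x1, x0], [cos(x0), 0]].
F = lambda x: np.array([x[0] * x[1], np.sin(x[0])])
x = np.array([0.3, -1.2])
print(sym_diff_jacobian(F, x, deltas=np.full(2, 1e-5)))
```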
Other approximations may also be applied on a bespoke basis, such as Newton's difference quotient. This uses a similar construction but the negative shifts are forgotten, resulting in tracking N + 1 equations along (16) and (17) where we estimate
$$ \nabla_x z(q; p, x) \approx \left( \frac{z(q; p, x + \Delta_i e_i) - z(q; p, x)}{\Delta_i} \right)_i \tag{19} $$
and
$$ \nabla_x z(q; p, x + \Delta_i e_i) \approx -\nabla_x z(q; p, x). \tag{20} $$
Alternatively, and more directly, we may tackle computing the successive Jacobians ∇ x z(t, p, x), incurring a high memory cost storing gradient graphs, by approximating such terms through cropping the higher order gradients:
$$ \nabla_x z(t; p_i, x) = \nabla_x z(t; p_{i+1}, x) - \nabla_x \big( \nabla_x z(t; p_{i+1}, x) \cdot \Phi(p_i, x) \big) \approx \nabla_x z(t; p_{i+1}, x) - \nabla_x \Phi(p_i, x) \tag{21} $$
Whilst the theoretical losses are easily quantifiable, we show experimentally that, under this approximation, increasing the number of layers still improves the performance of the model.
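Putting §3.1 and §3.2 together, the following self-contained sketch runs the forward update (14) while propagating ∇_x z with the cropped recursion (21). The per-layer tanh map standing in for Φ is a hypothetical stand-in for the paper's MLP parameterisation, and the step size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_layers, step = 2, 4, -0.25          # step = p_i - p_{i+1} < 0 (p decreases with depth)
Ws = [rng.normal(scale=0.5, size=(N, N)) for _ in range(n_layers)]

def Phi(i, x):
    # Stand-in for Phi(p_i, x) = f(p_i, x, theta(p_i)); an MLP in the paper.
    return np.tanh(Ws[i] @ x)

def grad_x_Phi(i, x, eps=1e-5):
    # Symmetric-difference estimate of grad_x Phi(p_i, x), as in Section 3.2.
    J = np.empty((N, N))
    for j in range(N):
        e = np.zeros(N); e[j] = eps
        J[:, j] = (Phi(i, x + e) - Phi(i, x - e)) / (2 * eps)
    return J

def forward(x):
    # Update (14) with grad_x z propagated by the cropped recursion (21).
    z, J = x.copy(), np.eye(N)           # at p = q: z(q; q, x) = x, grad_x z = identity
    outputs = [z.copy()]
    for i in range(n_layers - 1, -1, -1):
        z = z - step * (J @ Phi(i, x))   # z_i = z_{i+1} - (p_i - p_{i+1}) J Phi, Eq. (14)
        J = J - grad_x_Phi(i, x)         # cropped higher-order gradients, Eq. (21)
        outputs.append(z.copy())
    return outputs                       # one output per simulated depth p_i

print(forward(np.array([1.0, -1.0]))[-1])
```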
4 EXPERIMENTAL RESULTS
In this section we demonstrate the practical ability of the proposed architectures in solving benchmark problems for deep neural networks. All experiments use backpropagation to differentiate through the system, as outlined in §3.1, except for the projectile motion study in the appendix, §F.2, where training is executed via the update rule stated by Equation (15). Without loss of generality, we arbitrarily choose q = 0 and assume p to be in the range [p_min, 0], where p_min is a negative value whose modulus represents the 'depth' of the network.
4.1.1 ROTATING MNIST
In the 'Rotating MNIST' experiment, we solve the task of learning to generate the handwritten digit '3' for a sequence of sixteen equally spaced rotation angles between [0, 2π], given only the first example of the sequence. To match the experimental setting of Yıldız et al. (2019) and Vialard et al. (2020), we train the model on layer p_min using backpropagation, minimising the mean squared error for all angles except for one fixed angle (in the fifth frame), as well as three random angles for each sample. We report the Mean Squared Error (MSE) at the fifth frame and the standard deviation over ten different random initialisations.
The results of this experiment are given in Table 1. The details of the experimental set-up and the hyperparameters are given in §G.1 of the appendix. The results show comparable performance to that achieved by Vialard et al. (2020), while using a more computationally efficient discrete invariant imbedding formulation (see §4.1.2). We implement the InImNet dynamics function Φ (as in Theorem 1) as a Multilayer Perceptron (MLP) with either two or three layers.
The key insight provided by InImNets is the simulation of multiple networks (for each p) from a single network pass. We give an expanded account of performance of this nature in §F.3 of the appendix, alongside the experimental set-up. To demonstrate such observations, we exhibit a prototype loss-plot for the Rotating MNIST experiment (in the appendix, see Figure 7). The horizontal axis varies with the frame of the rotation; the dip in loss corresponds to a neutral frame, as expected. Each line, p = −1, −2, . . ., corresponds to an output at a different p-layer. The network is tuned (or trained) on its outputs at layer p = p_min = −4. However, by extrapolating either side of p_min we can make quantitative observations on the rate of loss degradation; for example, the total loss remains lower shifting to p = −5 rather than p = −3.
5 DISCUSSION AND CONCLUSIONS
We have shown that it is possible to reduce the non-linear, vector-valued optimal control problem to a system of forward-facing initial value problems. We have demonstrated that this approach may be used in creating discrete and continuous numerical schemes. In our experiments, we show that (1) for a range of complex tasks (high-dimensional rotating MNIST and bouncing balls), the discrete model exhibits promising results, competitive with the current state-of-the-art methods while being more computationally efficient; and (2) the continuous model, via the Euler method, shows promising results on a projectile motion task. They demonstrate the potential for inference in continuous neural networks by using the invariant imbedding method to vary the initial conditions of the network rather than the well-known forward-backward pass optimisation.
We have outlined a class of DNNs which provide a new conceptual leap within the understanding of DNNs as dynamical systems. In particular, the explication of the depth variable leads to a new handle for the assessment of stacking multiple layers in DNNs. This also fits within the framework of Explainable AI, whereby an InImNet model is able to depict a valid output at every model layer.
Of course, nothing comes for free. The expense we incur is the presence of nested Jacobian terms, for example ∇_x z(t; p, x). We show experimentally that our models perform well with elementary approximations for the purpose of functionality. But a true understanding of these terms is deeply related to the stability of Neural ODEs over a training cycle.
In this article we do not explore the ramifications of the optimality condition of Theorem 3. Building on the work of Vialard et al. (2020), in which systems are considered optimal from the outset via Theorem 3, we propose to study the variability of depth of such optimal systems.
We end where we started with the image of Figure 1. In the appendix, in §C and §F.2, we implement a time-series architecture and apply this to modelling projectile motion curves. We discuss the difficulties faced by InImNets for high-depth data in §F.1 and suggest a promising avenue for extended research.
A OVERVIEW OF NOTATION AND CONVENTIONS
We consider all R-vectors as column vectors, to be denoted in bold font. Let ⟨v, w⟩ denote the Euclidean product between two vectors v and w. Matrix multiplication between A and B shall be denoted A · B and transposition by A^T. For v = (v_i)_i introduce the gradient operator by the row vector ∇_v = (∂/∂v_i)_i, which operates on scalar fields as usual but also on vector fields F(v) = (F_i(v))_i to give the Jacobian matrix ∇_v F = (∂F_i/∂v_j)_{i,j}. Then, if v = v(w), the chain rule reads ∇_w F(v(w)) = ∇_v F · ∇_w v, which could be expressed as ∂φ/∂w = ⟨∇_v φ, ∂v/∂w⟩ if F = φ and w = w were both scalar valued. Moreover we extend the gradient notation by using it for scalars p ∈ R to denote the single partial derivative ∇_p = ∂/∂p. Define the second order gradient operator by the matrix of second order partial derivatives ∇²_{v,w} := ∇_w ∇_v = (∂²/∂v_i ∂w_j)_{i,j} with respect to v = (v_i)_i and w = (w_i)_i. We use the Kronecker delta symbol given by δ_{ij} = 1 if and only if i = j and δ_{ij} = 0 otherwise. The N × N identity matrix is then given by 1_N := (δ_{ij})_{1≤i,j≤N}. Lastly, for a (possibly vector valued) function F on R we write F(x) = o(x), implicitly with respect to x → 0, to denote that lim_{x→0} F(x)/x = 0.
B THE IMBEDDING RULE
InImNet architectures are based on the principle of invariant imbedding: the description of a system of a given length in terms of its imbedded sister systems. For example, consider two lengths p_2 < p_1 < q such that the solution z(t; p_1, x) to (1) is known for t ≥ p_1. The solution z(t; p_2, x) over [p_2, q] restricted to t ≥ p_1, with x input at p_2, may be expressed using (6) by
$$ z(t; p_2, x) = z(t; p_1, x) - \int_{p_1}^{p_2} \nabla_x z(t; p, y)\, \Phi(p, x)\, dp. \tag{22} $$
In this way, two separate intervals may be adjoined by observing the imbedding rule
$$ z(t; p_2, x) = z(t; p_1, z(p_1; p_2, x)), \tag{23} $$
where the new input z(p_1; p_2, x) is evaluated using (22) at t = p_1. In the case that f is a linear function of z, the identity in (22) is sometimes known as the Riccati transformation (Bellman & Wing, 1975). In practical computations, one should devise models, such as step-linearisation, to solve (22) by simulating the gradients ∇_x z(p_1; p, x) for p < p_2.
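For linear dynamics f(t, z) = Az the imbedding rule (23) reduces to the semigroup property of the matrix exponential, which the following toy check illustrates; the matrix and endpoints are arbitrary examples.

```python
import numpy as np
from scipy.linalg import expm

# For f(t, z) = A z we have z(t; p, x) = expm(A (t - p)) x, so (23) becomes
# expm(A (t - p2)) x = expm(A (t - p1)) expm(A (p1 - p2)) x.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
x = np.array([0.7, -0.3])
p2, p1, t = -2.0, -1.0, 0.0

z = lambda t_, p_, x_: expm(A * (t_ - p_)) @ x_

lhs = z(t, p2, x)                  # z(t; p_2, x)
rhs = z(t, p1, z(p1, p2, x))       # z(t; p_1, z(p_1; p_2, x))
print(np.max(np.abs(lhs - rhs)))   # agrees to machine precision
```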
C LEARNING LATENT DYNAMICS FROM END-POINT OBSERVATIONS
Here we propose an architecture for the application of InImNets to time-series regression problems, such as we implement in §F.2.
Typical regression problems ask for a model, given by a state z(t), to approximate a t-varying process y(t) sampled at a finite number of training points y(t_i) in a fixed region of interest t_i ∈ [p, q]. Assuming that y satisfies a well-behaved ODE, the learning paradigm of a Neural ODE ż = f, see (1), is well suited to learn such continuous dynamics. Inherently, the initial condition y(p) = x is assumed fixed.
InImNets are naturally structured to learn a variant of this problem: consider a t-varying process modelled by the dynamical system
$$ \dot{y}(t; p, x) = g(t, y); \qquad y(p; p, x) = x, \tag{24} $$
for which we are able to vary the location p ≤ q of an invariant initial condition y(p) = x. Suppose the sampled data is only available at the fixed output t = q of the process, meaning that the training data is given by a set of the form
$$ \{\, y_i := y(q; p_i, x) \mid p_i \le q \,\}_i. \tag{25} $$
In this way, an InImNet learns the dynamical system itself, as the initial conditions vary, from the outputs alone. Apply this to the running loss in Theorem 2 by, for a differentiable cost function C : R^N × R^N → R, defining
$$ R(t, z, \theta) = R(t, z) := \sum_i C(z(q; p, z(t; p, x)), y_i)\, \delta_{t, p_i}. \tag{26} $$
Then the p-th invariant imbedding coefficient is
$$ [\nabla_z R](p, x) = \sum_i \delta_{p, p_i} \cdot [\nabla_z C](z(q; p, x), y_i) \cdot \nabla_x z(q; p, x). \tag{27} $$
The term δ p,pi features intrinsically in Neural ODE architectures as only a single depth p is considered in a given system. We implement (26) recursively (cf. Chen et al., 2018, Figure 2) so that the discrete architecture for the adjoint reads
$$ \Lambda(p_i, x) = \Lambda(p_{i+1}, x) + [\nabla_z R](p_i, x) - (p_i - p_{i+1})\big[ \nabla_x \Lambda(p_i, x) \cdot \Phi(p_i, x) + \nabla_x f(p_i, x, \theta(p_i))^T \cdot \Lambda(p_{i+1}, x) \big]. \tag{28} $$
The forward pass is executed exactly as in §3.1.
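A sketch of one backward step of (28) might read as follows. Since (28) is implicit in ∇_x Λ(p_i, x), the sketch substitutes an estimate taken at p_{i+1}, an explicit-Euler simplification that is our assumption rather than the paper's prescription; all argument names are hypothetical.

```python
import numpy as np

def adjoint_step(Lam_next, grad_x_Lam, Phi_i, grad_x_f_T, grad_z_R_i, step):
    """One backward step of the time-series adjoint, Equation (28).

    Lam_next:    Lambda(p_{i+1}, x), shape (N,)
    grad_x_Lam:  estimate of grad_x Lambda at p_i (here taken at p_{i+1}), shape (N, N)
    Phi_i:       Phi(p_i, x), shape (N,)
    grad_x_f_T:  grad_x f(p_i, x, theta(p_i))^T, shape (N, N)
    grad_z_R_i:  running-loss term [grad_z R](p_i, x) of (27); zero off the
                 sampled depths because of the delta_{p, p_i} factor
    step:        p_i - p_{i+1} (negative when integrating toward smaller p)
    """
    return (Lam_next + grad_z_R_i
            - step * (grad_x_Lam @ Phi_i + grad_x_f_T @ Lam_next))
```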
D THE ADJOINT METHOD IN APPLICATION
D.1 BACKPROPAGATION AND THE ADJOINT STATE
As touched on in the introduction, the adjoint method, in optimal control, is a natural consequence of the Euler-Lagrange equations (which we explicitly derive in §E.3). To outline the method, one postulates a multiplier function λ(t; p, x), the 'Lagrange multiplier' or 'adjoint', as in §E.2. One derives the initial value of the adjoint, at the endpoint t = q, to be the loss gradient λ(q; p, x) = [∇_z T](z(q; p, x)). On the other hand, the t-varying dynamics is seen to follow the dynamical system given by (47). Integrating (47) backward from t = q to t = p may be interpreted as the backpropagation of losses throughout the system. Indeed, using routines such as those described below in §D.2, the loss gradients with respect to parameters may be explicitly derived.
The interpretation of the adjoint as backward-propagated losses was illuminated, in the context of deep neural networks, by Chen et al. (2018). They derive the backward dynamical system (47) for λ using the chain rule over infinitesimal steps. This makes clear that the adjoint method is the direct generalisation of the traditional backpropagation method.
In another twist in the tale, the derivation of the adjoint method also leads to a natural first-order optimality condition; we deduce this in (43). This theoretical tool gives an option for single-sweep optimisation by explicitly solving (43) for the parameters via the initial value problem (47). Moreover, this equation permits one to determine continuously varying parameters θ(t). Such an investigation is conducted by Vialard et al. (2020).
In the next section we describe the formulas required to determine loss gradients with respect to parameters in the case when the parameter vector θ = θ(t) is not dependent on t.
D.2 AUGMENTED STATE VARIABLES FOR CONSTANT PARAMETERS
Updates to t-varying parameter controls θ(t; p, x) are a hot subject of research (see Vialard et al., 2020; Massaroli et al., 2020, for example). The original Neural ODEs (Chen et al., 2018) operated by assuming that θ = θ(t; p, x) ∈ R^M was a constant. It was then possible to recover loss gradients for the components of θ by augmenting the system and running the adjoint method (Chen et al., 2018, §B.2). A similar trick works in our context, with some quite remarkable consequences.
Consider an augmented state vector composed of z, t and θ, whereby the dynamics in (1) becomes
$$ \frac{d}{dt} \begin{pmatrix} z \\ t \\ \theta \end{pmatrix} = \begin{pmatrix} f(t, z, \theta) \\ 1 \\ 0 \end{pmatrix} \tag{29} $$
with z(p) = x, t = p and θ = θ as the initial condition. Working through the invariant imbedding procedure, the forward process (Theorem 1) for z is preserved. But the backward process (Theorem 2) is revised by replacing Λ with the (new) components Λ (z) , Λ (t) and Λ (θ) . The first, Λ (z) , satisfies Theorem 2 independently of Λ (t) and Λ (θ) . Moreover, the assumption that θ = θ(t) is constant in t implies that [∇ z f ](p, x, Ψ(p, x)) = ∇ x Φ(p, x) so we deduce the new backward equation
$$ \nabla_p \Lambda^{(z)}(p, x) = -\big( \nabla_x \Lambda^{(z)}(p, x) \cdot \Phi(p, x) + \nabla_x \Phi(p, x)^T \cdot \Lambda^{(z)}(p, x) + [\nabla_z R](p, x, \Psi(p, x)) \big). \tag{30} $$
The term Λ^{(θ)}(p, x), corresponding to the gradient of the terminal loss with respect to the constant parameters θ, assumes the initial value
$$ \Lambda^{(\theta)}(q, x) = 0, \tag{31} $$
for which the p-evolution (10) becomes
$$ \nabla_p \Lambda^{(\theta)}(p, x) = -\big( \nabla_x \Lambda^{(\theta)}(p, x) \cdot \Phi(p, x) + \Phi_\theta(p, x)^T \cdot \Lambda^{(z)}(p, x) + [\nabla_\theta R](p, x, \Psi(p, x)) \big), \tag{32} $$
dependent on the simultaneous solution of Λ^{(z)}(p, x).
For the t-term let us additionally assume that f (t, z, θ) = f (z, θ) and R(t, z, θ) = R(z, θ). This implies that ∇ t f (p, x, Ψ(p, x)) = ∇ x Φ(p, x) · Φ(p, x). Then Λ (t) (p, x), the gradient of the terminal loss with respect to t = p, assumes the initial value
$$ \Lambda^{(t)}(q, x) = \big\langle \Lambda^{(z)}(q, x),\, \Phi(q, x) \big\rangle \tag{33} $$
and the p-evolution follows
$$ \nabla_p \Lambda^{(t)}(p, x) = -\big[ \nabla_x \Lambda^{(t)}(p, x) \cdot \Phi(p, x) + \big( \nabla_x \Phi(p, x) \cdot \Phi(p, x) \big)^T \cdot \Lambda^{(z)}(p, x) + \nabla_x R(p, x, \Psi(p, x)) \cdot \Phi(p, x) \big], \tag{34} $$
dependent on the simultaneous solution of Λ^{(z)}(p, x). This last identity is particularly exciting: it enables one to compute the gradient of the loss with respect to p, the depth itself. Given the explication of this p-variable in our Invariant Imbedding Networks, one may thus plot this loss as a function of p and isolate the minima for any given training run. Observing the flow of such minima provides great insight into the optimal depth that a network should take.
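The augmented system (29) is straightforward to realise as a single right-hand-side function; the toy dynamics f below is a hypothetical choice used only for illustration.

```python
import numpy as np

def augmented_dynamics(state, f, N, M):
    """Right-hand side of the augmented system (29) for constant parameters.

    state: concatenation [z (N entries), t (1 entry), theta (M entries)]
    f:     dynamics f(t, z, theta) -> R^N
    """
    z, t, theta = state[:N], state[N], state[N + 1:]
    dz = f(t, z, theta)
    return np.concatenate([dz, [1.0], np.zeros(M)])  # dt/dt = 1, dtheta/dt = 0

# Toy usage with elementwise f(t, z, theta) = tanh(theta * z), a hypothetical choice.
N, M = 2, 2
f = lambda t, z, th: np.tanh(th * z)
x, p, theta0 = np.array([1.0, -1.0]), -1.0, np.array([0.5, 0.8])
s0 = np.concatenate([x, [p], theta0])  # initial condition z(p) = x, t = p, theta = theta0
print(augmented_dynamics(s0, f, N, M))
```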
E DERIVING THE INVARIANT IMBEDDING SOLUTION
Here we give proofs of Theorems 1, 2 and 3 which constitute an invariant imbedding solution for a non-linear, vector-valued Bolza problem. Consider the minimisation problem described in §2.1 for the ODE system (1). We maintain the hypotheses stated in Theorem 1 throughout.
E.1 VARIATIONS IN THE CONTROL
We explore the consequences of the calculus of variations (Liberzon, 2012, Ch. 2). This provides a mechanism to find an optimal control θ; that is, θ provides a global minimum J(θ) ≤ J(θ̃) over all piecewise continuous controls θ̃. For ε ∈ R considered close to 0, we consider controls perturbed from the optimal control θ by
$$ \theta(t; p, x) + \varepsilon\, \nu(t; p, x) \tag{35} $$
for an admissible perturbation ν. Following the system (1), the control in (35) determines the state (or trajectory)
$$ z(t; p, x) + \varepsilon\, \eta(t; p, x) + o(\varepsilon), \tag{36} $$
where η is a corresponding perturbation subject to
$$ \eta(p; p, x) = 0, \tag{37} $$
so that (36) agrees with z at t = p.
so that (36) agrees with z at t = p. To minimise the cost, we consider the Taylor expansion of J (θ + εν) with respect to ε. We call the linear term the first variation of J at θ which may be defined by the Gateaux derivative:
$$ \delta\mathcal{J}|_\theta(\nu) := \lim_{\varepsilon \to 0} \frac{\mathcal{J}(\theta + \varepsilon\nu) - \mathcal{J}(\theta)}{\varepsilon} \tag{38} $$
for admissible perturbations ν. For the cost function given in (4), the first variation is equal to
$$ \delta\mathcal{J}|_\theta(\nu) = \int_p^q \big\langle [\nabla_\theta R](t, z, \theta),\, \nu(t; p, x) \big\rangle\, dt + \int_p^q \big\langle [\nabla_z R](t, z, \theta),\, \eta(t; p, x) \big\rangle\, dt + \big\langle [\nabla_z T](z(q; p, x)),\, \eta(q; p, x) \big\rangle. \tag{39} $$
The first-order, necessary condition for optimality (Liberzon, 2012, §1.3.2) is satisfied whenever
$$ \delta\mathcal{J}|_\theta(\nu) = 0. \tag{40} $$
An auxiliary identity for the manipulation of (39) is obtained by evaluating η̇. Differentiating (36) with respect to ε about ε = 0, both directly and via (1), implies that
$$ \dot{\eta}(t; p, x) = [\nabla_z f](t, z, \theta) \cdot \eta(t; p, x) + [\nabla_\theta f](t, z, \theta) \cdot \nu(t; p, x). \tag{41} $$
E.2 LAGRANGE MULTIPLIERS
Introduce λ(t) ∈ R N , an arbitrary vector-valued function for t ∈ [p, q] to be specified forthwith. This is the Lagrange multiplier, called so as its namesake used it to multiply Equation (41) and insert it into the first variation (39). We subsequently obtain
$$ \delta\mathcal{J}|_\theta(\nu) = \int_p^q \big\langle [\nabla_\theta R](t, z, \theta) + [\nabla_\theta f](t, z, \theta)^T \cdot \lambda(t),\, \nu(t; p, x) \big\rangle\, dt + \int_p^q \big\langle \lambda(t),\, [\nabla_z f](t, z, \theta) \cdot \eta(t; p, x) - \dot{\eta}(t; p, x) \big\rangle\, dt + \int_p^q \big\langle [\nabla_z R](t, z, \theta),\, \eta(t; p, x) \big\rangle\, dt + \big\langle [\nabla_z T](z(q; p, x)),\, \eta(q; p, x) \big\rangle. \tag{42} $$
We obtain an optimality condition by making a choice of λ(t) = λ(t; p, x) such that
$$ [\nabla_\theta R](t, z, \theta) + [\nabla_\theta f](t, z, \theta)^T \cdot \lambda(t; p, x) = 0 \tag{43} $$
for all t ∈ [p, q]; cf. (Vialard et al., 2020, (3.3)), where their "p_i" is equal to our −λ. The first variation (39) simplifies accordingly:
$$ \delta\mathcal{J}|_\theta(\nu) = \int_p^q \big\langle \lambda(t; p, x),\, [\nabla_z f](t, z, \theta) \cdot \eta(t; p, x) - \dot{\eta}(t; p, x) \big\rangle\, dt + \int_p^q \big\langle [\nabla_z R](t, z, \theta),\, \eta(t; p, x) \big\rangle\, dt + \big\langle [\nabla_z T](z(q; p, x)),\, \eta(q; p, x) \big\rangle. \tag{44} $$
At this point our method of invariant imbedding diverges from the classical method of Euler and Lagrange. We give a brief aside here to review the Euler-Lagrange equations as the comparison is illuminating; see §E.3. We also direct the reader to (Liberzon, 2012, §3.3-3.4) in which the classical route is explored.
E.3 THE EULER-LAGRANGE EQUATIONS
The Euler-Lagrange equations reveal how the adjoint method of Chen et al. (2018), see also (Vialard et al., 2020, §3.3), corresponds to our invariant imbedding formulation. The derivation follows from integrating (44) by parts with respect to η̇. One derives
$$ \delta\mathcal{J}|_\theta(\nu) = \int_p^q \big\langle [\nabla_z f](t, z, \theta)^T \cdot \lambda(t; p, x) + [\nabla_z R](t, z, \theta) + \dot{\lambda}(t; p, x),\, \eta(t; p, x) \big\rangle\, dt + \big\langle [\nabla_z T](z(q; p, x)) - \lambda(q; p, x),\, \eta(q; p, x) \big\rangle, \tag{45} $$
noting that the initial value of the perturbation is η(p; p, x) = 0 by assumption. We simplify the constant term by imposing the initial condition on λ:
$$ \lambda(q; p, x) = [\nabla_z T](z(q; p, x)). \tag{46} $$
Since (45) equates to 0 by the first order optimality condition (40) for all admissible perturbations η, the fundamental lemma of the calculus of variations implies
$$ \dot{\lambda}(t; p, x) = -\big( [\nabla_z f](t, z, \theta)^T \cdot \lambda(t; p, x) + [\nabla_z R](t, z, \theta) \big). \tag{47} $$
Equations (46) and (47) constitute the continuous analogy to backpropagation known as the "adjoint method" by Chen et al. (2018). Its formulation is thus encoded in (44) and so the forthcoming method of invariant imbedding provides a direct alternative to this backpropagation approach.
E.4 THE INVARIANT IMBEDDING METHOD
For each 0 ≤ ∆ ≤ q − p consider the family of subintervals [p + ∆, q] of [p, q]. Over these intervals, we imbed the systems z(t; p+∆, x) into z(t; p, x) whilst keeping x, the input, invariant. This results in the partial differential equations (51) and (52) in which changes in p are equated to changes with respect to x.
To derive these equations, we first note that over each interval the systems follow (1), which may be expressed as a finite difference equation:
$$ z(t; p, x) = z(t + \Delta; p, x) - f(t, z(t; p, x), \theta(t; p, x))\,\Delta + o(\Delta) \tag{48} $$
for p ≤ t ≤ q − ∆.
We then formally extend z(t; p + ∆, x) to t = p via (48) so that
$$ z(p; p + \Delta, x) = x - f(p, z(p; p + \Delta, x), \theta(p; p + \Delta, x))\,\Delta + o(\Delta). \tag{49} $$
As such, the imbedding of systems over [p + ∆, q] ⊂ [p, q] is explicitly defined by
$$ z(t; p + \Delta, x) = z\big(t; p,\, x - f(p, z(p; p + \Delta, x), \theta(p; p + \Delta, x))\,\Delta + o(\Delta)\big) \tag{50} $$
for all p ≤ t ≤ q. To compute the partial derivative ∇_p z = ∂z/∂p, consider the difference z(t; p + ∆, x) − z(t; p, x), in which x remains invariant. Substituting (50), dividing by ∆ and taking the limit ∆ → 0⁺, one obtains the invariant imbedding identity for the trajectory:
$$ \nabla_p z(t; p, x) = -\nabla_x z(t; p, x) \cdot f(p, x, \theta(p; p, x)). \tag{51} $$
Mutatis mutandis, one derives the invariant imbedding identity for the control:
$$ \nabla_p \theta(t; p, x) = -\nabla_x \theta(t; p, x) \cdot f(p, x, \theta(p; p, x)). \tag{52} $$
These equations express a fundamental translational invariance property of both the optimal control and the corresponding trajectory. This proves the basic identity in Theorem 1.
E.5 INVARIANT IMBEDDING OF THE LAGRANGE MULTIPLIER
We extend the invariant imbedding relations (51) and (52) to the Lagrange multiplier λ(t; p, x). The extension is obtained by taking the gradient of (43) with respect to ∇ p = ∂/∂p and ∇ x . Adding the resulting expressions together and using (51) and (52) to replace the terms ∇ p z(t; p, x) and ∇ p θ(t; p, x) we derive the identity
$$ [\nabla_\theta f](t, z, \theta)^T \cdot \big( \nabla_p \lambda(t; p, x) + \nabla_x \lambda(t; p, x) \cdot f(p, x, \theta(p; p, x)) \big) = 0. \tag{53} $$
For non-trivial dependence of f on the control θ we assume that [∇_θ f](t, z, θ) is not identically 0, implying the invariant imbedding identity for the Lagrange multiplier, λ:
$$ \nabla_p \lambda(t; p, x) = -\nabla_x \lambda(t; p, x) \cdot f(p, x, \theta(p; p, x)). \tag{54} $$
The coefficients f (p, x, θ(p; p, x)) and θ(p; p, x) in (51) and (52) depend on the variables (p, x) alone, independently of t. It is thus sensible to introduce the terminology
$$ \Phi(p, x) := f(p, x, \theta(p; p, x)) \in \mathbb{R}^N; \tag{55} $$
$$ \Psi(p, x) := \theta(p; p, x) \in \mathbb{R}^M. \tag{56} $$
E.6 FIRST ORDER OPTIMALITY WITH INVARIANT IMBEDDING
Here we derive an auxiliary optimality condition from the first variation (44) subject to the invariant imbedding relations (51), (52) and (54). Recall that (44) is equivalent to the adjoint-state equation of Chen et al. (2018), seen by application of the Euler-Lagrange equations; §E.3. The theory we develop here should thus also be seen as analogue to the standard backpropagation method.
The first variation (44) is equal to 0 by (40). One may take its gradient with respect to ∇ p = ∂/∂p using Leibniz' rule and apply η(p; p, x) = 0; moreover, each occurrence of ∇ p z, ∇ p θ and ∇ p λ may be substituted with their respective invariant imbedding relation. Then take the sum of the resulting equation with the gradient of (44) with respect to ∇ x scaled by the term Φ(p, x). One thus obtains the general form of the first-variation auxiliary equation:
$$ \big\langle [\nabla_z T](z(q; p, x)),\, \nabla_p \eta(q; p, x) + \nabla_x \eta(q; p, x) \cdot \Phi(p, x) \big\rangle + \big\langle \lambda(p; p, x),\, \dot{\eta}(p; p, x) \big\rangle + \int_p^q \big\langle [\nabla_z f](t, z, \theta)^T \cdot \lambda(t; p, x) + [\nabla_z R](t, z, \theta),\, \nabla_p \eta(t; p, x) + \nabla_x \eta(t; p, x) \cdot \Phi(p, x) \big\rangle\, dt + \int_p^q \big\langle \lambda(t; p, x),\, \nabla_p \dot{\eta}(t; p, x) + \nabla_x \dot{\eta}(t; p, x) \cdot \Phi(p, x) \big\rangle\, dt = 0. \tag{57} $$
The derivation of (57) requires the multiplication of higher order tensors. This is readily completed using Einstein's summation convention. Note that before specifying our choice of η there is no stipulation that it should adhere to an invariant imbedding rule of the form ∇ p η = −∇ x η · Φ.
E.7 SPECIFYING THE VARIATIONS TO SOLVE FOR THE LAGRANGE MULTIPLIER
The auxiliary optimality condition (57) holds for all admissible perturbations (ν, η). We now make specific choices to derive an initial condition for λ(t; p, x) at t = p. For 1 ≤ j ≤ N define the column vector
$$ \eta_j(t; p, x) := \big( (t - p)\,\delta_{ij} \big)_i. \tag{58} $$
This perturbation satisfies η_j(p; p, x) = 0, as required, and has derivatives given by η̇_j = −∇_p η_j = (δ_{ij})_i and ∇_p η̇_j = 0; whilst all gradients with respect to ∇_x satisfy ∇_x η_j = ∇_x η̇_j = 0. By substitution into (57) we show that
$$ \lambda(p; p, x) = [\nabla_z T](z(q; p, x)) - \int_q^p \big( [\nabla_z f](t, z, \theta)^T \cdot \lambda(t; p, x) + [\nabla_z R](t, z, \theta) \big)\, dt. \tag{59} $$
Note that by the fundamental theorem of calculus, the initial condition in (59) is equivalent to the adjoint equation of Chen et al. (2018).
The remaining results are then derived as follows. Theorem 2 follows directly from (59): take the derivatives ∇_p Λ(p, x) and ∇_x Λ(p, x) from the explicit formula in (59) and substitute the ∇_p terms with the invariant imbedding relations (51), (52) and (54). Letting ∇_x Λ(p, x) operate on Φ(p, x), Theorem 2 follows by summing the two equations. The initial condition at p = q follows immediately from (59). Similarly, Theorem 3 follows by specifying (40) at the endpoint t = q.
F FURTHER EXPERIMENTATION
F.1 INIMNETS AND LEARNING DYNAMICAL SYSTEMS
A significant speed bump in the training of InImNet architectures is the computation of nested Jacobians ∇ x . This problem increases exponentially with depth. We are able to demonstrate successful results with low depth implementations. We also provide elementary mechanisms to circumvent the issue; see §3.2.
Time series problems require vastly deep InImNets, given the training data is spread along a series of times t (or rather depths p). This is the case when applying InImNets to model ODE dynamics directly. But it is the immediate alternative, to replace the Jacobians with numerical gradients as in §3.2, that draws out a more interesting problem. The stability due to the initial divergence of the numerical gradients is related to 'Lyapunov exponents' (Lyapunov, 1992) that measure the growth rates of generic perturbations. The simulation and optimisation of ODEs by neural networks with perturbed initial conditions, in particular InImNets, is a deep problem and the subject of immediate ongoing research.
We nevertheless are able to construct a rigorous architecture to handle time series data-see §C-and demonstrate its functionality on a sufficiently small experiment.
F.2 A PROJECTILE MOTION REGRESSION PROBLEM
InImNets can be trained to model a unique brand of time series observations. We depict this with the example of projectile curves, see Figure 1. The data provided is not a sample along the t-varying shape of a given curve, but the single output of many curves initialised at different positions with the same velocity.
We simulate this via the discrete running cost training paradigm in §C. The system we consider is parameterised by a state variable z = [h, v], denoting vertical height and velocity, such that the t-axis is proportional to horizontal position. The ODE system dynamics is then of the form ż(t) = [v(t), −g], where g, gravity, is a positive scalar constant (which we take to equal 9.81). We choose to integrate p (backward) over the interval [p_min, q] = [0, 1].
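Since this projectile system integrates in closed form, an end-point training set of the form (25) can be generated directly, as in the following sketch; the initial velocity is taken from Figure 1 and the depth grid anticipates the resolution described below, while the function name is a hypothetical choice.

```python
import numpy as np

g, q = 9.81, 1.0

def endpoint_height(p, x):
    # Closed-form solution of zdot = [v, -g] with z(p) = x = [h0, v0],
    # evaluated at the output location t = q (the only observed point, as in (25)).
    h0, v0 = x
    dt = q - p
    return h0 + v0 * dt - 0.5 * g * dt ** 2

# Training set {y_i = y(q; p_i, x)}: the same input x cast at varying depths p_i.
ps = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
x = np.array([0.0, 13.9])   # launch height 0, initial velocity 13.9 (cf. Figure 1)
ys = np.array([endpoint_height(p, x) for p in ps])
print(dict(zip(ps, np.round(ys, 3))))
```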
The architecture we use is the backward loss propagation described in Theorem 2 adapted to the training data by the time-series algorithm in §C. We implement a constant-in-t training function f and further make use of the augmented adjoint routine described in §D.2 to update the parameters of f . We run the experiment with a learning rate of 0.001 over 10 training epochs.
We operate the experiment at a resolution of points p = 0, 0.25, 0.5, 0.75, 1 with a sample size of just four. Using the automatic differentiation method for computation of Jacobians as per Equation (15), this is the maximum resolution that we are able to run without RAM availability being exhausted by growing Jacobian calculations; this is prototypical of the difficulties described in §F.1. As such, the InImNet performs poorly on this task for lack of sufficient p-resolution, and hence training data. Nevertheless, this experiment suffices to prove the concept that InImNets may be used to solve a different genre of time-series problems, with further research demanded on the implementation of their nested derivatives.
We implement this experiment in a Google Colab notebook which is provided as a supplement to this article.
Figure 6: Plotted curves are given as training samples, where the InImNet's task, trained only on end points, is to identify the curves' height h(t; p, x) at t = q = 1. The InImNet outputs are represented by correspondingly coloured triangular markers.
F.3 EXTRAPOLATION OF OUTPUTS BEYOND OPTIMISED DEPTHS
To further demonstrate the ability of InImNets to produce meaningful representations at different layers, we have conducted experiments that extrapolate results for depths p beyond p_min. This considers networks of greater depth, with training occurring at lower layers only. In our experimental architecture of §3.1 the weights are optimised independently: each i-th layer is parameterised by a distinct Φ(p_i, x). Therefore we cannot perform extrapolation in such a setting. To compensate, we have introduced a variant InImNet architecture for both the rotating MNIST and the bouncing balls tasks where Φ is identical for all layers; that is, for each i we have Φ(p_i, x) = Φ(x). This is comparable to the t-invariant parameters used by Chen et al. (2018). In this architecture we take p_min = −3 for bouncing balls and p_min = −4 for the rotating MNIST. Otherwise, the rest of the parameters have not been changed from the original model hyperparameters (see §G).
With this architecture, we observe competitive performance for rotating MNIST (albeit worse than the original architecture varying Φ over the layers in §4.1.1 and §G.1), as well as the impact of going beyond p_min. Figure 5 shows the average performance depending on the frame sequence number. One can see that the accuracy decreases for higher layers. However, we can make quantitative observations about the rate of this loss change in different directions. For example, accuracy is lost at a slower rate moving deeper into the network (p = −5) rather than shallower (p = −3).
Examples of extrapolated sequences from the testing set are given in Figure 7. It can be observed that extrapolation in higher layers (such as −8) is accompanied by a decrease in sample diversity across the sequence. For the bouncing balls task, we see that coupling of the weights makes it more challenging to predict the balls' dynamics with the same hyperparameters as in §G.2, as expected. However, we demonstrate here that we can quantify and explain this discrepancy, as the model gives us insight into what it has learnt. In Figure 7, while not successfully modelling the dynamics, the model converges to the 'best' strategy of erasing the sequence. Once we increase the layers, the same effect results in the reversal of the original sequence.
F.4 EXPERIMENTS WITH CONVOLUTIONAL ARCHITECTURES
We demonstrate the ability of InImNet architectures to be composed of larger convolutional layers, allowing for end-to-end training of an InImNet model. We use the autoencoder architecture of Vialard et al. (2020) to define an InImNet layer and repeat the experiments modelling bouncing balls as outlined in §4.1.2. For explicit details on the architecture of this model, see §G.3.
We report performance results for p_min = {−1, −2, −3} in Figure 8 and give sample demonstrations for different layer numbers in Figure 9. The models are optimised depending on their output at level p_min = −4 and p_min = −3, respectively. One is able to assess performance at extrapolated depths p ≠ p_min around these.
F.5 ON THE APPROXIMATION OF THE INPUT GRADIENTS
Equation (21) outlines the approximation we implement, allowing us to avoid high memory costs during training whilst needing to evaluate a growing number of nested Jacobians. The aforementioned costs are associated with the recurrent computation of ∇_x z(t; p_i, x), which would require the computation of an i-th order gradient tree; this hold-up is also discussed in §F.1. While it is still possible to use the exact solution, via automatic differentiation and gradient graphs, for some of the smaller problems with small p_min, we use the approximation from Equation (21) in all discrete architecture experiments with high performance success and with acceptable memory costs.
G.1 ROTATING MNIST
We replicate the experimental setting from Vialard et al. (2020); the experiments have been conducted in Tensorflow 2.6.0. The autoencoder's architecture, reimplemented in Tensorflow from the official pytorch code of Vialard et al. (2020), and parameters, including the batch size, replicate the same experimental set-up. We follow the discrete InImNet architecture described in Section 3.1, and we minimise the mean squared error loss between the ground truth and the outputs decoded at the layer p_min using backpropagation and gradient descent based optimisation. To enable the time series prediction, we append the time value to the latent representation of the autoencoder. The multilayer perceptron models use dropout regularisation for every layer except the last. The hyperparameters of the rotating MNIST model are listed in Table 2.
G.2 BOUNCING BALLS
The architecture of the autoencoder and the experimental setting reimplement, in Tensorflow 2.6.0, those from the official code of Vialard et al. (2020); the rest of the considerations are identical to the ones given in §G.1. As in the previous section, we concatenate the time parameter with the latent representation of the first three images in the autoencoder to obtain the time series imbedding. The hyperparameters of the bouncing balls model are given in Table 3.
G.3 CONVOLUTIONAL EXPERIMENTS
In our Bouncing-Balls-with-convolutional-architecture experiment, §F.4, every InImNet layer reimplements the fully-convolutional autoencoder from Vialard et al. (2020) in Tensorflow 2.6.0, with the exception of omitting the final layer's sigmoid (see the autoencoder description below). The dimensionality reduction is omitted as the input x and the outputs z(q; p, x) are 32 × 32 images. Following Vialard et al. (2020), we concatenate the first three images into the three channels of the autoencoder input, and add the fourth channel filled with the time value shaped 32 × 32. We obtain the prediction by taking the first channel of the autoencoder output. As in the previous experiments, we optimise the mean squared error loss at the layer p min and use backpropagation to optimise the parameters of all InImNet layers.
The hyperparameters for the described convolutional model are listed in Table 4.
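A sketch assembling the layers listed at the end of this section into a Keras model might read as follows. The 64-filter encoder layer is a hypothetical shape-matching assumption inserted to make the encoder and decoder resolutions line up; the remaining layers follow the listing, and the final sigmoid is omitted as stated above.

```python
import tensorflow as tf
from tensorflow.keras import layers

autoencoder = tf.keras.Sequential([
    layers.InputLayer(input_shape=(32, 32, 4)),                      # 3 frames + time channel
    layers.Conv2D(16, (5, 5), strides=2, padding='same'),            # listed: -> 16x16
    layers.Conv2D(32, (5, 5), strides=2, padding='same'),            # listed: -> 8x8
    layers.Conv2D(64, (5, 5), strides=2, padding='same'),            # hypothetical: -> 4x4
    layers.Conv2DTranspose(128, (3, 3), strides=2, padding='same'),  # listed: -> 8x8
    layers.Conv2DTranspose(64, (5, 5), strides=2, padding='same'),   # listed: -> 16x16
    layers.Conv2DTranspose(32, (5, 5), strides=2, padding='same'),   # listed: -> 32x32
    layers.Conv2DTranspose(4, (5, 5), padding='same'),               # listed: no sigmoid
])
print(autoencoder.output_shape)  # (None, 32, 32, 4); the prediction is channel 0
```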
Figure 1: Plotted are heights h(t; p, x) vs. t. Left: fixed p = 0.00, varying initial velocity v (v = 13.61, 11.55, 7.22, 13.90). Right: fixed initial velocity v = 13.900, varying p.
Figure 2: Top: A Neural ODE constitutes a two-point boundary value problem over t ∈ [p, q].
We replicate the experimental setting used by Yıldız et al. (2019) and Vialard et al. (2020). This gives a point of comparison between our proposed InImNet, for various depth architectures, as described in §3.1.
Figure 3: Left: samples from the 'Rotating MNIST' experiment with p_min = −4 and a two-layer MLP. Right: samples from the 'Bouncing Balls' experiment with p_min = −3 and a three-layer MLP. See §G.1 and §G.2 of the appendix for architectural details.
4.1.2 BOUNCING BALLS
As in the previous section, we replicate the experimental setting of Yıldız et al. (2019) and Vialard et al. (2020). We use the experimental architecture given in §3.1 and we list the hyperparameters used in §G.2 of the appendix.
Figure 4: Bouncing balls experiment. Left: reported MSE for the proposed InImNet and state-of-the-art methods; results from other methods are taken from Yıldız et al. (2019) and Vialard et al. (2020). Right: average time consumption per epoch.
For this 'Bouncing Balls' experiment we note that the MSE of the proposed InImNet and the state-of-the-art methods are comparable, while using a more computationally efficient model. We measured the time per epoch whilst using a public configuration of Google Colab for the (best-performing) up-down method of Vialard et al. (2020) against InImNet (with p_min = −3; three-layer MLP). We set the batch size to the same value of 25 as given in the configuration of the official implementation in Vialard et al. (2020). While the proposed InImNet requires 153 seconds per epoch, the method as described by Vialard et al. (2020) took 516 seconds to finish one epoch.
Figure 5: Extrapolation beyond and before the chosen training depth at p_min = −4.
Figure 7: Extrapolated outputs for the Rotating MNIST (above) and Bouncing Balls experiments.
Figure 8: Bouncing balls experiment with the convolutional architecture. Left: reported MSE (InImNet, p_min = −1: 0.0140; p_min = −2: 0.0123; p_min = −3: 0.0122; Vialard et al. (2020): 0.0146); results from other methods are taken from Yıldız et al. (2019) and Vialard et al. (2020). Right: average time consumption per epoch.
Figure 9: Bouncing balls experiment using a convolutional architecture; details of the architecture are given in Section G.3. Panels: Testing (p_min = −3) and Training (p_min = …).
Table 1: Rotating MNIST: reported MSE for the proposed InImNet (where n is the number of MLP layers) and state-of-the-art methods; results from other methods are taken from Yıldız et al. (2019) and Vialard et al. (2020). (*) 8.6363 s/epoch. (**) The standard deviation given is more than half the mean value as stated in Vialard et al. (2020).

Existing Models                              | InImNet
Method                 | MSE ±σ              | p_min | n | MSE ±σ          | Time [s/epoch]
GPPVAE-DIS             | 0.0309 ± 0.00002    | −1    | 2 | 0.0156 ± 0.0008 | 3.3459
GPPVAE-JOINT           | 0.0288 ± 0.00005    | −2    | 2 | 0.0130 ± 0.0005 | 4.1624
ODE²VAE                | 0.0194 ± 0.00006    | −3    | 2 | 0.0126 ± 0.0007 | 5.0060
ODE²VAE-KL             | 0.0184 ± 0.0003     | −4    | 2 | 0.0125 ± 0.0004 | 5.5806
Vialard et al. (2020)* | 0.0122 ± 0.0064**   | −1    | 3 | 0.0176 ± 0.0010 | 3.1504
                       |                     | −2    | 3 | 0.0129 ± 0.0008 | 4.0412
                       |                     | −3    | 3 | 0.0125 ± 0.0003 | 4.886
                       |                     | −4    | 3 | 0.0126 ± 0.0004 | 5.8521
A. Lyapunov. The general problem of the stability of motion. International Journal of Control, 55(3):531-534, 1992. doi: 10.1080/00207179208934253. URL https://doi.org/10.1080/00207179208934253.

S. Massaroli, M. Poli, J. Park, A. Yamashita, and H. Asama. Dissecting Neural ODEs. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, pp. 3952-3963. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/293835c2cc75b585649498ee74b395f5-Paper.pdf.

C. Maynard and M. Scott. Invariant imbedding of linear partial differential equations via generalized Riccati transformations. J. Math. Anal. Appl., 36(2):432-459, 1971. ISSN 0022-247X. doi: 10.1016/0022-247X(71)90011-4. URL https://doi.org/10.1016/0022-247X(71)90011-4.

G. Meyer. Initial Value Methods for Boundary Value Problems: Theory and Application of Invariant Imbedding, volume 100 of Mathematics in Science and Engineering. Elsevier, 1973. doi: 10.1016/S0076-5392(08)62980-X. URL https://doi.org/10.1016/S0076-5392(08)62980-X.

C. Mobley. Light and Water. Academic Press, 1994. URL http://www.oceanopticsbook.info/view/introduction/overview.

L. Ruthotto and E. Haber. Deep Neural Networks Motivated by Partial Differential Equations. J. Math. Imaging Vis., 62:352-364, 2020. doi: 10.1007/s10851-019-00903-1. URL https://doi.org/10.1007/s10851-019-00903-1.

K. Spingarn. Some numerical results using Kalaba's new approach to optimal control and filtering. IEEE Trans. Automat. Contr., 17(5):713-715, 1972. ISSN 0018-9286. doi: 10.1109/TAC.1972.1100124. URL https://doi.org/10.1109/TAC.1972.1100124.

F.-X. Vialard, R. Kwitt, S. Wei, and M. Niethammer. A Shooting Formulation of Deep Learning. Advances in Neural Information Processing Systems, 33, 2020.

Ç. Yıldız, M. Heinonen, and H. Lähdesmäki. ODE²VAE: Deep generative second order ODEs with Bayesian neural networks. Advances in Neural Information Processing Systems, 32:13412-13421, 2019. URL https://arxiv.org/pdf/1905.10994.
2All experiments have been run in a Google Colab GPU environment. Similar to Vialard et al. (2020), we denote as the inflation factor the size ratio between the intermediate and input MLP layers.Hyperparameter
Value
Optimiser
Adam
Batch size
25
Initial learning rate
0.001
Epochs
500
Dropout value
0.3
Inflation factor
2
Dimension of latent space 20
Learning rate schedule
0.5 [decaying*]
Table 2 :
2Hyperparameters for the rotating MNIST experiment. (*) The learning rate decays exponentially at steps of 30 epochs.
Table 3: Hyperparameters for the bouncing balls experiment. (*) The learning rate decays exponentially at steps of 30 epochs.
Table 4: Hyperparameters for the convolutional bouncing balls experiment.
We calculate the Jacobians by flattening the values of x and z(q; p, x) into a vector of 32 × 32 × 4 values. The overall structure of the autoencoder as implemented in Tensorflow Keras is as follows.

Encoder:
Conv2D(16, (5, 5), strides=2, padding='same')
Conv2D(32, (5, 5), strides=2, padding='same')
…

Decoder:
Conv2DTranspose(128, (3, 3), strides=2, padding='same')
Conv2DTranspose(64, (5, 5), strides=2, padding='same')
Conv2DTranspose(32, (5, 5), strides=2, padding='same')
Conv2DTranspose(4, (5, 5), padding='same')

ACKNOWLEDGMENTS
The first author would like to extend sincere thanks to Prof. Jacq Christmas and Prof. François-Xavier Vialard for their engagement on this topic.
A. Agarwal and S. Saraf. Invariant embedding: A new method of solving a system of nonlinear boundary-value differential equations. J. Math. Anal. Appl., 72(2):524-532, 1979. ISSN 0022-247X. doi: 10.1016/0022-247X(79)90245-2. URL https://doi.org/10.1016/0022-247X(79)90245-2.

V. Ambarzumian. Diffuse reflection of light by a foggy medium. C. R. (Doklady) Acad. Sci. URSS (N.S.), 38:229-232, 1943.

R. Bellman and G. Wing. An Introduction to Invariant Imbedding, volume 8. Society for Industrial and Applied Mathematics (SIAM), 1975. ISBN 0-89871-304-8. doi: 10.1137/1.9781611971279. URL https://doi.org/10.1137/1.9781611971279.

R. Bellman, H. Kagiwada, R. Kalaba, and R. Sridhar. Invariant Imbedding and Nonlinear Filtering Theory. J. Astronaut. Sci., 13:110-115, 1966. ISSN 0021-9142.

A. Bensoussan, Y. Li, D. P. C. Nguyen, M.-B. Tran, S. C. P. Yam, and X. Zhou. Machine Learning and Control Theory. arXiv preprint, 2020. URL http://arxiv.org/abs/2006.05604.

T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud. Neural Ordinary Differential Equations. Advances in Neural Information Processing Systems 31, pp. 6571-6583, 2018. URL http://papers.nips.cc/paper/7892-neural-ordinary-differential-equations.

J. Davis, K. Choromanski, J. Varley, H. Lee, J.-J. Slotine, V. Likhosherstov, A. Weller, A. Makadia, and V. Sindhwani. Time Dependence in Non-Autonomous Neural ODEs. arXiv preprint, 2020. URL http://arxiv.org/abs/2005.01906.

E. Dupont, A. Doucet, and Y. W. Teh. Augmented Neural ODEs. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 3140-3150. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/8577-augmented-neural-odes.pdf.

L. Euler. Elementa Calculi Variationum. Novi Comment. Acad. Sci. Imp. Petropol., 10:51-93, 1766. URL https://scholarlycommons.pacific.edu/euler-works/296/.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2015.

M. Hopkins and S. Furber. Accuracy and Efficiency in Fixed-Point Neural ODE Solvers. Neural Comput., 27(10):2148-2182, 2015. doi: 10.1162/NECO_a_00772. URL https://doi.org/10.1162/NECO_a_00772.

R. Kalaba and R. Sridhar. Invariant Imbedding and Optimal Control Theory. J. Optim. Theory Appl., 4:343-351, 1969. ISSN 0022-3239. doi: 10.1007/BF00927676. URL https://doi.org/10.1007/BF00927676.

D. Liberzon. Calculus of Variations and Optimal Control Theory: A Concise Introduction. Princeton University Press, 2012. ISBN 9780691151878.

Y. Lu, A. Zhong, Q. Li, and B. Dong. Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations. In J. Dy and A. Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 3276-3285. PMLR, 10-15 Jul 2018. URL http://proceedings.mlr.press/v80/lu18d.html.
246,607,791 | LEARNING REPRESENTATION FROM NEURAL FISHER KERNEL WITH LOW-RANK APPROXIMATION | In this paper, we study the representation of neural networks from the view of kernels. We first define the Neural Fisher Kernel (NFK), which is the Fisher Kernel (Jaakkola and Haussler, 1998) applied to neural networks. We show that NFK can be computed for both supervised and unsupervised learning models, which can serve as a unified tool for representation extraction. Furthermore, we show that practical NFKs exhibit low-rank structures. We then propose an efficient algorithm that computes a low rank approximation of NFK, which scales to large datasets and networks. We show that the low-rank approximation of NFKs derived from unsupervised generative models and supervised learning models gives rise to high-quality compact representations of data, achieving competitive results on a variety of machine learning tasks.Published as a conference paper at ICLR 2022 linear feature space of the kernel. Kernel machines provide drastically different representations than layer activations, where the knowledge of a neural network is instantiated by the induced kernel function over data points.In this work, we propose to make use of the linear feature space of the kernel, associated with the pre-trained neural network model, as the data representation of interest. To this end, we made novel contributions on both theoretical and empirical side, as summarized below.• We propose Neural Fisher Kernel (NFK) as a unified and principled kernel formulation for neural networks models in both supervised learning and unsupervised learning settings.• We introduce a highly efficient and scalable algorithm for low-rank kernel approximation of NFK, which allows us to obtain a compact yet informative feature embedding as the data representation.• We validate the effectiveness of proposed approach from NFK in unsupervised learning, semisupervised learning and supervised learning settings, showing that our method enjoys superior sample efficiency and representation quality.PRELIMINARY AND RELATED WORKSIn this section, we present technical background and formalize the motivation. We start by introducing the notion of data representation from the perspective of kernel methods, then introduce the connections between neural network models and kernel methods.Notations. Throughout this paper, we consider dataset with N | [
208857409,
3162051,
13757156,
2723173,
204838340,
829159,
11758569,
11383178,
13123084,
4009713,
221836662,
13995862,
213896662,
3708505,
52055130,
13619197
] | LEARNING REPRESENTATION FROM NEURAL FISHER KERNEL WITH LOW-RANK APPROXIMATION
Ruixiang Zhang [email protected]
Université de Montréal
Apple Inc
Mila
Shuangfei Zhai [email protected]
Université de Montréal
Apple Inc
Mila
Etai Littwin [email protected]
Université de Montréal
Apple Inc
Mila
Josh Susskind [email protected]
Université de Montréal
Apple Inc
Mila
LEARNING REPRESENTATION FROM NEURAL FISHER KERNEL WITH LOW-RANK APPROXIMATION
Published as a conference paper at ICLR 2022
In this paper, we study the representation of neural networks from the view of kernels. We first define the Neural Fisher Kernel (NFK), which is the Fisher Kernel (Jaakkola and Haussler, 1998) applied to neural networks. We show that NFK can be computed for both supervised and unsupervised learning models, which can serve as a unified tool for representation extraction. Furthermore, we show that practical NFKs exhibit low-rank structures. We then propose an efficient algorithm that computes a low-rank approximation of NFK, which scales to large datasets and networks. We show that the low-rank approximation of NFKs derived from unsupervised generative models and supervised learning models gives rise to high-quality compact representations of data, achieving competitive results on a variety of machine learning tasks.
INTRODUCTION
Modern deep learning systems rely on finding good representations of data. For supervised learning models with feed-forward neural networks, representations can naturally be equated with the activations of each layer. Empirically, the community has developed a set of effective heuristics for representation extraction given a trained network. For example, ResNets (He et al., 2016) trained on ImageNet classification yield intermediate layer representations that benefit downstream tasks such as object detection and semantic segmentation. The logits layer of a trained neural network also captures rich correlations across classes, which can be distilled into a weaker model (knowledge distillation) (Hinton et al., 2015).
Despite the empirical prevalence of using intermediate layer activations as data representations, this is far from an optimal approach to representation extraction. For supervised learning models, it remains a manual procedure that relies on trial and error to select the optimal layer from a pre-trained model to facilitate transfer learning. Similar observations also apply to unsupervised learning models including GANs (Goodfellow et al., 2014) and VAEs (Kingma and Welling, 2014): recent studies (Chen et al., 2020a) show that the quality of representations in generative models heavily depends on the choice of layer from which activations are extracted as features. Furthermore, although GANs and VAEs are known to generate high-quality samples from the data distribution, there is no strong evidence that they encode explicit layerwise representations of similar quality to those in supervised learning models, which implies that there is no natural way to explicitly extract a representation from intermediate layer activations of generative models pre-trained without supervision. Additionally, layer activations alone do not suffice to realize the full power of the learned representations hidden in neural network models, as shown in recent work (Mu et al., 2020) where incorporating additional gradient-based features into the representation leads to substantial improvements over solely using activation-based features.
In light of these constraints, we are interested in the question: is there a principled method for representation extraction beyond layer activations? In this work, we turn to the kernel view of neural networks. Recently, initiated by the Neural Tangent Kernel (NTK) (Jacot et al., 2018), there has been growing interest in the kernel interpretation of neural networks. It was shown that neural networks in the infinite-width regime reduce to kernel regression with the induced NTK.
Our key intuition is that the kernel machine induced by the neural network provides a powerful and principled way of investigating the non-linear feature transformation in neural networks using the linear feature space of the kernel. Kernel machines provide drastically different representations than layer activations: the knowledge of a neural network is instantiated by the induced kernel function over data points.

In this work, we propose to make use of the linear feature space of the kernel, associated with the pre-trained neural network model, as the data representation of interest. To this end, we make novel contributions on both the theoretical and the empirical side, as summarized below.

• We propose the Neural Fisher Kernel (NFK) as a unified and principled kernel formulation for neural network models in both supervised and unsupervised learning settings.
• We introduce a highly efficient and scalable algorithm for low-rank kernel approximation of NFK, which allows us to obtain a compact yet informative feature embedding as the data representation.
• We validate the effectiveness of the proposed NFK approach in unsupervised, semi-supervised and supervised learning settings, showing that our method enjoys superior sample efficiency and representation quality.

PRELIMINARY AND RELATED WORKS

In this section, we present technical background and formalize the motivation. We start by introducing the notion of data representation from the perspective of kernel methods, then introduce the connections between neural network models and kernel methods.

Notations. Throughout this paper, we consider a dataset with $N$ examples $\{(x_i, y_i)\}_{i=1}^N$.
Kernel Methods. Given a positive definite kernel $K$ with reproducing kernel Hilbert space (RKHS) $\mathcal{H}$, kernel ridge regression (KRR) estimates the function

$$\hat{f} = \arg\min_{f \in \mathcal{H}} \; \frac{1}{N}\sum_{i=1}^{N} \big(f(x_i) - y_i\big)^2 + \lambda \|f\|_{\mathcal{H}}^2.$$

Neural Networks and Kernel Methods. A long line of works (Neal, 1996; Williams, 1996; Roux and Bengio, 2007; Hazan and Jaakkola, 2015; Lee et al., 2018; de G. Matthews et al., 2018; Jacot et al., 2018; Chen and Xu, 2021; Geifman et al., 2020; Belkin et al., 2018; Ghorbani et al., 2020) has studied kernel formulations associated with neural networks; most of them correspond either to fixed kernels (e.g., the Laplace kernel, the Gaussian kernel) or to networks where only the last layer is trained, e.g., the Conjugate Kernel (CK) (Daniely et al., 2016), also called the NNGP kernel (Lee et al., 2018). On the other hand, the Neural Tangent Kernel (NTK) (Jacot et al., 2018) is a fundamentally different formulation corresponding to training the entire infinitely wide neural network model. Let $f(\theta; x)$ denote a neural network function with parameters $\theta$; the empirical NTK is defined as $K_{\mathrm{ntk}}(x, z) = \langle \nabla_\theta f(\theta; x), \nabla_\theta f(\theta; z) \rangle$. Jacot et al. (2018) and Lee et al. (2018) showed that, under the so-called NTK parametrization and other proper assumptions, the function $f(x; \theta)$ learned by training the neural network with gradient descent is equivalent to the function estimated via ridgeless KRR using $K_{\mathrm{ntk}}$ as the kernel. For finite-width neural networks, by taking a first-order Taylor expansion of the function $f$ around $\theta$, kernel regression under $K_{\mathrm{ntk}}$ can be seen as a linearized neural network model at parameter $\theta$, suggesting that pre-trained neural network models can also be studied and approximated from the perspective of kernel methods.

Figure 1: Overview of our proposed approach. Given a pre-trained neural network model, which can be either an unsupervised generative model $p_\theta(x)$ (e.g., GANs, VAEs) or a supervised learning model $p_\theta(y|x)$, we aim to extract a compact yet informative representation from it. By reinterpreting various families of models as energy-based models (EBM), we introduce the Neural Fisher Kernel (NFK) $K_{\mathrm{nfk}}$ as a principled and unified kernel formulation for neural network models (Section 3.1). We introduce a highly efficient and scalable kernel approximation algorithm (Section 3.2) to obtain the low-dimensional feature embedding $e_x$, which serves as the extracted data representation from NFK.
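To make the empirical NTK above concrete, here is a minimal, self-contained JAX sketch (not from the paper; the toy `mlp` and `init_params` are illustrative) that evaluates $K_{\mathrm{ntk}}(x, z) = \langle \nabla_\theta f(\theta; x), \nabla_\theta f(\theta; z) \rangle$ for a scalar-output network:

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Toy two-layer network with scalar output.
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return (h @ w2 + b2).squeeze()

def init_params(key, d_in=4, d_hid=8):
    k1, k2 = jax.random.split(key)
    return (jax.random.normal(k1, (d_in, d_hid)), jnp.zeros(d_hid),
            jax.random.normal(k2, (d_hid, 1)), jnp.zeros(1))

def ntk_entry(params, x, z):
    # Parameter gradients at the two inputs, then their inner product.
    gx = jax.grad(mlp)(params, x)
    gz = jax.grad(mlp)(params, z)
    leaves_x, _ = jax.tree_util.tree_flatten(gx)
    leaves_z, _ = jax.tree_util.tree_flatten(gz)
    return sum(jnp.vdot(a, b) for a, b in zip(leaves_x, leaves_z))

params = init_params(jax.random.PRNGKey(0))
print(ntk_entry(params, jnp.ones(4), jnp.arange(4.0)))
```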
Fisher Kernel. The Fisher kernel (Jaakkola and Haussler, 1998) maps data from the data space $\mathcal{X}$ to the parameter space $\Theta$, yielding representations that are linearized. As proposed in (Jaakkola and Haussler, 1998; Perronnin and Dance, 2007), the Fisher vector $V_x$ can be used as the feature representation derived from probabilistic generative models, and was shown to be superior to hand-crafted visual descriptors in a variety of computer vision tasks.
Generative Models. In this work, we consider a variety of representative deep generative models, including generative adversarial networks (GANs) (Goodfellow et al., 2014) and variational autoencoders (VAEs) (Kingma and Welling, 2014), as well as normalizing flow models (Dinh et al., 2015) and auto-regressive models (van den Oord et al., 2016). Please refer to (Salakhutdinov, 2014) for more technical details on generative models.
LEARNING REPRESENTATION FROM NEURAL FISHER KERNEL
We aim to propose a general and efficient method for extracting high-quality representations from pre-trained neural network models. As formalized in the previous section, the outline of our approach is as follows: given a pre-trained neural network model $f(x; \theta)$ (either an unsupervised generative model $p(x; \theta)$ or a supervised learning model $p(y \mid x; \theta)$) with pre-trained weights $\theta$, we adopt the kernel formulation $K_f$ induced by the model $f(x; \theta)$ and use the associated linear feature embedding $\phi(x)$ of the kernel $K_f$ as the feature representation of data $x$. We present an overview of our approach in Figure 1.
At this point, however, there exist both theoretical difficulties and practical challenges that impede a straightforward application of our proposed approach. On the theoretical side, the NTK theory has only been developed in the supervised learning setting, and its extension to unsupervised learning is not yet established. Though the Fisher kernel is immediately applicable in the unsupervised setting, deriving the Fisher vector from a supervised learning model $p(y \mid x; \theta)$ is tricky, as it requires the log-density of the marginal distribution $p_\theta(x)$ from $p(y \mid x; \theta)$. Note that this is a drastically different problem from previous works (Achille et al., 2019) where the Fisher kernel is applied to the joint distribution $p(x, y)$. On the practical side, the dimensionality of the feature space associated with NTK or FK is the same as the number of model parameters $|\theta|$, which poses unmanageably high time and space complexity for modern large-scale neural network models. Additionally, the size of the NTK scales quadratically with the number of classes in the multi-class supervised learning setting, raising further efficiency concerns.
To address the kernel formulation issue, we propose the Neural Fisher Kernel (NFK) in Sec. 3.1 as a unified kernel for both supervised and unsupervised learning models. To tackle the efficiency challenge, we investigate the structural properties of the proposed NFK and propose a highly scalable low-rank kernel approximation algorithm in Sec. 3.2 to extract a compact low-dimensional feature representation from NFK.
NEURAL FISHER KERNEL
In this section, we propose the Neural Fisher Kernel (NFK) as a principled and general kernel formulation for neural network models. The key intuition is that we can extend the classical Fisher kernel to unify the derivation of Fisher vectors from supervised and unsupervised learning models by using the Energy-based Model (EBM) formulation.
UNSUPERVISED NFK
We consider unsupervised probabilistic generative models $p_\theta(x) = p(x; \theta)$ here. Our proposed NFK formulation applies to all generative models with tractable evaluation (or approximation) of $\nabla_\theta \log p_\theta(x)$.
GANs. We consider the EBM formulation of GANs (Dai et al., 2017; Zhai et al., 2019; Che et al., 2020). Given a pre-trained GAN, we use $D(x; \theta)$ to denote the output of the discriminator $D$, and $G(h)$ to denote the output of the generator $G$ given latent code $h \sim p(h)$. As a brief recap, GANs can be interpreted as an implementation of EBM training with a variational distribution, with energy function $E(x; \theta) = -D(x; \theta)$; please refer to (Zhai et al., 2019; Che et al., 2020) for more details. Thus the GAN model defines the unnormalized density $p_\theta(x) \propto e^{-E(x; \theta)}$. Following (Zhai et al., 2019), we can then derive the Fisher kernel $K_{\mathrm{nfk}}$ and Fisher vector from standard GANs as:

$$K_{\mathrm{nfk}}(x, z) = \langle V_x, V_z \rangle, \quad V_x = \mathrm{diag}(I)^{-\frac{1}{2}}\, U_x, \quad U_x = \nabla_\theta D(x; \theta) - \mathbb{E}_{h \sim p(h)}\, \nabla_\theta D(G(h); \theta), \tag{1}$$

where $x, z \in \mathcal{X}$ and $I = \mathbb{E}_{h \sim p(h)}\big[U_{G(h)} U_{G(h)}^\top\big]$. Note that we use a diagonal approximation of the FIM throughout this work for scalability.
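As a hedged illustration of Eq. (1) (not the paper's code), the sketch below computes the unnormalized score $U_x$ for a GAN discriminator in JAX, approximating the expectation over $p(h)$ with generator samples; `disc`, `gen`, and `d_params` are placeholder names. The Fisher vector $V_x$ would additionally apply the diagonal-FIM normalization $\mathrm{diag}(I)^{-1/2}$.

```python
import jax
import jax.numpy as jnp

def gan_fisher_score(disc, d_params, gen, x, hs):
    # grad_theta D(x; theta) at the real example x.
    g_x = jax.grad(lambda p: disc(p, x))(d_params)
    # Monte-Carlo estimate of E_{h~p(h)} grad_theta D(G(h); theta).
    g_fake = [jax.grad(lambda p: disc(p, gen(h)))(d_params) for h in hs]
    mean_fake = jax.tree_util.tree_map(
        lambda *gs: jnp.mean(jnp.stack(gs), axis=0), *g_fake)
    # U_x = grad D(x) - E_h[grad D(G(h))], leafwise over the parameter pytree.
    return jax.tree_util.tree_map(lambda a, b: a - b, g_x, mean_fake)
```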
VAEs. Given a VAE pre-trained by maximizing the variational lower bound $\mathcal{L}_{\mathrm{ELBO}}(x) \equiv \mathbb{E}_{q(h|x)} \log \frac{p(x,h)}{q(h|x)}$, we can approximate the marginal log-likelihood $\log p_\theta(x)$ by evaluating $\mathcal{L}_{\mathrm{ELBO}}(x)$ via Monte-Carlo estimation or importance sampling techniques (Burda et al., 2016). Thus we have the NFK formulation

$$K_{\mathrm{nfk}}(x, z) = \langle V_x, V_z \rangle, \quad V_x = \mathrm{diag}(I)^{-\frac{1}{2}}\, U_x, \quad U_x \approx \nabla_\theta \mathcal{L}_{\mathrm{ELBO}}(x), \tag{2}$$

where $x, z \in \mathcal{X}$ and $I = \mathbb{E}_{x \sim p_\theta(x)}\big[U_x U_x^\top\big]$.
Flow-based Models, Auto-Regressive Models. For generative models with explicit, exact density modeling of $p_\theta(x)$, we can simply apply the classical Fisher kernel formulation from Sec. 2.
SUPERVISED NFK
In the supervised learning setting, we consider conditional probabilistic models $p_\theta(y \mid x) = p(y \mid x; \theta)$. In particular, we focus on classification problems where the conditional probability is parameterized by a softmax over the logits $f(x; \theta)$: $p_\theta(y \mid x) = \exp(f_y(x; \theta)) / \sum_{y'} \exp(f_{y'}(x; \theta))$, where $y$ is a discrete label and $f_y(x; \theta)$ denotes the $y$-th logit. We then borrow the idea from JEM (Grathwohl et al., 2020) and write a joint energy function over $(x, y)$ as $E(x, y; \theta) = -f_y(x; \theta)$. It is easy to see that this joint energy yields exactly the same conditional probability, while at the same time leading to a free energy function:
Algorithm 1 Baseline method: compute low-rank NFK feature embedding
Input: dataset D; pre-trained NN model f(x; θ); NFK feature dimensionality k; test data input x*
Output: low-rank NFK feature embedding e_nfk(x*)
1: compute the Fisher vector matrix V ∈ R^{N×|θ|} on D
2: compute the Gram matrix K = V V^⊤ ∈ R^{N×N}
3: eigen-decompose K = Φ diag(Λ) Φ^⊤
4: keep the top-k eigenpairs
5: obtain e_nfk(x*) ∈ R^k via Eq. 5 and Eq. 4
$$E(x; \theta) = -\log \sum_y \exp(f_y(x; \theta)).$$

It essentially reframes a conditional distribution over $y$ given $x$ as an induced unconditional distribution over $x$, while maintaining the same conditional probability $p_\theta(y \mid x)$. This allows us to write the NFK formulation as:
$$K_{\mathrm{nfk}}(x, z) = \langle V_x, V_z \rangle, \quad V_x = \mathrm{diag}(I)^{-\frac{1}{2}}\, U_x, \quad U_x = \sum_y p_\theta(y \mid x)\, \nabla_\theta f_y(x; \theta) - \mathbb{E}_{x' \sim p_\theta(x')} \sum_y p_\theta(y \mid x')\, \nabla_\theta f_y(x'; \theta), \tag{3}$$
where $I = \mathbb{E}_{x \sim p_\theta(x)}\big[U_x U_x^\top\big]$, and $p_\theta(x)$ is the normalized density corresponding to the free energy $E(x; \theta)$, which can be sampled from via Markov chain Monte Carlo (MCMC). In this work, we use the empirical data distribution as a practical approximation.
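One convenient identity for implementing Eq. (3): by the chain rule, $\sum_y p_\theta(y \mid x)\, \nabla_\theta f_y(x; \theta) = \nabla_\theta \log \sum_y \exp(f_y(x; \theta))$, i.e., the gradient of the negative free energy, so the probability-weighted sum over classes costs a single backward pass. Below is a hedged JAX sketch (all names are illustrative); the full $U_x$ of Eq. (3) would further subtract the mean of this score over the (empirical) data distribution.

```python
import jax
from jax.scipy.special import logsumexp

def free_energy(logits_fn, params, x):
    # E(x; theta) = -log sum_y exp(f_y(x; theta)), computed stably.
    return -logsumexp(logits_fn(params, x))

def supervised_score(logits_fn, params, x):
    # grad_theta logsumexp(f) = sum_y softmax(f)_y * grad_theta f_y,
    # i.e., the probability-weighted class-gradient sum in Eq. (3).
    return jax.grad(lambda p: logsumexp(logits_fn(p, x)))(params)
```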
NFK WITH LOW-RANK APPROXIMATION
The Fisher vector $V_x$ is the linear feature embedding $\phi(x)$ given by the NFK $K_{\mathrm{nfk}}(x, z) = \langle V_x, V_z \rangle$ of the neural network model $f(x; \theta)$. However, straightforwardly using $V_x$ as the feature representation suffers from a scalability issue, since $V_x \in \mathbb{R}^{|\theta|}$ has the same dimensionality as the number of parameters $|\theta|$. Because $|\theta|$ can be tremendously large for modern neural networks, it is unfortunately infeasible to directly use $V_x$ as the feature representation.
Low-Rank Structure of NFK. Motivated by the manifold hypothesis, i.e., the widely held belief that real-world high-dimensional data lives on a low-dimensional manifold (Roweis and Saul, 2000; Rifai et al., 2011a), we investigate the structure of NFKs and present empirical evidence that the NFKs of good models have low-rank spectral structure. We start by examining supervised learning models, studying the spectrum of the empirical NFK of trained networks with different architectures. We trained a LeNet-5 (LeCun et al., 1998) CNN and a 3-layer MLP by minimizing binary cross-entropy loss, and then computed the eigen-decomposition of the NFK Gram matrix. We show the explained-variance ratio plot in Figure 2. The spectrum of the CNN's NFK concentrates on fewer large eigenvalues, thus exhibiting a lower effective-rank structure than the MLP's, which can be explained by the CNN's better inductive bias for the image domain. For unsupervised learning models, we trained a small unconditional DCGAN model on the MNIST dataset. We compare a fully trained model against a randomly initialized one in Fig. 2 (note the logarithmic scale of the x-axis).
Remarkably, the trained model exhibits an extreme degree of low-rankness: the top 100 principal components explain over 99.5% of the overall variance, where 100 is two orders of magnitude smaller than both the number of examples and the number of parameters in the discriminator. We include more experimental results and discussion in the appendix due to space constraints.
Efficient Low-Rank Approximation of NFK. The theoretical insights and empirical evidence presented above hint at a natural solution to the high dimensionality of $V_x \in \mathbb{R}^{|\theta|}$: we can seek a low-rank approximation of the NFK. By Mercer's theorem (Mercer, 1909), for a positive definite kernel $K(x, z) = \langle \phi(x), \phi(z) \rangle$ we have

$$K(x, z) = \sum_{i=1}^{\infty} \lambda_i\, \varphi_i(x)\, \varphi_i(z), \quad x, z \in \mathcal{X},$$

where $\{(\lambda_i, \varphi_i)\}$ are the eigenvalues and eigenfunctions of the kernel $K$ with respect to the integral operator $\int p(z)\, K(x, z)\, \varphi_i(z)\, dz = \lambda_i\, \varphi_i(x)$. The linear feature embedding $\phi(x)$ can thus be constructed from the orthonormal eigenbasis $\{(\lambda_i, \varphi_i)\}$ as $\phi(x) \equiv \big(\sqrt{\lambda_i}\, \varphi_i(x)\big)_{i=1}^{\infty}$. To obtain a low-rank approximation, we keep only the top-$k$ eigenpairs ordered by eigenvalue to form the low-rank $k$-dimensional feature embedding $e(x) \in \mathbb{R}^k$, $k \ll N$, $k \ll |\theta|$:

$$e(x) \equiv \big(\sqrt{\lambda_1}\, \varphi_1(x),\, \sqrt{\lambda_2}\, \varphi_2(x),\, \ldots,\, \sqrt{\lambda_k}\, \varphi_k(x)\big). \tag{4}$$
Algorithm 2 Efficient method: compute low-rank NFK feature embedding
1: compute the truncated SVD V = Φ diag(Σ) P^⊤, P ∈ R^{|θ|×k}, using power iteration via JVP/VJP evaluations
2: compute K(x*, X) Φ_i ≈ V_{x*} diag(Σ_i) P_i via a JVP evaluation
3: obtain e_nfk(x*) ∈ R^k via Eq. 5 and Eq. 4
By applying the proposed NFK formulation $K_{\mathrm{nfk}}$ to a pre-trained neural network model $f(x; \theta)$, we obtain in this way a compact low-dimensional feature representation $e_{\mathrm{nfk}}(x) \in \mathbb{R}^k$, which we call the low-rank NFK feature embedding.
We now illustrate how to estimate the eigenvalues and eigenfunctions of the NFK $K_{\mathrm{nfk}}$ from data. Given the dataset $\mathcal{D}$, the Gram matrix $\mathbf{K} \in \mathbb{R}^{N \times N}$ of the kernel $K$ is defined by $\mathbf{K}_{ij} = K(x_i, x_j)$. We use $X \equiv [x_i]_{i=1}^N$ to denote the matrix of all data examples, and $\varphi_i(X) \in \mathbb{R}^N$ to denote the vector of evaluations of the $i$-th eigenfunction $\varphi_i$ at all data examples. By performing the eigen-decomposition of the Gram matrix $\mathbf{K} = \Phi\, \mathrm{diag}(\Lambda)\, \Phi^\top$, the $i$-th eigenvector $\Phi_i \in \mathbb{R}^N$ and eigenvalue $\Lambda_i$ give unbiased estimates of the $i$-th eigenfunction $\varphi_i$ and eigenvalue $\lambda_i$ of the kernel $K$, evaluated at the training examples $X$:

$$\varphi_i(X) \approx \sqrt{N}\, \Phi_i, \qquad \lambda_i \approx \frac{1}{N}\, \Lambda_i.$$
Based on these estimates, we can approximate the eigenfunction $\varphi_i$ via the integral operator by Monte-Carlo estimation with the empirical data distribution:

$$\lambda_i\, \varphi_i(x) = \int p(z)\, K(x, z)\, \varphi_i(z)\, dz \;\approx\; \mathbb{E}_{x_j \sim p_{\mathrm{data}}}\, K(x, x_j)\, \varphi_i(x_j) \;\approx\; \frac{1}{N} \sum_{j=1}^{N} K(x, x_j)\, \Phi_{ji}. \tag{5}$$

Given a new test example $x^*$, we can thus approximate the eigenfunction evaluation $\varphi_i(x^*)$ by projecting the kernel evaluations against the training examples, $K(x^*, X) \equiv [K(x^*, x_j)]_{j=1}^N$, onto the $i$-th eigenvector $\Phi_i$ of the Gram matrix $\mathbf{K}$.
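Combining Eq. (4) and Eq. (5) with the finite-sample estimates above, and folding the $\sqrt{N}$ normalization constants together (this bookkeeping is our own, so treat the constants as an assumption), the test-time embedding can be computed as $e_i(x^*) = K(x^*, X)\, \Phi_i / \sqrt{\Lambda_i}$, which reduces to $\sqrt{\Lambda_i}\, \Phi_{ji}$ when $x^*$ is the $j$-th training point. A small sketch, with `kernel_fn` a placeholder for the NFK evaluation:

```python
import jax.numpy as jnp

def test_embedding(kernel_fn, x_star, X, Phi, Lam, k):
    # K(x*, X): kernel row against the N training examples.
    k_row = jnp.array([kernel_fn(x_star, xj) for xj in X])
    # e_i(x*) = (K(x*, X) @ Phi_i) / sqrt(Lambda_i) for i = 1..k.
    return (k_row @ Phi[:, :k]) / jnp.sqrt(Lam[:k])
```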
We adopt this method as the baseline approach for low-rank approximation, and present the baseline algorithm description in Alg. 1.
However, because it demands explicit computation and manipulation of the Fisher vector matrix $V \in \mathbb{R}^{N \times |\theta|}$ and the Gram matrix $\mathbf{K} \in \mathbb{R}^{N \times N}$, a straightforward application of the baseline approach in Alg. 1, as well as of other off-the-shelf kernel approximation (Williams and Seeger, 2000; Rahimi and Recht, 2007) and SVD methods (Halko et al., 2011), is practically infeasible at larger scales, where both the number of data examples $N$ and the number of model parameters $|\theta|$ can be extremely large.
To tackle this scalability issue, we propose a novel, highly efficient and scalable algorithm for computing the low-rank approximation of NFK. Given the dataset $\mathcal{D}$ and model $f(x; \theta)$, we aim to compute the truncated SVD of the Fisher vector matrix $V = \Phi\, \mathrm{diag}(\Sigma)\, P^\top$, $P \in \mathbb{R}^{|\theta| \times k}$. Based on power methods (Golub and Van der Vorst, 2000; Bathe, 1971) for finding the leading eigenvectors, we start from a random vector $v_0$ and iteratively construct the sequence $v_{t+1} = \frac{V V^\top v_t}{\|V V^\top v_t\|}$. Under the NFK formulation of Sec. 3, $V$ can be obtained from the Jacobian matrix $J_\theta(X) \in \mathbb{R}^{N \times |\theta|}$ up to element-wise linear transformations, so each iterative step decomposes into a Jacobian-vector product (JVP) and a vector-Jacobian product (VJP). With modern automatic-differentiation techniques, we can evaluate both JVP and VJP efficiently, requiring only the computational cost of one vanilla forward pass and one backward pass of the network, respectively. With the truncated SVD computed, we can approximate the projection term in Eq. 5 by
$$K(x^*, X)\, \Phi_i = V_{x^*} V^\top \Phi_i \approx V_{x^*}\, \mathrm{diag}(\Sigma_i)\, P_i,$$
which is again in JVP form, so we can pre-compute and store the truncated SVD results and evaluate the eigenfunction at any test example via one efficient JVP forward pass. We describe our proposed algorithm briefly in Alg. 2.
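Below is a minimal matrix-free sketch of the JVP/VJP idea just described (a toy power iteration, not the paper's implementation): it estimates the top singular value and left singular vector of the Jacobian of a generic `f(params, X) -> R^N` without ever materializing $J$.

```python
import jax
import jax.numpy as jnp

def jvp_rows(f, params, X, v):
    # J v: directional derivative of the outputs along parameter direction v.
    return jax.jvp(lambda p: f(p, X), (params,), (v,))[1]

def vjp_cols(f, params, X, u):
    # J^T u: backward pass pulling an output-space vector into parameter space.
    return jax.vjp(lambda p: f(p, X), params)[1](u)[0]

def top_singular_pair(f, params, X, key, iters=20):
    u = jax.random.normal(key, (X.shape[0],))
    for _ in range(iters):
        v = vjp_cols(f, params, X, u)   # J^T u
        u = jvp_rows(f, params, X, v)   # J J^T u
        u = u / jnp.linalg.norm(u)
    # For a unit vector u near the top eigenvector, u . (J J^T u) ~ sigma^2.
    sigma = jnp.sqrt(jnp.dot(u, jvp_rows(f, params, X,
                                         vjp_cols(f, params, X, u))))
    return sigma, u
```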
EXPERIMENTS
In this section, we evaluate NFK in the following settings. We first evaluate the proposed low-rank kernel approximation algorithm (Sec. 3.2) in terms of both approximation accuracy and running-time efficiency. Next, we evaluate NFK on various representation learning tasks in supervised, semi-supervised, and unsupervised learning settings.
QUALITY AND EFFICIENCY OF LOW-RANK NFK APPROXIMATIONS
We implement our proposed low-rank kernel approximation algorithm in Jax (Bradbury et al., 2018) with distributed multi-GPU parallel computation support. For the baseline methods, we first compute the full kernel Gram matrix using the neural-tangents (Novak et al., 2020) library, and then use sklearn.decomposition.TruncatedSVD to obtain the truncated SVD results. All model and algorithm hyper-parameters are included in the Appendix.
Computational Costs. We start by comparing the running time of computing the top NFK eigenvectors via truncated SVD. We use two models for the comparison: the DCGAN-like GAN model of (Zhai et al., 2019) and a Wide ResNet (WRN) with 40 layers and twice the original width (denoted WRN-40-2); see the appendix for detailed hyper-parameters. We observe that our proposed algorithm achieves nearly linear time scaling, while the baseline method cannot handle more than 2^14 data examples, as its memory usage and time complexity become unaffordable. Fig. 3 also shows that multi-GPU parallelism brings further speed-ups that scale almost linearly with the number of GPUs. We emphasize that, for a given number of desired eigenvectors, the time complexity of our method scales linearly with the number of data examples and the memory usage remains constant with an adequate batch size, since the full kernel matrix is never explicitly computed or stored.
Approximation accuracy. We investigate the approximation error of our proposed low-rank approximation method. Since we do not introduce any additional approximations, our method shares the approximation error bound of existing randomized SVD algorithms (Martinsson and Tropp, 2020; Halko et al., 2011), and we expect differences from the baseline randomized SVD only up to numerical error. To evaluate the quality of the low-rank kernel approximation, we use LeNet-5 and compute its full NFK Gram matrix on the MNIST dataset; see the appendix for detailed hyper-parameters. Fig. 3 shows the approximation errors of the top-128 eigenvalues along with the corresponding explained variances. We obtain less than 1e−8 absolute error and less than 1e−7 relative error on the top eigen-modes, which explain most of the data.
NFK Representations from Unsupervised Generative Models. To examine the effectiveness of low-rank NFK embeddings as data representations in the unsupervised setting, we consider GANs and VAEs as representative generative models and compute their low-rank NFK embeddings. We then adopt the linear probing protocol: we train a linear classifier on top of the obtained embeddings and report classification performance as a measure of representation quality. For GANs, we use the same pre-trained GAN model as (Zhai et al., 2019) and re-implemented the AFV baseline. For VAEs, we follow the model architecture proposed in (Child, 2020). We apply the proposed truncated SVD algorithm with 10 power iterations to obtain the 256-dimensional embedding via projection. We present our results on CIFAR-10 (Krizhevsky et al., 2009a) in Table 1. We use GAN-NFK-128d (GAN-NFK-256d) to denote the NFK embedding obtained from the top-128 (top-256) eigenvectors of our GAN model; our VAE models (VAE-NFK-128d and VAE-NFK-256d) follow the same naming. For the VAE baseline, the method proposed in (Mu et al., 2020) combines both gradient-based and activation-based features into one linear model, denoted VAE in the table. For GAN baselines, we first consider using only intermediate layer activations as the data representation, referred to as GAN-Activations. We then consider using the full Fisher vector as the representation, i.e., the normalized gradients w.r.t. all model parameters, denoted GAN-AFV as proposed in (Zhai et al., 2019). Moreover, we compare against training the whole network with labels in a supervised way, denoted GAN-Supervised.
As shown in Table 1, contrasting GAN-AFV against GAN-Activations, and consistent with recent works (Zhai et al., 2019; Mu et al., 2020), gradients provide useful information beyond activation-based features. However, using all gradients, i.e., the full Fisher vector, as the representation is impractical for large-scale models; e.g., the VAE of (Child, 2020) has ~40M parameters, so the baseline methods cannot be applied directly. Our low-rank NFK embedding addresses this challenge by building a low-dimensional vector representation with an efficient kernel approximation algorithm, making it possible to use the gradient information of all model parameters by embedding it into a low-dimensional vector, e.g., the 256-dimensional embedding of VAE-NFK-256d from ~40M parameters. Since the low-rank NFK embedding is obtained by linear projections of full Fisher vectors, it naturally answers Q2: the NFK embedding can be viewed as a compact yet informative representation containing information from all gradients. Table 1 also shows that with the top-128 eigenvectors the low-rank NFK embedding recovers the performance of the full ~5.9M-dimensional Fisher vector, which provides positive evidence for Q3 that we can preserve most of the useful information in the Fisher vector by exploiting the low-rank structure of the NFK spectrum.

Table 2: Error rates of semi-supervised classification on CIFAR-10 and SVHN, varying the number of labels from 500 to 4000. NFK-128d yields extremely competitive performance compared to more sophisticated baselines, Mixup, VAT (Miyato et al., 2019), MeanTeacher (Tarvainen and Valpola, 2017), MixMatch (Berthelot et al., 2019), and Improved GAN (Salimans et al., 2016), all of which learn jointly with labels. Also note that the architecture used by MixMatch yields a 4.13% supervised learning error rate, a much stronger baseline than our supervised one (7.3%).
NFK Representations for Semi-Supervised Learning. We next test the low-rank NFK embeddings in the semi-supervised learning setting. Following the standard semi-supervised learning benchmark settings (Berthelot et al., 2019; Miyato et al., 2019; Laine and Aila, 2017; Sajjadi et al., 2016; Tarvainen and Valpola, 2017), we evaluate our method on CIFAR-10 (Krizhevsky et al., 2009a) and SVHN (Krizhevsky et al., 2009b). We treat most of the dataset as unlabeled data and use a few examples as labeled data. We use the same GAN model as in the unsupervised setting above and compute the top-128 eigenvectors on the training set (labeled and unlabeled) to derive the 128-dimensional NFK embedding. We then use only the labeled data to train a linear classifier on top of the NFK embedding features, denoted NFK-128d. We vary the number of labeled training examples and report the results in Table 2, in comparison with other baseline methods. We see that NFK-128d achieves very competitive performance. On CIFAR-10, NFK-128d is only outperformed by the state-of-the-art semi-supervised algorithm MixMatch (Berthelot et al., 2019), which also uses a stronger architecture than ours. The results on SVHN are mixed, though NFK-128d is competitive with the top-performing approaches. These results demonstrate the effectiveness of NFK embeddings from unsupervised generative models in semi-supervised learning, showing promising sample efficiency for Q4.
NFK Representations for Knowledge Distillation. We next test the effectiveness of low-rank NFK embeddings for knowledge distillation in the supervised setting. Our experiments are conducted on CIFAR-10, with a WRN-40-2 teacher and a WRN-16-1 student. After training the teacher, we compute the low-rank approximation of its NFK using the top-20 eigenvectors; more details of the distillation setup are given in the Appendix. Our results are reported in Table 3. We see that our method achieves superior performance compared to competitive baseline distillation methods, which mainly use the teacher's logits and activations as the distillation target.
CONCLUSIONS
In this work, we propose a novel, principled approach to representation extraction from pre-trained neural network models. We introduce NFK by extending the Fisher kernel to neural networks in both unsupervised and supervised learning settings, and propose a novel low-rank kernel approximation algorithm, which allows us to obtain a compact feature representation in a highly efficient and scalable way.
A EXTENDED PRELIMINARIES
We extend Sec. 2 to introduce additional technical background and related work.
Kernel methods in Deep Learning. Popularized by the NTK work of Jacot et al. (2018), there has been great interest in the deep learning community in the kernel view of neural networks. In particular, several works have studied the low-rank structure of the NTK (Baratin et al., 2021; Papyan, 2020; Canatar et al., 2020), demonstrating that the empirical NTK exhibits low-rankness and that this theoretically encourages better generalization. Our low-rank analysis of NFK shares a similar flavor but generalizes across supervised and unsupervised learning settings. Besides, we make an explicit effort to propose an efficient implementation of the low-rank approximation and demonstrate strong empirical performance.
Unsupervised/self-supervised representation learning. Unsupervised representation learning is an old idea in deep learning. A large body of work is dedicated to designing better learning objectives (self-supervised learning), including denoising (Vincent et al., 2010), contrastive learning (Oord et al., 2018; Chen et al., 2020b; He et al., 2020), mutual-information-based methods (Hjelm et al., 2019; Poole et al., 2019) and other "pretext tasks" (Jing and Tian, 2020). Our attempt falls into the same category of unsupervised representation learning, but differs in that we focus on effectively extracting information from a standard probabilistic model. This makes our effort orthogonal to many of the related works, and it can be easily plugged into different families of models.
Knowledge Distillation. Knowledge distillation (KD) is generally concerned with supervising a student model with a teacher model (Hinton et al., 2015; Ba and Caruana, 2014). The general form of KD is to directly match the statistics of one or a few layers (by default, the logits). Various works have studied layer selection (Romero et al., 2015) or loss function design (Ahn et al., 2019). More closely related to our work are efforts that consider second-order statistics between examples, including (Tung and Mori, 2019). NFKD differs in that we represent the teacher's knowledge in the kernel space, which is directly tied to the kernel interpretation of neural networks and introduces different inductive biases than layerwise representations.
Neural Tangent Kernel. Recent advances in the understanding of neural networks have shed light on the connection between neural network training and kernel methods. In (Jacot et al., 2018), it is shown that one can use the Neural Tangent Kernel (NTK) to characterize the full training of a neural network using a kernel. Let $f(\theta; x)$ denote a neural network function with parameters $\theta$. The NTK is defined as follows:
$$K_{\mathrm{ntk}}(x, z) = \mathbb{E}_{\theta \sim P_\theta}\, \langle \nabla_\theta f(\theta; x), \nabla_\theta f(\theta; z) \rangle, \tag{6}$$
where $P_\theta$ is the probability distribution of the initialization of $\theta$. Jacot et al. (2018) further demonstrate that in the large-width regime, a neural network trained with gradient descent essentially evolves as a linear model. Let $\theta_0$ denote the parameter values at initialization. To determine how the function $f_t(\theta_t; x)$ evolves, we may Taylor-expand the output around $\theta_t$:

$$f_{t+1}(\theta_{t+1}; x) \approx f_t(\theta_t; x) + \nabla_{\theta_t} f_t(\theta_t; x)^\top (\theta_{t+1} - \theta_t).$$

Since the weight updates are given by $\theta_{t+1} - \theta_t = -\eta\, \frac{1}{N} \sum_{i=1}^{N} \nabla_{\theta_t} \mathcal{L}_t(x_i)$, we have

$$f_{t+1}(\theta_{t+1}; x) \approx f_t(\theta_t; x) - \eta\, \frac{1}{N} \sum_{i=1}^{N} K_{\mathrm{ntk}}(x, x_i)\, \nabla_f \mathcal{L}_t(x_i).$$
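As a toy instance of these dynamics: for the squared loss $\mathcal{L}_t = \frac{1}{2}\|f_t - y\|^2$ we have $\nabla_f \mathcal{L}_t = f_t - y$, so one step of the linearized training reads as in the sketch below, with `K` the precomputed empirical NTK Gram matrix (a hedged illustration, not the paper's code).

```python
import jax.numpy as jnp

def ntk_gd_step(K, f, y, eta):
    # One step of kernel gradient descent on the outputs under squared loss:
    # f_{t+1} = f_t - (eta / N) * K (f_t - y).
    N = f.shape[0]
    return f - (eta / N) * K @ (f - y)
```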
The significance of the NTK stems from two observations: 1) when suitably initialized, the NTK converges to a limit kernel as the width tends to infinity, $\lim_{\text{width} \to \infty} K_{\mathrm{ntk}}(x, z; \theta_0) = \bar{K}_{\mathrm{ntk}}(x, z)$; 2) in that limit, the NTK remains frozen in its limit state throughout training.
B ON THE CONNECTIONS BETWEEN NFK AND NTK
In Sec. 3.1, we showed that our definition of NFK in the supervised learning setting bears great similarity to the NTK. We provide more discussion here on the connections between NFK and NTK.
For the L2 regression loss, the empirical Fisher information reduces to

$$I = \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta f_\theta(x_i)\, \nabla_\theta f_\theta(x_i)^\top.$$

Note that the Fisher information matrix $I$ is given by the covariance matrix of the Jacobian $J$, while the NTK matrix is defined as the Gram matrix of $J$; hence they share the same spectrum, and the NTK and the NFK share the same eigenvectors. The addition of $I^{-1}$ in the definition of $K_{\mathrm{nfk}}$ can be seen as a form of conditioning, facilitating fast convergence in all directions spanned by $J$.
Algorithm 3 Compute low-rank NFK feature embedding via truncated SVD
1: compute V = Φ diag(Σ) P^⊤, P ∈ R^{|θ|×k}, via truncated_svd(X, f_θ, topk=K, kernel_type="NFK")
2: compute K(x*, X) Φ_i ≈ V_{x*} diag(Σ_i) P_i via a JVP evaluation
3: obtain e_nfk(x*) ∈ R^k via Eq. 5 and Eq. 4
Equation 3 also has immediate connections to the NTK. For the NTK, $K_{\mathrm{ntk}}(x, \bar{x})$ is a matrix that measures the dot product of Jacobians for every pair of logits. The NFK, on the other hand, reduces the Jacobian $\nabla_\theta f_\theta(x)$ for each example $x$ to a single vector of dimension $|\theta|$, weighted by the predicted probability of each class $p_\theta(y \mid x)$. The other notable difference between NFK and NTK is the subtraction and normalization factors, represented by $\mathbb{E}_{x' \sim p_\theta(x')} \sum_y p_\theta(y \mid x')\, \nabla_\theta f^y_\theta(x')$ and $I$, respectively. This distinction is related to the difference between Natural Gradient Descent (Amari, 1998; Karakida and Osawa, 2020) and gradient descent. In a nutshell, our definition of NFK in the supervised learning setting can be considered a reduced version of the NTK with proper normalization. These properties make NFK much more scalable w.r.t. the number of classes, and also less sensitive to the scale of the model's parameters.
To see this better, we can define an "unnormalized" version of NFK as

$$K_u(x, \bar{x}) = \Big[\sum_y p_\theta(y \mid x)\, \nabla_\theta f^y_\theta(x)\Big]^\top \sum_{\bar{y}} p_\theta(\bar{y} \mid \bar{x})\, \nabla_\theta f^{\bar{y}}_\theta(\bar{x}).$$

It is easy to see that $K_u$ has the same rank as the original NFK $K$, as $I^{-1}$ is full rank by definition. We can then further rewrite it as

$$K_u(x, \bar{x}) = \sum_y \sum_{\bar{y}} p_\theta(y \mid x)\, p_\theta(\bar{y} \mid \bar{x})\, \nabla_\theta f^y_\theta(x)^\top \nabla_\theta f^{\bar{y}}_\theta(\bar{x}) = \sum_y \sum_{\bar{y}} p_\theta(y \mid x)\, p_\theta(\bar{y} \mid \bar{x})\, K^{y,\bar{y}}_{\mathrm{ntk}}(x, \bar{x}). \tag{7}$$
In words, the unnormalized version of NFK can be considered a reduction of the NTK, where each element is weighted by the predicted probabilities of the respective classes. If we further assume that the model of interest is well trained, as is often the case in knowledge distillation, we can approximate $K_u$ as $K^{y^*, \bar{y}^*}_{\mathrm{ntk}}(x, \bar{x})$, where $y^* = \arg\max_y p_\theta(y \mid x)$ and likewise for $\bar{y}^*$. This suggests that the unnormalized NFK can roughly be viewed as a downsampled version of the NTK. As a result, we expect the unnormalized NFK (and hence the NFK) to exhibit similar low-rank properties to those demonstrated in the NTK literature.
On the low-rank structure of NTK. Consider the NTK Gram matrix $\mathbf{K}_{\mathrm{ntk}} \in \mathbb{R}^{N \times N}$ of some network given the dataset $\{x_i\}_{i=1}^N$ (for simplicity we assume a scalar output), and its eigen-decomposition $\mathbf{K}_{\mathrm{ntk}} = \sum_{j=1}^{N} \lambda_j u_j u_j^\top$. Let $f \in \mathbb{R}^N$ denote the concatenated outputs. Under gradient descent in the linear regime, the outputs $f_t$ evolve according to:

$$\forall j, \quad u_j^\top (f_{t+1} - f_t) \approx -\eta\, \lambda_j\, u_j^\top \nabla_f \mathcal{L}. \tag{8}$$
The updates $f_{t+1} - f_t$ projected onto the bases of the kernel therefore converge at different speeds, determined by the eigenvalues $\{\lambda_j\}$. Intuitively, good kernel-data alignment means that $\nabla_f \mathcal{L}$ is spanned by a few eigenvectors with large corresponding eigenvalues, speeding up convergence and promoting generalization.

Algorithm 4 truncated_svd: Truncated SVD Algorithm for Low-rank Kernel Approximation. Comments are based on NTK for simplicity.
Input: dataset X ≡ {x_i}_{i=1}^N
Input: neural network model f_θ
Input: kernel type kernel, NFK or NTK
Input: low-rank embedding size K
Input: number of power iterations L = 10
Input: number of oversamples U = 10
Output: truncated SVD of the Jacobian, J_θ(X) ≈ P_k Σ_k Q_k^⊤
1: U = K + U    // size of the augmented set of vectors in power iterations
2: draw a random matrix Ω ∈ R^{N×U}
3: Ω = matrix_jacobian_product(f_θ, X, Ω, kernel)    // Ω = J_θ(X)^⊤ Ω ∈ R^{M×U}
4: for step = 1 to L do
5:   Ω = jacobian_matrix_product(f_θ, X, Ω, kernel)    // Ω = J_θ(X) Ω ∈ R^{N×U}
6:   Ω = matrix_jacobian_product(f_θ, X, Ω, kernel)    // Ω = J_θ(X)^⊤ Ω ∈ R^{M×U}
7:   Ω = qr_decomposition(Ω)
8: end for
9: B = jacobian_matrix_product(f_θ, X, Ω, kernel)    // B = J_θ(X) Ω ∈ R^{N×U}
10: P, Σ, Q = svd(B^⊤)
11: P = Ω P
12: keep the top rank-K vectors to obtain the truncated results P_k, Σ_k, Q_k
13: return P_k, Σ_k, Q_k
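A compact standalone sketch of the randomized subspace iteration in Algorithm 4 follows, written against abstract closures `jmp(M) = J M` and `mjp(M) = Jᵀ M` (which in practice would be the batched JVP/VJP routines of Algorithms 5-6); all names and the default key are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def truncated_svd_sketch(jmp, mjp, n_rows, k, oversample=10, iters=10,
                         key=jax.random.PRNGKey(0)):
    Om = jax.random.normal(key, (n_rows, k + oversample))  # random probes in R^N
    Om = mjp(Om)                                           # J^T Omega, (|theta|, k+u)
    for _ in range(iters):
        # Power step on J^T J, followed by re-orthonormalization for stability.
        Om, _ = jnp.linalg.qr(mjp(jmp(Om)))
    B = jmp(Om)                                            # J Omega, (N, k+u)
    P, S, Qt = jnp.linalg.svd(B, full_matrices=False)
    # J ~ B Om^T = P S (Om Qt^T)^T, so Om @ Qt.T holds the right singular vectors.
    return P[:, :k], S[:k], (Om @ Qt.T)[:, :k]
```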
Algorithm 5 jacobian_matrix_product
Input: neural network model f_θ
Input: input data X ∈ R^{B×D}, where B is the batch size
Input: input matrix M
Input: kernel type kernel, NFK or NTK
Output: J_θ(X) M for NTK; Fisher-vector-matrix product V_θ(X) M for NFK
1: jmp_fn = jax.vmap(jax.jvp)
2: P = jmp_fn(f_θ, X, M)
3: if kernel = "NFK" then
4:   P = diag(I)^{-1/2} (P − Z_θ M)
5: end if
6: return P

C NEURAL FISHER KERNEL WITH LOW-RANK APPROXIMATION

C.1 NEURAL FISHER KERNEL FORMULATION

We provide detailed derivations of the various NFK formulations presented in Section 3.

NFK for Energy-based Models. Consider an Energy-based Model (EBM) $p_\theta(x) = \frac{\exp(-E(x; \theta))}{Z(\theta)}$, where $E(x; \theta)$ is the energy function parametrized by $\theta$ and $Z(\theta) = \int \exp(-E(x; \theta))\, dx$ is the partition function. We can apply the Fisher kernel formulation to derive the Fisher score $U_x$ as
$$U_x = \nabla_\theta \log p_\theta(x) = -\nabla_\theta E(x; \theta) - \nabla_\theta \log Z(\theta) = -\nabla_\theta E(x; \theta) - \mathbb{E}_{x' \sim p_\theta(x')}\, \nabla_\theta \log\big[\exp(-E(x'; \theta))\big] = \mathbb{E}_{x' \sim p_\theta(x')}\, \nabla_\theta E(x'; \theta) - \nabla_\theta E(x; \theta). \tag{9}$$
Then we can obtain the FIM $I$ and the Fisher vector $V_x$ from the above results:

$$I = \mathbb{E}_{x \sim p_\theta(x)}\big[U_x U_x^\top\big], \qquad V_x = I^{-\frac{1}{2}}\, U_x. \tag{10}$$
Algorithm 6 matrix_jacobian_product
Input: neural network model f_θ
Input: input data X ∈ R^{B×D}, where B is the batch size
Input: input matrix M
Input: kernel type kernel, NFK or NTK
Output: J_θ(X)^⊤ M for NTK; Fisher-vector-matrix product V_θ(X)^⊤ M for NFK
1: mjp_fn = jax.vmap(jax.vjp)
2: P = mjp_fn(f_θ, X, M)
3: if kernel = "NFK" then
4:   P = diag(I)^{-1/2} (P − Z_θ M)
5: end if
6: return P

NFK for GANs. As introduced in Section 3, we consider the EBM formulation of GANs. Given a pre-trained GAN, we use $D(x; \theta)$ to denote the output of the discriminator $D$, and $G(h)$ to denote the output of the generator $G$ given latent code $h \sim p(h)$. The energy function is then defined as $E(x; \theta) = -D(x; \theta)$. Based on the NFK formulation for EBMs, we can simply substitute $E(x; \theta) = -D(x; \theta)$ into Eq. 9 and Eq. 10 and derive the NFK formulation for GANs as below:
$$U_x = \nabla_\theta D(x; \theta) - \mathbb{E}_{h \sim p(h)}\, \nabla_\theta D(G(h); \theta), \quad I = \mathbb{E}_{h \sim p(h)}\big[U_{G(h)} U_{G(h)}^\top\big], \quad V_x = \mathrm{diag}(I)^{-\frac{1}{2}}\, U_x, \quad K_{\mathrm{nfk}}(x, z) = \langle V_x, V_z \rangle. \tag{11}$$
Note that we use a diagonal approximation of the FIM throughout this work for scalability. Also, since the generator of a GAN is trained to match the distribution induced by the discriminator's EBM (from the perspective of variational training for GANs), we can use samples from the generator to approximate $x \sim p_\theta(x)$, which is reflected in the above formulation.
NFK for VAEs, Flow-based Models, Auto-Regressive Models. For models including VAEs, Flowbased Models, Auto-Regressive Models, where explicit or approximate density estimation is available, we can simply apply the classical Fisher kernel formulation as introduced in the main text.
NFK for Supervised Learning Models. In the supervised learning setting, we consider conditional probabilistic models $p_\theta(y \mid x) = p(y \mid x; \theta)$. In particular, we focus on classification problems where the conditional probability is parameterized by a softmax over the logits $f(x; \theta)$: $p_\theta(y \mid x) = \exp(f_y(x; \theta)) / \sum_{y'} \exp(f_{y'}(x; \theta))$, where $y$ is a discrete label and $f_y(x; \theta)$ denotes the $y$-th logit. We then borrow the idea from JEM (Grathwohl et al., 2020) and write a joint energy function over $(x, y)$ as $E(x, y; \theta) = -f_y(x; \theta)$. It is easy to see that this joint energy yields exactly the same conditional probability, while at the same time leading to a free energy function:
$$E(x; \theta) = -\log \sum_y \exp(f_y(x; \theta)), \qquad \nabla_\theta E(x; \theta) = -\sum_y p_\theta(y \mid x)\, \nabla_\theta f_y(x; \theta). \tag{12}$$
Based on the NFK formulation for EBMs, we can simply substitute the above results into Eq. 9 and Eq. 10 and derive the NFK formulation for supervised learning models as below:
$$U_x = \sum_y p_\theta(y \mid x)\, \nabla_\theta f_y(x; \theta) - \mathbb{E}_{x' \sim p_\theta(x')} \sum_y p_\theta(y \mid x')\, \nabla_\theta f_y(x'; \theta), \tag{13}$$

$$I = \mathbb{E}_{x \sim p_\theta(x)}\big[U_x U_x^\top\big], \qquad V_x = \mathrm{diag}(I)^{-\frac{1}{2}}\, U_x, \qquad K_{\mathrm{nfk}}(x, z) = \langle V_x, V_z \rangle. \tag{14}$$
C.2 EFFICIENT LOW-RANK NFK/NTK APPROXIMATION VIA TRUNCATED SVD

We provide more details on the empirical observations of the low-rank structure of NFK and on the low-rank kernel approximation algorithm here.

Figure 4: Inverting a DCGAN with 100-d NFK embeddings (a), compared with image reconstruction with 100-d PCA embeddings (b). In either case, the left plot shows real test images and the right shows the reconstructions. Note that NFK embeddings are capable of inverting a GAN, producing high-quality semantic reconstructions. With PCA, embeddings of the same dimensionality produce blurrier (thus less semantic) reconstructions.
Low-Rank Structure of NFK.
For supervised learning models, we trained a LeNet-5 (LeCun et al., 1998) CNN and a 3-layer MLP by minimizing binary cross-entropy loss, and then computed the eigen-decomposition of the NFK Gram matrix. For unsupervised learning models, we trained a small unconditional DCGAN model on the MNIST dataset. We deliberately selected a small discriminator, consisting of 17K parameters. Because of the relatively low dimensionality of $\theta$ in the discriminator, we were able to directly compute the Fisher vectors for a random subset of the training dataset. We then performed a standard SVD on the gathered Fisher vector matrix and examined the spectrum statistics. In particular, we plot the explained-variance ratio, defined as

$$r_k = \frac{\sum_{i=1}^{k} \lambda_i^2}{\sum_{i} \lambda_i^2},$$
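For reference, a one-liner sketch for computing $r_k$ from the singular values of the Fisher vector matrix:

```python
import jax.numpy as jnp

def explained_variance_ratio(singular_values, k):
    # r_k: fraction of total squared spectral mass in the top-k modes.
    s2 = singular_values ** 2
    return jnp.sum(s2[:k]) / jnp.sum(s2)
```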
where $\lambda_i$ is the $i$-th singular value. In addition, we visualize the top 5 principal components by showing the example images with the largest projections on each component in Fig. 6. Furthermore, we conducted a GAN-inversion experiment. We start by sampling a set of latent variables from the generator's prior $h \sim p(h)$ and obtain a set of generated examples $\{x_i\}$, $x_i = G(h_i)$, $i = 1, \ldots, n$. We then apply Algorithm 2 to the generated examples $\{x_i\}$ to obtain their NFK embeddings $\{e(x_i)\}$, setting the dimension of both $h$ and $e$ to 100. We now have a compositional mapping $h \to x \to e$. We then learn a linear map $W \in \mathbb{R}^{100 \times 100}$ from $\{e(G(h_i))\}$ to $\{h_i\}$ by minimizing $\sum_{i=1}^{n} \|h_i - W\, e(G(h_i))\|^2$. In doing so, we have constructed an autoencoder from a regular GAN, with the compositional mapping $x \to e \to h \to \hat{x}$, where $\hat{x}$ is the reconstruction of an input $x$. The reconstructions are shown in Figure 4 (a). Interestingly, the 100-d SVD embedding gives rise to qualitatively faithful reconstructions of real images. In contrast, a PCA embedding of the same dimension gives much blurrier reconstructions (e.g., noise in the background), as shown in Figure 4 (b). This is a good indication that the 100-d embedding captures most of the information about an input example.
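The linear inversion map can be fit in closed form by least squares; a hedged sketch (names illustrative), stacking embeddings row-wise into $E \in \mathbb{R}^{n \times 100}$ and latents into $H \in \mathbb{R}^{n \times 100}$:

```python
import jax.numpy as jnp

def fit_inversion_map(E, H):
    # Solve min_W ||E W^T - H||_F^2; lstsq returns the minimizing W^T.
    W_T, *_ = jnp.linalg.lstsq(E, H)
    return W_T.T

def invert(W, e_x, generator):
    # x -> e(x) -> h_hat -> G(h_hat): reconstruct through the GAN generator.
    return generator(W @ e_x)
```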
Power iteration of NFK as JVP/VJP evaluations. Our proposed algorithm is based on the power method (Golub and Van der Vorst, 2000; Bathe, 1971) for finding the leading eigenvectors of a real symmetric matrix. Starting from a random vector $v_0$ drawn from a rotationally invariant distribution and normalized to unit norm, $\|v_0\| = 1$, the power method iteratively constructs the sequence $v_{t+1} = \frac{\mathbf{K} v_t}{\|\mathbf{K} v_t\|}$ for up to $q$ power iterations. Given the special structure of $\mathbf{K}$, namely that it is the Gram matrix of the Jacobian $J_\theta(X) \in \mathbb{R}^{N \times |\theta|}$, evaluating $\mathbf{K} v_t$ in each power iteration step amounts to evaluating $J_\theta(X)\, J_\theta(X)^\top v_t$, which decomposes into: (i) evaluating $z_t = J_\theta(X)^\top v_t$, and then (ii) $\mathbf{K} v_t = J_\theta(X)\, z_t$. When $\mathbf{K}$ is the NTK of a neural network, step (i) is a vector-Jacobian product (VJP) and step (ii) is a Jacobian-vector product (JVP). With the help of automatic-differentiation techniques, we can evaluate both JVP and VJP efficiently, requiring only the computational cost of one forward pass and one backward pass of the network, respectively. In this way, each kernel-matrix-vector product in a power iteration step reduces to one VJP and one JVP evaluation, without ever computing or storing the Jacobian or kernel matrix explicitly.
As introduced in Section 3.2, we include detailed algorithm descriptions here, from Algorithm 3 to Algorithm 6. In Algorithm 3, we show how to compute the low-rank NFK embedding, which can be used as the data representation. In Algorithm 4, we present our proposed automatic-differentiation-based truncated SVD algorithm for kernel approximation. Note that in Algorithms 5 and 6 we only need to follow Equation 3 to pre-compute the model distribution statistics, $Z_\theta = \mathbb{E}_{x' \sim p_\theta(x')} \sum_y p_\theta(y \mid x')\, \nabla_\theta f^y_\theta(x')$, and the FIM $I = \mathbb{E}_{x' \sim p_\theta(x')}\big[U_{x'} U_{x'}^\top\big]$.

Figure 7: Linear probing accuracy on CIFAR-10 with different numbers of principal components in the embedding. We use our proposed low-rank approximation method to compute the embedding from the teacher model on CIFAR-10 for knowledge distillation.
We adopt the EBM formulation of the classifier $f_\theta(x)$ and replace the Jacobian matrix $J_\theta(X)$ with the Fisher vector matrix $V_\theta(X) = \mathrm{diag}(I)^{-\frac{1}{2}} \big(J_\theta(X) - Z_\theta\big)$. Note that our proposed algorithm is also readily applicable to the empirical NTK by replacing the FIM with the identity matrix.
D EXPERIMENTS SETUP
D.1 QUALITY AND EFFICIENCY OF LOW-RANK NFK APPROXIMATIONS
Experiments on Computational Cost. We randomly sample $N \in \{2^k : 7 \le k \le 16\}$ data examples from the CIFAR-10 dataset and compute the top-32 eigenvectors of the NFK Gram matrix ($\in \mathbb{R}^{N \times N}$) by truncated SVD. We use the same number of power iterations (10) in the baseline method and our algorithm. Fig. 3 shows the running time of SVD for both methods as a function of the number of data examples $N$.

Experiments on Approximation Accuracy. We randomly sample 10000 examples and compute the top-128 eigenvalues using both the baseline method and our proposed algorithm. Specifically, we compute the full Gram matrix and perform an eigen-decomposition to obtain the baseline results. For our implementation, we run 10 power iterations in the randomized SVD.
D.2 NEURAL FISHER KERNEL DISTILLATION
With the efficient low-rank approximation of NFK, one immediately obtains a compact representation of the kernel: each example is represented as a $k$-dimensional vector. Essentially, we have achieved a form of kernel distillation, which is a useful technique on its own. Furthermore, we can use $Q$ for a generalized form of teacher-student knowledge distillation (KD), as in (Hinton et al., 2015). In standard KD, one trains a student network (e.g., a shallow model) under the supervision of a teacher network (e.g., a deep model) with a distillation loss of the following form:
$$\mathcal{L}_{\mathrm{kd}}(x, y) = \alpha\, \mathcal{L}_{\mathrm{cls}}(f_s(x), y) + (1 - \alpha)\, \mathcal{L}_t(f_s(x), f_t(x)), \tag{15}$$
where $\mathcal{L}_{\mathrm{cls}}$ is a standard classification loss (e.g., cross entropy) and $\mathcal{L}_t$ is a teacher loss that forces the student network's output $f_s$ to match that of the teacher $f_t$. We propose a straightforward extension of KD with NFK, where we modify the loss function to be:
$$\mathcal{L}_{\mathrm{nfkd}}(x, y) = \alpha\, \mathcal{L}_{\mathrm{cls}}(f_s(x), y) + (1 - \alpha)\, \mathcal{L}_t(h_s(x), Q_t(x)), \tag{16}$$
where $Q_t(x)$ denotes the $k$-dimensional embedding from the SVD of the teacher's NFK for example $x$, $h_s$ is a prediction head on the student, and $\mathcal{L}_t$ is overloaded to denote a suitable loss (e.g., $\ell_2$ distance or cosine distance). Equation 16 essentially uses the low-dimensional embedding of the teacher's NFK as supervision, in place of the teacher's logits. There are arguable benefits of $\mathcal{L}_{\mathrm{nfkd}}$ over $\mathcal{L}_{\mathrm{kd}}$: for example, when the number of classes is small, the logit layer contains very little extra information (measured in bits) beyond the label alone, whereas $Q_t$ can still provide dense supervision to the student.
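A hedged sketch of $\mathcal{L}_{\mathrm{nfkd}}$ follows (our reading of Eq. 16, not the paper's code): cross entropy on labels plus a cosine distance, one of the suitable choices for $\mathcal{L}_t$, between the student head $h_s(x)$ and the teacher embedding $Q_t(x)$; `optax` is an assumed dependency here.

```python
import jax.numpy as jnp
import optax

def nfkd_loss(student_logits, student_head, labels, teacher_embed, alpha=0.5):
    # Classification term: standard cross entropy on the ground-truth labels.
    ce = optax.softmax_cross_entropy_with_integer_labels(
        student_logits, labels).mean()
    # Teacher term: cosine distance between h_s(x) and the NFK embedding Q_t(x).
    s = student_head / jnp.linalg.norm(student_head, axis=-1, keepdims=True)
    t = teacher_embed / jnp.linalg.norm(teacher_embed, axis=-1, keepdims=True)
    cos_dist = (1.0 - jnp.sum(s * t, axis=-1)).mean()
    return alpha * ce + (1.0 - alpha) * cos_dist
```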
For the Neural Fisher Kernel Distillation (NFKD) experiments, we adopt a WideResNet-40x2 (Zagoruyko and Komodakis, 2016) as the teacher model and train a WideResNet with 16 layers and the same width as the student model. We run 10 power iterations to compute the SVD approximation of the teacher's NFK, obtaining the top-20 eigenvectors and eigenvalues. We then train the student model with the additional NFKD distillation loss using mini-batch SGD with momentum 0.9 for 250 epochs. The initial learning rate is 0.1, decayed by a factor of 0.1 at epoch 150 and again at epoch 200. We also show the linear probing accuracies on CIFAR-10 for different numbers of embedding dimensions in Figure 7.
Figure 2: Left: The spectrum structure of NFKs from a CNN (green) and an MLP (red), trained on the MNIST binary classification task. The NFK of the CNN concentrates on fewer eigen-modes than that of the MLP. Right: The low-rankness of the NFK of a DCGAN trained on MNIST. For a trained model, the first 100 principal components of the Fisher vector matrix explain 99.5% of all variance.
Figure 3: Top row: running-time evaluation of the truncated SVD algorithm on a single GPU; the x-axis varies the number of data examples and the y-axis shows wall-clock running time (in seconds). Red crosses mark the cases where the baseline method can no longer produce results within affordable time and memory. Bottom left: running times with different numbers of GPUs in our distributed SVD implementation. Bottom right: approximation errors (blue) of our implementation for each eigen-mode (in descending order of eigenvalues) vs. the explained variance (red). Best viewed in color.
Figure 5: The low-rankness of the NFK of a DCGAN trained on MNIST. For a trained model, the first 100 principal components of the Fisher vector matrix explain 99.5% of all variance. An untrained model with the same architecture, on the other hand, exhibits a much lower degree of low-rankness.
Figure 6: Images with the largest projections on the first five principal components. Each row corresponds to one principal component.
Table 1: CIFAR-10 accuracies of linear evaluation on top of representations learned with unsupervised and self-supervised methods. NFK-128d denotes the 128-dimensional embeddings from the low-rank approximation of the NFK (i.e., AFV). Remarkably, we can use 128 dimensions to exactly recover the performance of the 5.9M-dimensional Fisher vectors.

Model | Acc | Category | #Features
Examplar CNN (Dosovitskiy et al., 2015) | 84.3 | Unsupervised | -
BiGAN (Mu et al., 2020) | 70.5 | Unsupervised | -
RotNet Linear (Gidaris et al., 2018) | 81.8 | Self-Supervised | ~25K
AET Linear (Zhang et al., 2019) | 83.3 | Self-Supervised | ~25K
VAE (Mu et al., 2020) | 61.5 | Unsupervised | -
VAE-NFK-128d (ours) | 63.2 | Unsupervised | 128
VAE-NFK-256d (ours) | 68.7 | Unsupervised | 256
GAN-Supervised | 92.7 | Supervised | -
GAN-Activations | 65.3 | Unsupervised | -
GAN-AFV (Zhai et al., 2019) | 89.1 | Unsupervised | 5.9M
GAN-AFV (re-implementation) (Zhai et al., 2019) | 89.8 | Unsupervised | 5.9M
GAN-NFK-128d (ours) | 89.8 | Unsupervised | 128
GAN-NFK-256d (ours) | 89.8 | Unsupervised | 256

4.2 LOW-RANK NFK EMBEDDING AS DATA REPRESENTATIONS

In this section we evaluate NFK to answer the following questions. Q1: In line with the question raised in Sec. 1, how does our proposed low-rank NFK embedding differ from intermediate layer activations as a data representation? Q2: How does the low-rank NFK embedding compare to simply using gradients (Jacobians) as the data representation? Q3: To what extent can the low-rank NFK embedding preserve the information in the full Fisher vector? Q4: Does the NFK embedding lead to better generalization in terms of sample efficiency and faster adaptation? We conduct comparative studies on different tasks to understand the NFK embedding and present empirical observations answering these questions in the following sections.
Table 3: Supervised knowledge distillation results (classification accuracy on the test set) on CIFAR-10 against baseline methods KD (Hinton et al., 2015), FitNet (Romero et al., 2015), AT (Zagoruyko and Komodakis, 2017), NST (Huang and Wang, 2017), and VID-I (Ahn et al., 2019); baseline numbers are from (Ahn et al., 2019).

Method | Teacher | Student | KD | FitNet | AT | NST | VID-I | NFKD (ours)
ACC | 94.26 | 90.72 | 91.27 | 90.64 | 91.60 | 91.16 | 91.85 | 92.42
Tamir Hazan and Tommi S. Jaakkola. Steps toward deep kernel methods from infinite neural networks. CoRR, abs/1508.05133, 2015. URL http://arxiv.org/abs/1508.05133.

Nicolas Le Roux and Yoshua Bengio. Continuous neural networks. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007), volume 2 of JMLR Proceedings, pages 404-411. JMLR.org, 2007. URL http://proceedings.mlr.press/v2/leroux07a.html.
Tommi S. Jaakkola and David Haussler. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems 11 (NIPS 1998), pages 487-493. The MIT Press, 1998. URL http://papers.nips.cc/paper/1520-exploiting-generative-models-in-discriminative-classifiers.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), pages 770-778. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.90. URL https://doi.org/10.1109/CVPR.2016.90.

Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015. URL http://arxiv.org/abs/1503.02531.

Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. arXiv preprint arXiv:1406.2661, 2014. URL https://arxiv.org/abs/1406.2661.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations (ICLR 2014). URL http://arxiv.org/abs/1312.6114.

Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020), pages 1691-1703. PMLR, 2020. URL http://proceedings.mlr.press/v119/chen20s.html.

Fangzhou Mu, Yingyu Liang, and Yin Li. Gradients as features for deep representation learning. In 8th International Conference on Learning Representations (ICLR 2020). URL https://openreview.net/forum?id=BkeoaeHKDS.

Arthur Jacot, Clément Hongler, and Franck Gabriel. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), pages 8580-8589. URL https://proceedings.neurips.cc/paper/2018/hash/5a4be1fa34e62bb8a6ec6b91d2462f5a-Abstract.html.
Kernel methods in machine learning. The annals of statistics. Thomas Hofmann, Bernhard Schölkopf, Alexander J Smola, Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. Kernel methods in machine learning. The annals of statistics, pages 1171-1220, 2008.
Priors for infinite networks. M Radford, Neal, Bayesian Learning for Neural Networks. SpringerRadford M Neal. Priors for infinite networks. In Bayesian Learning for Neural Networks, pages 29-53. Springer, 1996.
Computing with infinite networks. K I Christopher, Williams, Advances in Neural Information Processing Systems 9. Michael Mozer, Michael I. Jordan, and Thomas PetscheDenver, CO, USAMIT PressChristopher K. I. Williams. Computing with infinite networks. In Michael Mozer, Michael I. Jordan, and Thomas Petsche, editors, Advances in Neural Information Processing Systems 9, NIPS, Denver, CO, USA, December 2-5, 1996, pages 295-301. MIT Press, 1996. URL http: //papers.nips.cc/paper/1197-computing-with-infinite-networks.
Deep neural networks as gaussian processes. Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein, 6th International Conference on Learning Representations. Vancouver, BC, CanadaConference Track Proceedings. OpenReview.netJaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as gaussian processes. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id= B1EA-M-0Z.
Gaussian process behaviour in wide deep neural networks. Alexander G De, G Matthews, Jiri Hron, Mark Rowland, Richard E Turner, Zoubin Ghahramani, 6th International Conference on Learning Representations. Vancouver, BC, CanadaConference Track Proceedings. OpenReview.netAlexander G. de G. Matthews, Jiri Hron, Mark Rowland, Richard E. Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id= H1-nGgWC-.
Deep neural tangent kernel and laplace kernel have the same RKHS. Lin Chen, Sheng Xu, 9th International Conference on Learning Representations, ICLR 2021, Virtual Event. AustriaLin Chen and Sheng Xu. Deep neural tangent kernel and laplace kernel have the same RKHS. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id= vK9WrZ0QYQ.
On the similarity between the laplace and neural tangent kernels. Amnon Geifman, Abhay Kumar Yadav, Yoni Kasten, Meirav Galun, David W Jacobs, Ronen Basri, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020. Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin2020Amnon Geifman, Abhay Kumar Yadav, Yoni Kasten, Meirav Galun, David W. Jacobs, and Ronen Basri. On the similarity between the laplace and neural tangent kernels. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan- Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Con- ference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ 1006ff12c465532f8c574aeaa4461b16-Abstract.html.
To understand deep learning we need to understand kernel learning. Mikhail Belkin, Siyuan Ma, Soumik Mandal, PMLRProceedings of the 35th International Conference on Machine Learning, ICML 2018. Jennifer G. Dy and Andreas Krausethe 35th International Conference on Machine Learning, ICML 2018Stockholmsmässan, Stockholm, Sweden80Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th Interna- tional Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 540-548. PMLR, 2018. URL http://proceedings.mlr.press/v80/belkin18a.html.
When do neural networks outperform kernel methods. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020. Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin2020Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/ 2020/hash/a9df2255ad642b923d95503b9a7958d8-Abstract.html.
Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. Amit Daniely, Roy Frostig, Yoram Singer, Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems. Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editorsBarcelona, SpainAmit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, edi- tors, Advances in Neural Information Processing Systems 29: Annual Conference on Neu- ral Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2253-2261, 2016. URL https://proceedings.neurips.cc/paper/2016/hash/ abea47ba24142ed16b7d8fbf2c740e0d-Abstract.html.
Florent Perronnin and Christopher R. Dance. Fisher kernels on visual vocabularies for image categorization. In CVPR 2007. IEEE Computer Society, 2007.
Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. In ICLR 2015, Workshop Track.
Aäron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML 2016, pages 1747-1756.
Ruslan Salakhutdinov. Deep learning. In KDD 2014, page 1973. ACM, 2014.
Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless C. Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task embedding for meta-learning. In ICCV 2019, pages 6429-6438. IEEE, 2019.
Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard H. Hovy, and Aaron C. Courville. Calibrating energy-based generative adversarial networks. In ICLR 2017.
Shuangfei Zhai, Walter Talbott, Carlos Guestrin, and Joshua M. Susskind. Adversarial fisher vectors for unsupervised representation learning. In NeurIPS 2019, pages 11156-11166.
Tong Che, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, and Yoshua Bengio. Your GAN is secretly an energy-based model and you should use discriminator driven latent sampling. In NeurIPS 2020.
Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. In ICLR 2016.
Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your classifier is secretly an energy based model and you should treat it like one. In ICLR 2020.
Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, 2000.
Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In ICML 2011, pages 833-840. Omnipress, 2011a.
Salah Rifai, Yann N. Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. The manifold tangent classifier. In NIPS 2011, pages 2294-2302, 2011b.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR 2016.
J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London, 209, 1909.
Christopher K. I. Williams and Matthias W. Seeger. Using the nyström method to speed up kernel machines. In NIPS 2000, pages 682-688. MIT Press, 2000.
Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In NIPS 2007, pages 1177-1184. Curran Associates, Inc., 2007.
Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288, 2011.
Gene H. Golub and Henk A. Van der Vorst. Eigenvalue computation in the 20th century. Journal of Computational and Applied Mathematics, 123(1-2):35-65, 2000.
Klaus-Jürgen Bathe. Solution methods for large generalized eigenvalue problems in structural engineering. National Technical Information Service, US Department of Commerce, 1971.
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. http://github.com/google/jax.
Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. Neural tangents: Fast and easy infinite neural networks in python. In ICLR 2020.
Per-Gunnar Martinsson and Joel A. Tropp. Randomized numerical linear algebra: Foundations and algorithms. Acta Numerica, 29:403-572, 2020.
Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9):1734-1747, 2015.
Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In ICLR 2018.
Liheng Zhang, Guo-Jun Qi, Liqiang Wang, and Jiebo Luo. AET vs. AED: Unsupervised representation learning by auto-encoding transformations rather than data. In CVPR 2019. Computer Vision Foundation / IEEE, 2019.
Rewon Child. Very deep VAEs generalize autoregressive models and can outperform them on images. CoRR, abs/2011.10650, 2020.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009a.
Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR 2018.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979-1993, 2019.
Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NIPS 2017, pages 1195-1204.
David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. Mixmatch: A holistic approach to semi-supervised learning. In NeurIPS 2019, pages 5050-5060.
Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS 2016, pages 2226-2234.
Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In ICLR 2017.
Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In NIPS 2016, pages 1163-1171.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009b.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. In ICLR 2015.
Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In ICLR 2017.
Zehao Huang and Naiyan Wang. Like what you like: Knowledge distill via neuron selectivity transfer. CoRR, abs/1707.01219, 2017.
Sungsoo Ahn, Shell Xu Hu, Andreas C. Damianou, Neil D. Lawrence, and Zhenwen Dai. Variational information distillation for knowledge transfer. In CVPR 2019, pages 9163-9171. Computer Vision Foundation / IEEE, 2019.
Aristide Baratin, Thomas George, César Laurent, R. Devon Hjelm, Guillaume Lajoie, Pascal Vincent, and Simon Lacoste-Julien. Implicit regularization via neural feature alignment. In AISTATS 2021, pages 2269-2277. PMLR, 2021.
Vardan Papyan. Traces of class/cross-class structure pervade deep learning spectra. Journal of Machine Learning Research, 21(252):1-64, 2020.
Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. 2020.
Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, and Léon Bottou. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(12), 2010.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In ICML 2020, pages 1597-1607. PMLR, 2020b.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR 2020, pages 9726-9735. IEEE, 2020.
R. Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Philip Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In ICLR 2019.
Ben Poole, Sherjil Ozair, Aäron van den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In ICML 2019, pages 5171-5180. PMLR, 2019.
Ruixiang Zhang, Masanori Koyama, and Katsuhiko Ishiguro. Learning structured latent factors from dependent data: A generative model framework from information-theoretic perspective. In ICML 2020, pages 11141-11152. PMLR, 2020.
Longlong Jing and Yingli Tian. Self-supervised visual feature learning with deep neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In NIPS 2014, pages 2654-2662.
Frederick Tung and Greg Mori. Similarity-preserving knowledge distillation. In ICCV 2019, pages 1365-1374. IEEE, 2019.
Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive representation distillation. In ICLR 2020.
Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
Ryo Karakida and Kazuki Osawa. Understanding approximate fisher information for fast convergence of natural gradient descent in wide neural networks. In NeurIPS 2020.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC 2016. BMVA Press, 2016. |
257,079,072 | NEURAL-BASED CLASSIFICATION RULE LEARNING FOR SEQUENTIAL DATA | Discovering interpretable patterns for classification of sequential data is of key importance for a variety of fields, ranging from genomics to fraud detection or more generally interpretable decision-making. In this paper, we propose a novel differentiable fully interpretable method to discover both local and global patterns (i.e. catching a relative or absolute temporal dependency) for rule-based binary classification. It consists of a convolutional binary neural network with an interpretable neural filter and a training strategy based on dynamically-enforced sparsity. We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset. Key to this end-to-end differentiable method is that the expressive patterns used in the rules are learned alongside the rules themselves. | [
30535508,
213729382
] | NEURAL-BASED CLASSIFICATION RULE LEARNING FOR SEQUENTIAL DATA
Marine Collery
IBM France Lab
Inria Saclay
Philippe Bonnard
IBM France Lab
François Fages
Inria Saclay
Remy Kusters
IBM France Lab
IBM Research
Published as a conference paper at ICLR 2023
Discovering interpretable patterns for classification of sequential data is of key importance for a variety of fields, ranging from genomics to fraud detection or more generally interpretable decision-making. In this paper, we propose a novel differentiable fully interpretable method to discover both local and global patterns (i.e. catching a relative or absolute temporal dependency) for rule-based binary classification. It consists of a convolutional binary neural network with an interpretable neural filter and a training strategy based on dynamically-enforced sparsity. We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset. Key to this end-to-end differentiable method is that the expressive patterns used in the rules are learned alongside the rules themselves.
INTRODUCTION
During the last decades, machine learning and in particular neural networks have made tremendous progress on classification tasks in a variety of fields such as healthcare, fraud detection and entertainment. They are able to learn from various data types, ranging from images to timeseries, and achieve impressive classification accuracy. However, they are difficult or impossible for a human to understand. Recently, explaining those black-box models has attracted considerable research interest in the field of Explainable AI (XAI). However, as stated by Rudin (2019), those a posteriori approaches are not the solution for high-stakes decision-making, and more interest should be placed on learning models that are interpretable in the first place.
Rule-based methods are interpretable and human-readable and have been widely adopted in different industrial fields through Business Rule Management Systems (BRMS). In practice, however, those rules are manually written by experts. One of the reasons manually-written rule models cannot easily be replaced with learned rule models is that rule-based learning models are not able to learn rules that are as expressive, i.e. with higher-level concepts and a complex grammar (Kramer, 2020). Moreover, due to the lack of latent representations, rule-based learning methods underperform w.r.t. state-of-the-art neural networks (Beck & Fürnkranz, 2021).
Classical classification rule learning algorithms (Cohen, 1995; Breiman et al., 1984; Dash et al., 2018; Lakkaraju et al., 2016; Su et al., 2016) as well as neural-based approaches to learning rules (Qiao et al., 2021; Kusters et al., 2022) (or logical expressions, with Riegel et al. (2020)) do not provide the grammar required to learn classification rules on sequential data. Numerous approaches for learning classification rules on sequential data have been studied in the field of sequential pattern mining, such as Egho et al. (2015); Zhou et al. (2013); Holat et al. (2014), but with a different goal in mind: improving the performance of extracted patterns for a fixed rule grammar, as opposed to extending the rule grammar. Another line of research focuses on training binary neural networks to obtain more efficient model storage, computation and evaluation (Geiger & Team, 2020; Helwegen et al., 2019). It comes with fundamental optimization challenges around weight updates and gradient computation.
In this paper, we bridge three domains and introduce a binary neural network to learn classification rules on sequential data. We propose a differentiable rule-based classification model for sequential data where the conditions are composed of sequence-dependent patterns that are discovered alongside the classification task itself. More precisely, we aim at learning a rule of the following structure: if pattern then class = 1 else class = 0. In particular we consider two types of patterns: local and global patterns as introduced in Aggarwal (2002) that are in practice studied independently with a local and a global model. A local pattern describes a subsequence at a specific position in the sequence while a global pattern is invariant to the location in the sequence (Fig 2). The network, that we refer to as Convolutional Rule Neural Network (CR2N), builds on top of a base rule model that is comparable to rule models for tabular data presented in Qiao et al. (2021); Kusters et al. (2022).
The contributions of this paper are the following: i) We propose a convolutional binary neural network that learns classification rules together with the sequence-dependent patterns in use. ii) We present a training strategy for a binarized neural network that dynamically enforces sparsity. iii) We show on synthetic and real-world datasets the usefulness of our architecture (the importance of the rule grammar) and the validity of our training process (the importance of sparsity).
BASE RULE MODEL
The base rule model we invoke is composed of three consecutive layers (Fig 1). The last two layers mimic logical AND and OR operators, respectively (Qiao et al., 2021; Kusters et al., 2022). On top of these layers, we add an additional layer that is specific to categorical input data and corresponds to an OR operator, for each categorical variable, over every possible value it can take.
Figure 1: Base rule model with categorical input variables x_c1, x_c2 and x_c3 (x_ck ∈ {A_k, B_k, C_k, D_k}). For simplicity, the truth value of x_c1 = B_1 is written simply B_1, for example. Plain (dotted) lines represent activated (masked) weights. An example evaluation of the model is shown with the filled neurons (neuron = 1) for the binary input x_c1 = B_1, x_c2 = D_2 and x_c3 = A_3.
The AND layer takes binary features (which are atomic boolean formulae) as input and feeds its outputs to the OR layer. The output of the OR layer is mapped to the classification label y. These layers have binary weights specifying the nodes that are included in the respective boolean expression (conjunction or disjunction). In other words, this network implements the evaluation of a DNF and is directly equivalent to a binary classification rule such as if (A ∧ B) ∨ C then class = 1 else class = 0, where A, B and C are binary input features (atoms in logical terms). In this paper, we focus on supervised binary classification, where we predict the label y ∈ {0, 1} given input data x.
The base rule model is illustrated in Fig 1 and is composed of three binary neural layers.
• Input neurons x are binarized input features of size K (x_c are one-hot encoded categorical input features of size n).
• Hidden neurons h are conjunctions of the input features, of size H.
• Output neuron y is a disjunction of the (hidden) conjunctions.
We assign to each boolean operation, i.e. the AND and OR operations, a binary weight matrix (W_and and W_or, respectively) that plays the role of a mask selecting the nodes involved in the respective logical operation. For the sake of simplicity, we did not extend the model with a logical NOT operation.
The disjunction operation is implemented as

y = min(W_or h, 1).    (1)

If none of the neurons h are activated then y = 0, and y = 1 if at least one is.
For the conjunction operation, we use De Morgan's law, which expresses the conjunction with the OR operator: A ∧ B = ¬(¬A ∨ ¬B). Combined with Eq 1, we obtain:

h = ¬(min(W_and(¬x), 1)) = 1 − min(W_and(1 − x), 1).    (2)
StackedOR Input Layer. As defined previously, the AND layer takes binary input features as input. In this paper, we propose to add an additional layer for categorical data. A categorical variable x_c can take one value α_i^c out of a fixed number of possible values n, e.g. {α_0, . . . , α_3} = {A, B, C, D}. Without any additional layer, a one-hot encoding is required as input to the AND layer. Binary inputs x_c = A and x_c = B are then given as input to the AND layer, which can in theory represent the impossible expression x_c = A ∧ x_c = B, i.e. the model has to learn the hidden categorical relationship between the one-hot encoded variables. To prevent learning a distribution we already know, we deepen the model with a stack of OR layers placed before the AND layer, as shown in Fig 1. This structure is defined by K weight matrices, W_stack^k, one for each input categorical variable, and will be referred to as the StackedOR layer with weights W_stack.
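Stacking an OR per categorical variable before the AND layer then looks as follows (a sketch under the same conventions; shapes and names are illustrative, with each W_stack^k of shape (1, n_k)):

```python
import torch

def stacked_or(x_onehots, W_stack):
    # one OR layer per categorical variable, each over its n_k one-hot values
    return torch.cat([torch.clamp(Wk @ xk, max=1.0)
                      for Wk, xk in zip(W_stack, x_onehots)])

def base_rule_model(x_onehots, W_stack, W_and, W_or):
    z = stacked_or(x_onehots, W_stack)                  # K categorical literals
    h = 1.0 - torch.clamp(W_and @ (1.0 - z), max=1.0)   # AND layer (Eq 2)
    return torch.clamp(W_or @ h, max=1.0)               # OR layer  (Eq 1)
```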
To conclude, the base rule model is composed of a StackedOR layer for categorical variables, a logical AND layer and an OR layer. The formal grammar that this architecture can express is specified with the following production rules (see Appendix A for the full grammar):
rule → if base expression then class = 1 else class = 0
base expression → conjunction | conjunction ∨ base expression
conjunction → predicate | predicate ∧ conjunction
predicate → categorical expression | literal
categorical expression → categorical literal | categorical literal ∨ categorical expression
categorical literal → x_c = α_0^c | . . . | x_c = α_nc^c (or simply α_0^c | . . . | α_nc^c)
literal → x_1 | x_2 | . . .    (3)
This grammar is also limited by the model architecture: conjunction contains at most one occurrence of each predicate and the total number of conjunction(s) is bounded by the number of hidden nodes.
CONVOLUTIONAL RULE NEURAL NETWORK
Our main contribution is to extend the base rule model to sequential data. We apply the base rule model as a 1D-convolutional window of fixed length l ∈ ℕ over a sequence and feed all of its outputs into an additional disjunctive layer, which we refer to as the ConvOR layer, as shown in Fig 2. The base rule model learns a DNF over the window length, and the ConvOR layer indicates where along the sequence that logical expression is true. If the evaluation of the logical expression is true all along the sequence, it can be described as a global pattern; otherwise the learned pattern represents a local pattern.
The model input is now of size l × Σ_k n_k and the output of the StackedOR layer (or input of the AND layer) is of size l × K. Other dimensions are not impacted. For simplicity in the following, K is fixed to 1, i.e. the input data is composed of one categorical variable evolving sequentially; the method remains valid for K > 1. With this approach, different sequence-dependent expressions can be extracted, and their nature depends on the weights of the ConvOR layer (Fig 2). If all the weights W_conv of the ConvOR layer are activated (i.e. equal to 1), the logical expression learned by the base model holds over the whole sequence: a global pattern is learned. If only some of the weights of the ConvOR layer are activated, the logical expression learned by the base model is valid only in the windows associated to those weights: a local pattern is learned. The base model logical expression is modified accordingly to match that shift (see example in Fig 2 with a shift of 3 sequential steps).

Figure 2: Example of a trained CR2N architecture on the sequence A B D A C B D. The base rule model, acting as the filter (B at t-2 and D at t-1) or (D at t-1 and C at t-0), is applied as a 1D-convolutional (sliding) window, and is evaluated on every window of the sequence. The resulting boolean values are given as input to the ConvOR layer, which indicates through its activated weights where along the sequence the expression learned by the base model is true. The output of the ConvOR layer is mapped to the label of the sequence y. For local patterns, the base model expression is shifted according to the activated ConvOR weights, e.g. (B at t-5 and D at t-4) or (D at t-4 and C at t-3); for global patterns it reads B-D or D-C in sequence. For a real-domain application like fraud detection, by giving meaning to B, C and D, we could have for example: if "receiving a transaction of amount X" (B) is followed by "emitting a transaction of amount X" (D), or "emitting a transaction of amount X" (D) is followed by "closing the bank account" (C), then class = fraud.
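The translation from trained ConvOR weights to a rule is mechanical; a simplified sketch of that read-out (the indexing convention and names here are illustrative only) could be:

```python
def describe(w_conv, window_dnf):
    """w_conv: list of 0/1 ConvOR weights, one per window position;
    window_dnf: string form of the DNF learned by the base rule model."""
    if all(w == 1 for w in w_conv):
        return f"global pattern: {window_dnf} anywhere in the sequence"
    active = [i for i, w in enumerate(w_conv) if w == 1]
    return [f"local pattern: {window_dnf} shifted to window #{i}" for i in active]
```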
The obtained weights thus translate to a rule grammar with the following production rules:

rule → if expression then class = 1 else class = 0
expression → local pattern | global pattern    (4)

We introduce t, the position at which the last observation in a sequence was made. With t as our reference, in a sequence of size N ∈ ℕ, t−i refers to the moment of the i-th observation before t (0 ≤ i ≤ N−1). A, B, C and D are toy binary input possible values for our categorical variable x_c (they cannot be activated simultaneously at the same position t in the sequence). With these definitions, we list below examples of the different sequence-dependent expressions that can be expressed with the proposed architecture (see Fig 2):
A local pattern is an expression composed of predicates that are true at a specific position i, for example A at t-15. Based on Eq 3 we have:
local pattern → base expression
predicate → categorical expression at t-i | literal at t-i    (5)
A global pattern is an expression describing the presence of a pattern anywhere in the sequence. For example, B-D in sequence is a global pattern, where the "−" sign refers to "followed by" and "*" corresponds to any unique literal (equivalent to ∃i ∈ [0; N − 1] such that B at t-i-1 and D at t-i). If inputs are sequences of characters, global patterns can be compared to simple regular expressions supporting the logical OR (metacharacter '[ ]'). Based on Eq 3 we have:
global pattern → base expression
conjunction → predicate | * | predicate − conjunction    (6)
Additional special cases can be pointed out, such as the learning of a global pattern over an interval (e.g. B-*-D in window [t-6; t-3]), or the learning of expressions dependent on sequence characteristics, such as 4 ≤ len(sequence) ≤ 6 based on the sequence length (not shown in Fig 2; it corresponds to a specific case where the base model has learned an always-true rule). Also, it is important to note that base expression and conjunction in both grammars are bounded by the fixed window size l.
To ensure full equivalence between the model and the rule, sequence boundaries need to be considered, especially for global patterns. All sequences are padded on both ends with a sequence of 0s of size l − 1 (not shown for simplicity in Fig 2). Sequences of different lengths are also supported, by creating a model based on the maximal available sequence length M in the data and padding shorter sequences with a sequence of 0s of appropriate length. The ConvOR layer input size is then M + l − 1.
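A minimal sketch of this padding scheme (the helper name pad_sequence is ours; it assumes the sequence length N is at most M):

```python
import numpy as np

def pad_sequence(seq_onehot, M, l):
    """Zero-pad both ends with l-1 rows and right-pad up to the maximal length M,
    so that the number of windows (the ConvOR input size) is M + l - 1."""
    N, n = seq_onehot.shape  # N <= M assumed
    left = np.zeros((l - 1, n))
    right = np.zeros((M - N + l - 1, n))
    return np.vstack([left, seq_onehot, right])  # shape: (M + 2*(l-1), n)
```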
With this one architecture we can model both local and global patterns. However for optimization reasons detailed below, we choose to differentiate the two into two distinct models: a local and a global model. The ConvOR layer weights for the global model are set and fixed to 1 during training.
TRAINING STRATEGY
To overcome training challenges attributed to binarized neural networks (Geiger & Team, 2020), we use latent weights and enforce sparsity dynamically. We define a loss function that penalizes complex rules, and the model is trained via automatic differentiation (PyTorch) with the Adam optimizer.
Latent weights The binary model parameters introduced above (W_and, W_or, W_stack, W_conv) are trained indirectly via the training of a continuous parameter loc, which is activated (binarized) by a sigmoid function (Kusters et al., 2022). With such binary weights and continuous relaxation, Eq 1 and 2 are differentiable with nonzero derivatives (Kusters et al., 2022). As opposed to using a straight-through estimator (Qiao et al., 2021), non-zero gradients are ensured during the backward pass. To overcome training limitations, we use a hard concrete distribution (Qiao et al., 2021; Louizos et al., 2018): it rescales the weights, and the random variable introduced during training helps avoid poor local minima (Appendix B). Weight values lie in [0, 1] during training, while for testing and rule extraction a Heaviside function (threshold ≥ 0.5) is applied to them to ensure strict binarization.
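For concreteness, here is a minimal PyTorch sketch of this reparameterization, following the hard concrete equations and parameter values given in Appendix B (the function name and the training flag are ours):

```python
import torch

def hard_concrete_weights(loc, beta=2/3, zeta=1.1, gamma=-0.1, training=True):
    """Relaxed binary weights from continuous parameters `loc` (Appendix B, Eq 12-13)."""
    if training:
        u = torch.rand_like(loc)  # u ~ U(0, 1)
        s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + loc) / beta)
    else:
        s = torch.sigmoid(loc)
    s_hat = s * (zeta - gamma) + gamma   # stretch to the interval (gamma, zeta)
    w = s_hat.clamp(0.0, 1.0)            # hard-rectify into [0, 1]
    if not training:
        w = (w >= 0.5).float()           # strict binarization for rule extraction
    return w
```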
Loss function We define the loss function L, composed of a mean-squared error component along with a regularization term that penalizes the complexity of the rule,
L = L_mse + λΠ    (7)
This regularization term Π, or penalty, evaluates the number of terminal conditions in the rule. In practice we use λ = 10⁻⁵. For a layer n of input size I and output size O, the number of terminal conditions per output corresponds to the weighted sum of the numbers of terminal conditions of the outputs of the previous layer, i.e. Π_{layer n} = W_{layer n}ᵀ Π_{layer n−1}, a vector of size O. For the first layer, the StackedOR layer, Π_stack is defined as the sum over the input dimension of the weights, and we can then express the number of terminal conditions of the base rule model, Π_base.
Π_{layer 0} = Π_stack = (Σ W¹_stack, …, Σ Wᴷ_stack),    Π_base = W_orᵀ W_andᵀ Π_stack    (8)
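A minimal sketch of this recursive count, under assumed shapes (each w_stack_list[k] is the StackedOR weight block of variable k, w_and is (K, H) and w_or is (H, 1); the function name is ours):

```python
import torch

def penalty_base(w_stack_list, w_and, w_or):
    """Number of terminal conditions in the rule (Eq 8), for binary weights."""
    pi_stack = torch.stack([w.sum() for w in w_stack_list])  # (K,): conditions per variable
    pi_and = w_and.t() @ pi_stack                            # (H,): conditions per conjunction
    return w_or.t() @ pi_and                                 # scalar: conditions in the rule
```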
For optimizing Π for local patterns, we have to minimize the number of activated ConvOR layer weights; for global patterns, we want them all to be activated. A condition could be set on the sum of the ConvOR layer weights (Eq 9) to shift from one optimization problem to the other, but at the cost of continuity and thus differentiability. Interesting values of τ are M + l − 1 (the ConvOR layer input size, corresponding to all ConvOR layer weights being equal to 1) and M − l + 1 (which allows 2(l − 1) weights to be 0, corresponding to the padding required to properly account for sequence boundaries, Section 3).
Π_local = Π_base · Σ W_conv,    Π_global = Π_base,    Π* = Π_global if Σ W_conv ≥ τ, Π_local otherwise    (9)
Table 1: Ground truths applied on sequences of letters (A to F) to generate the synthetic unbalanced datasets 1, 2, 3 and 4, along with the distributions of the positive class. In the patterns, t refers to the position at which the last observation in a sequence was made. Balanced datasets with the same ground truths are generated and are referred to by the dataset number followed by the letter b (Appendix D).
Ground truth                                          #   Distribution (%)
C at t-4                                              1   14.2
A at t-6 and C at t-4                                 2   1.5
(A at t-6 and C at t-4) or (B at t-5 and C at t-3)    3   3.6
B-D in sequence                                       4   20.4
Due to the non-continuity of Π* in Eq 9, we choose to have two models with the same architecture for the two cases: the local and the global model, respectively more relevant for their associated pattern. For the local model, all weights are trainable and Π = Π_local. For the global model, the weights of the ConvOR layer are fixed and set to 1, and Π = Π_global.
Enforced sparsity Sparsity of the model is crucial to learn concise expressions: the model needs to generalize without observing all possible instances at training time. The first requirement for that matter is sparsity in the base rule model. In addition to the regularization term in the loss function, we propose to use a sparsify-during-training method (Hoefler et al., 2021) and dynamically enforce sparsity in the weights, from 0% up to an end rate r_f set to 99% in our case (Lin et al., 2020). Sparsify-during-training methods can also benefit the quality of training in terms of convergence, by correcting for approximation errors due to premature pruning in early iterations, but they are highly dependent on the sparsification schedule (Hoefler et al., 2021).
Every 16 iterations s, and for a total of s_f training iterations, every trainable weight is pruned with a binary mask m (of the size of its associated weight, applied with the Hadamard product ⊙) (Lin et al., 2020; Zhu & Gupta, 2017). We propose a mask based on the maximum of the weight magnitudes loc and a pruning rate r (Zhu & Gupta, 2017), making the assumption that this contributes to generalization (Eq 10). This strategy can be more aggressive than state-of-the-art contributions (Lin et al., 2020) due to its dependency on the maximum loc value. During training, the model with the highest prediction accuracy on the validation dataset and the highest sparsity (evaluated at each epoch) is kept.
r = r_f − r_f (1 − s/s_f)³,    m_{i,j} = 1[ |loc_{i,j}| ≥ r · max_{i,j}(loc) ],    Ŵ = W ⊙ m    (10)
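A minimal numpy sketch of this schedule and mask (the function name is ours):

```python
import numpy as np

def pruning_mask(loc, s, s_f, r_f=0.99):
    """Cubic pruning-rate schedule and magnitude mask of Eq 10 (after Zhu & Gupta, 2017)."""
    r = r_f - r_f * (1.0 - s / s_f) ** 3           # pruning rate grows from 0 to r_f
    m = (np.abs(loc) >= r * np.max(loc)).astype(loc.dtype)
    return m                                        # applied as W_hat = W * m (Hadamard)
```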
Additional training optimizations have been explored, such as using a binarized optimizer (Helwegen et al., 2019; Geiger & Team, 2020), adding a scheduled cooling on the sigmoid of the binarized weights, alternating the training of each layer every few epochs (Qiao et al., 2021), or using a learning rate scheduler. Those techniques are not presented here but would be of interest for improving results on specific datasets.
EXPERIMENTS
In order to evaluate the validity and usefulness of this method, we apply it to both synthetic datasets and the UCI membranolytic anticancer peptides dataset (Grisoni et al., 2019; Dua & Graff, 2017).
Synthetic Datasets
We propose 8 synthetic datasets based on 4 ground-truth expressions, in both balanced and unbalanced distributions, for discovering simple binary classification rules with local or global patterns, as shown in Table 1. Each dataset contains 1000 sequences of letters (A to F) of varying length, from 4 to 14 letters (mean around 9 ± 3). Generation is detailed in Appendix D.
Peptides Dataset Besides the synthetic datasets, the real-world UCI anticancer peptides dataset, composed of labeled one-letter amino acid sequences, is used (Grisoni et al., 2019; Dua & Graff, 2017). The multi-classification problem is transformed into a binary classification problem in the same manner as Nwegbu et al. (2022) (see Appendix C). Sequence lengths range from 5 to 38 letters (mean: 17 ± 5.5) and the positive class distribution is 79%.
Experimental Setting All datasets are partitioned in a stratified fashion, with 60% for training, 20% for validation and 20% for testing, and we use a batch size of 100 sequences. The hidden size in the base rule model is set to double the input size of the AND layer (which is the window size of the convolution). More details on the experimental setting can be found in Appendix E. At each epoch (200 in total), we evaluate the model on the validation dataset and keep the model with the highest accuracy, and in case of equality the model with the lowest penalty. For each experiment, we run the algorithm 10 times with different weight initializations. Resulting metrics are averaged over these runs.
We run the experiments with two different window sizes (3 and 6) for the CR2N convolution filter size. We compare the two versions of the architecture, the local and global models described in Section 4, and study three different dynamic pruning strategies: none, dynamically enforced sparsity from epoch 0, and from epoch 30 (arbitrary).
RESULTS
Rule grammar and expressivity The importance of the rule model expressivity can be seen concretely by comparing the different patterns the local and global models have learned, for example on dataset 3b: 1. (A or B)-*-(C)-*-*-(A or B or C or D or E or F) in sequence (global, no pruning, window size = 6), and 2. (B at t-5 and C at t-3) or (A at t-6 and C at t-4) (local, pruning, window size = 6). In the first case, the grammar is not appropriate to model the data (as a reminder, the global model is a constrained version of the local model), as opposed to the local model, which learned the perfect rule. In practice, on real data, obtained patterns, such as "if (D or E or G or H or I or N or Q or T or Y)-*-*-*-(D or E or G or I or N or Q or S or T or V or Y) in sequence" obtained for labelling 'inactive-virtual' peptides, can be explored further by a domain expert. Black box approaches do not provide such insights.
This is also highlighted by comparing experiments with the local or global model, and experiments with different window sizes. First of all, the accuracy of the local model is higher than that of the global model on the balanced synthetic datasets 1, 2 and 3 (Fig 3(a)). For balanced and unbalanced dataset 4, both models achieve very high accuracies (> 95%). However, as shown in Fig 3(b), this comes at the cost of rule complexity for the local approach, with averaged penalty values higher than 60 (and standard deviation higher than 50), compared to lower than 10 for the global model (and standard deviation lower than 5). This points out that the local model in that case requires on average more than 6 times as many terminal conditions in the learned rule as the global model, for comparable accuracies, but also that the initial weight states have a huge impact on rule complexity when the rule grammar is not expressive enough (with no pruning). Those results are confirmed on the real-world peptides dataset: accuracies of the local and global models, especially for a window size of 6, are comparable, yet there is an order-of-magnitude difference in penalty, the global approach being more concise. It is important to note that, by architecture, the global approach has fewer weights to train and thus a much lower maximum penalty.
Datasets 2b and 3b benefit from a bigger window size (highly expected for datasets 3/3b due to the ground-truth pattern size), as shown in Fig 3(c). Accuracies are also higher with window size 6 than 3 for the peptides dataset, at the cost of higher penalties (Table 2).
The more expressive the model, i.e. the more patterns it can represent, the better the performance, training limitations aside. Of course, any black-box neural network without such a 1-1 rule-mapping-constrained architecture would reach 100% accuracy, but it is that mapping in particular that makes the model relevant, expressive and fully interpretable. Also, the best accuracies for the peptides dataset (∼ 91%) are comparable to the best results (∼ 92%) obtained from classification with single kernels applied to that same dataset in Nwegbu et al. (2022), with our model providing an additional fully-interpretable property. The presented model is also flexible due to its logical equivalence and can be fed into other logical layers to form deeper architectures that extend the rule grammar (Beck & Fürnkranz, 2021). It can also be extended to time series, temporal aggregates or multi-classification problems. Other rule grammar extensions can be inspired by the Linear Temporal Logic domain and regular expression pattern mining (De Giacomo et al., 2022). However, the more expressive the model, the more attention is required for training and rule complexity.
Sparsity and training strategy The importance of the model sparsity is pointed out by the experiments with different pruning strategies. First, looking at training scenarios, on both the synthetic and peptides datasets, experiments with sparsify-during-training approaches reach the best model faster on average than without (lower best epoch, Fig 3(d)). Then, regarding accuracy, we can differentiate two cases: balanced and unbalanced datasets. Training on unbalanced datasets is more affected by the aggressive dynamic pruning strategy than on balanced datasets, with a drop of around 0.2 in average accuracy for dataset 1, compared to an equivalent accuracy for its balanced version 1b, for example (Fig 3(e)). The pruning strategy starting after 30 epochs is preferred in both cases. Average accuracies with a pruning strategy not starting immediately (30 epochs) are comparable to the ones obtained without pruning for balanced datasets. In terms of rule complexity, penalty values are lower with pruning, and even lower when starting after 30 epochs in most cases (Fig 3(f)).
With our pruning strategy (Eq 10), we make the assumption that lower positive loc values are associated with overfitting or redundancy, taking into account that values closer to 0, i.e. on the sigmoid slope, are more likely to shift and are thus less 'certain'. As pointed out in early work by Prechelt (1997), a dynamic pruning strategy helps to overcome the possibly lower generalization ability of a fixed pruning schedule, which could explain cases of better performance (e.g. the peptides dataset, local model, window size 3). Prechelt proposed a different pruning strategy, based on a generalization loss, to characterize the amount of overfitting. While that strategy is relevant in more general cases and can be applied to many different networks, our strategy is tailored to minimizing positive trainable parameter values.
Sparsity of the model is also induced via the regularization term Π in the loss function L (Eq 7). While this method is parameterized by the relative importance of sparsity in the training optimization and provides an uncontrolled target sparsity, a dynamic pruning strategy is easier to control for both target sparsity and accuracy, but is highly dependent on the pruning schedule (Hoefler et al., 2021).
An interesting point is made by Hoefler et al. (2021) about the convolutional operator, which 'can be seen as a sparse version of fully-connected layers'. That level of forced sparsity in our model is therefore defined by the fixed window size parameter relative to the maximum sequence length. The ideal, sparser window size would be the size of the largest temporal hidden pattern in the distribution, which can only be approximated with external or expert knowledge and/or tuned by trial and error.
With or without a dynamic pruning strategy, for the highly unbalanced datasets (2 and 3), experiments have shown that the training strategy of the model is not suitable. Indeed, most runs label everything with the majority class (50% balanced accuracy). This corresponds to the specific case of learning an empty rule (penalty = 0) (Fig 3(a,c,e)). For unbalanced datasets 1 and 4, the best models do not reach on average the same accuracies as on their balanced versions.
Overall, this training strategy is both the key and the main limitation of our approach: it can provide a sharp, concise rule with minimal redundancy and a simplified logical expression, but it is highly dependent on numerous model, training and pruning parameters and is not suited as is for highly unbalanced datasets.
CONCLUSION
To conclude, we presented a 1D-convolutional neural architecture that discovers local and global patterns in sequential data while learning binary classification rules. This architecture is fully differentiable and interpretable, and requires sparsity that is enforced dynamically. One main limitation is its dependence on the window size and sparsity scheduler parameters. Further work will consist in integrating this block into more complex architectures to augment the expressivity of the learned rules, as well as extending it to multi-classification.
AUTHOR CONTRIBUTIONS
M.C. and R.K. designed the model. R.K. encouraged M.C. to investigate the use of convolutions and supervised the findings of this work. P.B. and F.F. helped supervise the project. M.C. validated the training strategy and carried out the implementation and the experiments. M.C. wrote the manuscript. All authors provided critical feedback and helped shape the manuscript.
A CONTEXT-FREE GRAMMAR
Context-free grammar (Chomsky, 1956) A context-free grammar is a 4-tuple G = (V_T, V_N, S, R) where:
− V_T is a finite set of terminals, or terminal elements in the language, that form the alphabet of the language.
− V_N, disjoint from V_T, is a finite set of non-terminal elements (variables) that define a sublanguage of L. We note V = V_N ∪ V_T, the vocabulary of the grammar.
− S ∈ V_N is the start symbol, or variable, that defines the whole sentence.
− R is a finite set of rules, or production rules, of the form A → w with A ∈ V_N and w ∈ V*.
In the following, we have 3 different types of terminal elements on the syntax level:
• reserved words, distinguished with the following style: reserved
• signs, such as for example −, *, ...
• other terminal elements that are defined prior to the grammar, distinguished with the following style: terminal
Production rules presented in the paper (Eq 3, Eq 4, Eq 5 and Eq 6) define grammars when associated with values for V_T, V_N and S.
Here is an example for the base rule model grammar, with the production rules in Eq 3:

V_T = {if, then class = 1 else class = 0, ∧, ∨, =} ∪ {x_1, x_2, …} ∪ {x_{c_1} = α¹_0, …, x_{c_1} = α¹_{n_1}, …, x_{c_K} = αᴷ_{n_K}}
V_N = {rule, base expression, conjunction, predicate, categorical expression, categorical literal, literal}
S = {rule}    (11)
D SYNTHETIC DATASETS GENERATION
Balanced datasets are generated randomly with the same ground truths as the unbalanced datasets. They are then upsampled until the minority class represents half of the target dataset size, and an appropriate number of majority-class samples are randomly removed.
E EXPERIMENTAL SETTING
The loc parameters for weight computation are initialized with the Xavier uniform initialization method (Glorot & Bengio, 2010). The loss function is described in Eq 7 and depends on the MSE loss and the regularization coefficient λ = 10⁻⁵. The Adam optimizer is used with a fixed learning rate set to 0.1, and a run consists of 200 epochs. Experiments were run on CPU on a MacBookPro18,2 (2021) with an Apple M1 Max chip, 10 cores and 32 GB of RAM, running macOS Monterey Version 12.4.
¹ Fig 1 is also still valid with a change of index: k now refers to the position in the window of size l.
Figure 3: Representations of key results obtained on the synthetic datasets. Error bars represent the standard deviation over the 10 executions with different weight initializations. Full results are available in Appendix F.
B HARD CONCRETE DISTRIBUTION (LOUIZOS ET AL., 2018)
Parameters are set as follows: β = 2/3, ζ = 1.1 and γ = −0.1.

u ∼ U(0, 1),    s = σ((log(u) − log(1 − u) + loc)/β),    ŝ = s · (ζ − γ) + γ    (12)
W = min(max(ŝ, 0), 1)    (13)

C PEPTIDES DATASET
The UCI anticancer peptides dataset (Grisoni et al., 2019), available on Dua & Graff (2017), is composed of one-letter amino acid sequences (of variable length), and each sequence is labeled with its anticancer activity on breast cancer cell lines. The dataset provides 4 classes with the following distribution: 83 inactive-exp, 750 inactive-virtual, 98 moderately active and 18 very active. Sequence lengths range from 5 to 38 letters (mean: 17 ± 5.5). We transform this multi-classification problem into a binary classification problem (as done in Nwegbu et al. (2022)): class 'inactive-virtual' is the positive class (750) and all the others are combined as the negative class (199). No other processing of the data is necessary and we leave it as is.
Table 3: Performance metrics obtained for the different models, window sizes and pruning strategies on the synthetic datasets. Values are followed by the standard deviation over the 10 executions with different weight initializations. (Bal Acc: balanced accuracy, Epoch: best epoch).
Table 2: Performance metrics obtained for the different models, window sizes and pruning strategies on the peptides dataset, along with the standard deviations over the 10 executions with different weight initializations. (Bal. Acc.: balanced accuracy, Epoch: best epoch).

Model   Window  Pruning  Accuracy     Bal. Acc.     Penalty (Π)     Epoch
global  3       No       88.9 ± 5.0   75.3 ± 12.7   58.7 ± 35.9     105 ± 60
                Yes      85.3 ± 5.3   67.3 ± 14.3   19.4 ± 15.9     23 ± 20
                30       88.7 ± 5.0   75.4 ± 12.9   46.9 ± 28.0     48 ± 22
        6       No       91.2 ± 1.0   81.8 ± 2.0    220.0 ± 56.5    92 ± 82
                Yes      89.5 ± 3.6   77.6 ± 9.3    132.7 ± 93.9    33 ± 15
                30       91.0 ± 1.3   82.0 ± 2.8    97.3 ± 60.6     71 ± 19
local   3       No       90.0 ± 3.7   78.7 ± 9.7    694.3 ± 269.3   77 ± 48
                Yes      90.3 ± 1.5   81.6 ± 1.6    885.2 ± 328.7   25 ± 8
                30       89.2 ± 5.2   76.3 ± 13.2   674.7 ± 483.9   49 ± 13
        6       No       92.1 ± 1.0   83.5 ± 2.5    3.9k ± 1.4k     91 ± 58
                Yes      88.0 ± 4.7   74.6 ± 12.4   1.0k ± 1.0k     34 ± 16
                30       91.7 ± 1.5   83.5 ± 2.6    1.8k ± 0.6k     49 ± 10
F FULL EXPERIMENT RESULTS
[Table 3, continued from previous page. Columns: Dataset, Model, Window, Pruning, Bal. Acc., Penalty, Epoch.]
A natural extension of the base rule architecture to sequential data would be to extend it with an explicit recursion of the base rule model, similar to an RNN. This approach was tested but faced the same limitations as classical RNNs, i.e. vanishing gradients, and it only captures short-term dependencies.
ACKNOWLEDGMENTS
We would like to thank Shubham Gupta for helpful discussions and constructive feedback, as well as Yusik Kim for reviewing the manuscript. This work has been partially funded by the French government as part of project PSPC AIDA 2019-PSPC-09, in the framework of the "Programme d'Investissement d'Avenir".
REFERENCES
Charu C. Aggarwal. On effective classification of strings with wavelets. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '02), pp. 163-172, New York, NY, USA, 2002. ACM. doi: 10.1145/775047.775071.
Florian Beck and Johannes Fürnkranz. Beyond DNF: First steps towards deep rule learning. In Proceedings of the 21st Conference Information Technologies - Applications and Theory (ITAT 2021), volume 2962 of CEUR Workshop Proceedings, pp. 61-68, 2021.
Leo Breiman, Jerome Friedman, Charles J. Stone, and R. A. Olshen. Classification and Regression Trees. Taylor & Francis, 1984.
Noam Chomsky. Three models for the description of language. IRE Transactions on Information Theory, 2(3):113-124, 1956.
William W. Cohen. Fast effective rule induction. In Proceedings of the Twelfth International Conference on Machine Learning, pp. 115-123. Morgan Kaufmann, 1995.
Sanjeeb Dash, Oktay Gunluk, and Dennis Wei. Boolean decision rules via column generation. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
Giuseppe De Giacomo, Marco Favorito, Jianwen Li, Moshe Y. Vardi, Shengping Xiao, and Shufang Zhu. LTLf synthesis as AND-OR graph search: Knowledge compilation at work. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, pp. 2591-2598, 2022. doi: 10.24963/ijcai.2022/359.
Dheeru Dua and Casey Graff. UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences, 2017.
Elias Egho, Dominique Gay, Marc Boulle, Nicolas Voisine, and Fabrice Clerot. A parameter-free approach for mining robust sequential classification rules. In 2015 IEEE International Conference on Data Mining, pp. 745-750, 2015. doi: 10.1109/ICDM.2015.87.
Lukas Geiger and Plumerai Team. Larq: An open-source library for training binarized neural networks. Journal of Open Source Software, 5(45):1746, 2020. doi: 10.21105/joss.01746.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249-256, 2010.
Francesca Grisoni, Claudia S. Neuhaus, Miyabi Hishinuma, Gisela Gabernet, Jan A. Hiss, Masaaki Kotera, and Gisbert Schneider. De novo design of anticancer peptides by ensemble artificial neural networks. Journal of Molecular Modeling, 25(5):112, 2019. doi: 10.1007/s00894-019-4007-6.
Koen Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, Kwang-Ting Cheng, and Roeland Nusselder. Latent weights do not exist: Rethinking binarized neural network optimization. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. Journal of Machine Learning Research, 22(241):1-124, 2021.
Pierre Holat, Marc Plantevit, Chedy Raissi, Nadi Tomeh, Thierry Charnois, and Bruno Cremilleux. Sequence classification based on delta-free sequential patterns. In 2014 IEEE International Conference on Data Mining, pp. 170-179, 2014. doi: 10.1109/ICDM.2014.154.
Stefan Kramer. A brief history of learning symbolic higher-level representations from data (and a curious look forward). In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pp. 4868-4876, 2020. doi: 10.24963/ijcai.2020/678.
Remy Kusters, Yusik Kim, Marine Collery, Christian de Sainte Marie, and Shubham Gupta. Differentiable rule induction with learned relational features. arXiv:2201.06515, 2022.
Himabindu Lakkaraju, Stephen H. Bach, and Jure Leskovec. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1675-1684, 2016. doi: 10.1145/2939672.2939874.
Tao Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, and Martin Jaggi. Dynamic model pruning with feedback. In International Conference on Learning Representations, 2020.
Christos Louizos, Max Welling, and Diederik P. Kingma. Learning sparse neural networks through L0 regularization. In International Conference on Learning Representations, 2018.
Nnanyelugo Nwegbu, Santosh Tirunagari, and David Windridge. A novel kernel based approach to arbitrary length symbolic data with application to type 2 diabetes risk. Scientific Reports, 12:4985, 2022. doi: 10.1038/s41598-022-08757-1.
Lutz Prechelt. Connection pruning with static and adaptive pruning schedules. Neurocomputing, 16(1):49-61, 1997. doi: 10.1016/S0925-2312(96)00054-9.
Litao Qiao, Weijia Wang, and Bill Lin. Learning accurate and interpretable decision rule sets from neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 4303-4311, 2021.
Ryan Riegel, Alexander Gray, Francois Luus, Naweed Khan, Ndivhuwo Makondo, Ismail Yunus Akhalwaya, Haifeng Qian, Ronald Fagin, Francisco Barahona, Udit Sharma, Shajith Ikbal, Hima Karanam, Sumit Neelam, Ankita Likhyani, and Santosh Srivastava. Logical neural networks, 2020.
Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215, 2019. doi: 10.1038/s42256-019-0048-x.
Guolong Su, Dennis Wei, Kush R. Varshney, and Dmitry M. Malioutov. Interpretable two-level Boolean rule learning for classification, 2016.
Cheng Zhou, Boris Cule, and Bart Goethals. Itemset based sequence classification. In Machine Learning and Knowledge Discovery in Databases, Springer Berlin Heidelberg, 2013. doi: 10.1007/978-3-642-40988-2_23.
Michael Zhu and Suyog Gupta. To prune, or not to prune: Exploring the efficacy of pruning for model compression, 2017. arXiv:1710.01878.
262,083,735 | Smooth ECE: Principled Reliability Diagrams via Kernel Smoothing | Calibration measures and reliability diagrams are two fundamental tools for measuring and interpreting the calibration of probabilistic predictors.Calibration measures quantify the degree of miscalibration, and reliability diagrams visualize the structure of this miscalibration.However, the most common constructions of reliability diagrams and calibration measures -binning and ECE -both suffer from well-known flaws (e.g.discontinuity).We show that a simple modification fixes both constructions: first smooth the observations using an RBF kernel, then compute the Expected Calibration Error (ECE) of this smoothed function.We prove that with a careful choice of bandwidth, this method yields a calibration measure that is well-behaved in the sense of Błasiok, Gopalan, Hu, and Nakkiran (2023) -a consistent calibration measure.We call this measure the SmoothECE.Moreover, the reliability diagram obtained from this smoothed function visually encodes the SmoothECE, just as binned reliability diagrams encode the BinnedECE.We also provide a Python package with simple, hyperparameter-free methods for measuring and plotting calibration: `pip install relplot`.Code at: https://github.com/apple/ml-calibration. | [
212747810,
233454709,
219981518
] | Smooth ECE: Principled Reliability Diagrams via Kernel Smoothing
Jarosław Błasiok
Columbia University
Preetum Nakkiran
Columbia University
Smooth ECE: Principled Reliability Diagrams via Kernel Smoothing
arXiv:2309.12236v1 [cs.LG] 21 Sep 2023
Calibration measures and reliability diagrams are two fundamental tools for measuring and interpreting the calibration of probabilistic predictors.Calibration measures quantify the degree of miscalibration, and reliability diagrams visualize the structure of this miscalibration.However, the most common constructions of reliability diagrams and calibration measures -binning and ECE -both suffer from well-known flaws (e.g.discontinuity).We show that a simple modification fixes both constructions: first smooth the observations using an RBF kernel, then compute the Expected Calibration Error (ECE) of this smoothed function.We prove that with a careful choice of bandwidth, this method yields a calibration measure that is well-behaved in the sense of Błasiok, Gopalan, Hu, and Nakkiran (2023) -a consistent calibration measure.We call this measure the SmoothECE.Moreover, the reliability diagram obtained from this smoothed function visually encodes the SmoothECE, just as binned reliability diagrams encode the BinnedECE.We also provide a Python package with simple, hyperparameter-free methods for measuring and plotting calibration: `pip install relplot`.Code at: https://github.com/apple/ml-calibration.
[Figure 1, panel title: Smooth Reliability (ours), smECE: 0.138 ± 0.016]
Introduction
Calibration is a fundamental aspect of probabilistic predictors, capturing how well predicted probabilities of events match their true frequencies (Dawid, 1982). For example, a weather forecasting model is perfectly calibrated (also called "perfectly reliable") if among the days it predicts a 10% chance of rain, the observed frequency of rain is exactly 10%. There are two key questions in studying calibration: First, for a given predictive model, how do we measure its overall amount of miscalibration? This is useful for ranking different models by their reliability, and determining how much to trust a given model's predictions. Methods for quantifying miscalibration are known as calibration measures. Second, how do we convey where the miscalibration occurs? This is useful for better understanding an individual predictor's behavior (where it is likely to be over- vs. under-confident), as well as for re-calibration: modifying the predictor to make it better calibrated. The standard way to convey this information is known as a reliability diagram. Unfortunately, in machine learning, the most common methods of constructing both calibration measures and reliability diagrams suffer from well-known flaws, which we describe below.
The most common choice of calibration measure in machine learning is the Expected Calibration Error (ECE), more specifically its empirical variant, the Binned ECE (Naeini et al., 2015). The ECE is known to be unsatisfactory for many reasons; for example, it is a discontinuous functional, so changing the predictor by an infinitesimally small amount may change its ECE drastically (Kakade and Foster, 2008; Foster and Hart, 2018; Błasiok et al., 2023). Moreover, the ECE is impossible to estimate efficiently from samples (Lee et al., 2022; Arrieta-Ibarra et al., 2022), and its sample-efficient variant, the Binned ECE, is overly sensitive to the choice of bin widths (Nixon et al., 2019; Kumar et al., 2019; Minderer et al., 2021). These shortcomings have been well-documented in the community, which motivated proposals of new, better-behaved calibration measures (e.g. Roelofs et al. (2022); Arrieta-Ibarra et al. (2022); Lee et al. (2022)).
Recently, Błasiok et al. (2023) proposed a theoretical definition of what constitutes a "good" calibration measure. The key principle is that good measures should provide upper and lower bounds on the calibration distance dCE, which is the Wasserstein distance between the joint distribution of prediction-outcome pairs and the set of perfectly calibrated such distributions (formally defined in Definition 1 below). Calibration measures which satisfy this property are called consistent calibration measures. In light of this line of work, one may think that the question of which calibration measure to choose is largely resolved: simply pick a consistent calibration measure, such as Laplace Kernel Calibration Error / MMCE (Błasiok et al., 2023; Kumar et al., 2018), as suggested by Błasiok et al. (2023). However, this theoretical suggestion belies the practical reality: Binned ECE remains the most popular calibration measure used in practice, even in recent studies. We believe this is partly because Binned ECE enjoys an additional property: it can be visually represented by a specific kind of reliability diagram, namely the binned histogram. This raises the question of whether there are calibration measures which are consistent in the sense of Błasiok et al. (2023), and can also be represented by an appropriate reliability diagram. To be precise, we must discuss reliability diagrams more formally.
Reliability Diagrams. We consider measuring calibration in the setting of binary outcomes, for simplicity. Here, we have a joint distribution (f, y) ∼ D over predictions f ∈ [0, 1] and true outcomes y ∈ {0, 1}. We interpret f as the predicted probability that y = 1. The "calibration function" µ : [0, 1] → [0, 1] is defined as the conditional expectation:

µ(f) := E_D[y | f].
A perfectly calibrated distribution, by definition, is one with a diagonal calibration function: µ(f) = f. Reliability diagrams are traditionally thought of as estimates of the calibration function µ (Naeini et al., 2014; Bröcker, 2008). In other words, reliability diagrams are one-dimensional regression methods, since the goal of regressing y on f is exactly to estimate the regression function E[y | f]. The practice of "binning" to construct reliability diagrams (as in Figure 1, left) can be equivalently thought of as using histogram regression to regress y on f. With this perspective on reliability diagrams, one may wonder why histogram regression is still the most popular method, when more sophisticated regressors are available. One potential answer is that users of reliability diagrams have an additional desideratum: it should be easy to visually read off a reasonable calibration measure from the reliability diagram. For example, it is easy to visually read off the Binned ECE from a binned reliability diagram, because it is simply the integrated absolute deviation from the diagonal:
BinnedECE_k = ∫₀¹ |μ̂_k(f) − f̄_k| dF
where k is the number of bins, μ̂_k is the histogram regression estimate of y given f, and f̄_k is the "binned" version of f, formally the histogram regression estimate of f given f. This relationship is even more transparent for the full (non-binned) ECE, where we have
ECE = ∫₀¹ |µ(f) − f| dF = E_f[ |µ(f) − f| ]
where µ is the true regression function as above. However, more sophisticated regression methods do not necessarily have such tight relationships to calibration measures. Thus we have a situation where better calibration measures exist, but they are not accompanied by reliability diagrams, and conversely better reliability diagrams exist (i.e. regression methods), but they are not associated with consistent calibration measures. We address this situation here: we present a new consistent calibration measure, SmoothECE, along with a regression method which naturally encodes this calibration measure. The SmoothECE is, per its name, equivalent to the ECE of a "smoothed" version of the original distribution, and the resulting reliability diagram can thus be interpreted as a smoothed estimate of the calibration function.
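For concreteness, a minimal sketch of the empirical Binned ECE described above, with k equal-width bins (the helper name binned_ece is ours):

```python
import numpy as np

def binned_ece(f, y, k=10):
    """Empirical Binned ECE: bin-mass-weighted deviation |mean(y) - mean(f)| per bin."""
    bins = np.minimum((f * k).astype(int), k - 1)
    ece = 0.0
    for b in range(k):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(y[mask].mean() - f[mask].mean())
    return ece
```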
We emphasize that the idea of smoothing is not new: Gaussian kernel smoothing has been explicitly proposed as a method for constructing reliability diagrams in the past (e.g. Bröcker (2008), as discussed in Arrieta-Ibarra et al. (2022)). Our contribution is two-fold: first, we give strong theoretical justification for kernel smoothing by proving that it induces a consistent calibration measure. Second, and of more practical relevance, we show how to choose the kernel bandwidth in a principled way, which differs significantly from existing recommendations. In particular, in the past smoothing was recommended for statistical reasons, to allow estimation of the calibration function from finite samples. However, our analysis reveals that smoothing is necessary for more fundamental reasons: even if we have effectively infinite samples, and a perfect estimate of the calibration function, we will still want to use a nonzero smoothing bandwidth. In other words, the smoothing is not done to approximate some underlying population quantity; rather, smoothing is essential to the definition of the measure itself.
Overview of Method
We start by describing the regression method, which defines our reliability diagram. We are given i.i.d. observations {(f_1, y_1), (f_2, y_2), …, (f_k, y_k)}, where f_i ∈ [0, 1] is the i-th prediction and y_i ∈ {0, 1} is the corresponding outcome. For example, if we are measuring calibration of an ML model on a dataset of validation samples, we will have f_i = F(x_i) for model F evaluated on sample x_i, with ground-truth label y_i. We would like to estimate the true calibration function µ(f) := E[y | f].
Our estimate μ(f ) is given by Nadaraya-Watson kernel regression (kernel smoothing) on this dataset (see Nadaraya (1964); Watson (1964); Simonoff (1996)):
μ̂(f) := ( Σ_i K_σ(f, f_i) · y_i ) / ( Σ_i K_σ(f, f_i) )    (1)
That is, for a given f ∈ [0, 1] our estimate of y is the weighted average of all y i , where weights are given by the kernel function K σ (f, f i ).The choice of kernel, and in particular the choice of bandwidth σ, is crucial for our method's theoretical guarantees.We use an essentially standard kernel (described in more detail below): the Gaussian Kernel, reflected appropriately to handle boundary-effects of the interval [0, 1].Our choice of bandwidth σ is more subtle, but it is not a hyperparameter -we describe the explicit algorithm for choosing σ in Section 3. It suffices to say for now that the amount of smoothing σ will end up being proportional to the reported calibration error.
An equivalent way of understanding the kernel smoothing of Eqn (1) is via kernel density estimation. Specifically, let δ̂₀ and δ̂₁ be kernel density estimates of p(f | y = 0) and p(f | y = 1), obtained by convolving the kernel K_σ with the empirical distributions of f_i restricted to the samples where y_i = 0 and y_i = 1 accordingly. Then it is easy to verify that μ̂(f) = δ̂₁(f)/(δ̂₁(f) + δ̂₀(f)). Figure 2 illustrates this method of constructing the calibration function estimate μ̂, on a toy dataset of eight prediction-outcome pairs (f_i, y_i). [Figure 2: Step 1, kernel density estimation for the y = 1 and y = 0 samples; Step 2, normalize vertical slices.]
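A minimal sketch of this density-ratio view (illustrative only: scipy's gaussian_kde uses a plain Gaussian kernel and its own bandwidth convention, whereas the paper uses the reflected kernel of Eq (2); the class-frequency weighting makes the ratio match the Nadaraya-Watson estimate of Eqn (1)):

```python
import numpy as np
from scipy.stats import gaussian_kde

def mu_hat_from_kdes(f, y, grid, bw=0.1):
    """Estimate E[y | f] on `grid` as a normalized ratio of class-weighted KDEs."""
    d1 = gaussian_kde(f[y == 1], bw_method=bw)(grid) * np.mean(y == 1)
    d0 = gaussian_kde(f[y == 0], bw_method=bw)(grid) * np.mean(y == 0)
    return d1 / (d1 + d0)
```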
Reflected Gaussian Kernel In all of our kernel applications, we use a "reflected" version of the Gaussian kernel, defined as follows. Let π_R : R → [0, 1] be the projection function which is the identity on [0, 1] and collapses two points iff they differ by a composition of reflections around integers. That is, π_R(x) := (x mod 2) if (x mod 2) ≤ 1, and (2 − (x mod 2)) otherwise. The Reflected Gaussian kernel on [0, 1] with scale σ is then given by

K̃_σ(x, y) := Σ_{x̃ ∈ π_R⁻¹(x)} φ_σ(x̃ − y) = Σ_{ỹ ∈ π_R⁻¹(y)} φ_σ(x − ỹ),    (2)

where φ_σ(t) = exp(−t²/2σ²)/√(2πσ²) is the Gaussian density with scale σ. Now, by construction, convolving a random variable F with the kernel K̃_σ produces the random variable π_R(F + η), where η ∼ N(0, σ).
We chose the Reflected Gaussian kernel in order to alleviate the bias introduced by the standard Gaussian kernel near the boundaries of the interval [0, 1]. For instance, if we start with the uniform distribution over [0, 1], convolve it with the standard Gaussian kernel, and restrict the convolution to [0, 1], we end up with a non-uniform distribution: the density close to the boundary is smaller than the density in the middle by approximately a factor of 2. In contrast, the reflected Gaussian kernel does not exhibit this pathology: the uniform distribution on [0, 1] is an invariant measure under convolution with this kernel.
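A minimal numpy sketch of Eq (2) (truncated to a few reflections, which is accurate when σ ≪ 1) together with the Nadaraya-Watson estimate of Eqn (1); the function names are ours:

```python
import numpy as np

def reflected_gaussian_kernel(x, y, sigma, n_terms=5):
    """Truncated Eq (2): sum the Gaussian density over the preimages of x
    under the reflect-around-integers map, {2k + x} and {2k - x}.
    (Endpoints x in {0, 1} are double-counted by this naive enumeration.)"""
    total = np.zeros(np.broadcast(x, y).shape)
    for k in range(-n_terms, n_terms + 1):
        total += np.exp(-((2 * k + x - y) ** 2) / (2 * sigma ** 2))
        total += np.exp(-((2 * k - x - y) ** 2) / (2 * sigma ** 2))
    return total / np.sqrt(2 * np.pi * sigma ** 2)

def kernel_smooth(f, y, grid, sigma):
    """Nadaraya-Watson estimate (Eqn 1) of E[y | f], evaluated on `grid`."""
    W = reflected_gaussian_kernel(grid[:, None], f[None, :], sigma)
    return (W @ y) / W.sum(axis=1)
```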
Reliability Diagram
We then construct a reliability diagram in the standard way, by displaying a plot of the estimated calibration function μ̂ along with a kernel density estimate of the predictions f_i (see Figure 1). These two estimates, compactly presented on the same diagram, provide a tool to quickly understand and visually assess the calibration properties of a given predictor. Moreover, they can be used to define a quantitative measure of the overall degree of miscalibration, as we show below.
SmoothECE A natural measure of calibration can be easily computed from the data in the above reliability diagram. Specifically, let δ̂(f) be the kernel density estimate of the predictions: δ̂(f) := (1/n) Σ_i K̃_σ(f, f_i). Then, similar to the definition of ECE, we can integrate the deviation of μ̂ from the diagonal to obtain:
\widetilde{smECE}_σ := ∫ |μ̂(t) − t| δ̂(t) dt.
The measure of calibration we actually propose, smECE_σ, is closely related but not identical to the quantity \widetilde{smECE}_σ above. Briefly, to define smECE_σ we consider the kernel smoothing of the difference between the outcome and the prediction (y_i − f_i), instead of just smoothing the outcomes y_i. As it turns out, this choice leads to a calibration measure with better mathematical properties: smECE_σ is monotone decreasing as the kernel bandwidth σ is increased, and smECE, applied to the population distribution, is 0 for perfectly calibrated predictors.
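The diagram-based quantity \widetilde{smECE}_σ is straightforward to approximate numerically, e.g. by the trapezoid rule (a sketch with an assumed function name; the paper's smECE_σ instead smooths the residuals y_i − f_i, as just noted):

```python
import numpy as np

def smooth_ece_tilde(mu_hat, delta_hat, grid):
    """Integrate |mu_hat(t) - t| against the estimated prediction density."""
    return np.trapz(np.abs(mu_hat - grid) * delta_hat, grid)
```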
We reiterate that the choice of the scale σ is very important: too large or too small a bandwidth will prevent the SmoothECE from being a consistent calibration measure. In Section 3, we will show how to algorithmically define the correct scale σ*. For the reliability diagram, we suggest presenting the estimates ŷ and δ̂ with the same scale σ*, and for this scale we indeed have smECE_{σ*} ≈ \widetilde{smECE}_{σ*} (see Section 4). Finally, note that we have been discussing finite-sample estimators of all quantities; the corresponding population quantities are defined analogously in Section 3.
Extensions to General Metrics
Until now, we have been discussing the original notion of consistent calibration measure as introduced in Błasiok et al. (2023). This notion relies on the concept of the distance to calibration dCE, which we vaguely referred to before. It turns out that the notion of calibration distance, as a Wasserstein distance (the definition of Wasserstein distance is provided for the reader's convenience in Section 3), implicitly assumes the trivial metric on the space of predictions: for f₁, f₂ ∈ [0, 1], we consider ℓ₁(f₁, f₂) := |f₁ − f₂|. In fact, the associated distance to calibration can be defined generally for any metric. Let us finally provide a formal definition of the distance to calibration here.
Definition 1 (Distance to Calibration). For a probability distribution D over [0, 1] × {0, 1}, and a metric d : [0, 1]² → R_{≥0} ∪ {+∞}, we define dCE_d(D) to be the Wasserstein distance to the nearest perfectly calibrated distribution, with respect to the metric

d_{[0,1]×{0,1}}((f₁, y₁), (f₂, y₂)) := d(f₁, f₂) + |y₁ − y₂|.
There is indeed a good reason to consider non-trivial metrics on the space of predictions, because such metrics arise naturally when minimizing a generic proper loss function. This connection is a well-known part of convex analysis (Gneiting et al., 2007), and Błasiok et al. (2023) build upon it in the context of calibration error. We summarize the latter results below.
It was shown in Błasiok et al. (2023) that for the standard ℓ₁ metric, dCE_{ℓ₁} provides a lower and upper bound on how much the ℓ₂-loss of a predictor can be improved by post-composition with a Lipschitz function. Specifically, they showed that a random pair (f, y) ∈ [0, 1] × {0, 1} of prediction f and outcome y has small dCE_{ℓ₁} if and only if the loss E(f − y)² cannot be improved by more than ε by Lipschitz post-processing of the prediction. That is, for any Lipschitz function w : [0, 1] → [0, 1], we have

E(w(f) − y)² ≥ E(f − y)² − ε.
In practice, though, prediction algorithms are often optimized for proper loss functions other than ℓ₂; the cross-entropy loss is a popular choice in deep learning, for example, leading to the metric d_logit(u, v) := |log(u/(1 − u)) − log(v/(1 − v))|. With this in mind, Błasiok et al. (2023) generalize both the notion of calibration error and their theorem to apply to general metrics. This motivates a study of calibration error for non-trivial metrics on the space of predictions [0, 1], which we will undertake in the remainder of this paper.
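For reference, the logit metric above is a one-liner (for u, v strictly inside (0, 1); the function name is ours):

```python
import math

def d_logit(u, v):
    """Distance between predictions measured in logit space (u, v in (0, 1))."""
    logit = lambda p: math.log(p / (1.0 - p))
    return abs(logit(u) - logit(v))
```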
Summary of Our Contributions
1. SmoothECE. We define a new hyperparameter-free calibration measure, which we call the SmoothECE (abbreviated smECE). We prove that the SmoothECE is a consistent calibration measure, in the sense of Błasiok et al. (2023). It also corresponds to a natural notion of distance: if the SmoothECE is ε, then the function f can be stochastically post-processed to make it perfectly calibrated, without perturbing f by more than ε in L₁.
2. Smoothed Reliability Diagrams. We show how to construct principled reliability diagrams which visually encode the SmoothECE. These diagrams can be thought of as "smoothed" versions of the usual binned reliability diagrams, where we perform Nadaraya-Watson kernel smoothing with the Gaussian kernel.
3. Code. We provide an open-source Python package relplot which computes the SmoothECE and plots the associated smooth reliability diagram. It is hyperparameter-free, efficient, and includes uncertainty quantification via bootstrapping. We include several experiments in Section 6, for demonstration purposes.
4. Extensions to general metrics. On the theoretical side, we investigate how far our construction of SmoothECE generalizes. We show that the notion of SmoothECE introduced in this paper can indeed be defined for a wider class of metrics on the space of predictions [0, 1], and we prove the appropriate generalization of our main theorem: that the smECE for a given metric is a consistent calibration measure with respect to the same metric. Finally, perhaps surprisingly, we show that under specific conditions on the metric (which are satisfied, for instance, by the d_logit metric), the associated smECE is in fact a consistent calibration measure with respect to the ℓ₁ metric.
Organization.We begin by discussing the closest related works (Section 2).In Section 3 we formally define the SmoothECE and prove its mathematical and computational properties.We provide the explicit algorithm (Algorithm 2), and prove it is sample-efficient (Section 3.4) and runtime efficient (Section 3.5).We then discuss the justification behind our various design choices in Section 4, primarily to aid intuition.Section 5 explores extensions of our results to more general metrics.Finally, we include experimental demonstrations of our method and the associated python package in Section 6, and conclude in Section 7.
Related Works
Reliability Diagrams and Binning. Reliability diagrams, as far as we are aware, had their origins in the early reliability tables constructed by the meteorological community. Hallenbeck (1920), for example, presents the performance of a certain rain forecasting method by aggregating results over 6 months into a table: among the days forecast to have between 10%−20% chance of rain, the table records the true fraction of days which were rainy, and similarly for every forecast interval. This early account of calibration already applies the practice of binning: discretizing predictions into bins, and estimating frequencies conditional on each bin. Plots of these tables turned into binned reliability diagrams (Murphy and Winkler, 1977; DeGroot and Fienberg, 1983), which were recently popularized in the machine learning community by a series of works including Zadrozny and Elkan (2001); Niculescu-Mizil and Caruana (2005); Guo et al. (2017). Binned reliability diagrams continue to be used in studies of calibration in machine learning, including in the GPT-4 tech report (Guo et al., 2017; Nixon et al., 2019; Minderer et al., 2021; Desai and Durrett, 2020; OpenAI, 2023).

Reliability Diagrams as Regression. The connection between reliability diagrams and regression methods has been noted in the literature (e.g. Bröcker (2008); Copas (1983); Stephenson et al. (2008)). For example, Stephenson et al. (2008) observes that "one can consider binning to be a crude form of non-parametric smoothing." However, this connection does not appear to be appreciated in the machine learning community, since many seemingly unresolved questions about reliability diagrams are clarified by the connection to statistical regression. For example, there is much debate about how to choose hyperparameters when constructing reliability diagrams via binning (e.g. the bin widths, adaptive vs. non-adaptive binning schemes, etc.), and it is not a priori clear how to think about the effect of these choices. The regression perspective offers insight here: optimal hyperparameters in regression are chosen to minimize test loss (e.g. on some held-out validation set). And in general, the choice of estimation method for reliability diagrams should be informed by our assumptions and priors about the underlying ground-truth calibration function µ, such as smoothness and monotonicity, just as it is in statistical regression.

Finally, we remind the reader of a subtlety: our objective in this work is not identical to the regression objective, since we want an estimator that is simultaneously a reasonable regression and a consistent calibration measure. Our choice of bandwidth must thus carefully balance the two; it cannot simply be chosen to minimize the regression test loss.

Alternate Constructions. There have been various proposals to construct reliability diagrams which improve on binning; we mention several of them here. Many proposals can be seen as suggesting alternate regression techniques to replace histogram regression. For example, some works suggest modifications to improve the binning method, such as adaptive bin widths or debiasing (Kumar et al., 2019; Nixon et al., 2019; Roelofs et al., 2022). These are closely related to data-dependent histogram estimators in the statistics literature (Nobel, 1996). Other works suggest using entirely different regression methods, including spline fitting (Gupta et al.), kernel smoothing (Bröcker, 2008), and isotonic regression (Dimitriadis et al., 2021). The above methods for constructing regression-based reliability diagrams are closely related to methods for re-calibration, since the ideal recalibration function is exactly the calibration function µ. For example, isotonic regression (Barlow, 1972) has been used both for recalibration (Zadrozny and Elkan, 2002; Naeini et al., 2015) and for reliability diagrams (Dimitriadis et al., 2021). Finally, Tygert (2020) and Arrieta-Ibarra et al. (2022) suggest visualizing reliability via cumulative-distribution plots, instead of estimating conditional expectations. While all the above proposals do improve upon binning in certain aspects, none of them ultimately induce consistent calibration measures in the sense of Błasiok et al. (2023). For example, the kernel smoothing proposals often suggest picking the kernel bandwidth to optimize the regression test loss. This choice does not yield a consistent calibration measure as the number of samples n → ∞. See Błasiok et al. (2023) for further discussion of the shortcomings of these measures.

Multiclass Calibration. We focus on binary calibration in this work. The multi-class setting introduces several new complications; foremost, there is no consensus on how best to define calibration measures in the multi-class setting, and this is an active area of research (e.g. Vaicenavicius et al. (2019); Kull et al. (2019); Widmann et al. (2020)).

The strongest notion of perfect calibration in the multi-class setting, known as multi-class or canonical calibration, is intractable to verify in general: it requires sample size exponential in the number of classes. Weaker notions of calibration exist, such as classwise calibration, confidence calibration (Kull et al., 2019), and low-degree calibration (Gopalan et al., 2022), but it is unclear how to best define calibration measures in these settings, that is, how to most naturally extend the theory of Błasiok et al. (2023) to multi-class settings. We refer the reader to Kull et al. (2019) for a review of several different definitions of multi-class calibration.

However, our methods can apply to specific multi-class settings which reduce to binary problems. For example, multi-class confidence calibration is equivalent to the standard calibration of a related binary problem (involving the joint distribution of confidences f ∈ [0, 1] and accuracies y ∈ {0, 1}). Thus, we can apply our method to plot reliability diagrams for confidence calibration, by first transforming our multi-class data into its binary-confidence form, as sketched below. We show an example of this in the neural-network experiments in Section 6.
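As a concrete illustration, here is a minimal numpy sketch of this reduction (the function and variable names are ours, not from any particular package): given a matrix of predicted class probabilities and the true labels, we form the (confidence, correctness) pairs whose binary calibration is exactly the confidence calibration of the classifier.

import numpy as np

def to_confidence_pairs(probs, labels):
    """Reduce multi-class predictions to binary (confidence, correctness) pairs.

    probs:  (n, k) array of predicted class distributions.
    labels: (n,) array of true class indices.
    """
    preds = probs.argmax(axis=1)         # predicted class per example
    f = probs.max(axis=1)                # confidence of the predicted class
    y = (preds == labels).astype(float)  # 1 if the prediction was correct
    return f, y

The resulting pairs (f_i, y_i) can then be fed to any binary calibration measure or reliability diagram.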
Consistent Calibration Measures. We warn the reader that the terminology of "consistent calibration measure" does not refer to the concept of statistical consistency. Rather, it refers to the definition introduced in Błasiok et al. (2023), which captures calibration measures that polynomially approximate the true (Wasserstein) distance to perfect calibration.
Smooth ECE
In this section we will define the calibration measure smECE. As it turns out, it has slightly better mathematical properties than the variant defined in Section 1.1 (which smooths the outcomes rather than the residuals; we denote it s̃mECE below), and those properties will allow us to choose the proper scale σ in a more principled way. Moreover, we will be able to relate smECE with s̃mECE.
Specifically, the measure smECE σ enjoys the following convenient mathematical properties, which we will prove in this section.
• The smECE σ (D) is monotone decreasing as we increase the smoothing parameter σ.
• If D is a perfectly calibrated distribution, then for any σ we have smECE σ (D) = 0. Indeed, for any σ we have
smECE σ (D) ≤ ECE(D).
• The smECE σ is Lipschitz with respect to the Wasserstein distance on the space of distributions over [0, 1] × {0, 1}:
for any D_1, D_2 we have |smECE σ (D_1) − smECE σ (D_2)| ≤ (1 + σ^{−1}) W_1(D_1, D_2). This implies smECE σ (D) ≤ (1 + σ^{−1}) dCE(D).
• For any distribution D and any σ, there is a (probabilistic) post-processing κ, such that if (f, y) ∼ D, then the distribution D′ of (κ(f), y) is perfectly calibrated, and moreover

E |f − κ(f)| ≤ smECE σ (D) + σ.

In particular dCE(D) ≤ smECE σ (D) + σ.

• Let us denote by D σ the distribution of (π_R(f + ση), y) for (f, y) ∼ D and η ∼ N(0, 1). (See (2) for a definition of the wrapping function π_R.) Then |smECE σ (D) − ECE(D σ)| < 0.8σ. In particular, for σ* such that smECE σ* (D) = σ* we have smECE σ* (D) ≈ ECE(D σ*).

3.1 Defining smECE σ at scale σ
We now present the construction of smECE σ at a given scale σ > 0. We will show how to pick σ in the subsequent section. Let D be a distribution over [0, 1] × {0, 1} of the pair of prediction f ∈ [0, 1] and outcome y ∈ {0, 1}. For a given kernel K : R × R → R we define the kernel smoothing of the residual r := y − f as
r̂_{D,K}(t) := E_{(f,y)∼D}[K(t, f)(y − f)] / E_{(f,y)∼D}[K(t, f)]. (3)
This differs from the definition in Section 1.1, where we applied the kernel smoothing to the outcomes y instead.
In many cases of interest, the probability distribution D will be an empirical distribution over a finite set of pairs {(f_i, y_i)} of observed predictions f_i and associated observed outcomes y_i. In this case, r̂_D(t) is just a weighted average of the residuals (y_i − f_i), where the weight of a given sample is determined by the kernel K(f_i, t). This is equivalent to Nadaraya-Watson kernel regression (a.k.a. kernel smoothing; see Nadaraya (1964); Watson (1964); Simonoff (1996)), estimating (y − f) with respect to the independent variable f.
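To make this concrete, here is a short numpy sketch of the Nadaraya-Watson estimate (3) on a sample (the names are ours, and the Gaussian kernel here is a placeholder for illustration only):

import numpy as np

def smoothed_residual(t, f, y, kernel):
    """Nadaraya-Watson estimate of r-hat_{D,K}(t) on a sample {(f_i, y_i)}."""
    w = kernel(t, f)                       # weights K(t, f_i)
    return np.sum(w * (y - f)) / np.sum(w)

# illustration with a Gaussian kernel of bandwidth 0.05
gauss = lambda t, f, s=0.05: np.exp(-(t - f) ** 2 / (2 * s ** 2))
f = np.array([0.2, 0.4, 0.7]); y = np.array([0.0, 1.0, 1.0])
print(smoothed_residual(0.5, f, y, gauss))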
We also consider the kernel density estimate

δ̂_{D,K}(t) := E_{(f,y)∼D} K(t, f). (4)

Note that if the kernel K is of the form K(u, v) = K̄(u − v), where the univariate K̄ is the probability density function of a random variable η, then δ̂ is the probability density function of the random variable f + η (with η, f independent), and moreover we can interpret r̂_K(t) as E[y − f | f + η = t].
The smECE_K(D) is now defined as

smECE_K(D) := ∫ |r̂_{D,K}(t)| δ̂_{D,K}(t) dt. (5)
This notion is close to the ECE of a smoothed version of the distribution of (f, y). We will provide more rigorous guarantees in Section 4; for now, let us discuss the intuitive connection. For any distribution of prediction and outcome (f, y), we can consider the expected residual r(t) := E[y − f | f = t]; then ECE(f, y) := ∫ |r(t)| dµ_f(t), where µ_f is the distribution of f. We can compare this with (5), where the conditional residual r has been replaced by its smoothed version r̂, and the measure µ_f has been replaced by δ̂(t) dt, the distribution of f + η for some noise η.
Equation (5) can be simplified by directly combining equations (3) and (4):

smECE_K(D) = ∫ | E_{(f,y)∼D} K(t, f)(y − f) | dt. (6)
In what follows, we will focus on the reflected Gaussian kernel with scale σ, K̃_{N,σ} (see (2)), and we shall use the shorthand smECE σ to denote smECE with this kernel. We will now show how the scale σ is chosen.
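Before turning to the choice of scale, here is a numpy/scipy sketch evaluating (6) by quadrature on a grid, with the reflected Gaussian kernel computed by summing the Gaussian density over reflected images (all names are ours; this is an illustrative reference implementation, not the optimized algorithm of Section 3.5):

import numpy as np
from scipy.stats import norm

def reflected_gaussian_kernel(t, f, sigma, n_images=5):
    """K~_{N,sigma}(t, f): density at t of pi_R(f + eta), eta ~ N(0, sigma^2).

    Computed by summing the Gaussian density over the reflected images
    {2k + t, 2k - t} of t; |k| <= n_images is ample for sigma <= 1."""
    total = np.zeros(np.broadcast(t, f).shape)
    for k in range(-n_images, n_images + 1):
        total += norm.pdf(2 * k + t, loc=f, scale=sigma)
        total += norm.pdf(2 * k - t, loc=f, scale=sigma)
    return total

def smECE_sigma(f, y, sigma, grid_size=1000):
    """Evaluate Eq. (6) by quadrature: integral over t of |E K~(t, f)(y - f)|."""
    t = np.linspace(0.0, 1.0, grid_size)
    K = reflected_gaussian_kernel(t[:, None], f[None, :], sigma)
    integrand = np.abs((K * (y - f)[None, :]).mean(axis=1))
    return np.trapz(integrand, t)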
Defining smECE: Proper choice of scale
First, we observe that smECE σ satisfies a natural monotonicity property: increasing the smoothing scale σ decreases smECE σ. (Proofs of this and subsequent lemmas can be found in Appendix A.)
Lemma 2. For a distribution D over [0, 1] × {0, 1} and σ 1 ≤ σ 2 , we have
smECE σ1 (D) ≥ smECE σ2 (D).
Several of our design choices were crucial to ensure this property: the choice of the reflected Gaussian kernel, and the choice to smooth the residual (y − f ) as opposed to the outcome y.
Note that since smECE σ (D) ∈ [0, 1], and for a given distribution D the function σ → smECE σ (D) is non-increasing in σ, there is a unique σ* s.t. smECE σ* (D) = σ* (and we can find it efficiently using binary search).

Definition 3 (SmoothECE). We define smECE(D) to be the unique σ* satisfying smECE σ* (D) = σ*. We also write this quantity as smECE*(D) for clarity.
smECE is a consistent calibration measure
We will show that the σ* defined in the previous subsection is a convenient scale at which the smECE of D should be evaluated. The formal requirement that smECE σ* meets is captured by the notion of a consistent calibration measure, introduced in Błasiok et al. (2023). We provide the definition below, but before we do, let us recall the definition of the Wasserstein metric.

For a metric space (X, d), let ∆(X) be the space of all probability distributions over X. We define the Wasserstein metric on the space ∆(X) (sometimes called the earth mover's distance); see Peyré et al. (2019).

Definition 4 (Wasserstein distance). For two distributions µ, ν ∈ ∆(X) we define the Wasserstein distance
W 1 (µ, ν) := inf γ∈Γ(µ,ν) E (x,y)∼γ d(x, y),
where Γ(µ, ν) is the family of all couplings of distributions µ and ν.
Definition 5 (Perfect calibration). A probability distribution D over [0, 1] × {0, 1} of prediction f and outcome y is perfectly calibrated if E_D[y | f] = f. We denote the family of all perfectly calibrated distributions by P ⊂ ∆([0, 1] × {0, 1}).
Definition 6 (Consistent calibration measure (Błasiok et al., 2023)). For a probability distribution D over [0, 1] × {0, 1} we define the distance to calibration dCE(D) to be the Wasserstein distance to the nearest perfectly calibrated distribution, associated with the metric d on [0, 1] × {0, 1} which puts the two disjoint intervals infinitely far from each other.⁴ Concretely,

d((f_1, y_1), (f_2, y_2)) := |f_1 − f_2| if y_1 = y_2, and ∞ otherwise,

and dCE(D) := inf_{D′∈P} W_1(D, D′).
Finally, any function µ assigning to distributions over [0, 1] × {0, 1} a non-negative real calibration score is called a consistent calibration measure if it is polynomially upper and lower bounded by dCE, i.e. there are constants c_1, c_2, α_1, α_2 s.t.

c_1 · dCE(D)^{α_1} ≤ µ(D) ≤ c_2 · dCE(D)^{α_2}.
With this definition in hand, we prove the following.
Theorem 7. The measure smECE(D) is a consistent calibration measure.
This theorem is a consequence of the following two inequalities. First, if we add a penalty proportional to the scale of the noise σ, then smECE σ upper bounds the distance to calibration.

Lemma 8. For any σ, we have dCE(D) ≲ smECE σ (D) + σ.
On the other hand, as soon as the scale of the noise is sufficiently large compared to the distance to calibration, the smECE of a predictor is itself upper bounded as follows.
Lemma 9. Let (f, y) be any predictor. Then for any σ we have
smECE σ (D) ≤ 1 + 1 σ dCE(D).
In particular if σ > 2 dCE(D), then
smECE σ (D) ≤ 2 dCE(D).
This lemma, together with the fact that σ → smECE σ is non-increasing, directly implies that the fixpoint satisfies σ* ≤ 2 dCE(D). On the other hand, using Lemma 8, at this fixpoint we have dCE(D) ≲ σ* + smECE σ* (D) = 2σ*. That is, dCE(D)/2 ≲ smECE*(D) ≤ 2 dCE(D), proving Theorem 7.

Remark 10. We wish to clarify the significance of the decision to use the reflected Gaussian kernel as the kernel K of choice in the definition of smECE.

The upper and lower bounds, Lemma 8 and Lemma 9, hold for a wide range of kernels, and indeed we prove more general statements (Lemma 23 and Lemma 25) in the appendix. In order to deduce the existence of a unique σ*, we need the monotonicity property (Lemma 2), which is more subtle. For this property to hold, we require that for σ_1 ≤ σ_2 we can decompose the kernel K_{σ_2} as a convolution: K_{σ_2} = K_{σ_1} * K_{h(σ_1,σ_2)}. This property is also satisfied, for instance, by the standard Gaussian kernel K_{N,σ} : R × R → R, given by K(x, y) = φ_σ(x − y); indeed, we could have stated (and proved) Theorem 7 for this kernel. In fact, in Appendix A we prove all the necessary lemmas in a generality that also covers this case.

The choice of the reflected Gaussian kernel, instead of the Gaussian kernel, stems from the fact that the domain of the reflected Gaussian kernel is [0, 1] instead of R, a more natural choice since our distribution is indeed supported in [0, 1]. The standard Gaussian kernel would introduce undesirable biases in the density estimation near the boundary of the interval [0, 1], a region which is of particular interest. The associated reliability diagrams would therefore be less informative; for instance, the uniform distribution on [0, 1] is invariant under convolution with our reflected Gaussian kernel, which is not the case for the standard Gaussian kernel.
Sample Efficiency
We show that we can estimate the smECE of the underlying distribution D over [0, 1] × {0, 1} (of prediction f ∈ [0, 1] and outcome y ∈ {0, 1}) using few samples from this distribution. Specifically, let us sample independently at random m pairs (f_i, y_i) ∼ D, and let us define D̂ to be the empirical distribution over the multiset {(f_i, y_i)}; that is, to sample from D̂, we pick a uniformly random i ∈ [m] and output (f_i, y_i).

Theorem 11. For a given σ_0 > 0, if m ≳ σ_0^{−1} ε^{−2}, then with probability at least 2/3 over the choice of the independent random sample (f_i, y_i)_{i=1}^m (with (f_i, y_i) ∼ D), we have simultaneously for all σ ≥ σ_0:

|smECE σ (D) − smECE σ (D̂)| ≤ ε.

In particular, if smECE*(D) > σ_0, then (with probability at least 2/3) we have |smECE*(D) − smECE*(D̂)| ≤ ε.

The proof can be found in Appendix A.6. The success probability can be amplified in the standard way, by taking the median of independent trials.
Runtime
In this section we discuss how smECE can be computed efficiently: for a given sample {(f_1, y_1), . . . , (f_n, y_n)} ∈ ([0, 1] × {0, 1})^n, if D̂ is the empirical distribution over {(f_i, y_i)}_{i=1}^n, then the quantity smECE σ (D̂) can be approximated up to error ε in time O(n + M log^{3/2} M) in the RAM model, where M = ⌈ε^{−1} σ^{−1}⌉. In order to find the optimal scale σ*, we need to perform a binary search, involving log ε^{−1} evaluations of smECE σ. We provide pseudocode as Algorithm 1 for the computation of smECE σ at a given scale σ, and as Algorithm 2 for finding σ*.

We first observe that convolution with the reflected Gaussian kernel can be expressed in terms of convolution with a shift-invariant kernel. This is useful, since such a convolution can be implemented in time O(M log M) using the Fast Fourier Transform, where M is the size of the discretization.

Claim 12. For any function g : [0, 1] → R, the convolution with the reflected Gaussian kernel g * K̃_{N,σ} can be equivalently computed as follows. Take the extension ḡ : R → R of g to the entire real line, defined as ḡ(x) := g(π_R(x)). Then for t ∈ [0, 1],

[g * K̃_{N,σ}](t) = [ḡ * K_{N,σ}](t),

where K_{N,σ} : R × R → R is the standard Gaussian kernel K_σ(t_1, t_2) = exp(−(t_1 − t_2)^2 / (2σ^2)) / √(2πσ^2).

Proof. Elementary calculation.
We can now restrict ḡ to the interval [−T, T + 1], where T := ⌈√(log(2ε^{−1}))⌉, convolve the restricted ḡ with a Gaussian, and restrict the convolution in turn to the interval [0, 1]. Indeed, such a restriction introduces only a very small error: for every t ∈ [0, 1] we have

|[(1_{[−T,T+1]} · ḡ) * K_{N,σ}](t) − [ḡ * K_{N,σ}](t)| ≤ (1 − Φ(T/σ)) + (1 − Φ((T + 1)/σ)) ≤ √(2/π) · (σ/T) · exp(−(T/σ)^2/2).

In practice, it is enough to reflect the function g only twice, around the two boundary points (corresponding to the choice T = 1). For instance, when σ < 0.38, the above bound implies that the introduced additive error is smaller than σ^2, and the error term improves rapidly as σ gets smaller.

Let us now discuss the computation of smECE σ at a given scale σ. To this end, we discretize the interval [0, 1], splitting it into M equal-length sub-intervals. For a sequence of observations (f_i, y_i), we round each f_i to the nearest integer multiple of 1/M, mapping it to a bucket b_i = round(M f_i). In each bucket b ∈ {0, . . . , M}, we collect the residuals of all observations falling in this bucket: h_b := Σ_{i : b_i = b} (f_i − y_i).

In the next step, we apply Claim 12 and produce a wrapping h̃ of the sequence h, extending it to integer multiples of 1/M in the interval [−T, T + 1] by pulling h back through the map π_R. The method then proceeds to compute the convolution h̃ * K with a discretization of the Gaussian probability density function, i.e. K̄_t := exp(−t^2/(2σ^2)) and K_t := K̄_t / Σ_i K̄_i. This convolution r̃ := h̃ * K can be computed in time O(Q log Q), where Q = M T, using a Fast Fourier Transform, as implemented in standard mathematical libraries. Finally, we report the sum of absolute values of the residuals Σ_i |r̃_i| over the central window (corresponding to the original interval [0, 1]) as an approximation to smECE σ.
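A compact Python rendering of this pipeline might look as follows (a sketch under our own naming; the 1/n factor comes from the empirical expectation in (6)):

import numpy as np
from scipy.signal import fftconvolve

def smECE_sigma_fft(f, y, sigma, eps=1e-3):
    """FFT-based smECE_sigma on a sample, following the discretize/wrap/convolve
    pipeline described above, with a single reflection on each side (T = 1)."""
    M = int(np.ceil(1.0 / (sigma * eps)))
    h = np.zeros(M + 1)
    np.add.at(h, np.rint(f * M).astype(int), f - y)   # bucketed residuals
    # wrap by reflecting once around 0 and once around 1
    h_wrapped = np.concatenate([h[-1:0:-1], h, h[-2::-1]])
    # discrete Gaussian kernel on the same 1/M grid, normalized to sum 1
    half = 4 * max(int(np.ceil(sigma * M)), 1)
    ts = np.arange(-half, half + 1) / M
    K = np.exp(-ts ** 2 / (2 * sigma ** 2))
    K /= K.sum()
    r = fftconvolve(h_wrapped, K, mode="same")
    return np.abs(r[M : 2 * M + 1]).sum() / len(f)    # central window = [0, 1]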
Discussion: Design Choices
Here we discuss the motivation behind several design choices that may a priori seem ad hoc: in particular, the choice to smooth the residuals (y_i − f_i) when computing the smECE, but to smooth the outcomes y_i directly when plotting the reliability diagram.
Algorithm 1: Efficient estimation of smECE σ at a fixed scale σ

Function Discretization({(f_i, z_i)}_{i=1}^n, M):
    h ← zeros(M + 1)
    for i ∈ [n]:
        b ← round(M f_i)
        h_b ← h_b + z_i
    return h

Function Wrap(h, T):
    M ← len(h)
    for i ∈ [(2T + 1)M]:
        j ← (i mod 2M)
        if j > M: j ← 2M − j
        h̃_i ← h_j
    return h̃

Function smECE(σ, {(f_i, y_i)}_{i=1}^n):
    h ← Discretization({(f_i, f_i − y_i)}, ⌈σ^{−1} ε^{−1}⌉)
    h̃ ← Wrap(h, ⌈√(log(2ε^{−1}))⌉)
    K ← DiscreteGaussianKernel(σ, ⌈σ^{−1} ε^{−1}⌉)
    r̃ ← h̃ * K
    return Σ_{i=TM}^{(T+1)M−1} |r̃_i|
Algorithm 2: Efficient estimation of smECE: binary search over σ to find a root of the decreasing function g(σ) := smECE σ − σ.

Data: (f_i, y_i)_{i=1}^n, ε
Result: smECE*({(f_i, y_i)})
l ← 0; u ← 1
while u − l > ε:
    σ ← (u + l)/2
    if smECE σ ({(f_i, y_i)}) < σ: u ← σ
    else: l ← σ
return u
For the purpose of constructing the reliability diagram, it might be tempting to plot the function y′(t) := r̂(t) + t (the smoothed residual as defined in (3), shifted back by the prediction), as well as the smoothed density δ̂(t), as in the definition of smECE. This is a fairly reasonable approach; unfortunately, it has a particularly undesirable feature: there is no reason for y′(t) := r̂(t) + t to be bounded in the interval [0, 1]. It is therefore visually quite counterintuitive, as the plot of y′(t) is supposed to convey our guess of the average outcome y given a (slightly noisy version of) the prediction t.

As discussed in Section 1.1, we instead consider the kernel regression on y, as opposed to the kernel regression on the residual y − f, and plot exactly this, together with the density δ̂. Specifically, let us define

ŷ_{D,K}(t) := E_{(f,y)∼D}[K(t, f) y] / E_{(f,y)∼D}[K(t, f)]. (7)

We choose as the reliability diagram the plot of the pair of functions t → ŷ_{D,K}(t) and t → δ̂_{D,K}(t): the first is our estimate (based on the kernel regression) of the outcome y for a given prediction t, the other is the estimate of the density of the prediction t. As discussed in Section 1.1, we will focus specifically on the kernel K being the reflected Gaussian kernel, defined by (2).
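For illustration, here is a matplotlib sketch of such a diagram (assuming the reflected_gaussian_kernel function from the earlier sketch; the relplot package provides a polished version of this plot):

import numpy as np
import matplotlib.pyplot as plt

def smooth_reliability_diagram(f, y, sigma, grid_size=200):
    """Plot y-hat(t) of Eq. (7) on top, and the density delta-hat(t) below."""
    t = np.linspace(0.0, 1.0, grid_size)
    K = reflected_gaussian_kernel(t[:, None], f[None, :], sigma)
    y_hat = (K * y[None, :]).sum(axis=1) / K.sum(axis=1)
    density = K.mean(axis=1)
    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True,
                                   gridspec_kw={"height_ratios": [3, 1]})
    ax1.plot(t, y_hat, label="kernel regression of y")
    ax1.plot([0, 1], [0, 1], linestyle="--", label="perfect calibration")
    ax1.legend()
    ax2.fill_between(t, density, alpha=0.5)  # density of predictions
    ax2.set_xlabel("prediction t")
    return fig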
It is now tempting to define the calibration error related to this diagram as the ECE of this new random variable over [0, 1] × {0, 1}, analogously to the definition of smECE, by considering

s̃mECE σ (D) := ∫ |ŷ_{D,K}(t) − t| δ̂_{D,K}(t) dt. (8)

This definition can be readily interpreted: for a random pair (f, y) and an independent η ∼ N(0, σ), we can consider the pair (f + η, y). It turns out that

s̃mECE σ (D) = ECE(π_R(f + η), y),

where π_R : R → [0, 1] collapses points that differ by a reflection around integers (see Section 1.1).

Unfortunately, despite being directly connected with the more desirable reliability diagram, and having a more immediate interpretation as the ECE of a noisy prediction, this newly introduced measure s̃mECE has its own problems, and is generally mathematically much poorer-behaved than smECE. In particular, it is no longer the case that if we start with a perfectly calibrated distribution and apply smoothing with a relatively large bandwidth σ, the value of the integral (8) stays small. In fact, it might grow as we add more smoothing.⁵

Nevertheless, if we choose the correct bandwidth σ*, as guided by the smECE considerations, the integral (8), which is visually encoded by the reliability diagram we propose, is still within a constant factor of the actual smECE σ* (D), and hence provides a consistent calibration measure.

Lemma 13. For any σ we have

s̃mECE σ (D) = smECE σ (D) ± cσ,

where c = √(2/π) ≤ 0.8. In particular, for σ* s.t. smECE σ* (D) = σ*, we have

s̃mECE σ* (D) ≈ smECE σ* (D).
(The proof can be found in Appendix A.3).
General Metrics
Our previous discussion implicitly assumed the trivial metric on the interval, d(u, v) = |u − v|. We will now explore which aspects of our results extend to more general metrics over the interval [0, 1]. This is relevant if, for example, our application downstream of the predictor is more sensitive to miscalibration near the boundaries.

The study of consistency measures with respect to general metrics is also motivated by the results of Błasiok et al. (2023). There it was shown that for any proper loss function l, there is an associated metric d_l on [0, 1] such that a predictor has small weak calibration error with respect to d_l if and only if the loss l cannot be significantly improved by post-composition with a function that is Lipschitz with respect to d_l. Specifically, they proved
wCE d l (D) ≲ E (f,y)∼D [l(f, y)] − inf κ E (f,y)∼D [l(κ(f ), y)] ≲ wCE d l (D),
where κ : [0, 1] → [0, 1] ranges over all functions Lipschitz with respect to the d_l metric, and wCE_{d_l}(D) is the weak calibration error (as introduced by Kakade and Foster (2008), and extended to general metrics in Błasiok et al. (2023); see Definition 14).

The most intuitive special case of the above result is the square loss function, which corresponds to the trivial metric on the interval d(u, v) = |u − v|. In practice, different proper loss functions are also extensively used, the prime example being the cross-entropy loss l(f, y) := −(y ln f + (1 − y) ln(1 − f)), which is connected with the metric

d_logit(u, v) := |log(u/(1 − u)) − log(v/(1 − v))| on [0, 1].

Thus, we may want to generalize our results to also apply to non-trivial metrics.
General Duality
We will prove a more general statement of the duality theorem in Błasiok et al. (2023). Specifically, they showed that the minimization problem in the definition of dCE can be dually expressed as the maximal correlation between the residual r := y − f and a bounded Lipschitz function of the prediction f. This notion, which we will refer to as the weak calibration error, first appeared in Kakade and Foster (2008), and was further explored in Błasiok et al. (2023).⁶

Definition 14 (Weak calibration error). For a distribution D over [0, 1] × {0, 1} of pairs of prediction and outcome, and a metric d on the space [0, 1] of all possible predictions, we define

wCE_d(D) := sup_w E_{(f,y)∼D} (f − y) w(f), (9)

where the supremum is taken over all functions w : [0, 1] → [−1, 1] which are 1-Lipschitz with respect to the metric d.⁷

For the trivial metric on the interval, d(u, v) = |u − v|, wCE was known to be linearly related to dCE (Błasiok et al., 2023). We show in this paper that the duality theorem connecting wCE and dCE holds much more generally, for a broad family of metrics.

Theorem 15 (Błasiok et al. (2023)). If a metric d on the interval satisfies d(u, v) ≳ |v − u|, then wCE_d ≈ dCE_d.
The more general formulation provided by Theorem 15 can be shown by following the original proof closely, step by step. We provide an alternate (simplified and streamlined) proof in Appendix A.9.

5.2 The dCE_{d_logit} is a consistent calibration measure with respect to ℓ_1

As it turns out, for a relatively wide family of metrics on the space of predictions (including the d_logit metric), the associated calibration measures are consistent calibration measures with respect to the ℓ_1 metric. The main theorem we prove in this section is the following.

Theorem 16. If a metric d : [0, 1]^2 → R ∪ {±∞} satisfies d(u, v) ≳ |u − v|, and moreover for some c > 0 we have ∀ε, ∀u, v ∈ [ε, 1 − ε]: d(u, v) ≤ |u − v| ε^{−c}, then dCE_d is a consistent calibration measure.

The proof of this theorem (as is the case for many consistency proofs for calibration measures) heavily uses the duality Theorem 15: since proving that a function is a consistent calibration measure amounts to providing a lower and an upper bound, it is often convenient to use the dCE formulation for one bound and the wCE formulation for the other.

The lower bound in Theorem 16 is immediate: since d(u, v) ≥ ℓ_1(u, v), the induced Wasserstein distances on the space [0, 1] × {0, 1} satisfy the same inequality, hence dCE_d ≥ dCE_{ℓ_1}, and dCE_{ℓ_1} ≥ dCE/2 by Claim 33.

As it turns out, if the metric of interest is well-behaved except near the endpoints of the unit interval, we can also prove the converse inequality, and lower bound wCE(D) by a polynomial of wCE_d(D).

Lemma 17. Let d : [0, 1]^2 → R_+ ∪ {∞} be any metric satisfying, for some c > 0,

∀ε, ∀u, v ∈ [ε, 1 − ε]: d(u, v) ≤ |u − v| ε^{−c}.

Then wCE_d(D)^q ≲ wCE(D), where q := max(c + 1, 2).
(Proof in Appendix A.5.)
We are now ready to prove the main theorem.

Proof of Theorem 16. We have dCE_d(D) ≥ dCE(D)/2 by the discussion above; on the other hand, Theorem 15 and Lemma 17 imply the converse inequality:

dCE_d(D) ≈ wCE_d(D) ≤ wCE(D)^{1/q} ≈ dCE(D)^{1/q}.

Corollary 18. For the metric induced by the cross-entropy loss function, d_logit(u, v) := |ln(u/(1 − u)) − ln(v/(1 − v))|, the wCE_{d_logit} is a consistent calibration measure.

Proof. To verify the conditions of Theorem 16, it is enough to check that logit(t) := ln(t/(1 − t)) satisfies C ≤ (d/dt) logit(t) ≤ C · min(t, 1 − t)^{−c}. Since (d/dt) logit(t) = 1/(t(1 − t)), these conditions are satisfied with c = 1 and C = 4.
Generalized SmoothECE
We now generalize the definition of SmoothECE to other metrics, and show that it remains a consistent calibration measure with respect to its metric. Motivated by the logit example discussed above, a concrete way to introduce a non-trivial metric on the space of predictions [0, 1] is to consider a continuous and increasing function h : [0, 1] → R ∪ {±∞}, and the metric obtained by pulling back the metric from R to [0, 1] through h, i.e.

d_h(u, v) := |h(u) − h(v)|.
Using the isomorphism h between ([0, 1], d h ) and a subinterval of (R∪{±∞}, |•|), we can introduce a generalization of the notion of smECE, where the kernel-smoothing is being applied in the image of h.
More concretely, and by analogy with (3) and (4), for a probability distribution D over [0, 1] × {0, 1}, a kernel K : R × R → R_+, and an increasing continuous map h : [0, 1] → R ∪ {±∞}, we define

r̂_{K,h}(t) := E_{(f,y)}[K(t, h(f))(f − y)] / E_{(f,y)}[K(t, h(f))],
δ̂_{K,h}(t) := E_{(f,y)} K(t, h(f)).

Again, we define smECE_{K,h}(D) := ∫ |r̂_{K,h}(t)| δ̂_{K,h}(t) dt, which simplifies to

smECE_{K,h}(D) = ∫ | E_{(f,y)∼D} K(t, h(f))(f − y) | dt.
As it turns out, with the duality theorem in place (Theorem 15) the entire content of Section 3 can be carried over in this more general context without much trouble.
Specifically, if we define smECE_{σ,h} := smECE_{K_{N,σ},h}, where K_{N,σ} is the Gaussian kernel with scale σ, then σ → smECE_{σ,h}(f, y) is non-increasing in σ, and therefore there is a unique fixed point σ* s.t. σ* = smECE_{σ*,h}(f, y).

We can now define smECE_{*,h}(f, y) := σ*, and we have the following generalization of Theorem 7, showing that SmoothECE remains a consistent calibration measure even under different metrics.

Theorem 19. For any increasing and continuous function h : [0, 1] → R ∪ {±∞}, if we define d_h : [0, 1]^2 → R_+ to be the metric d_h(u, v) = min(|h(u) − h(v)|, 2), then

dCE_{d_h}(D) ≲ smECE_{*,h}(D) ≲ dCE_{d_h}(D).
(Proof in Appendix A.7.)
Note that if the function h is such that the associated metric d_h satisfies the conditions of Theorem 16, then as an additional corollary we can deduce that smECE_{*,h} is also a consistent calibration measure in the standard sense.
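To make the generalized construction concrete, here is a numpy/scipy sketch of smECE_{σ,h} for h = logit (the grid construction and the clipping near the endpoints are our own choices, made for numerical convenience):

import numpy as np
from scipy.stats import norm

def smECE_sigma_logit(f, y, sigma, clip=1e-4, grid_size=2000):
    """smECE_{sigma,h} for h = logit: Gaussian smoothing of the residuals in
    logit space, integrated over a wide grid covering the data."""
    fc = np.clip(f, clip, 1 - clip)
    z = np.log(fc / (1 - fc))                       # predictions in h-space
    t = np.linspace(z.min() - 5 * sigma, z.max() + 5 * sigma, grid_size)
    K = norm.pdf(t[:, None], loc=z[None, :], scale=sigma)
    integrand = np.abs((K * (f - y)[None, :]).mean(axis=1))
    return np.trapz(integrand, t)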
Obtaining perfectly calibrated predictor via post-processing
One of the appealing properties of the notion dCE, as introduced in Błasiok et al. (2023), was the theorem stating that if a predictor (f, y) is close to calibrated, then a nearby perfectly calibrated predictor can be obtained simply by post-processing all predictions by a univariate function. Specifically, they showed that for a distribution D over [0, 1] × {0, 1}, there is κ : [0, 1] → [0, 1] such that for (f, y) ∼ D the pair (κ(f), y) is perfectly calibrated, and moreover E |κ(f) − f| ≲ dCE(D).

As it turns out, through the notion of smECE_h we can prove a similar statement for the more general distances to calibration dCE_{d_h}. The only difference is that we allow the post-processing κ to be a randomized function.

Theorem 20. For any increasing function h : [0, 1] → R ∪ {±∞} and any distribution D supported on [0, 1] × {0, 1}, there is a probabilistic function κ : [0, 1] → [0, 1] such that for (f, y) ∼ D, the pair (κ(f), y) is perfectly calibrated and

E d_h(κ(f), f) ≲ smECE_{*,h}(D),

where d_h is the metric induced by h. In particular,

E d_h(κ(f), f) ≲ dCE_{d_h}(D).
Proof in the appendix.
Experiments
We include several experiments demonstrating our method on public datasets in various domains, from deep learning to meteorology. The sample sizes vary between several hundred and 50K, to show how our method behaves for different data sizes. In each setting, we compare the classical binned reliability diagram to the smooth diagram generated by our Python package. Our diagrams include bootstrapped uncertainty intervals for the SmoothECE, as well as kernel density estimates of the predictions (at the same kernel bandwidth σ* used to compute the SmoothECE). For binned diagrams, the number of bins is chosen to be optimal for the regression test MSE loss, optimized via cross-validation. Code to reproduce these figures is available at https://github.com/apple/ml-calibration.
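For readers implementing this themselves, a minimal percentile-bootstrap sketch for the uncertainty of the SmoothECE might look as follows (our sketch, reusing the smECE_fixed_point function from the Section 3 sketch; the relplot package has its own implementation):

import numpy as np

def bootstrap_smECE(f, y, n_boot=200, seed=0):
    """Percentile bootstrap interval for smECE under resampling of the data."""
    rng = np.random.default_rng(seed)
    n = len(f)
    stats = [smECE_fixed_point(f[idx], y[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(stats, [2.5, 97.5])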
Deep Networks. Figure 3 shows the confidence calibration of a ResNet34 (He et al., 2016) on the ImageNet validation set (Deng et al., 2009). ImageNet is an image classification task with 1000 classes, and has a validation set of 50,000 samples. In this multi-class setting, the model f outputs a distribution over k = 1000 classes, f : X → ∆_k.

Confidence calibration is defined as calibration of the pairs (max_{c∈[k]} f_c(x), 1{argmax_{c∈[k]} f_c(x) = y}), which define a distribution over [0, 1] × {0, 1}. That is, confidence calibration measures the agreement between the confidence and the correctness of the predictions. We use the publicly available data from Hollemans (2020), evaluating the models trained by Wightman (2019).

Solar Flares. Figure 4 shows the calibration of a method for forecasting solar flares, over a period of 731 days. We use the data from Leka et al. (2019), which was used to compare reliability diagrams in Dimitriadis et al. (2021). Specifically, we consider forecasts of the event that a class C1.0+ solar flare occurs on a given day, made by the DAFFS forecasting model developed by NorthWest Research Associates. Overall, such solar flares occur on 25.7% of the 731 recorded days. We use the preprocessed data from the replication code at https://github.com/TimoDimi/replication_DGJ20. For further details of this dataset, we refer the reader to Dimitriadis et al. (2023, Section 6.1) and Leka et al. (2019).

Precipitation in Finland. Figure 5 shows the calibration of daily rain forecasts made by the Finnish Meteorological Institute (FMI) in 2003, for the city of Tampere in Finland. Forecasts are made for the probability that precipitation exceeds 0.2mm over a 24-hour period; the dataset includes records for 346 days (Nurmi, 2003).
Synthetic Data. For demonstration purposes, we apply our method to a simple synthetic dataset in Figure 6. One thousand samples f_i ∈ [0, 1] are drawn uniformly in the interval [0, 1], and the conditional distribution of labels E[y_i | f_i] is given by the green line in Figure 6. Note that the true conditional distribution is non-monotonic in this example.
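A similar toy setup can be generated in a few lines (the sinusoidal ground truth below is illustrative and is not the exact curve of Figure 6):

import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0.0, 1.0, 1000)              # uniform predictions
mu = 0.5 + 0.4 * np.sin(2 * np.pi * f)       # a non-monotone E[y | f]
y = (rng.uniform(size=1000) < mu).astype(float)
# print(smECE_fixed_point(f, y))             # using the sketch from Section 3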
Limitations. One limitation of our method is that, since it is generic, there may be better tools to use in special cases when we can assume more structure in the prediction distributions. For example, if the predictor is known to only output a small finite set of possible probabilities, then it is reasonable to simply estimate conditional probabilities by using these points as individual bins. The rain forecasts in Figure 5 have this structure, since the forecasters only predict probabilities in multiples of 10%; in such cases, using bins which are correctly aligned is a very reasonable option. Finally, note that the bootstrapped uncertainty bands shown in our reliability diagrams should not be interpreted as confidence intervals for the true regression function. Rather, the bands reflect the sensitivity of our particular regressor under resampling of the data.
Conclusion
We have presented a method for computing calibration error which is both mathematically well-behaved (i.e. consistent in the sense of Błasiok et al. (2023)) and can be visually represented in a reliability diagram. We have also released a Python package which efficiently implements the suggested method. We hope this work aids practitioners in computing, analyzing, and visualizing the reliability of probabilistic predictors.
A Appendix
A.1 Proof of Theorem 7
In this section we will prove Lemma 23 and Lemma 25, the two main steps in the proof of Theorem 7, corresponding respectively to the lower and upper bound. As it turns out, those two lemmas are true for a much wider class of kernels. The restriction of the kernel K to be a Gaussian kernel stems from the monotonicity property (Lemma 30), which was convenient for us to define the scale-invariant measure smECE* by considering a fixed-point scale σ*. In Appendix A.2 we will show that the reflected Gaussian kernel satisfies the conditions of Lemma 23 and Lemma 25.
We will first define a dual variant of dCE.
Definition 21. We define the weak calibration error to be the maximal correlation of the residual (f − y) with a 1-Lipschitz, [−1, 1]-bounded function of the prediction, i.e.

wCE(D) := sup_{w∈L} E_{(f,y)∼D} w(f)(f − y),

where L is the family of all 1-Lipschitz functions from [0, 1] to [−1, 1].

To show that smECE* is a consistent calibration measure, we will heavily use the duality theorem proved in Błasiok et al. (2023): wCE and dCE are equivalent up to a constant factor. A similar statement is proved in this paper, in greater generality (see Theorem 15).

Theorem 22 (Błasiok et al. (2023)). For any distribution D over [0, 1] × {0, 1} we have
dCE(D) ≤ wCE(D) ≤ 2dCE(D).
Intuitively, this is useful since showing that the new measure smECE is a consistent calibration measure corresponds to upper and lower bounding it by polynomials of dCE. With the duality theorem above, we can use the minimization formulation dCE for one direction of the inequality, and the maximization formulation wCE for the other. Indeed, we first show that wCE is upper bounded by smECE if we add a penalty term for the "scale" of the kernel K.

Lemma 23. Let U ⊂ R be a (possibly infinite) interval containing [0, 1], and let K : U × U → R be a non-negative symmetric kernel satisfying, for every t_0 ∈ [0, 1], ∫ K(t_0, t) dt = 1 and ∫ |t − t_0| K(t, t_0) dt ≤ γ. Then wCE(D) ≤ smECE_K(D) + γ.

Proof. Let us consider an arbitrary 1-Lipschitz function w : [0, 1] → [−1, 1]. Since the kernel K is non-negative and ∫ K(t, t_0) dt = 1, we can sample a triple (f̃, f, y) s.t. (f, y) ∼ D and f̃ is distributed according to the density K(·, f). In particular, E |f̃ − f| ≤ γ.

We can now bound

E_{(f,y)∼D}[w(f)(f − y)] ≤ E[w(f̃)(f − y)] + E[|f − f̃| · |f − y|] ≤ γ + E[w(f̃)(f − y)]. (10)
We now observe that

E[(f − y) | f̃ = t] = E_{f,y}[K(t, f)(f − y)] / E_{f,y}[K(t, f)] = r̂(t),

and the marginal density of f̃ is exactly

µ_{f̃}(t) = E_{(f,y)∼D} K(t, f) = δ̂(t).

This leads to

E[w(f̃)(f − y)] = ∫ w(t) r̂(t) δ̂(t) dt ≤ ∫ |r̂(t)| δ̂(t) dt = smECE_K(f, y). (11)

Combining (10) and (11), we conclude the statement of this lemma.

To show that smECE_K(D) is upper bounded by dCE, we will first show that smECE_K is zero for perfectly calibrated distributions, and then we will show that for well-behaved kernels smECE_K(D) is Lipschitz with respect to the Wasserstein distance on the space of distributions.
Claim 24. For any perfectly calibrated distribution D and any kernel K we have
smECE K (D) = 0.
Proof. Indeed, by the definition of r̂ we have

r̂(t) = E_{f,y}[K(f, t)(f − y)] / E_{f,y}[K(f, t)].

Since the distribution D is perfectly calibrated, we have E_{(f,y)∼D}[(f − y) | f] = 0, hence

E_{f,y}[K(f, t)(f − y)] = E_f [ K(f, t) · E_{(f,y)∼D}[(f − y) | f] ] = 0.

This means that the function r̂(t) is identically zero, and therefore

smECE_K(D) = ∫ |r̂(t)| δ̂(t) dt = 0.
Lemma 25. Let K be a symmetric, non-negative kernel, and let λ ≤ 1 be a constant such that for any t_0, t_1 ∈ [0, 1] we have ∫ |K(t_0, t) − K(t_1, t)| dt ≤ |t_0 − t_1|/λ. Let D_1, D_2 be a pair of distributions over [0, 1] × {0, 1}. Then

|smECE_K(D_1) − smECE_K(D_2)| ≤ (1/λ + 1) · W_1(D_1, D_2),

where the Wasserstein distance is induced by the metric d((f_1, y_1), (f_2, y_2)) = |f_1 − f_2| + |y_1 − y_2| on [0, 1] × {0, 1}.

Proof. We have

smECE_K(D) = ∫ | E_{(f,y)∼D}[K(t, f)(y − f)] | dt.

If we have a coupling (f_1, f_2, y_1, y_2) s.t. E[|f_1 − f_2| + |y_1 − y_2|] ≤ δ, with (f_1, y_1) ∼ D_1 and (f_2, y_2) ∼ D_2, then by the triangle inequality we can decompose

|smECE_K(D_1) − smECE_K(D_2)| ≤ ∫ E[ |K(t, f_1) − K(t, f_2)| · |y_1 − f_1| ] dt + ∫ E[ K(t, f_2) · (|f_1 − f_2| + |y_1 − y_2|) ] dt.

We can bound those two terms separately:

∫ E[ |K(t, f_1) − K(t, f_2)| · |y_1 − f_1| ] dt ≤ E ∫ |K(t, f_1) − K(t, f_2)| dt ≤ (1/λ) E[|f_1 − f_2|] ≤ δ/λ,

and similarly

∫ E[ K(t, f_2) · (|f_1 − f_2| + |y_1 − y_2|) ] dt = E[ (∫ K(t, f_2) dt) · (|f_1 − f_2| + |y_1 − y_2|) ] = E[|f_1 − f_2| + |y_1 − y_2|] ≤ δ.

Corollary 26. Under the same assumptions on K as in Lemma 25, for any distribution D over [0, 1] × {0, 1},

smECE_K(D) ≤ (1/λ + 1) · dCE(D).

Proof. By the definition of dCE there is a perfectly calibrated distribution D′ such that W_1(D, D′) ≤ dCE(D), since W_1(D, D′) decreases as we change the underlying metric to a smaller one. By Claim 24, smECE_K(D′) = 0, and the corollary follows directly from Lemma 25.
A.2 Facts about reflected Gaussian kernel
We now wish to argue that Lemma 23 and Lemma 25 imply the more specialized statements Lemma 8 and Lemma 9, respectively: the reflected Gaussian kernel K̃_{N,σ} satisfies the conditions of Lemma 23 and Lemma 25 with γ and λ proportional to σ.

Lemma 27. The reflected Gaussian kernel K̃_{N,σ} defined by (2) satisfies:

1. For every t_0, we have ∫ K̃_{N,σ}(t, t_0) dt = 1.

2. For every t_0, we have ∫ |t − t_0| K̃_{N,σ}(t, t_0) dt ≤ √(2/π) · σ.

3. For every t_0, t_1, we have ∫ |K̃_{N,σ}(t, t_0) − K̃_{N,σ}(t, t_1)| dt ≤ |t_0 − t_1|/(2σ).

Proof. For any given t_0, the function K̃_{N,σ}(t_0, ·) is the probability density function of the random variable π_R(t_0 + η), where η ∼ N(0, σ) and π_R : R → [0, 1] is defined in Section 1.1. In particular, we have |π_R(x) − π_R(y)| ≤ |x − y|.

Property 1 is satisfied since K̃_{N,σ}(·, t_0) is a probability density function.

Property 2 follows since

∫ |t − t_0| K̃_{N,σ}(t, t_0) dt = E_{η∼N(0,σ)} |π_R(t_0 + η) − t_0| = E_{η∼N(0,σ)} |π_R(t_0 + η) − π_R(t_0)| ≤ E_{η∼N(0,σ)} |η| = σ √(2/π).

Finally, property 3 also follows from the corresponding fact for a Gaussian random variable: the integral ∫ |K̃_{N,σ}(t, t_0) − K̃_{N,σ}(t, t_1)| dt is just the total variation distance between π_R(t_0 + η) and π_R(t_1 + η), where η ∼ N(0, σ), and by the data processing inequality we have

TV(π_R(t_0 + η), π_R(t_1 + η)) ≤ TV(t_0 + η, t_1 + η) ≤ |t_0 − t_1|/(2σ),

where the last bound on the total variation distance between two one-dimensional Gaussians is a special case of Theorem 1.3 in ?.⁸
Definition 28. We say that a parameterized family of kernels K_σ : U × U → R, where [0, 1] ⊂ U ⊂ R, is a proper kernel family if for any σ_1 ≤ σ_2 there is a non-negative kernel H_{σ_1,σ_2} : U × U → R satisfying ∥H_{σ_1,σ_2}∥_{1→1} ≤ 1 and K_{σ_2} = K_{σ_1} * H_{σ_1,σ_2}.

Here K * H denotes

[K * H](t_1, t_2) := ∫_U K(t_1, t) H(t, t_2) dt, and ∥H∥_{1→1} := sup_{t_0∈U} ∫_U |H(t_0, t)| dt.

Claim 29. The family of reflected Gaussian kernels K̃_{N,σ} is a proper kernel family, with

K̃_{N,σ_1} = K̃_{N,σ_2} * K̃_{N,√(σ_1^2 − σ_2^2)} for σ_2 ≤ σ_1.

Proof. Let σ_3 := √(σ_1^2 − σ_2^2); we wish to show that K̃_{N,σ_1} = K̃_{N,σ_2} * K̃_{N,σ_3}. To show this, it is enough to prove that for any f we have f * K̃_{N,σ_1} = f * K̃_{N,σ_2} * K̃_{N,σ_3}. This is true by Claim 12, since the corresponding property holds for the standard Gaussian kernel, K_{N,σ_2} * K_{N,σ_3} = K_{N,σ_1} (equivalently: for two independent random variables Z_2 ∼ N(0, σ_2) and Z_3 ∼ N(0, σ_3) we have Z_2 + Z_3 ∼ N(0, σ_1)).
A.3 Useful properties of smECE.
Lemma 30 (Monotonicity of smECE). Let K_σ be any proper kernel family parameterized by σ (see Definition 28). If σ_1 ≤ σ_2, then smECE_{K_{σ_1}}(D) ≥ smECE_{K_{σ_2}}(D).

Proof. Let us define h_σ(t) := E_{(f,y)∼D} K_σ(t, f)(f − y) = r̂(t) δ̂(t), so that smECE_{K_σ}(D) = ∥h_σ∥_1 := ∫ |h_σ(t)| dt.
Since σ 1 ≤ σ 2 and K σ is a proper kernel family, we can write K σ2 = K σ1 * H σ1,σ2 .
We now have

h_{σ_1} * H_{σ_1,σ_2} = E_{(f,y)} (f − y) [K_{σ_1}(·, f) * H_{σ_1,σ_2}] = E_{(f,y)} (f − y) [K_{σ_1} * H_{σ_1,σ_2}](·, f) = E_{(f,y)} (f − y) K_{σ_2}(·, f) = h_{σ_2}.

On the other hand, for any function g we have ∥g * H_{σ_1,σ_2}∥_1 ≤ ∥g∥_1 ∥H_{σ_1,σ_2}∥_{1→1}, and ∥H_{σ_1,σ_2}∥_{1→1} ≤ 1 by the definition of a proper kernel family. Therefore ∥h_{σ_2}∥_1 ≤ ∥h_{σ_1}∥_1, i.e. smECE_{K_{σ_2}}(D) ≤ smECE_{K_{σ_1}}(D).

Corollary 31. In particular, for σ_1 ≤ σ_2 we have smECE_{σ_2}(D) ≤ smECE_{σ_1}(D).

Proof. Reflected Gaussian kernels form a proper kernel family, by Claim 29.
Lemma 32. For any σ, we have s̃mECE σ (D) = smECE σ (D) ± σ√(2/π).

Proof. Let f̄(t) := E_{f,y}[K̃_{N,σ}(t, f) f] / E_{f,y}[K̃_{N,σ}(t, f)]. We have

|s̃mECE σ (f, y) − smECE σ (f, y)| ≤ ∫ |f̄(t) − t| δ̂(t) dt ≤ ∫ E_f[K̃_{N,σ}(t, f) |f − t|] dt = E_f ∫ K̃_{N,σ}(t, f) |f − t| dt = E_f E_{Z∼N(f,σ)} |f − π_R(Z)| ≤ E_{Z∼N(0,σ)} |Z| = σ√(2/π).
A.4 Equivalence between definitions of dCE for trivial metric
The dCE was defined in Błasiok et al. (2023) as the Wasserstein distance to the set of perfectly calibrated distributions over X := [0, 1] × {0, 1}, where X is equipped with the metric

d_1((f_1, y_1), (f_2, y_2)) := |f_1 − f_2| if y_1 = y_2, and ∞ otherwise.

While generalizing the notion to dCE_d, where d is a general metric on [0, 1], we chose a different metric on X (specifically, we put a different metric on the second coordinate), namely

d((f_1, y_1), (f_2, y_2)) = d(f_1, f_2) + |y_1 − y_2|.
As it turns out, for the case of a trivial metric on the space of predictions, this choice is inconsequential, but the new definition has better generalization properties.
Claim 33. For the metric ℓ_1(f_1, f_2) = |f_1 − f_2|, we have dCE(D) ≲ dCE_{ℓ_1}(D) ≤ dCE(D), where ≲ hides a universal constant.
A.7 Proof of Theorem 19
Lemma 23 and Lemma 25 have their corresponding versions in the more general setting where a metric is induced on the space of predictions [0, 1] by a monotone function h : [0, 1] → R; the proofs are almost identical to those supplied in the special case, except that we need to use the more general version of the duality theorem between wCE and dCE with respect to a metric d (Theorem 15).

Lemma 36. Let h be an increasing function h : [0, 1] → R ∪ {±∞}, and let d_h(u, v) = |h(u) − h(v)| be the induced metric on [0, 1]. Let U ⊂ R be a (possibly infinite) interval containing h([0, 1]), and let K : U × U → R be a non-negative symmetric kernel satisfying, for every t_0 ∈ [0, 1], ∫ K(t_0, t) dt = 1 and ∫ |t − t_0| K(t, t_0) dt ≤ γ. Then

wCE_{d_h}(D) ≤ smECE_{K,h}(D) + γ.
The proof is identical to the proof of Lemma 23.
Lemma 37. Let h be an increasing function h : [0, 1] → R ∪ {±∞}, and let d_h(u, v) := |h(u) − h(v)| be the induced metric on [0, 1]. Let K be a symmetric, non-negative kernel such that for some λ ≤ 1 and any t_0, t_1 ∈ [0, 1] we have ∫ |K(t_0, t) − K(t_1, t)| dt ≤ |t_0 − t_1|/λ. Let D_1, D_2 be a pair of distributions over [0, 1] × {0, 1}. Then

|smECE_{h,K}(D_1) − smECE_{h,K}(D_2)| ≤ (1/λ + 1) · W_1(D_1, D_2),

where the Wasserstein distance is induced by the metric d((f_1, y_1), (f_2, y_2)) = |h(f_1) − h(f_2)| + |y_1 − y_2| on [0, 1] × {0, 1}.
The proof is identical to the proof of Lemma 25.
With those lemmas in hand, as well as the duality theorem for general metrics (Theorem 15), the proof of Theorem 19 proceeds exactly as the proof of Theorem 7: Lemma 37 implies that the fixpoint satisfies σ* ≲ dCE_{d_h}(D), and on the other hand, again at this fixpoint σ*, using Theorem 15 and Lemma 36, we have

dCE_{d_h}(D) ≈ wCE_{d_h}(D) ≤ smECE_{h,σ*}(D) + σ* = 2σ*. (12)
A.8 Proof of Theorem 20
Let us consider a distribution D over [0, 1] × {0, 1} and a monotone function h, such that smECE_{*,h}(D) = σ*.

First, let us define the randomized function κ_1: let π_0 : R → R be the projection of R onto h([0, 1]), and let η ∼ N(0, σ*). We define κ_1(f) := h^{−1}(π_0(h(f) + η)). We claim that this κ_1 satisfies the following two properties:

1. E_{(f,y)∼D} d_h(f, κ_1(f)) ≲ σ*,

2. ECE(κ_1(f), y) ≲ σ*.

Indeed, the first inequality is immediate:

E[d_h(f, κ_1(f))] = E[|h(f) − π_0(h(f) + η)|] ≤ E|η| ≤ σ*.

The proof that ECE(κ_1(f), y) ≲ σ* is identical to the proof of Lemma 13, where such a statement was shown for the standard metric (corresponding to h(x) = x).

Finally, those two properties together imply the statement of the theorem: indeed, writing f′ := κ_1(f), since ECE(f′, y) ≲ σ* we can take κ_2(t) := E[y | f′ = t]. In this case the pair (κ_2(f′), y) is perfectly calibrated, and by the definition of ECE we have E|κ_2(f′) − f′| = ECE(f′, y). Composing now κ := κ_2 ∘ κ_1, we have

E_{(f,y)∼D} |κ(f) − f| ≤ E[|κ_2(κ_1(f)) − κ_1(f)|] + E[|κ_1(f) − f|] ≲ σ*.

Moreover, the distribution D′ of (κ(f), y) is perfectly calibrated.
A.9 General duality theorem (Proof of Theorem 15)
Let P ⊂ ∆([0, 1] × {0, 1}) be the family of perfectly calibrated distributions. This set is cut out from the full probability simplex ∆([0, 1] × {0, 1}) by a family of linear constraints: specifically, µ ∈ P if and only if ∀t, (1 − t)µ(t, 1) − tµ(t, 0) = 0.

Definition 38. Let F(H, R) be the family of all functions from H to R. For a convex set of probability distributions Q ⊂ ∆(H), we define Q* ⊂ F(H, R) to be the set of all functions q s.t. for all D ∈ Q we have E_{x∼D} q(x) ≤ 0.

Claim 39. The set P* ⊂ F([0, 1] × {0, 1}, R) is given by the following inequalities: a function H ∈ P* if and only if ∀t, E_{y∼Ber(t)} H(t, y) ≤ 0.

Lemma 40. Let W_1(D_1, D_2) be the Wasserstein distance between two distributions D_1, D_2 ∈ ∆([0, 1] × {0, 1}) with an arbitrary metric d on the set [0, 1] × {0, 1}, and let Q ⊂ ∆([0, 1] × {0, 1}) be a convex set of probability distributions. The value of the minimization problem min_{D_1∈Q} W_1(D_1, D) is equal to

max E_{(f,y)∼D} H(f, y) s.t. H is Lipschitz with respect to d and H ∈ Q*.

Proof. The weak duality is clear: for any Lipschitz function H ∈ Q* and any distribution D_1 ∈ Q, we have H(D) ≤ H(D_1) + W_1(D_1, D) ≤ W_1(D_1, D).

For the strong duality, let us consider the linear space AE of all finite signed Radon measures µ on X := [0, 1] × {0, 1} satisfying µ(X) = 0. We equip this space with the norm ∥µ∥_AE := EMD(µ_+, µ_−) for measures s.t. µ_+(X) = 1 (and extended by ∥λµ∥_AE = λ∥µ∥_AE to the entire space). The dual of this space is Lip_0(X), the space of all Lipschitz functions on X which are 0 at some fixed base point x_0 ∈ X (the choice of base point is inconsequential). The norm on Lip_0(X) is ∥W∥_L, given by the Lipschitz constant of W (see Chapter 3 in ? for proofs and a more extended discussion). For a function H on X and a measure µ on X, we will write H(µ) to denote ∫ H dµ. We shall now apply the following simple corollary of the Hahn-Banach theorem.

Claim 41 (?, Theorem 2.5). Let (X, ∥·∥_X) be a normed linear space, x_0 ∈ X, and P̄ ⊂ X a convex set, and let d(x, P̄) := inf_{p∈P̄} ∥x − p∥_X. Then there is w ∈ X* such that ∥w∥_{X*} = 1 and inf_{p∈P̄} w(p) − w(x) = d(x, P̄).

Take the convex set P̄ ⊂ AE given by P̄ := {D − q : q ∈ Q}. Clearly min_{D_1∈Q} W_1(D, D_1) = d_AE(0, P̄) by the definition of the space AE, and hence, using the claim above, we deduce

d(0, P̄) = max_{H̄∈Lip_0 : ∥H̄∥_L = 1} inf_{p∈P̄} H̄(p).

Taking H̄ which realizes this maximum, we can now consider the shift H := H̄ − sup_{q∈Q} H̄(q), so that H ∈ Q*, and verify

min_{D_1∈Q} W_1(D, D_1) = d(0, P̄) = inf_{p∈P̄} H̄(p) = H̄(D) − sup_{q∈Q} H̄(q) = H(D).

Corollary 42. For any metric d on [0, 1] × {0, 1}, dCE_d(D) is equal to the value of the following maximization program:

max E_{(f,y)∼D} H(f, y) s.t. H is Lipschitz with respect to d and ∀t, E_{y∼Ber(t)} H(t, y) ≤ 0.

Lemma 43. For any metric d on [0, 1], if we define d̄ to be the metric on [0, 1] × {0, 1} given by d̄((f_1, y_1), (f_2, y_2)) := d(f_1, f_2) + |y_1 − y_2|, then wCE_d(D) ≥ dCE_{d̄}(D)/2.

Proof. We shall compare the value of wCE_d(D) with the optimal value of the dual as in Corollary 42. Let us assume that for a distribution D we have a function H : [0, 1] × {0, 1} → R, s.t. E_{(f,y)∼D} H(f, y) = OPT, which is Lipschitz with respect to d̄. We wish to find a function w : [0, 1] → [−1, 1], Lipschitz with respect to d, s.t. E_{f,y}(f − y)w(f) ≥ OPT/2. Let us define w(f) := H(f, 0) − H(f, 1). We will show that w is 2-Lipschitz, [−1, 1]-bounded, and satisfies E_{f,y}(f − y)w(f) ≥ OPT; the statement of the lemma then follows by scaling. The condition ∀f, E_{y∼Ber(f)} H(f, y) ≤ 0 is equivalent to f w(f) ≥ H(f, 0). Hence

H(f, y) = y H(f, 1) + (1 − y) H(f, 0) = H(f, 0) − y w(f) ≤ (f − y) w(f),

which implies E(f − y)w(f) ≥ E H(f, y). Moreover, the function w(f) is bounded by the construction of the metric d̄ and the assumption that H(f, y) is Lipschitz: indeed, |w(f)| = |H(f, 0) − H(f, 1)| ≤ d̄((f, 0), (f, 1)) ≤ 1.

To finish the proof of Theorem 15, we are left with the weak duality statement: if the distance d on [0, 1] satisfies d(u, v) ≥ |u − v|, then wCE_d(D) ≤ 2 dCE_d(D). This, as usual, is relatively easy. Let w be a Lipschitz function as in the definition of wCE_d, and let D′ be a perfectly calibrated distribution, supplied together with a coupling Π between D and D′ such that

E_{(f_1,y_1),(f_2,y_2)∼Π} [d(f_1, f_2) + |y_1 − y_2|] = dCE_d(D),

as in the definition of dCE_d(D) (where (f_1, y_1) is distributed according to D and (f_2, y_2) according to D′). Then

|E[(f_1 − y_1)w(f_1)]| ≤ |E[(f_2 − y_2)w(f_2)]| + E[|(f_1 − y_1)(w(f_1) − w(f_2))|] + E[|(f_1 − f_2 + y_1 − y_2)w(f_1)|],

and we can bound those three terms separately: E[(f_2 − y_2)w(f_2)] = 0, since D′ is perfectly calibrated; E[|(f_1 − y_1)(w(f_1) − w(f_2))|] ≤ E[d(f_1, f_2)], since w is Lipschitz with respect to d and |f_1 − y_1| ≤ 1; and E[|(f_1 − f_2 + y_1 − y_2)w(f_1)|] ≤ E[|f_1 − f_2| + |y_1 − y_2|] ≤ E[d(f_1, f_2) + |y_1 − y_2|]. Collecting these together, we get

E[(f_1 − y_1)w(f_1)] ≤ 2 E[d(f_1, f_2)] + E[|y_1 − y_2|] ≤ 2 dCE_d(D).
Figure 1: Left: Traditional reliability diagram based on binning, which is equivalent to histogram regression. Right: Proposed reliability diagram based on kernel regression, with our theoretically-justified choice of bandwidth. The width of the red line corresponds to the density of predictions, and the shaded region shows bootstrapped confidence intervals. Plot generated by our Python package relplot.
Figure 2: Illustration of how to compute the smooth reliability diagram, on a toy dataset of 8 samples.
Figure 3: Confidence calibration of ResNet34 on ImageNet. Data from Hollemans (2020).
Figure 4: Calibration of solar flare forecasts over a 731-day period. Data from Leka et al. (2019); Dimitriadis et al. (2023).
Figure 5: Calibration of daily rain forecasts in Finland in 2003. Data from Nurmi (2003).
Figure 6: Calibration of synthetic data in a toy example. Here, instead of kernel density estimates, we show bootstrapped uncertainty bands around our estimated regression function.
In the terminology of Bröcker (2008).
Briefly, this is because the true calibration function still depends on the distribution D in a discontinuous way. This discontinuity can manifest in the calibration measure, unless it is appropriately smoothed.
This definition differs slightly from the original definition in Błasiok et al. (2023) in that the distance on the second coordinate between two different outcomes was infinite (see Definition 6). Claim 33 shows that the specialization of this new definition to the trivial metric is equivalent up to a constant factor with the original dCE, but the definition presented here behaves better in general: it allows us to generalize the duality (Theorem 15) to arbitrary metrics on the predictions [0, 1].
This definition, which appeared in Błasiok et al. (2023), differs slightly from the more general Definition 1 introduced in this work, in that Definition 1 puts the two intervals within distance 1 from each other, as opposed to ∞ in Definition 6. As we show with Claim 33, this choice does not make a substantial difference for the standard metric on the interval, but Definition 1 generalizes better to other metrics.
This can be easily seen if we consider the trivial perfectly calibrated distribution, where the outcome is y ∼ Bernoulli(1/2) and the prediction f is deterministically 1/2. Then smECE_σ(D) = Cσ for some constant C ≈ 0.79.
Weak calibration was called the smooth calibration error in Błasiok et al. (2023). We revert to the original terminology, weak calibration error, to avoid confusion with the notion of smECE developed in this paper.
In Błasiok et al. (2023)
This special case, where the two variances are equal, is in fact an elementary calculation.
Proof. The lower bound dCE_{ℓ1} ≤ dCE is immediate, since dCE_{ℓ1} is the distance of D to P with respect to the Wasserstein distance induced by the metric d_1 on [0, 1] × {0, 1}, dCE is the Wasserstein distance with respect to the metric d_2, and we have a pointwise bound d_1 ≤ d_2. The other bound follows from Theorem 15 and Theorem 22: dCE and dCE_{ℓ1} are within a constant factor of wCE_{ℓ1}.

A.5 Proof of Lemma 17

Proof. Let us take w(x) [...]. We wish to show that wCE(f, y) ≳ ε^{c+1}. Indeed, let us take w̃(x) := w(π_I(x)), where I := [γ, 1 − γ], π_I : [0, 1] → I is the projection onto the interval I, and γ := ε/C for some large constant C. [...] On one of those two connected components, the correlation between the residual (y − f) and w_2 has to be at least ε/8. Since the other case is analogous, let us assume for concreteness that [...]. We will show that this implies Pr(f ≤ γ ∧ y = 1) ≳ ε, and refer to Claim 34 to finish the argument. Indeed, [...] where we finally specify γ := ε/32. To finish the proof, it is enough to show the following claim.

Claim 34. For a random pair (f, y) of prediction and outcome, if Pr(f ≤ γ ∧ y = 1) ≥ ε or Pr(f ≥ 1 − γ ∧ y = 0) ≥ ε, where γ = ε/8, then wCE(f, y) ≳ ε².

Proof. We will only consider the case Pr(f ≤ γ ∧ y = 1) ≥ ε; the other case is identical. Let us take w(x) := max(1 − x/2γ, 0). We have [...].

A.6 Sample complexity - proof of Theorem 11

Lemma 35. Let X : [0, 1] → R be a random function satisfying, with probability 1, ∥X∥_1 := ∫_0^1 |X(t)| dt ≤ 1 and sup_t X(t) ≤ σ. Assume moreover that for every t we have E[X(t)] = 0. Consider now m independent realizations X_1, X_2, . . . , X_m : [0, 1] → R, each identically distributed as X, and finally let [...]. Then [...].

Proof of Theorem 11. Let us first focus on the case σ = σ_0. For a pair (f, y), [...] ∥ Σ_i X^{(σ_0)}_{f_i, y_i} / m ∥_1, and similarly smECE(D) = ∥ E_{f,y∼D} X^{(σ_0)}_{f,y} ∥_1. This is a random function, since (f_i, y_i) is chosen at random from the distribution D, and note that: 1. The random functions X^{(σ_0)}_i for i ∈ {1, . . . , m} are independent and identically distributed. 2. With probability 1, we have ∥X^{(σ_0)} [...]
Imanol Arrieta-Ibarra, Paman Gujral, Jonathan Tannen, Mark Tygert, and Cherie Xu. Metrics of calibration for probabilistic predictions. arXiv preprint arXiv:2205.09680, 2022.
R. E. Barlow. Statistical Inference Under Order Restrictions: The Theory and Application of Isotonic Regression. Wiley Series in Probability and Mathematical Statistics, 1972.
Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, and Preetum Nakkiran. A unifying theory of distance from calibration. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, New York, NY, USA. Association for Computing Machinery, 2023. ISBN 9781450399135.
Jochen Bröcker. Some remarks on the reliability of categorical probability forecasts. Monthly Weather Review, 136(11), 2008.
Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, and Preetum Nakkiran. When does optimizing a proper loss yield calibration? 2023.
J. B. Copas. Plotting p against x. Applied Statistics, 1983.
Philip Dawid. The well-calibrated Bayesian. Journal of the American Statistical Association, 77(379), 1982.
Morris H. DeGroot and Stephen E. Fienberg. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society: Series D (The Statistician), 32(1-2), 1983.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009.
Shrey Desai and Greg Durrett. Calibration of pre-trained transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Timo Dimitriadis, Tilmann Gneiting, and Alexander I. Jordan. Stable reliability diagrams for probabilistic classifiers. Proceedings of the National Academy of Sciences, 118(8):e2016191118, 2021.
Timo Dimitriadis, Tilmann Gneiting, Alexander I. Jordan, and Peter Vogel. Evaluating probabilistic classifiers: The triptych. arXiv preprint arXiv:2301.10803, 2023.
Dean P. Foster and Sergiu Hart. Smooth calibration, leaky forecasts, finite recall, and Nash dynamics. Games and Economic Behavior, 109, 2018. doi: 10.1016/j.geb.2017.12.022.
Dean P. Foster and Sergiu Hart. Forecast hedging and calibration. Journal of Political Economy, 129(12), 2021.
Tilmann Gneiting, Fadoua Balabdaoui, and Adrian E. Raftery. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(2), 2007.
Parikshit Gopalan, Michael P. Kim, Mihir Singhal, and Shengjia Zhao. Low-degree multicalibration. In Conference on Learning Theory, volume 178 of Proceedings of Machine Learning Research, London, UK, 2-5 July 2022. PMLR.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning. PMLR, 2017.
Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, and Richard Hartley. Calibration of neural networks using splines. In International Conference on Learning Representations.
Cleve Hallenbeck. Forecasting precipitation in percentages of probability. Monthly Weather Review, 48(11), 1920.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Matthijs Hollemans. Reliability diagrams. 2020.
Sham Kakade and Dean Foster. Deterministic calibration and Nash equilibrium. Journal of Computer and System Sciences, 74(1), 2008.
Meelis Kull, Miquel Perello-Nieto, Markus Kängsepp, Hao Song, and Peter Flach. Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration. arXiv preprint arXiv:1910.12656, 2019.
Ananya Kumar, Percy S. Liang, and Tengyu Ma. Verified uncertainty calibration. In Advances in Neural Information Processing Systems, 2019.
Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. Trainable calibration measures for neural networks from kernel mean embeddings. In International Conference on Machine Learning. PMLR, 2018.
Donghwan Lee, Xinmeng Huang, Hamed Hassani, and Edgar Dobriban. T-Cal: An optimal test for the calibration of predictive models. arXiv preprint arXiv:2203.01850, 2022.
K. D. Leka, Sung-Hong Park, Kanya Kusano, Jesse Andries, Graham Barnes, Suzy Bingham, D. Shaun Bloomfield, Aoife E. McCloskey, Veronique Delouille, David Falconer, et al. A comparison of flare forecasting methods. II. Benchmarks, metrics, and performance results for operational solar flare forecasting systems. The Astrophysical Journal Supplement Series, 243(2):36, 2019.
Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the calibration of modern neural networks. In Advances in Neural Information Processing Systems, volume 34, 2021.
Allan H. Murphy and Robert L. Winkler. Reliability of subjective probability forecasts of precipitation and temperature. Journal of the Royal Statistical Society Series C: Applied Statistics, 26(1), 1977.
E. A. Nadaraya. On estimating regression. Theory of Probability & Its Applications, 9, 1964. doi: 10.1137/1109020.
Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. Binary classifier calibration: Non-parametric approach. arXiv preprint arXiv:1401.3390, 2014.
Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using Bayesian binning. In Proceedings of the AAAI Conference on Artificial Intelligence, 29(1), 2015. NIH Public Access.
Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning. ACM, 2005.
Jeremy Nixon, Michael W. Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring calibration in deep learning. In CVPR Workshops, 2019.
Andrew Nobel. Histogram regression estimation using data-dependent partitions. The Annals of Statistics, 24(3), 1996.
Pertti Nurmi. Verifying probability of precipitation: an example from Finland. 2003.
OpenAI. GPT-4 technical report. 2023.
Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11, 2019.
Rebecca Roelofs, Nicholas Cain, Jonathon Shlens, and Michael C. Mozer. Mitigating bias in calibration error estimation. In International Conference on Artificial Intelligence and Statistics. PMLR, 2022.
J. S. Simonoff. Smoothing Methods in Statistics. Springer Series in Statistics. Springer, 1996.
David B. Stephenson, Caio A. S. Coelho, and Ian T. Jolliffe. Two extra components in the Brier score decomposition. Weather and Forecasting, 23, 2008.
Mark Tygert. Plots of the cumulative differences between observed and expected values of ordered Bernoulli variates. arXiv preprint arXiv:2006.02504, 2020.
Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas Schön. Evaluating model calibration in classification. In The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.
Geoffrey S. Watson. Smooth regression analysis. Sankhyā: The Indian Journal of Statistics, Series A, 26(4), 1964.
David Widmann, Fredrik Lindsten, and Dave Zachariah. Calibration tests beyond classification. In International Conference on Learning Representations, 2020.
Ross Wightman. PyTorch image models. 2019.
Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In ICML, volume 1. Citeseer, 2001.
Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2002. |
253,107,476 | THE CURIOUS CASE OF BENIGN MEMORIZATION | Despite the empirical advances of deep learning across a variety of learning tasks, our theoretical understanding of its success is still very restricted. One of the key challenges is the overparametrized nature of modern models, enabling complete overfitting of the data even if the labels are randomized, i.e. networks can completely memorize all given patterns. While such a memorization capacity seems worrisome, in this work we show that under training protocols that include data augmentation, neural networks learn to memorize entirely random labels in a benign way, i.e. they learn embeddings that lead to highly non-trivial performance under nearest neighbour probing. We demonstrate that deep models have the surprising ability to separate noise from signal by distributing the task of memorization and feature learning to different layers. As a result, only the very last layers are used for memorization, while preceding layers encode performant features which remain largely unaffected by the label noise. We explore the intricate role of the augmentations used for training and identify a memorization-generalization trade-off in terms of their diversity, marking a clear distinction to all previous works. Finally, we give a first explanation for the emergence of benign memorization by showing that malign memorization under data augmentation is infeasible due to the insufficient capacity of the model for the increased sample size. As a consequence, the network is forced to leverage the correlated nature of the augmentations and as a result learns meaningful features. To complete the picture, a better theory of feature learning in deep neural networks is required to fully understand the origins of this phenomenon. * Equal contribution. | [
6212000,
3162051,
9794990,
234357520,
3531730,
3144218
] | THE CURIOUS CASE OF BENIGN MEMORIZATION
Sotiris Anagnostidis [email protected]
Department of Computer Science
ETH Zürich
Switzerland
Gregor Bachmann [email protected]
Department of Computer Science
ETH Zürich
Switzerland
Lorenzo Noci [email protected]
Department of Computer Science
ETH Zürich
Switzerland
Thomas Hofmann
Department of Computer Science
ETH Zürich
Switzerland
THE CURIOUS CASE OF BENIGN MEMORIZATION
Despite the empirical advances of deep learning across a variety of learning tasks, our theoretical understanding of its success is still very restricted. One of the key challenges is the overparametrized nature of modern models, enabling complete overfitting of the data even if the labels are randomized, i.e. networks can completely memorize all given patterns. While such a memorization capacity seems worrisome, in this work we show that under training protocols that include data augmentation, neural networks learn to memorize entirely random labels in a benign way, i.e. they learn embeddings that lead to highly non-trivial performance under nearest neighbour probing. We demonstrate that deep models have the surprising ability to separate noise from signal by distributing the task of memorization and feature learning to different layers. As a result, only the very last layers are used for memorization, while preceding layers encode performant features which remain largely unaffected by the label noise. We explore the intricate role of the augmentations used for training and identify a memorization-generalization trade-off in terms of their diversity, marking a clear distinction to all previous works. Finally, we give a first explanation for the emergence of benign memorization by showing that malign memorization under data augmentation is infeasible due to the insufficient capacity of the model for the increased sample size. As a consequence, the network is forced to leverage the correlated nature of the augmentations and as a result learns meaningful features. To complete the picture, a better theory of feature learning in deep neural networks is required to fully understand the origins of this phenomenon. * Equal contribution.
INTRODUCTION
Deep learning has made tremendous advances in the past decade, leading to state-of-the-art performance on various learning tasks such as computer vision (He et al., 2016), natural language processing (Devlin et al., 2019) and graph learning (Kipf & Welling, 2017). While some progress has been made regarding the theoretical understanding of these deep models (Arora et al., 2018;Bartlett et al., 2019;Neyshabur et al., 2015;Dziugaite & Roy, 2017), the considered settings are unfortunately often very restrictive and the insights made are only qualitative or very loose. One of the key technical hurdles hindering progress is the highly overparametrized nature of neural networks employed in practice, which is in stark contrast with classical learning theory, according to which simpler hypotheses compatible with the data should be preferred. The challenge of overparametrization is beautifully illustrated in the seminal paper of Zhang et al. (2017), showing that deep networks are able to fit arbitrary labelings of the data, i.e. they can completely memorize all the patterns. This observation renders tools from classical learning theory such as VC-dimension or Rademacher complexity vacuous and new avenues to investigate this phenomenon are needed. The random label experiment has been applied as a sanity check for new techniques (Arora et al., 2018;2019a;Bartlett et al., 2017;Dziugaite & Roy, 2017), where an approach is evaluated based on its ability to distinguish between networks that memorize or truly learn the data. From a classical perspective, memorization is thus considered as a bug, not a feature, and goes hand in hand with bad generalization.
In this work we challenge this view by revisiting the randomization experiment of Zhang et al. (2017) with a slight twist: we change the training protocol by adding data augmentation, a standard practice used in almost all modern deep learning pipelines. We show that in this more practical setting, the story is more intricate:
Neural networks trained on random labels with data augmentation learn useful features! More precisely, we show that probing the embedding space with the nearest neighbour algorithm of such a randomly trained network admits highly non-trivial performance on a variety of standard benchmark datasets. Moreover, such networks have the surprising ability to separate signal from noise, as all layers except for the last ones focus on feature learning while not fitting the random labels at all. On the other hand, the network uses its last layers to learn the random labeling, at the cost of clean accuracy, which strongly deteriorates. This is further evidence of a strong, implicit bias present in modern models, allowing them to learn performant features even in the setting of complete noise. Inspired by the line of works on benign overfitting (Bartlett et al., 2020;Sanyal et al., 2021;Frei et al., 2022), we coin this phenomenon benign memorization. We study our findings through the lens of capacity and show that under data augmentation, modern networks are forced to leverage the correlations present in the data to achieve memorization. As a consequence of the label-preserving augmentations, the model learns invariant features which have been identified to have strong discriminatory power in the field of self-supervised learning (Caron et al., 2021;Grill et al., 2020;Bardes et al., 2022;Zbontar et al.;Chen & He, 2021). Specifically, we make the following contributions:
• We make the surprising observation that learning under complete label noise still leads to highly useful features (benign memorization), showing that memorization and generalization are not necessarily at odds.
• We show that deep neural networks exhibit an astonishing capability to separate noise and signal between different layers, fitting the random labels only at the very last layers.
• We highlight the intricate role of augmentations and their interplay with the capacity of the model class, forcing the network to learn the correlation structure.
• We interpret our findings in terms of invariance learning, an objective that has instigated large successes in the field of self-supervised learning.
RELATED WORK
Memorization. Our work builds upon the seminal paper of Zhang et al. (2017), which showed how neural networks can easily memorize completely random labels. This observation has inspired a multitude of follow-up works, and the introduced randomization test has become a standard tool to assess the validity of generalization bounds (Arora et al., 2018; 2019a; Bartlett et al., 2017; Dziugaite & Roy, 2017). The intriguing capability of neural networks to simply memorize data has inspired researchers to further dissect the phenomenon, especially in the setting where only a subset of the targets is randomized. Arpit et al. (2017) study how neural networks tend to learn shared patterns first, before resorting to memorization when given real data, as opposed to random labels where examples are fitted independently. Feldman & Zhang (2020) study the setting where real but "long-tailed" data is used and show how memorization in this case can be beneficial to performance. Maennel et al. (2020); Pondenkandath et al. (2018) on the other hand show how pre-training networks on random labels can sometimes lead to faster subsequent training on the clean data or novel tasks. Finally, it has been shown that training on random labels can be valuable for neural architecture search. In all these previous works, data augmentation is excluded from the training pipeline. For partial label noise, it is well-known in the literature that neural networks exhibit surprising robustness (Rolnick et al., 2017; Song et al., 2020; Patrini et al., 2017) and generalization is possible. We highlight however that this setting is distinct from complete label noise, which we study in this work. Finally, Dosovitskiy et al. (2014) study the case where each sample has a unique label and achieve strong performance under data augmentation. This setting is again very different from ours, as two examples never share the same label, making the task significantly simpler and distinct from memorization.
(a) Visualization of an encoder-projector pair. The encoder is typically chosen as a ResNet18, while the projector is a one hidden-layer MLP. (b) Illustration of standard data augmentation. Notice how each augmentation indeed preserves the label "cat".

Data augmentation. Being a prominent component of deep learning applications, the benefits of data augmentation have been investigated theoretically in the setting of clean labels (Chen et al., 2020b; Dao et al., 2019; Wu et al., 2020; Hanin & Sun, 2021). The benefits of data augmentation have been verified empirically when only a subset of the data is corrupted (Nishi et al., 2021). On the other hand, investigations with pure label noise, where no signal remains in the dataset, are absent in the literature. Finally, we want to highlight the pivotal role of data augmentation in self-supervised learning frameworks (Caron et al., 2021; Grill et al., 2020; Bardes et al., 2022; Zbontar et al.; Chen & He, 2021; HaoChen et al., 2021), where it facilitates learning of invariant features.
BACKGROUND
Setting. In this work we consider the standard classification setting, where we are given n ∈ N i.i.d. samples (x_1, y_1), . . . , (x_n, y_n) ∼ D from some data distribution D, consisting of inputs x ∈ R^d and one-hot targets y ∈ R^C, each encoding one out of C classes. We consider a family of parametrized functions (in this work, neural networks) f_θ : R^d → R^C, where θ ∈ Θ denotes the (concatenated) weights from some space Θ ⊆ R^p, where p is the total number of parameters. Moreover, we specify a loss function l : R^C × R^C → R_+ which measures the discrepancy between a prediction f_θ(x) and its corresponding target y, i.e. l(f_θ(x), y). We then perform learning by minimizing the empirical loss L̂ as a function of the parameters θ,

θ̂ := argmin_{θ∈Θ} L̂(θ) := argmin_{θ∈Θ} Σ_{i=1}^n l(f_θ(x_i), y_i),  (1)

using some form of stochastic gradient descent, and measure the resulting generalization error L(θ̂) = E_{(x,y)∼D} [l(f_θ̂(x), y)]. We denote by p_X the marginal density of the inputs and by p_{Y|X} the conditional density of the labels given an input; p_{Y|X} encodes the statistical relationship between an input x and the associated label y. As typical in practice, we assume that we are in the so-called interpolation regime (Ma et al., 2018), i.e. we assume that stochastic gradient descent can find a parameter configuration θ̂ that achieves zero training loss (see Eq. 1).
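To make the objective in Eq. (1) concrete, the following is a minimal sketch of the corresponding training loop in PyTorch; `model` and `loader` are placeholders for any network f_θ and dataset iterator, and the optimizer settings are illustrative, not the ones used in the paper.

```python
import torch
import torch.nn.functional as F

def train_erm(model, loader, epochs=100, lr=0.1):
    """Minimize the empirical loss of Eq. (1) with stochastic gradient descent."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:  # x: inputs, y: integer class targets
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)  # l(f_theta(x), y)
            loss.backward()
            opt.step()
    return model
```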
Architecture. Throughout this work, we consider networks composed of an encoder h_ψ : R^d → R^m and a projector g_φ : R^m → R^C, where ψ and φ denote the parameters of the respective building blocks. As an encoder, we typically employ modern convolutional architectures such as ResNets (He et al., 2016) or VGG (Simonyan & Zisserman, 2014), excluding the final fully-connected layers, while the projector is usually an MLP with one hidden layer. We illustrate such a network in Fig. 2a. Such architectures have become very popular in the domain of feature learning and are extensively used in unsupervised and self-supervised learning (Caron et al., 2021; Grill et al., 2020; Bardes et al., 2022; Zbontar et al.). In this work, we are interested in assessing the quality of the encoder's features h_ψ when the network f_θ = g_φ ∘ h_ψ is trained on random labels.
Probing. To evaluate the embeddings h_ψ, we apply nearest-neighbour-based and linear probing, which refers to performing K-nearest-neighbour (or linear) classification based on the embeddings {(h_ψ(x_i), y_i)}_{i=1}^n. We fix the number of neighbours to K = 20 throughout this work unless otherwise specified. Probing measures how useful a given set of features h_ψ is for a fixed task. A special case we will often consider in this work is probing the encoder of a network at initialization, which we refer to as probing at initialization. Due to the lack of feature learning in probing, the resulting performance is very dependent on the quality of the input representations h_ψ. Such a method has been used to assess the quality of a given embedding space in various works, for instance Alain & Bengio (2017); Chen et al. (2020a); Caron et al. (2021); Grill et al. (2020); Bardes et al. (2022); Zbontar et al.

Figure 3: Fitting random labels on unaugmented data on CIFAR10 with a ResNet18 is not significantly slower than fitting clean labels.

Table 1: Layer | Initialization | End of training.

Random labels and memorization. Denote by e_l ∈ R^C the l-th unit vector, i.e. the one-hot encoding of class l. Zhang et al. (2017) introduced a randomization test, where any clean label y_i ∈ {e_1, . . . , e_C} in the training set is replaced by a random one-hot encoding ỹ_i = e_l, where l ∼ U({1, . . . , C}), with U denoting the uniform distribution over the discrete set {1, . . . , C}. Throughout this text, we will denote randomized variables with ∼ on top. Notice that such an intervention destroys any statistical relationship between inputs and targets, i.e. p_{Y|X} = p_Y. As a consequence, training on such a dataset should become very difficult, as there is no common pattern left to exploit and an algorithm has to resort to a pure memorization approach. Zhang et al. (2017) showed that deep neural networks have an astonishing capacity to perform such memorization tasks, easily overfitting any random label assignment even on large-scale datasets such as ImageNet (Deng et al., 2009) without a large increase in training time (see Fig. 3). Even explicit regularizers such as weight decay and Dropout (Srivastava et al., 2014) can be used in a standard manner; data augmentation however is excluded from the pipeline. We have reproduced a subset of the results of Zhang et al. (2017) in Table 2, first column. As expected, deep neural networks indeed manage to achieve zero training error across a variety of datasets while not generalizing at all, both with respect to a random test labeling as well as the original, clean test labels. In order to further study the amount of distortion in the resulting embedding space, we apply nearest-neighbour probing with respect to the clean data. More precisely, given the features h_ψ learnt from training on the random label task {(x_i, ỹ_i)}_{i=1}^n, we apply probing based on the clean training data {(x_i, y_i)}_{i=1}^n and evaluate with respect to clean test data. We display the results in Table 1. In line with previous works (Cohen et al., 2018; Maennel et al., 2020), we find that while very early layers might retain their performance at initialization, a significant drop occurs with increasing depth, further highlighting the lack of feature learning and the malignant nature of memorization in this setting. This is in line with observations that early layers learn useful representations (Zhang et al., 2019).
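For reference, nearest-neighbour probing as used throughout this work can be sketched as follows; the snippet assumes a frozen PyTorch `encoder` h_ψ, in-memory tensors, and uses scikit-learn's K-NN classifier with K = 20 as in the main text.

```python
import torch
from sklearn.neighbors import KNeighborsClassifier

def knn_probe(encoder, x_train, y_train, x_test, y_test, k=20):
    """Fit a K-NN classifier on frozen embeddings and score it on test data."""
    with torch.no_grad():
        z_train = encoder(x_train).flatten(1).cpu().numpy()
        z_test = encoder(x_test).flatten(1).cpu().numpy()
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(z_train, y_train)        # labels may be the clean y_i or the random ones
    return knn.score(z_test, y_test)
```

The same routine serves both for clean and noisy probing: only the labels passed to `fit` change.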
Data Augmentation. A standard technique present in almost all computer vision pipelines is data augmentation. We consider transformations T : R^d → R^d that take a given input x and produce a new augmentation x̄ := T(x), which, by design, should preserve the associated label, i.e. ȳ = y. Such transformations are usually given as a composition of smaller augmentations including random crops, flips, color-jittering etc. In Fig. 2b we show a set of different augmentations of the same underlying image x. Notice how these transformations indeed leave the associated label invariant. We denote by T the set of all possible augmentations. Data augmentation is usually applied in an online fashion, i.e. at every step of gradient descent, we uniformly sample a fresh transformation T ∼ U(T) and propagate it through the network. We highlight that data augmentation is a standard component of modern training pipelines; state-of-the-art approaches (e.g. Liu et al., 2021) all rely on some form of data augmentation. If one hence wants to study the memorization potential of neural networks in practical settings, data augmentation needs to be considered. We notice that the results of Zhang et al. (2017) and subsequent studies on the properties of memorization under random labels (Arpit et al., 2017; Maennel et al., 2020) were obtained without the use of data augmentation, leaving the memorization picture thus incomplete.
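For illustration, a label-preserving pipeline of the kind described above can be written with torchvision; the specific crop scale and jitter strengths below are common SSL-style defaults and are assumptions, not necessarily the exact values used in the experiments.

```python
import torchvision.transforms as transforms

# A fresh transformation is sampled every time `augment` is applied,
# matching the "online" augmentation scheme described in the text.
augment = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])
# x_bar = augment(x)  # x_bar = T(x); the associated label y is unchanged
```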
BENIGN MEMORIZATION
In this section, we present the curious phenomenon of benign memorization i.e. how neural networks manage to completely fit random labels under data augmentation, while at the same time learning predictive features for downstream tasks. Let us first formally introduce the terms benign and malign memorization, which are central to the results of this work. In the following,
S := {(x_i, y_i)}_{i=1}^n denotes the original clean dataset and S̃ = {(x_i, ỹ_i)}_{i=1}^n its randomly labeled version.
Definition 4.1 We call an encoder-projector pair (h_ψ*, g_φ*) a memorization of S̃ if f* = g_φ* ∘ h_ψ* perfectly fits S̃. Moreover, we call (h_ψ*, g_φ*) a malign memorization if, additionally, probing of h_ψ* on S does not improve over probing at initialization. On the contrary, we call (h_ψ*, g_φ*) a benign memorization of S̃ if probing of h_ψ* on S outperforms probing at initialization.
As highlighted in Sec. 3 and shown in Table 2, memorizing solutions found with stochastic gradient descent without data augmentation on standard vision benchmarks are of malign nature. This is very intuitive, as randomizing the targets destroys any signal present in the dataset and thus generalization seems impossible. We show now how including data augmentation in the training pipeline completely reverses this observation.
Training details. In all subsequent experiments involving data augmentations, we use the standard transformations employed in self-supervised learning frameworks such as Chen et al. (2020c); Grill et al. (2020); Chen & He (2021). These consist of a composition of random crops, color-jittering, random greyscaling, and random horizontal flips, leading to a diverse set of transformations. Moreover, we rely on mixup augmentations, where two images x_1, x_2 are combined into a linear interpolation according to some weighting α ∈ [0, 1], i.e. x̄ := αx_1 + (1 − α)x_2. The corresponding label ȳ is accordingly subject to the same linear combination, i.e. ȳ = αy_1 + (1 − α)y_2, where labels are represented as their one-hot encodings. We use the standard vision datasets CIFAR10 and CIFAR100 (Krizhevsky & Hinton, 2009), as well as TinyImageNet (Le & Yang, 2015). For more details, we refer the reader to Appendix F.
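A minimal sketch of the two ingredients above, mixup and label randomization, is given below; the mixing weight α is passed in explicitly here, whereas in practice it is commonly drawn from a Beta distribution.

```python
import torch
import torch.nn.functional as F

def mixup(x1, y1, x2, y2, alpha):
    """Linearly interpolate two images and their one-hot labels."""
    x_bar = alpha * x1 + (1 - alpha) * x2
    y_bar = alpha * y1 + (1 - alpha) * y2
    return x_bar, y_bar

def randomize_labels(n, num_classes):
    """Replace every label by a uniformly random one-hot target, y~_i = e_l."""
    l = torch.randint(num_classes, (n,))
    return F.one_hot(l, num_classes).float()
```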
Benign Memorization. We display the results of training under data augmentation in Table 2. We observe that, surprisingly, nearest-neighbour probing of the learnt embeddings leads to clearly non-trivial performance, strongly improving over the models trained under random labels without data augmentation. Moreover, we strongly outperform probing at initialization, showing that indeed rich feature learning is happening. As a consequence, data augmentation does not simply prevent malign memorization by preserving the signal at initialization but actually leads to learning from the data. On the other hand, the projector achieves perfect training accuracy on the random labels, and the network thus indeed memorizes the training data perfectly. Under data augmentation, deep models hence exhibit benign memorization. In Appendix C, Fig. 10, we further underline the utility of the learnt features by evaluating their performance under transfer learning. We remark that this is, to the best of our knowledge, the first work showing that learning under random labels can lead to strong downstream performance, highlighting that memorization and generalization are not necessarily at odds. We notice that the training time under randomized targets together with data augmentation increases significantly, both compared to training under clean labels as well as to fitting random labels without augmentations. In Fig. 4 we show the evolution of both the training loss and the nearest-neighbour probing test accuracy as a function of the number of epochs. Observe that for as long as 3000 epochs, neither training loss nor probing accuracy show any progress, but then both suddenly start to rapidly improve.
We stress that our goal is not to compete with standard training under clean labels; as can be seen in Table 2, there is a large gap between the two methods. Instead, we aim for a deeper understanding of how deep networks generalize. We show that generalization, contrary to prior beliefs, remains possible even in the most adversarial setting of complete label noise.
Signal-Noise Separation. We now further inspect models trained under random labels and data augmentation, with an emphasis on how the noise stemming from the random labeling affects different parts of the network. To gain insights into this, we perform nearest-neighbour probing of different layers in the network, with the twist that we fit the K-NN classifier both with respect to the clean training labels, as well as with respect to the random labels. This way we can assess how much a given feature has adapted to the particular structure (clean vs. random). We visualize the results of such a clean and noisy probing strategy in Fig. 4 at different stages of training. Surprisingly, we can see a striking separation between feature learning and memorization within the architecture; noisy probing only starts to improve over random guessing once we reach the projector, whereas the encoder remains largely unaffected. On the other hand, clean probing outperforms probing at initialization throughout the entire embedding stage but sharply decays once we get to the projector. Some previous work highlighted that, even when training on random labels without data augmentation, the very first layers learn data-dependent features which lead to subsequent fast re-training on clean data (Maennel et al., 2020). We give an interpretation of this finding in Sec. 5.2. We further investigate the role of the projector in Appendix C, Fig. 8, finding that higher widths lead to better performance in general.

Figure 5: Illustration of the thought experiment in Sec. 5. The true cat x_1 labeled as "giraffe" is augmented, leading to a positive signal between the augmentations but to more noise due to x_2, which has "dog" as its true label.
THE ROLE OF AUGMENTATIONS
The empirical evidence in the previous section highlights the crucial role of data augmentation for benign memorization. In this section, we aim to gain insights into the phenomenon by investigating more closely what properties of data augmentation are essential for benign memorization and how "ideal" augmentations might strongly differ when learning with clean or random labels.
Label Preservation. The first important characteristic of augmentations is their label-preserving nature, i.e. by augmenting an image (x, y) twice to produce (T_1(x), y), (T_2(x), y), we have effectively added information, as indeed T_1(x) and T_2(x) share the same label. Unfortunately, this reasoning is flawed in the setting of random labels, or at the least only forms part of a larger picture, as we show in the following thought experiment. Consider two "original" samples (x_1, ỹ) and (x_2, ỹ) which happen to share the same random label ỹ but in truth have distinct labels y_1 ≠ y_2. In this case, forming augmentations (T_1(x_1), ỹ) and (T_2(x_1), ỹ) might lead to some correct signal, as T_1(x_1) and T_2(x_1) share the same true label, but at the same time leads to more distortion, as x_2 has the same random label, reinforcing the wrong correlation even more. As a consequence, separating noise from signal remains equally challenging as before. We illustrate the argument in Fig. 5. To check this hypothesis, we consider the extreme case where augmentations T ∈ T produce a new, i.i.d. sample T(x) that shares the same true label with x. In a sense, this is the ideal augmentation and leads to the highest information gain. We implement such i.i.d. augmentations by only using a subset of the training data, while using the remainder to assign to each training point B potential, independent examples with the same true label; a sketch of this construction is given below. We choose the subset size as s = 1000 and the number of augmentations as B = 50 for CIFAR10 and train the same models as in Sec. 4, both under random and clean labels. We display the results in Table 5. Notice that for clean label training, such augmentations increase the training set size from 1000 to 1000 × 50 = 50000, thus leading to almost identical performance as training on the full dataset. Counter-intuitively, but for the reasons outlined above, training under random labels severely suffers under such ideal, independent augmentations and leads to malign memorization. In fact, this experiment is equivalent to fitting random labels without augmentations on the full training set, where malign memorization occurs, as seen in Sec. 4.
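The i.i.d.-augmentation construction can be sketched as follows; the exact assignment scheme of the original experiment is not specified beyond what is described in the text, so here held-out images of the same true label are simply sampled for each base point.

```python
import numpy as np

def build_iid_augmentations(ys, subset_size=1000, B=50, seed=0):
    """Split indices into `subset_size` base points and a held-out pool;
    assign to each base point B pool images sharing its true label."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(ys))
    base, pool = order[:subset_size], order[subset_size:]
    by_label = {}
    for j in pool:
        by_label.setdefault(int(ys[j]), []).append(j)
    augs = {int(i): rng.choice(by_label[int(ys[i])], size=B, replace=False)
            for i in base}
    return base, augs  # each base index maps to B "ideal" i.i.d. augmentations
```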
This thought experiment demonstrates that under random labels, augmentations seem to play a very distinct role compared to the clean setting and label preservation in itself is not enough to guarantee benign memorization. This raises the following question:
What other properties of augmentations, besides label preservation, cause benign memorization?
We hypothesize that the origins of the phenomenon lie in the interplay between the highly correlated nature of augmentations and the inflated effective sample size that exceeds the model capacity (Sec. 5.1), forcing the model to learn meaningful features (Sec. 5.2).

Figure 6: Illustration of nearest-neighbour probing accuracy (left) and (running) training loss after 5000 epochs (right) of a ResNet18 on CIFAR10 with increasing number of standard augmentations. We consider both label-preserving (blue) and completely random labeling of augmentations (red).
GOING BEYOND THE MODEL CAPACITY
We now study the phenomenon of benign memorization from the view point of capacity of a model class. Intuitively speaking, the capacity C t of a model captures how many distinct datapoints we can fit in t gradient steps, even if the corresponding targets are completely randomized. We refer to Appendix D.1 for the formal definition adopted here. If a model has enough capacity, it can potentially memorize all the patterns in the dataset. As seen in Zhang et al. (2017), deep models used in practice in conjunction with standard datasets, operate below the capacity threshold but nevertheless, they do not "abuse" their power and fit clean data in a meaningful way. On the other hand, by using data augmentation we inflate the number of samples and consequently operate above the capacity level. As a result, a model needs to efficiently encode augmentations (pure memorization is not possible) and hence meaningful features need to be learnt.
As seen in Sec. 3, standard datasets such as CIFAR10 have size n below the capacity C t of modern networks such as ResNets. When using data augmentation however, we now show that the resulting dataset exceeds the capacity.
Inflated Sample Size. Consider a set of augmentations T and assume that it has a finite size, B := |T | < ∞. Augmenting thus leads to a larger effective dataset S aug of size Bn. We can now study whether the capacity of standard networks trained by GD for a fixed number of epochs exceeds such an augmented dataset by varying B and randomly labeling each sample. We precompute a fixed set of B random augmentations for each sample for varying B and attempt to memorize a random labeling, where labels of augmentations are not preserved. We use the same setup as in Sec. 4 and augment CIFAR10 using the standard augmentations in the way described in Sec. 4. We display the results in Fig. 6. We see that indeed, overfitting the random labels becomes more difficult as we increase B and actually infeasible for a fixed number of gradient steps t = 5000 × 196 (fixed batch size on CIFAR10). Moreover, notice that in the standard setting of online augmentations, this observation becomes even more drastic as B increases over time if typical, continuous augmentations such as color jittering are included in the pipeline. We hypothesize that malign memorization under such an augmentation strategy thus becomes infeasible.
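The finite-B memorization experiment can be sketched as follows; `augment` stands for the standard pipeline above, and every augmented view receives its own independent random label, so the effective dataset of size Bn carries no exploitable correlation in the targets.

```python
import torch

def build_augmented_random_set(xs, augment, B, num_classes):
    """Precompute B fixed augmentations per image; labels are NOT preserved:
    every augmented view gets its own independent random label."""
    data = []
    for x in xs:
        for _ in range(B):
            x_aug = augment(x)
            y_rand = int(torch.randint(num_classes, (1,)))
            data.append((x_aug, y_rand))
    return data  # effective dataset of size B * len(xs)
```

For the label-preserving variant used in the "Learn if you must" experiment below, one would instead draw a single random label per image and reuse it for all of its B views.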
Learn if you must. We now investigate the influence of capacity on the resulting probing accuracy, in the case where augmentations are label-preserving and hence offer valuable signal. We thus consider the same setup as in the previous paragraph for varying number of augmentations B while assigning the same label to augmentations of the same image. We show the results in Fig. 6 on the left. We observe that as the number of augmentations B increases, probing accuracy improves and eventually surpasses probing at initialization. We hypothesize that as we approach and eventually surpass capacity, the model is increasingly forced to leverage the signal in augmentations and thus learns more and more relevant features. As we increase B, we saturate the information gain provided by the augmentations, the signal becomes redundant and performance starts to plateau.
WHAT CAN YOU LEARN FROM AUGMENTATIONS?
While we have seen that the model is forced to leverage the signal in augmentations, it remains unclear why this leads to high-quality embeddings. We now show how augmentations encourage features to become invariant, a property that has been identified in SSL to be strongly predictive.
Normalized Invariance. To measure the invariance of a function q : R d − → R a , we introduce the following quantity:
I(x; q, T) := E_{T_1,T_2∼U(T)} ∥q(T_1(x)) − q(T_2(x))∥² / E_{x′≠x} ∥q(x) − q(x′)∥².  (2)
Intuitively, I(x; q, T) captures the similarity of the features of augmentations of the same datapoint x, compared to the representations of different datapoints x′ ≠ x. A model's invariance implies I(x; q, T) = 0; hence different augmentations are mapped to the same point, while the model is still able to meaningfully distinguish between the representations of different datapoints (i.e. E_{x′≠x} ∥q(x) − q(x′)∥ ≠ 0). In Fig. 7, we show how I(x; q, T) for different layers correlates with probing performance and indeed decreases over time. Due to its better implicit bias (ResNet vs. MLP), invariance is largely learnt in the encoder, leading to the striking signal-noise separation identified in Sec. 4. These results suggest that when the model has insufficient capacity to memorize the (augmented) samples, it learns to be more invariant with respect to the label-preserving augmentations. Consequently, this mechanism reduces the "effective sample size" and allows the model to fit the data in a benign (i.e. augmentation-invariant) way. But why do invariant features imply a better clustering in embedding space, as evidenced by a high K-NN accuracy? This is a heavily researched topic with several plausible theories in the area of self-supervised learning.
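A Monte-Carlo estimate of the normalized invariance in Eq. (2) can be computed as sketched below; the number of sampled augmentation pairs and the set of reference points are free parameters of the estimate.

```python
import random
import torch

def normalized_invariance(q, x, x_others, transforms, n_pairs=16):
    """Estimate I(x; q, T): average embedding distance between augmentations
    of x, normalized by the average distance to unrelated points x' != x."""
    with torch.no_grad():
        num = 0.0
        for _ in range(n_pairs):
            t1 = random.choice(transforms)  # T1 ~ U(T)
            t2 = random.choice(transforms)  # T2 ~ U(T)
            num += (q(t1(x)) - q(t2(x))).pow(2).sum().item()
        num /= n_pairs
        den = sum((q(x) - q(xo)).pow(2).sum().item() for xo in x_others)
        den /= len(x_others)
    return num / den
```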
DISCUSSION AND CONCLUSION
In this work we have identified the surprising phenomenon of benign memorization, demonstrating that generalization and memorization are not necessarily at odds. We put forward an interpretation through the lens of model capacity, where the inflation in sample size forces the model to exploit the correlation structure by learning the invariance with respect to the augmentations. Furthermore, we have shown that invariance learning happens largely in the encoder, while the projector performs the noisy memorization task.
Our findings underline the complicated and mysterious inner workings of neural networks, showing that generalization can be found where we least expect it. To describe benign memorization, a complete generalization theory needs to capture the strong implicit bias built into deep models, which enables a clean separation of noise and signal. Secondly, such a theory needs to incorporate feature learning, as benign memorization only emerges in the encoder. Both those goals remain very challenging. In particular, we speculate that the line of work based on the neural tangent kernel (Jacot et al., 2018), which connects SGD-optimized neural networks in the infinite width regime with the realm of kernels, cannot explain benign memorization. In fact, in this regime the weights do not move from initialization, thus preventing feature learning. Some recent works have pushed further and study neural networks outside of the kernel regime (Allen-Zhu & Li, 2019; 2020), and we believe that the developed tools could be very helpful in understanding benign memorization. Another promising direction is perturbative finite-width corrections to NTKs (Hanin & Nica, 2019; Zavatone-Veth et al., 2021), which also incorporate feature learning. We leave exploring benign memorization in such a mathematical framework as future work.
REPRODUCIBILITY STATEMENT
We have taken multiple steps to ensure reproducibility of the experiments. We refer the reader to Appendix F for a complete description of the training protocol. We have also released the code as part of the supplementary material, including scripts on how to reproduce our results.
A LINEAR PROBING RESULTS
We replicate the results of Table 4, but with linear probing instead of K-NN probing of the embeddings, for the same models and the same datasets. In this case we train a linear classifier on top of the embeddings of the true (unaugmented) training data and evaluate on the left-out test set. We see that random-label training without data augmentation, although above random guessing, still falls below the performance at initialization.
B CONNECTION TO SELF-SUPERVISED LEARNING
In this section, we investigate the connection between non-contrastive SSL (Hua et al., 2021b;Grill et al., 2020;Chen & He, 2021) and training with random labels on the mean squared error (MSE) loss. We start with the following result:
Lemma B.1 Fix B ∈ N vectors x_1, . . . , x_B ∈ R^d and a ∈ R^d. Then it holds that

(1/B) Σ_{i=1}^B ∥x_i − a∥² = (1/2B²) Σ_{i,j=1}^B ∥x_i − x_j∥² + ∥a − (1/B) Σ_{i=1}^B x_i∥².

Proof: We simply expand the first term on the right-hand side:

(1/2B²) Σ_{i,j=1}^B ∥x_i − x_j∥²
= (1/2B²) Σ_{i,j=1}^B ∥x_i − a + a − x_j∥²
= (1/2B²) Σ_{i,j=1}^B [∥x_i − a∥² + ∥x_j − a∥² − 2(x_i − a)ᵀ(x_j − a)]
= (1/B) Σ_i ∥x_i − a∥² − (1/B²) Σ_{i,j=1}^B (x_i − a)ᵀ(x_j − a)
= (1/B) Σ_i ∥x_i − a∥² − ∥a∥² − (1/B²) Σ_{i,j=1}^B x_iᵀx_j + (2/B) Σ_{i=1}^B aᵀx_i.

On the other hand, we have that

∥a − (1/B) Σ_{i=1}^B x_i∥² = ∥a∥² + (1/B²) Σ_{i,j=1}^B x_iᵀx_j − (2/B) Σ_{i=1}^B aᵀx_i.

Hence we see that

(1/2B²) Σ_{i,j=1}^B ∥x_i − x_j∥² = (1/B) Σ_i ∥x_i − a∥² − ∥a − (1/B) Σ_{i=1}^B x_i∥²,

and re-arranging terms concludes the proof.
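The identity of Lemma B.1 is easy to verify numerically; the following self-contained check uses random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
B, d = 8, 5
xs = rng.normal(size=(B, d))
a = rng.normal(size=d)

lhs = np.mean(np.sum((xs - a) ** 2, axis=1))            # (1/B) sum ||x_i - a||^2
inv_term = np.sum((xs[:, None, :] - xs[None, :, :]) ** 2) / (2 * B ** 2)
bias_term = np.sum((a - xs.mean(axis=0)) ** 2)
assert np.isclose(lhs, inv_term + bias_term)            # Lemma B.1 holds
```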
We now apply this result in the case where the label plays the role of a, and the x_i's play the role of different augmentations. Let us thus consider B augmentations T_1, . . . , T_B and n ∈ N samples x_1, . . . , x_n ∈ R^d. Moreover, we have some labels y_1, . . . , y_n ∈ R^K that could be completely random. Using the previous result, we can write the (random) supervised loss (assuming mean-squared error) as follows:

Theorem B.2 Denote the supervised loss under data augmentation as

L̂_super(θ) = (1/nB) Σ_{i=1}^n Σ_{a=1}^B ∥f_θ(T_a(x_i)) − y_i∥².

Then we can decompose it into the following two terms:

L̂_super(θ) = (1/2nB²) Σ_{i=1}^n Σ_{a,b=1}^B ∥f_θ(T_a(x_i)) − f_θ(T_b(x_i))∥²  [=: Inv(θ) ≥ 0]
            + (1/n) Σ_{i=1}^n ∥y_i − (1/B) Σ_{a=1}^B f_θ(T_a(x_i))∥²  [=: Bias(θ) ≥ 0].
Notice that in Inv(θ), we group augmentations of the same input x_i together, thus measuring how invariant a given model f_θ is w.r.t. the augmentations. This is the positive signal illustrated in the thought experiment in Fig. 5. Interestingly, minimizing Inv(θ) is a core ingredient of so-called non-contrastive self-supervised learning methods (Hua et al., 2021b; Grill et al., 2020; Chen & He, 2021).
The bias term on the other hand is influenced by the random labeling. To better understand the bias term, define N_c := {i ∈ {1, . . . , n} : y_i = e_c}, i.e. all samples that have label c. Furthermore, let f̄_θ(x_i) = (1/B) Σ_{a=1}^B f_θ(T_a(x_i)) be the model average over all B augmentations. We can show that we can express the bias as

Bias(θ) = (1/n) Σ_{c=1}^C Σ_{i∈N_c} ∥e_c − f̄_θ(x_i)∥².

Notice that, as in a random labeling experiment, the number of classes C can be considered a hyperparameter; hence if we choose C ≫ n, then each sample x_1, . . . , x_n is assigned a different class with high probability. In this case, the "bad bias" given by assigning the same label to samples of different classes vanishes, and one has

Bias(θ) = (1/n) Σ_{i=1}^n ∥e_i − f̄_θ(x_i)∥²,

where w.l.o.g. we have re-ordered the labels such that the i-th label is assigned to sample x_i. In this limiting case, the bias term represents a contrastive term that encourages the average model f̄_θ(x_i) to be different from the other samples x_j ≠ x_i and their augmentations. In this sense, this term can be related to contrastive SSL. On the other hand, unlike contrastive SSL, the distance between the average models of two datapoints cannot be arbitrarily large, and once the loss is driven to zero, Bias(θ) = 0 and it is trivial to see that ∥f̄_θ(x_i) − f̄_θ(x_j)∥² = 2. We refer the reader to Appendix C for experiments with a varying number of classes. We observe that, as hypothesized, the influence of the bias diminishes and performance increases as a function of the number of classes C.
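The decomposition of Theorem B.2 into Inv(θ) and Bias(θ) follows from Lemma B.1 applied per sample, and can likewise be checked numerically on arbitrary predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, B, C = 4, 6, 3
preds = rng.normal(size=(n, B, C))            # stands in for f_theta(T_a(x_i))
labels = np.eye(C)[rng.integers(C, size=n)]   # one-hot targets y_i

total = np.mean(np.sum((preds - labels[:, None, :]) ** 2, axis=-1))
inv = np.sum((preds[:, :, None, :] - preds[:, None, :, :]) ** 2) / (2 * n * B ** 2)
bias = np.mean(np.sum((labels - preds.mean(axis=1)) ** 2, axis=-1))
assert np.isclose(total, inv + bias)          # L_super = Inv(theta) + Bias(theta)
```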
C MORE EXPERIMENTS
In the following we present more empirical evidence that may help better interpret our findings.
Role of the Projector. We visualize in Fig. 8 the effect of varying the size of the hidden layer in the projector. There is a threshold under which learning is impossible (or at least very hard given a fixed computational budget). Surprisingly, using a larger projector seems beneficial beyond the point where we manage to decrease the loss significantly and achieve memorization of the original dataset.

The role of the number of random classes. So far we assigned to each sample a random label in R^C, where C is the number of classes in the original dataset. In principle, increasing the number of classes decreases the strength of the noise provided to the model. We verify this in Fig. 9, where we vary the number of possible classes that each sample is randomly assigned to. In the extreme case, every sample is assigned its unique class label y_i = e_i, where e_i ∈ R^n, recovering the result of Dosovitskiy et al. (2014), which we refer to as 'Per sample'. We observe that with an increasing number of classes, performance improves (as less noisy assignments are made). We refer the reader to Appendix B for a more formal connection.

Transferability. Features learnt with random labels are also useful when evaluated on different datasets. In Fig. 10 on the right-hand side, we transfer the features learnt on CIFAR10 to CIFAR100 and STL10 (Coates et al., 2011). We observe that we can achieve strong downstream performance, highlighting the strength of the features learnable under random labels. Moreover, we see that using data augmentation is crucial when label noise is high, as without data augmentation performance deteriorates.

Figure 10: Nearest-neighbour probing accuracy for transfer learning to different datasets as a function of label noise. A ResNet18 is trained based on random labels on CIFAR10 with (right) and without (left) data augmentation.
Dependence on noise and speed of convergence. Augmentations have been shown to be especially useful under the presence of heavy label noise. In Fig. 11 we visualize the performance achieved during the first stage of training under varying levels of label noise. Label noise specifies the percentage of labels that are randomly permuted versus the percentage of labels that are kept the same.

Effect of the strength of the augmentation. Although malign memorization becomes impossible as the number of augmentations keeps increasing, the strength of the augmentations themselves plays a role in the generalization achieved. The strength of the augmentation essentially dictates the strength of the invariance that the model has to learn, which also correlates with the quality of the features learned, as seen in Fig. 12. To control strength we use the strength level of RandAugment (Cubuk et al., 2020). We clearly see that stronger augmentations are beneficial for the quality of the embeddings, underlining the fact that invariance indeed plays a key role in benign memorization.
Augmentation type Figure 12: Nearest-neighbour probing accuracy when trained on random labels with varying strength of (infinite/online) augmentations.
Varying levels of label noise. Previous works have highlighted the importance of augmentations in the presence of high-label noise. We examine the relationship between label noise and generalization both with and without augmentations in Fig. 13. We see that while data augmentation is not so crucial for clean labels, its role becomes more and more critical as we increase the amount of label noise. Without augmentations, we suffer from a strong decrease in performance, eventually ending with random guessing as we reach complete label noise. On the other hand, using augmentations stabilizes this decay and we still achieve strongly non-trivial performance even in the setting of complete label noise. Here we mathematically introduce the concept of capacity of a model class trained with Gradient Descent (GD) that was informally used in the main text. We remark that similar definitions have been explored in prior work (Arpit et al., 2017).
Definition D.1 Consider x 1 , . . . , x n for n ∈ N in general position and a fixed number of classes C. Given a labelingS = {(x i ,ỹ i )} n i=1 , let A t (S) ⊂ Θ denote the set of solutions reachable by GD with a computational budget t. We define the capacity C t of the model class {f θ : θ ∈ Θ} as C t := sup{n ∈ N| ∀S ∃θ ∈ A t (S) s.t. f θ memorizesS}.
While similar to standard measures such as the VC-dimension (Vapnik & Chervonenkis, 1971), the capacity defined here depends on the learning algorithm (including the computational budget t) and is thus always smaller than the corresponding VC-dimension.
D.2 RELATED WORKS
We give a more detailed discussion on the related work of Dosovitskiy et al. (2014). In their Figure 3, the authors also adjust the number of samples used in the experiments: if the authors use 8000 surrogate classes, they also use 8000 patches; if they use 16000 classes, they also use 16000 patches. At no point do different samples share the same label, as the dataset size is not preserved in each experiment. This is in contrast to our experiment in Fig. 9, where we indeed vary the number of classes and assign the same labels to several samples. Having a label per sample of course strongly differs from only having 10 labels (or C, depending on the dataset). First, achieving strong performance in this setting is far more difficult (and thus surprising) since, by reducing the number of classes so drastically, we are introducing a bias into the dataset, as examples with different underlying labels might suddenly share the same label. We refer to the thought experiment in Sec. 5 for more details and remark that this reasoning does not apply to Dosovitskiy et al. (2014) precisely due to the fact that they use the same number of labels as samples. Second, our setup recovers the well-studied setting of random labels and thus memorization, connecting two very different fields, further distinguishing our results from Dosovitskiy et al. (2014), which were obtained in the context of self-supervised learning.
D.3 INVARIANCE MEASURE
Here we give a bit more background and motivation for the invariance measure

I(x; q, T) := E_{T_1,T_2∼U(T)} ∥q(T_1(x)) − q(T_2(x))∥² / E_{x′≠x} ∥q(x) − q(x′)∥².

This measure is inspired by loss functions often used in self-supervised learning, where we aim to explicitly optimize for invariance to different augmentations (Chen et al., 2020c). We use a normalization term to ensure that there is diversity between the predictions of different, unrelated samples, in order to rule out that collapsed (i.e. constant) representations achieve a high invariance.
E TOY EXAMPLE
To better understand the role of augmentations with respect to the capacity of the model we devise a simple toy setting.
We consider samples x_1, ..., x_n ∈ R^d that belong to one of C underlying classes. We select the first d_1 < d coordinates to encode the true class assignment z_i, uniformly selected from the set {1, ..., C}, by sampling from a mixture of Gaussian distributions, and let the remaining d − d_1 dimensions correspond to noise. More specifically:

x_{i,[:d_1]} | z_i = k ∼ N(μ_k, Σ_1),    x_{i,[d_1:]} ∼ N(0, Σ_2),

where the cluster centers μ_k are sampled randomly. We consider augmentations of the samples of the form x̃ = x + a, where a_{[:d_1]} = 0 and a_{[d_1:]} ∼ N(0, Σ_3). We set the covariance matrices as Σ_3 = Σ_2 = 10 Σ_1 = σ² I, where I is the identity matrix. Learning invariance for this task under these augmentations therefore leads to embeddings that ignore the noise subspace. We employ in this case a simple linear encoder h_W(x) = W x, with W ∈ R^{d×d}. As a projector we use

g_V(x) = Σ_{i=1}^K [ (1/‖x − v_i‖) / (Σ_{j=1}^K 1/‖x − v_j‖) ] l_i.

Here l_i is a one-hot encoding sampled randomly from the set {e_1, ..., e_C}. We choose such a projector because it is not difficult to verify that in the limit case, where g_V(x) = l_{argmin_i ‖x − v_i‖}, the number K of vectors used by the projector yields an upper bound on the capacity of the model. Additionally, its non-linear nature ensures that any invariance learning occurs in the encoder. In Fig. 14 we visualize the K-NN probing for the downstream task of correctly predicting the clean label of unseen test samples, after training on random labels. By varying the number of samples n and the number of augmentations B available, we directly control whether full memorization can take place. When the number of samples n is smaller than C/B, where C is the capacity of the model, full memorization is possible, which discourages any feature learning and leads to poor generalization.
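A minimal NumPy sketch of this construction (the concrete sizes, noise scale, and variable names are our own choices, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d1, C, K = 512, 20, 5, 10, 50     # illustrative sizes
sigma = 0.5                              # placeholder noise scale

# Data: first d1 coords carry the class (mixture of Gaussians), rest is noise.
# Sigma_3 = Sigma_2 = 10 * Sigma_1 = sigma^2 * I.
mu = rng.normal(size=(C, d1))                               # cluster centers mu_k
z = rng.integers(0, C, size=n)                              # true classes z_i
signal = mu[z] + np.sqrt(sigma ** 2 / 10) * rng.normal(size=(n, d1))
noise = sigma * rng.normal(size=(n, d - d1))
x = np.concatenate([signal, noise], axis=1)

def augment(x):
    """x~ = x + a, with a = 0 on the signal coords and N(0, Sigma_3) on the rest."""
    a = np.concatenate([np.zeros((len(x), d1)),
                        sigma * rng.normal(size=(len(x), d - d1))], axis=1)
    return x + a

# Projector g_V: inverse-distance-weighted mixture of K fixed one-hot labels.
V = rng.normal(size=(K, d))                                 # anchor vectors v_i
L = np.eye(C)[rng.integers(0, C, size=K)]                   # one-hot labels l_i

def projector(h):
    dist = np.linalg.norm(h[:, None, :] - V[None], axis=-1)  # shape (n, K)
    w = 1.0 / dist
    w /= w.sum(axis=1, keepdims=True)
    return w @ L                                             # shape (n, C)
```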
F EXPERIMENTAL SETUP
Dataset Details: We conducted experiments on the classic CIFAR-10, CIFAR-100 and TinyImageNet datasets, using the default dataset splits. For completeness, we present in Table 5 statistics regarding these datasets.
Architecture: As commonly done for smaller-size images (Chen & He, 2021; Hua et al., 2021a), we use a variant of the ResNet18 architecture, where the max-pooling layer is removed and the first convolution is modified to have a kernel size of 3 and a stride of 1. We also remove the last fully connected layer, as we replace it with our projector. We provide more details about the hyperparameters used in Table 6. For the experiments on TinyImageNet, we rescale images to a size of 64 × 64 and additionally restore the stride of the first convolution to the value 2. We apply the same modification to VGG when trained on TinyImageNet.
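A sketch of this modification on top of torchvision's ResNet18 (the projector itself is attached separately; this is our own illustration, not the paper's released code):

```python
import torch.nn as nn
from torchvision.models import resnet18

def cifar_resnet18():
    """ResNet18 variant for 32x32 inputs: 3x3 stride-1 first conv, no max-pool,
    and the final fully connected layer removed (replaced by the projector)."""
    net = resnet18()
    net.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
    net.maxpool = nn.Identity()
    net.fc = nn.Identity()   # projector is attached downstream
    return net
```

For TinyImageNet, the stride of the first convolution would be restored to 2, as described above.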
Table 6: Hyperparameters used. (Columns: Hyperparameter, Value.)
Figure 1: (Left) Encoder-projector pair. (Right) Standard data augmentations.

Figure 4: K-NN probing of a ResNet18 on CIFAR10, based on (flattened) representations of different layers. Dashed blue lines indicate K-NN accuracies fitted on the clean training data and evaluated w.r.t. clean test data. Solid lines indicate K-NN training accuracies based on the random labels.

Figure 7: Averaged normalized invariance of a ResNet18 on CIFAR10 as a function of epochs; a lower value means more invariant. The dashed line represents the K-NN probing error of the embedding layer.

Figure 8: Nearest-neighbour probing accuracy as a function of the size of the projector.

Figure 9: Nearest-neighbour probing accuracy as a function of the number of classes.

Figure 11: Nearest-neighbour probing for the CIFAR10 and TinyImageNet datasets during training.

Figure 13: Nearest-neighbour probing accuracy when trained on varying levels of label noise, with and without augmentations.

Figure 14: K-NN probing for the downstream class assignment on unseen test points for the toy example, after training on random labels.
Table 2: K-NN probing accuracies (in percentage) of the embeddings for various datasets under different settings. DA refers to training under standard data augmentation. INIT refers to the performance at initialization. Except for INIT, all models reach perfect (unaugmented) training accuracy.

... technique necessary for state-of-the-art performance for a variety of vision tasks. Indeed, the top five leaders on ImageNet¹ (Yu et al., 2022; Dai et al., 2021; Zhai et al., 2021; Pham et al., 2021; ...

                      Dataset Size
                      1000     50000
Random                11.41    12.41
Clean                 36.75    83.66
Clean + i.i.d. DA     82.73    -
Clean + DA            69.7     91.3
Random + DA           54.7     76.2
Random + i.i.d. DA    12.41    -

Table 3: K-NN probing accuracies of the embeddings of a ResNet18 for CIFAR10 with and without i.i.d. augmentations under clean and random labels. "50000" refers to full CIFAR10, for reference.
... (SSL) (Saunshi et al., 2022; Arora et al., 2019b; Wen & Li, 2021). In Appendix B, we derive a more formal connection between the SSL loss and training with random labels. Others have looked at the improved sample complexity caused by incorporating invariances into the model (Bietti et al., 2021; Xiao & Pennington, 2022).
Alberto Bietti, Luca Venturi, and Joan Bruna. On the sample complexity of learning under geometric stability. Advances in Neural Information Processing Systems, 34:18673-18684, 2021.
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9630-9640, 2021.
Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 15 of Proceedings of Machine Learning Research, pp. 215-223. PMLR, 2011.
Gilad Cohen, Guillermo Sapiro, and Raja Giryes. DNN or k-NN: That is the generalize vs. memorize question. arXiv, abs/1805.06822, 2018.
Zihang Dai, Hanxiao Liu, Quoc V. Le, and Mingxing Tan. CoAtNet: Marrying convolution and attention for all data sizes. In Advances in Neural Information Processing Systems, 2021.
Tri Dao, Albert Gu, Alexander Ratner, Virginia Smith, Chris De Sa, and Christopher Ré. A kernel theory of modern data augmentation. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 1528-1537. PMLR, 2019.
Mark Chen, Alec Radford, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, and Ilya Sutskever.
Generative pretraining from pixels. In Proceedings of the 37th International Conference on Ma-
chine Learning (ICML), 2020a.
Shuxiao Chen, Edgar Dobriban, and Jane Lee. A group-theoretic framework for data augmentation.
In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural
Information Processing Systems, volume 33, pp. 21321-21333. Curran Associates, Inc., 2020b.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for
contrastive learning of visual representations. In Hal Daumé III and Aarti Singh (eds.), Proceed-
ings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of
Machine Learning Research, pp. 1597-1607. PMLR, 13-18 Jul 2020c.
Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15750-15758, 2021.
Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated
data augmentation with a reduced search space. In Proceedings of the IEEE/CVF conference on
computer vision and pattern recognition workshops, pp. 702-703, 2020.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hier-
archical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition,
pp. 248-255, 2009. doi: 10.1109/CVPR.2009.5206848.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep
bidirectional transformers for language understanding. Proceedings of NAACL-HLT, 2019.
Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discrimi-
native unsupervised feature learning with convolutional neural networks. Advances in neural
information processing systems, 27, 2014.
Gintare Karolina Dziugaite and Daniel M. Roy. Computing nonvacuous generalization bounds for
deep (stochastic) neural networks with many more parameters than training data. Proceedings of
the Thirty-Third Conference on Uncertainty in Artificial Intelligence, 2017.
Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the
long tail via influence estimation. Advances in Neural Information Processing Systems, 33:2881-
2891, 2020.
Kento Nishi, Yi Ding, Alex Rich, and Tobias Höllerer. Augmentation strategies for learning with
noisy labels. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
pp. 8018-8027, 2021.
Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making
deep neural networks robust to label noise: A loss correction approach. 2017 IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), pp. 2233-2241, 2017.
Hieu Pham, Zihang Dai, Qizhe Xie, Minh-Thang Luong, and Quoc V. Le. Meta pseudo labels. In
IEEE Conference on Computer Vision and Pattern Recognition, 2021.
Vinaychandran Pondenkandath, Michele Alberti, Sammer Puran, Rolf Ingold, and Marcus Liwicki.
Leveraging random label memorization for unsupervised pre-training. Workshop of Integration
of Deep Learning Theories at Conference on Neural Information Processing Systems (NIPS),
abs/1811.01640, 2018.
David Rolnick, Andreas Veit, Serge Belongie, and Nir Shavit. Deep learning is robust to massive
label noise. arXiv preprint arXiv:1705.10694, 2017.
Amartya Sanyal, Puneet K. Dokania, Varun Kanade, and Philip Torr. How benign is benign overfit-
ting? In International Conference on Learning Representations, 2021.
Nikunj Saunshi, Jordan Ash, Surbhi Goel, Dipendra Misra, Cyril Zhang, Sanjeev Arora, Sham
Kakade, and Akshay Krishnamurthy. Understanding contrastive learning requires incorporat-
ing inductive biases. Proceedings of the 39th International Conference on Machine Learning,
(ICML), 2022.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image
recognition. International Conference on Learning Representations (ICLR), 2014.
Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy
labels with deep neural networks: A survey, 2020.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.
Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning
Research, 15(56):1929-1958, 2014.
V. N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of
events to their probabilities. Theory of Probability and its Applications, 16(2):264-280, 1971.
doi: 10.1137/1116025.
Zixin Wen and Yuanzhi Li. Toward understanding the feature learning process of self-supervised
contrastive learning. In International Conference on Machine Learning, pp. 11112-11122.
PMLR, 2021.
Sen Wu, Hongyang R. Zhang, Gregory Valiant, and Christopher Ré. On the generalization effects of
linear transformations in data augmentation. In Proceedings of the 37th International Conference
on Machine Learning (ICML), 2020.
Lechao Xiao and Jeffrey Pennington. Synergy and symmetry in deep learning: Interactions between
the data, model, and inference algorithm. arXiv preprint arXiv:2207.04612, 2022.
Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu.
Coca: Contrastive captioners are image-text foundation models. Transactions on Machine Learn-
ing Research, 2022.
Jacob Zavatone-Veth, Abdulkadir Canatar, Ben Ruben, and Cengiz Pehlevan. Asymptotics of repre-
sentation learning in finite bayesian neural networks. Advances in neural information processing
systems, 34:24765-24777, 2021.
Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised
learning via redundancy reduction. In Proceedings of the 38th International Conference on Ma-
chine Learning (ICML), 2021.
DATASET        MODEL      RANDOM   RANDOM + DA   CLEAN   CLEAN + DA   INIT
CIFAR10        ResNet18   29.8     77.0          83.9    92.5         43.8
               VGG11      23.9     71.0          81.8    89.8         40.7
CIFAR100       ResNet18   10.3     44.2          52.3    69.6         18.5
               VGG11      12.7     47.5          51.5    61.4         16.9
TinyImageNet   ResNet18   4.7      33.9          39.5    47.7         3.7
               VGG11      3.7      22.3          33.0    41.3         4.4

Table 4: Linear probing accuracies (in percentage) of the embeddings for various datasets under different settings. DA refers to training under data augmentation. INIT refers to the performance at initialization.
Table 5: Statistics for the datasets used.

Dataset         Examples in train split   Examples in test split   Number of classes
CIFAR-10        50000                     10000                    10
CIFAR-100       50000                     10000                    100
TinyImageNet    100000                    10000                    200
¹ https://paperswithcode.com/sota/image-classification-on-imagenet
Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. International Conference on Learning Representations (ICLR), 2017.
Zeyuan Allen-Zhu and Yuanzhi Li. What can ResNet learn efficiently, going beyond kernels? In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/5857d68cd9280bc98d079fa912fd6740-Paper.pdf.
Zeyuan Allen-Zhu and Yuanzhi Li. Backward feature correction: How deep learning performs deep learning. arXiv, abs/2001.04413, 2020.
Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. Proceedings of the 36th International Conference on Machine Learning (ICML), 2019a.
Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, and Nikunj Saunshi. A theoretical analysis of contrastive unsupervised representation learning. arXiv preprint arXiv:1902.09229, 2019b.
Devansh Arpit, Stanisław Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, pp. 233-242. PMLR, 2017.
Adrien Bardes, Jean Ponce, and Yann LeCun. VICReg: Variance-invariance-covariance regularization for self-supervised learning. In International Conference on Learning Representations (ICLR), 2022.
Peter Bartlett, Dylan J. Foster, and Matus Telgarsky. Spectrally-normalized margin bounds for neural networks. 31st Conference on Neural Information Processing Systems (NeurIPS), 2017.
Peter L. Bartlett, Nick Harvey, Chris Liaw, and Abbas Mehrabian. Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. Journal of Machine Learning Research, 20:1-17, 2019.
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 117(48):30063-30070, 2020. doi: 10.1073/pnas.1907378117.
Spencer Frei, Niladri S. Chatterji, and Peter Bartlett. Benign overfitting without linearity: Neural network classifiers trained by gradient descent for noisy linear data. In Proceedings of Thirty Fifth Conference on Learning Theory, volume 178 of Proceedings of Machine Learning Research, pp. 2668-2703. PMLR, 2022.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent: A new approach to self-supervised learning. In Advances in Neural Information Processing Systems, volume 33, pp. 21271-21284. Curran Associates, Inc., 2020.
Boris Hanin and Mihai Nica. Finite depth and width corrections to the neural tangent kernel. arXiv preprint arXiv:1909.05989, 2019.
Boris Hanin and Yi Sun. How data augmentation affects optimization for linear regression. In Advances in Neural Information Processing Systems, 2021.
Jeff Z. HaoChen, Colin Wei, Adrien Gaidon, and Tengyu Ma. Provable guarantees for self-supervised deep learning with spectral contrastive loss. In NeurIPS, 2021.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Tianyu Hua, Wenxiao Wang, Zihui Xue, Sucheng Ren, Yue Wang, and Hang Zhao. On feature decorrelation in self-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9598-9608, 2021a.
Tianyu Hua, Wenxiao Wang, Zihui Xue, Yue Wang, Sucheng Ren, and Hang Zhao. On feature decorrelation in self-supervised learning. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9578-9588, 2021b.
Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. Proceedings of the 5th International Conference on Learning Representations, 2017.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, Toronto, Ontario, 2009.
Ya Le and Xuan S. Yang. Tiny ImageNet visual recognition challenge. 2015.
Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, and Baining Guo. Swin Transformer V2: Scaling up capacity and resolution, 2021.
Siyuan Ma, Raef Bassily, and Mikhail Belkin. The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 3325-3334. PMLR, 2018.
Hartmut Maennel, Ibrahim M. Alabdulmohsin, Ilya O. Tolstikhin, Robert Baldock, Olivier Bousquet, Sylvain Gelly, and Daniel Keysers. What do neural networks learn when trained with random labels? Advances in Neural Information Processing Systems, 33:19693-19704, 2020.
Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. Proceedings of The 28th Conference on Learning Theory (PMLR), 2015.
Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. International Conference on Learning Representations (ICLR), 2018.
Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers, 2021.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. International Conference on Learning Representations (ICLR), 2017.
Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018.
Junjie Zhang, Lingqiao Liu, Peng Wang, and Chunhua Shen. To balance or not to balance: A simple-yet-effective approach for learning with long-tailed distributions. arXiv preprint arXiv:1912.04486, 2019.
Xuanyang Zhang, Pengfei Hou, X. Zhang, and Jian Sun. Neural architecture search with random labels. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10902-10911, 2021.
220,041,969 | Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers | We propose a simple, practical, and intuitive approach for domain adaptation in reinforcement learning. Our approach stems from the idea that the agent's experience in the source domain should look similar to its experience in the target domain. Building off of a probabilistic view of RL, we formally show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function. This modified reward function is simple to estimate by learning auxiliary classifiers that distinguish source-domain transitions from target-domain transitions. Intuitively, the modified reward function penalizes the agent for visiting states and taking actions in the source domain which are not possible in the target domain. Said another way, the agent is penalized for transitions that would indicate that the agent is interacting with the source domain, rather than the target domain. Our approach is applicable to domains with continuous states and actions and does not require learning an explicit model of the dynamics. On discrete and continuous control tasks, we illustrate the mechanics of our approach and demonstrate its scalability to high-dimensional tasks. * Equal contribution.Preprint. Under review. | [] | Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers
Benjamin Eysenbach ([email protected])
Swapnil Asawa
Shreyas Chaudhari ([email protected])
Ruslan Salakhutdinov
Sergey Levine

Google Brain; CMU; University of Pittsburgh; UC Berkeley
We propose a simple, practical, and intuitive approach for domain adaptation in reinforcement learning. Our approach stems from the idea that the agent's experience in the source domain should look similar to its experience in the target domain. Building off of a probabilistic view of RL, we formally show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function. This modified reward function is simple to estimate by learning auxiliary classifiers that distinguish source-domain transitions from target-domain transitions. Intuitively, the modified reward function penalizes the agent for visiting states and taking actions in the source domain which are not possible in the target domain. Said another way, the agent is penalized for transitions that would indicate that the agent is interacting with the source domain, rather than the target domain. Our approach is applicable to domains with continuous states and actions and does not require learning an explicit model of the dynamics. On discrete and continuous control tasks, we illustrate the mechanics of our approach and demonstrate its scalability to high-dimensional tasks. * Equal contribution.Preprint. Under review.
Introduction
Reinforcement learning (RL) is often touted as a promising approach for costly and risk-sensitive applications, yet learning to act in those domains directly is costly and risky. How can an intelligent agent learn to solve tasks in environments in which it cannot practice? In this paper we study the problem of domain adaptation in reinforcement learning (RL). In the context of RL, domains refer to different environments (MDPs) that have different dynamics (transition functions). Our aim is to learn a policy in the source domain that will achieve high reward in a different target domain, using a limited amount of experience from the target domain.
RL algorithms today require a large amount of experience in the target domain. Experience in the target domain is expensive to collect: it costs time (e.g., when the target domain is the real world, we cannot progress faster than real time); it costs money (e.g., a robot might break itself); it could even be dangerous to humans [46]. For many tasks, such as assistive robotics and self-driving cars, we may have access to a different but structurally similar source domain. While the source domain has different dynamics than the target domain, experience in the source domain is much cheaper to collect. For example, a computer simulation of the real world can run much faster than real time, collecting (say) a year of experience in an hour; it is much cheaper to simulate 1000 robot manipulators in parallel than to maintain 1000 robot manipulators. The source domain need not be a simulator, but rather could be any "practice" facility, such as a "farm" of robot arms [42], a "playpen" for learning to walk [52], or a controlled testing facility for self-driving vehicles [45]. Even when the source domain is built to exactly mimic the target domain, subtle aspects of the domains often remain different: robot motors may have different actuation noise, or human behavior may not be perfectly modeled.

Figure 1: We will learn a policy for a target domain (Left) using experience from a source domain with different dynamics (Center). (Right) Our method modifies the reward function to force the agent to learn behaviors that will be feasible in the target domain.
Domain adaptation in RL is challenging because strategies which are effective in the source domain may not be effective in the target domain. For example, a good approach to driving a car around a dry racetrack (the source domain) may entail aggressive acceleration and cutting corners. If the target domain is an icy, public road, this approach may cause the car to skid off the road or hit oncoming traffic. While prior work has thoroughly studied the domain adaptation of observations in RL [7,25,29], it ignores the domain adaptation of the dynamics.
This paper presents a simple and practical approach for domain adaptation in RL, illustrated in Fig. 1. Our approach stems from the idea that the agent's experience in the source domain should look similar to its experience in the target domain. Building off of a probabilistic view of RL, we formally show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function. This modified reward function is simple to estimate by learning auxiliary classifiers that distinguish source-domain transitions from target-domain transitions. Because our method learns a classifier, rather than a dynamics model, we expect it to handle high-dimensional tasks better than model-based methods, a conjecture supported by experiments on the 111-dimensional Ant task. Intuitively, the modified reward function penalizes the agent for visiting states and taking actions where the source domain and target domain differ. The agent is penalized for taking transitions which would indicate whether the agent is interacting with the source or target domain.
The main contribution of this work is an algorithm for domain adaptation to dynamics changes in RL, based on the idea of compensating for differences in dynamics by modifying the reward function. This algorithm does not need to estimate transition probabilities, but rather modifies the reward function using a pair of classifiers. On a range of discrete and continuous control tasks, we both illustrate the mechanics of our approach and demonstrate its scalability to higher-dimensional tasks. More broadly, we believe that the idea of compensating for domain shift in dynamics with a learned reward function represents a widely applicable approach for learning from inaccurate models.
Related Work
While our work will focus on domain adaptation applied to RL, we start by reviewing more general ideas in domain adaptation, and defer to Kouw and Loog [39] for a recent review of the field. Two common approaches to domain adaptation are importance weighting and domain-agnostic features. Importance-weighting methods (e.g., [12, 43, 88]) estimate the likelihood ratio of examples under the target domain versus the source domain, and use this ratio to re-weight examples sampled from the source domain. To estimate the likelihood ratio, some methods estimate the two density models separately and take their ratio (e.g., [4, 64, 87]), while other methods estimate the ratio directly [35, 48, 63, 65, 66, 78]. A number of these direct estimation methods operate by learning a classifier to distinguish examples from the source domain versus examples from the target domain [6, 48, 61, 78]. We refer to Huszár [31] for a recent review of direct estimation methods. Methods based on domain-agnostic features aim to map examples from the source and target domain into a common feature space [21, 30, 85]. Our method is similar to classifier-based density-ratio estimation, with two important distinctions. First, we will need to estimate the density ratio of conditional distributions (transition probabilities), which is different from modeling conditional distributions as a density ratio [67]. To do this, we will learn not one but two classifiers. Second, we will use the logarithm of the density ratio to modify the reward function instead of weighting samples by the density ratio, which is often numerically unstable (see, e.g., Schulman et al. [60, §3]).
Prior methods for applying domain adaptation to RL include approaches based on system identification, domain randomization, and observation adaptation. Perhaps the most established approach, system identification [44], uses observed data to tune the parameters of a simulator [19, 20, 55, 70, 81, 84, 90]. More recent work has successfully used this strategy to bridge the sim2real gap [9, 53]. Closely related is work on online system identification and meta-learning, which directly uses the inferred system parameters to update the policy [11, 57, 71, 86]. However, these approaches typically require either a model of the environment or a manually-specified distribution over potential test-time dynamics, requirements that our method lifts. Another approach, domain randomization, randomly samples the parameters of the source domain and then finds the best policy for this randomized environment [13, 50, 56, 75]. While often effective, this method is sensitive to the choice of which parameters are randomized, and to the distributions from which these simulator parameters are sampled. A third approach, observation adaptation, modifies the observations of the source domain to appear similar to those in the target domain. While this approach has been successfully applied to video games [24] and robot manipulation [7], it ignores the fact that the source and target domains may have differing dynamics.
The theoretical derivation of our method is heavily inspired by prior work which formulates control as a problem of probabilistic inference [1, 3, 15, 36, 40, 41, 54, 73, 76, 77, 91]. These methods aim to make an agent's experience in the target domain look like the expert's experience in the target domain, whereas our method aims to make an agent's experience in the source domain look like the expert's experience in the target domain. We emphasize that the domain shift we consider is caused by domains having different dynamics, not by actions being sampled from different policies. Algorithms for model-based RL (e.g., [10, 16, 22, 28, 32, 51, 68, 80, 83]) and off-policy RL (e.g., [14, 17, 23, 49]) similarly aim to improve the sample efficiency of RL, but do not use a source domain to accelerate learning. Our method is applicable to any maximum entropy RL algorithm, including on-policy [62], off-policy [1, 27], and model-based [32, 83] algorithms. We will use the soft actor critic algorithm [27] in our experiments. Recently, Vemula et al. [79] proposed a method for planning with an inaccurate model by assigning a high, fixed cost to transitions where the model was inaccurate. Our method similarly accounts for discrepancies in dynamics via rewards, but does so with a learned classifier, allowing our method to be applied in stochastic environments with continuous states and actions.
Preliminaries
In this section, we introduce notation and formally define domain adaptation for RL. Our problem setting considers two MDPs: M_source represents the source domain (e.g., a practice facility, simulator, or learned approximate model of the target domain), while M_target represents the target domain. We assume that the two domains have the same state space S, action space A, reward function r, and initial state distribution p_1(s_1); the only difference between the domains is their dynamics, p_source(s_{t+1} | s_t, a_t) and p_target(s_{t+1} | s_t, a_t). We will learn a Markovian policy π_θ(a | s), parametrized by θ. Our objective is to learn a policy π that maximizes the expected discounted sum of rewards on M_target, E_{π, M_target}[∑_t γ^t r(s_t, a_t)]. We now formally define our problem setting:

Definition 1. Domain Adaptation for RL is the problem of using interactions in the source MDP M_source, together with a small number of interactions in the target MDP M_target, to acquire a policy that achieves high reward in the target MDP M_target.
We will assume every transition with non-zero probability in the target domain will have non-zero probability in the source domain:
p_target(s_{t+1} | s_t, a_t) > 0 ⟹ p_source(s_{t+1} | s_t, a_t) > 0 for all s_t, s_{t+1} ∈ S, a_t ∈ A.
This assumption is very weak, and common in work on importance sampling [38, §12.2.2].
A Variational Perspective on Domain Adaptation in RL
The probabilistic inference interpretation of RL [36,40,54,76,77,91] treats the reward function as defining a desired distribution over trajectories. The agent's task is to sample from this distribution by picking trajectories with probability proportional to their exponentiated reward. This section will reinterpret this model in the context of domain transfer, showing that domain adaptation of dynamics can be done by modifying the rewards.
To apply this model to domain adaptation, define p(τ ) as the desired distribution over trajectories in the target domain,
p(τ) ∝ p_1(s_1) ∏_t p_target(s_{t+1} | s_t, a_t) exp( ∑_t r(s_t, a_t) ),
and q(τ ) as our agent's distribution over trajectories in the source domain,
q(τ) = p_1(s_1) ∏_t p_source(s_{t+1} | s_t, a_t) π_θ(a_t | s_t).
As noted in Section 3, we assume both trajectory distributions have the same initial state distribution. Our aim is to learn a policy whose behavior in the source domain both receives high reward and has high likelihood under the target domain dynamics. We codify this objective by minimizing the reverse KL divergence between these two distributions:
min_{π(a|s), q(s'|s,a)} D_KL(q ‖ p) = −E_q[ ∑_t r(s_t, a_t) + H_π[a_t | s_t] + ∆r(s_{t+1}, s_t, a_t) ] + c,

where ∆r(s_{t+1}, s_t, a_t) ≜ log p(s_{t+1} | s_t, a_t) − log q(s_{t+1} | s_t, a_t).
The constant c is the partition function of p(τ), which is independent of the policy and dynamics. While ∆r is defined as the difference of log transition probabilities, in Sec. 5.1 we show how to estimate ∆r without learning transition probabilities directly. In the special case where the source and target dynamics are equal, the correction term ∆r is zero and we recover maximum entropy RL [76, 91]. We emphasize that our reward correction is different from prior work that adds log β(a | s) to the reward to regularize the policy to be close to the behavior policy β [1, 33, 34, 58, 59, 72].
In the case where the source dynamics are not equal to the true dynamics, this objective is not the same as maximum entropy RL on trajectories sampled from the source domain. Instead, this objective suggests a corrective term ∆r that should be added to the reward function to account for the discrepancy between the source and target dynamics. The correction term, ∆r, is quite intuitive. If a transition (s t , a t , s t+1 ) has equal probability in the source and target domains, then ∆r(s t , a t ) = 0 so no correction is applied. For transitions that are likely in the source but are unlikely in the target domain, ∆r < 0, so the agent is penalized for "exploiting" inaccuracies or discrepancies in the source domain by taking these transitions. For the example environment in Figure 1, transitions through the center of the environment are blocked in the target domain but not in the source domain. For these transitions, ∆r would serve as a large penalty, discouraging the agent from taking these transitions and instead learning to navigate around the wall. Appendix A presents additional interpretations of ∆r in terms of coding theory, mutual information, and a constraint on the discrepancy between the source and target dynamics.
The Special Case of an Observation Model
To highlight the relationship between domain adaptation of dynamics versus observations, we now consider a special case. In this subsection, we will assume that the state s_t ≜ (z_t, o_t) is a combination of the system latent state z_t (e.g., the poses of all objects in a scene) and an observation o_t (e.g., a camera observation). We will define q(o_t | z_t) and p(o_t | z_t) as the observation models for the source and target domains. In this special case, we can decompose the KL objective (Eq. 4) into three terms:
D_KL(q ‖ p) = −E_q[ ∑_t ( r(s_t, a_t) + H_π[a_t | s_t] )                                      (MaxEnt RL objective)
                  + ( log p_target(o_t | z_t) − log p_source(o_t | z_t) )                      (observation adaptation)
                  + ( log p_target(z_{t+1} | z_t, a_t) − log p_source(z_{t+1} | z_t, a_t) ) ].  (dynamics adaptation)
Prior methods that perform observation adaptation [7, 24] effectively minimize the observation adaptation term,² but ignore the effect of dynamics. In contrast, the ∆r reward correction in our method addresses both dynamics and observations. These approaches could be combined; we leave this as future work.
Algorithm 1 Domain Adaptation with Rewards from Classifiers [DARC]
1: Input: source MDP M_source and target MDP M_target; ratio r of experience from source vs. target.
2: Initialize: replay buffers for source and target transitions, D_source, D_target; policy π; parameters θ = (θ_SAS, θ_SA) for classifiers q_{θ_SAS}(target | s_t, a_t, s_{t+1}) and q_{θ_SA}(target | s_t, a_t).
3: for t = 1, ..., num_iterations do
4:     D_source ← D_source ∪ ROLLOUT(π, M_source)                       ▷ Collect source data.
5:     if t mod r = 0 then                                              ▷ Periodically, collect target data.
6:         D_target ← D_target ∪ ROLLOUT(π, M_target)
7:     θ ← θ − η ∇_θ ℓ(θ)                                               ▷ Update both classifiers.
8:     r̃(s_t, a_t, s_{t+1}) ← r(s_t, a_t) + ∆r(s_t, a_t, s_{t+1})       ▷ ∆r is computed with Eq. 1.
9:     π ← MAXENT_RL(π, D_source, r̃)
10: return π
Domain Adaptation in RL with a Learned Reward
The variational perspective on model-based RL in the previous section suggests that we should modify the reward in the source domain by adding ∆r. While ∆r is defined above in terms of transition probabilities, we will show below how it can be estimated via binary classification, without learning an explicit dynamics model. We then use this observation to develop a practical algorithm for off-dynamics RL.
Estimating the Reward Correction with Classifiers
The transition probabilities in the modified reward function are rarely known and are hard to estimate. Instead, we show that we can estimate this log ratio using a pair of (learned) binary classifiers, which will infer whether transitions came from the source or target domain. The key idea is that the transition probabilities are related to the classifier probabilities via Bayes' rule:
p(target | s_t, a_t, s_{t+1}) = p(s_{t+1} | s_t, a_t, target) · p(s_t, a_t | target) · p(target) / p(s_t, a_t, s_{t+1}),

where p(s_{t+1} | s_t, a_t, target) = p_target(s_{t+1} | s_t, a_t).
We estimate the term p(s t , a t | target) on the RHS via another classifier, p(target | s t , a t ):
p(s_t, a_t | target) = p(target | s_t, a_t) p(s_t, a_t) / p(target).
Substituting these expressions into our definition for ∆r and simplifying, we obtain an estimate for ∆r that depends solely on the predictions of these two classifiers:

∆r(s_t, a_t, s_{t+1}) = log p(target | s_t, a_t, s_{t+1}) − log p(target | s_t, a_t)
                       − log p(source | s_t, a_t, s_{t+1}) + log p(source | s_t, a_t).    (1)
The first and third terms are the difference in logits from the classifier conditioned on (s_t, a_t, s_{t+1}), while the second and fourth terms are the difference in logits from the classifier conditioned on (s_t, a_t) alone. Intuitively, ∆r answers the following question: for the task of predicting whether a transition came from the source or target domain, how much better can you perform after observing s_{t+1}? We make this connection precise in Appendix A.2 by relating ∆r to mutual information. Ablation experiments (Fig. 9) confirm that both classifiers are important to the success of our method.
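If both classifiers are implemented as two-way (source/target) logit heads, the log-probabilities in Eq. 1 collapse to logit differences, since the softmax normalizer cancels. A minimal PyTorch sketch (the tensor layout and names are our own, not the released code):

```python
import torch

def delta_r(sas_logits, sa_logits):
    """Eq. (1) as a difference of classifier log-odds.

    sas_logits: (batch, 2) logits of q(target | s, a, s'); column 1 = target.
    sa_logits:  (batch, 2) logits of q(target | s, a);     column 1 = target.
    Since log p(target|.) - log p(source|.) equals the logit difference,
    Eq. (1) is the SAS log-odds minus the SA log-odds.
    """
    sas_log_odds = sas_logits[:, 1] - sas_logits[:, 0]
    sa_log_odds = sa_logits[:, 1] - sa_logits[:, 0]
    return sas_log_odds - sa_log_odds
```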
Algorithm Summary
Our algorithm modifies an existing MaxEnt RL algorithm to additionally learn two classifiers, q_{θ_SAS}(target | s_t, a_t, s_{t+1}) and q_{θ_SA}(target | s_t, a_t), parametrized by θ_SAS and θ_SA respectively, to minimize the standard cross-entropy losses:

ℓ_SAS(θ_SAS) ≜ −E_{D_target}[log q_{θ_SAS}(target | s_t, a_t, s_{t+1})] − E_{D_source}[log q_{θ_SAS}(source | s_t, a_t, s_{t+1})],
ℓ_SA(θ_SA) ≜ −E_{D_target}[log q_{θ_SA}(target | s_t, a_t)] − E_{D_source}[log q_{θ_SA}(source | s_t, a_t)].
Our algorithm, Domain Adaptation with Rewards from Classifiers (DARC), is presented in Alg. 1 and illustrated in Fig. 2. To simplify notation, we define θ ≜ (θ_SAS, θ_SA) and ℓ(θ) ≜ ℓ_SAS(θ_SAS) + ℓ_SA(θ_SA). At each iteration, we collect transitions from the source and (less frequently) target domain, storing the transitions in separate replay buffers. We then sample a batch of experience from both buffers to update the classifiers. We use the classifiers to modify the rewards from the source domain, and apply MaxEnt RL to this experience. We use SAC [27] as our MaxEnt RL algorithm, but emphasize that DARC is applicable to any MaxEnt RL algorithm (e.g., on-policy, off-policy, and model-based). When training the classifiers, we add Gaussian input noise to prevent overfitting to the small number of target-domain transitions (see Fig. 9 for an ablation). Code will be released.
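A sketch of one classifier update with the Gaussian input-noise regularization (the noise scale of 1 matches the value reported in our ablations; the scaffolding and names are our own, not the released code):

```python
import torch
import torch.nn.functional as F

def classifier_loss(clf, source_x, target_x, noise_std=1.0):
    """Cross-entropy update for one classifier (label 1 = target, 0 = source).

    For q(target | s, a, s'), inputs are concatenated (s, a, s') vectors; for
    q(target | s, a), concatenated (s, a). Gaussian input noise regularizes
    against overfitting the small number of target transitions."""
    x = torch.cat([source_x, target_x], dim=0)
    x = x + noise_std * torch.randn_like(x)
    y = torch.cat([torch.zeros(len(source_x), dtype=torch.long),
                   torch.ones(len(target_x), dtype=torch.long)])
    return F.cross_entropy(clf(x), y)

# One DARC iteration then relabels sampled source rewards as r + delta_r
# before the MaxEnt RL (e.g., SAC) update, as in Alg. 1.
```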
Experiments
We start with a didactic experiment to build intuition for the mechanics of our method, and then evaluate on more complex tasks. Our experiments will show that DARC outperforms alternative approaches, such as directly applying RL to the target domain or learning importance weights. We will also show that our method can account for domain shift in the termination condition, and confirm the importance of learning two classifiers.

Illustrative example. We start with a simple gridworld example, shown on the right, where we can apply our method without function approximation. The goal is to navigate from the top left to the bottom left. The real environment contains an obstacle (shown in red), which is not present in the source domain. If we simply apply RL on the source domain, we obtain a policy that navigates directly to the goal (blue arrows), and will fail when used in the target domain. We then apply our method: we collect trajectories from the source domain and the real world to fit the two tabular classifiers. These classifiers give us a modified reward, which we use to learn a policy in the source domain. The modified reward causes our learned policy to navigate around the obstacle, which succeeds in the target environment.

Visualizing the reward modification in stochastic domains. In our next experiment, we use an "archery" task to visualize how the modified reward accounts for differences in dynamics. The task, shown in Fig. 4, requires choosing an angle at which to shoot an arrow. The practice range (i.e., the source domain) is outdoors, with wind that usually blows from left to right. The competition range (i.e., the target domain) is indoors with no wind. The reward is the negative distance to the target. We plot the reward as a function of the angle in both domains in Fig. 4. The optimal strategy for the outdoor range is to compensate for the wind by shooting slightly to the left (θ = −0.8), while the optimal strategy for the indoor range is to shoot straight ahead (θ = 0). We estimate the modified reward function with DARC, and plot the modified reward in the windy outdoor range and indoor range. We aggregate across episodes using J(θ) = log E_{s' ∼ p(s' | θ)}[exp(r(s'))]; see Appendix B.4 for details (a Monte-Carlo sketch of this aggregation appears below). We observe that maximizing the modified reward in the windy range does not yield high reward in the windy range, but does yield a policy that performs well in the indoor range.

Scaling to more complex tasks. We now apply DARC to the more complex tasks shown in Fig. 5. We define three tasks by crippling one of the joints of each robot in the target domain, but using the fully-functional robot in the source domain. We use three simulated robots taken from OpenAI Gym [8]: 7 DOF reacher, half cheetah, and ant. The broken reacher is based on the task described by Vemula et al. [79]. We also include a task where the shift in dynamics is external to the robot, by modifying the cheetah task to reward the agent for running both forward and backwards. It is easier to learn to run backwards, but the target domain contains an obstacle that prevents the agent from running backwards.
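The per-angle aggregation J(θ) used for the archery plots is a soft maximum over stochastic outcomes and can be estimated by Monte Carlo. A sketch (the sampler and reward function are placeholders, not part of the paper's code):

```python
import numpy as np
from scipy.special import logsumexp

def soft_value(theta, sample_next_state, reward_fn, num_samples=10000):
    """J(theta) = log E_{s' ~ p(s' | theta)}[exp(r(s'))], by Monte Carlo.
    log-mean-exp is computed as logsumexp(r) - log(n) for numerical stability."""
    s_next = sample_next_state(theta, num_samples)   # e.g., arrow landing points
    return logsumexp(reward_fn(s_next)) - np.log(num_samples)
```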
We compare our method to seven baselines. RL on Source and RL on Target directly perform RL on the source and target domains, respectively. The Finetuning baseline takes the result of running RL on the source domain, and further finetunes the agent on the target domain. The Importance Weighting baseline performs RL on importance-weighted samples from the source domain; the importance weights are exp(∆r). To account for the fact that our method performs more gradient updates per environment step in the source domain, we also trained a version of the RL on source baseline that likewise does 10 gradient updates per source domain step. Finally, we compared against two recent model-based RL methods: MBPO [32] and PETS [10].
We show the results of this experiment in Fig. 6, plotting the reward on the target domain as a function of the number of transitions in the target domain. On all tasks, the RL on source baseline (shown as a dashed line because it observes no target transitions) performs considerably worse than the optimal policy from RL on the target domain, suggesting that good policies for the source domain are suboptimal for the target domain. Nonetheless, on three of the four tasks our method matches (or even surpasses) the asymptotic performance of doing RL on the target domain, despite never doing RL on experience from the target domain, and despite observing 5 -10x less experience from the target domain. On the broken reacher and broken half cheetah tasks, we observe that finetuning on the target domain performs on par with our method. On the simpler broken reacher task, just doing RL on the target domain with a large number of gradient steps works quite well (we did not tune this parameter for our method). However, as we scale to the more complex broken ant and half cheetah obstacle tasks, we observe that all baselines perform poorly. To gain more intuition for our method, we recorded the reward correction ∆r throughout training on the broken reacher environment. In this experiment, we ran RL on the source domain for 100k steps before switching to our method. Said another way, we ignored ∆r for the first 100k steps of training. As shown in Fig. 7, ∆r steadily decreases during these first 100k steps, suggesting that the agent is learning a strategy that takes transitions where the source domain and target domain have different dynamics: the agent is making use of its broken joint. After 100k steps, when we maximize the combination of task reward and reward correction ∆r, we observe that ∆r increases, so the agent's transitions in the source domain are increasingly consistent with target domain dynamics. After around 1e6 training steps ∆r is zero: the agent has learned a strategy that uses transitions that are indistinguishable between the source and target domains. Safety emerges from domain adaptation to the termination condition. The termination condition is part of the dynamics [82], and our next experiment studies how our method copes with domain shift in the termination condition.
We use the humanoid shown in Fig. 8 for this experiment and set the task reward to 0. In the source domain episodes have a fixed length of 300 steps; in the target domain the episode terminates when the robot falls. The scenario mimics the real-world setting where robots have freedom to practice in a safe, cushioned, practice facility, but are preemptively stopped when they try to take unsafe actions in the real world. Our aim is for the agent to learn to avoid unsafe transitions in the source domain that would result in episode termination in the target domain (see Broader Impacts for more discussion). As shown in Fig. 8, our method learns to remain standing for nearly the entire episode. As expected, baselines that maximize the zero reward on the source and target domains fall immediately. While DARC was not designed as a method for safe RL [2,5,18,69], this experiment suggests that safety may emerge automatically from DARC, without any manual reward function design.
Ablation experiments. Our next experiment examines the importance of using two classifiers to estimate ∆r. We compared our method to an ablation that does not learn the SA classifier, effectively ignoring the blue terms in Eq. 1. As shown in Fig. 9 (left), this ablation performs considerably worse than our method. Intuitively, this makes sense: we might predict that a transition came from the source domain not because the next state had higher likelihood under the source dynamics, but rather because the state or action was visited more frequently in the source domain. The second classifier used in our method corrects for this distribution shift.
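To make the two-classifier construction concrete, the following is a minimal sketch of how the reward correction could be computed from the classifiers' outputs; the function names, the NumPy implementation, and the eps smoothing constant are illustrative assumptions rather than our exact implementation.

```python
import numpy as np

def delta_r(q_sas, q_sa, s, a, s_next, eps=1e-8):
    """Estimate the reward correction from the two domain classifiers.

    q_sas(s, a, s_next) and q_sa(s, a) are assumed to return calibrated
    probabilities P(target | .). By Bayes' rule, subtracting the SA log-odds
    from the SAS log-odds cancels the state-action visitation term, leaving
    log p_target(s' | s, a) - log p_source(s' | s, a).
    """
    p_sas = q_sas(s, a, s_next)
    p_sa = q_sa(s, a)
    log_odds_sas = np.log(p_sas + eps) - np.log(1.0 - p_sas + eps)
    log_odds_sa = np.log(p_sa + eps) - np.log(1.0 - p_sa + eps)
    return log_odds_sas - log_odds_sa
```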
Finally, we examine the importance of input noise regularization in the classifiers. As we observe only a handful of transitions from the target domain, we hypothesized that regularization would be important to prevent overfitting. We test this hypothesis in Fig. 9 (right) by training our method on the broken reacher environment with varying amounts of input noise. With little or no noise, our method performs poorly (likely due to overfitting); with too much noise it also performs poorly (likely due to underfitting). We used a noise value of 1 in all our experiments, and did not tune this value. See Appendix C for more plots of both ablation experiments.
Discussion
In this paper, we proposed a simple, practical, and intuitive approach for domain adaptation to changing dynamics in RL. We formally motivate this method from a novel variational perspective on domain adaptation in RL, which suggests that we can compensate for differences in dynamics via the reward function. Experiments on a range of control tasks show that our method can leverage the source domain to learn policies that will work well in the target domain, despite observing only a handful of transitions from the target domain. The main limitation of our method is that the source dynamics must be sufficiently stochastic, an assumption that can usually be satisfied by adding noise to the dynamics, or ensembling a collection of sources. Empirically, we found that our method worked best on tasks that could be completed in many ways in the source domain, but where some of these strategies were not compatible with the target dynamics. The main takeaway of this work is that inaccuracies in dynamics can be compensated for via the reward function. In future work we aim to use the variational perspective on domain adaptation (Sec. 4) to learn the dynamics for the source domain.
A Additional Interpretations of the Reward Correction
This section presents four additional interpretations of the reward correction, ∆r.
A.1 Coding Theory
The reward correction ∆r can also be understood from the perspective of coding theory. Suppose that we use a data-efficient replay buffer that exploits the fact that the next state s t+1 is highly redundant with the current state and action, s t , a t . If we assume that the replay buffer compression has been optimized to store transitions from the target environment, (negative) ∆r is the number of additional bits (per transition) needed for our source replay buffer, as compared with our target replay buffer. Thus, an agent which maximizes ∆r will seek those transitions that can be encoded most efficiently, minimizing the size of the source replay buffer.
A.2 Mutual Information
We can gain more intuition into the modified reward by writing the expected value of ∆r from Eq. 1 in terms of mutual information:

$$\mathbb{E}[\Delta r(s_t, a_t, s_{t+1})] = I(s_{t+1}; \text{target} \mid s_t, a_t) - I(s_{t+1}; \text{source} \mid s_t, a_t).$$

The mutual information $I(s_{t+1}; \text{target} \mid s_t, a_t)$ reflects how much better you can predict the next state if you know that you are interacting with the target domain, instead of the source domain. Our approach does exactly this, rewarding the agent for taking transitions that provide information about the target domain while penalizing transitions that hint to the agent that it is interacting with a source domain rather than the target domain: we don't want our agent to find bugs in the Matrix.³

A.3 Lower bound on the risk-sensitive reward objective
While we derived DARC by minimizing a reverse KL divergence (Eq. 4), we can also show that DARC maximizes a lower bound on a risk-sensitive reward objective [47]:
The inequality on the last line is an application of Jensen's inequality. One interesting question is when it would be preferable to maximize Eq. 2 rather than Eq. 3. While Eq. 3 provides a looser bound on the risk-sensitive objective, empirically it may avoid the risk-seeking behavior that can be induced by risk-sensitive objectives. We leave the investigation of this trade-off as future work.
A.4 A Constraint on Dynamics Discrepancy
Our method regularizes the policy to visit states where the transition dynamics are similar between the source and target domains. This objective can equivalently be expressed as applying MaxEnt RL to only those policies which avoid exploiting the dynamics discrepancy. More precisely, the KKT conditions guarantee that there exists a positive constant $\epsilon > 0$ such that our objective is equivalent to the following constrained objective:
$$\max_{\pi \in \Pi_\text{DARC}} \; \mathbb{E}_{\substack{a \sim \pi(a \mid s) \\ s' \sim p(s' \mid s, a)}} \left[ \sum_t r(s_t, a_t) + \mathcal{H}_\pi[a_t \mid s_t] \right],$$
where Π DARC denotes the set of policies that do not exploit the dynamics discrepancy:
$$\Pi_\text{DARC} \triangleq \left\{ \pi \;:\; \mathbb{E}_{\substack{a \sim \pi(a \mid s) \\ s' \sim p(s' \mid s, a)}} \left[ \sum_t D_{KL}\big(p_\text{source}(s_{t+1} \mid s_t, a_t) \,\|\, p_\text{target}(s_{t+1} \mid s_t, a_t)\big) \right] \leq \epsilon \right\}.$$
One potential benefit of considering our method as the unconstrained objective is that it provides a principled method for increasing or decreasing the weight on the ∆r term, depending on how much the policy is currently exploiting the dynamics discrepancy. We leave this investigation as future work.
B Experiment Details and Hyperparameters
Our implementation of DARC is built on top of the implementation of SAC from Guadarrama et al. [26]. Unless otherwise specified, all hyperparameters are taken from Guadarrama et al. [26]. All neural networks (actor, critics, and classifiers) have two hidden layers with 256 units each and ReLU activations. Since we ultimately will use the difference in the predictions of the two classifiers, we use a residual parametrization for the SAS classifier $q(\text{target} \mid s_t, a_t, s_{t+1})$. Using $f_{SAS}(s_t, a_t, s_{t+1}), f_{SA}(s_t, a_t) \in \mathbb{R}^2$ to denote the outputs of the two classifier networks, we compute the classifier predictions as follows:

$$q_{\theta_{SA}}(\cdot \mid s_t, a_t) = \mathrm{softmax}(f_{SA}(s_t, a_t)), \qquad q_{\theta_{SAS}}(\cdot \mid s_t, a_t, s_{t+1}) = \mathrm{softmax}(f_{SAS}(s_t, a_t, s_{t+1}) + f_{SA}(s_t, a_t)).$$
For the SAS classifier we propagate gradients back through both networks' parameters, $\theta_{SAS}$ and $\theta_{SA}$. Both classifiers use Gaussian input noise with $\sigma = 1$. Optimization of all networks is done with Adam [37] with a learning rate of 3e-4 and a batch size of 128. Most experiments with DARC collected 1 step in the target domain every 10 steps in the source domain (i.e., r = 10). The one exception is the half cheetah obstacle domain, where we tried increasing r beyond 10 to 30, 100, 300, and 1000. We found a large benefit from increasing r to 30 and 100, but did not run the other experiments long enough to draw any conclusions. Fig. 6 uses r = 30 for half cheetah obstacle. We did not tune this parameter, and expect that tuning it would result in significant improvements in sample efficiency.
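A condensed PyTorch sketch of this parametrization is shown below; the hidden sizes, activations, and noise scale follow the text, while the class structure and variable names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class DomainClassifiers(nn.Module):
    """Sketch of the residual SA/SAS classifier parametrization."""

    def __init__(self, obs_dim, act_dim, hidden=256, noise_std=1.0):
        super().__init__()
        def mlp(in_dim):
            return nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 2))          # 2 logits: (source, target)
        self.f_sa = mlp(obs_dim + act_dim)
        self.f_sas = mlp(2 * obs_dim + act_dim)
        self.noise_std = noise_std

    def forward(self, s, a, s_next):
        sa = torch.cat([s, a], dim=-1)
        sas = torch.cat([s, a, s_next], dim=-1)
        # Gaussian input noise regularizes both classifiers (sigma = 1).
        sa = sa + self.noise_std * torch.randn_like(sa)
        sas = sas + self.noise_std * torch.randn_like(sas)
        logits_sa = self.f_sa(sa)
        # Residual parametrization: the SAS head outputs a correction that is
        # added to the SA logits before the softmax.
        logits_sas = self.f_sas(sas) + logits_sa
        return logits_sa, logits_sas
```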
We found that DARC was slightly more stable if we warm-started the method by applying RL on the source task without ∆r for the first t warmup iterations. We used t warmup = 1e5 for all tasks except the broken reacher, where we used t warmup = 2e5. This discrepancy was caused by a typo in an experiment, and subsequent experiments found that DARC is relatively robust to different values of t warmup ; we did not tune this parameter.
B.1 Baselines
The RL on Source and RL on Target baselines are implemented identically to our method, with the exception that ∆r is not added to the reward function. The RL on Target (10x) baseline is identical to RL on Target, with the exception that we take 10 gradient steps per environment interaction (instead of 1). The Importance Weighting baseline estimates the importance weights as $p_\text{target}(s_{t+1} \mid s_t, a_t) / p_\text{source}(s_{t+1} \mid s_t, a_t) \approx \exp(\Delta r)$. The importance weight is used to weight transitions in the SAC actor and critic losses.
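A hypothetical sketch of this baseline's loss weighting is given below; the variable names and the exact placement of the weights within the SAC losses are assumptions.

```python
import torch

def weighted_sac_losses(delta_r, td_errors, q_values):
    # Importance weights p_target(s'|s,a) / p_source(s'|s,a) ~= exp(delta_r);
    # they are treated as constants (no gradient flows through them).
    w = torch.exp(delta_r).detach()
    critic_loss = (w * td_errors.pow(2)).mean()
    actor_loss = -(w * q_values).mean()
    return critic_loss, actor_loss
```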
PETS [10]
The PETS baseline is implemented using the default configurations from [10] for the evaluated environments. The broken half cheetah environment uses the hyperparameters of the half cheetah environment in [10]. The broken ant environment uses the same set of hyperparameters, namely: task horizon = 1000, number of training iterations = 300, number of planning (real) steps per iteration = 30, and number of particles used in particle propagation methods = 20. The PETS codebase can be found at https://github.com/kchua/handful-of-trials.
MBPO [32]
We used the authors' implementation with the default hyperparameters: https://github.com/JannerM/mbpo. We kept the environment configurations the same as their default unmodified MuJoCo environments, except for the domain and task name. We added our custom environment XMLs in the mbpo/env/assets/ folder, and their corresponding environment Python files in the mbpo/env/ folder. Their static files were added under mbpo/static/. These environments can be registered as gym environments in the __init__ file under mbpo_odrl/mbpo/env/ or can be initialized directly in softlearning/environments/adapters/gym_adapter.py. We set the time limit to max_episode_steps=1000 for the Broken Half Cheetah, Broken Ant and Half Cheetah Obstacle environments and to 100 for the Broken Reacher environment.
B.2 Environments
Broken Reacher This environment uses the 7DOF robot arm from the Pusher environment in OpenAI Gym. The observation space is the position and velocities of all joints and the goal. The reward function is $r(s, a) = -\frac{1}{2}\|s_\text{end effector} - s_\text{goal}\|_2 - \frac{1}{10}\|a\|_2^2$, and episodes are 100 steps long. In the target domain the 2nd joint (0-indexed) is broken: zero torque is applied to this joint, regardless of the commanded torque.
Broken Half Cheetah This environment is based on the HalfCheetah environment in OpenAI Gym. Episodes are 1000 steps long. In the target domain the 0th joint (0-indexed) is broken: zero torque is applied to this joint, regardless of the commanded torque.
Broken Ant This environment is based on the Ant environment in OpenAI Gym. We use the standard termination condition and cap the maximum episode length at 1000 steps. In the target domain the 3rd joint (0-indexed) is broken: zero torque is applied to this joint, regardless of the commanded torque.
In all the broken joint environments, we chose which joint to break by computing which joint caused the "RL on Source" baseline to perform worst on the target domain, as compared with the "RL on Target" baseline.
Half Cheetah Obstacle This environment is based on the HalfCheetah environment in OpenAI Gym. Episodes are 1000 steps long. We modified the standard reward function to use the absolute value in place of the velocity, resulting in the following reward function:

$$r(s, a) = |s_{x\text{-vel}}| \cdot \Delta t - \|a\|_2^2,$$

where $s_{x\text{-vel}}$ is the velocity of the agent along the forward-aft axis and $\Delta t = 0.01$ is the time step of the simulator. In the target domain, we added a wall at $x = -3$ m, roughly 3 meters behind the agent.
Humanoid Used for the experiment in Fig. 8, we used a modified version of Humanoid from OpenAI Gym. The source domain modified this environment to ignore the default termination condition and instead terminate after exactly 300 time steps. The target domain uses the unmodified environment, which terminates when the agent falls.
B.3 Figures
Unless otherwise noted, all experiments were run with three random seeds. Figures showing learning curves (Figures 6, 9, 7, 10, and 11) plot the mean over the three random seeds, and also plot the results for each individual random seed with semi-transparent lines.
B.4 Archery Experiment
We used a simple physics model for the archery experiment. The target was located 70 m North of the agent, and wind was applied along the East-West axis. The system dynamics are

$$s_{t+1} = 70 \sin(\theta) + f / \cos(\theta)^2, \qquad f \sim \mathcal{N}(\mu = 1, \sigma = 1) \text{ in the target domain}, \quad f \sim \mathcal{N}(\mu = 0, \sigma = 0.3) \text{ in the source domain}.$$

We trained the classifier by sampling $\theta \sim U[-2, 2]$ (measured in degrees) for 10k episodes in the source domain and 10k episodes in the target domain. The classifier was a neural network with 1 hidden layer of 32 hidden units and ReLU activations. We optimized the classifier using the Adam optimizer with a learning rate of 3e-3 and a batch size of 1024. We trained until the validation loss increased for 3 consecutive epochs, which took 16 epochs in our experiment. We generated Fig. 4 by sampling 10k episodes for each value of $\theta$ and aggregating the rewards using $J(\theta) = \log \mathbb{E}_{p(s' \mid \theta)}[\exp(r(s'))]$. We found that aggregating rewards by taking the mean did not yield meaningful results, perhaps because the mean corresponds to a (possibly loose) lower bound on $J$ (see Appendix A.3).
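The archery model is simple enough to reproduce in a few lines. The sketch below follows the dynamics above; the reward r(s') = -|s'| used to aggregate episodes is a hypothetical stand-in, since the text does not specify the reward function.

```python
import numpy as np

def shoot(theta_deg, domain="target", n=10_000, seed=0):
    """Landing offset (East-West, in meters) for aim angle theta, in degrees."""
    rng = np.random.default_rng(seed)
    theta = np.deg2rad(theta_deg)
    mu, sigma = (1.0, 1.0) if domain == "target" else (0.0, 0.3)
    f = rng.normal(mu, sigma, size=n)          # wind force
    return 70.0 * np.sin(theta) + f / np.cos(theta) ** 2

def J(theta_deg, domain="target"):
    """Aggregate rewards as J(theta) = log E[exp(r(s'))], with r(s') = -|s'|."""
    s = shoot(theta_deg, domain)
    return np.log(np.mean(np.exp(-np.abs(s))))
```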
C Additional Experiments

Figures 10 and 11 show the full results of the ablation experiments in Fig. 9, run on all four tasks.
Figure 2: Block diagram of DARC (Alg. 1).

Figure 3: Tabular example of off-dynamics RL.

Figure 4: Visualizing the modified reward.

Figure 5: Environments: (L to R) broken reacher, broken half cheetah, broken ant, and half cheetah obstacle.

Figure 6: DARC compensates for crippled robots and obstacles: We apply DARC to four continuous control tasks: three tasks (broken reacher, half cheetah, and ant) which are crippled in the target domain but not the source domain, and one task (half cheetah obstacle) where the source domain omits the obstacle from the target domain. Note that naïvely ignoring the shift in dynamics (green dashed line) performs quite poorly, while directly learning on the crippled robot requires an order of magnitude more experience than our method.

Figure 7: Without the reward correction, the agent takes transitions where the source domain and target domains are dissimilar; after adding the reward correction, the agent's transitions in the source domain are increasingly plausible under the target domain. See text for details.

Figure 9: Ablation experiments. (Left) DARC performs worse when only one classifier is used. (Right) Using input noise to regularize the classifiers boosts performance. Both plots show results for broken reacher; see Appendix C for results on all environments.

Figure 8: Our method accounts for domain shift in the termination condition, causing the agent to avoid transitions that cause termination in the target domain.
$$\log \mathbb{E}_{\substack{s' \sim p_\text{source}(s' \mid s, a) \\ a \sim \pi(a \mid s)}}\left[\exp\left(\sum_t r(s_t, a_t) + \underbrace{\log p_\text{target}(s_{t+1} \mid s_t, a_t) - \log p_\text{source}(s_{t+1} \mid s_t, a_t)}_{\Delta r(s_t, a_t, s_{t+1})}\right)\right] \geq \mathbb{E}_{\substack{s' \sim p_\text{source}(s' \mid s, a) \\ a \sim \pi(a \mid s)}}\left[\sum_t r(s_t, a_t) + \Delta r(s_t, a_t, s_{t+1})\right]$$

$$\mathbb{E}\left[\sum_t r(s_t, a_t) + \underbrace{\log p_\text{target}(s_{t+1} \mid s_t, a_t) - \log p_\text{source}(s_{t+1} \mid s_t, a_t)}_{-D_{KL}(p_\text{source} \,\|\, p_\text{target})} + \mathcal{H}_\pi[a_t \mid s_t]\right].$$
Figure 10: Importance of using two classifiers: Results of the ablation experiment from Fig. 9 (left) on all environments.

Figure 11: Importance of regularizing the classifiers: Results of the ablation experiment from Fig. 9 (right) on all environments.
Tiao et al. [74] show that observation adaptation using CycleGAN [89] minimizes a Jensen-Shannon divergence. Assuming sufficiently expressive models, the Jensen-Shannon divergence and the reverse KL divergence above have the same optimum.
The CMU compute cluster where we ran our experiments is also named Matrix.
Acknowledgements We thank Anirudh Vemula for early discussions; we thank Karol Hausman, Vincent Vanhoucke, and anonymous reviewers at RAIL for feedback on a draft of this work. We thank Barry Moore for providing containers with MuJoCo and Dr. Paul Munro for granting access to compute at CRC. This work is supported by the Fannie and John Hertz Foundation, University of Pittsburgh Center for Research Computing (CRC), NSF (DGE1745016, IIS1763562), ONR (N000141812861), and US Army. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Contributions BE proposed the idea of using rewards to correct for dynamics, designed and ran many of the experiments in the paper, and wrote much of the paper. Swapnil did the initial literature review, wrote and designed some of the DARC experiments and environments, developed visualizations of the modified reward function, and ran the MBPO experiments. SC designed some of the initial environments, helped with the implementation of DARC, and ran the PETS experiments. RS and SL provided guidance throughout the project, and contributed to the structure and writing of the paper.
Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., and Riedmiller, M. (2018). Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920.
Achiam, J., Held, D., Tamar, A., and Abbeel, P. (2017). Constrained policy optimization. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 22-31. JMLR.org.
Attias, H. (2003). Planning by probabilistic inference. In AISTATS. Citeseer.
Baktashmotlagh, M., Harandi, M. T., Lovell, B. C., and Salzmann, M. (2014). Domain adaptation on the statistical manifold. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2481-2488.
Berkenkamp, F., Turchetta, M., Schoellig, A., and Krause, A. (2017). Safe model-based reinforcement learning with stability guarantees. In Advances in neural information processing systems, pages 908-918.
Bickel, S., Brückner, M., and Scheffer, T. (2007). Discriminative learning for differing training and test distributions. In Proceedings of the 24th international conference on Machine learning, pages 81-88.
Bousmalis, K., Irpan, A., Wohlhart, P., Bai, Y., Kelcey, M., Kalakrishnan, M., Downs, L., Ibarz, J., Pastor, P., Konolige, K., et al. (2018). Using simulation and domain adaptation to improve efficiency of deep robotic grasping. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 4243-4250. IEEE.
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. (2016). OpenAI Gym. arXiv preprint arXiv:1606.01540.
Chebotar, Y., Handa, A., Makoviychuk, V., Macklin, M., Issac, J., Ratliff, N., and Fox, D. (2019). Closing the sim-to-real loop: Adapting simulation randomization with real world experience. In 2019 International Conference on Robotics and Automation (ICRA), pages 8973-8979. IEEE.
Chua, K., Calandra, R., McAllister, R., and Levine, S. (2018). Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pages 4754-4765.
Clavera, I., Nagabandi, A., Fearing, R. S., Abbeel, P., Levine, S., and Finn, C. (2018). Learning to adapt: Meta-learning for model-based control. arXiv preprint arXiv:1803.11347, 3.
Cortes, C. and Mohri, M. (2014). Domain adaptation and sample bias correction theory and algorithm for regression. Theoretical Computer Science, 519:103-126.
Cutler, M., Walsh, T. J., and How, J. P. (2014). Reinforcement learning with multi-fidelity simulators. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 3888-3895. IEEE.
Dann, C., Neumann, G., Peters, J., et al. (2014). Policy evaluation with temporal differences: A survey and comparison. Journal of Machine Learning Research, 15:809-883.
Dayan, P. and Hinton, G. E. (1997). Using expectation-maximization for reinforcement learning. Neural Computation, 9(2):271-278.
Deisenroth, M. and Rasmussen, C. E. (2011). PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pages 465-472.
Dudík, M., Langford, J., and Li, L. (2011). Doubly robust policy evaluation and learning. arXiv preprint arXiv:1103.4601.
Eysenbach, B., Gu, S., Ibarz, J., and Levine, S. (2017). Leave no trace: Learning to reset for safe and autonomous reinforcement learning. arXiv preprint arXiv:1711.06782.
Farchy, A., Barrett, S., MacAlpine, P., and Stone, P. (2013). Humanoid robots learning to walk faster: From the real world to simulation and back. In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems, pages 39-46.
Feldbaum, A. (1960). Dual control theory. I. Avtomatika i Telemekhanika, 21(9):1240-1249.
Fernando, B., Habrard, A., Sebban, M., and Tuytelaars, T. (2013). Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the IEEE international conference on computer vision, pages 2960-2967.
Finn, C. and Levine, S. (2017). Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2786-2793. IEEE.
Fujimoto, S., Meger, D., and Precup, D. (2018). Off-policy deep reinforcement learning without exploration. arXiv preprint arXiv:1812.02900.
Gamrian, S. and Goldberg, Y. (2018). Transfer learning for related reinforcement learning tasks via image-to-image translation. arXiv preprint arXiv:1806.07377.
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., and Lempitsky, V. (2016). Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030.
Guadarrama, S., Korattikara, A., Ramirez, O., Castro, P., Holly, E., Fishman, S., Wang, K., Gonina, E., Harris, C., Vanhoucke, V., et al. (2018). TF-Agents: A library for reinforcement learning in TensorFlow.
Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. (2018). Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290.
Hafner, D., Lillicrap, T., Fischer, I., Villegas, R., Ha, D., Lee, H., and Davidson, J. (2018). Learning latent dynamics for planning from pixels. arXiv preprint arXiv:1811.04551.
Higgins, I., Pal, A., Rusu, A., Matthey, L., Burgess, C., Pritzel, A., Botvinick, M., Blundell, C., and Lerchner, A. (2017). DARLA: Improving zero-shot transfer in reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1480-1490. JMLR.org.
Hoffman, J., Wang, D., Yu, F., and Darrell, T. (2016). FCNs in the wild: Pixel-level adversarial and constraint-based adaptation. arXiv preprint arXiv:1612.02649.
Huszár, F. (2017). Variational inference using implicit distributions. arXiv preprint arXiv:1702.08235.
Janner, M., Fu, J., Zhang, M., and Levine, S. (2019). When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, pages 12498-12509.
Jaques, N., Ghandeharioun, A., Shen, J. H., Ferguson, C., Lapedriza, A., Jones, N., Gu, S., and Picard, R. (2019). Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456.
Jaques, N., Gu, S., Bahdanau, D., Hernández-Lobato, J. M., Turner, R. E., and Eck, D. (2017). Sequence tutor: Conservative fine-tuning of sequence generation models with KL-control. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1645-1654. JMLR.org.
Kanamori, T., Hido, S., and Sugiyama, M. (2009). A least-squares approach to direct importance estimation. Journal of Machine Learning Research, 10(Jul):1391-1445.
Kappen, H. J. (2005). Path integrals and symmetry breaking for optimal control theory. Journal of statistical mechanics: theory and experiment, 2005(11):P11011.
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Koller, D. and Friedman, N. (2009). Probabilistic graphical models: principles and techniques. MIT Press.
Kouw, W. M. and Loog, M. (2019). A review of domain adaptation without target labels. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Levine, S. (2018). Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909.
Levine, S. and Koltun, V. (2013). Variational policy search via trajectory optimization. In Advances in neural information processing systems, pages 207-215.
Levine, S., Pastor, P., Krizhevsky, A., Ibarz, J., and Quillen, D. (2018). Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, 37(4-5):421-436.
Lipton, Z. C., Wang, Y.-X., and Smola, A. (2018). Detecting and correcting for label shift with black box predictors. arXiv preprint arXiv:1802.03916.
Ljung, L. (1999). System identification. Wiley encyclopedia of electrical and electronics engineering, pages 1-19.
Madrigal, A. C. (2018). Waymo built a secret world for self-driving cars.
Matsakis, L. (2018). Amazon has a history of bear repellent accidents.
Mihatsch, O. and Neuneier, R. (2002). Risk-sensitive reinforcement learning. Machine learning, 49(2-3):267-290.
Mohamed, S. and Lakshminarayanan, B. (2016). Learning in implicit generative models. arXiv preprint arXiv:1610.03483.
Munos, R., Stepleton, T., Harutyunyan, A., and Bellemare, M. (2016). Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pages 1054-1062.
Peng, X. B., Andrychowicz, M., Zaremba, W., and Abbeel, P. (2018). Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE international conference on robotics and automation (ICRA), pages 1-8. IEEE.
Polydoros, A. S. and Nalpantidis, L. (2017). Survey of model-based reinforcement learning: Applications on robotics. Journal of Intelligent & Robotic Systems, 86(2):153-173.
Raibert, M. (2019). The best robots on four legs with Marc Raibert (Boston Dynamics).
Rajeswaran, A., Ghotra, S., Ravindran, B., and Levine, S. (2016). EPOpt: Learning robust neural network policies using model ensembles. arXiv preprint arXiv:1610.01283.
Rawlik, K., Toussaint, M., and Vijayakumar, S. (2013). On stochastic optimal control and reinforcement learning by approximate inference. In Twenty-Third International Joint Conference on Artificial Intelligence.
Ross, S. and Bagnell, J. A. (2012). Agnostic system identification for model-based reinforcement learning. arXiv preprint arXiv:1203.1007.
Sadeghi, F. and Levine, S. (2016). CAD2RL: Real single-image flight without a single real image. arXiv preprint arXiv:1611.04201.
Sastry, S. S. and Isidori, A. (1989). Adaptive control of linearizable systems. IEEE Transactions on Automatic Control, 34(11):1123-1131.
Schroecker, Y. and Isbell, C. (2020). Universal value density estimation for imitation learning and goal-conditioned reinforcement learning. arXiv preprint arXiv:2002.06473.
Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. (2015). Trust region policy optimization. In International conference on machine learning, pages 1889-1897.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Sønderby, C. K., Caballero, J., Theis, L., Shi, W., and Huszár, F. (2016). Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490.
Song, H. F., Abdolmaleki, A., Springenberg, J. T., Clark, A., Soyer, H., Rae, J. W., Noury, S., Ahuja, A., Liu, S., Tirumala, D., et al. (2019). V-MPO: On-policy maximum a posteriori policy optimization for discrete and continuous control. arXiv preprint arXiv:1909.12238.
Sugiyama, M., Krauledat, M., and Müller, K.-R. (2007). Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research, 8(May):985-1005.
Sugiyama, M. and Müller, K.-R. (2005a). Input-dependent estimation of generalization error under covariate shift. Statistics & Decisions, 23(4/2005):249-279.
Sugiyama, M. and Müller, K.-R. (2005b). Model selection under covariate shift. In International Conference on Artificial Neural Networks, pages 235-240. Springer.
Sugiyama, M., Nakajima, S., Kashima, H., Buenau, P. V., and Kawanabe, M. (2008). Direct importance estimation with model selection and its application to covariate shift adaptation. In Advances in neural information processing systems, pages 1433-1440.
Sugiyama, M., Takeuchi, I., Suzuki, T., Kanamori, T., Hachiya, H., and Okanohara, D. (2010). Conditional density estimation via least-squares density ratio estimation. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 781-788.
Sutton, R. S. (1991). Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160-163.
Tamar, A., Xu, H., and Mannor, S. (2013). Scaling up robust MDPs by reinforcement learning. arXiv preprint arXiv:1306.6189.
Tan, J., Xie, Z., Boots, B., and Liu, C. K. (2016). Simulation-based design of dynamic controllers for humanoid balancing. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2729-2736. IEEE.
Tanaskovic, M., Fagiano, L., Smith, R., Goulart, P., and Morari, M. (2013). Adaptive model predictive control for constrained linear systems. In 2013 European Control Conference (ECC), pages 382-387. IEEE.
Thananjeyan, B., Balakrishna, A., Rosolia, U., Li, F., McAllister, R., Gonzalez, J. E., Levine, S., Borrelli, F., and Goldberg, K. (2020). Safety augmented value estimation from demonstrations (SAVED): Safe deep model-based RL for sparse cost robotic tasks. IEEE Robotics and Automation Letters, 5(2):3612-3619.
Theodorou, E., Buchli, J., and Schaal, S. (2010). A generalized path integral control approach to reinforcement learning. Journal of machine learning research, 11(Nov):3137-3181.
Tiao, L. C., Bonilla, E. V., and Ramos, F. (2018). Cycle-consistent adversarial learning as approximate Bayesian inference. arXiv preprint arXiv:1806.01771.
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 23-30. IEEE.
Todorov, E. (2007). Linearly-solvable Markov decision problems. In Advances in neural information processing systems, pages 1369-1376.
Toussaint, M. (2009). Robot trajectory optimization using approximate inference. In Proceedings of the 26th annual international conference on machine learning, pages 1049-1056.
Uehara, M., Sato, I., Suzuki, M., Nakayama, K., and Matsuo, Y. (2016). Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920.
Vemula, A., Oza, Y., Bagnell, J. A., and Likhachev, M. (2020). Planning and execution using inaccurate models with provable guarantees. arXiv preprint arXiv:2003.04394.
Wang, T., Bao, X., Clavera, I., Hoang, J., Wen, Y., Langlois, E., Zhang, S., Zhang, G., Abbeel, P., and Ba, J. (2019). Benchmarking model-based reinforcement learning. arXiv preprint arXiv:1907.02057.
Werbos, P. J. (1989). Neural networks for control and system identification. In Proceedings of the 28th IEEE Conference on Decision and Control, pages 260-265. IEEE.
White, M. (2017). Unifying task specification in reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3742-3750. JMLR.org.
Williams, G., Aldrich, A., and Theodorou, E. (2015). Model predictive path integral control using covariance variable importance sampling. arXiv preprint arXiv:1509.01149.
Wittenmark, B. (1995). Adaptive dual control methods: An overview. In Adaptive Systems in Control and Signal Processing 1995, pages 67-72. Elsevier.
Wulfmeier, M., Bewley, A., and Posner, I. (2017). Addressing appearance change in outdoor robotics with adversarial domain adaptation. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1551-1558. IEEE.
Yu, W., Tan, J., Liu, C. K., and Turk, G. (2017). Preparing for the unknown: Learning a universal policy with online system identification. arXiv preprint arXiv:1702.02453.
Yu, Y. and Szepesvári, C. (2012). Analysis of kernel mean matching under covariate shift. arXiv preprint arXiv:1206.4650.
Zadrozny, B. (2004). Learning and evaluating classifiers under sample selection bias. In Proceedings of the twenty-first international conference on Machine learning, page 114.
Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. (2017a). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223-2232.
Zhu, S., Kimmel, A., Bekris, K. E., and Boularias, A. (2017b). Fast model identification via physics engines for data-efficient policy search. arXiv preprint arXiv:1710.08893.
Ziebart, B. D. (2010). Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy. PhD thesis, Carnegie Mellon University. |
2,808,403 | Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge | We consider the use of Deep Learning methods for modeling complex phenomena like those occurring in natural physical processes. With the large amount of data gathered on these phenomena the data intensive paradigm could begin to challenge more traditional approaches elaborated over the years in fields like maths or physics. However, despite considerable successes in a variety of application domains, the machine learning field is not yet ready to handle the level of complexity required by such problems. Using an example application, namely Sea Surface Temperature Prediction, we show how general background knowledge gained from physics could be used as a guideline for designing efficient Deep Learning models. In order to motivate the approach and to assess its generality we demonstrate a formal link between the solution of a class of differential equations underlying a large family of physical phenomena and the proposed model. Experiments and comparison with series of baselines including a state of the art numerical approach is then provided. * equal contribution . | [] | Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge
Emmanuel De Bézenac [email protected]
LIP6
UPMC
Arthur Pajot [email protected]
LIP6
UPMC
Patrick Gallinari [email protected]
LIP6
UPMC
Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge
We consider the use of Deep Learning methods for modeling complex phenomena like those occurring in natural physical processes. With the large amount of data gathered on these phenomena the data intensive paradigm could begin to challenge more traditional approaches elaborated over the years in fields like maths or physics. However, despite considerable successes in a variety of application domains, the machine learning field is not yet ready to handle the level of complexity required by such problems. Using an example application, namely Sea Surface Temperature Prediction, we show how general background knowledge gained from physics could be used as a guideline for designing efficient Deep Learning models. In order to motivate the approach and to assess its generality we demonstrate a formal link between the solution of a class of differential equations underlying a large family of physical phenomena and the proposed model. Experiments and comparison with series of baselines including a state of the art numerical approach is then provided. * equal contribution .
Introduction
A physical process is a sustained phenomenon marked by gradual changes through a series of states occurring in the physical world. Physicists and environmental scientists attempt to model these processes in a principled way through analytic descriptions of the scientist's prior knowledge of the underlying processes. Conservation laws, physical principles or phenomenological behaviors are generally formalized using differential equations. This physical paradigm has been, and still is the main framework for modeling complex natural phenomena like e.g. those involved in climate. With the availability of very large datasets captured via different types of sensors, this physical modeling paradigm is being challenged by the statistical Machine Learning (ML) paradigm, which offers a prior-agnostic approach. However, despite impressive successes in a variety of domains as demonstrated by the deployment of Deep Learning methods in fields such as vision, language, speech, etc, the statistical approach is not yet ready to challenge the physical paradigm for modeling complex natural phenomena, or at least it has not demonstrated how to. This is a new challenge for this field and an emerging research direction in the ML community. We believe that knowledge and techniques accumulated for modeling physical processes in well developed fields such as maths or physics could be useful as a guideline to design efficient learning systems and conversely, that the ML paradigm could open new directions for modeling such complex phenomena. In this paper we then raise two issues: 1) are modern ML techniques ready to be used to model complex physical phenomena, and 2) how general knowledge gained from the physical modeling paradigm could help designing efficient ML models.
In this work, we tackle these questions by considering a specific physical modeling problem: forecasting sea surface temperature (SST). SST plays a significant role in analyzing and assessing the dynamics of weather and other biological systems. Accurately modeling and predicting its dynamics is critical in various applications such as weather forecasting, or planning of coastal activities.
Since 1982, weather satellites have made huge quantities of very high resolution SST data available [Bernstein, 1982]. Standard physical methods for forecasting SST use coupled ocean-atmosphere prediction systems, based on the Navier Stokes equations. These models rely on multiple physical hypotheses and do not optimally exploit the information available in the data. On the other hand, despite the availability of large amounts of data, direct applications of ML methods do not lead to competitive state of the art results, as will be seen in section 4. We use SST as a typical and representative problem of intermediate complexity. Our goal is not to offer one more solution to this problem, but to use it as an illustration for advancing on the challenges mentioned above. The way we handle this problem is general enough to be transfered to a more general class of transport problems.
We propose a Deep Neural Network (NN) model, inspired from general physical motivations which offers a new approach for solving this family of problems. We first motivate our approach by introducing in section 2 the solution of a general class of partial differential equations (PDE) which is a core component of a large family of transport and propagation phenomena in physics. This general solution is used as a guideline for introducing a Deep Learning architecture for SST prediction which is described in section 3. Experiments and comparison with a series of baselines is introduced in section 4. A review of related work is finally presented in section 5.
The main contributions of this work are: 1) an example showing how to incorporate general physical background for designing a NN aimed at modeling a relatively complex prediction task; we believe the approach to be general enough to be used for a family of transport problems obeying general advection-diffusion principles; 2) formal links between our model's prediction and the solution of a general advection-diffusion PDE; 3) an unsupervised model for estimating motion fields, given a sequence of images; 4) a proof, on a relatively complex physical modeling problem, that full data intensive approaches based on deep architectures can be competitive with state of the art dedicated numerical methods.
Physical Motivation
Forecasting consists in predicting future temperature maps using past records. Temperatures are acquired via satellite imagery. If we focus on a specific area, we can formulate the problem as prediction of future temperature images of this area using past images. The classical approach to forecasting SST consists in using numerical models representing prior knowledge on the conservation laws and physical principles, which take the form of PDEs. These models are then coupled with SST data using assimilation techniques in order to adjust to initial conditions. The model is then integrated forward in time to predict SST evolution. For the sea surface, temperature variation is related to a fluid transport problem. In fluids, transport occurs through the combination of two principles: advection and diffusion. During advection, a fluid transports some conserved quantity $I$ (the temperature for SST) or material via bulk motion, i.e. for small variations $\Delta x$, $\Delta t$, conservation is expressed as:
$$I(x, t) = I(x + \Delta x, t + \Delta t) \tag{1}$$
applying a first order approximation of the right hand side and moving the resulting terms to the left hand side of equation 1, we obtain the advection equation, known also as the Brightness Constancy Constraint Equation (BCCE):
$$\frac{\partial I}{\partial t} + (w \cdot \nabla) I = 0 \tag{2}$$
where $\nabla$ denotes the gradient operator, and $w$ the motion vector $\frac{\Delta x}{\Delta t}$. This equation describes the temporal evolution of quantity $I$ for displacement $w$. Note that this equation is also the basis for many variational methods for Optical Flow. To retrieve the motion, numerical schemes are applied, and the resulting system of equations, along with an additional constraint on $w$, is solved for $w$. This motion can then be used to forecast the future value of $I$. Advection alone is not sufficient to explain the evolution of many physical processes (including SST). Diffusion corresponds to the movement which spreads out the quantity $I$ from areas of high concentration to areas of low concentration. Both advection and diffusion should be considered together. The following equation describes the transport of quantity $I$ through advection and diffusion:
$$\frac{\partial I}{\partial t} + (w \cdot \nabla) I = D \nabla^2 I \tag{3}$$
$\nabla^2$ denotes the Laplacian operator and $D$ the diffusion coefficient. Note that when $D \to 0$, we recover the advection equation 2. This equation describes a large family of physical processes (e.g. fluid dynamics, heat conduction, wind dynamics, etc). Let us now state a result characterizing the general solutions of equation 3.
Theorem 1.² For any initial condition $I_0 \in L^1(\mathbb{R}^2)$ with $I_0(\pm\infty) = 0$, there exists a unique global solution $I(x, t)$ to the advection-diffusion equation (3), where
$$I(x, t) = \int_{\mathbb{R}^2} k(x - w, y)\, I_0(y)\, dy \qquad (4)$$
and $k(u, v) = \frac{1}{4\pi D t}\, e^{-\frac{1}{4Dt}\|u - v\|^2}$ is a radial basis function kernel, or alternatively, a Gaussian probability density with mean $x - w$ and variance $2Dt$ in its second argument.
Equation (4) provides a principled way to calculate $I(x, t)$ for any time t using the initial condition $I_0$, provided the motion w and diffusion coefficient D are known. It states that quantity $I(x, t)$ can be computed from the initial condition $I_0$ via a convolution with a Gaussian probability density function. In other words, if I were used as a model for the evolution of the SST and the surface's underlying advecting mechanisms were known, future surface temperatures could be predicted from previous ones. Unfortunately, neither the initial conditions, the motion vectors nor the diffusion coefficient are known; they have to be estimated from the data. Inspired by the general form of solution (4), we propose an ML method, expressed as a Deep Learning architecture, for predicting SST. This model will learn to predict a motion field analogous to the w in equation (3), which will be used to predict future images.
Model
Figure 1: Motion is estimated from the input images with a convolutional neural network. A warping scheme then displaces the last input image along this motion estimate to produce the future image. The error signal is calculated using the target future image, and is backpropagated through the warping scheme to correct the CNN. To produce multiple time-step forecasts, the predicted image is fed back into the CNN in an autoregressive manner.
The model consists of two main components, as illustrated in Figure 1. The first predicts the motion field from a sequence of past input images (the convolutional-deconvolutional (CDNN) module at the top of Figure 1), and the second warps the last input image using the motion field from the first component, in order to produce an image forecast. The entire system is trained end-to-end, using only the supervision from the target SST image. By doing so, we are able to produce an interpretable latent state which corresponds, in our problem, to the velocity field advecting the temperatures.
Let us first introduce some notations. Each SST image $I_t$ is acquired on a bounded rectangle of $\mathbb{R}^2$, named $\Omega$. We denote by $I_t(x)$ and $w_t(x)$ the sea surface temperature and the two-dimensional motion vector at time $t \in \mathbb{R}$ at position $x \in \Omega$. $I_t : \Omega \to \mathbb{R}$ and $w_t : \Omega \to \mathbb{R}^2$ represent the temperatures and the motion vector field at time t defined on $\Omega$, respectively. When time t and position x are available from the context, we will drop the subscript t from $w_t(x)$ and $I_t(x)$, along with x, for clarity. Given a sequence of k consecutive SST images $\{I_{t-k-1}, \dots, I_t\}$ (also denoted as $I_{t-k-1:t}$), our goal is to predict the next image $I_{t+1}$.
Motion Estimation
Figure 2: Architecture of the motion estimation component: a convolution-deconvolution network with skip connections, mapping k input frames of size $64 \times 64$ to a $64 \times 64 \times 2$ motion field through feature maps of sizes $64 \times 32 \times 32$, $128 \times 16 \times 16$, $256 \times 8 \times 8$, $512 \times 4 \times 4$, $386 \times 8 \times 8$, $194 \times 16 \times 16$ and $98 \times 32 \times 32$. For the motion flow, colours correspond to the flow orientation and colour intensity to the flow intensity.
As indicated in section 2, provided the underlying motion field is known, one can compute SST forecasts. Let us introduce how the motion field is estimated in our architecture. We are looking for a vector field w which, when applied to the geometric space $\Omega$, renders $I_t$ close to $I_{t+1}$, i.e. $I_{t+1}(x) \approx I_t(x + w(x))$, $\forall x \in \Omega$. If $I_{t+1}$ were known, we could estimate w, but $I_{t+1}$ is precisely what we are looking for. Instead, we choose to use a convolutional-deconvolutional architecture to predict a motion vector for each pixel. As shown in Figure 2, this network makes use of skip connections [He et al., 2015], allowing fine-grained information from the first layers to flow through in a more direct manner. We use a batch normalization layer between each convolution, and Leaky ReLU (with parameter value set to 0.1) non-linearities between convolutions and transposed convolutions. We used k = 4 concatenated images $I_{t-k-1:t}$ as input for training. We have selected this architecture experimentally, testing different state-of-the-art convolution-deconvolution network architectures. Let $\hat{w} \in \mathbb{R}^{2 \times W \times H}$ be the output of the network, where W and H are the width and height of the images, respectively.
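To make this concrete, the following is a minimal PyTorch sketch of such a convolutional-deconvolutional motion estimator. The channel sizes follow Figure 2, but the framework choice, the layer names, and the exact split between transposed-convolution outputs and skip connections are our assumptions, not details specified by the paper.

```python
import torch
import torch.nn as nn

class MotionEstimator(nn.Module):
    """Conv-deconv motion estimator with skip connections (cf. Figure 2).
    Maps k stacked past frames (B, k, 64, 64) to a motion field (B, 2, 64, 64)."""

    def __init__(self, k=4):
        super().__init__()
        def down(cin, cout):  # halves the spatial resolution
            return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                                 nn.BatchNorm2d(cout), nn.LeakyReLU(0.1))
        def up(cin, cout):    # doubles the spatial resolution
            return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                                 nn.BatchNorm2d(cout), nn.LeakyReLU(0.1))
        self.d1 = down(k, 64)     # -> (64, 32, 32)
        self.d2 = down(64, 128)   # -> (128, 16, 16)
        self.d3 = down(128, 256)  # -> (256, 8, 8)
        self.d4 = down(256, 512)  # -> (512, 4, 4)
        self.u1 = up(512, 130)    # + skip (256) -> 386 channels at 8x8
        self.u2 = up(386, 66)     # + skip (128) -> 194 channels at 16x16
        self.u3 = up(194, 34)     # + skip (64)  -> 98 channels at 32x32
        self.out = nn.ConvTranspose2d(98, 2, 4, stride=2, padding=1)

    def forward(self, x):
        h1 = self.d1(x)
        h2 = self.d2(h1)
        h3 = self.d3(h2)
        h4 = self.d4(h3)
        g = torch.cat([self.u1(h4), h3], dim=1)  # skip connection from d3
        g = torch.cat([self.u2(g), h2], dim=1)   # skip connection from d2
        g = torch.cat([self.u3(g), h1], dim=1)   # skip connection from d1
        return self.out(g)                       # estimated motion field w_hat
```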
Generally, and this is the case for our problem, we do not have direct supervision on the motion vector field, since the target motion is usually not available. Using the warping scheme introduced below, we will nonetheless be able to (weakly) supervise $\hat{w}$, based on the discrepancy between the warped version of image $I_t$ and the target image $I_{t+1}$.

Figure 3: Warping scheme. To calculate the pixel value for time t + 1 at position x, we first compute its previous position at time t, i.e. $x - w$. We then center a Gaussian at that position in order to obtain a weight value for each pixel in $I_t$ based on its distance to $x - w$, and compute a weighted average of the pixel values of $I_t$. This weighted average will correspond to the new pixel value at x in $I_{t+1}$.
Warping Scheme
Discretizing the solution of the advection-diffusion equation of section 2 by replacing the integral with a sum, and setting image $I_t$ as the initial condition, we obtain a method to calculate the future image based on the motion field estimate $\hat{w}$. The latter is used as a warping scheme:
$$\hat{I}_{t+1}(x) = \sum_{y \in \Omega} k\big(x - \hat{w}(x), y\big)\, I_t(y) \qquad (5)$$
where $k(x - \hat{w}, y) = \frac{1}{4\pi D \Delta t}\, e^{-\frac{1}{4D\Delta t}\|x - \hat{w} - y\|^2}$
is a radial basis function kernel, as in equation (4), parameterized by the diffusion coefficient D and the time step value ∆t between t and t + 1. To calculate the temperature for time t + 1 at position x, we compute the scalar product between $k(x - \hat{w}, \cdot)$, a Gaussian centered at $x - \hat{w}$, and the previous image $I_t$. Simply put, it is a weighted average of the temperatures of $I_t$, where the weight values are larger when the pixels' positions are closer to $x - \hat{w}$. Informally, $x - \hat{w}$ corresponds to the pixel's previous position at time t. See Figure 3.
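Written out, a dense PyTorch sketch of this differentiable warping could look as follows. It is an explicit O((HW)²) version meant for small images; the weights are normalized to sum to one in place of the continuous kernel's $\frac{1}{4\pi D \Delta t}$ factor, and the function name and the treatment of D and ∆t as fixed scalars are our assumptions.

```python
import torch

def gaussian_warp(I, w, D=0.5, dt=1.0):
    """Warp images I (B, H, W) along motion fields w (B, 2, H, W) with the
    radial basis function kernel of equation (5)."""
    B, H, W = I.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=I.dtype, device=I.device),
                            torch.arange(W, dtype=I.dtype, device=I.device),
                            indexing="ij")
    grid = torch.stack([xs, ys], dim=0)                    # pixel coordinates (2, H, W)
    src = (grid.unsqueeze(0) - w).reshape(B, 2, H * W, 1)  # x - w(x)
    pix = grid.reshape(1, 2, 1, H * W)                     # candidate source pixels y
    dist2 = ((src - pix) ** 2).sum(dim=1)                  # squared distances (B, HW, HW)
    k = torch.exp(-dist2 / (4 * D * dt))
    k = k / k.sum(dim=-1, keepdim=True)                    # normalized Gaussian weights
    return torch.bmm(k, I.reshape(B, H * W, 1)).reshape(B, H, W)
```

Since every operation above is differentiable with respect to w, the error on the warped image can be backpropagated to the motion estimator, as required by the training scheme.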
As seen from the relation with the solution of the advection-diffusion equation, the proposed warping mechanism is clearly adapted to the modeling of phenomena governed by the advection-diffusion equation. SST forecasting is a particular case, but the proposed scheme can be used for any problem in which advection and diffusion occur. Moreover, this warping scheme is entirely differentiable, allowing backpropagation of the error signal to the motion estimating module.
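Putting the two modules together, the autoregressive multi-step forecast described in the Figure 1 caption could be sketched as below, assuming the MotionEstimator and gaussian_warp sketches above; the frame bookkeeping is ours.

```python
import torch

def forecast(model, frames, horizon=6, k=4):
    """Autoregressive forecast: estimate motion from the last k frames,
    warp the most recent frame, and feed the prediction back as input.
    frames: (B, k, H, W); returns (B, horizon, H, W)."""
    history = list(frames.unbind(dim=1))
    preds = []
    for _ in range(horizon):
        inp = torch.stack(history[-k:], dim=1)   # last k frames as channels
        w_hat = model(inp)                       # motion field estimate
        nxt = gaussian_warp(history[-1], w_hat)  # warped next-frame prediction
        preds.append(nxt)
        history.append(nxt)                      # autoregressive feedback
    return torch.stack(preds, dim=1)
```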
This warping mechanism has been inspired by the Spatial Transformer Network (STN) [Jaderberg et al., 2015], originally designed to be incorporated as a layer in a convolutional neural network architecture in order to gain invariance under geometric transformations. Using the notations of [Jaderberg et al., 2015], when the inverse geometric transformation $T_\theta$ of the grid generator step is set to $T_\theta(x) = x - \hat{w}(x)$, and the kernels $k(\cdot\,; \Phi_x)$ and $k(\cdot\,; \Phi_y)$ in the sampling step are RBF kernels, we recover our warping scheme. The latter can be seen as a specific case of the STN, without the localization step. This result theoretically grounds the use of the STN for optical flow in many recent articles [Zhu et al., 2017; Yu et al., 2016; Patraucean et al., 2015; Finn et al., 2016]: in equation (3), when $D \to 0$, we recover the brightness constancy constraint equation used in the latter.
For training, supervision is provided at the output of the warping module. It consists in minimizing the discrepancy between the warped image $\hat{I}_{t+1}$ and the target image $I_{t+1}$. The loss is measured via a differentiable function, and the gradient is backpropagated through the warping function in order to adjust the parameters of the convolutional-deconvolutional module generating the vector field. This is detailed in the next section.
Loss function
At each iteration, the model aims at forecasting the next observation, given the previous ones. We evaluate the discrepancy between the warped image $\hat{I}_{t+1}$ and the target image $I_{t+1}$ using the Charbonnier penalty function $\rho(x) = (x + \epsilon)^{\frac{1}{\alpha}}$, where $\epsilon$ and $\alpha$ are parameters to be set. Note that with $\epsilon = 0$ and $\alpha = \frac{1}{2}$, we recover the $\ell_2$ loss. The Charbonnier penalty function is known to reduce the influence of outliers compared to an $\ell_2$ norm. We have tested the Laplacian pyramid loss [Ling and Okada, 2006], where we enforce convolutions of all deconvolutional layers to be close to down-sampled versions of the target image in the Charbonnier penalty sense, but we have observed an overall decrease in generalization performance. The proposed NN model has been designed according to the intuition gained from general background knowledge of a physical phenomenon, here advection-diffusion equations. Additional prior knowledge, expressed as partial differential equations or through constraints, can be easily incorporated in our model by adding penalty terms in the loss function. As the displacement w is explicitly part of our model, one strength of our model is its capacity to apply regularization terms directly on the motion field. In our experiments, we tested the influence of different terms: divergence $(\nabla \cdot w_t(x))^2$, magnitude $\|w_t(x)\|^2$ and smoothness $\|\nabla w_t(x)\|^2$. Data assimilation techniques sometimes use these weighted penalties to control the rotation and divergence fields, for example. We evaluate the influence of these terms in the experiments section. The objective function we use to train the model may be written as:
$$\mathcal{L}_t = \sum_{x \in \Omega} \Big[ \rho\big(\hat{I}_{t+1}(x) - I_{t+1}(x)\big) + \lambda_{div} \big(\nabla \cdot \hat{w}_t(x)\big)^2 + \lambda_{magn} \|\hat{w}_t(x)\|^2 + \lambda_{grad} \|\nabla \hat{w}_t(x)\|^2 \Big] \qquad (6)$$
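A sketch of this objective in PyTorch, with the spatial derivatives of the penalty terms approximated by forward finite differences, could look as follows. The helper names and the finite-difference discretization are ours; the default coefficients anticipate the validated values reported in the experiments section (note that λ_magn is negative there).

```python
import torch

def motion_penalties(w):
    """Divergence, magnitude and smoothness penalties of equation (6).
    w: motion field (B, 2, H, W); channel 0 is the x-component, 1 the y-component."""
    du_dx = w[:, 0, :, 1:] - w[:, 0, :, :-1]
    dv_dy = w[:, 1, 1:, :] - w[:, 1, :-1, :]
    div = (du_dx[:, :-1, :] + dv_dy[:, :, :-1]) ** 2  # (div w)^2
    magn = (w ** 2).sum(dim=1)                        # ||w||^2
    dw_dx = w[:, :, :, 1:] - w[:, :, :, :-1]
    dw_dy = w[:, :, 1:, :] - w[:, :, :-1, :]
    # ||grad w||^2, roughly averaged per pixel
    grad = ((dw_dx ** 2).sum() + (dw_dy ** 2).sum()) / div.numel()
    return div.mean(), magn.mean(), grad

def warping_loss(I_pred, I_target, w, lam_div=1.0, lam_magn=-0.03,
                 lam_grad=0.4, eps=1e-3, alpha=0.5):
    # Charbonnier penalty; mean instead of the paper's sum over Omega,
    # which only rescales the lambda coefficients
    rho = (torch.abs(I_pred - I_target) + eps) ** (1.0 / alpha)
    div, magn, grad = motion_penalties(w)
    return rho.mean() + lam_div * div + lam_magn * magn + lam_grad * grad
```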
Experiments
Dataset description
Since 1982, high resolution SST data has been made available by the NOAA-6 weather satellite [Bernstein, 1982]. Dealing directly with these data requires a lot of preprocessing (e.g. some regions are not available due to clouds hindering temperature acquisition). In order to avoid such complications, which are beyond the scope of this work, we used synthetic but realistic SST data of the Atlantic ocean generated by a sophisticated simulation engine: NEMO (Nucleus for European Modeling of the Ocean)³ [Madec, 2008]. NEMO is a state-of-the-art modelling framework of ocean related engines. It is a primitive equation model adapted to regional and global ocean circulation problems. Historical data is accumulated in the model to generate a synthesized estimate of the states of the system using data reanalysis, a specific data assimilation scheme, which means that the data does follow the true temperatures. The resulting dataset is constituted of daily temperature acquisitions of 481 by 781 pixels, from 2006-12-28 to 2017-04-05 (3734 acquisitions). We extract 64 by 64 pixel sized sub-regions as indicated in Figure 4. We use data from years 2006 to 2015 for training and validation (94743 training examples), and years 2016 to 2017 for testing. We withhold 20% of the training data for validation, selected uniformly at random at the beginning of each experiment. For the tests, we used the sub-regions numbered 17 to 20 in Figure 4, where the interactions between hot and cold waters make the dynamics interesting to study. All the regions numbered in Figure 4, from 2006 to 2015, were used for training⁴. Each sequence of images used for training or for evaluation corresponds to a specific numbered sub-region. We make the simplifying hypothesis that the data in a single sub-region contains enough information to forecast the future of the sub-region. As the forecast is for a small temporal horizon, we can assume that the influence from outside the region is small enough.
We standardise the daily SST acquisitions of each sub-region using the mean and the standard deviation of all the SST data of that sub-region acquired on the same day of the year, i.e. the SST acquisition of sub-region 2 on September 8th 2017 is standardised using the data of all the September 8th acquisitions available in the dataset for that sub-region. This removes the seasonal component from the SST data.
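A simple NumPy sketch of this per-day standardisation (the array layout and names are ours):

```python
import numpy as np

def standardise_by_day(sst, day_of_year):
    """Standardise daily SST maps using per-day-of-year statistics,
    removing the seasonal component.
    sst: (T, H, W) daily acquisitions of one sub-region;
    day_of_year: (T,) integer day-of-year index of each acquisition."""
    out = np.empty_like(sst, dtype=float)
    for d in np.unique(day_of_year):
        idx = day_of_year == d
        mu, sigma = sst[idx].mean(), sst[idx].std()
        out[idx] = (sst[idx] - mu) / (sigma + 1e-8)
    return out
```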
Baseline Comparison
We compare our model with several baselines. Each model is evaluated with a mean square error metric, forecasting images at a horizon of 6 (we forecast from $I_{t+1}$ to $I_{t+6}$ and then average the MSE). The hyperparameters are tuned using the validation set. Neural network based models are run on a Titan Xp GPU, and runtime is given for comparison purposes.
Concerning the constraints on the vector field w (equation (6)), the regularization coefficients selected via validation are $\lambda_{div} = 1$, $\lambda_{magn} = -0.03$ and $\lambda_{grad} = 0.4$. We also compare the results with the model without any regularization.
Our reference model for forecasting is [Béréziat and Herlin, 2015], a numerical model which relies on data assimilation. In [Béréziat and Herlin, 2015], the ocean's dynamics are modeled using shallow water equations [Vallis, 2017] and the initial conditions, along with other terms, are estimated using assimilation techniques [Trémolet, 2006]. This is a state-of-the-art assimilation model for predicting ocean dynamics, here SST.
The other baselines are 1) an autoregressive convolutional-deconvolutional NN (ACNN), with an architecture similar to our CDNN module, but trained to predict the future image directly, without explicitly representing the motion vector field; each past observation is used as an input channel, and the output is used as new input for multi-step forecasting; 2) a ConvLSTM model [Shi et al., 2015], which uses convolutional transitions in the inner LSTM module; and 3) the model of [Mathieu et al., 2015], which is a multi-scale ACNN trained as a Generative Adversarial Network (GAN). We have used a non-official implementation of [Mathieu et al., 2015], available at https://github.com/dyelax/Adversarial_Video_Generation. For [Béréziat and Herlin, 2015], the code has been provided by the authors of the paper. We have implemented the ACNN and ConvLSTM models ourselves. The code for our models, along with these baselines, will be made available.
Quantitative Results
Table 1: Average score and average time on test data. The average score is calculated using the mean square error metric (MSE); time is in seconds. The regularization coefficients for our model have been set using a validation set, with $\lambda_{div} = 1$, $\lambda_{magn} = -0.03$ and $\lambda_{grad} = 0.4$.

Model                                       | Average Score (MSE) | Average Time
Numerical model [Béréziat and Herlin, 2015] | 1.99                | 4.8 s
ConvLSTM [Shi et al., 2015]                 | 5                   |

Quantitatively, our model performs well: its MSE score is better than any of the baselines. The closest NN baseline is [Mathieu et al., 2015], which regularizes a regression convolution-deconvolution model with a GAN. Its performance is however clearly below that of the proposed model, and it does not allow to easily incorporate prior constraints inspired from the physics of the phenomenon. ACNN is a direct predictor of the image sequence, implemented via a CDNN module identical to the one used in our model. Its performance is poor: clearly, a straightforward use of prediction models is not adapted to the complexity of the phenomenon. ConvLSTM performs better: as opposed to the ACNN, it seems to be able to capture a dynamic, although not very accurately. Overall, direct prediction models are not able to capture the complex underlying dynamics and produce blurry sequences of images. The GAN explicitly forces the network output to eliminate the blurring effect, making it able to capture short-term dynamics. The state-of-the-art numerical model of [Béréziat and Herlin, 2015] performs well but has a lower performance than our regularized model, although it incorporates more prior constraints. This shows that pure ML models, when conceived adequately and trained with enough data, can be competitive with state-of-the-art dedicated models. Regularizing the motion vector w notably increases the performance w.r.t. the unregularized model. The choice of the constraints (divergence, magnitude and smoothness), inspired here by physical background, corresponds to relevant priors on the dynamics of the model.
As for the running time, the proposed model is extremely fast, just above the ConvLSTM model of [Shi et al., 2015]. The running time of the model of [Béréziat and Herlin, 2015] is not comparable to the others: it was run on a CPU (no GPU code available) while all the others were run on a Titan Xp GPU. However, an optimization procedure is required to estimate the motion field, and it is clearly slower than the straightforward NN predictions. Moreover, in order to prevent the numerical scheme from diverging, multiple intermediate forecasts are required.
Besides the MSE, we need to analyze the prediction samples qualitatively. Figure 5 shows the predictions obtained by the different models. The top row is the ground truth for a sequence of 4 temperature images corresponding to times t, t + 1, t + 3 and t + 6. The second row corresponds to our regularized model's predictions at times t + 1, t + 3 and t + 6 (time t corresponds to the last input image; it is repeated on each row). The model seems to conserve temperatures. The prediction is close to the target for t + 1 and t + 3, and starts to move away at time t + 6. The third row shows the motion flow estimated by the model. Each color in the flow images corresponds to a motion vector. There is clearly a strong evolving dynamic captured for this sequence. Row 4 is the numerical assimilation model of [Béréziat and Herlin, 2015]. It also clearly captures some dynamics and shows interesting patterns, but it tends to diverge when the prediction horizon increases. The ACNN model (row 5) rapidly produces blurry images; it does not preserve the temperatures and does not seem to capture any dynamics. Row 6 shows the predictions of the ConvLSTM model. Temperature is not preserved and, although a dynamic is captured, it does not correspond to the target. Overall, the proposed model seems to forecast SST quite accurately, while retrieving a coherent motion vector field.
Related Work
ML for physical modeling. Close to this work is the field of spatio-temporal statistics. In their reference book, Cressie and Wikle [2015] also advocate the use of physical background knowledge to build statistical models. They show how the design of statistical models can be inspired by partial differential equations linked to an observed physical phenomenon. They mainly consider autoregressive models within a hierarchical Bayesian framework. Another interesting research direction is the use of NNs for reducing the complexity of numerical simulation of physical processes. Generally, in these approaches, statistical models are used in place of a computationally demanding component of the numerical simulation process. For example, in the domain of fluid dynamics, [Tompson et al., 2017] and [Ladický et al., 2015] propose to use regressors for simulating fluid and smoke animation: [Ladický et al., 2015] use a random forest to compute particle locations and [Tompson et al., 2017] use a CNN to approximate part of a numerical PDE scheme. In these approaches, ML is only a component of a numerical simulation scheme, whereas we aim at modeling the whole physical process via a Deep Learning approach. Further from our objective, [Rudy et al., 2017] make use of a sparse regression method for discovering the governing partial differential equation(s) of a given system from time series measurements in the spatial domain.
Our work is also related to recent developments in computer vision, in the related but distinct fields of video prediction and motion estimation in videos. Our goal and domain of application are clearly different from video modeling, but since our solution involves predicting a motion field and the next SST image, the solutions share some similarities. Motion estimation and video prediction by deep architectures have motivated a series of works over the last two years. We briefly review them below and outline the differences.
Optical Flow. Optical flow consists in retrieving the apparent motion of objects, surfaces, or particles between two consecutive frames of a video. The extracted motion can be used in many areas such as object detection, object tracking, movement detection, robot navigation and visual odometry. In the vision community, this is considered as a problem by itself, and several papers are dedicated to this topic. Classical methods rely on the brightness constancy constraint equation (BCCE) (equation (2)), derived from the observation that surfaces usually persist over time, and hence the intensity value of a small region remains the same despite its position change [Sun et al., 2008]. Since using the BCCE directly leads to complicated optimization issues, classic approaches, namely differential methods, approximate the BCCE using a first-order Taylor expansion and develop variational methods.
As an alternative to these methods, Deep Learning models have recently been proposed for estimating the optical flow between two images. [Fischer et al., 2015] formulate optical flow as a supervised regression problem, using a CNN to predict motion. [Ilg et al., 2016] build on this approach and propose to use an ensemble of these CNN architectures. They achieve results on par with state-of-the-art methods for optical flow, while maintaining a small computational overhead. The difficulty here is that these methods require a notable quantity of target data, i.e. optical flow images, while, because of the complexity of manually annotating flow images, only a few small annotated collections are available. [Fischer et al., 2015] and [Ilg et al., 2016] chose to pretrain their models on a synthetic dataset made of computer animations and their associated motion, and show that this information transfers well to real videos. [Yu et al., 2016] demonstrate that it is possible to predict the optical flow between two input images in an unsupervised way using a CNN and a Spatial Transformer Network. This is however not extensible to prediction as is done in our setting, since it requires the two images $I_t$ and $I_{t+1}$ as input, while $I_{t+1}$ is not available at inference time for prediction.
Video prediction
It is only very recently that video prediction emerged as a task in the Deep Learning community. For this task, one is generally interested in accurately predicting the displacement, emergence or disappearance of objects in the video. In our application, the goal is clearly different, since we are interested in modeling the whole dynamics behind image changes and not in following moving objects. Let us first introduce some methods that perform prediction by computing optical flow or a similar transformation. Both [Patraucean et al., 2015] and [Finn et al., 2016] use some form of motion flow estimation. For next frame prediction, [Patraucean et al., 2015] introduce an STN module at the hidden layer of an LSTM in order to estimate a motion field in this latent space. The resulting image is then decoded in the original image space for prediction. This approach clearly does not allow introducing prior knowledge on the vector field, as has been done in our work. [Finn et al., 2016] learn affine transformations on image parts in order to predict object displacement, and [Van Amersfoort et al., 2017] proposed a similar model.
Let us now consider models that directly attempt to predict the next frame without estimating a motion field. As shown in the experimental section, plain application of autoregressive models produces blurred images. [Mathieu et al., 2015], one of our baselines, proposed to use different loss functions and a GAN regularization of a CDNN predictor, which led to sharper and higher quality predictions. Significant improvements have been obtained with the Video Pixel Network of [Kalchbrenner et al., 2016], a sophisticated architecture composed of resolution-preserving CNN encoders, and LSTM and PixelCNN decoders. This model is probably the state of the art today for video prediction: they reach a high accuracy on moving MNIST and good performance on a robot video dataset. A drawback is the complexity of the model and the number of parameters: they use respectively 20 M and 1 M frames on these two datasets. We did not test this model since, to our knowledge, no code was available.
Conclusion
The data-intensive paradigm offers alternative directions to the classical physical approaches for modeling complex natural processes. Our belief is that the cross-fertilization of both paradigms is essential for pushing further the frontier of complex data modeling. Using as an example application a problem of intermediate complexity concerning ocean dynamics, we proposed a principled way to design Deep Learning models using inspiration from physics. The proposed approach can be easily generalized to a class of problems for which the underlying dynamics follow advection-diffusion principles. We have compared the proposed approach to a series of baselines. It is able to reach performance comparable to a state-of-the-art numerical model and clearly outperforms alternative NN models used as baselines.
A Proof of the theorem in section 2
Proof. In the following, bold $\mathbf{x}$ and $\mathbf{y}$ will denote vectors of $\mathbb{R}^2$, while x and y will correspond to the first and second components of $\mathbf{x}$, respectively. Analogously, u and v will correspond to the components of w. The 2D Fourier transform $\mathcal{F}$ of $f : \mathbb{R}^2 \to \mathbb{R}$ is defined as
$$\mathcal{F}(f) = \int_{\mathbb{R}^2} f(\mathbf{x})\, e^{-i\langle \xi, \mathbf{x}\rangle}\, d\mathbf{x} = \int_{\mathbb{R}}\int_{\mathbb{R}} f(x, y)\, e^{-ix\xi_1 - iy\xi_2}\, dx\, dy \qquad (7)$$
We apply the Fourier transform $\mathcal{F}$ to both sides of equation (3). As a consequence of the linearity of the Fourier transform, we can decompose the Fourier transform of the left-hand side into the sum of the transforms of each term. We have three terms: $\frac{\partial I}{\partial t}$, $(w \cdot \nabla)I$ and $-D\nabla^2 I$.
$$\mathcal{F}\Big(\frac{\partial I}{\partial t}\Big) = \int_{\mathbb{R}^2} \frac{\partial I}{\partial t}\, e^{-i\langle \mathbf{x}, \xi\rangle}\, d\mathbf{x} = \frac{\partial}{\partial t}\int_{\mathbb{R}^2} I\, e^{-i\langle \mathbf{x}, \xi\rangle}\, d\mathbf{x} = \frac{\partial \mathcal{F}(I)}{\partial t} \qquad (8)$$

$$\mathcal{F}\big((w \cdot \nabla)I\big) = \int_{\mathbb{R}}\int_{\mathbb{R}} \Big(u\frac{\partial I}{\partial x} + v\frac{\partial I}{\partial y}\Big)\, e^{-ix\xi_1 - iy\xi_2}\, dx\, dy = (i\xi_1 u + i\xi_2 v)\int_{\mathbb{R}}\int_{\mathbb{R}} I\, e^{-ix\xi_1 - iy\xi_2}\, dx\, dy = i\langle \xi, w\rangle\, \mathcal{F}(I) \qquad (9)$$

$$\mathcal{F}(-D\nabla^2 I) = -D\int_{\mathbb{R}}\int_{\mathbb{R}} \Big(\frac{\partial^2 I}{\partial x^2} + \frac{\partial^2 I}{\partial y^2}\Big)\, e^{-ix\xi_1 - iy\xi_2}\, dx\, dy = -\big((i\xi_1)^2 + (i\xi_2)^2\big) D\, \mathcal{F}(I) = D\|\xi\|^2\, \mathcal{F}(I) \qquad (10)$$

where the derivative terms are integrated by parts, using the fact that I vanishes at infinity.
Regrouping all three previously calculated terms, we obtain
$$\frac{\partial \mathcal{F}(I)}{\partial t} + \big(i\langle \xi, w\rangle + D\|\xi\|^2\big)\, \mathcal{F}(I) = 0 \qquad (11)$$
This is a first-order ordinary differential equation of the form $f'(t) + af(t) = 0$, which admits the known solution $f(t) = f(0)\, e^{-at}$. Thus, the solution of (11) is
$$\mathcal{F}(I) = \mathcal{F}(I)_0\, e^{-(i\langle \xi, w\rangle + D\|\xi\|^2)t} = \mathcal{F}(I)_0\, e^{-i\langle \xi, w\rangle t}\, e^{-Dt\|\xi\|^2} \qquad (12)$$
where $\mathcal{F}(I)_0$ denotes the initial condition of the advection-diffusion equation in the frequency domain. In order to obtain a solution of equation (3) in the spatial domain, we calculate the inverse Fourier transform $\mathcal{F}^{-1}$ of (12). The multiplication of two functions in the frequency domain is equivalent to their convolution in the spatial domain, i.e. $\mathcal{F}(f * g) = \mathcal{F}(f)\mathcal{F}(g)$. Hence, the inverses of both factors $\mathcal{F}(I)_0\, e^{-i\langle \xi, w\rangle t}$ and $e^{-Dt\|\xi\|^2}$ can be calculated separately:
Multiplication by a complex exponential in the frequency domain is equivalent to a shift in the spatial domain: $e^{-i\langle \xi, w\rangle}\, \mathcal{F}(f(\mathbf{x})) = \mathcal{F}(f(\mathbf{x} - w))$, for $w \in \mathbb{R}^2$. Thus, for the first factor,
$$\mathcal{F}^{-1}\big(\mathcal{F}(I)_0\, e^{-i\langle \xi, w\rangle t}\big) = I_0(\mathbf{x} - w) \qquad (13)$$
For the second factor, we use the fact that the Fourier transform of a Gaussian function is also a Gaussian function, i.e. $\mathcal{F}\big(\frac{1}{2\pi\sigma^2}\, e^{-\frac{1}{2\sigma^2}\|\mathbf{x}\|^2}\big) = e^{-\frac{1}{2}\sigma^2\|\xi\|^2}$. Identifying $\sigma^2$ with $2Dt$, we have:
$$\mathcal{F}^{-1}\big(e^{-Dt\|\xi\|^2}\big) = \frac{1}{4\pi Dt}\, e^{-\frac{1}{4Dt}\|\mathbf{x}\|^2} \qquad (14)$$
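As a sanity check, the identity of equation (14) can be verified numerically with an FFT; the discretization, grid sizes and variable names below are our choices:

```python
import numpy as np

# Check that the inverse Fourier transform of exp(-D t ||xi||^2) is the
# Gaussian (1 / (4 pi D t)) exp(-||x||^2 / (4 D t)).
D, t, N, L = 0.8, 1.0, 256, 40.0
x = np.fft.fftfreq(N, d=1.0 / N) * (L / N)   # periodic spatial grid
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # matching angular frequencies
XI1, XI2 = np.meshgrid(xi, xi)
spectrum = np.exp(-D * t * (XI1 ** 2 + XI2 ** 2))
# Rescale the discrete ifft to approximate the continuous inverse transform
inv = np.fft.ifft2(spectrum).real * (N / L) ** 2
X1, X2 = np.meshgrid(x, x)
gauss = np.exp(-(X1 ** 2 + X2 ** 2) / (4 * D * t)) / (4 * np.pi * D * t)
assert np.allclose(inv, gauss, atol=1e-8)
```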
As stated above, the solution is the convolution of both previously calculated terms:
$$I(\mathbf{x}, t) = \int_{\mathbb{R}^2} \frac{1}{4\pi Dt}\, e^{-\frac{1}{4Dt}\|\mathbf{x} - w - \mathbf{y}\|^2}\, I_0(\mathbf{y})\, d\mathbf{y} = \int_{\mathbb{R}^2} k(\mathbf{x} - w, \mathbf{y})\, I_0(\mathbf{y})\, d\mathbf{y}$$
which is exactly equation (4).
Figure 4: Sub-regions extracted for the dataset. Test regions are regions 17 to 20.

Figure 5: From top to bottom: target, our model prediction, our model flow, numerical assimilation model, ACNN, ConvLSTM. Data correspond to daily temperatures from January 17 to January 23, 2017.

Figure 6: Output for the 6th to the 9th of May 2016. From top to bottom: target, our model prediction, our model flow.

Figure 7: Output for the 6th to the 9th of January 2016. From top to bottom: target, our model prediction, our model flow.

² A proof of the theorem is provided in the appendix.
³ NEMO data are available at http://marine.copernicus.eu/services-portfolio/access-to-products/?option=com_csw&view=details&product_id=GLOBAL_ANALYSIS_FORECAST_PHY_001_024
⁴ Non-numbered regions correspond to land and not sea on the figure.
Acknowledgments

This work was partially funded by ANR project LOCUST (ANR-15-CE23-0027) and by CLEAR Lab.
Dominique Béréziat and Isabelle Herlin. Coupling dynamic equations and satellite images for modelling ocean surface circulation. Springer International Publishing, Cham, pages 191-205, 2015.
R. L. Bernstein. Sea surface temperature estimation using the NOAA 6 satellite advanced very high resolution radiometer. Journal of Geophysical Research: Oceans, 87(C12):9455-9465, 1982.
N. Cressie and C. K. Wikle. Statistics for Spatio-Temporal Data. Wiley, 2015.
Chelsea Finn, Ian J. Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. CoRR, abs/1605.07157, 2016.
Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. FlowNet: Learning optical flow with convolutional networks. CoRR, abs/1504.06852, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. CoRR, abs/1612.01925, 2016.
Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. CoRR, abs/1506.02025, 2015.
Nal Kalchbrenner, Aäron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. CoRR, abs/1610.00527, 2016.
L'ubor Ladický, SoHyeon Jeong, Barbara Solenthaler, Marc Pollefeys, and Markus Gross. Data-driven fluid simulations using regression forests. ACM Transactions on Graphics, 34(6):199:1-199:9, October 2015.
Haibin Ling and K. Okada. Diffusion distance for histogram comparison. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 1, pages 246-253, June 2006.
G. Madec. NEMO ocean engine. Note du Pôle de modélisation, Institut Pierre-Simon Laplace (IPSL), France, No 27, ISSN 1288-1619, 2008.
Michaël Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. CoRR, abs/1511.05440, 2015.
Viorica Patraucean, Ankur Handa, and Roberto Cipolla. Spatio-temporal video autoencoder with differentiable memory. CoRR, abs/1511.06309, 2015.
Samuel H. Rudy, Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Data-driven discovery of partial differential equations. Science Advances, 3(4), 2017.
Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. CoRR, abs/1506.04214, 2015.
Deqing Sun, Stefan Roth, J. P. Lewis, and Michael J. Black. Learning optical flow. Springer Berlin Heidelberg, pages 83-97, 2008.
Jonathan Tompson, Kristofer Schlachter, Pablo Sprechmann, and Ken Perlin. Accelerating Eulerian fluid simulation with convolutional networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3424-3433. PMLR, 2017.
Yannick Trémolet. Accounting for an imperfect model in 4D-Var. Quarterly Journal of the Royal Meteorological Society, 132(621):2483-2504, 2006.
Geoffrey K. Vallis. Atmospheric and Oceanic Fluid Dynamics. Cambridge University Press, 2017.
Joost van Amersfoort, Anitha Kannan, Marc'Aurelio Ranzato, Arthur Szlam, Du Tran, and Soumith Chintala. Transformation-based models of video sequences. CoRR, abs/1701.08435, 2017.
Jason J. Yu, Adam W. Harley, and Konstantinos G. Derpanis. Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness. Springer International Publishing, Cham, pages 3-10, 2016.
Yi Zhu, Zhen-Zhong Lan, Shawn D. Newsam, and Alexander G. Hauptmann. Guided optical flow learning. CoRR, abs/1702.02295, 2017. |
220,961,494 | PDE-Driven Spatiotemporal Disentanglement | A recent line of work addresses the problem of predicting high-dimensional spatiotemporal phenomena by leveraging specific tools from the differential equations theory. Following this direction, we propose in this article a novel and general paradigm for this task based on a resolution method for partial differential equations: the separation of variables. This inspiration allows to introduce a dynamical interpretation of spatiotemporal disentanglement. It induces a simple and principled model based on learning disentangled spatial and temporal representations of a phenomenon to accurately predict future observations. We experimentally demonstrate the performance and broad applicability of our method against prior state-of-the-art models on physical and synthetic video datasets. * Equal contribution.Preprint. Under review. | [
214002473,
11758569,
2808403,
24069181,
3566136,
14124313,
3297437
] | PDE-Driven Spatiotemporal Disentanglement
Jérémie Donà [email protected]
Jean-Yves Franceschi [email protected]
Sylvain Lamprier [email protected]
Patrick Gallinari [email protected]
Sorbonne Université
CNRS
LIP6, F-75005ParisFrance
Sorbonne Université
CNRS
LIP6, F-75005ParisFrance
Sorbonne Université
CNRS
LIP6, F-75005ParisFrance
Criteo AI Lab
Sorbonne Université
CNRS
LIP6, F-75005Paris, ParisFrance, France
PDE-Driven Spatiotemporal Disentanglement
A recent line of work addresses the problem of predicting high-dimensional spatiotemporal phenomena by leveraging specific tools from the differential equations theory. Following this direction, we propose in this article a novel and general paradigm for this task based on a resolution method for partial differential equations: the separation of variables. This inspiration allows to introduce a dynamical interpretation of spatiotemporal disentanglement. It induces a simple and principled model based on learning disentangled spatial and temporal representations of a phenomenon to accurately predict future observations. We experimentally demonstrate the performance and broad applicability of our method against prior state-of-the-art models on physical and synthetic video datasets. * Equal contribution.Preprint. Under review.
Introduction
The interest of the machine learning community in physical phenomena has substantially grown for the last few years (Shi et al., 2015;Long et al., 2018;Greydanus et al., 2019). In particular, an increasing amount of works studies the challenging problem of modeling the evolution of dynamical systems, with applications in sensible domains like climate or health science, making the understanding of physical phenomena a key challenge in machine learning. To this end, the community has successfully leveraged the formalism of dynamical systems and their associated differential formulation as powerful tools to specifically design efficient prediction models. In this work, we aim at studying this prediction problem with a principled and general approach, through the prism of Partial Differential Equations (PDEs), with a focus on learning spatiotemporal disentangled representations.
Prediction via spatiotemporal disentanglement was first studied in video prediction works, in order to separate static and dynamic information (Denton & Birodkar, 2017) for prediction and interpretability purposes. Existing models are particularly complex, involving either adversarial losses or variational inference. Furthermore, their reliance on Recurrent Neural Networks (RNNs) hinders their ability to model spatiotemporal phenomena (Yıldız et al., 2019;Ayed et al., 2020;Franceschi et al., 2020). Our proposition addresses these shortcomings with a simplified and improved model by grounding spatiotemporal disentanglement in the PDE formalism.
Spatiotemporal phenomena obey physical laws, such as the conservation of energy, that lead to describing the evolution of the system through PDEs. Practical examples include the conservation of energy for physical systems (Hamilton, 1835), or the equation describing constant illumination in a scene for videos (Horn & Schunck, 1981), which has had a longstanding impact in computer vision through optical flow methods (Finn et al., 2016;Dosovitskiy et al., 2015). We propose to model the evolution of partially observed spatiotemporal phenomena with unknown dynamics by leveraging a formal method for the analytical resolution of PDEs: the functional separation of variables (Miller, 1988). Our framework formulates spatiotemporal disentanglement for prediction as learning a separable solution, where spatial and dynamic information are represented in separate variables. Besides offering a novel interpretation of spatiotemporal disentanglement, it confers simplicity and performance compared to existing methods: disentanglement is achieved through the sole combination of a prediction objective with regularization penalties, and the temporal dynamics is defined by a learned Ordinary Differential Equation (ODE). We experimentally demonstrate the applicability, disentanglement capacity, and forecasting performance of the proposed model on various spatiotemporal phenomena involving standard physical processes and synthetic video datasets against prior state-of-the-art models.
Related Work
Our contribution deals with two main directions of research: spatiotemporal disentanglement and the coupling of neural networks and PDEs.
Spatiotemporal disentanglement. Disentangling factors of variations is an essential representation learning problem (Bengio et al., 2013). Its cardinal formulation for static data has been extensively studied, with state-of-the-art solutions, studied by Locatello et al. (2019), being essentially based on Variational Autoencoders (VAEs) (Kingma & Welling, 2014;Rezende et al., 2014). As for sequential data, several disentanglement notions have been formulated, ranging from distinguishing objects in a video (Hsieh et al., 2018;van Steenkiste et al., 2018), to separating and modeling multi-scale dynamics (Hsu et al., 2017;Yingzhen & Mandt, 2018).
We focus in this work on the dissociation of the dynamics and visual aspects for spatiotemporal data. Even in this case, dissociation can take multiple forms. Examples in the video generation community include decoupling the foreground and background when generating videos (Vondrick et al., 2016), constructing structured frame representations (Villegas et al., 2017b;Minderer et al., 2019;Liu et al., 2019), extracting physical dynamics (Le Guen & Thome, 2020), or latent modeling of dynamics in a state-space manner (Franceschi et al., 2020). Closer to our work, Denton & Birodkar (2017), Villegas et al. (2017a) and Hsieh et al. (2018) introduced in their video prediction models explicit latent disentanglement of static and dynamic information obtained using adversarial losses (Goodfellow et al., 2014) or VAEs. Disentanglement has also been introduced in more restrictive models relying on data-specific assumptions (Kosiorek et al., 2018;Jaques et al., 2020), and in video generation (Tulyakov et al., 2018). We aim in this work at grounding and improving spatiotemporal disentanglement with more adapted inductive biases, as suggested by Locatello et al. (2019), by introducing a paradigm leveraging the functional separation of variables resolution method of PDEs.
Spatiotemporal prediction and PDE-based neural network models. An increasing number of works combining neural networks and differential equations for spatiotemporal forecasting have been produced for the last few years. Some of them show substantial improvements for the prediction of dynamical systems or videos compared to standard RNNs by defining the dynamics using learned ODEs (Rubanova et al., 2019; Yıldız et al., 2019; Ayed et al., 2020; Le Guen & Thome, 2020), following Chen et al. (2018), or adapting them to stochastic data (Ryder et al., 2018; Li et al., 2020; Franceschi et al., 2020). Most PDE-based spatiotemporal models exploit some prior physical knowledge. It can induce the structure of the prediction function (Brunton et al., 2016; de Avila Belbute-Peres et al., 2018) or specific cost functions, thereby improving model performances. For instance, de Bézenac et al. (2018) shape their prediction function with an advection-diffusion mechanism, and Long et al. (2018, 2019) estimate PDEs and their solutions by learning convolutional filters proven to approximate differential operators. Greydanus et al. (2019), Chen et al. (2020) and Toth et al. (2020) introduce non-regression losses by taking advantage of Hamiltonian mechanics (Hamilton, 1835), while Tompson et al. (2017) and Raissi et al. (2020) combine physically inspired constraints and structural priors for fluid dynamics prediction. Our work deepens this literature by establishing a novel link between a resolution method for PDEs and spatiotemporal disentanglement, and thereby introducing a data-agnostic model leveraging any static information in observed phenomena.
Background: Separation of Variables
Analytically or numerically solving high-dimensional PDEs is a difficult problem (Bungartz & Griebel, 2004). Given a decomposition of the solution, e.g., a simple combination of lower-dimensional functions, it consists in reducing the PDE to equivalent simpler differential equations, thus simplifying its resolution.
Simple Case Study
Let us introduce the idea through a standard application of this technique, with proofs in Appendix A.1, on the one-dimensional heat diffusion problem (Fourier, 1822), e.g., a bar of length L, whose temperature at time t and position x is denoted by u(x, t) and satisfies:
$$\frac{\partial u}{\partial t} = c^2 \frac{\partial^2 u}{\partial x^2}, \qquad u(0, t) = u(L, t) = 0, \qquad u(x, 0) = f(x) \qquad (1)$$
Suppose that a solution u is product-separable, i.e., it can be decomposed as $u(x, t) = u_1(x) \cdot u_2(t)$.
Combined with Equation (1), it leads to $c^2\, u_1''(x) / u_1(x) = u_2'(t) / u_2(t)$. The left- and right-hand sides of this equation are respectively independent from t and x, thus both sides are constant, and solving both resulting ODEs gives solutions of the form, with $\mu \in \mathbb{R}$ and $n \in \mathbb{N}$:
$$u(x, t) = \mu \sin\Big(\frac{n\pi}{L} x\Big) \times \exp\Big(-\Big(\frac{cn\pi}{L}\Big)^2 t\Big) \qquad (2)$$
The superposition principle and the unicity of solutions under smoothness constraints allow then to build the set of solutions of Equation (1) with linear combinations of separable solutions (Le Dret & Lucquin, 2016). Besides this simple example, separation of variables can be more elaborate.
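As a sanity check, the following SymPy snippet verifies symbolically that the separable solution of Equation (2) satisfies the heat equation and the boundary conditions of Equation (1); the symbol names are our choices:

```python
import sympy as sp

x, t, L, c, mu = sp.symbols("x t L c mu", positive=True)
n = sp.symbols("n", positive=True, integer=True)

# Separable solution of Equation (2)
u = mu * sp.sin(n * sp.pi / L * x) * sp.exp(-((c * n * sp.pi / L) ** 2) * t)

# PDE residual of Equation (1) vanishes identically
assert sp.simplify(sp.diff(u, t) - c ** 2 * sp.diff(u, x, 2)) == 0
# Boundary conditions u(0, t) = u(L, t) = 0 hold (sin(n*pi) = 0 for integer n)
assert u.subs(x, 0) == 0 and sp.simplify(u.subs(x, L)) == 0
```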
Functional Separation of Variables
The functional separation of variables (Miller, 1988) generalizes this method. Let u be a function obeying a given arbitrary PDE. The functional variable separation method amounts to finding a parameterization z, a functional U , an entangling function ξ, and representations φ and ψ such that:
$$z = \xi\big(\phi(x), \psi(t)\big), \qquad u(x, t) = U(z) \qquad (3)$$
Trivial choices ξ = u and identity function as U , φ and ψ ensure the validity of this reformulation. Finding suitable φ, ψ, U , and ξ with regards to the initial PDE can facilitate its resolution by inducing separate simpler PDEs on φ, ψ, and U . General results on the existence of separable solutions have indeed been proven (Miller, 1983), even though their unicity highly depends on the initial problem and the choice of functional separation (Polyanin, 2020). Functional separation of variables finds applications in various physics fields, such as reaction-diffusion with non-linear sources or convection-diffusion (Polyanin, 2019;Polyanin & Zhurov, 2020), Hamiltonian physics (Benenti, 1997), or even general relativity (Kalnins et al., 1992).
As an example, consider a refinement of Equation (1) on u along with the change of variable v :
$$\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = \chi \frac{\partial^2 u}{\partial x^2}, \qquad v(x, t) = u(x, t)\, e^{-\alpha x}\, e^{-\beta t} \qquad (4)$$
A proper choice of constants α and β makes v satisfy Equation (1)'s PDE, which is solvable via separation of variables; see Appendix A.2 for details. Non-linear generalizations of Equations (1) and (4) also find solutions using the functional separation of variables, with for instance:
$$\frac{\partial u}{\partial t} = \nu(u) \frac{\partial^2 u}{\partial x^2} + Q(x, u) \frac{\partial u}{\partial x} + f(x, u) \qquad (5)$$
for which Jia et al. (2008) exhibit conditions for the existence of solutions to this equation decomposed as follows:
$$z = \phi(x) + \psi(t), \qquad u(x, t) = U(z) \qquad (6)$$
The functional decomposition of Equation (3) generalizes the separability defined in Section 3.1, as addition and product separability are recoverable by setting, respectively, U = id and U = exp.
We see reparameterizations such as Equation (6) as changes of coordinates inducing a natural spatiotemporal disentanglement, and introduce in the following a relaxation of this general method.
Proposed Method
We propose to model spatiotemporal phenomena using the functional variable separation formalism. We first describe our notations and then derive a principled model and constraints from this method.
Problem Formulation Through Separation of Variables
We consider a distribution P of observed spatiotemporal trajectories and corresponding observation samples $v = (v_{t_0}, v_{t_0+\Delta t}, \dots, v_{t_1})$, with $v_t \in \mathcal{V} \subseteq \mathbb{R}^m$ and $t_1 = t_0 + \nu\Delta t$. Each sequence $v \sim P$ corresponds to an observation of a dynamical phenomenon, assumed to be described by a hidden functional $u_v$ (also denoted by u for the sake of simplicity) of space coordinates $x \in \mathcal{X} \subseteq \mathbb{R}^s$ and time t that characterizes the trajectories. More precisely, $u_v$ describes an unobserved continuous dynamics, and v corresponds to instantaneous discrete spatial measurements associated to this dynamics. Therefore, we consider that $v_t$ results from a time-independent function $\zeta$ of the mapping $u_v(\cdot, t)$.
For example, v might consist of temperatures measured at some points of the sea surface, while $u_v$ would be the ocean circulation model. v provides partial information about $u_v$ and is a function (e.g. a projection) of the full dynamics. We seek to learn a model which, when conditioned on prior observations, can predict future observations.
To this end, we posit that the state u of each observed trajectory v is driven by a hidden common PDE, shared among all trajectories; we discuss this assumption in details in Appendix C.1. Learning such PDE and its solutions would then allow to model observed trajectories v. We propose to do so by relying on the functional separation of variables of Equation (3), in order to leverage a potential separability of the hidden PDE. Therefore, analogously to Equation (3), we propose to formulate the problem as learning observation-constrained φ, ψ and U , as well as ξ and ζ, such that:
$$z = \xi\big(\phi(x), \psi(t)\big), \qquad u(x, t) = U(z), \qquad v_t = \zeta\big(u(\cdot, t)\big) \qquad (7)$$
with φ and ψ allowing to disentangle the prediction problem. As with the formalism of the functional separation of variables, this amounts to learning a spatial ODE on φ, a temporal ODE on ψ, and a PDE on U , as well as their respective solutions.
Fundamental Limits and Relaxation
However, directly learning u is a restrictive choice, as it depends on the system coordinates. Indeed, learning explicit PDE solutions taking as input space and time coordinates, like Sirignano & Spiliopoulos (2018) and Raissi (2018), has major drawbacks: it requires to deal with the spatial coordinate system and to have prior knowledge about the involved PDEs, which may be unknown for complex data such as in climate modeling. We choose not to make such strong assumptions in order to maintain the generality of the proposed approach.
We overcome these issues by, instead, encoding the unknown spatial coordinate system in a spatial representation, and thus implicitly learn u by directly modeling sequences of observations thanks to representation learning. Indeed, Equation (7) induces that these spatial coordinates, hence the explicit resolution of PDEs on u or U , can be ignored, as it amounts to learning φ, ψ and D such that:
$$v_t = (\zeta \circ U \circ \xi)\big(\phi(\cdot), \psi(t)\big) = D\big(\phi, \psi(t)\big) \qquad (8)$$
In order to manipulate functionals φ and ψ in practice, we respectively introduce learnable time-invariant and time-dependent representations of φ and ψ, denoted by S and T, such that:
$$\phi \equiv S \in \mathcal{S} \subseteq \mathbb{R}^d, \qquad \psi \equiv T : t \mapsto T_t \in \mathcal{T} \subseteq \mathbb{R}^p \qquad (9)$$
where the dependence of ψ ≡ T on time t will be modeled using a temporal ODE following the separation of variables, and the function φ, and consequently its spatial ODE, are encoded into a vectorial representation S. Besides their separation of variables basis, the purpose of S and T is to capture spatial and motion information of the data. For instance, S could encode static information such as objects appearance, while T typically contains motion variables.
Parameterization of the Functional Variable Separation
S and $T_{t_0}$, because of their dependence on v in Equation (9), are inferred from an observation history, or conditioning frames, $V_\tau(t_0)$, where $V_\tau(t) = (v_t, v_{t+\Delta t}, \dots, v_{t+\tau\Delta t})$, using respectively encoder networks $E_S$ and $E_T$. We parameterize D of Equation (8) as a neural network that acts on both S and $T_t$, and outputs the estimated observation $\hat{v}_t = D(S, T_t)$. Unless specified otherwise, S and $T_t$ are fed concatenated into D, which then learns the parameterization $\xi$ of their combination.

Figure 1: Computational graph of the proposed model. $E_S$ and $E_T$ take contiguous observations as input; time invariance is enforced on S; the evolution of $T_t$ is modeled with an ODE and is constrained to coincide with $E_T$; $T_{t_0}$ is regularized; forecasting equates to decoding from S and $T_t$.
Temporal ODE on ψ ≡ T
We model the evolution of T t , thereby the dynamics of our system, with a first-order ODE:
$$\frac{\partial T_t}{\partial t} = f(T_t) \iff T_t = T_{t_0} + \int_{t_0}^{t} f(T_{t'})\, dt' \qquad (10)$$
This is in accordance with the separation of variables method, which induces an ODE on ψ. Note that the first-order ODE assumption can be taken without loss of generality, since any ODE is equivalent to a higher-dimensional first-order ODE. Therefore, since $T_t$ is multi-dimensional, it can model complex interactions between system variables. Following Chen et al. (2018), f is implemented by a neural network and Equation (10) is solved with an ODE resolution scheme. Suppose initial ODE conditions S and $T_{t_0}$ have been computed with $E_S$ and $E_T$. This leads to the following simple forecasting scheme, enforced by the corresponding regression loss:
$$\hat{v}_t = D\Big(S,\, T_{t_0} + \int_{t_0}^{t} f(T_{t'})\, dt'\Big), \qquad \mathcal{L}_{\mathrm{pred}} = \frac{1}{\nu + 1} \sum_{i=0}^{\nu} \frac{1}{m} \big\|\hat{v}_{t_0+i\Delta t} - v_{t_0+i\Delta t}\big\|_2^2 \qquad (11)$$
where $\nu + 1$ is the number of observations, and m is the dimension of the observed variables v.
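A minimal sketch of this forecasting scheme in PyTorch, assuming the torchdiffeq solver accompanying Chen et al. (2018); the encoder, decoder and dynamics networks are left abstract, and all names and tensor layouts are placeholders:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed ODE solver (Chen et al., 2018)

class Forecaster(nn.Module):
    """Forecasting scheme of Equation (11): encode (S, T_t0), integrate
    dT/dt = f(T), and decode each (S, T_t) pair into an observation."""

    def __init__(self, E_S, E_T, f, D):
        super().__init__()
        self.E_S, self.E_T, self.f, self.D = E_S, E_T, f, D

    def forward(self, history, times):
        # history: conditioning frames V_tau(t0), shape (B, tau + 1, m)
        S = self.E_S(history)   # time-invariant spatial representation
        T0 = self.E_T(history)  # initial condition T_{t0}
        T = odeint(lambda t, T_t: self.f(T_t), T0, times)  # (len(times), B, p)
        return torch.stack([self.D(torch.cat([S, T_t], dim=-1)) for T_t in T])
```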
Equation (11) ensures that the evolution of T is coherent with the observations; we now need to enforce its consistency with $E_T$. Indeed, the dynamics of $T_t$ is modeled by Equation (10), while only its initial condition $T_{t_0}$ is computed with $E_T$. However, there is no guarantee that $T_t$, computed via integration, matches $E_T(V_\tau(t))$ at any other time t, while they should in principle coincide. We introduce the following autoencoding constraint aiming at mitigating their potential divergence, thereby stabilizing the evolution of T:
L AE = 1 m D S, E T V τ (t 0 + i∆t) − v t 2 2 , i ∼ U 0, ν − τ .(12)
4.5 Spatial ODE on φ ≡ S

As indicated hereinabove, the spatial ODE on φ is assumed to be encoded into S. Nonetheless, since S is inferred from an observation history, the time-independence property on S is de facto relaxed; thus, we need to enforce it explicitly. Unlike Denton & Birodkar (2017), who penalize the squared difference between two content representations taken at random times, we adopt a simpler PDE-motivated approach. Time independence implies:

$$\frac{\partial E_S(V_\tau(t))}{\partial t} = 0. \tag{13}$$

However, computing this derivative in practice is complex and costly; see Appendix B for more details. Moreover, observation histories may not convey identical spatial information (for example, when an object conceals another for the whole history period); thus, directly minimizing this derivative may hinder performance. Therefore, we relax this constraint thanks to a lower bound on the integral of the temporal derivatives of E_S, obtained with the Cauchy-Schwarz inequality:

$$\int_{t_0}^{t_1-\tau\Delta t} \left\| \frac{\partial E_S(V_\tau(t))}{\partial t} \right\|_2^2 \mathrm{d}t \;\geq\; \left\| \int_{t_0}^{t_1-\tau\Delta t} \frac{\partial E_S(V_\tau(t))}{\partial t} \,\mathrm{d}t \right\|_2^2. \tag{14}$$

Thus, we only minimize the evolution of E_S(V_τ(t)) between two distant time steps by penalizing the right-hand side of Equation (14), which lower-bounds the left-hand side up to a positive multiplicative constant. With d the dimension of S:

$$\mathcal{L}_{\mathrm{reg}}^{S} = \frac{1}{d} \left\| E_S\big(V_\tau(t_0)\big) - E_S\big(V_\tau(t_1-\tau\Delta t)\big) \right\|_2^2. \tag{15}$$
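In code, the penalty of Equation (15) reduces to a one-line sketch, where S_first and S_last are assumed to be the static codes of the first and last observation histories:

```python
def s_invariance_loss(S_first, S_last):
    """L^S_reg of Equation (15), with S_first = E_S(V_tau(t_0)) and
    S_last = E_S(V_tau(t_1 - tau * dt)): mean squared drift of the static code."""
    return (S_first - S_last).pow(2).mean()  # mean over the d dimensions of S
```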
4.6 Spatiotemporal Disentanglement
Abstracting the spatial ODE into a generic representation S leads, without additional constraints, to an underconstrained problem where spatiotemporal disentanglement cannot be guaranteed. Indeed, E_S could be set to zero without breaking any prior constraint, because static information is not prevented from being encoded into T. Accordingly, the information in S and T needs to be segmented.

Thanks to the design of our model, it suffices to ensure that S and T are disentangled at the initial time t_0 for them to be disentangled at all times t. Indeed, the mutual information between two variables is preserved by invertible transformations. Equation (10) is an ODE and f, as a neural network, is Lipschitz-continuous, so the flow T_{t_0} → T_t is invertible. Therefore, disentanglement between S and T_t, characterized by a low mutual information between both variables, is preserved through time; see Appendix C for a detailed discussion. We thus only constrain the quantity of information in T_{t_0}, using a Gaussian prior to encourage it to contain only the necessary dynamic information:

$$\mathcal{L}_{\mathrm{reg}}^{T} = \frac{1}{p} \left\| T_{t_0} \right\|_2^2 = \frac{1}{p} \left\| E_T\big(V_\tau(t_0)\big) \right\|_2^2. \tag{16}$$
4.7 Loss Function
The global loss to be minimized is a linear combination of Equations (11), (12), (15) and (16), as illustrated in Figure 1:
$$\mathcal{L}(v) = \mathbb{E}_{v \sim P} \Big[ \lambda_{\mathrm{pred}} \cdot \mathcal{L}_{\mathrm{pred}} + \lambda_{\mathrm{AE}} \cdot \mathcal{L}_{\mathrm{AE}} + \lambda_{\mathrm{reg}}^{S} \cdot \mathcal{L}_{\mathrm{reg}}^{S} + \lambda_{\mathrm{reg}}^{T} \cdot \mathcal{L}_{\mathrm{reg}}^{T} \Big]. \tag{17}$$
In the following, we conventionally set ∆t = 1. Note that the presented approach could be generalized to irregularly sampled observation times thanks to the dedicated literature (Rubanova et al., 2019), but this is out of the scope of this paper.
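Putting the pieces together, a sketch of the objective of Equation (17), reusing the SeparatedModel and prediction_loss helpers sketched above; the coefficients are those reported for Moving MNIST in Appendix E.3, and all other names are assumptions of this sketch:

```python
import random
import torch.nn.functional as F

def total_loss(model, frames, tau=4):
    """Sketch of Equation (17) on one batch of sequences; frames has shape
    (batch, nu + 1, obs_dim). Coefficients follow Appendix E.3 (Moving MNIST)."""
    histories = [frames[:, i:i + tau + 1] for i in range(frames.size(1) - tau)]
    S, T_t0 = model.encode(histories[0])
    targets = [frames[:, t] for t in range(frames.size(1))]
    l_pred = prediction_loss(model, S, T_t0, targets)              # Eq. (11)
    i = random.randrange(len(histories))                           # i ~ U({0, ..., nu - tau})
    l_ae = F.mse_loss(model.decode(S, model.E_T(histories[i].flatten(1))),
                      targets[i])                                  # Eq. (12)
    l_s = (S - model.E_S(histories[-1].flatten(1))).pow(2).mean()  # Eq. (15)
    l_t = T_t0.pow(2).sum(-1).mean()                               # ||T_{t_0}||^2 of Eq. (16), without 1/p
    # lambda^T_reg * L^T_reg = (p/2 * 1e-3) * (1/p) ||T_{t_0}||^2 = 0.5e-3 * ||T_{t_0}||^2
    return 45 * l_pred + 10 * l_ae + 45 * l_s + 0.5e-3 * l_t
```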
Experiments
We describe in this section the main experimental results of our model on three physical datasets and a synthetic video prediction dataset, briefly presented here and in more detail in Appendix D.[2] We demonstrate the relevance of our model with ablation studies, and its performance by comparing it with more complex state-of-the-art models. We refer to Appendix F for more experiments and prediction examples, and to Appendix E for training details.
Physical Datasets: Wave Equation and Sea Surface Temperature
We first investigate two toy dynamical systems and a real-world dataset in order to show the advantage of PDE-driven spatiotemporal disentanglement for forecasting.
We first rely on the wave equation, occurring for example in acoustics or electromagnetism, with a source term like Saha et al. (2020), to produce the toy dataset WaveEq, consisting of 64 × 64 normalized images of the physical process. We additionally build the WaveEq-100 dataset by extracting 100 pixels, chosen uniformly at random and shared among sequences, from WaveEq frames; this experimental setting can be thought of as measurements from sensors partially observing the phenomenon. In both cases, τ = 4 and ν = 24. Our model is also tested on the real-world dataset SST, derived from the data assimilation engine NEMO (Madec, 2008) and introduced by de Bézenac et al. (2018), consisting of 64 × 64 frames showing the evolution of the sea surface temperature. Modeling its evolution is particularly challenging as its dynamics are highly non-linear, chaotic, and involve several unobserved quantities (e.g., forcing terms). In this case, τ = 3 and ν = 9. We compare our model on these three datasets to a version of this model with S removed and integrated into T, thus also removing L^S_reg and L^T_reg. We additionally include PKnl (de Bézenac et al., 2018), a model specifically designed for SST, in the comparison. Results are compiled in Table 1 for different forecast horizons, and an example of prediction is depicted in Figure 2.

On these three datasets, our model produces more accurate long-term predictions with S than without it. This indicates that learning an invariant component facilitates training and improves generalization on physical datasets. The influence of S can be observed in Figure 2 (swap row), where the S of a given sequence is replaced by one extracted from another sequence, changing the aspect of the prediction. We provide in Appendix F further samples showing the influence of S on the prediction. Even though there is no evidence of separability in SST, our algorithm trained with a time-invariant component takes advantage of this feature on both tested forecast horizons. Indeed, it outperforms PKnl despite the data-specific structure of the latter, whereas removing the static component decreases performance below that of PKnl.
A Synthetic Video Dataset: Moving MNIST
We also assess the prediction and disentanglement performance of our model on the Moving MNIST dataset (Srivastava et al., 2015), involving MNIST digits (LeCun et al., 1998) bouncing over 64 × 64 frame borders, with τ = 4 and ν = 14. We perform a full ablation study of our model, and compare it to DrNet (Denton & Birodkar, 2017) and DDPAE (Hsieh et al., 2018), which are state-of-the-art spatiotemporally disentangled prediction methods on Moving MNIST leveraging no restrictive data-specific priors. Note that DrNet and DDPAE use powerful machine learning techniques, with the former based on adversarial losses and the latter on complex VAEs.

Results reported in Table 2 and illustrated in Figure 3 correspond to two tasks: prediction and disentanglement, at both short- and long-term horizons. Disentanglement is evaluated via content swapping, which consists in replacing the content representation of a sequence by the one of another sequence, which should result, for a perfectly disentangled model, in swapping the digits of both sequences. This is done by taking advantage of the synthetic nature of this dataset, which allows us to implement the ground-truth content swap and compare it to the generated swaps of the model. Performances are assessed by comparing to the ground truth using standard metrics (Denton & Fergus, 2018): Peak Signal-to-Noise Ratio (PSNR, higher is better) and Structured Similarity (SSIM, higher is better).
Both qualitative and quantitative results show the advantage of our model against all baselines, despite its simplicity compared to DrNet and DDPAE. DDPAE produces accurate predictions on a short horizon but does not extrapolate well to long-term digits movements, with altered shapes of digits. DrNet fails to even generate sharp digits. Because the content variable is fixed all along the forecast, this shows that both baselines have difficulties separating content and motion. Our model instead presents consistent samples at t + 95, even in the content swap setting, showing that it better separates motion from content than prior methods. Accordingly, our model significantly outperforms both of these baselines in terms of prediction and disentanglement, especially at a long-term horizon.
Ablation studies confirm that this advantage is due to the constraints inspired by the separation of variables. Indeed, the model fails to train correctly without S due to numerical instabilities, and removing any non-forecasting constraint of the training loss substantially reduces performance. In particular, the invariance loss on the static component and the regularization of the initial condition T_{t_0} are essential, as their absence hinders both prediction and disentanglement. Removing the autoencoding constraint affects the prediction accuracy in a minor measure, still allowing state-of-the-art performance compared to other baselines. This observation shows that it suffices to implement an ℓ2 constraint on the first time step of the sequence only to enforce disentanglement, as described in Section 4.6. Nonetheless, the autoencoding constraint significantly strengthens our performance at a long-term horizon, confirming its benefits to the stabilization of the dynamics.
Conclusion
We introduce a novel method for spatiotemporal prediction inspired by the separation of variables PDE resolution technique, involving only time invariance and regression constraints. This inspiration induces simple constraints ensuring the separation of spatial and temporal information. We experimentally demonstrate the benefits of the proposed model, which, despite its simplicity, outperforms prior state-of-the-art methods on physical and synthetic video datasets. We believe that this work, which provides a dynamical interpretation of spatiotemporal disentanglement and implements it in a simple method, could serve as the basis of more complex models further leveraging the PDE formalism. Other directions for future work could be extending the model with more involved tools such as VAEs to improve its performance, or adapting it to the prediction of natural stochastic videos (Denton & Fergus, 2018).
Broader Impact
Our work introduces a spatiotemporal disentanglement method for forecasting. Besides theoretical motivations, our method was designed in order to improve interpretability in machine learning prediction systems. Indeed, when using deep neural networks as predictive algorithms, exploring latent representations and evaluating their impact on the output is challenging, due to the complex geometry of the latent space. We believe that our work is a step forward in this direction. Moreover, our method provides a framework to automatically learn invariance in physical systems. The modeling of physical systems using neural networks gains momentum in the machine learning community and has potential applications in climatology and the evaluation of climate change, provided that the work results from the cooperation of experts from both fields.

However, the choice of studied datasets to learn spatiotemporal disentanglement should be carefully considered under its potential consequences. Even though our experiments only consider synthetic and physical data, separating motion from content in videos could lead to potential manipulations made possible by such disentanglement, with for instance deepfakes (Citron & Chesney, 2018), raising the broader question of the threats of machine learning technologies to privacy and society. While new advances emerge for their detection (Dolhansky et al., 2019; Güera & Delp, 2018), only little information on their implementation in mass media platforms is available, further increasing the responsibility of the experimenter in the choice of disentanglement studies.
A Proofs
A.1 Resolution of the Heat Equation
In this section, we succinctly detail a proof of the existence and uniqueness of the solution to the two-dimensional heat equation. It shows that product-separable solutions build the entire solution space for this problem, highlighting our interest in the study of separable solutions.
Existence through separation of variables. Consider the heat equation problem:
$$\frac{\partial u}{\partial t} = c^2 \frac{\partial^2 u}{\partial x^2}, \qquad u(0, t) = u(L, t) = 0, \qquad u(x, 0) = f(x). \tag{18}$$

Assuming product separability of u, with u(x, t) = u_1(x) u_2(t), in Equation (18) gives:

$$c^2 \, \frac{u_1''(x)}{u_1(x)} = \frac{u_2'(t)}{u_2(t)}. \tag{19}$$
Both sides being independent of each other's variable, they are equal to a constant, denoted by −α. If α is negative, solving the right-hand side of Equation (19) results in non-physical solutions with exponentially increasing temperatures, and imposing the boundary conditions of Equation (18) makes this solution collapse to the null trivial solution. Therefore, we consider that α > 0.
Both sides of Equation (19) being equal to a constant leads to a second-order ODE on u_1 and a first-order ODE on u_2, giving the solution shapes, with constants A, B and D:

$$u_1(x) = A \cos\big(\sqrt{\alpha}\,x\big) + B \sin\big(\sqrt{\alpha}\,x\big), \qquad u_2(t) = D \, e^{-\alpha c^2 t}. \tag{20}$$
Link with initial and boundary conditions. Now we link the above equation to the boundary conditions of the problem. Because our separation is multiplicative, we can omit D for non-trivial solutions and set it without loss of generality to 1 (as it only scales the values of A and B). u(0, t) = u(L, t) = 0 with, for all t > 0, u_2(t) ≠ 0, gives:

$$A = 0, \qquad B \, e^{-\alpha c^2 t} \sin\big(\sqrt{\alpha}\,L\big) = 0, \tag{21}$$

which means that, for a non-trivial solution (i.e., B ≠ 0), we have, for a given n ∈ ℕ: √α = nπ/L. We can finally express our solution to the heat equation without initial conditions as:
$$u(x, t) = B \sin\!\left(\frac{n\pi}{L}x\right) \exp\!\left(-\left(\frac{cn\pi}{L}\right)^{2} t\right). \tag{22}$$

Considering the superposition principle, and because the initial problem is homogeneous, all linear combinations of Equation (22) are solutions of the heat equation without initial conditions. Therefore, any function of the following form is a solution of the heat equation without initial conditions:

$$u(x, t) = \sum_{n=0}^{+\infty} B_n \sin\!\left(\frac{n\pi}{L}x\right) \exp\!\left(-\left(\frac{cn\pi}{L}\right)^{2} t\right). \tag{23}$$

Finally, considering the initial condition u(x, 0) = f(x), a Fourier decomposition of f determines all coefficients B_n, showing that, for any initial condition f, there exists a solution to Equation (18) of the form of Equation (23).
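As a quick sanity check of this construction, one can verify numerically that a truncated version of Equation (23) satisfies the PDE of Equation (18); the snippet below is a sketch with arbitrarily chosen coefficients:

```python
import numpy as np

L_dom, c = 1.0, 1.0
B = [0.0, 1.0, 0.5, 0.25]  # arbitrary truncated Fourier coefficients B_n

def u(x, t):
    """Truncated series of Equation (23)."""
    return sum(B[n] * np.sin(n * np.pi * x / L_dom)
               * np.exp(-((c * n * np.pi / L_dom) ** 2) * t)
               for n in range(len(B)))

# Check u_t = c^2 * u_xx at a sample point with central finite differences.
x0, t0, h = 0.3, 0.1, 1e-4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
assert abs(u_t - c ** 2 * u_xx) < 1e-5  # equal up to discretization error
```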
Uniqueness. We present here elements of proof for establishing the uniqueness of the solutions of Equation (18) that belong to C²([0, L] × ℝ₊). Detailed and rigorous proofs are given by Le Dret & Lucquin (2016).

The key element consists in establishing the so-called maximum principle, which states that, for a sufficiently smooth solution, the extremal values of the solution are reached on the boundary of the space and time domains.

For null boundary conditions (as we have here), this means that the norm of the solution u is given by the norm of f (because of the initial condition). Finally, let us consider two smooth solutions U_1 and U_2 of Equation (18). Their difference v = U_1 − U_2 follows the heat equation with null boundary and initial conditions (i.e., v(x, 0) = 0). Because v is as regular as U_1 and U_2, it satisfies the previous fact about the norm of the solutions, i.e., the norm of v equals the norm of its initial condition: v = 0. Therefore, v is null, hence U_1 = U_2, showing the uniqueness of the solutions.

Finally, this shows that solutions of the form of Equation (23) constitute the whole set of smooth solutions of Equation (18).
A.2 Heat Equation with Advection Term (Equation (4))
We give here details for the existence of product-separable solutions to Equation (4):
$$\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = \chi \frac{\partial^2 u}{\partial x^2}, \qquad \text{for } -1 < x < 1 \text{ and } t < T, \; c > 0. \tag{24}$$
Let α, β ∈ R; consider the following change of variables for u:
$$u(x, t) = v(x, t) \, e^{\alpha x + \beta t}. \tag{25}$$
The partial derivatives from Equation (4) can be rewritten as functions of the new variable v:
$$\frac{\partial u}{\partial t} = \frac{\partial v}{\partial t} e^{\alpha x + \beta t} + \beta v \, e^{\alpha x + \beta t} \tag{26}$$
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial x} e^{\alpha x + \beta t} + \alpha v \, e^{\alpha x + \beta t} \tag{27}$$
$$\frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 v}{\partial x^2} e^{\alpha x + \beta t} + 2\alpha \frac{\partial v}{\partial x} e^{\alpha x + \beta t} + \alpha^2 v \, e^{\alpha x + \beta t} \tag{28}$$
Using these expressions in Equation (4) and dividing it by e αx+βt leads to:
$$\frac{\partial v}{\partial t} + \big(\beta + c\alpha - \alpha^2 \chi\big) v + (c - 2\alpha\chi) \frac{\partial v}{\partial x} = \chi \frac{\partial^2 v}{\partial x^2}. \tag{29}$$

α and β, being free parameters, can be set such that:

$$\beta + c\alpha - \alpha^2 \chi = 0, \qquad c - 2\alpha\chi = 0.$$
We then retrieve the standard two-dimensional heat equation of Equation (18) given by:
$$\frac{\partial v}{\partial t} = \chi \frac{\partial^2 v}{\partial x^2}, \tag{30}$$

which is known to have product-separable solutions, as explained in the previous section. This more generally shows that all solutions of Equation (4) can be retrieved from solutions to Equation (18).
B Accessing Time Derivatives of S
While explicitly constraining the time derivative of E_S(V_τ(t)) seems more intuitive than imposing time invariance as explained in Section 4.5, it is a difficult matter in practice. Indeed, E_S takes as input neither the time coordinate t nor the spatial coordinates x, y, as done by Raissi (2018) and Sirignano & Spiliopoulos (2018), which allows these authors to directly estimate their networks' derivatives thanks to automatic differentiation; in our case, E_S rather takes observations as input.
As discussed in Section 4, a possible but costly solution to impose time invariance would be to discretize the left-hand side of Equation (14), so that the following quantity would be minimized:

$$\mathcal{L}^{S}_{\text{first order}} = \frac{1}{\nu - \tau} \sum_{i=1}^{\nu-\tau} \left\| E_S\big(V_\tau(t_0 + i\Delta t)\big) - E_S\big(V_\tau(t_0 + (i-1)\Delta t)\big) \right\|_2^2. \tag{31}$$
Another alternative is the loss introduced by Denton & Birodkar (2017), which instead minimizes the difference between spatial representations taken at two random steps i and j, uniformly sampled in {0, ..., ν − τ}:

$$\mathcal{L}^{S}_{\text{random}} = \left\| E_S\big(V_\tau(t_i)\big) - E_S\big(V_\tau(t_j)\big) \right\|_2^2. \tag{32}$$

However, both alternatives hinder performance, as analyzed in Appendix F, since they implement an overly strong constraint, as explained in Section 4.5.
Another workaround would be to explicitly model the evolution of E_S(V_τ(t)) with respect to time thanks to an integrator and a regression loss, similarly to T. This would give access to an estimate of the evolution of E_S(V_τ(t)) through time, enabling a direct control of the left-hand side of Equation (14). This estimate could be loose and take into account the variation of spatial information due to potential hidden spatial features in the observations, allowing the overly strong penalizing constraint to be relaxed.

To investigate this possibility, we propose to model the evolution of E_S(V_τ(t)) using a residual network, denoted by R_S. Indeed, residual networks have been shown to implement ODE resolution schemes (Lu et al., 2017), thus assimilating their residual blocks to the true time derivatives of the system. The regularization loss on S, previously denoted by L^S_reg, is then replaced by two components. The first component is a regression loss ensuring that our residual network R_S accurately models the evolution of S:

$$\mathcal{L}^{S}_{\mathrm{ODE}} = \left\| R_S\big(E_S(V_\tau(t_0))\big) - E_S\big(V_\tau(t_1 - \tau\Delta t)\big) \right\|_2^2. \tag{33}$$

The second component is the regularization of the residuals. We propose to minimize the ℓ2-norm of the residuals as a proxy to minimize the true time derivative of E_S(V_τ(t)). If r_{S,1}, ..., r_{S,l_S} are the residual blocks of R_S, we define the regularization implementing the left-hand side of Equation (14) as:

$$\mathcal{L}^{S}_{\text{resblock}} = \sum_{h=1}^{l_S} \left\| r_{S,h} \circ \big(\mathrm{id} + r_{S,h-1}\big) \circ \cdots \circ \big(\mathrm{id} + r_{S,1}\big)\big(E_S(V_\tau(t_0))\big) \right\|_2^2. \tag{34}$$
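A sketch of this alternative, assuming R_S is implemented as a plain stack of residual MLP blocks; the class and its dimensions are illustrative:

```python
import torch.nn as nn

class ResidualIntegrator(nn.Module):
    """R_S as a stack of residual blocks id + r_{S,h}, returning both the
    integrated state (for L^S_ODE, Eq. 33) and the accumulated squared norms
    of the residuals (for L^S_resblock, Eq. 34)."""

    def __init__(self, dim, n_blocks, hidden=512):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(n_blocks))

    def forward(self, s):
        residual_norms = 0.0
        for block in self.blocks:
            r = block(s)                                  # r_{S,h} at the current state
            residual_norms = residual_norms + r.pow(2).sum(-1).mean()
            s = s + r                                     # apply id + r_{S,h}
        return s, residual_norms
```

L^S_ODE of Equation (33) is then the squared error between the returned state and E_S(V_τ(t_1 − τΔt)).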
We show in Appendix F that this workaround is a viable alternative, as using such a constraint leads to results that are numerically similar to those of the originally proposed method. Yet, even though it achieves slightly better results, this alternative is computationally less efficient than our method and requires one more hyperparameter to tune (the coefficient in front of L^S_ODE), making its use more complex. It is therefore an interesting option that also provides state-of-the-art results.
C Of Spatiotemporal Disentanglement
C.1 Modeling Spatiotemporal Phenomena with Differential Equations
Besides their increasing popularity for modeling spatiotemporal phenomena (see Section 2), the ability of residual networks to facilitate learning (Haber & Ruthotto, 2017), along with the success of their continuous counterpart (Chen et al., 2018), motivates our choice. Indeed, learning ODEs or discrete approximations such as residual networks has become standard for a variety of tasks such as classification, inpainting, and generative modeling. Consequently, their application to forecasting physical processes and videos is only a natural extension of their already broad applicability discussed in Section 2. Furthermore, they present interesting properties, as detailed below.
C.2 Separation of Variables Preserves the Mutual Information of S and T through Time
C.2.1 Invertible Flow of an ODE
We first highlight that the general ODE Equation (10) admits, according to the Cauchy-Lipschitz theorem, exactly one solution for a given initial condition, since f is implemented with a standard neural network (see Appendix E), making it Lipschitz-continuous. Consequently, the flow of this ODE, denoted by Φ t and defined as:
$$\Phi \colon \mathbb{R} \times \mathbb{R}^p \to \mathbb{R}^p, \qquad (t, T_{t_0}) \mapsto \Phi^t(T_{t_0}) = T_{t_0 + t},$$

is a bijection for all t. Indeed, let T_{t_0} be fixed and t_0, t_1 be two timesteps; thanks to the existence and uniqueness of the solution to the ODE with this initial condition:

$$\Phi^{t_0 + t_1} = \Phi^{t_0} \circ \Phi^{t_1} = \Phi^{t_1} \circ \Phi^{t_0}.$$

Therefore, Φ^t is a bijection and (Φ^t)^{-1} = Φ^{-t}. Moreover, the flow is differentiable if f is continuously differentiable as well, which is not a restrictive assumption when f is implemented by a neural network with differentiable activation functions.
C.2.2 Preservation of Mutual Information by Invertible Mappings
A proof of the following result is given by Kraskov et al. (2004); we indicate below the major steps of the proof. Let X and Y be two random variables with marginal densities µ_X, µ_Y. Let F be a diffeomorphism acting on Y, and Y' = F(Y). If J_F is the determinant of the Jacobian of F, we have:

$$\mu'\big(x, y'\big) = \mu(x, y) \, \big|J_F(y)\big|^{-1}.$$

Then, expressing the mutual information I in integral form, with the change of variables y' = F(y) (F being a diffeomorphism), results in:

$$I(X, Y') = \int_{x, y'} \mu'(x, y') \log \frac{\mu'(x, y')}{\mu_X(x) \, \mu_{Y'}(y')} \,\mathrm{d}x \,\mathrm{d}y' = \int_{x, y} \mu(x, y) \log \frac{\mu(x, y)}{\mu_X(x) \, \mu_Y(y)} \,\mathrm{d}x \,\mathrm{d}y = I(X, Y).$$
C.3 Ensuring Disentanglement at any Time
As noted by Chen et al. (2016) and Achille & Soatto (2018), mutual information I is a key metric to evaluate disentanglement. We show that our model logically conserves the mutual information between S and T through time, thanks to the flow of the learned ODE on T. Indeed, with the above result on the preservation of mutual information by diffeomorphisms, and Φ^t being a diffeomorphism as demonstrated above, we have, for all t and t':

$$I(S, T_t) = I\big(S, \Phi^{t'-t}(T_t)\big) = I(S, T_{t'}). \tag{35}$$

Hence, if S and T_t are disentangled, then so are S and T_{t'}.

The flow Φ^t being discretized in practice, its invertibility can no longer be guaranteed in general. Some numerical schemes, or residual networks with Lipschitz-constrained residual blocks (Behrmann et al., 2019), provide sufficient conditions to concretely reach this invertibility. In our case, we did not observe the need to enforce invertibility. We can also leverage the data processing inequality to show that, for any t ≥ t_0:

$$I(S, T_{t_0}) \geq I(S, T_t), \tag{36}$$
since T t is always a deterministic function of T t0 . Since we constrain the very first T value T t0 (i.e., we do not need to go back in time), there is no imperative need to enforce the invertibility of Φ t in practice: the inequality also implies that, if S and T t0 are disentangled, then so are S and T t for t ≥ t 0 . Nevertheless, should the need to disentangle for t < t 0 appear, the aforementioned mutual information conservation properties could allow, with further practical work to ensure the effective invertibility of Φ t , to still regularize T t0 only. This is, however, out of the scope of this paper.
D Datasets
D.1 WaveEq and WaveEq-100
These datasets are based on the two-dimensional wave equation on a functional w(x, y, t):
$$\frac{\partial^2 w}{\partial t^2} = c^2 \nabla^2 w + f(x, y, t), \tag{37}$$

where ∇² is the Laplacian operator, c denotes the wave celerity, and f is an arbitrary time-dependent source term. It has several applications in physics, modeling a wide range of phenomena ranging from mechanical oscillations to electromagnetism. Note that the homogeneous equation, where f = 0, admits product-separable solutions.
We build the WaveEq dataset by solving Equation (37) for t ∈ [0, 0.298] and x, y ∈ [0, 63]. Sequences are generated using c drawn uniformly at random in [300,400] for each sequence to imitate the propagation of acoustic waves, with initial and Neumann boundary conditions:
$$w(x, y, 0) = w(0, 0, t) = w(32, 32, t) = 0, \tag{38}$$
and, following Saha et al. (2020), we make use of the following source term:
$$f(x, y, t) = \begin{cases} f_0 \, e^{-t / T_0} & \text{if } (x, y) \in B\big((32, 32), 5\big), \\ 0 & \text{otherwise}, \end{cases} \tag{39}$$

with T_0 = 0.05 and f_0 ∼ U([1, 30]). The source term is taken non-null only in a circular central zone, in order to avoid numerical differentiation problems in the case of a punctual source.

We generate 300 sequences of 64 × 64 frames of length 150 from this setting by assimilating each pixel (i, j) ∈ {0, ..., 63}² to a point (x, y) ∈ [0, 63]² and selecting a frame per time interval of size 0.002. This discretization is used to solve Equation (37): its spatial derivatives are estimated with finite differences of order 5 and, once computed, are used in an ODE numerical solver to solve Equation (37) in t, namely the fourth-order Runge-Kutta method with the 3/8 rule (Kutta, 1901; Hairer et al., 1993) and step size 0.001. The data are finally normalized following a min-max [0, 1] scaling per sequence.
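For reference, one step of this scheme reads as follows (a sketch, with deriv standing for the finite-difference evaluation of the right-hand side of Equation (37) rewritten as a first-order system):

```python
def rk4_38_step(deriv, state, t, dt):
    """One step of the fourth-order Runge-Kutta scheme with the 3/8 rule."""
    k1 = deriv(state, t)
    k2 = deriv(state + dt * k1 / 3, t + dt / 3)
    k3 = deriv(state + dt * (-k1 / 3 + k2), t + 2 * dt / 3)
    k4 = deriv(state + dt * (k1 - k2 + k3), t + dt)
    return state + dt * (k1 + 3 * k2 + 3 * k3 + k4) / 8
```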
The dataset is then split into training (240 sequences) and testing (60 sequences) sets. Sequences sampled during training are random chunks of length ν + 1 = 25, including τ + 1 = 5 conditioning frames, of the full-size training sequences. Sequences used during testing are all possible chunks of length τ + 1 + 40 = 45 from the full-size testing sequences.

Finally, WaveEq-100 is created from WaveEq by selecting 100 pixels uniformly at random. The extracted pixels are selected before training and are fixed for both training and testing. Therefore, train and test sequences for WaveEq-100 consist of vectors of size 100 extracted from WaveEq frames. Training and testing sequences are chosen to be the same as those of WaveEq.
D.2 Sea Surface Temperature
SST is composed of sea surface temperatures of the Atlantic Ocean generated using E.U. Copernicus Marine Service Information thanks to the state-of-the-art simulation engine NEMO. The use of a so-called reanalysis procedure implies that these data accurately represent the actual temperature measures. For more information, we refer to the complete description of the data by de Bézenac et al. (2018). The data history of this engine is available online.[3] Unfortunately, due to recent maintenance, the data history is limited to the last three years; prior histories should be manually requested.

The dataset uses daily temperature acquisitions from Thursday 28th December, 2006 to Wednesday 5th April, 2017 of a 481 × 781 zone, from which 29 zones of size 64 × 64 are extracted. We follow the same setting as de Bézenac et al. (2018) by training all models with τ + 1 = 4 conditioning steps and ν − τ = 6 steps to predict, and evaluating them on zones 17 to 20 only. These zones are particularly interesting since they are the places where cold waters meet warm waters, inducing more pronounced motion.

We normalize the data in the same manner as de Bézenac et al. (2018). Each daily acquisition of a zone is first normalized using the mean and standard deviation of the measured temperatures in this zone computed over all days with the same date of the year in the available data (daily climatological normalization). Each zone is then normalized so that the mean and variance over all acquisitions correspond to those of a standard Gaussian distribution. These normalized data are finally fed to the model; the MSE scores reported in Table 1 are computed once the normalization of the data and model predictions is reverted to the original temperature measurement space, in order to compute physically meaningful scores.
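A sketch of this two-step normalization for a single zone, assuming the acquisitions are stored as a float array of shape (days, H, W) together with a day_of_year index; all names are assumptions of this sketch:

```python
import numpy as np

def climatological_normalize(zone_temps, day_of_year):
    """First standardize each daily acquisition by the statistics of all
    acquisitions sharing its date of the year, then rescale the whole zone
    to a standard Gaussian distribution."""
    out = np.empty_like(zone_temps)                    # zone_temps: (days, H, W), float
    for d in np.unique(day_of_year):
        same_date = zone_temps[day_of_year == d]
        out[day_of_year == d] = (same_date - same_date.mean()) / (same_date.std() + 1e-8)
    return (out - out.mean()) / out.std()
```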
Training sequences correspond to randomly selected chunks of length ν = 10 in the first 2987 acquisitions (corresponding to 80% of total acquisitions), and testing sequences to all possible chunks of length ν = 10 in the remaining 747 acquisitions.
D.3 Moving MNIST
This dataset involves two MNIST digits (LeCun et al., 1998) of size 28 × 28 that linearly move within 64 × 64 frames and deterministically bounce against frame borders following reflection laws. We use the modified version of the dataset proposed by Franceschi et al. (2020) instead of the original one (Srivastava et al., 2015). We train all models in the same setting as Denton & Birodkar (2017), with τ + 1 = 5 conditioning frames and ν − τ = 10 frames to predict, and test them to predict either 10 or 95 frames ahead. Training data consist of trajectories of digits from the MNIST training set, randomly generated on the fly during training. Test data are produced by computing a trajectory for each digit of the MNIST testing set and randomly pairwise combining them, thus producing 5000 sequences.

To evaluate disentanglement with content swapping, we report PSNR and SSIM metrics between the swapped sequence produced by our model and a ground truth. However, with two digits in the image, there is an ambiguity as to the order in which the target digits should be swapped in the ground truth. To account for this ambiguity, and thanks to the synthetic nature of the dataset, we instead build two ground-truth sequences, one for each possible digit swap permutation, and report the best metric value between the generated sequence and both ground truths (i.e., we compare to the closest ground truth with respect to the considered metric).
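A sketch of this permutation-robust scoring, assuming a metric function such as PSNR or SSIM (higher is better) and the two possible ground-truth swaps gt_a and gt_b:

```python
def swap_score(pred, gt_a, gt_b, metric):
    """Score a generated swap against both digit-permutation ground truths and
    keep the closest one (for higher-is-better metrics such as PSNR or SSIM)."""
    return max(metric(pred, gt_a), metric(pred, gt_b))
```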
E Training Details
Along with the code, we provide here sufficient details in order to replicate our results.
E.1 Reproduction of PKnl, DrNet and DDPAE
PKnl. We retrained PKnl (de Bézenac et al., 2018) on SST using their official implementation and the same hyperparameters they indicate.
DrNet. We trained DrNet (Denton & Birodkar, 2017) on our version of Moving MNIST using the same hyperparameters originally used for the alternative version of the dataset on which it was originally trained (with digits of different colors). To this end, we reimplemented the official Lua implementation into a Python code in order to train it with a more recent infrastucture.
DDPAE. We trained DDPAE (Hsieh et al., 2018) on our version of Moving MNIST using the official implementation and the same hyperparameters they used for the original version of Moving MNIST.

E.2 Model Specifications

E.2.1 Implementation

We used Python 3.8.1 and PyTorch 1.4.0 (Paszke et al., 2019) to implement our model. Each model was trained on an Nvidia GPU with CUDA 10.1. Training is done with mixed-precision training (Micikevicius et al., 2018) thanks to the Apex library.[4]

E.2.2 Architecture

Combination of S and T. As explained in Section 4, the default choice of combination of S and T as decoder inputs is the concatenation of both vectorial variables: it is generic, and allows the decoder to learn an appropriate combination function ζ as in Equation (7).
Nonetheless, further knowledge of the studied dataset can help narrow the choice of combination functions. Indeed, we choose to multiply S and T before giving them as input to the decoder for both datasets WaveEq and WaveEq-100, given the knowledge of the existence of product-separable solutions to the homogeneous version of the equation (i.e., without source). This shows that it is possible to change the combination function of S and T, and that existing combination functions in the PDE literature could be leveraged for other datasets.

Encoders E_S and E_T, and decoder D. For WaveEq, the encoder and decoder outputs are considered to be vectors; images are thus reshaped before encoding and after decoding to 64 × 64 frames. The encoder is a multilayer perceptron (MLP) with two hidden layers of size 1200 and internal ReLU activation functions. The decoder is an MLP with three hidden layers of size 1200, internal ReLU activation functions, and a final sigmoid activation. The encoder and decoder used for WaveEq-100 are similar to those used for WaveEq, but with two hidden layers each, of respective sizes 2400 and 150.

We used for SST a VGG16 architecture (Simonyan & Zisserman, 2015), mirrored between the encoder and the decoder, complemented with skip connections integrated into S (Ronneberger et al., 2015) from all internal layers of the encoder to the corresponding decoder layers, as also leveraged by de Bézenac et al. (2018) in their PKnl model. For Moving MNIST, the encoder and its mirrored decoder follow the DCGAN discriminator and generator architecture (Radford et al., 2016), with an additional sigmoid activation after the very last layer of the decoder; this DCGAN encoder and decoder architecture is also used by DrNet and DDPAE. We highlight that for both SST and Moving MNIST we leveraged architectural choices that are also used in the compared baselines, enabling fair comparisons.

Since the encoders E_S and E_T take multiple observations as input, we combine these observations either by concatenating them, for the vectorial observations of WaveEq-100, or by grouping them along the color channel dimension for the other datasets, where observations are frames. Each encoder and decoder was initialized from a normal distribution with standard deviation 0.02.
ODE solver. Following the recent line of work assimilating residual networks (He et al., 2016) with ODE solvers (Lu et al., 2017; Chen et al., 2018), we use a residual network as an integrator for Equation (10). This residual network is composed of a given number K of residual blocks, each block i ∈ {1, ..., K} implementing the application id + g_i, where g_i is an MLP with two hidden layers of size H and internal ReLU activation functions; a sketch of this construction is given after the parameter lists below. The parameter values for each dataset are:
• WaveEq and WaveEq-100: K = 3 and H = 512;
• SST: K = 5 and H = 1024;
• Moving MNIST: K = 1 and H = 512.
Each MLP is orthogonally initialized with the following gain for each dataset:
• WaveEq, WaveEq-100 and SST: 0.71;
• Moving MNIST: 1.41.
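Under these choices, the integrator could be instantiated as in the following sketch; the helper and its names are illustrative assumptions:

```python
import torch.nn as nn

def make_ode_solver(dim, n_blocks, hidden, gain):
    """Residual-network integrator: n_blocks blocks of id + g_i, where g_i is
    an MLP with two hidden layers of size hidden, orthogonally initialized."""
    blocks = []
    for _ in range(n_blocks):
        g = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, dim))
        for m in g:
            if isinstance(m, nn.Linear):
                nn.init.orthogonal_(m.weight, gain=gain)
        blocks.append(g)
    return nn.ModuleList(blocks)

# For instance, for Moving MNIST (T of dimension 20, K = 1, H = 512, gain 1.41):
moving_mnist_solver = make_ode_solver(dim=20, n_blocks=1, hidden=512, gain=1.41)
```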
Latent variable sizes. S and T have the following vectorial dimensions for each dataset:
• WaveEq and WaveEq-100: 32;
• SST: 256;
• Moving MNIST: respectively, 128 and 20.
Note that, in order to perform fair comparisons, the size of T for baselines without the static component S is chosen to be the sum of the vectorial sizes of S and T in the full model. The skip connections of S for SST cannot, however, be integrated into T, as the evolution of T is only modeled in the latent space, and it is out of the scope of this paper to leverage low-level dynamics.
E.3 Optimization
Optimization is performed using the Adam optimizer (Kingma & Ba, 2015) with initial learning rate 4 × 10 −4 for WaveEq, WaveEq-100 and Moving MNIST and 2 × 10 −4 for SST, and with decay rates β 1 = 0.9 and β 2 = 0.99.
Loss function.
The chosen coefficient values of λ_pred, λ_AE, λ^S_reg, and λ^T_reg are the following:
• λ pred = 45;
• λ AE = 10 for SST and Moving MNIST, and 1 for WaveEq and WaveEq-100;
• λ^S_reg = 45 for WaveEq, WaveEq-100 and Moving MNIST, and 1500 for SST;
• λ^T_reg = (1/2) p × 10⁻³, where p is the dimension of T.
The batch size is chosen to be 128 for WaveEq, WaveEq-100 and Moving MNIST, and 100 for SST.
Training length. The number of training epochs for each dataset is:
• WaveEq and WaveEq-100: 250 epochs;
• SST: 200 epochs for the full model, and 75 epochs for the model without S (the latter tending to overfit for higher number of epochs);
• Moving MNIST: 800 epochs, with an epoch corresponding to 200 000 trajectories (the dataset being infinite), with the learning rate successively divided by 2 at epochs 300, 400, 500, 600, and 700.
These correspond to the following approximate training times on an Nvidia Titan V GPU:
• WaveEq and WaveEq-100: two hours;
• SST: a day;
• Moving MNIST: two and a half days.
E.4 Prediction Offset for SST
Using the formalism of our work, our algorithm trains to reconstruct v = (v_{t_0}, ..., v_{t_1}) from the conditioning frames V_τ(t_0). Therefore, it first learns to reconstruct V_τ(t_0).

However, the evolution of the SST data is chaotic, and predicting beyond a horizon of 6 with coherent and sharp estimations is challenging. Therefore, for the SST dataset only, we chose to supervise the prediction from t = t_0 + (τ + 1)∆t, i.e., our algorithm trains to forecast v_{t_0+(τ+1)∆t}, ..., v_{t_1} from V_τ(t_0). This simply consists in making the temporal representation E_T(V_τ(t_0)) match the observation v_{t_0+(τ+1)∆t} instead of v_{t_0}. This index offset does not change our interpretation of spatiotemporal disentanglement through separation of variables.
F Additional Results and Samples
F.1 Additional Results on Moving MNIST
We compare results on the Moving MNIST dataset in Table 3 for the several variants of imposing time invariance detailed in Appendix B.
We can conclude from Table 3 that our proposed way of enforcing time invariance is significantly more effective than directly minimizing a discretization of the left-hand side of Equation (14) (L^S_first order). Indeed, our method provides more consistent long-term forecasts than those produced using L^S_first order. Furthermore, it performs better in both long- and short-term forecasts than the L^S_random loss proposed by Denton & Birodkar (2017). Finally, compared to both L^S_first order and L^S_random, our way of imposing time invariance strengthens the disentanglement ability of our algorithm, providing better results in the swap experiment at t + 95. These results confirm the analysis of Appendix B.

Finally, modeling the evolution of the spatial content and minimizing the ℓ2-norm of the residuals is a competitive alternative to our approach in both prediction and disentanglement, but it is more complex and computationally heavier, as its execution time is increased by about 20%.
F.2 Additional Samples
F.2.1 WaveEq
We provide in Figure 4 a sample for the WaveEq dataset, highlighting the long-term consistency in the forecasts of our algorithm.
We also show in Figure 5 the effect on forecasting of replacing the spatial code S with the one of another sequence.
F.2.2 SST
We provide an additional sample for SST in Figure 6.
F.2.3 Moving MNIST

We provide two additional samples for Moving MNIST in Figures 7 and 8.

Figure 2: Example of predictions of compared models on SST.
Figure 3: Example of predictions of compared models on Moving MNIST.
Figure 4: Example of predictions of our model on WaveEq.
Figure 5: Evolution of the scaled difference between the forecast of a sequence and the same forecast with a spatial code coming from another sequence for the WaveEq dataset.
Figure 6: Example of predictions of compared models on SST.
Figure 7: Example of predictions of compared models on Moving MNIST.
Figure 8: Example of predictions of compared models on Moving MNIST.
Table 1: Forecast mean squared errors on WaveEq-100, WaveEq, and SST for our model and PKnl with respect to the indicated prediction horizons. Bold scores indicate the best performing method.

Models           | WaveEq-100 (t + 40) | WaveEq (t + 40) | SST (t + 6) | SST (t + 10)
PKnl             | -                   | -               | 1.28        | 2.03
Ours             | 1.52 × 10⁻⁵         | 4.78 × 10⁻⁵     | 1.17        | 1.79
Ours (without S) | 1.56 × 10⁻⁴         | 1.99 × 10⁻⁴     | 1.60        | 2.38

Table 2: PSNR and SSIM scores of DrNet, DDPAE and our model on the Moving MNIST dataset for prediction and content swap tasks (PSNR / SSIM, higher is better). Bold scores indicate the best performing method.

Models             | Pred. (t + 10) | Pred. (t + 95) | Swap (t + 10) | Swap (t + 95)
DrNet              | 14.94 / 0.6596 | 12.91 / 0.5379 | 14.12 / 0.6206 | 12.80 / 0.5306
DDPAE              | 21.17 / 0.8814 | 13.56 / 0.6446 | 18.44 / 0.8256 | 13.25 / 0.6378
Ours               | 21.74 / 0.9094 | 17.22 / 0.7867 | 18.30 / 0.8343 | 16.21 / 0.7600
Ours (without S)   | failed: underflow after a few iterations
Ours (λ_AE = 0)    | 21.51 / 0.9065 | 15.17 / 0.7054 | 18.01 / 0.8274 | 14.52 / 0.6884
Ours (λ^S_reg = 0) | 15.69 / 0.6670 | 13.77 / 0.6770 | 13.76 / 0.5392 | 13.56 / 0.6631
Ours (λ^T_reg = 0) | 15.06 / 0.7030 | 13.96 / 0.7218 | 14.64 / 0.6907 | 13.92 / 0.7208
Table 3: PSNR and SSIM scores of DrNet, DDPAE and our model on the Moving MNIST dataset for prediction and content swap tasks (PSNR / SSIM). The first part of the table reports the results of Table 2; the second part reports additional results for the alternative invariance losses. Bold scores indicate the best performing method in each part of the table.

Models                       | Pred. (t + 10) | Pred. (t + 95) | Swap (t + 10) | Swap (t + 95)
DrNet                        | 14.94 / 0.6596 | 12.91 / 0.5379 | 14.12 / 0.6206 | 12.80 / 0.5306
DDPAE                        | 21.17 / 0.8814 | 13.56 / 0.6446 | 18.44 / 0.8256 | 13.25 / 0.6378
Ours                         | 21.74 / 0.9094 | 17.22 / 0.7867 | 18.30 / 0.8343 | 16.21 / 0.7600
Ours (without S)             | failed: underflow after a few iterations
Ours (λ_AE = 0)              | 21.51 / 0.9065 | 15.17 / 0.7054 | 18.01 / 0.8274 | 14.52 / 0.6884
Ours (λ^S_reg = 0)           | 15.69 / 0.6670 | 13.77 / 0.6770 | 13.76 / 0.5392 | 13.56 / 0.6631
Ours (λ^T_reg = 0)           | 15.06 / 0.7030 | 13.96 / 0.7218 | 14.64 / 0.6907 | 13.92 / 0.7208
Ours (L^S_resblock, L^S_ODE) | 21.76 / 0.9080 | 17.89 / 0.8130 | 18.30 / 0.8327 | 16.68 / 0.7793
Ours (L^S_first order)       | 21.49 / 0.9054 | 15.80 / 0.7411 | 17.96 / 0.8242 | 15.11 / 0.7225
Ours (L^S_random)            | 21.67 / 0.9071 | 16.56 / 0.7648 | 18.39 / 0.8351 | 15.74 / 0.7432
[2] Code is available at https://github.com/JeremDona/spatiotemporal_variable_separation.
[3] https://resources.marine.copernicus.eu/?option=com_csw&view=details&product_id=GLOBAL_ANALYSIS_FORECAST_PHY_001_024.
[4] https://github.com/nvidia/apex.
Acknowledgments

We would like to thank all members of the MLIA team from the LIP6 laboratory of Sorbonne Université for helpful discussions and comments. We acknowledge financial support from the LOCUST ANR project (ANR-15-CE23-0027) and the European Union's Horizon 2020 research and innovation programme under grant agreement 825619 (AI4EU). This study has been conducted using E.U. Copernicus Marine Service Information. This work was granted access to the HPC resources of IDRIS under the allocation 2020-AD011011360 made by GENCI (Grand Equipement National de Calcul Intensif).
References

Achille, A. and Soatto, S. Emergence of invariance and disentanglement in deep representations. Journal of Machine Learning Research, 19(50):1-34, 2018.

Ayed, I., de Bézenac, E., Pajot, A., and Gallinari, P. Learning the spatio-temporal dynamics of physical processes from partial observations. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3232-3236, 2020.

Behrmann, J., Grathwohl, W., Chen, R. T. Q., Duvenaud, D., and Jacobsen, J.-H. Invertible residual networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 573-582. PMLR, 2019.

Benenti, S. Intrinsic characterization of the variable separation in the Hamilton-Jacobi equation. Journal of Mathematical Physics, 38(12):6578-6602, 1997.

Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.

Brunton, S. L., Proctor, J. L., and Kutz, J. N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932-3937, 2016.

Bungartz, H.-J. and Griebel, M. Sparse grids. Acta Numerica, 13:147-269, 2004.

Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. Neural ordinary differential equations. In Advances in Neural Information Processing Systems 31, pp. 6571-6583. Curran Associates, Inc., 2018.

Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems 29, pp. 2172-2180, 2016.

Chen, Z., Zhang, J., Arjovsky, M., and Bottou, L. Symplectic recurrent neural networks. In International Conference on Learning Representations, 2020.

Citron, D. K. and Chesney, R. Deep fakes: A looming challenge for privacy, democracy, and national security. 107 California Law Review 1753 (2019); U of Texas Law, Public Law Research Paper No. 692; U of Maryland Legal Studies Research Paper No. 2018-21, 2018.

de Avila Belbute-Peres, F., Smith, K. A., Allen, K. R., Tenenbaum, J. B., and Kolter, J. Z. End-to-end differentiable physics for learning and control. In Advances in Neural Information Processing Systems 31, pp. 7178-7189. Curran Associates, Inc., 2018.

de Bézenac, E., Pajot, A., and Gallinari, P. Deep learning for physical processes: Incorporating prior scientific knowledge. In International Conference on Learning Representations, 2018.

Denton, E. and Birodkar, V. Unsupervised learning of disentangled representations from video. In Advances in Neural Information Processing Systems 30, pp. 4414-4423. Curran Associates, Inc., 2017.

Denton, E. and Fergus, R. Stochastic video generation with a learned prior. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1174-1183. PMLR, 2018.

Dolhansky, B., Howes, R., Pflaum, B., Baram, N., and Ferrer, C. C. The deepfake detection challenge (DFDC) preview dataset. arXiv preprint arXiv:1910.08854, 2019.

Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., van der Smagt, P., Cremers, D., and Brox, T. FlowNet: Learning optical flow with convolutional networks. In The IEEE International Conference on Computer Vision (ICCV), pp. 2758-2766, 2015.

Finn, C., Goodfellow, I., and Levine, S. Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems 29, pp. 64-72. Curran Associates, Inc., 2016.

Fourier, J. B. J. Théorie analytique de la chaleur. Firmin Didot, 1822.

Franceschi, J.-Y., Delasalles, E., Chen, M., Lamprier, S., and Gallinari, P. Stochastic latent residual video prediction. arXiv preprint arXiv:2002.09219, 2020.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014.

Greydanus, S., Dzamba, M., and Yosinski, J. Hamiltonian neural networks. In Advances in Neural Information Processing Systems 32, pp. 15379-15389. Curran Associates, Inc., 2019.

Güera, D. and Delp, E. J. Deepfake video detection using recurrent neural networks. In 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1-6, 2018.

Haber, E. and Ruthotto, L. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.

Hairer, E., Nørsett, S. P., and Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems, chapter Runge-Kutta and Extrapolation Methods, pp. 129-353. Springer Berlin Heidelberg, Berlin, Heidelberg, 1993.

Hamilton, W. R. Second essay on a general method in dynamics. Philosophical Transactions of the Royal Society, 125:95-144, 1835.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.

Horn, B. K. P. and Schunck, B. G. Determining optical flow. Artificial Intelligence, 17(1-3):185-203, 1981.

Hsieh, J.-T., Liu, B., Huang, D.-A., Fei-Fei, L., and Niebles, J. C. Learning to decompose and disentangle representations for video prediction. In Advances in Neural Information Processing Systems 31, pp. 517-526. Curran Associates, Inc., 2018.

Hsu, W.-N., Zhang, Y., and Glass, J. Unsupervised learning of disentangled and interpretable representations from sequential data. In Advances in Neural Information Processing Systems 30, pp. 1878-1889. Curran Associates, Inc., 2017.

Jaques, M., Burke, M., and Hospedales, T. Physics-as-inverse-graphics: Unsupervised physical parameter estimation from video. In International Conference on Learning Representations, 2020.

Jia, H., Xu, W., Zhao, X., and Li, Z. Separation of variables and exact solutions to nonlinear diffusion equations with x-dependent convection and absorption. Journal of Mathematical Analysis and Applications, 339(2):982-995, 2008.

Kalnins, E. G., Miller, Jr., W., and Williams, G. C. Recent advances in the use of separation of variables methods in general relativity. Philosophical Transactions: Physical Sciences and Engineering, 340(1658):337-352, 1992.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.

Kosiorek, A. R., Kim, H., Teh, Y. W., and Posner, I. Sequential attend, infer, repeat: Generative modelling of moving objects. In Advances in Neural Information Processing Systems 31, pp. 8606-8616. Curran Associates, Inc., 2018.

Kraskov, A., Stögbauer, H., and Grassberger, P. Estimating mutual information. Physical Review E, 69:066138, 2004.

Kutta, M. W. Beitrag zur näherungsweisen Integration totaler Differentialgleichungen. Zeitschrift für Mathematik und Physik, 45:435-453, 1901.

Le Dret, H. and Lucquin, B. Partial Differential Equations: Modeling, Analysis and Numerical Approximation, chapter The Heat Equation, pp. 219-251. Springer International Publishing, Cham, 2016.

Le Guen, V. and Thome, N. Disentangling physical dynamics from unknown factors for unsupervised video prediction. arXiv preprint arXiv:2003.01460, 2020.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Li, X., Wong, T.-K. L., Chen, R. T. Q., and Duvenaud, D. Scalable gradients for stochastic differential equations. arXiv preprint arXiv:2001.01328, 2020.

Liu, Z., Wu, J., Xu, Z., Sun, C., Murphy, K., Freeman, W. T., and Tenenbaum, J. B. Modeling parts, structure, and system dynamics via predictive learning. In International Conference on Learning Representations, 2019.

Locatello, F., Bauer, S., Lucic, M., Rätsch, G., Gelly, S., Schölkopf, B., and Bachem, O. Challenging common assumptions in the unsupervised learning of disentangled representations. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 4114-4124. PMLR, 2019.

Srivastava, N., Mansimov, E., and Salakhudinov, R. Unsupervised learning of video representations using LSTMs. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 843-852. PMLR, 2015.

Tompson, J., Schlachter, K., Sprechmann, P., and Perlin, K. Accelerating Eulerian fluid simulation with convolutional networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 3424-3433. PMLR, 2017.

Toth, P., Rezende, D. J., Jaegle, A., Racanière, S., Botev, A., and Higgins, I. Hamiltonian generative networks. In International Conference on Learning Representations, 2020.

Tulyakov, S., Liu, M.-Y., Yang, X., and Kautz, J. MoCoGAN: Decomposing motion and content for video generation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1526-1535, 2018.
Learning PDEs from data. Z Long, Y Lu, X Ma, B Dong, Pde-Net, PMLRProceedings of the 35th International Conference on Machine Learning. Dy, J. and Krause, A.the 35th International Conference on Machine LearningStockholmsmässan, Stockholm Sweden80Long, Z., Lu, Y., Ma, X., and Dong, B. PDE-Net: Learning PDEs from data. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 3208-3216, Stockholmsmässan, Stockholm Sweden, July 2018. PMLR.
PDE-Net 2.0: Learning PDEs from data with a numeric-symbolic hybrid deep network. Z Long, Y Lu, Dong , B , Journal of Computational Physics. 399108925Long, Z., Lu, Y., and Dong, B. PDE-Net 2.0: Learning PDEs from data with a numeric-symbolic hybrid deep network. Journal of Computational Physics, 399:108925, 2019.
Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. Y Lu, A Zhong, Q Li, Dong , B , arXiv:1710.10121arXiv preprintLu, Y., Zhong, A., Li, Q., and Dong, B. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. arXiv preprint arXiv:1710.10121, 2017.
Note du Pôle de modélisation. G Madec, Nemo Ocean Engine, FranceInstitut Pierre-Simon Laplace (IPSLMadec, G. NEMO ocean engine. Note du Pôle de modélisation, Institut Pierre-Simon Laplace (IPSL), France, No 27, 2008.
Mixed precision training. P Micikevicius, S Narang, J Alben, G Diamos, E Elsen, D Garcia, B Ginsburg, M Houston, O Kuchaiev, G Venkatesh, H Wu, International Conference on Learning Representations. Micikevicius, P., Narang, S., Alben, J., Diamos, G., Elsen, E., Garcia, D., Ginsburg, B., Houston, M., Kuchaiev, O., Venkatesh, G., and Wu, H. Mixed precision training. In International Conference on Learning Representations, 2018.
The technique of variable separation for partial differential equations. Jr Miller, W , Nonlinear Phenomena. Wolf, K. B.Berlin, Heidelberg; Berlin HeidelbergSpringerMiller, Jr., W. The technique of variable separation for partial differential equations. In Wolf, K. B. (ed.), Nonlinear Phenomena, pp. 184-208, Berlin, Heidelberg, 1983. Springer Berlin Heidelberg.
Mechanisms for variable separation in partial differential equations and their relationship to group theory. Jr Miller, W , Symmetries and Nonlinear Phenomena: Proceedings of the International School on Applied Mathematics. Levi, D. and Winternitz, P.SingaporeWorld ScientificMiller, Jr., W. Mechanisms for variable separation in partial differential equations and their relation- ship to group theory. In Levi, D. and Winternitz, P. (eds.), Symmetries and Nonlinear Phenomena: Proceedings of the International School on Applied Mathematics, pp. 188-221, Singapore, 1988. World Scientific.
Unsupervised learning of object structure and dynamics from videos. M Minderer, C Sun, R Villegas, F Cole, K Murphy, H Lee, Advances in Neural Information Processing Systems. Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., and Garnett, R.Curran Associates, Inc32Minderer, M., Sun, C., Villegas, R., Cole, F., Murphy, K., and Lee, H. Unsupervised learning of object structure and dynamics from videos. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 92-102. Curran Associates, Inc., 2019.
An imperative style, high-performance deep learning library. A Paszke, S Gross, F Massa, A Lerer, J Bradbury, G Chanan, T Killeen, Z Lin, N Gimelshein, L Antiga, A Desmaison, A Kopf, E Yang, Z Devito, M Raison, A Tejani, S Chilamkurthy, B Steiner, L Fang, J Bai, S Chintala, Pytorch, Advances in Neural Information Processing Systems. Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., and Garnett, R.Curran Associates, Inc32Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. PyTorch: An imperative style, high-performance deep learning library. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 8026-8037. Curran Associates, Inc., 2019.
Functional separable solutions of nonlinear convection-diffusion equations with variable coefficients. A D Polyanin, Communications in Nonlinear Science and Numerical Simulation. 73Polyanin, A. D. Functional separable solutions of nonlinear convection-diffusion equations with variable coefficients. Communications in Nonlinear Science and Numerical Simulation, 73:379- 390, July 2019.
Functional separation of variables in nonlinear PDEs: General approach, new solutions of diffusion-type equations. A D Polyanin, Mathematics. 8190Polyanin, A. D. Functional separation of variables in nonlinear PDEs: General approach, new solutions of diffusion-type equations. Mathematics, 8(1):90, 2020.
Separation of variables in PDEs using nonlinear transformations: Applications to reaction-diffusion type equations. A D Polyanin, A I Zhurov, Applied Mathematics Letters. 100106055Polyanin, A. D. and Zhurov, A. I. Separation of variables in PDEs using nonlinear transformations: Applications to reaction-diffusion type equations. Applied Mathematics Letters, 100:106055, February 2020.
Unsupervised representation learning with deep convolutional generative adversarial networks. A Radford, L Metz, S Chintala, International Conference on Learning Representations. Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations, 2016.
Deep hidden physics models: Deep learning of nonlinear partial differential equations. M Raissi, Journal of Machine Learning Research. 1925Raissi, M. Deep hidden physics models: Deep learning of nonlinear partial differential equations. Journal of Machine Learning Research, 19(25):1-24, 2018.
Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. M Raissi, A Yazdani, G E Karniadakis, Science. 3676481Raissi, M., Yazdani, A., and Karniadakis, G. E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science, 367(6481):1026-1030, 2020.
Stochastic backpropagation and approximate inference in deep generative models. D J Rezende, S Mohamed, D Wierstra, PMLRProceedings of the 31st International Conference on Machine Learning. Xing, E. P. and Jebara, T.the 31st International Conference on Machine LearningBejing, China32Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Xing, E. P. and Jebara, T. (eds.), Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pp. 1278-1286, Bejing, China, June 2014. PMLR.
U-net: Convolutional networks for biomedical image segmentation. O Ronneberger, P Fischer, T Brox, Medical Image Computing and Computer-Assisted Intervention -MICCAI 2015. Navab, N., Hornegger, J., Wells, W. M., and Frangi, A. F.ChamSpringer International PublishingRonneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Navab, N., Hornegger, J., Wells, W. M., and Frangi, A. F. (eds.), Medical Image Computing and Computer-Assisted Intervention -MICCAI 2015, pp. 234-241, Cham, 2015. Springer International Publishing.
Latent ordinary differential equations for irregularlysampled time series. Y Rubanova, R T Q Chen, D Duvenaud, Advances in Neural Information Processing Systems. Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., and Garnett, R.Curran Associates, Inc32Rubanova, Y., Chen, R. T. Q., and Duvenaud, D. Latent ordinary differential equations for irregularly- sampled time series. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 5320-5330. Curran Associates, Inc., 2019.
Black-box variational inference for stochastic differential equations. T Ryder, A Golightly, A S Mcgough, D Prangle, PMLRProceedings of the 35th International Conference on Machine Learning. Dy, J. and Krause, A.the 35th International Conference on Machine LearningStockholmsmässan, Stockholm Sweden80Ryder, T., Golightly, A., McGough, A. S., and Prangle, D. Black-box variational inference for stochastic differential equations. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th Interna- tional Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 4423-4432, Stockholmsmässan, Stockholm Sweden, July 2018. PMLR.
P Saha, S Dash, S Mukhopadhyay, Phicnet, arXiv:2004.06243Physics-incorporated convolutional recurrent neural networks for modeling dynamical systems. arXiv preprintSaha, P., Dash, S., and Mukhopadhyay, S. PhICnet: Physics-incorporated convolutional recurrent neural networks for modeling dynamical systems. arXiv preprint arXiv:2004.06243, 2020.
Convolutional LSTM network: A machine learning approach for precipitation nowcasting. X Shi, Z Chen, H Wang, D.-Y Yeung, W.-K Wong, W Woo, Advances in Neural Information Processing Systems. Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R.Curran Associates, Inc28Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-k., and Woo, W.-c. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 28, pp. 802-810. Curran Associates, Inc., 2015.
Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, International Conference on Learning Representations. Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
Dgm: A deep learning algorithm for solving partial differential equations. J Sirignano, K Spiliopoulos, Journal of Computational Physics. 375Sirignano, J. and Spiliopoulos, K. Dgm: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375:1339-1364, 2018.
Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. S Steenkiste, M Chang, K Greff, J Schmidhuber, International Conference on Learning Representations. Steenkiste, S., Chang, M., Greff, K., and Schmidhuber, J. Relational neural expectation maxi- mization: Unsupervised discovery of objects and their interactions. In International Conference on Learning Representations, 2018.
Decomposing motion and content for natural video sequence prediction. R Villegas, J Yang, S Hong, X Lin, H Lee, International Conference on Learning Representations. Villegas, R., Yang, J., Hong, S., Lin, X., and Lee, H. Decomposing motion and content for natural video sequence prediction. In International Conference on Learning Representations, 2017a.
Learning to generate long-term future via hierarchical prediction. R Villegas, J Yang, Y Zou, S Sohn, X Lin, H Lee, PMLRProceedings of the 34th International Conference on Machine Learning. Precup, D. and Teh, Y. W.the 34th International Conference on Machine LearningSydney, Australia70International Convention CentreVillegas, R., Yang, J., Zou, Y., Sohn, S., Lin, X., and Lee, H. Learning to generate long-term future via hierarchical prediction. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 3560-3569, International Convention Centre, Sydney, Australia, August 2017b. PMLR.
Generating videos with scene dynamics. C Vondrick, H Pirsiavash, A Torralba, Advances in Neural Information Processing Systems. Lee, D. D., Sugiyama, M., von Luxburg, U., Guyon, I., and Garnett, R.Curran Associates, Inc29Vondrick, C., Pirsiavash, H., and Torralba, A. Generating videos with scene dynamics. In Lee, D. D., Sugiyama, M., von Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 613-621. Curran Associates, Inc., 2016.
Disentangled sequential autoencoder. L Yingzhen, S Mandt, PMLRProceedings of the 35th International Conference on Machine Learning. Dy, J. and Krause, A.the 35th International Conference on Machine LearningStockholmsmässan, Stockholm Sweden80Yingzhen, L. and Mandt, S. Disentangled sequential autoencoder. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 5670-5679, Stockholmsmässan, Stockholm Sweden, July 2018. PMLR.
ODE 2 VAE: Deep generative second order odes with Bayesian neural networks. C Yıldız, M Heinonen, H Lahdesmaki, Advances in Neural Information Processing Systems. Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., and Garnett, R.Curran Associates, Inc32Yıldız, C., Heinonen, M., and Lahdesmaki, H. ODE 2 VAE: Deep generative second order odes with Bayesian neural networks. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 13412-13421. Curran Associates, Inc., 2019. |
29,154,793 | A Universal Music Translation Network | We present a method for translating music across musical instruments, genres, and styles. This method is based on a multi-domain wavenet autoencoder, with a shared encoder and a disentangled latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and large net capacity, the domain-independent encoder allows us to translate even from musical domains that were not seen during training. The method is unsupervised and does not rely on supervision in the form of matched samples between domains or musical transcriptions. We evaluate our method on NSynth, as well as on a dataset collected from professional musicians, and achieve convincing translations, even when translating from whistling, potentially enabling the creation of instrumental music by untrained humans. | [
5273326,
26100519
] | A Universal Music Translation Network
Noam Mor
Facebook AI Research
Lior Wolf
Facebook AI Research
Adam Polyak
Facebook AI Research
Yaniv Taigman
Facebook AI Research
A Universal Music Translation Network
We present a method for translating music across musical instruments, genres, and styles. This method is based on a multi-domain wavenet autoencoder, with a shared encoder and a disentangled latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and large net capacity, the domain-independent encoder allows us to translate even from musical domains that were not seen during training. The method is unsupervised and does not rely on supervision in the form of matched samples between domains or musical transcriptions. We evaluate our method on NSynth, as well as on a dataset collected from professional musicians, and achieve convincing translations, even when translating from whistling, potentially enabling the creation of instrumental music by untrained humans.
Introduction
Humans have always created music and replicated it -whether it is by singing, whistling, clapping, or, after some training, playing improvised or standard musical instruments. This ability is not unique to us: many other vocal-mimicking species are able to reproduce music after hearing it.
Music is also one of the first domains to be digitized and processed by modern computers and algorithms. It is therefore somewhat surprising that in the core music task of mimicry, AI is still much inferior to biological systems.
In this work we are able, for the first time as far as we know, to produce high fidelity musical translation between instruments, styles, and genres. For example, we convert the audio of a Mozart symphony performed by an orchestra to audio in the style of a pianist playing Beethoven. Our ability builds upon two technologies that have recently become available: (i) the ability to synthesize high quality audio using autoregressive models, and (ii) the recent advent of methods that transform between domains in an unsupervised way.
The first technology is important for two reasons. First, it allows us to generate high quality and realistic audio. Second, trained with the teacher forcing technique, autoregressive models are efficiently trained as decoders. The second family of technologies contributes to the practicality of the solution, since posing the learning problem in the supervised setting would require a parallel dataset of different musical instruments.
In our architecture, we employ a single, universal, encoder and apply it to all inputs. In addition to the advantage of training fewer networks, this also enables us to convert from musical domains that were not heard during training to any of the domains encountered.
The key to being able to train a single encoder architecture is making sure that the domain-specific information is not encoded. We do this using a domain confusion network that provides an adversarial signal to the encoder. In addition, it is important for the encoder not to memorize the input signal but to encode it in a semantic way. We achieve this by distorting the input audio by random local pitch modulation.
During training, the network is trained as a denoising autoencoder, which recovers the undistorted version of the original input. Since the distorted input is no longer in the musical domain of the output, the network learns to project out-of-domain inputs to the desired output domain. In addition, the network no longer benefits from memorizing the input signal and employs a higher-level encoding.
Our results present abilities that are, as far as we know, unheard of. Asked to convert one musical instrument to another, our network is on par with or slightly worse than professional musicians. Many times, people find it hard to tell which is the original audio file and which is the output of the conversion that mimics a completely different instrument. On the encoding side, our network is able to successfully process unseen musical instruments or other sources, such as whistles. On the output side, relatively high quality audio is produced and new instruments can be added without retraining the entire network.
Previous Work
Domain Transfer Recently, there has been a considerable amount of work, mostly on images and text, which performs unsupervised translation between domains A and B without being shown any matching pairs, i.e., in a completely unsupervised way. Almost all of this work employs GAN constraints [1] in order to ensure a high level of indistinguishability between the translations of samples in A and samples from the domain B. In our work, the output is generated by an autoregressive model and training takes place using the ground truth output of the previous time steps ("teacher forcing"), instead of the predicted ones. A complete autoregressive inference is only done during test time, and it is not practical to apply such inference during training in order to get a realistic generated ("fake") sample for the purpose of training the GAN.
Another popular constraint is that of circularity, namely that by mapping from A to B and back to A a reconstruction of the original sample is obtained [2, 3, 4]. In our work, for the same reason mentioned above, the output during training does not represent the future test time output, and such a constraint is unrealistic. An application of circularity in audio was presented in [5], where a non-autoregressive model between vocoder features is used to convert between voices in an unsupervised way.
Cross domain translation is not restricted to a single pair of domains. The recent StarGAN [6] method creates multiple cycles for mapping between multiple (more than two) domains. The method employs a single generator that receives as input the source image as well as the specification of the target domain and produces the analog "fake" image from the target domain. Our work employs multiple decoders, one per domain, and attempts to condition a single decoder on the selection of the output domain failed to produce convincing results.
Another type of constraint is provided by employing a shared latent space from which samples in both domains are generated. CoGAN [7] learns a mapping from a random input vector z to matching samples, one in each domain. The two domains are assumed to be similar and their generators (and GAN discriminators) share many of the layers' weights. Specifically, the earlier generator layers are shared while the top layers are domain-specific. CoGAN has been applied to the task of domain translation in the following way: given a sample x ∈ A, a latent vector z_x is fitted to minimize the distance between the image generated by the first generator G_A(z_x) and the input image x. Then, the analogous image in B is given by G_B(z_x). Applying optimization during inference leads to slower solutions and to reliance on good initialization. On the other hand, it may lead to multiple solutions, which is sometimes desirable.
UNIT [8] employs an encoder-decoder pair per each domain, where the latent spaces of the domains are assumed to be shared. Similarly to CoGAN, the layers that are distant from the image (the top layers of the encoder and the bottom layers of the decoder) are the ones shared. Cycle-consistency is added as well, and structure is added to the latent space using a variational autoencoder [9] loss terms. Our method employs a single encoder, which eliminates the need for many of the associated constraints. In addition, we do not impose a VAE loss term [9] on the latent space of the encodings and instead employ a domain confusion loss [10].
Audio Synthesis WaveNet [11] is an autoregressive model that predicts the probability distribution of the next sample, given the previous samples and an input conditioning signal. Its generated output is currently considered of the highest naturalness, and is applied in a range of tasks. In [12], the authors have used it for denoising waveforms by predicting the middle ground-truth sample from its noisy input support. Recent contributions in Text-To-Speech (TTS) [13, 14] have successfully conditioned WaveNet on linguistic and acoustic features to obtain state of the art performance. In our encoder-decoder architecture, we use WaveNet as the output of the decoder, and backpropagate through it down to the encoder.
In [15], voice conversion was obtained by employing a variational autoencoder that produces a quantized latent space that is conditioned on the speaker identity. Similarly to our work, the decoder is based on WaveNet [11]; however, we impose a greater constraint on the latent space by (a) having a universal encoder, forcing the embeddings of all domains to lie in the same space, yet (b) training a separate reconstructing decoder for each domain, (c) ensuring that the latent space is disentangled, thereby reducing memorization of source-target pathways, and (d) employing augmentation to distort the input signal.
The specific architecture of the autoencoder we employ is the wavenet-autoencoder presented in [16]. In comparison to this work, our inputs are not controlled and are collected from consumer media. Our overall architecture differs in that multiple decoders and an additional auxiliary network used for disentanglement are trained and by the introduction of a crucial augmentation step. By choosing to employ the same hyperparameters as previous work for the encoder and decoders themselves, the contribution of our approach is further emphasized.
In the supervised learning domain, an audio style transfer between source and target spectrograms was performed with sequence-to-sequence recurrent networks [17]. This method requires matching pairs of samples played on different instruments. In another fully supervised work [18], a graphical model aimed at modeling polyphonic tones of Bach was trained on notes, capturing the specificity of Bach's chorales. This model is based on recurrent networks and requires a large corpus of notes of a particular instrument produced with a music editor.
Style Transfer Style transfer is often confused with domain translation and many times the distinction is not clear. In the task of style transfer, the "content" remains the same between the input and the output, but the "style" is modified. Notable contributions in the field include [19,20,21]. These methods synthesize a new image that minimizes the content loss with respect to the content-donor sample and the style loss with respect to one or more samples of a certain style. The content loss is based on comparing the activations of a network training for an image categorization task. The style loss compares the statistics of the activations in various layers of the categorization layer. An attempt at audio style transfer is described in [22].
We distance ourselves from style transfer and do not try to employ such methods since we believe that a melody played by a piano is not similar except for audio texture differences to the same melody sung by a chorus. The mapping has to be done at a higher level and the modifications are not simple local changes.
A support to our approach is provided by the current level of success using classical conversion methods, which are still limited to monophonic instruments (one note each time). Such methods employ an analysis followed by a synthesis framework. First, the signal is analyzed to extract pitch and timbre (using harmonics tracking) and then it is converted to another monophonic instrument, using a known timbre model [23].
Method
Our method is based on training multiple autoencoder pathways, one per musical domain, such that the encoders are shared. During training, a softmax-based reconstruction loss is applied to each domain separately. The input data is randomly augmented prior to applying the encoder in order to force the network to extract high-level semantic features instead of simply memorizing the data. In addition, a domain confusion loss [10] is applied to the latent space to ensure that the encoding is not domain-specific. A diagram of the architecture is shown in Fig. 1.
WaveNet Autoencoder
We reuse an existing autoencoder architecture that is based on a WaveNet decoder and a WaveNet-like dilated convolution encoder [16]. The WaveNet of each decoder is conditioned on the latent representation produced by the encoder. In order for the architecture to be compatible with the inference-time CUDA kernels provided by NVIDIA (https://github.com/NVIDIA/nv-wavenet), the WaveNet equations were slightly modified.
The encoder is a fully convolutional network that can be applied to any sequence length. The network has three blocks of 10 residual-layers each. Each residual-layer contains a RELU nonlinearity, a dilated convolution with an increasing kernel size, a second RELU, and a 1 × 1 convolution, followed by the residual summation of the activations before the first RELU. There is a fixed width of 128 channels. After the three blocks, there is an additional 1 × 1 layer. An average pooling with a kernel size of 50 milliseconds (800 samples) follows, in order to obtain an encoding in R^64, which implies a temporal downsampling by a factor of 12.5.
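As a concrete illustration, the sketch below implements this encoder in PyTorch. It is a minimal reconstruction from the prose above, not the authors' released code: the per-layer growth schedule (we double the dilation in each of the 10 layers) and the pooling stride are assumptions, and all class names are ours.

```python
import torch
import torch.nn as nn

class EncoderResLayer(nn.Module):
    """ReLU -> dilated conv -> ReLU -> 1x1 conv, with a residual connection."""
    def __init__(self, channels=128, dilation=1, kernel_size=3):
        super().__init__()
        pad = (kernel_size - 1) // 2 * dilation  # keep the sequence length
        self.dilated = nn.Conv1d(channels, channels, kernel_size,
                                 dilation=dilation, padding=pad)
        self.one_by_one = nn.Conv1d(channels, channels, 1)

    def forward(self, x):
        h = self.dilated(torch.relu(x))
        h = self.one_by_one(torch.relu(h))
        return x + h  # residual summation of the pre-ReLU activations

class Encoder(nn.Module):
    """Three blocks of 10 residual layers, a final 1x1 layer, then average pooling."""
    def __init__(self, channels=128, latent_dim=64, pool_kernel=800, pool_stride=800):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, 1)
        self.blocks = nn.Sequential(*[
            EncoderResLayer(channels, dilation=2 ** i)
            for _ in range(3) for i in range(10)])
        self.out = nn.Conv1d(channels, latent_dim, 1)
        # 800 samples = 50 ms at 16 kHz; the stride that yields the stated 12.5x
        # temporal downsampling is not fully specified, so it is a parameter here.
        self.pool = nn.AvgPool1d(pool_kernel, stride=pool_stride)

    def forward(self, x):  # x: (batch, 1, time)
        return self.pool(self.out(self.blocks(self.inp(x))))
```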
The encoding is upsampled temporally to the original audio rate using nearest neighbor interpolation and is used to condition a WaveNet decoder. The conditioning signal is passed through a 1 × 1 layer that is different for each WaveNet layer. The audio (both input and output) is quantized using 8-bit mu-law encoding, similarly to [11, 16], which results in some inherent loss of quality. The WaveNet decoder has 4 blocks of 10 residual-layers; as a result, the decoder has a receptive field of 250 milliseconds (4,093 samples).
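The 8-bit mu-law companding referenced here follows the standard formula; below is a minimal NumPy sketch (the rounding convention is our choice).

```python
import numpy as np

def mu_law_encode(x, mu=255):
    """Map a waveform in [-1, 1] to 256 discrete mu-law levels."""
    x = np.clip(x, -1.0, 1.0)
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((compressed + 1) / 2 * mu + 0.5).astype(np.int64)  # values in {0,...,255}

def mu_law_decode(q, mu=255):
    """Invert the 8-bit mu-law quantization (lossy)."""
    x = 2.0 * (q.astype(np.float64) / mu) - 1.0
    return np.sign(x) * ((1 + mu) ** np.abs(x) - 1) / mu
```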
Audio Input Augmentation
In order to improve the generalization capability of the encoder, as well as to force it to maintain higher-level information, we employ a dedicated augmentation procedure that changes the pitch locally. The resulting audio is of a similar quality but is slightly out of tune.
Specifically, we perform our training on segments of one second length. For augmentation, we uniformly select a segment of length between 0.25 and 0.5 seconds, and modulate its pitch by a random amount between −0.5 and 0.5 half-steps, using librosa [24].
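A possible implementation of this augmentation, assuming librosa's pitch_shift (which preserves the segment length); the boundary handling and the random-state plumbing are our simplifications.

```python
import numpy as np
import librosa

def augment_pitch(audio, sr=16000, rng=np.random):
    """Shift the pitch of a random 0.25-0.5 s sub-segment by up to +-0.5 half-steps."""
    seg_len = int(rng.uniform(0.25, 0.5) * sr)
    start = rng.randint(0, max(1, len(audio) - seg_len))
    n_steps = rng.uniform(-0.5, 0.5)
    shifted = librosa.effects.pitch_shift(audio[start:start + seg_len],
                                          sr=sr, n_steps=n_steps)
    out = audio.copy()
    out[start:start + seg_len] = shifted[:seg_len]  # splice the detuned segment back
    return out
```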
Training and the Losses Used
Let s^j be an input sample from domain j ∈ {1, 2, …, k}, where k is the number of domains employed during training. Let E be the shared encoder, and D^j the WaveNet decoder for domain j. Let C be the domain classification network, and O(s, r) the random augmentation procedure applied to a sample s with a random seed r.
The C network predicts which domain the input data came from, based on the latent vectors. To do so, it applies three 1D-convolution layers with the ELU [25] nonlinearity. The last layer projects the vectors to dimension k. The vectors are then averaged to obtain a single vector of dimension k.
The autoencoders are trained to minimize the reconstruction loss of each domain while fooling the domain confusion network:

Σ_j Σ_{s^j} E_r [ L(D^j(E(O(s^j, r))), s^j) − λ L(C(E(O(s^j, r))), j) ]    (1)

where L(o, y) is the cross entropy loss applied to each element of the output o and the corresponding element of the target y separately, and λ is a weighting parameter. Note that the decoder D^j is an autoregressive model that is conditioned on the output of E. During training, the autoregressive model is fed the target output s^j from the previous time-step, instead of the generated output. The domain confusion network C is trained to minimize the classification loss:
Σ_j Σ_{s^j} E_r L(C(E(O(s^j, r))), j)    (2)
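The alternating updates implied by Eqs. (1) and (2) might look as follows in PyTorch. This is a sketch under stated assumptions: decoders[j](z, clean) is assumed to return teacher-forced logits of shape (batch, 256, time), targets are the 8-bit mu-law codes of the clean audio, and all function names are ours.

```python
import torch
import torch.nn.functional as F

def train_step(clean, targets, j, encoder, decoders, conf_net,
               opt_ae, opt_conf, augment, lam=1e-2):
    """One batch from domain j: update the confusion net, then the autoencoder."""
    x = augment(clean)  # distorted input O(s^j, r)
    dom = torch.full((x.size(0),), j, dtype=torch.long, device=x.device)

    # Eq. (2): train C to recognize the source domain from the latent codes.
    loss_c = F.cross_entropy(conf_net(encoder(x).detach()), dom)
    opt_conf.zero_grad(); loss_c.backward(); opt_conf.step()

    # Eq. (1): train E and D^j to reconstruct the clean audio while fooling C.
    z = encoder(x)
    loss_rec = F.cross_entropy(decoders[j](z, clean), targets)  # teacher forcing
    loss_adv = F.cross_entropy(conf_net(z), dom)
    loss = loss_rec - lam * loss_adv
    opt_ae.zero_grad(); loss.backward(); opt_ae.step()  # opt_ae holds E and D^j params
    return loss.item()
```

Note the minus sign on the confusion term: the encoder is pushed to maximize C's loss, while C itself is updated in the opposite direction, which matches the adversarial signal described in Sec. 3.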
Network during inference
To perform the actual transformation of a sample s from any domain, even from an unseen musical domain, to output domain j, we apply the autoencoder of domain j to it, without applying the distortion. The new sample ŝ^j is therefore given as D^j(E(s)). The bottleneck during inference is the autoregressive process performed by the WaveNet, which is accelerated by the dedicated CUDA kernels from NVIDIA.
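In code, inference then reduces to a single composition; generate stands for the autoregressive WaveNet sampling loop and is an assumed API, not a documented one.

```python
import torch

def translate(s, j, encoder, decoders):
    """Translate waveform s (from any domain) to domain j: s_hat^j = D^j(E(s))."""
    with torch.no_grad():
        z = encoder(s)                  # no augmentation at test time
        return decoders[j].generate(z)  # autoregressive sampling (assumed API)
```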
Experiments
We describe below the training process, the datasets used for training, as well as an ablation study. Extensive experiments were done on unconstrained music as well as on the NSynth [16] dataset. Audio samples are available in the supplementary archive.

During training, we iterate over the training domains, such that each training batch contains 16 randomly sampled one-second samples from a single domain. Each batch is first used to train the adversarial discriminator, and then to train the universal encoder and the domain decoder given the updated discriminator.
The system was implemented in the PyTorch framework, and trained on eight Tesla V100 GPUs for a total of 6 days. We used the ADAM optimization algorithm with a learning rate of 10^-3 and a decay factor of 0.98 every 10,000 samples. We weighted the confusion loss with λ = 10^-2.
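One way to set up this schedule, reusing the names from the sketches above; this is a sketch only, since the exact decay bookkeeping of the original implementation is not specified here.

```python
import torch

params = list(encoder.parameters())
for dec in decoders:
    params += list(dec.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
# Multiply the learning rate by 0.98 on each scheduler step; sched.step() is
# then called once per 10,000 training samples processed.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1, gamma=0.98)
```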
We attempted to perform two ablation studies. In the first study, the training procedure did not use the augmentation procedure of Sec. 3.2; in the second, the domain confusion network was not used (λ = 0). Neither model trained well: both either diverged after some time or trained too slowly. Despite considerable effort, we were not able to obtain ablation models that are compatible with further experimentation.
Evaluation of translation quality We consider human musicians, who are equipped by evolution, selected among their peers according to their talent, and trained for decades, to be the gold standard, and we do not expect to do better than humans. To compare our method to humans, we convert from domain X to piano, for various X. The piano was selected for practical reasons: pianists are in higher availability than other musicians, and a piano is easier to produce than, e.g., an orchestra.
Three professional musicians with a diverse background were employed for the conversion task: E, who is a conservatory graduate with an extensive background in music theory and piano performance, and also specializes in transcribing music; M, who is a professional producer, composer, pianist and audio engineer who is an expert in musical transcription; and A who is a music producer, editor, and a skilled player of keyboards and other instruments.
The task used for comparison was to convert 60 segments of 5 seconds each to piano. Three varied sources were used. 20 of the segments were from Bach's keyboard works, played on a Harpsichord, and 20 others were from Mozart's 46 symphonies conducted by Karl Böhm, which are orchestral works. The last group of 20 segments was a mix of three different domains that were not encountered during training -Swing Jazz, metal guitar riffs, and instrumental Chinese music. The 60 music segments were encoded by the universal encoder and decoded by the WaveNet trained on Beethoven's piano sonatas as performed by Daniel Barenboim.
In order to compare between the conversions we employed both human evaluation and an automatic score. Each score has its own limitations. The human judgment could be a mix of the assessment of the audio quality and the assessment of the translation itself. The quality of the algorithm's output is upper bounded by the neural network architecture and cannot match that of a high quality recording. The machine judgment is also limited and measures a single aspect of the conversion.
Specifically, Mean Opinion Scores (MOS) were collected using the CrowdMOS [26] package. Two questions were asked: (1) what is the quality of the audio, and (2) how well does the converted version match the original. The results are shown in Tab. 1. It shows that our audio quality is considerably lower than the results produced by humans using a keyboard connected to a computer (which should be rated as near perfect and makes any other audio quality in the MOS experiment pale in comparison). Regarding the translation success, the conversion from Harpsichord is better than the conversion from Orchestra. Surprisingly, the conversion from unseen domains is more successful than both these domains. In all three cases, our system is outperformed by the human musicians, whose conversions will soon be released to form a public benchmark.
The automatic assessment employed the pitch tracker of the librosa package [24]. For each input segment and each translation result (by a human or by the network), we extracted the pitch information. Then, we compared the input pitch to the output pitch using either the normalized cross correlation (NCC) obtained for the optimal shift, or Dynamic Time Warping (DTW) followed by a normalized correlation.
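A sketch of this automatic metric, assuming a recent librosa (its pyin tracker and sequence.dtw; the paper does not state which librosa pitch tracker was used), with one-sided shifts in the NCC search for brevity.

```python
import numpy as np
import librosa

def pitch_track(y, sr=16000):
    """Fundamental-frequency track of a waveform; unvoiced frames become 0."""
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                            fmax=librosa.note_to_hz('C7'), sr=sr)
    return np.nan_to_num(f0)

def ncc_best_shift(a, b, max_shift=20):
    """Normalized cross correlation at the best (non-negative) alignment shift."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    n = min(len(a), len(b)) - max_shift
    return max(float(np.mean(a[s:s + n] * b[:n])) for s in range(max_shift))

def dtw_corr(a, b):
    """Normalized correlation after aligning the two tracks with DTW."""
    _, wp = librosa.sequence.dtw(a.reshape(1, -1), b.reshape(1, -1))
    path = wp[::-1]  # warping path, reordered to increasing frame indices
    return float(np.corrcoef(a[path[:, 0]], b[path[:, 1]])[0, 1])
```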
The results are presented in Tab. 2. Comparing the pitch of the output to that of the input, our method is more conservative than the human translators. The gap is diminished after the application of DTW, which may suggest that the method preserves the timing of the input in a way that humans do not.

Lineup experiment In another set of experiments, we evaluate the ability of persons to identify the source musical segment from the conversions. We present, in each test, a set of six segments. One segment is a real segment from a random domain out of the ones used to train our network, and five are the associated translations. We shuffle the segments and ask which is the original one and which are conversions. In order to equate the quality of the source to that of the translations, we attach the source after it was passed through its domain's autoencoder.
The translation is perfectly authentic if the distribution of answers is uniform. However, the task is hard to define. In a first attempt, Amazon Mechanical Turk (AMT) freelancers tended to choose the same domain as the source regardless of the real source and the presentation order. This is shown in the confusion matrix of Fig. 2(a). We therefore asked two amateur musicians (T, a guitarist, and S, a dancer and drummer with a background in piano) and the professional musician A (from the first experiment) to identify the source sample out of the six options based on authenticity.
The results, in Fig. 2(b-d), show that there is a great amount of confusion. T and A failed in most cases, and A tended to show a similar bias to the AMT freelancers. S also failed to identify the majority of the cases, but showed coherent confusion patterns between pairs of instruments.
Semantic blending The ability to blend between musical pieces in a seamless manner is one of the skills developed by DJs. It requires careful consideration of beat, harmony, volume and pitch. We use this ability in order to check the additivity of the embedding space and blend two segments linearly.
We have selected two random 5 second segments i and j from the Mozart symphony domain and embedded both using the encoder, obtaining e_i and e_j. Then, we combine the embeddings as follows: starting with 3.5 seconds from e_i, we combine the next 1.5 seconds of e_i with the first 1.5 seconds of e_j using a linear weighting with weights 1 − t/1.5 and t/1.5 respectively, where t ∈ [0, 1.5]. We then use the decoder of the Mozart symphony to generate audio. The results are natural and the shift is completely seamless, as far as we observe. See supplementary for samples.
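The crossfade described here is easy to express directly on the embedding sequences; vecs_per_sec (the latent frame rate) is an assumed parameter, and the function name is ours.

```python
import numpy as np

def blend_embeddings(e_i, e_j, vecs_per_sec, head_sec=3.5, ramp_sec=1.5):
    """e_i, e_j: (time, dim) embedding sequences of two 5 s segments.
    Keep 3.5 s of e_i, then crossfade linearly into e_j over 1.5 s."""
    head = int(head_sec * vecs_per_sec)
    ramp = int(ramp_sec * vecs_per_sec)
    t = np.linspace(0.0, 1.0, ramp)[:, None]  # t/1.5 normalized to [0, 1]
    mixed = (1.0 - t) * e_i[head:head + ramp] + t * e_j[:ramp]
    # 5 s total; the result is then decoded by the Mozart-symphony decoder.
    return np.concatenate([e_i[:head], mixed], axis=0)
```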
NSynth pitch experiments NSynth [16] is an audio dataset containing samples of 1,006 instruments, each sample labeled with a unique pitch, timbre, and envelope. Each sample is a four second monophonic 16kHz snippet, ranging over every pitch of a standard MIDI piano (21-108) as well as five different velocities. It was not seen during training of our system.
We measure the correlation of embeddings retrieved using the encoder of our network across pitch for multiple instruments. The first two columns (from the left hand side) of Fig. 3 show self-correlations, while the third column shows correlation across instruments. As can be seen, the embedding encodes pitch information very clearly, despite being trained on complex polyphonic audio. The cosine similarity between the two instruments for the same pitch is, on average, 0.90-0.95 (mean of the diagonal), depending on the pair of instruments.
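A sketch of how such a similarity matrix can be computed; time-averaging each note's embedding before taking cosine similarities is our assumption about the pooling.

```python
import numpy as np

def pitch_similarity(emb_a, emb_b):
    """emb_a, emb_b: lists of (time, 64) embeddings, one entry per MIDI pitch.
    Returns the cosine-similarity matrix across pitches of two instruments."""
    A = np.stack([e.mean(axis=0) for e in emb_a])  # time-averaged note embeddings
    B = np.stack([e.mean(axis=0) for e in emb_b])
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    S = A @ B.T
    print("mean same-pitch similarity:", np.diag(S).mean())  # ~0.90-0.95 per the text
    return S
```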
Discussion
From a historical perspective, a universal representation has been a key component in many of the recent successes of machine learning. A notable example is AlexNet [27] and its successors, which were able to produce meaningful representations for many tasks outside ImageNet categorization.
In another example, Word2Vec [28] and subsequent variants, which are trained in an unsupervised manner, are extremely effective in a wide range of NLP tasks. We are therefore encouraged by the ability of our encoder to represent, despite being trained on only six homogeneous domains, a wide variety of out-of-domain inputs.
Our work could open the way to other high-level tasks, such as transcription of music and automatic composition of music. For the first task, the universal encoder may be suitable since, just like score sheets, it captures the required information in a way that is instrument independent. For the second task, we have initial results that we find interesting. By reducing the size of the latent space, the decoders become more "creative" and produce outputs that are natural yet novel, in the sense that the exact association with the original input is lost.
Figure 1: The architecture of our network. The confusion block (dashed line) is employed only during training.
Training
We train our network on six arbitrary classical musical domains: (i) Mozart's 46 symphonies conducted by Karl Böhm, (ii) Haydn's 27 string quartets, performed by the Amadeus Quartet, (iii) J.S. Bach's cantatas for orchestra, chorus and soloists, (iv) J.S. Bach's organ works, (v) Beethoven's 32 piano sonatas, performed by Daniel Barenboim, and (vi) J.S. Bach's keyboard works, played on Harpsichord. The music recordings by Bach are from the Teldec 2000 Complete Bach collection. The training and test splits are strictly separated by dividing the tracks (or audio files) between the two sets. The segments used in the evaluation experiments below were not seen during training.
Figure 2: Results of the lineup experiment. (a) Listeners from the general population tend to select the same domain as source regardless of the actual source. (b) The musician A failed to identify the source most of the time. (c) The amateur T and (d) the amateur S failed most of the time.
Figure 3: Correlation of embeddings across pitch. (a) Self-correlation for NSynth's flute-acoustic-027. (b) Self-correlation for keyboard-electronic-019. (c) The correlation between the electronic keyboard (y-axis) and the flute. (d) Self-correlation for brass-acoustic-018. (e) Self-correlation for string-acoustic-029. (f) The correlation between the brass instrument (y-axis) and the string.
Table 1: MOS scores (mean ± SD) for the conversion tasks.

Converter   Harpsichord→Piano              Orchestra→Piano                New domains→Piano
            Audio quality  Translation     Audio quality  Translation     Audio quality  Translation
                           success                        success                        success
E           3.89 ± 1.06    4.10 ± 0.94     4.02 ± 0.81    4.12 ± 0.97     4.44 ± 0.82    4.13 ± 0.83
M           3.82 ± 1.18    3.75 ± 1.17     4.13 ± 0.89    4.12 ± 0.98     4.48 ± 0.72    3.97 ± 0.88
A           3.69 ± 1.08    3.91 ± 1.16     4.06 ± 0.86    3.99 ± 1.08     4.53 ± 0.79    3.93 ± 0.95
Our         2.95 ± 1.18    3.07 ± 1.30     2.56 ± 1.04    2.86 ± 1.16     2.36 ± 1.17    3.18 ± 1.14
Table 2: Automatic quality scores for the conversion task.

Converter   Harpsichord→Piano   Orchestra→Piano     New domains→Piano
            NCC      DTW        NCC      DTW        NCC      DTW
E           0.82     0.98       0.78     0.97       0.76     0.97
M           0.69     0.96       0.65     0.95       0.72     0.95
A           0.76     0.97       0.73     0.95       0.75     0.94
Our         0.84     0.98       0.82     0.97       0.88     0.98
The authors of [16] have argued that while a WaveNet autoencoder cannot observe more than a fixed temporal context (around 1 second), the model is still able to produce arbitrarily long coherent audio due to the continual conditioning of the encoder. We believe that these models may be able to capture additional long-term structure through the autoregressive process itself, either due to the consistency of the mapping or to being able to maintain some context. We are currently running experiments to explore this possibility.
References

[1] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: NIPS (2014)
[2] Kim, T., Cha, M., Kim, H., Lee, J., Kim, J.: Learning to discover cross-domain relations with generative adversarial networks. In: ICML (2017)
[3] Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV (2017)
[4] Yi, Z., Zhang, H., Tan, P., Gong, M.: DualGAN: Unsupervised dual learning for image-to-image translation. In: ICCV (2017)
[5] Kaneko, T., Kameoka, H.: Parallel-data-free voice conversion using cycle-consistent adversarial networks. arXiv preprint arXiv:1711.11293 (2017)
[6] Choi, Y., Choi, M., Kim, M., Ha, J., Kim, S., Choo, J.: StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. arXiv preprint arXiv:1711.09020 (2017)
[7] Liu, M.Y., Tuzel, O.: Coupled generative adversarial networks. In: NIPS (2016) 469-477
[8] Liu, M.Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. In: NIPS (2017)
[9] Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. Stat (2014)
[10] Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V.: Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17(1) (2016) 2096-2030
[11] van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., Kavukcuoglu, K.: WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499 (2016)
[12] Rethage, D., Pons, J., Serra, X.: A wavenet for speech denoising. arXiv preprint arXiv:1706.07162 (2017)
[13] Ping, W., Peng, K., Gibiansky, A., Arik, S.Ö., Kannan, A., Narang, S., Raiman, J., Miller, J.: Deep Voice 3: 2000-speaker neural text-to-speech. In: ICLR (2018)
[14] Shen, J., Pang, R., Weiss, R.J., Schuster, M., Jaitly, N., Yang, Z., Chen, Z., Zhang, Y., Wang, Y., Skerry-Ryan, R.J., Saurous, R.A., Agiomyrgiannakis, Y., Wu, Y.: Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In: ICASSP (2018)
[15] van den Oord, A., Vinyals, O., Kavukcuoglu, K.: Neural discrete representation learning. In: NIPS (2017)
[16] Engel, J., Resnick, C., Roberts, A., Dieleman, S., Norouzi, M., Eck, D., Simonyan, K.: Neural audio synthesis of musical notes with WaveNet autoencoders. In: ICML (2017)
[17] Haque, A., Guo, M., Verma, P.: Conditional end-to-end audio transforms. arXiv preprint arXiv:1804.00047 (2018)
[18] Hadjeres, G., Pachet, F.: DeepBach: A steerable model for Bach chorales generation. In: ICML (2017)
[19] Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: CVPR (2016)
[20] Ulyanov, D., Lebedev, V., Vedaldi, A., Lempitsky, V.: Texture networks: Feed-forward synthesis of textures and stylized images. In: ICML (2016)
[21] Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: ECCV (2016)
[22] Barry, S., Kim, Y.: "Style" transfer for musical audio using multiple time-frequency representations (2018)
[23] Serra, X., Smith, J.: Spectral modeling synthesis: A sound analysis/synthesis system based on a deterministic plus stochastic decomposition. Computer Music Journal 14(4) (1990) 12-24
[24] McFee, B., Raffel, C., Liang, D., Ellis, D.P., McVicar, M., Battenberg, E., Nieto, O.: librosa: Audio and music signal analysis in Python (2015)
[25] Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network learning by exponential linear units (ELUs). In: ICLR (2017)
[26] Ribeiro, F., Florêncio, D., Zhang, C., Seltzer, M.: CrowdMOS: An approach for crowdsourcing mean opinion score studies. In: ICASSP, IEEE (2011) 2416-2419
[27] Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NIPS (2012)
[28] Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: NIPS (2013) |
247,244,739 | Why adversarial training can hurt robust accuracy | Machine learning classifiers with high test accuracy often perform poorly under adversarial attacks. It is commonly believed that adversarial training alleviates this issue. In this paper, we demonstrate that, surprisingly, the opposite may be true -Even though adversarial training helps when enough data is available, it may hurt robust generalization in the small sample size regime. We first prove this phenomenon for a high-dimensional linear classification setting with noiseless observations. Our proof provides explanatory insights that may also transfer to feature learning models. Further, we observe in experiments on standard image datasets that the same behavior occurs for perceptible attacks that effectively reduce class information such as mask attacks and object corruptions. | [
202712891,
604334,
3488815,
52962648,
220403547,
210164926
] | Why adversarial training can hurt robust accuracy
Jacob Clarysse
Department of Computer Science
ETH Zürich
Julia Hörrmann
Department of Computer Science
ETH Zürich
Fanny Yang
Department of Computer Science
ETH Zürich
Why adversarial training can hurt robust accuracy
Machine learning classifiers with high test accuracy often perform poorly under adversarial attacks. It is commonly believed that adversarial training alleviates this issue. In this paper, we demonstrate that, surprisingly, the opposite may be true -Even though adversarial training helps when enough data is available, it may hurt robust generalization in the small sample size regime. We first prove this phenomenon for a high-dimensional linear classification setting with noiseless observations. Our proof provides explanatory insights that may also transfer to feature learning models. Further, we observe in experiments on standard image datasets that the same behavior occurs for perceptible attacks that effectively reduce class information such as mask attacks and object corruptions.
Introduction
Figure 1: On subsampled CIFAR10 attacked by 2 × 2 masks, adversarial training yields higher robust error than standard training when the sample size is small, even though it helps for large sample sizes (see Sec. E for details).
Today's best-performing classifiers are vulnerable to adversarial attacks [17, 46] and exhibit high robust error: for many inputs, their predictions change under adversarial perturbations, even though the true class stays the same. For example, in image classification tasks, we distinguish between two categories of such attacks that are content-preserving [16] (or consistent [38]) if their strength is limited - perceptible and imperceptible perturbations. Most work to date studies imperceptible attacks such as bounded ℓ_p-norm perturbations [17, 30, 32], small transformations using image processing techniques [15, 23, 29, 58] or nearby samples on the data manifold [27, 60]. They can often use their limited budget to successfully fool a learned classifier but, by definition, do not visibly reduce information about the actual class: the object in the perturbed image looks exactly the same as in the original version.
On the other hand, perceptible perturbations may occur more naturally in practice or are physically realizable. For example, stickers can be placed on traffic signs [14], masks of different sizes may cover important features of human faces [52], images might be rotated or translated [13], animals in motion may appear blurred in photographs depending on the shutter speed, or the lighting conditions could be poor (see Figure 2). Some perceptible attacks can effectively use the perturbation budget to reduce actual class information in the input (the signal) while still preserving the original class. For example, a stop sign with a small sticker doesn't lose its semantic meaning, and a flying bird does not become a different species because it induces motion blur in the image. We refer to these attacks as directed attacks (see Section 2 for a more formal characterization).
In this paper, we demonstrate that one of the most common beliefs for adversarial attacks does not transfer to directed attacks, in particular when the sample size is small. Specifically, it is widely acknowledged that adversarial training often achieves significantly lower adversarial error than standard training. This holds in particular if the perturbation type [3,30,57] and perturbation budget match the attack during test time. Intuitively, the improvement is a result of decreased attack-susceptibility: independent of the true class, adversarial training explicitly encourages the classifier to predict the same class for all perturbed points.
In this paper, we question the efficacy of adversarial training to increase robust accuracy for directed attacks. In particular, we show that adversarial training not only increases standard test error as noted in [38,45,48,57], but surprisingly, it may even increase the robust test error compared to standard training! Figure 1 illustrates the main message of our paper for CIFAR10 subsets: Although adversarial training outperforms standard training when enough training samples are available, it is inferior in the low-sample regime. More specifically, our contributions are as follows:
• We prove that, almost surely, adversarially training a linear classifier on separable data yields a monotonically increasing robust error as the perturbation budget grows. We further establish high-probability non-asymptotic lower bounds on the robust error gap between adversarial and standard training.
• Our proof provides intuition for why this phenomenon is particularly prominent for directed attacks in the small sample size regime.
• We show that this phenomenon occurs on a variety of real-world datasets and perceptible directed attacks in the small sample size regime.
Robust classification
We first introduce our robust classification setting more formally by defining the notions of adversarial robustness, directed attacks and adversarial training used throughout the paper.
Adversarially robust classifiers For inputs x ∈ R^d, we consider multi-class classifiers associated with parameterized functions f_θ : R^d → R^K, where K is the number of labels. In the special case of binary classification (K = 2), we use the output predictions y = sign(f_θ(x)). For example, f_θ(x) could be a linear model (as in Section 3) or a neural network (as in Section 4).
One key step to encourage deployment of machine learning based classification in real-world applications is to increase the robustness of classifiers against perturbations that do not change the ground truth label. Mathematically speaking, we would like to have a small ε_te-robust error, defined as

$$\mathrm{Err}(\theta;\epsilon_{te}) := \mathbb{E}_{(x,y)\sim\mathbb{P}} \max_{x' \in T(x;\epsilon_{te})} \ell\big(f_\theta(x'), y\big), \tag{1}$$

where ℓ is the multi-class zero-one loss, which equals 1 only if the predicted output using f_θ(x′) does not match the true label y. Further, T(x; ε_te) is a perturbation set associated with a transformation type and size ε_te. Note that the (standard) error of a classifier corresponds to evaluating Err(θ; ε_te) at ε_te = 0, yielding the standard error Err(θ; 0) = E_{(x,y)∼P} ℓ(f_θ(x), y).
(Signal-)directed attacks Most works in the existing literature consider consistent perturbations where ε_te is small enough such that all samples in the perturbation set have the same ground truth or expert label. Note that the ground truth model f_θ⋆ is therefore robust against these perturbations and achieves the same error for standard and adversarial evaluation. The inner maximization in Equation (1) is often called the adversarial attack on the model f_θ, and the corresponding solution is referred to as the adversarial example. In this paper, we consider directed attacks, as described in Section 1, that effectively reduce the information about the ground truth classes. Formally, we characterize directed attacks by the following property: for any model f_θ with low standard error, the corresponding adversarial example is well-aligned with the adversarial example found using the ground truth model f_θ⋆. An example of such an attack are additive perturbations that are constrained to the direction of the ground truth decision boundary. We provide concrete examples for linear classification in Section 3.1.
Adversarial training In order to obtain classifiers with good robust accuracy, it is common practice to minimize a (robust) training objective L_tr with a surrogate classification loss L, such as

$$L_{tr}(\theta) := \frac{1}{n}\sum_{i=1}^{n} \max_{x_i' \in T(x_i;\epsilon_{tr})} L\big(f_\theta(x_i')\, y_i\big), \tag{2}$$
which is called adversarial training. In practice, we often use the cross-entropy loss L(z) = log(1 + e^{−z}) and minimize the robust objective using first-order optimization methods such as (stochastic) gradient descent. SGD is also the algorithm that we focus on in both the theoretical and experimental sections.
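For concreteness, the loop below is a minimal PyTorch sketch of this procedure. The names `model`, `loader`, and `attack` are assumptions rather than part of the paper's code: `attack(model, x, y, eps_tr)` is a hypothetical routine that returns (approximately) worst-case inputs x′ in T(x; ε_tr).

```python
# Minimal sketch of adversarial training with objective (2), assuming a
# hypothetical `attack` routine for the inner maximization.
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, attack, eps_tr):
    model.train()
    for x, y in loader:
        x_adv = attack(model, x, y, eps_tr)       # inner maximization over T(x; eps_tr)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)   # surrogate loss at the perturbed input
        loss.backward()                           # outer minimization step (SGD)
        optimizer.step()
```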
When the desired type of robustness is known in advance, it is standard practice to use the same perturbation set for training as for testing, i.e. T(x; ε_tr) = T(x; ε_te). For example, Madry et al. [30] show that the robust error sharply increases for ε_tr < ε_te. In this paper, we show that for directed attacks in the small sample size regime, in fact, the opposite is true.
Theoretical results
In this section, we prove for linear functions f_θ(x) = θ⊤x that in the case of directed attacks, robust generalization deteriorates with increasing ε_tr. The proof, albeit in a simple setting, provides explanations for why adversarial training fails in the high-dimensional regime for such attacks.
Setting
We now introduce the precise linear setting used in our theoretical results.

Data model In this section, we assume that the ground truth and hypothesis class are given by linear functions f_θ(x) = θ⊤x and that the sample size n is lower than the ambient dimension d. In particular, the generative distribution P_r is similar to [35,48]: the label y ∈ {+1, −1} is drawn with equal probability and the covariate vector is sampled as x = [y r/2, x̃] with the random vector x̃ ∈ R^{d−1} drawn from a standard normal distribution, i.e. x̃ ∼ N(0, σ²I_{d−1}). We would like to learn a classifier that has low robust error by using a dataset D = {(x_i, y_i)}_{i=1}^n with n i.i.d. samples from P_r. Notice that the distribution P_r is noiseless: for a given input x, the label y = sign(x_[1]) is deterministic. Further, the optimal linear classifier (also referred to as the ground truth) is parameterized by θ⋆ = e_1. By definition, the ground truth is robust against all consistent perturbations and is hence the optimal robust classifier.
Directed attacks The focus in this paper lies on consistent directed attacks that by definition efficiently concentrate their attack budget in the direction of the signal. For our linear setting, we can model such attacks by additive perturbations in the first dimension

$$T(x;\epsilon) = \{x' = x + \delta \mid \delta = \beta e_1 \text{ and } -\epsilon \le \beta \le \epsilon\}. \tag{3}$$
Note that this attack is always in the direction of the true signal dimension, i.e. the ground truth. Furthermore, when ε < r/2, it is a consistent directed attack. Observe how this is different from ℓp attacks: an ℓp attack, depending on the model, may add a perturbation that only has a very small component in the signal direction.
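A small sketch of this setting, under our assumptions about implementation details: sampling from P_r and solving the inner maximization of (3) in closed form for a linear classifier θ (for the linear model, the worst case simply shifts the signal coordinate against the prediction).

```python
# Hypothetical helpers: the data model P_r of Section 3.1 and the directed
# attack (3) on a linear classifier theta.
import numpy as np

def sample_P_r(n, d, r=12.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.choice([-1.0, 1.0], size=n)        # labels with equal probability
    x = sigma * rng.standard_normal((n, d))    # Gaussian non-signal coordinates
    x[:, 0] = y * r / 2.0                      # deterministic signal coordinate
    return x, y

def directed_attack(theta, x, y, eps):
    # Worst case over delta = beta * e_1, |beta| <= eps: move the signal
    # coordinate against the prediction, i.e. by -eps * y * sign(theta[0]).
    x_adv = x.copy()
    x_adv[:, 0] -= eps * y * np.sign(theta[0])
    return x_adv
```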
Robust max-ℓ2-margin classifier A long line of work studies the implicit bias of interpolators that result from applying stochastic gradient descent on the logistic loss until convergence [9,21,28,34]. For linear models, we obtain the ε_tr-robust maximum-ℓ2-margin solution (robust max-margin in short)

$$\hat{\theta}_{\epsilon_{tr}} := \arg\max_{\|\theta\|_2 \le 1} \; \min_{i\in[n],\; x_i' \in T(x_i;\epsilon_{tr})} y_i \theta^\top x_i'. \tag{4}$$
This can, for example, be shown by a simple rescaling argument using Theorem 3.4 in [28]. Even though our result is proven for the max-ℓ2-margin classifier, it can easily be extended to other interpolators.
Main results
We are now ready to characterize the ε_te-robust error as a function of ε_tr, the separation r, the dimension d and the sample size n of the data. In the theorem statement we use the quantities

$$\varphi_{min} = \frac{\sigma}{r/2-\epsilon_{te}}\left(\sqrt{\frac{d-1}{n}} - 1 - \sqrt{\frac{2\log(2/\delta)}{n}}\right), \qquad \varphi_{max} = \frac{\sigma}{r/2-\epsilon_{te}}\left(\sqrt{\frac{d-1}{n}} + 1 + \sqrt{\frac{2\log(2/\delta)}{n}}\right),$$

which arise from concentration bounds for the singular values of the random data matrix. Further, let ε̃ := r/2 − φ_max/√2 and denote by Φ the cumulative distribution function of a standard normal.

Theorem 3.1. Assume d − 1 > n. For any ε_te ≥ 0 with 2ε_te < r, and for perturbation sets as in Equations (3) and (9), the following holds for the ε_te-robust error on test samples from P_r:
1. The ε_te-robust error of the ε_tr-robust max-margin estimator reads

$$\mathrm{Err}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) = \Phi\left(-\frac{r/2-\epsilon_{tr}}{\tilde{\varphi}}\right) \tag{5}$$

for a random quantity φ̃ > 0 depending on σ, r, ε_te; as a function of ε_tr, the robust error is strictly increasing.
2. With probability at least 1 − δ, we further have φ_min ≤ φ̃ ≤ φ_max and the following lower bound on the robust error increase incurred by adversarially training with budget ε_tr:

$$\mathrm{Err}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) - \mathrm{Err}(\hat{\theta}_{0};\epsilon_{te}) \;\ge\; \Phi\left(\frac{r/2}{\varphi_{min}}\right) - \Phi\left(\frac{r/2-\min\{\epsilon_{tr},\tilde{\epsilon}\}}{\varphi_{min}}\right). \tag{6}$$
The proof can be found in Appendix A and primarily relies on high-dimensional probability. Note that the theorem holds for any 0 ≤ ε_te < r/2 and hence also directly applies to the standard error by setting ε_te = 0. In Figure 3, we empirically confirm the statements of Theorem 3.1 by performing multiple experiments on synthetic datasets as described in Subsection 3.1 with different choices of d/n and ε_tr. In the first statement, we prove that for small sample size (n < d − 1) noiseless data, almost surely, the robust error increases monotonically with the adversarial training budget ε_tr > 0. In Figure 3a, we plot the robust error gap between standard and adversarial logistic regression as a function of the adversarial training budget ε_tr for 5 runs.
The second statement establishes a simplified lower bound on the robust error increase of adversarial training (for a fixed ε_tr = ε_te) compared to standard training. In Figures 3a and 3c, we show how the lower bound closely predicts the robust error gap in our synthetic experiments. Furthermore, by the dependence of φ_min on the overparameterization ratio d/n, the lower bound on the robust error gap is amplified for large d/n. Indeed, Figure 3c shows how the error gap increases with d/n both theoretically and experimentally. However, when d/n increases above a certain threshold, the gap decreases again, as standard training fails to learn the signal and yields a high error (see Figure 3b).
Proof idea: intuition and surprises
The reason that adversarial training hurts robust generalization is an extreme robust vs. standard error tradeoff. We provide intuition for the effect of directed attacks and the small sample regime on the solution of adversarial training by decomposing the robust error Err(θ; ε_te). Notice that the ε_te-robust error Err(θ; ε_te) can be written as the probability of the union of two events: the event that the classifier based on θ is wrong and the event that the classifier is susceptible to attacks:
$$\mathrm{Err}(\theta;\epsilon_{te}) = \mathbb{E}_{(x,y)\sim\mathbb{P}}\left[\mathbb{I}\{y f_\theta(x) < 0\} \;\vee\; \max_{x'\in T(x;\epsilon_{te})} \mathbb{I}\{f_\theta(x) f_\theta(x') < 0\}\right] \le \mathrm{Err}(\theta;0) + \mathrm{Susc}(\theta;\epsilon_{te}), \tag{7}$$
where Susc(θ; ε_te) is the expectation of the maximization term in Equation (7), representing the ε_te-attack-susceptibility of a classifier induced by θ, and Err(θ; 0) its standard error. Equation (7) suggests that the robust error can only be small if both the standard error and the susceptibility are small. In Figure 4b, we plot the decomposition of the robust error into standard error and susceptibility for adversarial logistic regression with increasing ε_tr. We observe that increasing ε_tr increases the standard error too drastically compared to the decrease in susceptibility, leading to an effective drop in robust accuracy. For completeness, in Appendix B, we provide upper and lower bounds for the susceptibility score. We now explain why, in the small sample size regime, adversarial training with directed attacks (3) may increase the standard error to the extent that it dominates the decrease in susceptibility.
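The decomposition can also be estimated directly by Monte Carlo. The sketch below, reusing the hypothetical sample_P_r and directed_attack helpers from above, computes the three quantities for a linear classifier.

```python
# Monte Carlo estimate of the decomposition (7) for a linear classifier under
# the directed attack (3); sample_P_r and directed_attack are the assumed
# helpers from the earlier sketch.
import numpy as np

def error_decomposition(theta, eps_te, n_test=100_000, d=1000, r=12.0, sigma=1.0):
    x, y = sample_P_r(n_test, d, r, sigma, seed=1)
    pred_clean = np.sign(x @ theta)
    pred_adv = np.sign(directed_attack(theta, x, y, eps_te) @ theta)
    std_err = np.mean(pred_clean != y)                  # Err(theta; 0)
    susc = np.mean(pred_adv != pred_clean)              # Susc(theta; eps_te)
    robust_err = np.mean((pred_clean != y) | (pred_adv != pred_clean))
    return std_err, susc, robust_err
```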
A key observation is that the robust max-ℓ2-margin solution of a dataset D = {(x_i, y_i)}_{i=1}^n maximizes the minimum margin min_{i∈[n]} y_i θ⊤x_i − ε_tr|θ_[1]|, where θ_[j] refers to the j-th entry of the vector θ. Therefore, it simply corresponds to the max-ℓ2-margin solution of the dataset shifted towards the decision boundary, D_{ε_tr} = {(x_i − y_i ε_tr sign(θ̂_{ε_tr,[1]}) e_1, y_i)}_{i=1}^n. Using this fact, we obtain a closed-form expression for the (normalized) max-margin solution (4) as a function of ε_tr that reads

$$\hat{\theta}_{\epsilon_{tr}} = \frac{1}{\sqrt{(r-2\epsilon_{tr})^2 + 4\tilde{\gamma}^2}} \left[r - 2\epsilon_{tr},\; 2\tilde{\gamma}\tilde{\theta}\right], \tag{8}$$

where ‖θ̃‖_2 = 1 and γ̃ > 0 is a random quantity associated with the max-ℓ2-margin solution of the (d−1)-dimensional Gaussian inputs orthogonal to the signal direction (see Lemma A.1 in Section A).
In high dimensions, with high probability any two Gaussian random vectors are far apart; in our distributional setting, this corresponds to the samples being far apart in the non-signal directions. In Figure 4c, we illustrate the phenomenon using a simplified 2D cartoon, where the few samples in the dataset are all far apart in the non-signal direction. We see how shifting the dataset closer to the true decision boundary may result in a max-margin solution (yellow) that aligns much worse with the ground truth (gray), compared to the estimator learned from the original points (blue). Even though the new (robust max-margin) classifier (yellow) is less susceptible to directed attacks in the signal dimension, it also uses the signal dimension less. Mathematically, this is directly reflected in the expression for the max-margin solution in Equation (8): even without the definitions of γ̃ and θ̃, we can directly see that the first (signal) dimension is used less as ε_tr increases.
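This effect is easy to see numerically from the closed form (8); the values of r and γ̃ below are illustrative assumptions.

```python
# Numerical illustration of Equation (8): the weight that the robust max-margin
# solution puts on the signal coordinate e_1 shrinks as eps_tr grows.
import numpy as np

r, gamma_tilde = 12.0, 3.0   # assumed example values
for eps_tr in [0.0, 1.0, 2.0, 4.0]:
    a = r - 2.0 * eps_tr
    w_signal = a / np.sqrt(a ** 2 + 4.0 * gamma_tilde ** 2)
    print(f"eps_tr = {eps_tr}: weight on e_1 = {w_signal:.3f}")
```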
Generality of the results
In this section we discuss how the theorem might generalize to other perturbation sets, models and training procedures.
Signal direction is known The type of additive perturbations used in Theorem 3.1, defined in Equation (3), is explicitly constrained to the direction of the true signal. This choice is reminiscent of corruptions where every possible perturbation in the set is directly targeted at the object to be recognized, such as motion blur of moving objects. Such corruptions are also studied in the context of domain generalization and adaptation [43].
Directed attacks in general, however, may also consist of perturbation sets that are only strongly biased towards the true signal direction, such as mask attacks. They may find the true signal direction only when the inner maximization is exact. The following corollary extends Theorem 3.1 to small ℓ1-perturbations

$$T(x;\epsilon) = \{x' = x + \delta \mid \|\delta\|_1 \le \epsilon\}, \tag{9}$$

for 0 < ε < r/2, which reflect such attacks. We state the corollary here and give the proof in Appendix A.

Corollary 3.2. Theorem 3.1 also holds for (4) with perturbation sets defined in (9).
The proof uses the fact that the inner maximization effectively results in a sparse perturbation equivalent to the attack resulting from the perturbation set (3).
Other models Motivated by the implicit bias results of (stochastic) gradient descent on the logistic loss, Theorem 3.1 is proven for the max-ℓ2-margin solution. We conjecture that for the data distribution in Section 3, adversarial training can hurt robust generalization also for other models with zero training error (interpolators in short).
For example, AdaBoost is a widely used algorithm that converges to the max-ℓ1-margin classifier [47]. One might argue that for a sparse ground truth, the max-ℓ1-margin classifier should (at least in the noiseless case) have the right inductive bias to alleviate large bias in high dimensions. Hence, in many cases the (sparse) max-ℓ1-margin solution might align with the ground truth for a given dataset. However, we conjecture that even in this case, the robust max-ℓ1-margin solution (of the dataset shifted towards the decision boundary) would be misled into choosing a wrong sparse solution. This can be seen with the help of the cartoon illustration in Figure 4c.
Real-world experiments
In this section, we demonstrate that adversarial training may hurt robust accuracy in a variety of image attack scenarios on the Waterbirds and CIFAR10 datasets. The corresponding experimental details and more experimental results (including on an additional hand gestures dataset) can be found in Appendices D, E and F.

Figure 5: We set n = 20 and plot the robust error decomposition as in Equation (7) with increasing ε_tr. While the susceptibility decreases slightly, the increase in standard error is much more severe, resulting in an increase in robust error. (c) Adversarial training hurts robust generalization in the low sample size regime (n < 200), but helps when enough samples are available. For more experimental details see Section D.
Datasets
We now describe the datasets and models that we use for the experiments. In all our experiments on CIFAR10, we vary the sample size by subsampling the dataset and use a ResNet18 [18] as the model. We always train on the same (randomly subsampled) dataset, meaning that the variance arises from the random seed of the model and the randomness in the training algorithm. In Appendix E, we complement the results of this section by reporting the results of similar experiments with different architectures.
As a second dataset, we build a new version of the Waterbirds dataset, consisting of images of water- and landbirds of size 256 × 256 with labels that distinguish the two types of birds. We construct the dataset as follows: First, we sample equally many water- and landbirds from the CUB-200 dataset [50]. Then, we segment the birds and paste them onto a background that is randomly sampled (without replacement) from the Places-256 dataset [59]. For the implementation of the dataset we used the code provided by Sagawa et al. [40]. Also, following the choice of Sagawa et al. [40], we use as model a ResNet50 that was pretrained on ImageNet, which achieves near perfect standard accuracy.
Evaluation of directed attacks
We consider three types of directed attacks on our real-world datasets: square masks, motion blur and adversarial illumination. The mask attack models sticker attacks and general occlusions of objects in images [14,52]. On the other hand, motion blur may arise naturally, for example when photographing fast moving objects with a slow shutter speed. Further, adversarial illumination may result from adversarial lighting conditions or smart image corruptions. Next, we describe the attacks in more detail.
Mask attacks On CIFAR10, we consider the square black-mask attack: the adversary can set a patch of size ε_te × ε_te in the image to zero. To ensure that the mask does not cover the whole signal in the image, we restrict the size of the masks to be at most 2 × 2. Hence, the search space of the attack consists of all possible locations of the mask in the targeted image. For exact robust error evaluation, we perform a full grid search over all possible locations at test time. See Figure 2a for an example of a mask attack on CIFAR10.
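A minimal PyTorch sketch of this exact test-time evaluation, assuming a generic `model`; an input counts towards the robust error if the clean prediction is already wrong or if any mask location changes the prediction away from the label.

```python
# Exact evaluation of the square black-mask attack by full grid search.
import torch

@torch.no_grad()
def mask_robust_error(model, x, y, mask=2):
    B, _, H, W = x.shape
    err = model(x).argmax(dim=1) != y           # clean mistakes count as robust errors
    for i in range(H - mask + 1):
        for j in range(W - mask + 1):
            x_m = x.clone()
            x_m[:, :, i:i + mask, j:j + mask] = 0.0
            err |= model(x_m).argmax(dim=1) != y
    return err.float().mean()                   # fraction of points with robust error
```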
Motion blur On the Waterbirds dataset we consider two directed attacks: motion blur and adversarial illumination. For the motion blur attack, the bird may move at different speeds without changing the background. The aim is to be robust against all motion blur severity levels up to M max = 15. To simulate motion blur, we first segment the birds and then use a filter with a kernel of size M to apply motion blur on the bird only. Lastly, we paste the blurred bird back onto the background image. We can change the severity level of the motion blur by increasing the kernel size of the filter. See Appendix D for an ablation study and concrete expressions of the motion blur kernel. At test time, we perform a full grid search over all kernel sizes to exactly evaluate the robust error. We refer to Figure 2c and Section D for examples of our motion blur attack.
Adversarial illumination As a second attack on the Waterbirds dataset, we consider adversarial illumination. The adversary can darken or brighten the bird without corrupting the background of the image.
The attack aims to model images where the object of interest is hidden in shadows or placed against bright light. To compute the adversarial illumination attack, we segment the bird, then darken or brighten it by adding a constant a ∈ [−ε_te, ε_te], before pasting the bird back onto the background image. We find the most adversarial lighting level, i.e. the value of a, by partitioning the interval [−ε_te, ε_te] equidistantly into K steps and performing a full list-search over all steps. See Figure 2b and Section D for examples of the adversarial illumination attack.
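A hedged sketch of this search, assuming a classifier `model` and a binary segmentation mask `seg` of the bird; for brevity, the worst case is selected per batch rather than per image.

```python
# List-search for the adversarial illumination level a in [-eps_te, eps_te].
import torch
import torch.nn.functional as F

@torch.no_grad()
def illumination_attack(model, x, y, seg, eps_te=0.3, K=65):
    worst_x, worst_loss = x, float("-inf")
    for a in torch.linspace(-eps_te, eps_te, K):
        # Perturb the bird only, then clip pixel values back to [0, 1].
        x_pert = ((1 - seg) * x + seg * (x + a)).clamp(0.0, 1.0)
        loss = F.cross_entropy(model(x_pert), y).item()
        if loss > worst_loss:
            worst_x, worst_loss = x_pert, loss
    return worst_x
```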
Adversarial training procedure
For all datasets, we run SGD until convergence on the robust cross-entropy loss (2). In each iteration, we search for an adversarial example and update the weights using the gradient of the loss at the resulting perturbed example [17,30]. For every experiment, we choose the learning rate and weight decay parameters that minimize the robust error on a hold-out dataset. We now describe the implementation of the adversarial search for the three types of directed attacks.
Mask attacks Unless specified otherwise, we use an approximate attack similar to Wu et al. [52] during training time: First, we identify promising mask locations by analyzing the gradient ∇_x L(f_θ(x), y) of the cross-entropy loss with respect to the input. Masks that cover a part of the image where the gradient is large are more likely to increase the loss. Hence, we compute the K mask locations (i, j) for which ‖∇_x L(f_θ(x), y)_[i:i+2, j:j+2]‖_1 is largest, and take, using a full list-search, the mask that incurs the highest loss. Our intuition from the theory predicts that a higher K, and hence a more exact "defense", only increases the robust error of adversarial training, since the mask can then more efficiently cover important information about the class. We indeed confirm this effect and provide more details in Section E.
Motion blur Intuitively, the worst attack should be the most severe blur, rendering a search over a range of severities superfluous. However, similar to rotations, this is not necessarily true in practice since the training loss of neural networks is generally nonconvex. Hence, during training time, we perform a search over kernels with sizes 2i for i = 1, …, M_max/2. Note that, at test time, we do an exact search over all kernels of sizes in [1, 2, …, M_max].
Adversarial illumination Similar to the motion blur attack, intuitively the worst perturbation should be the most severe lighting change, either darkening or brightening the object maximally. However, again this is not necessarily the case, since finding the worst attack is a nonconvex problem. Therefore, during training and testing we partition the interval [−ε_tr, ε_tr] into 33 and 65 steps, respectively, and perform a full grid-search to find the worst perturbation.
Adversarial training can hurt robust generalization
Further, we perform the following experiments on the Waterbirds dataset using the motion blur and adversarial illumination attacks. We vary the adversarial training budget ε_tr, while keeping the number of samples fixed, and compute the resulting robust error. We see in Figures 5a and 6a that, indeed, adversarial training can hurt robust generalization with increasing perturbation budget ε_tr.
Furthermore, to gain intuition as described in Section 3.3, we also plot the robust error decomposition (Equation (7)) into standard error and susceptibility in Figures 5b and 6b. Recall that we measure susceptibility as the fraction of data points in the test set for which the classifier predicts a different class under an adversarial attack. As in our linear example, we observe an increase in robust error despite a slight drop in susceptibility, because of the more severe increase in standard error. Similar experiments for the hand gesture dataset can be found in Appendix F.
As predicted by our theorem, the phenomenon where adversarial training hurts robust generalization is most pronounced in the small sample size regime. Indeed, the experiments depicted in Figures 5a and 6a are conducted on small datasets of n = 20 or 50 samples. In Figures 1 and 5c, we observe that as the sample size increases, adversarial training does improve robust generalization compared to standard training, even for directed attacks. Moreover, in the experiments on CIFAR10 using the mask perturbation, which can be found in Figure 1 and Appendix E, we observe the same behaviour: adversarial training hurts robust generalization in the low sample size regime, but helps when enough samples are available.
Discussion
In this section, we discuss how different algorithmic choices, motivated by related work, affect when and how adversarial training hurts robust generalization.
Strength of attack and catastrophic overfitting In many cases, the worst case perturbation during adversarial training is found using an approximate algorithm such as projected gradient descent. It is common belief that using the strongest attack (in the mask-perturbation case, full grid search) during training should also result in better robust generalization. In particular, the literature on catastrophic overfitting shows that weaker attacks during training lead to bad performance on stronger attacks during testing [2,26,51]. Our result suggests the opposite is true in the low-sample size regime for directed attacks: the weaker the attack, the better adversarial training performs.
Robust overfitting Recent work observes empirically [39] and theoretically [12,41] that perfectly minimizing the adversarial loss during training might in fact be suboptimal for robust generalization; that is, classical regularization techniques might lead to higher robust accuracy. The phenomenon is often referred to as robust overfitting. Can the phenomenon be mitigated using standard regularization techniques? In Appendix D we shed light on this question and show that adversarial training hurts robust generalization even when standard regularization methods such as early stopping are used.
Related work
We now discuss how our results relate to phenomena that have been observed or proven in the literature before.
Robust and non-robust useful features In the words of Ilyas et al. [19] and Springer et al. [44], for directed attacks, all robust features become less useful, but adversarial training uses robust features more. In the small sample size regime n < d − 1 in particular, robust learning assigns so much weight to the robust (possibly non-useful) features that the signal in the non-robust features is drowned. This leads to an unavoidable and large increase in standard error that dominates the decrease in susceptibility and hence ultimately leads to an increase of the robust error.
Small sample size and robustness A direct consequence of Theorem 3.1 is that in order to achieve the same robust error as standard training, adversarial training requires more samples. This statement might remind the reader of sample complexity results for robust generalization in Khim & Loh [22], Schmidt et al. [42], Yin et al. [55]. While those results compare sample complexity bounds for standard vs. robust error, our theorem statement compares two algorithms, standard vs. adversarial training, with respect to the robust error.
Trade-off between standard and robust error Many papers have observed that even though adversarial training decreases the robust error compared to standard training, it may lead to an increase in standard test error [30,57]. For example, Chen et al. [7], Dobriban et al. [11], Javanmard et al. [20], Tsipras et al. [48] and Zhang et al. [57] study settings where the Bayes optimal robust classifier is not equal to the Bayes optimal (standard) classifier (i.e. the perturbations are inconsistent or the dataset is non-separable). [38] study consistent perturbations, as in our paper, and prove that for small sample size, fitting adversarial examples can increase standard error even in the absence of noise. In contrast to the aforementioned works, which do not refute that adversarial training decreases robust error, we prove that for directed attacks in the small sample regime, adversarial training may also increase the robust error.
Mitigation of the trade-off
Future work
This paper aims to caution the practitioner against blindly following current widespread practices to increase the robust performance of machine learning models. Specifically, adversarial training is currently recognized as one of the most effective defense mechanisms for ℓp-perturbations, significantly outperforming the robust performance of standard training. However, we prove that this common wisdom is not applicable in the low sample size regime for directed attacks, which are perceptible (albeit consistent) but efficiently focus their attack budget to target ground truth class information. In particular, in such settings adversarial training can in fact yield worse robust accuracy than standard training.
In terms of follow-up work on directed attacks in the low-sample regime, there are some concrete questions that would be interesting to explore. For example, as discussed in Section 5, it would be useful to test whether some methods to mitigate the standard accuracy vs. robustness trade-off would also relieve the perils of adversarial training for directed attacks. Further, we hypothesize that, independent of the attack at test time, it is important in the small sample size regime to choose perturbation sets during training that align with the ground truth signal (such as rotations for data with inherent rotation). If this hypothesis were to be confirmed, it would break with yet another general rule that the best defense perturbation type should always match the attack during evaluation. The insights from this study might also be helpful in the context of searching for good defense perturbations.
[9] Chizat, L. and Bach, F. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. In International Conference on Learning Theory, pp. 1305-1338, Jul 2020.
[10] Ding, G. W., Sharma, Y., Lui, K. Y. C., and Huang, R. MMA training: Direct input space margin maximization through adversarial training. In International Conference on Learning Representations, Apr 2020.
[11] Dobriban, E., Hassani, H., Hong, D., and Robey, A. Provable tradeoffs in adversarially robust classification. arXiv preprint arXiv:2006.05161, 2020.
[12] Donhauser, K., Tifrea, A., Aerni, M., Heckel, R., and Yang, F. Interpolation can hurt robust generalization even when there is no noise. In Advances in Neural Information Processing Systems, Dec 2021.
[20] Javanmard, A., Soltanolkotabi, M., and Hassani, H. Precise tradeoffs in adversarial training for linear regression. In Conference on Learning Theory, pp. 2034-2078, Apr 2020.
[21] Ji, Z. and Telgarsky, M. The implicit bias of gradient descent on nonseparable data. In Conference on Learning Theory, pp. 1772-1798, Jun 2019.
[22] Khim, J. and Loh, P.-L. Adversarial risk bounds via function transformation. arXiv preprint arXiv:1810.09519, 2018.
[23] Laidlaw, C., Singla, S., and Feizi, S. Perceptual adversarial robustness: Defense against unseen threat models. In International Conference on Learning Representation, Jun 2021.
[24] Lamb, A., Verma, V., Kannala, J., and Bengio, Y.
A Proofs

Throughout the appendix, we denote by θ̂ the (standard) normalized max-ℓ2-margin solution

$$\hat{\theta} := \arg\max_{\|\theta\|_2\le 1} \min_{i\in[n]} y_i \theta^\top x_i, \tag{10}$$

obtained by simply setting ε_tr = 0 in Equation (4). The ℓ2-margin of θ̂ then reads min_{i∈[n]} y_i θ̂⊤x_i. Furthermore, for a dataset D = {(x_i, y_i)}_{i=1}^n, we refer to the induced dataset D̃ as the dataset with covariate vectors stripped of the first element, i.e.

$$\tilde{D} = \{(\tilde{x}_i, y_i)\}_{i=1}^n := \{((x_i)_{[2:d]}, y_i)\}_{i=1}^n, \tag{11}$$

where (x_i)_[2:d] refers to the last d − 1 elements of the vector x_i. Furthermore, remember that for any vector z, z_[j] refers to the j-th element of z, and e_j denotes the j-th canonical basis vector. Further, recall the distribution P_r as defined in Section 3.1: the label y ∈ {+1, −1} is drawn with equal probability and the covariate vector is sampled as x = [y r/2, x̃], where x̃ ∈ R^{d−1} is a random vector drawn from a standard normal distribution, i.e. x̃ ∼ N(0, σ²I_{d−1}). We generally allow r, used to sample the training data, to differ from r_test, which is used during test time.
The following lemma derives a closed-form expression for the normalized max-margin solution for any dataset with fixed separation r in the signal component that is linearly separable in the last d − 1 coordinates with margin γ̃.
Lemma A.1. Let D = {(x_i, y_i)}_{i=1}^n be a dataset that consists of points (x, y) ∈ R^d × {±1} with x_[1] = y r/2, i.e. the covariates x_i are deterministic in their first coordinate given y_i, with separation distance r. Furthermore, let the induced dataset D̃ be linearly separable by the normalized max-ℓ2-margin solution θ̃ with ℓ2-margin γ̃. Then, the normalized max-margin solution of the original dataset D is given by

$$\hat{\theta} = \frac{1}{\sqrt{r^2 + 4\tilde{\gamma}^2}}\left[r,\; 2\tilde{\gamma}\tilde{\theta}\right]. \tag{12}$$

Further, the standard accuracy of θ̂ for data drawn from P_{r_test} reads

$$\mathbb{P}_{r_{test}}\big(Y\hat{\theta}^\top X > 0\big) = \Phi\left(\frac{r\, r_{test}}{4\sigma\tilde{\gamma}}\right). \tag{13}$$
The proof can be found in Section A.3. The next lemma provides high-probability upper and lower bounds for the margin γ̃ of D̃ when the x̃_i are drawn from the normal distribution.
Lemma A.2. Let D̃ = {(x̃_i, y_i)}_{i=1}^n be a random dataset where the y_i ∈ {±1} are equally distributed and x̃_i ∼ N(0, σ²I_{d−1}) for all i, and let γ̃ be the maximum ℓ2-margin, which can be written as

$$\tilde{\gamma} = \max_{\|\theta\|_2\le 1} \min_{i\in[n]} y_i \theta^\top \tilde{x}_i.$$

Then, for any t ≥ 0, with probability greater than 1 − 2e^{−t²/2}, we have γ̃_min(t) ≤ γ̃ ≤ γ̃_max(t), where

$$\tilde{\gamma}_{max}(t) = \sigma\left(\sqrt{\frac{d-1}{n}} + 1 + \frac{t}{\sqrt{n}}\right), \qquad \tilde{\gamma}_{min}(t) = \sigma\left(\sqrt{\frac{d-1}{n}} - 1 - \frac{t}{\sqrt{n}}\right).$$
A.1 Proof of Theorem 3.1
Given a dataset D = {(x_i, y_i)} drawn from P_r, it is easy to see that the (normalized) ε_tr-robust max-margin solution (4) of D with respect to the signal-attacking perturbations T(x_i; ε_tr) defined in Equation (3) can be written as

$$\hat{\theta}_{\epsilon_{tr}} = \arg\max_{\|\theta\|_2\le1} \min_{i\in[n],\, x_i'\in T(x_i;\epsilon_{tr})} y_i\theta^\top x_i' = \arg\max_{\|\theta\|_2\le1} \min_{i\in[n],\,|\beta|\le\epsilon_{tr}} y_i\theta^\top(x_i + \beta e_1) = \arg\max_{\|\theta\|_2\le1} \min_{i\in[n]} y_i\theta^\top\big(x_i - y_i\epsilon_{tr}\,\mathrm{sign}(\theta_{[1]})\, e_1\big).$$

Note that by definition, this is equivalent to the (standard normalized) max-margin solution θ̂ of the shifted dataset D_{ε_tr} = {(x_i − y_i ε_tr sign(θ_[1]) e_1, y_i)}_{i=1}^n. Since D_{ε_tr} satisfies the assumptions of Lemma A.1, it then follows directly that the normalized ε_tr-robust max-margin solution reads

$$\hat{\theta}_{\epsilon_{tr}} = \frac{1}{\sqrt{(r-2\epsilon_{tr})^2 + 4\tilde{\gamma}^2}}\left[r-2\epsilon_{tr},\; 2\tilde{\gamma}\tilde{\theta}\right], \tag{14}$$

obtained by replacing r with r − 2ε_tr in Equation (12). As above, θ̃ ∈ R^{d−1} is the (standard normalized) max-margin solution of {(x̃_i, y_i)}_{i=1}^n and γ̃ the corresponding margin.
Proof of 1. We can now compute the ε_te-robust accuracy of the ε_tr-robust max-margin estimator θ̂_{ε_tr} for a given dataset D as a function of γ̃. Note that in the expression for θ̂_{ε_tr}, all values are fixed for a fixed dataset, while 0 ≤ ε_tr ≤ r/2 − γ̃_max can be chosen. First note that for a test distribution P_r, the ε_te-robust accuracy, defined as one minus the robust error (Equation (1)), of a classifier associated with a vector θ can be written as

$$\mathrm{Acc}(\theta;\epsilon_{te}) = \mathbb{E}_{X,Y\sim\mathbb{P}_r}\, \mathbb{I}\Big\{\min_{x'\in T(X;\epsilon_{te})} Y\theta^\top x' > 0\Big\} = \mathbb{E}_{X,Y\sim\mathbb{P}_r}\, \mathbb{I}\big\{Y\theta^\top X - \epsilon_{te}|\theta_{[1]}| > 0\big\} = \mathbb{E}_{X,Y\sim\mathbb{P}_r}\, \mathbb{I}\big\{Y\theta^\top (X - Y\epsilon_{te}\,\mathrm{sign}(\theta_{[1]})\, e_1) > 0\big\}. \tag{15}$$

Now, recall that by Equation (14) and the assumption in the theorem, we have r − 2ε_tr > 0, so that sign(θ̂_{ε_tr,[1]}) = 1. Further, using the definition of T(x; ε_te) in Equation (3) and the definition of the distribution P_r, we have X_[1] = Y r/2. Plugging into Equation (15) then yields

$$\mathrm{Acc}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) = \mathbb{E}_{X,Y\sim\mathbb{P}_r}\, \mathbb{I}\big\{Y\hat{\theta}_{\epsilon_{tr}}^\top (X - Y\epsilon_{te}\, e_1) > 0\big\} = \mathbb{E}_{X,Y\sim\mathbb{P}_r}\, \mathbb{I}\Big\{Y\hat{\theta}_{\epsilon_{tr}}^\top \Big(X_{-1} + Y\big(\tfrac{r}{2}-\epsilon_{te}\big)\, e_1\Big) > 0\Big\} = \mathbb{P}_{r-2\epsilon_{te}}\big(Y\hat{\theta}_{\epsilon_{tr}}^\top X > 0\big),$$
where X_{−1} is shorthand for the random vector X_{−1} = (0, X_[2], …, X_[d]). The assumptions of Lemma A.1 (D_{ε_tr} is linearly separable) are satisfied whenever the n < d − 1 samples are distinct, i.e. with probability one. Hence, applying Lemma A.1 with r_test = r − 2ε_te and r = r − 2ε_tr yields

$$\mathrm{Acc}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) = \Phi\left(\frac{r(r-2\epsilon_{te})}{4\sigma\tilde{\gamma}} - \epsilon_{tr}\,\frac{r-2\epsilon_{te}}{2\sigma\tilde{\gamma}}\right). \tag{16}$$
Theorem statement 1 then follows by noting that Φ is monotonically decreasing in ε_tr. The expression for the robust error follows by noting that 1 − Φ(−z) = Φ(z) for any z ∈ R and defining

$$\tilde{\varphi} = \frac{\sigma\tilde{\gamma}}{r/2 - \epsilon_{te}}. \tag{17}$$
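For the reader's convenience, the routine algebra that takes Equation (16) to the closed form (5) via the definition (17) can be spelled out as follows:

```latex
\begin{align*}
\frac{r(r-2\epsilon_{te})}{4\sigma\tilde{\gamma}} - \epsilon_{tr}\frac{r-2\epsilon_{te}}{2\sigma\tilde{\gamma}}
  &= \frac{r-2\epsilon_{te}}{2\sigma\tilde{\gamma}}\Big(\frac{r}{2}-\epsilon_{tr}\Big)
   = \frac{r/2-\epsilon_{tr}}{\tilde{\varphi}}
   \quad\text{with}\quad \tilde{\varphi} = \frac{\sigma\tilde{\gamma}}{r/2-\epsilon_{te}},\\
\mathrm{Err}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te})
  &= 1 - \Phi\Big(\frac{r/2-\epsilon_{tr}}{\tilde{\varphi}}\Big)
   = \Phi\Big(-\frac{r/2-\epsilon_{tr}}{\tilde{\varphi}}\Big).
\end{align*}
```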
Proof of 2. First define φ_min, φ_max using γ̃_min, γ̃_max as in Equation (17). Then we have, by Equation (16),

$$\mathrm{Err}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) - \mathrm{Err}(\hat{\theta}_{0};\epsilon_{te}) = \mathrm{Acc}(\hat{\theta}_{0};\epsilon_{te}) - \mathrm{Acc}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) = \Phi\left(\frac{r/2}{\tilde{\varphi}}\right) - \Phi\left(\frac{r/2-\epsilon_{tr}}{\tilde{\varphi}}\right) = \int_{r/2-\epsilon_{tr}}^{r/2} \frac{1}{\sqrt{2\pi}\,\tilde{\varphi}}\, e^{-\frac{x^2}{2\tilde{\varphi}^2}}\, dx.$$
By plugging t = √(2 log(2/δ)) into Lemma A.2, we obtain that with probability at least 1 − δ we have

$$\tilde{\gamma}_{min} := \sigma\left(\sqrt{\frac{d-1}{n}} - 1 - \sqrt{\frac{2\log(2/\delta)}{n}}\right) \le \tilde{\gamma} \le \sigma\left(\sqrt{\frac{d-1}{n}} + 1 + \sqrt{\frac{2\log(2/\delta)}{n}}\right) =: \tilde{\gamma}_{max},$$

and equivalently φ_min ≤ φ̃ ≤ φ_max.
Now note the general fact that for all φ̃ ≤ √2 x, the density function

$$f(\tilde{\varphi}; x) = \frac{1}{\sqrt{2\pi}\,\tilde{\varphi}}\, e^{-\frac{x^2}{2\tilde{\varphi}^2}}$$

is monotonically increasing in φ̃. Since ε_tr ≤ ε̃ implies φ̃ ≤ φ_max ≤ √2(r/2 − ε_tr), we have f(φ̃; x) ≥ f(φ_min; x) for all x ∈ [r/2 − ε_tr, r/2], and therefore

$$\int_{r/2-\epsilon_{tr}}^{r/2} \frac{1}{\sqrt{2\pi}\,\tilde{\varphi}}\, e^{-\frac{x^2}{2\tilde{\varphi}^2}}\, dx \;\ge\; \int_{r/2-\epsilon_{tr}}^{r/2} \frac{1}{\sqrt{2\pi}\,\varphi_{min}}\, e^{-\frac{x^2}{2\varphi_{min}^2}}\, dx = \Phi\left(\frac{r/2}{\varphi_{min}}\right) - \Phi\left(\frac{r/2-\epsilon_{tr}}{\varphi_{min}}\right),$$

and the statement is proved.
A.2 Proof of Corollary 3.2
We now show that Theorem 3.1 also holds for ℓ1-ball perturbations of radius at most ε. Following similar steps as for Equation (14), the ε_tr-robust max-margin solution for ℓ1-perturbations can be written as

$$\hat{\theta}_{\epsilon_{tr}} := \arg\max_{\|\theta\|_2\le1} \min_{i\in[n]} y_i\theta^\top\big(x_i - y_i\epsilon_{tr}\,\mathrm{sign}(\theta_{[j^\star(\theta)]})\, e_{j^\star(\theta)}\big), \tag{18}$$

where j⋆(θ) := arg max_j |θ_[j]| is the index of the maximum absolute value of θ. We now prove by contradiction that the robust max-margin solution for this perturbation set (9) is equivalent to the solution (14) for the perturbation set (3). We start by assuming that θ̂_{ε_tr} does not solve Equation (14), which by definition is equivalent to assuming 1 ∉ j⋆(θ̂_{ε_tr}). We now show how this assumption leads to a contradiction.

Define the shorthand j′ := j⋆(θ̂_{ε_tr}) − 1. Since θ̂_{ε_tr} is the solution of (18), by definition, θ̂_{ε_tr} is also the max-margin solution of the shifted dataset D_{ε_tr} := (x_i − y_i ε_tr sign(θ_[j′+1]) e_{j′+1}, y_i). Further, by the assumption that 1 ∉ j⋆(θ̂_{ε_tr}), this dataset D_{ε_tr} consists of input vectors x_i = (y_i r/2, x̃_i − y_i ε_tr sign(θ_[j′+1]) ẽ_{j′}). Hence, via Lemma A.1, θ̂_{ε_tr} can be written as

$$\hat{\theta}_{\epsilon_{tr}} = \frac{1}{\sqrt{r^2 + 4(\tilde{\gamma}_{\epsilon_{tr}})^2}}\left[r,\; 2\tilde{\gamma}_{\epsilon_{tr}}\tilde{\theta}_{\epsilon_{tr}}\right], \tag{19}$$

where θ̃_{ε_tr} is the normalized max-margin solution of D̃_{ε_tr} := (x̃_i − y_i ε_tr sign(θ_[j′+1]) ẽ_{j′}, y_i).

We now characterize θ̃_{ε_tr}. Note that by assumption, j′ = j⋆(θ̃_{ε_tr}) = arg max_j |θ̃_{ε_tr,[j]}|. Hence, the normalized max-margin solution θ̃_{ε_tr} is the solution of

$$\tilde{\theta}_{\epsilon_{tr}} := \arg\max_{\|\theta\|_2\le1} \min_{i\in[n]} y_i\theta^\top\tilde{x}_i - \epsilon_{tr}|\theta_{[j']}|. \tag{20}$$

Observe that the minimum margin of this estimator, γ̃_{ε_tr} = min_{i∈[n]} y_i (θ̃_{ε_tr})⊤x̃_i − ε_tr|θ̃_{ε_tr,[j′]}|, decreases with ε_tr as the problem becomes harder: γ̃_{ε_tr} ≤ γ̃, where the latter is the margin of θ̃_{ε_tr} for ε_tr = 0.

Since r > 2γ̃_max by assumption in the theorem, Lemma A.2 yields that, with probability at least 1 − 2e^{−α²(d−1)/2}, we have r > 2γ̃ ≥ 2γ̃_{ε_tr}. Given the closed form of θ̂_{ε_tr} in Equation (19), it directly follows that θ̂_{ε_tr,[1]} ∝ r > 2γ̃_{ε_tr} = 2γ̃_{ε_tr}‖θ̃_{ε_tr}‖_2 ∝ ‖(θ̂_{ε_tr})_{[2:d]}‖_2, and hence 1 ∈ j⋆(θ̂_{ε_tr}). This contradicts the original assumption 1 ∉ j⋆(θ̂_{ε_tr}), and hence we have established that θ̂_{ε_tr} for the ℓ1-perturbation set (9) has the same closed form (14) as for the perturbation set (3).

The final statement is proved by following steps analogous to the proofs of 1. and 2. to obtain the closed form of the robust accuracy of θ̂_{ε_tr}.
A.3 Proof of Lemma A.1
We start by proving that θ̂ is of the form

$$\hat{\theta} = \left[a_1,\; a_2\tilde{\theta}\right] \tag{21}$$

for a_1, a_2 > 0. Denote by H(θ) the plane through the origin with normal θ. We define d((x, y), H(θ)) as the signed Euclidean distance from the point (x, y) ∈ D ∼ P_r to the plane H(θ): it is the Euclidean distance from x to the plane if the point (x, y) is correctly predicted by θ, and the negative Euclidean distance from x to the plane otherwise. We rewrite the definition of the max-ℓ2-margin classifier: it is the classifier induced by the normalized vector θ̂ that maximizes min_{(x,y)∈D} d((x, y), H(θ)). Because r > 0, the maximum over all θ has θ_[1] ≥ 0, and for any fixed θ_[1], the remaining d − 1 coordinates must maximize the margin on the induced dataset D̃, i.e. they must be proportional to θ̃. Therefore, θ̂ is of the form of Equation (21).
Note that all classifiers induced by vectors of the form of Equation (21) classify D correctly. Next, we aim to find expressions for a_1 and a_2 such that Equation (21) is the normalized max-ℓ2-margin classifier. The distance from any x ∈ D to H(θ̂) is

$$d\big(x, H(\hat{\theta})\big) = a_1 x_{[1]} + a_2\tilde{\theta}^\top\tilde{x}.$$

Using that x_[1] = y r/2 and that the second term equals a_2 d(x̃, H(θ̃)), we get

$$d\big(x, H(\hat{\theta})\big) = a_1\frac{r}{2} + a_2\, d\big(\tilde{x}, H(\tilde{\theta})\big) = a_1\frac{r}{2} + \sqrt{1-a_1^2}\; d\big(\tilde{x}, H(\tilde{\theta})\big). \tag{22}$$
Let (x̄, ȳ) ∈ D be the point closest in Euclidean distance to H(θ̃). This point is also the closest point in Euclidean distance to H(θ̂), because by Equation (22), d(x, H(θ̂)) is strictly increasing in d(x̃, H(θ̃)).
We maximize the minimum margin d(x̄, H(θ̂)) with respect to a_1. Define the vectors a = [a_1, a_2] and v = [r/2, d(x̄, H(θ̃))] = [r/2, γ̃]. Using the dual norm, we find that

$$a = \frac{v}{\|v\|_2}.$$

Plugging the expression for a into Equation (21) yields that θ̂ is given by

$$\hat{\theta} = \frac{1}{\sqrt{r^2+4\tilde{\gamma}^2}}\left[r,\; 2\tilde{\gamma}\tilde{\theta}\right].$$
For the second part of the lemma, we first decompose

$$\mathbb{P}_{r_{test}}\big(Y\hat{\theta}^\top X > 0\big) = \frac{1}{2}\,\mathbb{P}_{r_{test}}\big(\hat{\theta}^\top X > 0 \mid Y = 1\big) + \frac{1}{2}\,\mathbb{P}_{r_{test}}\big(\hat{\theta}^\top X < 0 \mid Y = -1\big).$$

We can further write

$$\mathbb{P}_{r_{test}}\big(\hat{\theta}^\top X > 0 \mid Y=1\big) = \mathbb{P}_{r_{test}}\Big(\sum_{i=2}^{d}\hat{\theta}_{[i]}X_{[i]} > -\hat{\theta}_{[1]}X_{[1]} \,\Big|\, Y=1\Big) = \mathbb{P}_{r_{test}}\Big(2\tilde{\gamma}\sum_{i=1}^{d-1}\tilde{\theta}_{[i]}\tilde{X}_{[i]} > -\frac{r\,r_{test}}{2} \,\Big|\, Y=1\Big) = 1 - \Phi\Big(-\frac{r\,r_{test}}{4\sigma\tilde{\gamma}}\Big) = \Phi\Big(\frac{r\,r_{test}}{4\sigma\tilde{\gamma}}\Big), \tag{23}$$
where Φ is the cumulative distribution function of a standard normal. The second equality follows by multiplying both sides by the normalization constant, and the third equality is due to the fact that Σ_{i=1}^{d−1} θ̃_[i]X̃_[i] is a zero-mean Gaussian with variance σ²‖θ̃‖_2² = σ², since θ̃ is normalized. Correspondingly, we can write

$$\mathbb{P}_{r_{test}}\big(\hat{\theta}^\top X < 0 \mid Y=-1\big) = \mathbb{P}_{r_{test}}\Big(2\tilde{\gamma}\sum_{i=1}^{d-1}\tilde{\theta}_{[i]}\tilde{X}_{[i]} < \frac{r\,r_{test}}{2} \,\Big|\, Y=-1\Big) = \Phi\Big(\frac{r\,r_{test}}{4\sigma\tilde{\gamma}}\Big), \tag{24}$$

so that we can combine (23) and (24) to obtain P_{r_test}(Yθ̂⊤X > 0) = Φ(r r_test/(4σγ̃)). This concludes the proof of the lemma.
A.4 Proof of Lemma A.2
The proof plan is as follows. We start from the definition of the max-ℓ2-margin of a dataset. Then, we rewrite the max-ℓ2-margin as an expression that includes a random matrix with independent standard normal entries. This allows us to prove the upper and lower bounds for the max-ℓ2-margin in Sections A.4.1 and A.4.2 respectively, using non-asymptotic estimates on the singular values of Gaussian random matrices.
Given the dataset D̃ = {(x̃_i, y_i)}_{i=1}^n, we define the random matrix

$$\tilde{X} = \begin{bmatrix} \tilde{x}_1 & \tilde{x}_2 & \dots & \tilde{x}_n \end{bmatrix}^\top, \tag{25}$$

where x̃_i ∼ N(0, σ²I_{d−1}). Let V be the class of all perfect predictors of D̃. For a matrix A and vector b, we denote by |Ab| the vector whose entries correspond to the absolute values of the entries of Ab. The max-ℓ2-margin can then be written as

$$\tilde{\gamma} = \max_{v\in V,\;\|v\|_2=1}\; \min_{j\in[n]}\; \sigma|Qv|_{[j]}, \tag{26}$$

where Q = (1/σ)X̃ is the scaled data matrix. In the sequel we will use the operator norm

$$\|A\|_2 = \sup_{v\in\mathbb{R}^{d-1},\,\|v\|_2=1} \|Av\|_2$$

of a matrix A ∈ R^{n×(d−1)}, and denote the maximum and minimum singular values of A by s_max(A) and s_min(A), respectively.
A.4.1 Upper bound
Given the maximality of the operator norm, and since the minimum entry of the vector |Qv| must be smaller than ‖Qv‖_2/√n ≤ ‖Q‖_2/√n, we can upper bound γ̃ by

$$\tilde{\gamma} \le \sigma\frac{1}{\sqrt{n}}\|Q\|_2.$$
Noting that ‖Q‖_2 ≤ s_max(Q), it follows from Corollary 5.35 of [49] that for all t ≥ 0:

$$\mathbb{P}\left[\sqrt{d-1} + \sqrt{n} + t \ge s_{max}(Q)\right] \ge 1 - 2e^{-\frac{t^2}{2}}.$$

Therefore, with probability greater than 1 − 2e^{−t²/2},

$$\tilde{\gamma} \le \sigma\left(1 + \frac{t + \sqrt{d-1}}{\sqrt{n}}\right).$$
A.4.2 Lower bound
By the definition in Equation (26), if we find a vector v ∈ V with ‖v‖_2 = 1 such that for some a > 0 it holds that min_{j∈[n]} σ|Qv|_[j] > a, then γ̃ > a.

Recall the definition of γ̃ in Equation (26) and of Q in Equation (25). As n < d − 1, the random matrix Q is a wide matrix, i.e. there are more columns than rows, and therefore its minimal singular value is 0. Furthermore, Q has rank n almost surely, and hence for all c > 0 there exists a v ∈ R^{d−1} such that

$$\sigma Q v = c\,\mathbb{1}_{n}, \quad c > 0, \tag{27}$$

where 1_n denotes the all-ones vector of dimension n. The smallest non-zero singular value of Q, s_min,nonzero(Q), equals the smallest non-zero singular value of its transpose Q⊤. Therefore, there also exists a v ∈ V with ‖v‖_2 = 1 such that

$$\tilde{\gamma} \ge \min_{j\in[n]} \sigma|Qv|_{[j]} \ge \sigma\, s_{min,nonzero}(Q)\,\frac{1}{\sqrt{n}}, \tag{28}$$

where we used the fact that any unit vector v in the span of the non-zero right singular vectors satisfies ‖Qv‖_2 ≥ s_min,nonzero(Q), together with the existence of a solution v for any right-hand side as in Equation (27). Corollary 5.35 of [49] then yields that, with probability greater than 1 − 2e^{−t²/2} for any t ≥ 0, we have

$$\tilde{\gamma} \ge \sigma\left(\frac{\sqrt{d-1} - t}{\sqrt{n}} - 1\right). \tag{29}$$
B Bounds on the susceptibility score

In Theorem 3.1, we give non-asymptotic bounds on the robust and standard error of a linear classifier trained with adversarial logistic regression. Moreover, we use the decomposition of the robust error into susceptibility and standard error to gain intuition about how adversarial training may hurt robust generalization. In this section, we complete the result of Theorem 3.1 by also deriving non-asymptotic bounds on the susceptibility score of the max-ℓ2-margin classifier.

Using the results in Appendix A, we can prove the following Corollary B.1, which gives non-asymptotic bounds on the susceptibility score.
Corollary B.1. Assume d − 1 > n. For the ε_te-susceptibility on test samples from P_r with 2ε_te < r and perturbation sets in Equations (3) and (9), the following holds: for ε_tr < r/2 − γ̃_max, with probability at least 1 − 2e^{−α²(d−1)/2} for any 0 < α < 1 over the draw of a dataset D with n samples from P_r, the ε_te-susceptibility is upper and lower bounded by

$$\mathrm{Susc}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) \le \Phi\left(\frac{(r-2\epsilon_{tr})(\epsilon_{te}-\frac{r}{2})}{2\tilde{\gamma}_{max}\sigma}\right) - \Phi\left(\frac{(r-2\epsilon_{tr})(-\epsilon_{te}-\frac{r}{2})}{2\tilde{\gamma}_{min}\sigma}\right),$$
$$\mathrm{Susc}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) \ge \Phi\left(\frac{(r-2\epsilon_{tr})(\epsilon_{te}-\frac{r}{2})}{2\tilde{\gamma}_{min}\sigma}\right) - \Phi\left(\frac{(r-2\epsilon_{tr})(-\epsilon_{te}-\frac{r}{2})}{2\tilde{\gamma}_{max}\sigma}\right). \tag{30}$$
We give the proof in Subsection B.1. Observe that the bounds on the susceptibility score in Corollary B.1 each consist of two terms, where the second term decreases with ε_tr but the first term increases. We recognize the following two regimes: either the max-ℓ2-margin classifier is close to the ground truth e_1, or it is not. Clearly, the ground truth classifier has zero susceptibility, and hence classifiers close to the ground truth also have low susceptibility. On the other hand, if the max-ℓ2-margin classifier is not close to the ground truth, then putting less weight on the first coordinate increases invariance to perturbations along the first direction. Recall that by Lemma A.1, increasing ε_tr decreases the weight on the first coordinate of the max-ℓ2-margin classifier. Furthermore, in the low sample size regime, we are likely not close to the ground truth. Therefore, the regime where the susceptibility decreases with increasing ε_tr dominates in the low sample size regime.

To confirm the result of Corollary B.1, we plot the mean and standard deviation of the susceptibility score over 5 independent experiments. The results are depicted in Figure 7. We see that for low standard error, when the classifier is reasonably close to the optimal classifier, the susceptibility increases slightly with increasing adversarial budget. However, increasing the adversarial training budget ε_tr further causes the susceptibility score to drop greatly. Hence, we can recognize both regimes and validate that, indeed, the second regime dominates in the low sample size setting.
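The closed-form robustness expression derived in Subsection B.1 (Equation (33)) can also be sanity-checked by Monte Carlo; the parameter values below are illustrative assumptions, and scipy is used only for the normal CDF.

```python
# Monte Carlo sanity check of the robustness formula (33).
import numpy as np
from scipy.stats import norm

r, sigma, gamma_tilde, eps_tr, eps_te = 12.0, 1.0, 3.0, 2.0, 4.0
scale = 2.0 * gamma_tilde * sigma / (r - 2.0 * eps_tr)
rob_closed = (1.0 - norm.cdf((eps_te - r / 2.0) / scale)
              + norm.cdf((-eps_te - r / 2.0) / scale))

rng = np.random.default_rng(0)
x1 = np.where(rng.random(1_000_000) < 0.5, r / 2.0, -r / 2.0)  # X_[1] = +/- r/2
d_e1 = x1 + scale * rng.standard_normal(1_000_000)             # distance along e_1
rob_mc = np.mean(np.abs(d_e1) > eps_te)
print(rob_closed, rob_mc)  # the two values should agree closely
```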
B.1 Proof of Corollary B.1
We prove the statement by bounding the robustness of a linear classifier. Recall that the robustness of a classifier is the probability that the classifier does not change its prediction under an adversarial attack. The susceptibility score is then given by

$$\mathrm{Susc}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) = 1 - \mathrm{Rob}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}). \tag{31}$$
The proof idea is as follows: since the perturbations are along the first basis direction e_1, we compute the distance from a point (X, Y) ∼ P to the decision plane of the robust max-ℓ2-margin classifier θ̂_{ε_tr}. Then, we note that the robustness of θ̂_{ε_tr} is given by the probability that the distance along e_1 from X to the decision plane induced by θ̂_{ε_tr} is greater than ε_te. Lastly, we use the non-asymptotic bounds of Lemma A.2.
Recall that by Lemma A.1, the max-ℓ2-margin classifier is of the form

$$\hat{\theta}_{\epsilon_{tr}} = \frac{1}{\sqrt{(r-2\epsilon_{tr})^2 + 4\tilde{\gamma}^2}}\left[r-2\epsilon_{tr},\; 2\tilde{\gamma}\tilde{\theta}\right]. \tag{32}$$
Let (X, Y) ∼ P. The distance along e_1 from X to the decision plane induced by θ̂_{ε_tr}, denoted H(θ̂_{ε_tr}), is given by

$$d_{e_1}\big(X, H(\hat{\theta}_{\epsilon_{tr}})\big) = X_{[1]} + \frac{1}{\hat{\theta}_{\epsilon_{tr},[1]}}\sum_{i=2}^{d}\hat{\theta}_{\epsilon_{tr},[i]}X_{[i]}.$$

Substituting the expression for θ̂_{ε_tr} from Equation (32) yields

$$d_{e_1}\big(X, H(\hat{\theta}_{\epsilon_{tr}})\big) = X_{[1]} + \frac{2\tilde{\gamma}}{r-2\epsilon_{tr}}\sum_{i=2}^{d}\tilde{\theta}_{[i-1]}X_{[i]}.$$

Let N be a standard normal random variable. Since ‖θ̃‖_2² = 1 and a sum of Gaussian random variables is again a Gaussian random variable, we can write

$$d_{e_1}\big(X, H(\hat{\theta}_{\epsilon_{tr}})\big) = X_{[1]} + \frac{2\tilde{\gamma}\sigma}{r-2\epsilon_{tr}}\, N.$$
The robustness of θ̂_{ε_tr} is given by the probability that |d_{e_1}(X, H(θ̂_{ε_tr}))| > ε_te. Hence, using that X_[1] = ±r/2 with probability 1/2 each and the symmetry of N, we get

$$\mathrm{Rob}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) = \mathbb{P}\left[\frac{r}{2} + \frac{2\tilde{\gamma}\sigma}{r-2\epsilon_{tr}}\, N > \epsilon_{te}\right] + \mathbb{P}\left[\frac{r}{2} + \frac{2\tilde{\gamma}\sigma}{r-2\epsilon_{tr}}\, N < -\epsilon_{te}\right]. \tag{33}$$

We can rewrite Equation (33) in the form

$$\mathrm{Rob}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) = \mathbb{P}\left[N > \frac{(r-2\epsilon_{tr})(\epsilon_{te}-\frac{r}{2})}{2\tilde{\gamma}\sigma}\right] + \mathbb{P}\left[N < \frac{(r-2\epsilon_{tr})(-\epsilon_{te}-\frac{r}{2})}{2\tilde{\gamma}\sigma}\right].$$
Recall that N is a standard normal random variable and denote by Φ the standard normal cumulative distribution function. By definition of the cumulative distribution function, we find that

$$\mathrm{Rob}(\hat{\theta}_{\epsilon_{tr}};\epsilon_{te}) = 1 - \Phi\left(\frac{(r-2\epsilon_{tr})(\epsilon_{te}-\frac{r}{2})}{2\tilde{\gamma}\sigma}\right) + \Phi\left(\frac{(r-2\epsilon_{tr})(-\epsilon_{te}-\frac{r}{2})}{2\tilde{\gamma}\sigma}\right).$$

Substituting the bounds on γ̃ from Lemma A.2 gives the non-asymptotic bounds on the robustness score and, by Equation (31), also on the susceptibility score.
C Experimental details on the linear model
In this section, we provide detailed experimental details to Figures 3 and 4.
We implement adversarial logistic regression using stochastic gradient descent with a learning rate of 0.01. Note that logistic regression converges only logarithmically to the robust max-ℓ2-margin solution. As a consequence of the slow convergence, we train for up to 10^7 epochs. Both during training and at test time, we solve the inner maximization max_{x′_i ∈ T(x_i; ε_tr)} L(f_θ(x′_i) y_i) exactly. Hence, we measure the robust error exactly. Unless specified otherwise, we set σ = 1, r = 12 and ε_te = 4.
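A condensed sketch of this training procedure under our assumptions: full-batch gradient descent on the logistic loss is used here for brevity instead of SGD, and the hypothetical sample_P_r and directed_attack helpers from the earlier sketches are reused.

```python
# Adversarial logistic regression on the synthetic data with the exact inner
# maximization of the directed attack (3).
import numpy as np

def adversarial_logistic_regression(x, y, eps_tr, lr=0.01, epochs=100_000):
    theta = np.zeros(x.shape[1])
    for _ in range(epochs):
        x_adv = directed_attack(theta, x, y, eps_tr)            # exact inner max
        margins = y * (x_adv @ theta)
        grad = -(y / (1.0 + np.exp(margins)))[:, None] * x_adv  # logistic loss gradient
        theta -= lr * grad.mean(axis=0)
    return theta
```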
Experimental details on Figure 3
D Experimental details on the Waterbirds dataset
In this section, we discuss the experimental details and construction of the Waterbirds dataset in more detail.
We also provide ablation studies of attack parameters such as the size of the motion blur kernel, plots of the robust error decomposition with increasing n, and some experiments using early stopping.
The Waterbirds dataset To build the Waterbirds dataset, we use the CUB-200 dataset [50], which contains images and labels of 200 bird species, and 4 background classes (forest, jungle/bamboo, water ocean, water lake natural) of the Places dataset [59]. The aim is to recognize whether the bird in a given image is a waterbird (e.g. an albatross) or a landbird (e.g. a woodpecker). To create the dataset, we randomly sample equally many water- as landbirds from the CUB-200 dataset. Thereafter, we sample a random background image for each bird image. Then, we use the segmentation provided in the CUB-200 dataset to segment the birds from their original images and paste them onto the randomly sampled backgrounds. The resulting images have a size of 256 × 256. Moreover, we also resize the segmentations such that we have the correct segmentation profiles of the birds in the new dataset as well. For the concrete implementation, we use the code provided by [40].
Experimental training details Following the example of [40], we use a ResNet50 pretrained on the ImageNet dataset for all experiments, a weight decay of 10^{−4}, and train for 300 epochs using the Adam optimizer. Extensive fine-tuning of the learning rate resulted in an optimal learning rate of 0.006 for all experiments in the low sample size regime. Adversarial training is implemented as suggested in [30]: at each iteration we find the worst-case perturbation with an exact or approximate method. In all our experiments, the resulting classifier interpolates the training set. We plot the mean over all runs and the standard deviation of the mean.

Specifics of the motion blur attack Fast moving objects or animals are hard to photograph due to motion blur. Hence, when trying to classify or detect moving objects from images, it is imperative that the classifier is robust against reasonable levels of motion blur. We implement the attack as follows. First, we segment the bird from the original image, then apply a blur filter, and lastly paste the blurred bird back onto the background. We can apply more severe blur by enlarging the kernel of the filter. See Figure 8 for an ablation study of the kernel size.
The motion blur filter is implemented as follows. We use a kernel of size M × M in which we fill row (M − 1)/2 with the value 1/M. Thereafter, we use the 2D convolution implementation of OpenCV (filter2D) [5] to convolve the kernel with the image. Note that applying a rotation to the kernel before the convolution changes the direction of the resulting motion blur. Lastly, we find the most detrimental level of motion blur using a list-search over all levels up to M_max.
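A sketch of this filter, assuming OpenCV's filter2D as named above:

```python
# Horizontal motion blur of severity M: an M x M kernel whose middle row is
# filled with 1/M, convolved with the image. Rotating the kernel before the
# convolution would change the blur direction.
import cv2
import numpy as np

def motion_blur(img, M):
    kernel = np.zeros((M, M), dtype=np.float32)
    kernel[(M - 1) // 2, :] = 1.0 / M     # fill the middle row
    return cv2.filter2D(img, -1, kernel)  # ddepth=-1 keeps the input depth
```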
Specifics of the adversarial illumination attack An adversary can hide objects using poor lighting conditions, which can for example arise from shadows or bright spots. To model poor lighting conditions on the object only (or targeted to the object), we use the adversarial illumination attack. The attack is constructed as follows: First, we segment the bird from the background. Then we apply an additive constant a to the bird, where the absolute size of the constant satisfies |a| ≤ ε_te = 0.3. Thereafter, we clip the values of the bird image to [0, 1], and lastly, we paste the bird back onto the background image. See Figure 9 for an ablation of the parameter of the attack. It is non-trivial how to (approximately) find the worst perturbation. We find an approximate solution by searching over all perturbations with increments of size ε_te/K_max. Denote by seg the segmentation profile of the image x. We consider all perturbed images of the form

$$x_{pert} = (1 - seg)\, x + seg\,\Big(x + \frac{K}{K_{max}}\,\epsilon_{te}\,\mathbb{1}\Big), \qquad K \in [-K_{max}, K_{max}].$$

Figure 10: Robust error decomposition corresponding to the setting of Figure 5c. The plots depict the mean and standard deviation of the mean over several independent experiments. We see that, in comparison to standard training, the reduction in susceptibility of adversarial training is minimal in the low sample size regime. Moreover, the increase in standard error of adversarial training is quite severe, leading to an overall increase in robust error in the low sample size regime.
During training time we set K max = 16 and therefore search over 33 possible images. During test time we search over 65 images (K max = 32).
Early stopping In all our experiments on the Waterbirds dataset, a parameter search led to an optimal weight decay and learning rate of 10^{−4} and 0.006, respectively. Another common regularization technique is early stopping, where one stops training at the epoch where the classifier achieves minimal robust error on a hold-out dataset. To understand whether early stopping can mitigate the effect of adversarial training degrading robust generalization in comparison to standard training, we perform the following experiment. On the Waterbirds dataset of size n = 20, considering the adversarial illumination attack, we compare standard training with early stopping and adversarial training (ε_tr = ε_te = 0.3) with early stopping. Over several independent experiments, early stopped adversarial training has an average robust error of 33.5 and early stopped standard training one of 29.1. Hence, early stopping decreases the robust error gap, but does not close it.
Error decomposition with increasing n In Figure 5c, we see that adversarial training hurts robust generalization in the small sample size regime. For completeness, we plot the robust error decomposition for adversarial and standard training in Figure 10. We see that in the low sample size regime, the drop in susceptibility that adversarial training achieves in comparison to standard training is much smaller than the increase in standard error. Conversely, in the high sample size regime, the drop in susceptibility of adversarial training over standard training is much bigger than the increase in standard error.
E Experimental details on CIFAR10
In this section, we give the experimental details on the CIFAR10-based experiments shown in Figures 1 and 12. Moreover, we also conduct similar experiments using different neural network architectures. First, we give the full experimental details and then provide the results of the experiments using the different architectures.
Subsampling CIFAR10 In all our experiments we subsample CIFAR10 to simulate the low sample size regime. We ensure that for all subsampled versions the number of samples of each class is equal. Hence, if we subsample to 500 training images, then each class has exactly 50 images, which are drawn uniformly from the 5k training images of the respective class.
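A sketch of this class-balanced subsampling, with illustrative names:

```python
import numpy as np

def subsample_balanced(labels, n_total, num_classes=10, seed=0):
    """Return indices of a class-balanced subsample of size n_total,
    drawn uniformly from each class (e.g. 50 of the 5k images per class)."""
    rng = np.random.default_rng(seed)
    per_class = n_total // num_classes
    idx = []
    for c in range(num_classes):
        class_idx = np.flatnonzero(labels == c)
        idx.extend(rng.choice(class_idx, size=per_class, replace=False))
    return np.array(idx)
```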
Mask perturbation on CIFAR10 We consider square black-mask perturbations; the attacker can set a patch of size 2 × 2 in the image to zero. The attack is a simplification of the patch-attack considered in [52]. We show an example of a black-mask attack on each of the classes in CIFAR10 in Figure 11. Clearly, the mask reduces the information about the class in the image as it occludes part of the object in the image. During test time, we evaluate the attack exactly by means of a full grid search over all possible windows. Note that a full grid search requires 900 forward passes to evaluate one image, which is computationally too expensive during training time. Therefore, we use the same approximation as in [52] at training time. For each image in the training batch, we compute the gradient of the loss with respect to the input. Using that gradient, which is a tensor in R^{3×32×32}, we compute the ℓ1-norm of each patch by a full grid search and save the upper-left coordinates of the K windows with the largest ℓ1-norm. The intuition is that windows with high ℓ1-norm are more likely to change the prediction. Out of the K identified candidate windows, we take the most loss-worsening one by means of a full list-search.
Figure 12: We plot the standard error, robust error and susceptibility for varying attack strengths K. We see that the larger K, the lower the susceptibility, but the higher the standard error.
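A sketch of this training-time approximation in PyTorch; the function name, batch handling, and shapes are our own assumptions:

```python
import torch
import torch.nn.functional as F

def approx_mask_attack(model, x, y, mask=2, k=4):
    """Select K candidate windows by the l1-norm of the input gradient,
    then keep the loss-maximizing one (the approximation described above)."""
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0].abs().sum(1, keepdim=True)  # (B,1,32,32)
    # l1-mass of every mask x mask window, via average pooling with stride 1.
    win = F.avg_pool2d(grad, mask, stride=1) * mask ** 2               # (B,1,31,31)
    corners = win.flatten(1).topk(k, dim=1).indices                    # flat window ids
    best_x = x.detach().clone()
    best_loss = torch.full((x.size(0),), -float("inf"), device=x.device)
    side = win.size(-1)
    for c in range(k):
        i = torch.div(corners[:, c], side, rounding_mode="floor")
        j = corners[:, c] % side
        x_adv = x.detach().clone()
        for b in range(x.size(0)):
            x_adv[b, :, i[b]:i[b] + mask, j[b]:j[b] + mask] = 0.0      # black mask
        with torch.no_grad():
            cand = F.cross_entropy(model(x_adv), y, reduction="none")
        better = cand > best_loss
        best_loss = torch.where(better, cand, best_loss)
        best_x[better] = x_adv[better]
    return best_x
```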
Experimental training details For all our experiments on CIFAR10, we adjusted the code provided by [37]. As typically done for CIFAR10, we augment the data with random cropping and horizontal flipping. For the experiments with results depicted in Figures 1 and 12, we use a ResNet18 network and train for 100 epochs. We tune the learning rate and weight decay for low robust error. For standard training, we use a learning rate of 0.01 with an equal weight decay.
For adversarial training, we use a learning rate of 0.015 and a weight decay of 10 −4 . We run each experiment three times for every dataset with different initialization seeds, and plot the average and standard deviation over the runs.
For the experiments in Figures 1 and 13 we use an attack strength of K = 4. Recall that we perform a full grid search at test time and hence obtain a good approximation of the robust accuracy and susceptibility score.
Increasing training attack strength We investigate the influence of the attack strength K on the robust error for adversarial training. We take ε_tr = 2 and n = 500 and vary K. The results are depicted in Figure 12. We see that for increasing K, the susceptibility decreases, but the standard error increases more severely, resulting in an increasing robust error.
Robust error decomposition In Figure 1, we see that the robust error increases for adversarial training compared to standard training in the low sample size regime, but the opposite holds when enough samples are available. For completeness, we provide a full decomposition of the robust error in standard error and susceptibility for standard and adversarial training. We plot the decomposition in Figure 13.
Multiple networks on CIFAR10 We run adversarial training for multiple network architectures on subsampled CIFAR10 (n = 500) with mask perturbations of size 2 × 2 and an attack strength of K = 4. We plot the results in Table 1. For all the different architectures, we notice a similar increase in robust error when trained with adversarial training instead of standard training.
F Static hand gesture recognition
The goal of static hand gesture or posture recognition is to recognize hand gestures such as a pointing index finger or the okay-sign based on static data such as images [36, 54]. The current use of hand gesture recognition is primarily in the interaction between computers and humans [36]. More specifically, typical practical applications can be found in the environment of games, assisted living, and virtual reality [33]. In the following, we conduct experiments on a hand gesture recognition dataset constructed by [31], which consists of near-infrared stereo images obtained using the Leap Motion device. First, we crop or segment the images, after which we use logistic regression for classification. We see that adversarial logistic regression deteriorates robust generalization with increasing ε_tr.
Figure 13: We plot the standard error, robust error and susceptibility ((a), (b) and (c), respectively) of the subsampled datasets of CIFAR10 after adversarial and standard training. For small sample size, adversarial training has higher robust error than standard training. We see that the increase in standard error in comparison to the drop in susceptibility of standard versus robust training switches between the low and high sample size regimes.
Figure 14: We plot two images, each corresponding to one of the two classes. We recognize the "L"-sign in Figure 14a and the index sign in Figure 14b. Observe that the near-infrared images highlight the hand pose well and blend out much of the non-useful or noisy background.
Static hand-gesture dataset We use the dataset made available by [31]. This dataset consists of near-infrared stereo images taken with the Leap Motion device and provides detailed skeleton data. We base our analysis on the images only. The size of the images is 640 × 240 pixels. The dataset consists of 16 classes of hand poses taken by 25 different people. We note that the variety between the different people is relatively wide; there are men and women with different postures and hand sizes. However, the different samples taken by the same person are alike.
We consider binary classification between the index-pose and L-pose, and take as a training set 30 images of the users 16 to 25. This results in a training dataset of 300 samples. We show two examples of the training dataset in Figure 14, each corresponding to a different class. Observe that the near-infrared images darken the background and successfully highlight the hand-pose. As a test dataset, we take 10 images of each of the two classes from the users 1 to 10 resulting in a test dataset of size 200.
Cropping the dataset To speed up training and ease the classification problem, we crop the images from a size of 640 × 240 to a size of 200 × 200. We crop the images using a basic image segmentation technique to stay as close as possible to real-world applications. The aim is to crop the images such that the hand gesture is centered within the cropped image.
For every user in the training set, we crop an image of the L-pose and the index pose by hand. We call these images the training masks $\{\text{masks}_i\}_{i=1}^{20}$. We note that the more a particular window of an image resembles a mask, the more likely it is that the window captures the hand gesture correctly. Moreover, the near-infrared images are such that the hands of a person are brighter than the surroundings of the person itself. Based on these two observations, we define the best segment or window, defined by the upper-left coordinates (i, j), for an image x as the solution to the following optimization problem:
$$\underset{i \in [440],\, j \in [40]}{\arg\min} \; \sum_{l=1}^{20} \left\| \text{masks}_l - x_{\{i:i+200,\, j:j+200\}} \right\|_2^2 \;-\; \frac{1}{2} \left\| x_{\{i:i+200,\, j:j+200\}} \right\|_1 \qquad (34)$$
Equation 34 is solved using a full grid search. We use the result to crop both training and test images. Upon manual inspection of the cropped images, close to all images were perfectly cropped. We replace the handful of poorly cropped training images with hand-cropped counterparts.
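A sketch of this grid search over Equation 34 in NumPy; the axis convention (i over the width, j over the height) and the names are our own assumptions:

```python
import numpy as np

def best_crop(x, masks, crop=200):
    """Full grid search for Equation 34: find the 200x200 window that is
    closest to the training masks while preferring bright (hand) regions."""
    H, W = x.shape  # 240 x 640 near-infrared image
    best, best_obj = (0, 0), np.inf
    for i in range(W - crop + 1):          # i in [440]
        for j in range(H - crop + 1):      # j in [40]
            window = x[j:j + crop, i:i + crop]
            obj = (sum(np.sum((m - window) ** 2) for m in masks)
                   - 0.5 * np.sum(np.abs(window)))
            if obj < best_obj:
                best, best_obj = (i, j), obj
    return best
```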
Square-mask perturbations Since we use logistic regression, we perform a full grid search to find the best adversarial perturbation at training and test time. For completeness, the upper-left coordinates of the optimal black-mask perturbation of size ε_tr × ε_tr can be found as the solution to
$$\underset{i \in [200-\epsilon_{tr}],\, j \in [200-\epsilon_{tr}]}{\arg\max} \; \sum_{l,\, m \in [\epsilon_{tr}]} \theta_{[i+l,\, j+m]}.$$
The algorithm is rather slow as we iterate over all possible windows. We show a black-mask perturbation on an L-pose image in Figure 15c.
Results We run adversarial logistic regression with square-mask perturbations on the cropped dataset, vary the adversarial training budget ε_tr, and plot the result in Figure 16. We observe that adversarial logistic regression deteriorates robust generalization.
Because we use adversarial logistic regression, we are able to visualize the classifier. Given the classifier induced by θ, we can visualize how it classifies the images by plotting
$$\frac{\theta - \min_{i \in [d]} \theta_{[i]}}{\max_{i \in [d]} \theta_{[i]} - \min_{i \in [d]} \theta_{[i]}} \in [0, 1]^d.$$
Recall that the class prediction of our predictor for a data point (x, y) is given by $\text{sign}(\theta^\top x) \in \{\pm 1\}$. The lighter parts of the resulting image correspond to the class with label 1 and the darker patches to the class with label −1.
Figure 16: We plot the standard error and robust error for varying adversarial training budget ε_tr. We see that the larger ε_tr, the higher the robust error.
We plot the classifiers obtained by standard logistic regression and adversarial logistic regression with training adversarial budgets ε_tr of 10 and 25 in Figure 17. The darker parts in the classifier correspond to patches that are typically bright for the L-pose. Complementarily, the lighter patches in the classifier correspond to patches that are typically bright for the index pose. We see that in the case of adversarial logistic regression, the background noise is much higher than for standard logistic regression. In other words, adversarial logistic regression puts more weight on non-signal parts of the images to classify the training dataset and hence exhibits worse performance on the test dataset.
Figure 2: Examples of directed attacks on CIFAR10 and the Waterbirds dataset. In Figure 2a, we corrupt the image with a black mask of size 2 × 2, and in Figures 2b and 2c we change the lighting conditions (darkening) and apply motion blur on the bird in the image, respectively. All perturbations effectively reduce the information about the class in the images: they are the result of directed attacks.
Figure 3: (a) Robust error increase with ε_tr (b) Standard-adversarial training (c) Effect of over-parameterization. Experimental verification of Theorem 3.1. (a) We set d = 1000, r = 12, n = 50 and plot the robust error gap between standard and adversarial training with increasing adversarial budget ε_tr over 5 independent experiments. For comparison, we also plot the lower bound given in Theorem 3.1. In (b) and (c), we set d = 10000 and vary the number of samples n. (b) We plot the robust error of standard and adversarial training (ε_tr = 4.5). (c) We compute the error gap and the lower bound of Theorem 3.1. For more experimental details see Appendix C.
Figure 4: (a) We set d = 1000 and r = 12 and plot the robust error with increasing adversarial training budget (ε_tr) and with increasing d/n. (b) We plot the robust error decomposition in susceptibility and standard error for increasing adversarial budget ε_tr. Full experimental details can be found in Section C. (c) 2D illustration providing intuition for the linear setting: Training on directed attacks (yellow) effectively corresponds to fitting the original datapoints (blue) after shifting them closer to the decision boundary. The robust max-ℓ2-margin (yellow dotted) is heavily tilted if the points are far apart in the non-signal dimension, while the standard max-ℓ2-margin solution (blue dashed) is much closer to the ground truth (gray solid).
Figure 5: Experiments on the Waterbirds dataset considering the adversarial illumination attack with ε_te = 0.3. We plot the mean and standard deviation of the mean of several independent experiments. (a) The robust error increases with larger ε_tr in the low sample size regime. (b)
Figure 6: (a) We plot the robust error with increasing adversarial training budget ε_tr of 5 experiments on the subsampled Waterbirds datasets of sample sizes 20 and 30. Even though adversarial training hurts robust generalization for low sample size (n = 20), it helps for n = 50. (b) We plot the decomposition of the robust error in standard error and susceptibility with increasing adversarial budget ε_tr. We plot the mean and standard deviation of the mean of 5 experiments on a subsampled Waterbirds dataset of size n = 20. The increase in standard error is more severe than the drop in susceptibility, leading to a slight increase in robust error. For more experimental details see Section D.
[13] Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., and Madry, A. Exploring the landscape of spatial robustness. In International Conference on Machine Learning, pp. 1802-1811, Jun 2019.
[14] Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., and Song, D. Robust physical-world attacks on deep learning visual classification. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625-1634, Jun 2018.
[15] Ghiasi, A., Shafahi, A., and Goldstein, T. Breaking certified defenses: semantic adversarial examples with spoofed robustness certificates. In International Conference on Learning Representations, Apr 2019.
[16] Gilmer, J., Adams, R. P., Goodfellow, I., Andersen, D., and Dahl, G. E. Motivating the rules of the game for adversarial example research. arXiv preprint arXiv:1807.06732, 2018.
[17] Goodfellow, I., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, pp. 1-10, Jan 2015.
[18] He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, Jun 2016.
[19] Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., and Madry, A. Adversarial examples are not bugs, they are features. In Advances in Neural Information Processing Systems, pp. 125-136, Dec 2019.
$d\big((x, y), H(\theta)\big) = \min_{(x, y) \in D} d\big((x, y), H(\tilde{\theta})\big)$. We use that D is deterministic in its first coordinate and
Figure 7: We set r = 6, d = 1000, n = 50 and ε_te = 2.5. (a) We plot the average susceptibility score and the standard deviation over 5 independent experiments. Note how the bounds closely predict the susceptibility score. (b) For comparison, we also plot the robust error decomposition in susceptibility and standard error. Even though the susceptibility decreases, the robust error increases with increasing adversarial budget ε_tr.
(a) We draw 5 datasets with n = 50 samples and input dimension d = 1000 from the distribution P. We then run adversarial logistic regression on all 5 datasets with adversarial training budgets ε_tr = 1 to 5. To compute the resulting robust error gap of all the obtained classifiers, we use a test set of size 10^6. Lastly, we compute the lower bound given in part 2 of Theorem 3.1. (b) We draw 5 datasets with different sizes n between 50 and 10^4. We take an input dimension of d = 10^4 and plot the mean and standard deviation of the robust error after adversarial and standard logistic regression over the 5 samples. (c) We again draw 5 datasets for each d/n constellation and compute the robust error gap for each dataset. Experimental details on Figure 4 For both (a) and (b) we set d = 1000, ε_te = 4, and vary the adversarial training budget ε_tr from 1 to 5. For every constellation of n and ε_tr, we draw 10 datasets and show the average and standard deviation of the resulting robust errors. In (b), we set n = 50.
Figure 8: We perform an ablation study of the motion blur kernel size, which corresponds to the severity level of the blur. We see that for increasing M, the severity of the motion blur increases. In particular, note that for M = 15 and even M = 20, the bird remains recognizable: we do not semantically change the class, i.e. the perturbations are consistent.
Figure 9: We perform an ablation study of the different lighting changes of the adversarial illumination attack. Even though the directed attack attacks the signal component in the image, the bird remains recognizable in all cases.
Figure 10: We plot the robust error decomposition of the experiments depicted in Figure 5c. The plots depict the mean and standard deviation of the mean over several independent experiments. We see that, in comparison to standard training, the reduction in susceptibility for adversarial training is minimal in the low sample size regime. Moreover, the increase in standard error of adversarial training is quite severe, leading to an overall increase in robust error in the low sample size regime.
Figure 11: We show an example of a mask perturbation for all 10 classes of CIFAR10. Even though the attack occludes part of the images, a human can still easily classify all images correctly.
Figure 15: In Figures 15a and 15b we show an example of the images cropped using Equation 34. We see that the hands are centered and the images have a size of 200 × 200. In Figure 15c we show an example of the square black-mask perturbation.
Figure 17: We visualize the logistic regression solutions. In Figure 17a we plot the vector that induces the classifier obtained after standard training. In Figures 17b and 17c we plot the vector obtained after training with square-mask perturbations of size 10 and 25, respectively. We note the non-signal enhanced background correlations at the parts highlighted with the red circles in the image projection of the adversarially trained classifiers.
A long line of work has proposed procedures to mitigate the trade-off phenomenon. For example, Alayrac et al. [1], Carmon et al. [6], Raghunathan et al. [38] and Zhai et al. [56] study robust self-training, which leverages a large set of unlabelled data, while Lamb et al. [24], Lee et al. [25] and Xu et al. [53] use data augmentation by interpolation. Balaji et al. [4], Cheng et al. [8] and Ding et al. [10], on the other hand, propose to use adaptive perturbation budgets ε_tr that vary across inputs. Our intuition from the theoretical analysis suggests that the standard mitigation procedures for imperceptible perturbations may not work for perceptible directed attacks, because all relevant features are non-robust. We leave a thorough empirical study as interesting future work.
Table 1: We subsample CIFAR10 to a dataset of sample size 500 and perform both standard training (ST) and adversarial training (AT) using different networks. We evaluate the resulting susceptibility score and the robust and standard error.
Adversarial training on CIFAR10
Architecture | Learning rate | Weight decay | Train type | Standard error | Robust error | Susceptibility
ResNet34 | 0.02 | 0.025 | ST | 44 | 64 | 50
ResNet34 | 0.015 | 10^−4 | AT | 52 | 66 | 40
ResNet50 | 0.015 | 0.03 | ST | 45 | 62 | 47
ResNet50 | 0.015 | 10^−4 | AT | 53 | 68 | 45
VGG11bn | 0.03 | 0.01 | ST | 40 | 55 | 43
VGG11bn | 0.015 | 10^−4 | AT | 48 | 63 | 34
VGG16bn | 0.02 | 0.01 | ST | 41 | 60 | 48
VGG16bn | 0.015 | 10^−4 | AT | 50 | 65 | 42
Note that the result more generally holds for non-sparse models that are not axis-aligned by way of a simple rotation z = Ux. In that case the distribution is characterized by θ* = u_1 and a rotated Gaussian in the d − 1 dimensions orthogonal to θ*.
Alayrac, J.-B., Uesato, J., Huang, P.-S., Fawzi, A., Stanforth, R., and Kohli, P. Are labels required for improving adversarial robustness? In Advances in Neural Information Processing Systems, pp. 12214-12223, 2019.
Andriushchenko, M. and Flammarion, N. Understanding and improving fast adversarial training. In Advances in Neural Information Processing Systems, 2020.
Bai, T., Luo, J., Zhao, J., Wen, B., and Wang, Q. Recent advances in adversarial training for adversarial robustness. In International Joint Conference on Artificial Intelligence, pp. 4312-4321, Aug 2021.
Balaji, Y., Goldstein, T., and Hoffman, J. Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. arXiv preprint arXiv:1910.08051, 2019.
Bradski, G. The OpenCV Library. Dr. Dobb's Journal of Software Tools, 2000.
Carmon, Y., Raghunathan, A., Schmidt, L., Liang, P., and Duchi, J. C. Unlabeled data improves adversarial robustness. In International Conference on Neural Information Processing Systems, pp. 11192-11203, Dec 2019.
Chen, L., Min, Y., Zhang, M., and Karbasi, A. More data can expand the generalization gap between adversarially robust and standard models. In International Conference on Machine Learning, pp. 1670-1680, Jun 2020.
Cheng, M., Lei, Q., Chen, P.-Y., Dhillon, I., and Hsieh, C.-J. CAT: Customized adversarial training for improved robustness. arXiv preprint arXiv:2002.06789, 2020.
Li, B., Wang, S., Jana, S., and Carin, L. Towards understanding fast adversarial training. arXiv preprint arXiv:2006.03089, 2021.
Lin, W.-A., Lau, C. P., Levine, A., Chellappa, R., and Feizi, S. Dual manifold adversarial robustness: Defense against Lp and non-Lp adversarial attacks. In Advances in Neural Information Processing Systems, pp. 3487-3498, Dec 2020.
Liu, C., Salzmann, M., Lin, T., Tomioka, R., and Süsstrunk, S. On the loss landscape of adversarial training: Identifying challenges and how to overcome them. In Advances in Neural Information Processing Systems, pp. 21476-21487, 2020.
Luo, B., Liu, Y., Wei, L., and Xu, Q. Towards imperceptible and robust adversarial example attacks against neural networks. In AAAI Conference on Artificial Intelligence and Innovative Applications, Feb 2018.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
Mantecón, T., del Blanco, C. R., Jaureguizar, F., and García, N. A real-time gesture recognition system using near-infrared imagery. PLOS ONE, pp. 1-17, Oct 2019.
Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. DeepFool: a simple and accurate method to fool deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574-2582, Jun 2016.
Mujahid, A., Awan, M. J., Yasin, A., Mohammed, M. A., Damaševičius, R., Maskeliūnas, R., and Abdulkareem, K. H. Real-time hand gesture recognition based on deep learning YOLOv3 model. Applied Sciences, 2021.
Nacson, M. S., Srebro, N., and Soudry, D. Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 3051-3059, Apr 2019.
Nagarajan, V. and Kolter, J. Z. Uniform convergence may be unable to explain generalization in deep learning. In Advances in Neural Information Processing Systems, pp. 11611-11622, Dec 2019.
Oudah, M., Al-Naji, A., and Chahl, J. Hand gesture recognition based on computer vision: A review of techniques. Journal of Imaging, 2020.
Phan, H. huyvnphan/pytorch_cifar10, Jan 2021.
Raghunathan, A., Xie, S. M., Yang, F., Duchi, J., and Liang, P. Understanding and mitigating the tradeoff between robustness and accuracy. In International Conference on Machine Learning, pp. 7909-7919, Jul 2020.
Rice, L., Wong, E., and Kolter, Z. Overfitting in adversarially robust deep learning. In International Conference on Machine Learning, pp. 8093-8104, Jul 2020.
Sagawa, S., Koh, P. W., Hashimoto, T. B., and Liang, P. Distributionally robust neural networks. In International Conference on Learning Representations, Apr 2020.
Sanyal, A., Dokania, P. K., Kanade, V., and Torr, P. How benign is benign overfitting? In International Conference on Learning Representations, Apr 2020.
Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K., and Madry, A. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems, pp. 5019-5031, Dec 2018.
Schneider, S., Rusak, E., Eck, L., Bringmann, O., Brendel, W., and Bethge, M. Improving robustness against common corruptions by covariate shift adaptation. In Advances in Neural Information Processing Systems, pp. 11539-11551, Dec 2020.
Springer, J. M., Mitchell, M., and Kenyon, G. T. Adversarial perturbations are not so weird: Entanglement of robust and non-robust features in neural network classifiers. arXiv preprint arXiv:2102.05110, 2021.
Stutz, D., Hein, M., and Schiele, B. Disentangling adversarial robustness and generalization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6967-6987, Jun 2019.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. In International Conference on Learning Representations, Apr 2014.
Telgarsky, M. Margins, shrinkage, and boosting. In International Conference on Machine Learning, pp. 307-315, Jun 2013.
Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. Robustness may be at odds with accuracy. In International Conference on Learning Representations, May 2019.
Vershynin, R. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
Welinder, P., Branson, S., Mita, T., Wah, C., Schroff, F., Belongie, S., and Perona, P. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
Wong, E., Rice, L., and Kolter, J. Z. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations, Apr 2020.
Wu, T., Tong, L., and Vorobeychik, Y. Defending against physically realizable attacks on image classification. In International Conference on Learning Representations, Apr 2020.
Xu, M., Zhang, J., Ni, B., Li, T., Wang, C., Tian, Q., and Zhang, W. Adversarial domain adaptation with domain mixup. In AAAI Conference on Artificial Intelligence, pp. 6502-6509, Feb 2020.
Yang, S., Premaratne, P., and Vial, P. Hand gesture recognition: An overview. In IEEE International Conference on Broadband Network Multimedia Technology, pp. 63-69, 2013.
Yin, D., Kannan, R., and Bartlett, P. Rademacher complexity for adversarially robust generalization. In International Conference on Machine Learning, pp. 7085-7094, Jun 2019.
Zhai, R., Cai, T., He, D., Dan, C., He, K., Hopcroft, J., and Wang, L. Adversarially robust generalization just requires more unlabeled data. arXiv preprint arXiv:1906.00555, 2019.
Zhang, H., Yu, Y., Jiao, J., Xing, E., Ghaoui, L. E., and Jordan, M. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pp. 7472-7482, Jun 2019.
Zhao, Z., Liu, Z., and Larson, M. Towards large yet imperceptible adversarial image perturbations with perceptual color distance. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1039-1048, 2020.
Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., and Torralba, A. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
Zhou, J., Liang, C., and Chen, J. Manifold projection for adversarial defense on face recognition. In European Conference on Computer Vision, pp. 288-305, Aug 2020. |
8,153,918 | Semantic Code Repair using Neuro-Symbolic Transformation Networks | We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code. The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated. In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program. Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs. Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete. Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space. We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs. Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model. | [] | Semantic Code Repair using Neuro-Symbolic Transformation Networks
Jacob Devlin (Microsoft Research; Google) [email protected]
Jonathan Uesato (Microsoft Research; Google DeepMind) [email protected]
Rishabh Singh (Microsoft Research)
Pushmeet Kohli (Microsoft Research; Google DeepMind) [email protected]
Semantic Code Repair using Neuro-Symbolic Transformation Networks
We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code. The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated. In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program. Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs. Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete. Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space. We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs. Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model.
Introduction
The term automatic code repair is typically used to describe two overarching tasks: The first involves fixing syntactic errors, which are malformations that cause the code to not adhere to some language specification [10,6]. The second, which is the focus of this work, involves fixing semantic bugs, which refer to any case where the actual program behavior is not the same as the behavior the programmer intended. Clearly, this covers an extremely wide range of code issues, so this work is limited to a class of simple semantic bugs, which we roughly define as: "Bugs that can be identified and fixed by an experienced human programmer, without running the code or having deep contextual knowledge of the program." This does not imply that the bugs are trivially fixable, as they often require time-consuming analysis of the code, rich background knowledge, and complex logical reasoning about the original programmer's intent.
Unlike previous work, we do not assume access to unit tests at training or test time. This requirement is important because it forces development of models which can infer intended semantic purpose from source code before proposing repairs, as a human programmer might. Most previous work relies on unit tests -- a common theme is combining coarse-grained repair models with search algorithms to find some repair that satisfies unit tests [11, 19]. In contrast, our proposed task requires models to deeply understand the code in order to propose a single set of repairs. Thus, semantic code repair without unit tests presents a concrete, real-world test bed for the more general task of understanding and modifying source code. Our semantic repair model was trained on a large corpus of open-source Python projects with synthetically injected bugs. We test on both real-bug and synthetic-bug test sets. To train the repair model, we first evaluated an attentional sequence-to-sequence architecture. Although this model was able to achieve non-trivial results, we believe it to be an unsuitable solution in a number of ways, such as the lack of direct competition between repair candidates at different locations. Instead, we use an alternative approach which decouples the non-statistical process of generating and applying repair proposals from the statistical process of scoring and ranking repairs.
This two-stage process itself is not new, but the core novelty in this work is the specific neural framework we propose for scoring repair candidates. We refer to our architecture as a Share, Specialize, and Compete (SSC) network:
• SHARE: The input code snippet is encoded with a neural network. This is a shared representation used by all repair types.
• SPECIALIZE: Each repair type is associated with its own specialized neural module [3], which emits a score for every repair candidate of that type.
• COMPETE: The raw scores from the specialized modules are normalized to compete in comparable probability space.
Our model can also be thought of as an evolution of work on neural code completion and summarization [2,7]. Like those systems, our SHARE network is used to learn a rich semantic understanding of the code snippet. Our SPECIALIZE modules then build on top of this representation to learn how to identify and fix specific bug types.
Although we have described our framework in relation to the problem of code repair, it has a number of other possible applications in sequence transformation scenarios where the input and output sequences have high overlap. For example, it could be applied to natural language grammar correction [18], machine translation post editing [12], source code refactoring [1], or program optimization [8].
Related Work
We believe this paper to be the first work addressing the issue of semantic program repair in the absence of unit tests, where functionality must be inferred from the code. However, our work adds to a substantial literature on program repair and program analysis, some of which we describe below:
Neural Syntax Repair: There have been several recent techniques developed for training neural networks to correct syntax errors in code. DeepFix [10] uses an attentional seq-to-seq model to fix syntax errors in a program by predicting both the buggy line and the statement to replace it. Bhatia and Singh [6] train an RNN based token sequence model to predict token insertion or replacement at program locations provided by the compiler to fix syntax errors.
Statistical Program Repair: Approaches such as Arcuri and Yao [4] and Goues et al. [9] use genetic programming techniques to iteratively propose program modifications. Prophet [13] learns a probabilistic model to rank patches for null pointer exceptions and array out-of-bounds errors. The model is learnt from human patches using a set of hand-engineered program features. In contrast, our neural model automatically learns useful program representations for repairing a much richer class of semantic bugs.
Natural Source Code / Big Code: A number of recent papers have trained statistical models on large datasets of real-world code. These papers study tasks involving varying degrees of reasoning about source code, such as code completion [17,16,7] and variable/class/function renaming [15,1].
Rule-Based Static Analyzers: Rule-based analyzers for Python (Pylint [20] and Pyflakes [14]) handle a highly disjoint set of issues compared to the type of bugs we are targeting, and generally do not directly propose fixes.
Problem Overview
As mentioned in the introduction, our goal is to develop a system which can statically analyze a piece of code and predict the location of the bug along with the actual fix. We do not assume to have unit tests or any other specification associated with the snippet being repaired. These proposed repairs can be directly presented to the user, or taken as input to some downstream application. Since the task of "fixing bugs in code" is incredibly broad, we limit ourselves to four classes of common Python bugs that are described with examples in Section 3.
Ideally, we would train such a repair model using a large number of buggy/repaired code snippets. However, such a large data set does not exist. It is possible to extract a modest test set of genuine bugs from project commit histories, but it is not enough to train a large-scale neural network. Fortunately, there is a large amount of real-world non-buggy code available to which bugs can be injected. We demonstrate that a model trained on synthesized bugs is able to generalize to a test set with real bugs.
Training Data To create the training data, we first downloaded all Python projects from GitHub that were followed by at least 15 users and had permissive licenses (MIT/BSD/Apache), which amounted to 19,000 total repositories. We extracted every function from each Python source file as a code snippet. In all experiments presented here, each snippet was analyzed on its own without any surrounding context. All models explored in this paper only use static code representations, so each snippet must be parsable as an Abstract Syntax Tree (AST), but does not need to be runnable. Note that many of the extracted functions are member functions of some class, so although they can be parsed, they are not runnable without external context. We only kept snippets with between 5 and 300 nodes in their AST, which approximately corresponds to 1 to 40 lines of code. The average extracted snippet had 45 AST nodes and 6 lines of code.
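A sketch of this extraction step using Python's ast module, with the node-count filter described above (names are illustrative):

```python
import ast

def extract_snippets(source, min_nodes=5, max_nodes=300):
    """Extract every function from a Python source file as a standalone
    snippet, keeping those with 5 to 300 AST nodes (roughly 1-40 lines)."""
    tree = ast.parse(source)
    snippets = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            num_nodes = sum(1 for _ in ast.walk(node))
            if min_nodes <= num_nodes <= max_nodes:
                snippets.append(node)
    return snippets
```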
This data was carved into training, test, and validation at the repository level, to eliminate any overlap between training and test. We also filtered out any training snippet which overlapped with any test snippet by more than 5 lines. In total we extracted 2,900,000 training snippets, and held-out 2,000 for test and 2,000 for validation.
Bug/Repair Types In this work, we consider four general classes of semantic repairs, which were chosen to be "simple" but still common during development, as reported by the Python programmers:
• VarReplace: An incorrect local variable is used at a particular location, and should be replaced with another variable from the snippet.
• CompReplace: An incorrect comparison operator is used at a particular location.
• IsSwap: The is operator is used instead of is not, or vice versa.
• ClassMember: A self accessor is missing from a variable.
Generating synthetic bugs from these categories is straightforward. For example, for VarReplace, we synthesize bugs by replacing one random variable from a snippet with another variable from the same snippet. All bug types, locations, and replacements were chosen with random uniform probability.
We applied this bug synthesis procedure to all of the training snippets to create our training data, as well as a synthetic test set (Synth-Bug Test).
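A sketch of VarReplace bug injection using Python's ast module; the function name and sampling details are our own illustrative choices:

```python
import ast
import random

def inject_var_replace(tree):
    """Synthesize a VarReplace bug: replace one randomly chosen variable
    occurrence with another variable name from the same snippet."""
    names = [n for n in ast.walk(tree) if isinstance(n, ast.Name)]
    variables = sorted({n.id for n in names})
    if len(variables) < 2:
        return tree  # nothing to corrupt
    target = random.choice(names)                                  # bug location
    target.id = random.choice([v for v in variables if v != target.id])
    return tree

# CompReplace / IsSwap / ClassMember bugs can be injected analogously,
# e.g. by mutating ast.Compare operators or dropping a `self.` accessor.
```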
Real-Bug Test Set
In order to evaluate on a test set where both the code and bugs were real, we mined the Git commit history from the projects crawled from Github. We found that it was quite difficult to automatically distinguish bug repairs from other code changes such as refactoring, especially since we wanted to avoid introducing biases into the data set through the use of complex filtering heuristics. For this reason, we limited extraction to commits where exactly one line in a file was changed, and the commit contained a word from the list "bug, error, issue, exception, fix". We then filtered these commits to only keep those that correspond to one of our four bug types. Overall, we obtained 926 buggy/repaired snippet pairs with exactly one bug each. We believe that the small number of extracted snippets does not reflect the true frequency of these bugs during development, but rather reflect the fact that (1) one line Git commits are quite rare, (2) these type of bugs rarely make it into the public branch of high-quality repositories.
Baseline Attentional Sequence-to-Sequence Model
Since the goal of program repair is to transform a buggy snippet into a repaired snippet, an obvious baseline is an attentional sequence-to-sequence neural network [5], which has been successfully used for the related tasks of syntactic code repair and code completion. On those tasks, sequence-to-sequence models have been shown to outperform a number of baseline methods such as n-gram language models or classifiers.
Because this model must actually generate a sequence, we first converted the buggy/repaired ASTs from Synth-Bug Train back to their tokenized source code, which is a simple deterministic process. The architecture used is almost identical to the machine translation system of Bahdanau et al. [5]. To handle the high number of rare tokens in the data, tokens were split by underscores and camel case. The size of the neural net vocabulary was 50,000, and the final out-of-vocabulary (OOV) rate was 1.1%. In evaluation we included OOVs in the reference, so OOVs did not cause a degradation in results. The LSTMs were 512-dimensional and decoding was performed with a beam of 8. When evaluating on the Single-Repair Synth-Bug Test set, the 1-best output exactly matches the reference 26% of the time. If we give the model credit when it predicts the correct repair but also predicts other changes, the accuracy is 41%.
Although this accuracy seems non-trivial, there are some intuitive weaknesses in using a sequence-to-sequence architecture for semantic code repair. First, the system is burdened with constructing the entire output sequence, even though on average it is 98.5% identical to the input. Second, potential repairs at different locations do not fairly "compete" with one another in probability space, but only compete with tokens at the same location. Third, it is difficult to use a richer code representation such as the AST, since the repaired code must be generated.
Share, Specialize, and Compete (SSC) Model
Instead of directly generating the entire output snippet with a neural network, we consider an alternative approach where repairs are iteratively applied to the input snippet. Here, for each bug type described in Section 3, the system proposes all possible repairs of that type in the snippet. Although these candidate generators are manually written, they simply propose all possible repairs of a given type and do not perform any heuristic pruning, so each of the four generators can be written in a few lines of code. The challenging work of determining the correct repair using the code context is performed by our statistical model.
For clarity of terminology, a repair candidate is a particular fix that can be made at a particular location (e.g., "Replace == with != at node 4"). A repair instance refers to a particular repair location the generator proposes and all of the candidates at that location. Each instance is guaranteed to have exactly one no-op candidate, which results in no change to the AST if applied (e.g., "Replace == with == at node 4"). The reference label refers to the correct candidate of a given instance (e.g., "The correct replacement at node 4 is <="). Note that for the majority of repair instances that are proposed, the reference label will be the no-op candidate. We now present the statistical model used to score repair candidates. We refer to it as a Share, Specialize, and Compete (SSC) network. A visual representation is given in Figure 1.
Share
The SHARE component performs a rich encoding of the input AST using a neural network. Crucially, this encoding is only conditioned on the AST itself and not on any repair candidates, so it serves a shared representation for the next component. This network can take many forms, with the only restriction being that it must emit one vector of some dimension d for each node in the AST. An example of a Python AST is given on the right side of Figure 1.
Here, for efficiency purposes, we encode the AST with a sequential bidirectional LSTM by enumerating a depth first traversal of the nodes, which roughly corresponds to "source code order." However, we encode the rich AST structure by using embeddings for (1) the absolute position of the node in the AST, (2) the type of the node in the AST, (3) the relationship between the node and its parent, and (4) the surface form string of the node. These tokens are projected through an embedding layer and then concatenated, and the resulting vector is used as input to a bidirectional LSTM. The output of this layer is represented as H = (h 1 , h 2 , ..., h n ), where h i ∈ R d , d is the hidden dimension, and n is the number of nodes in the AST.
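A sketch of the SHARE encoder in PyTorch, assuming index inputs for the four node features; the layer sizes follow the dimensions reported in the experiments (hidden 512, embedding 128), but the class and argument names are our own:

```python
import torch
import torch.nn as nn

class ShareEncoder(nn.Module):
    """Embeds four features per AST node (absolute position, node type,
    parent relation, surface string), concatenates them, and runs a BiLSTM
    over the depth-first node traversal."""
    def __init__(self, vocab_sizes, emb=128, hidden=512):
        super().__init__()
        self.embs = nn.ModuleList([nn.Embedding(v, emb) for v in vocab_sizes])
        self.lstm = nn.LSTM(4 * emb, hidden // 2, bidirectional=True,
                            batch_first=True)

    def forward(self, pos, ntype, parent_rel, surface):
        # Each input: (batch, n) indices over the depth-first traversal.
        feats = [e(t) for e, t in zip(self.embs, (pos, ntype, parent_rel, surface))]
        h, _ = self.lstm(torch.cat(feats, dim=-1))   # H: (batch, n, hidden)
        return h
```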
The core concept of the shared component is that the vast majority of neural computation is performed here, independent of the repairs themselves. We contrast this to an alternative approach where each repair candidate is applied to the input AST and each resulting repair candidate AST is encoded with an RNN -such an approach would be orders of magnitude more expensive to train and evaluate.
Specialize
The SPECIALIZE component scores each repair candidate using a specialized network module [3] for each repair type. Instances of the same type are processed by the same module, but obtain separate scores since they have different input. Each module takes as input the shared representation H and a repair instance R with m candidates. It produces an un-normalized scalar score for each candidate in the instance, ŝ = (s_1, ..., s_m). We use two module types:
Multi-Layer Perceptron (MLP) Module: This module performs scoring over a fixed label set using one non-linear hidden layer. This is used for the CompReplace, IsSwap, and ClassMember generators. It is computed as:
ŝ = V tanh(W h_j)
where V ∈ R^{m×c}, W ∈ R^{c×d}, c is the hidden dimension, m is the number of labels (i.e., repair candidates), and j is the location corresponding to the repair instance R. Note that separate V and W weights are learned for each repair type.
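A sketch of such an MLP module in PyTorch, matching the formula above; the default label count is illustrative:

```python
import torch
import torch.nn as nn

class MLPModule(nn.Module):
    """Scores a fixed candidate set (e.g. comparison operators) from the
    shared encoding at repair location j: s_hat = V tanh(W h_j)."""
    def __init__(self, d=512, c=512, num_labels=6):
        super().__init__()
        self.W = nn.Linear(d, c, bias=False)
        self.V = nn.Linear(c, num_labels, bias=False)

    def forward(self, H, j):
        # H: (n, d) shared node encodings; j: repair location index.
        return self.V(torch.tanh(self.W(H[j])))   # (num_labels,) raw scores
```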
Pooled Pointer Module: Predicting variables for VarReplace presents a unique challenge when modeling code repair. First, the variable names in a test snippet may never have been seen in training. More importantly, the semantics of a variable are primarily defined by its usage, rather than its name.
To address this, instead of using a fixed output layer, each candidate (i.e., another variable) is encoded using pointers to each usage of that variable in the AST. An example is given in Figure 1. Formally, it is computed as:
s_i = tanh(W h_j) · MaxPool_{k ∈ p_i}(tanh(V h_k))
where i is the candidate (i.e., variable) index, p i is the list of locations (pointers) of the variable i in the AST, j is the location of the repair in the AST, and V, W ∈ R c×d are learned weight matrices.
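A sketch of the pooled pointer module in PyTorch; the pointer-list input format is our own assumption:

```python
import torch
import torch.nn as nn

class PooledPointerModule(nn.Module):
    """Scores each candidate variable by max-pooling over the encodings
    of all its usage locations, then dotting with the repair-site query."""
    def __init__(self, d=512, c=512):
        super().__init__()
        self.W = nn.Linear(d, c, bias=False)
        self.V = nn.Linear(d, c, bias=False)

    def forward(self, H, j, pointers):
        # H: (n, d); j: repair location; pointers: list of LongTensors,
        # pointers[i] holding the AST locations of candidate variable i.
        query = torch.tanh(self.W(H[j]))                            # (c,)
        scores = []
        for p_i in pointers:
            pooled = torch.tanh(self.V(H[p_i])).max(dim=0).values   # (c,)
            scores.append(query @ pooled)                           # scalar s_i
        return torch.stack(scores)
```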
Compete
Once a scalar score has been produced for each repair candidate, these must be normalized to compete against one another. We consider two approaches to normalizing these scores:
Local Norm: A separate softmax is performed for each repair instance (i.e., location and type), so candidates are only normalized against other candidates in the same instance, including no-op. At test time we sort all candidates across all instances by probability, even though they have not been normalized against each other.
Global Norm: All candidates at all locations are normalized with a single softmax. No-op candidates are not included in this formulation.
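A sketch of the two normalization schemes, assuming per-instance score tensors:

```python
import torch

def local_norm(scores_per_instance):
    # One softmax per repair instance (each includes its no-op candidate).
    return [torch.softmax(s, dim=0) for s in scores_per_instance]

def global_norm(scores_per_instance):
    # Single softmax over all candidates at all locations (no-ops excluded).
    return torch.softmax(torch.cat(scores_per_instance), dim=0)
```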
Experimental Results
We train the SSC model on the Synth-Bug Train data for 30 epochs. Different bugs are synthesized at each epoch which significantly mitigates over-fitting. We set the hidden dimensions of the SHARE and SPECIALIZE components to 512, and the embedding size to 128. A dropout of 0.25 is used on the output of the SHARE component. Training was done with plain SGD + gradient clipping using an in-house toolkit. A small amount of hyperparameter tuning was performed on the Synth-Bug Val set.
In the first condition we evaluate, all snippets in both training and test have exactly one bug each. As was described in Section 3, for Synth-Bug Test, the code snippets are real, but the bugs have been artificially inserted at random. For Real-Bug Test, we extracted 926 buggy/fixed snippet pairs mined from GitHub commit logs, so both the snippet and bug are real. The average snippet in the Real-Bug Test set has 31 repair locations and 102 total repair candidates, compared to 20 locations and 50 candidates of the Synth-Bug Test test set. Table 1 presents Single-Repair results on Synth-Bug and Real-Bug test sets. The accuracy metric denotes how often the 1-best repair prediction exactly matches the reference repair, i.e., the model correctly detects where the bug is and correctly predicts how to fix it. In this case, the model was constrained to predict exactly one repair, but all candidates across all repair types are directly competing against one another. On Synth-Bug, the SSC model drastically outperforms the attentional sequence-to-sequence model, even using the upper bound seq-to-seq accuracy. Since global normalization and local normalization have similar performance and it is not obvious how to extend global normalization to multiple repairs, we use local normalization for multi-repair experiments.
On Real-Bug Test, the absolute accuracy is lower than on Synth-Bug Test, but the SSC model still significantly outperforms the seq-to-seq baseline. To better understand the absolute quality of the Real-Bug Test results, we perform a preliminary human evaluation in Section 6.
Example predictions from the Real-Bug Test set are presented below. The red region is the bug, and the green is the reference repair. For the incorrect predictions, the blue region is the predicted repair. Results on all 926 Real-Bug Test examples are provided in the supplementary material.
Single-Repair (Accuracy)

| Model | Synth-Bug | Real-Bug |
|---|---|---|
| Att. Seq-to-Seq | 26% (40%†) | 13% (18%†) |
| SSC (Global Norm) | 86% | 41% |
| SSC (Local Norm) | 87% | 41% |
| VarReplace | 82% | 36% |
| CompReplace | 80% | 29% |
| IsSwap | 96% | 82% |
| ClassMember | 95% | 56% |

Multi-Repair (Synth-Bug)

| Num Bugs | F-Score | Exact Accuracy |
|---|---|---|
| 0 | - | 82% |
| 1 | 85% | 78% |
| 2 | 84% | 61% |
| 3 | 81% | 45% |
| All | 82% | 66% |

Table 1: Repair Accuracy: 1-best repair accuracy prediction for the single-repair and multi-repair conditions. † denotes "upper bound" accuracies as in Sec. 4.
In the multi-repair setting, we consider the more realistic scenario where a snippet may have multiple bugs, or may have none. To model this scenario, the data was re-generated so that 0, 1, 2, or 3 bugs were added to each training/test/val snippet, with equal probability of each. We refer to these new sets as Synth-Multi-Bug Test and Synth-Multi-Bug Val. Unfortunately, we were not able to extract multi-bug examples from the Real-Bug data.
The major new complexity is that the system must now determine how many repairs to predict per snippet, if any. We use a simple threshold-based approach: since each repair candidate is assigned a probability by the model, we simply emit all repairs which have probability greater than δ. The system is not constrained to emit only 3 repairs. A parameter sweep over the validation set revealed that accuracy is surprisingly insensitive to δ, so we simply use δ = 0.5. Note that we only perform a single pass of repair scoring here; in future work we will explore an iterative decoder.
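A minimal sketch of this thresholding step, with hypothetical names:

```python
def emit_repairs(candidate_probs, delta=0.5):
    """Emit every repair candidate whose probability exceeds delta.

    candidate_probs: iterable of (repair_candidate, probability) pairs
    pooled across all instances in the snippet. A single scoring pass
    is used, and no constraint is placed on the number of emitted repairs.
    """
    return [c for c, p in candidate_probs if p > delta]
```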
Results are presented on the right side of Table 1. For accuracy at the per-repair level, there is only a moderate decrease in F-score from 85% to 81% between the 1-repair and 3-repair settings. The Exact Accuracy does decrease significantly, but not beyond the "expected value." In other words, three independent 1-repair snippets have an expected exact accuracy of $0.78^3 \approx 0.47$, which is similar to the 45% accuracy observed for 3-repair snippets. We also see that the system is 82% accurate at correctly predicting when a snippet has no bugs.
Human Evaluation To better understand the significance of our system's performance, we performed a preliminary human evaluation under conditions identical to the model's. The evaluator was presented with a snippet from the test set, with all repair instances highlighted in the code. The evaluator could click on a repair location to see all candidates at that location. Each of the four bug types was explained to the evaluators, and they were told that there was always exactly one bug per snippet. This evaluation required experienced Python programmers performing a complex task, so we performed a small evaluation using 4 evaluators and 30 snippets each from the Real-Bug Test set. Evaluators typically used 2-6 minutes per snippet. These snippets were limited to 150 nodes for the benefit of the human evaluators, so the SSC model accuracy is higher on this subset than on the full set.
On these snippets, the humans achieved 37% accuracy compared to the 60% accuracy of the SSC model. One possible reason for this performance gap is that the model is simply better than humans at this task, presumably because it has been able to learn from such a large amount of data. Another possible reason is that humans did not spend the time or mental energy to perform as well as they could. To examine these possibilities, we performed a second evaluation with the same set of humans.
In this evaluation, instead of having to consider all possible repairs -up to 100 candidates -the humans only had to decide between the four "most likely" repair candidates. These candidates were generated by taking the top four predictions from the SSC model (or the top three and the correct repair), shown in random order. In this evaluation, humans achieved 76% accuracy, which shows that the low performance of humans in the full task is due to the mental energy required, rather than lack of context or code understanding. We believe that these evaluations demonstrate that Real-Bug Test is a challenging set and that the accuracy achieved by the SSC model is empirically strong.
Analysis and Discussion
Our first goal is to conceptually understand at what "level" the model was able to generalize to new snippets. Although the hidden activations of the neural network model are not directly interpretable, we can attempt to interpret the latent model space using nearest neighbor retrieval on the hidden vectors $h_i$. The goal is to determine if the model is simply memorizing common n-grams, or if it is actually learning high-level repair concepts. Nearest neighbor retrievals for several test snippets are presented here:
In Example 1, we see the model is able to learn a high-level pattern "y.x = x". In Example 2 we see the pattern "if (x $c_1$ y...) elif (x $c_2$ y...)". In Example 3 we see the pattern "Strings usually use the equality (or inequality) operator." In all cases, the surface form of the training nearest neighbor is very different from the test snippet. From this, it appears that the SSC model is able to learn a number of interesting, high-level patterns which it uses to generalize to new data.
We next examined failure cases of the SSC model which a human evaluator was able to repair correctly. Here, the primary weakness of the model was that humans were able to better infer program intent by using variable names, function names, and string literals. One major fault in the current implementation is a lack of sub-word representation. For example, consider a repair of the expression "dtypes.append(x)" where x could be dtype or syncnode. It is easy for a human to infer that dtype is the more sensible choice even without a deeper understanding of the code. In future work we plan to explore character-level encoding of value strings so that lexical similarity can be modeled latently by the network.
We finally examined cases where the SSC model succeeded but the human evaluator failed. Generally, we conclude that the model's primary advantage was the sheer amount of data it was able to learn from. For example, consider the expression "if (db.version_info <= 3)". This may not be immediately suspicious to a human, but if we analyze the reference training data we can measure that the pattern "if (x.version_info <= y)" is 10 times less frequent than the pattern "if (x.version_info < y)". Intuitively, this makes sense because if a feature is added in version y, it is not useful to check <= y. However, the neural model is able to easily learn such probabilistic distributions even without deeper understanding of why they are true.
Conclusion
We presented a novel neural network architecture that allows specialized network modules to explicitly model different transformation types based on a shared input representation. When applied to the domain of semantic code repair, our model achieves high accuracy relative to a seq2seq baseline and an expert human evaluation. In our analysis of the results, we find that our system is able to learn fairly sophisticated repair patterns from the training data. In future work we plan to expand our model to cover a larger set of bug types, and ideally these bug types would be learned automatically from a corpus of real-world bugs. We also plan to apply the SSC model to other tasks.
A Pooled Pointer Module Implementation

Figure 2 shows the application of a pooled pointer module at a single time step, predicting the variable replacement scores for each potential replacement of the token fname. The input is the per-token representation computed by the SHARE module. Representations for variable names are passed through a pooling module which outputs per-variable pooled representations. These representations are then passed through a similarity module, as in standard pointer networks, to yield a (dynamically-sized) output dictionary containing one score for each unique variable.
As described in Section 5.2, the pooling module consists of a projection layer followed by a pooling operation. For each variable $i$, its representation is computed by pooling the set of all its occurrences, $p_i$:
$$v_i = \mathrm{MaxPool}_{k \in p_i}\left(\tanh(V h_k)\right)$$
where $h_k$ denotes the representation computed by the SHARE module at location $k$.
The similarity module produces un-normalized scores for each potential variable replacement i. When applied at repair location j, it computes:
$$s_{ij} = \tanh(W h_j) \cdot v_i$$
B Examples of Predictions
We include the full set of system predictions for the Real-Bug Test set. We have made these available at https://iclr2018anon.github.io/semantic_code_repair/index.html.
C Additional Results
Varying source code complexity: Figure 3 presents the accuracy of the model across functions with varying numbers of repair candidates. While the repair accuracy decreases with the number of repair candidates, the model achieves reasonably high accuracy even for functions with over 100 repair candidates. Among functions with 101-150 repair candidates, the model accuracy is 73% for synthetically introduced bugs and 36% for real bugs.

Importance of AST structure: The Python abstract syntax tree is a rich source of semantic information about the tokens in a snippet. As described in Section 5.1, in addition to the original token string, we also include (1) the absolute position of the node in the AST, (2) the type of the node, and (3) the relationship between the node and its parent. To test the model's reliance on this information, we present ablation results over these additional feature layers in Table 2.
We see that using information from the AST provides a significant performance gain. Still, even when only using the surface-form values, the SSC model outperforms the attentional sequence-to-sequence baseline by a large margin (78.3% repair accuracy compared to 26% for the sequence-to-sequence model).

| Token types | Repair Accuracy | Location Accuracy |
|---|---|---|
| All | 87.1% | 91.3% |
| No Pos. | 86.8% | 91.1% |
| No Pos., Rel. | 85.7% | 90.9% |
| No Pos., Rel., Val. (Type Only) | 80.9% | 86.7% |
| Value Only | 78.3% | 84.3% |

Table 2: Results on Synth-Bug Test with ablation on different token types from the input AST representation.
Figure 1: Model Visualization: A visualization of the Share, Specialize, and Compete architecture for neural program repair.

Figure 2: Pooled Pointer Module: A diagram of the pooled pointer network module.

Figure 3: Results binned by number of repair candidates in the snippet.
All data sets will be made publicly available.
|
238,419,270 | UNDERSTANDING DOMAIN RANDOMIZATION FOR SIM-TO-REAL TRANSFER | Reinforcement learning encounters many challenges when applied directly in the real world. Sim-to-real transfer is widely used to transfer the knowledge learned from simulation to the real world. Domain randomization, one of the most popular algorithms for sim-to-real transfer, has been demonstrated to be effective in various tasks in robotics and autonomous driving. Despite its empirical successes, theoretical understanding of why this simple algorithm works is limited. In this paper, we propose a theoretical framework for sim-to-real transfer, in which the simulator is modeled as a set of MDPs with tunable parameters (corresponding to unknown physical parameters such as friction). We provide sharp bounds on the sim-to-real gap, the difference between the value of the policy returned by domain randomization and the value of an optimal policy for the real world. We prove that sim-to-real transfer can succeed under mild conditions without any real-world training samples. Our theory also highlights the importance of using memory (i.e., history-dependent policies) in domain randomization. Our proof is based on novel techniques that reduce the problem of bounding the sim-to-real gap to the problem of designing efficient learning algorithms for infinite-horizon MDPs, which we believe are of independent interest. | [] | UNDERSTANDING DOMAIN RANDOMIZATION FOR SIM-TO-REAL TRANSFER
Xiaoyu Chen, Jiachen Hu, Chi Jin, Lihong Li, Liwei Wang

Key Laboratory of Machine Perception, MOE, School of Artificial Intelligence, Peking University; International Center for Machine Learning Research, Peking University; Department of Electrical and Computer Engineering, Princeton University

Published as a conference paper at ICLR 2022
Reinforcement learning encounters many challenges when applied directly in the real world. Sim-to-real transfer is widely used to transfer the knowledge learned from simulation to the real world. Domain randomization, one of the most popular algorithms for sim-to-real transfer, has been demonstrated to be effective in various tasks in robotics and autonomous driving. Despite its empirical successes, theoretical understanding of why this simple algorithm works is limited. In this paper, we propose a theoretical framework for sim-to-real transfer, in which the simulator is modeled as a set of MDPs with tunable parameters (corresponding to unknown physical parameters such as friction). We provide sharp bounds on the sim-to-real gap, the difference between the value of the policy returned by domain randomization and the value of an optimal policy for the real world. We prove that sim-to-real transfer can succeed under mild conditions without any real-world training samples. Our theory also highlights the importance of using memory (i.e., history-dependent policies) in domain randomization. Our proof is based on novel techniques that reduce the problem of bounding the sim-to-real gap to the problem of designing efficient learning algorithms for infinite-horizon MDPs, which we believe are of independent interest.

* These two authors contributed equally.
INTRODUCTION
Reinforcement Learning (RL) is concerned with sequential decision making, in which the agent interacts with the environment to maximize its cumulative rewards. This framework has achieved tremendous empirical successes in various fields such as Atari games, Go and StarCraft (Mnih et al., 2013;Silver et al., 2017;Vinyals et al., 2019). However, state-of-the-art algorithms often require a large amount of training samples to achieve such a good performance. While feasible in applications that have a good simulator such as the examples above, these methods are limited in applications where interactions with the real environment are costly and risky, such as healthcare and robotics.
One solution to this challenge is sim-to-real transfer (Floreano et al., 2008;Kober et al., 2013). The basic idea is to train an RL agent in a simulator that approximates the real world and then transfer the trained agent to the real environment. This paradigm has been widely applied, especially in robotics (Rusu et al., 2017;Peng et al., 2018;Chebotar et al., 2019) and autonomous driving (Pouyanfar et al., 2019;Niu et al., 2021). Sim-to-real transfer is appealing as it provides an essentially unlimited amount of data to the agent, and reduces the costs and risks in training.
However, sim-to-real transfer faces the fundamental challenge that the policy trained in the simulated environment may have degenerated performance in the real world due to the sim-to-real gap-the mismatch between simulated and real environments. In addition to building higher-fidelity simulators to alleviate this gap, domain randomization is another popular method (Sadeghi & Levine, 2016;Tobin et al., 2017;Peng et al., 2018;OpenAI et al., 2018). Instead of training the agent in a single simulated environment, domain randomization randomizes the dynamics of the environment, thus exposes the agent to a diverse set of environments in the training phase. Policies learned entirely in the simulated environment with domain randomization can be directly transferred to the physical world with good performance (Sadeghi & Levine, 2016;Matas et al., 2018;OpenAI et al., 2018).
In this paper, we focus on understanding sim-to-real transfer and domain randomization from a theoretical perspective. The empirical successes raise the question: can we provide guarantees for the sub-optimality gap of the policy that is trained in a simulator with domain randomization and directly transferred to the physical world? To do so, we formulate the simulator as a set of MDPs with tunable latent variables, which corresponds to unknown parameters such as friction coefficient or wind velocity in the real physical world. We model the training process with domain randomization as finding an optimal history-dependent policy for a latent MDP, in which an MDP is randomly drawn from a set of MDPs in the simulator at the beginning of each episode.
Our contributions can be summarized as follows:
• We propose a novel formulation of sim-to-real transfer and establish the connection between domain randomization and the latent MDP model (Kwon et al., 2021). The latent MDP model illustrates the uniform sampling nature of domain randomization, and helps to analyze the sim-to-real gap for the policy obtained from domain randomization.
• We study the optimality of domain randomization in three different settings. Our results indicate that the sim-to-real gap of the policy trained in simulation can be $o(H)$ when the randomized simulator class is finite or satisfies a certain smoothness condition, where $H$ is the horizon of the real-world interaction. We also provide a lower bound showing that such benign conditions are necessary for efficient learning. Our theory also highlights the importance of using memory (i.e., history-dependent policies) in domain randomization.
• To analyze the optimality of domain randomization, we propose a novel proof framework which reduces the problem of bounding the sim-to-real gap of domain randomization to the problem of designing efficient learning algorithms for infinite-horizon MDPs, which we believe is of independent interest.
• As a byproduct of our proof, we provide the first provably efficient model-based algorithm for learning infinite-horizon average-reward MDPs with general function approximation (Algorithm 4 in Appendix C.3). Our algorithm achieves a regret bound of $\tilde{O}(D\sqrt{d_e T})$, where $T$ is the total number of timesteps and $d_e$ is a complexity measure of a certain function class $\mathcal{F}$ that depends on the eluder dimension (Russo & Van Roy, 2013; Osband & Van Roy, 2014).
RELATED WORK
Sim-to-Real and Domain Randomization The basic idea of sim-to-real is to first train an RL agent in simulation, and then transfer it to the real environment. This idea has been widely applied to problems such as robotics (e.g., Ng et al., 2006;Bousmalis et al., 2018;Tan et al., 2018;OpenAI et al., 2018) and autonomous driving (e.g., Pouyanfar et al., 2019;Niu et al., 2021). To alleviate the influence of reality gap, previous works have proposed different methods to help with sim-to-real transfer, including progressive networks (Rusu et al., 2017), inverse dynamics models (Christiano et al., 2016) and Bayesian methods (Cutler & How, 2015;Pautrat et al., 2018). Domain randomization is an alternative approach to making the learned policy to be more adaptive to different environments (Sadeghi & Levine, 2016;Tobin et al., 2017;Peng et al., 2018;OpenAI et al., 2018), thus greatly reducing the number of real-world interactions.
There are also theoretical works related to sim-to-real transfer. Jiang (2018) uses the number of different state-action pairs as a measure of the gap between the simulator and the real environment. Under the assumption that the number of different pairs is constant, they prove the hardness of sim-to-real transfer and propose efficient adaptation algorithms with further conditions. Feng et al. (2019) prove that an approximate simulator model can effectively reduce the sample complexity in the real environment by eliminating sub-optimal actions from the policy search space. Zhong et al.
(2019) formulate a theoretical sim-to-real framework using the rich observation Markov decision processes (ROMDPs), and show that the transfer can result in a smaller real-world sample complexity. None of these results study benefits of domain randomization in sim-to-real transfer. Furthermore, all above works require real-world samples to fine-tune their policy during training, while our work and the domain randomization algorithm do not.
POMDPs and Latent MDPs Partially observable Markov decision processes (POMDPs) are a general framework for sequential decision-making problems when the state is not fully observable (Smallwood & Sondik, 1973; Kaelbling et al., 1998; Vlassis et al., 2012; Jin et al., 2020a; Xiong et al., 2021). Latent MDPs (Kwon et al., 2021), or LMDPs, are a special type of POMDPs, in which the real environment is randomly sampled from a set of MDPs at the beginning of each episode. This model has been widely investigated under different names, such as hidden-model MDPs and multi-model MDPs. There are also results studying the planning problem in LMDPs, when the true parameters of the model are given (Chades et al., 2012; Buchholz & Scheftelowitsch, 2019; Steimle et al., 2021). Kwon et al. (2021) consider the regret minimization problem for LMDPs, and provide efficient learning algorithms under different conditions. We remark that all works mentioned above focus on the problem of finding the optimal policies for POMDPs or latent MDPs, which is orthogonal to the central problem of this paper: bounding the performance gap of transferring the optimal policies of latent MDPs from simulation to the real environment.
Infinite-horizon Average-Reward MDPs Recent theoretical progress has produced many provably sample-efficient algorithms for RL in the infinite-horizon average-reward setting. Nearly matching upper and lower bounds are known for the tabular setting (Jaksch et al., 2010; Fruit et al., 2018; Zhang & Ji, 2019; Wei et al., 2020). Beyond the tabular case, Wei et al. (2021) propose efficient algorithms for infinite-horizon MDPs with linear function approximation. To the best of our knowledge, our result (Algorithm 4) is the first efficient algorithm with near-optimal regret for infinite-horizon average-reward MDPs with general function approximation.
PRELIMINARIES
EPISODIC MDPS
We consider episodic RL problems where each MDP is specified by $\mathcal{M} = (S, A, P, R, H, s_1)$. $S$ and $A$ are the state and action spaces with cardinality $S$ and $A$ respectively. We assume that $S$ and $A$ are finite but can be extremely large. $P : S \times A \to \Delta(S)$ is the transition probability matrix, so that $P(\cdot|s, a)$ gives the distribution over states if action $a$ is taken at state $s$; $R : S \times A \to [0, 1]$ is the reward function. $H$ is the number of steps in one episode.

For simplicity, we assume the agent always starts from the same state in each episode, and use $s_1$ to denote the initial state at step $h = 1$. It is straightforward to extend our results to the case with random initialization. At step $h \in [H]$, the agent observes the current state $s_h \in S$, takes action $a_h \in A$, receives reward $R(s_h, a_h)$, and transits to state $s_{h+1}$ with probability $P(s_{h+1}|s_h, a_h)$. The episode ends when $s_{H+1}$ is reached.
We consider the history-dependent policy class $\Pi$, where $\pi \in \Pi$ is a collection of mappings from the history observations to the distributions over actions. Specifically, we use $\mathrm{traj}_h = \{(s_1, a_1, s_2, a_2, \cdots, s_h) \mid s_i \in S, a_i \in A, i \in [h]\}$ to denote the set of all possible trajectories of history up to step $h$. We define a policy $\pi \in \Pi$ to be a collection of $H$ policy functions $\{\pi_h : \mathrm{traj}_h \to \Delta(A)\}_{h \in [H]}$. We define $V^{\pi}_{\mathcal{M},h} : S \to \mathbb{R}$ to be the value function at step $h$ under policy $\pi$ on MDP $\mathcal{M}$, i.e., $V^{\pi}_{\mathcal{M},h}(s) = \mathbb{E}_{\mathcal{M},\pi}\left[\sum_{t=h}^{H} R(s_t, a_t) \mid s_h = s\right]$. Accordingly, we define $Q^{\pi}_{\mathcal{M},h} : S \times A \to \mathbb{R}$ to be the Q-value function at step $h$: $Q^{\pi}_{\mathcal{M},h}(s, a) = \mathbb{E}_{\mathcal{M},\pi}\left[R(s_h, a_h) + \sum_{t=h+1}^{H} R(s_t, a_t) \mid s_h = s, a_h = a\right]$. We use $\pi^*_{\mathcal{M}}$ to denote the optimal policy for a single MDP $\mathcal{M}$. It can be shown that there exists $\pi^*_{\mathcal{M}}$ such that the policy at step $h$ depends only on the state at step $h$ and not on any other prior history. That is, $\pi^*_{\mathcal{M}}$ can be expressed as a collection of $H$ policy functions mapping from $S$ to $\Delta(A)$. We use $V^*_{\mathcal{M},h}$ and $Q^*_{\mathcal{M},h}$ to denote the optimal value and Q-functions under the optimal policy $\pi^*_{\mathcal{M}}$ at step $h$.
PRACTICAL IMPLEMENTATION OF DOMAIN RANDOMIZATION
In this subsection, we briefly introduce how domain randomization works in practical applications. Domain randomization is a popular technique for improving domain transfer (Tobin et al., 2017;Peng et al., 2018;Matas et al., 2018), which is often used for zero-shot transfer when the target domain is unknown or cannot be easily used for training. For example, by highly randomizing the rendering settings for their simulated training set, Sadeghi & Levine (2016) trained vision-based controllers for a quadrotor using only synthetically rendered scenes. OpenAI et al. (2018) studied the problem of dexterous in-hand manipulation. The training is performed entirely in a simulated environment in which they randomize the physical parameters of the system like friction coefficients and vision properties such as object's appearance.
To apply domain randomization in simulation training, the first step is usually to build a simulator that is close to the real environment. The simulated model is further improved to match the physical system more closely through calibration. Though the simulation remains a rough approximation of the physical setup after these engineering efforts, these steps ensure that the randomized simulators generated by domain randomization can cover the real-world variability. During the training phase, many aspects of the simulated environment are randomized in each episode in order to help the agent learn a policy that generalizes to reality. The policy trained with domain randomization can be represented using a recurrent neural network with memory, such as an LSTM (Yu et al., 2018; OpenAI et al., 2018; Doersch & Zisserman, 2019). Such a memory-augmented structure allows the policy to potentially identify the properties of the current environment and adapt its behavior accordingly. With sufficient data sampled using the simulator, the agent can find a near-optimal policy w.r.t. the average value function over a variety of simulation environments. This policy has shown great adaptivity in many previous results, and can be directly applied to the physical world without any real-world fine-tuning (Sadeghi & Levine, 2016; Matas et al., 2018; OpenAI et al., 2018).
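The following Python sketch illustrates this training loop; `make_simulator`, `param_ranges`, and the `policy` interface are placeholders for illustration, not APIs from any particular system:

```python
import random

def domain_randomized_training(make_simulator, param_ranges, policy, num_episodes):
    """Sketch of a domain-randomization training loop.

    At each episode the physical parameters (e.g., friction) are resampled,
    the memory-augmented policy (e.g., an LSTM) interacts with the resulting
    simulator, and is updated to maximize the average return over episodes.
    """
    for _ in range(num_episodes):
        params = {k: random.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
        env = make_simulator(**params)    # one MDP drawn from the set U
        trajectory = policy.rollout(env)  # history-dependent acting
        policy.update(trajectory)         # any RL update rule
```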
FORMULATION
In this section, we propose our theoretical formulation of sim-to-real and domain randomization. The corresponding models will be used to analyze the optimality of domain randomization in the next section, which can also serve as a starting point for future research on sim-to-real.
SIM-TO-REAL TRANSFER
In this paper, we model the simulator as a set of MDPs with tunable latent parameters. We consider an MDP set $\mathcal{U}$ representing the simulator model with joint state space $S$ and joint action space $A$. Each MDP $\mathcal{M} = (S, A, P_{\mathcal{M}}, R, H, s_1)$ in $\mathcal{U}$ has its own transition dynamics $P_{\mathcal{M}}$, which corresponds to an MDP with a certain choice of latent parameters. Our results can be easily extended to the case where the rewards are also influenced by the latent parameters. We assume that there exists an MDP $\mathcal{M}^* \in \mathcal{U}$ that represents the dynamics of the real environment.
We can now explain our general framework of sim-to-real. For simplicity, we assume that during the simulation phase (or training phase), we are given the entire set $\mathcal{U}$ that represents the MDPs under different tunable latent parameters. Equivalently, the learning agent is allowed to interact with any MDP $\mathcal{M} \in \mathcal{U}$ in arbitrary fashion, and sample an arbitrary number of trajectories. However, we do not know which MDP $\mathcal{M} \in \mathcal{U}$ represents the real environment. The objective of sim-to-real transfer is to find a policy $\pi$ purely based on $\mathcal{U}$ which performs well in the real environment. In particular, we measure the performance in terms of the sim-to-real gap, which is defined as the difference between the value of the learned policy $\pi$ and the value of an optimal policy for the real world:
$$\mathrm{Gap}(\pi) = V^*_{\mathcal{M}^*,1}(s_1) - V^{\pi}_{\mathcal{M}^*,1}(s_1). \quad (1)$$
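For intuition, a rollout-based Monte Carlo estimate of this quantity could look like the sketch below; the environment and policy interfaces are assumptions, and in the analysis the gap is of course defined with exact values rather than estimates:

```python
import numpy as np

def estimate_gap(real_env, pi_star, pi, horizon, n_rollouts=1000):
    """Monte Carlo estimate of the sim-to-real gap in Eq. (1).

    real_env, pi_star (an optimal policy for the real MDP), and pi (the
    policy learned in simulation) are hypothetical objects with the
    reset/step and act interfaces assumed below.
    """
    def value(policy):
        returns = []
        for _ in range(n_rollouts):
            s, total, hist = real_env.reset(), 0.0, []
            for _ in range(horizon):
                a = policy.act(hist, s)   # history-dependent action
                hist.append((s, a))
                s, r = real_env.step(a)
                total += r
            returns.append(total)
        return np.mean(returns)

    return value(pi_star) - value(pi)
```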
We remark that in our framework, the policy $\pi$ is learned exclusively in simulation without the use of any real-world samples. We study this framework because (1) our primary interest, the domain randomization algorithm, does not use any real-world samples for training;
(2) we would like to focus on the problem of knowledge transfer from simulation to the real world. The more general learning paradigm that allows fine-tuning of the policy learned in simulation using real-world samples can be viewed as a combination of sim-to-real transfer and standard on-policy reinforcement learning, which we leave as an interesting topic for future research.
DOMAIN RANDOMIZATION AND LMDPS
We first introduce latent Markov decision processes (LMDPs) and then explain domain randomization from the viewpoint of LMDPs. An LMDP can be represented as $(\mathcal{U}, \nu)$, where $\mathcal{U}$ is a set of MDPs with joint state space $S$ and joint action space $A$, and $\nu$ is a distribution over $\mathcal{U}$. Each MDP $\mathcal{M} = (S, A, P_{\mathcal{M}}, R, H, s_1)$ in $\mathcal{U}$ has its own transition dynamics $P_{\mathcal{M}}$ that may differ from those of other MDPs. At the start of an episode, an MDP $\mathcal{M} \in \mathcal{U}$ is randomly chosen according to the distribution $\nu$. The agent does not know explicitly which MDP is sampled, but she is allowed to interact with this MDP $\mathcal{M}$ for one entire episode.
The domain randomization algorithm first specifies a distribution over tunable parameters, which equivalently gives a distribution $\nu$ over MDPs in the simulator $\mathcal{U}$. This induces an LMDP with distribution $\nu$. The algorithm then samples trajectories from this LMDP and runs RL algorithms in order to find a near-optimal policy of this LMDP. We consider the ideal scenario in which the domain randomization algorithm eventually finds the globally optimal policy of this LMDP, which we formulate as a domain randomization oracle as follows:
Definition 1 (Domain Randomization Oracle). Let $\mathcal{U}$ be the set of MDPs generated by domain randomization and $\nu$ be the uniform distribution over $\mathcal{U}$. The domain randomization oracle returns an optimal history-dependent policy $\pi^*_{DR}$ of the LMDP $(\mathcal{U}, \nu)$:
$$\pi^*_{DR} = \arg\max_{\pi \in \Pi} \mathbb{E}_{\mathcal{M} \sim \nu}\left[V^{\pi}_{\mathcal{M},1}(s_1)\right]. \quad (2)$$
Since an LMDP is a special case of POMDPs, its optimal policy $\pi^*_{DR}$ will in general depend on history. This is in sharp contrast with the optimal policy of an MDP, which is history-independent. We emphasize that both the memory-augmented policy and the randomization of the simulated environment are critical to the optimality guarantee of domain randomization. We also note that we do not restrict the learning algorithm used to find the policy $\pi^*_{DR}$, which can be either model-based or model-free. Nor do we explicitly define the behavior of $\pi^*_{DR}$; the only thing we know about $\pi^*_{DR}$ is that it satisfies the optimality condition defined in Equation (2). In this paper, we aim to bound the sim-to-real gap of $\pi^*_{DR}$, i.e., $\mathrm{Gap}(\pi^*_{DR})$, under different regimes.
MAIN RESULTS
We are ready to present the sim-to-real gap of $\pi^*_{DR}$ in this section. We study the gap in three different settings under our sim-to-real framework: a finite simulator class (the cardinality $|\mathcal{U}|$ is finite) with the separation condition (the MDPs in $\mathcal{U}$ are distinct), a finite simulator class without the separation condition, and an infinite simulator class. In our analysis, we mainly study the long-horizon setting where $H$ is relatively large compared with the other parameters. This is a challenging setting that has been widely studied in recent years (Gupta et al., 2019; Mandlekar et al., 2020; Pirk et al., 2020). We show that the sim-to-real gap of $\pi^*_{DR}$ is only $O(\log^3(H))$ for the finite simulator class with the separation condition, and only $\tilde{O}(\sqrt{H})$ in the last two settings, matching the best possible lower bound in terms of $H$.
In our analysis, we assume that the MDPs in U are communicating MDPs with a bounded diameter.
Assumption 1 (Communicating MDPs (Jaksch et al., 2010)). The diameter of any MDP $\mathcal{M} \in \mathcal{U}$ is bounded by $D$. That is, consider the stochastic process defined by a stationary policy $\pi : S \to A$ on an MDP with initial state $s$. Let $T(s' \mid \mathcal{M}, \pi, s)$ denote the random variable for the first time step at which state $s'$ is reached in this process; then
$$\max_{s \neq s' \in S} \min_{\pi : S \to A} \mathbb{E}\left[T(s' \mid \mathcal{M}, \pi, s)\right] \leq D.$$
This is a natural assumption widely used in the literature (Jaksch et al., 2010;Agrawal & Jia, 2017;Fruit et al., 2020). The communicating MDP model also covers many real-world tasks in robotics. For example, transferring the position or angle of a mechanical arm only costs constant time. Moreover, the diameter assumption is necessary under our framework.
Proposition 1. Without Assumption 1, there exists a hard instance U so that Gap(π * DR ) = Ω(H).
We prove Proposition 1 in Appendix G.1. Note that the worst possible gap of any policy is H, so π * DR becomes ineffective without Assumption 1.
FINITE SIMULATOR CLASS WITH SEPARATION CONDITION
As a starting point, we will show the sim-to-real gap when the MDP set U is a finite set with cardinality M . Intuitively, a desired property of π * DR is the ability to identify the environment the agent is exploring within a few steps. This is because π * DR is trained under uniform random environments, so we hope it can learn to tell the differences between environments. As long as π * DR has this property, the agent is able to identify the environment dynamics quickly, and behave optimally afterwards (note that the MDP set U is known to the agent).
Before presenting the general results, we first examine a simpler case where all MDPs in $\mathcal{U}$ are distinct. Concretely, we assume that any two MDPs in $\mathcal{U}$ are well-separated on at least one state-action pair. Note that this assumption is much weaker than the separation condition in Kwon et al. (2021), which assumes a strong separation condition on every state-action pair.

Assumption 2 ($\delta$-separated MDP set). For any $\mathcal{M}_1, \mathcal{M}_2 \in \mathcal{U}$, there exists a state-action pair $(s, a) \in S \times A$ such that the $L_1$ distance between the next-state distributions of the two MDPs is at least $\delta$, i.e., $\|(P_{\mathcal{M}_1} - P_{\mathcal{M}_2})(\cdot \mid s, a)\|_1 \geq \delta$.
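For tabular MDPs, Assumption 2 can be checked directly; a minimal NumPy sketch (with assumed (S, A, S) transition tensors):

```python
import numpy as np

def is_delta_separated(P1, P2, delta):
    """Check Assumption 2 for two tabular MDPs.

    P1, P2: (S, A, S) transition tensors. The MDPs are delta-separated
    if some state-action pair has L1 distance at least delta between
    their next-state distributions.
    """
    l1 = np.abs(P1 - P2).sum(axis=-1)  # (S, A) matrix of L1 distances
    return l1.max() >= delta
```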
The following theorem shows the sim-to-real gap of $\pi^*_{DR}$ for $\delta$-separated MDP sets.

Theorem 1. Under Assumption 1 and Assumption 2, for any $\mathcal{M} \in \mathcal{U}$, the sim-to-real gap of $\pi^*_{DR}$ is at most
$$\mathrm{Gap}(\pi^*_{DR}) = O\left(\frac{D M^3 \log(MH) \log^2(SMH/\delta)}{\delta^4}\right). \quad (3)$$
The proof of Theorem 1 is deferred to Appendix D. Though the dependence on M and δ may not be tight, our bound has only poly-logarithmic dependence on the horizon H.
The main difficulty in proving Theorem 1 is that we do not know what $\pi^*_{DR}$ does exactly, even though we know a simple and clean strategy for the real-world interaction with a small sim-to-real gap: first visit the state-action pairs that help the agent identify the environment quickly, and then follow the optimal policy of the real MDP $\mathcal{M}^*$ after identifying it. Therefore, we use a novel constructive argument in the proof. We construct a base policy that implements the idea mentioned above, and show that $\pi^*_{DR}$ cannot be much worse than the base policy. The proof overview can be found in Section 6.
FINITE SIMULATOR CLASS WITHOUT SEPARATION CONDITION
Now we generalize the setting and study the sim-to-real gap of $\pi^*_{DR}$ when $\mathcal{U}$ is finite but not necessarily a $\delta$-separated MDP set. Surprisingly, we show that $\pi^*_{DR}$ can still achieve an $\tilde{O}(\sqrt{H})$ sim-to-real gap when $|\mathcal{U}| = M$.
Theorem 2. Under Assumption 1, when the MDP set induced by domain randomization $\mathcal{U}$ is a finite set with cardinality $M$, the sim-to-real gap of $\pi^*_{DR}$ is upper bounded by
$$\mathrm{Gap}(\pi^*_{DR}) = O\left(D\sqrt{M^3 H \log(MH)}\right). \quad (4)$$
Theorem 2 is proved in Appendix E. This theorem implies the importance of randomization and memory in domain randomization algorithms (Sadeghi & Levine, 2016; Tobin et al., 2017; Peng et al., 2018; OpenAI et al., 2018). With both of them, we successfully reduce the worst possible gap of $\pi^*_{DR}$ from the order of $H$ to the order of $\sqrt{H}$, so the per-step loss is only $\tilde{O}(H^{-1/2})$. Without randomization, it is not possible to reduce the worst possible gap (i.e., the sim-to-real gap) because the policy is not even trained on all environments. Without memory, the policy is not able to implicitly "identify" the environment, so it cannot achieve sublinear loss in the worst case.
We also use a constructive argument to prove Theorem 2. However, it is more difficult to construct the base policy because we do not have any idea to minimize the gap without the well-separated condition (Assumption 2). Fortunately, we observe that the base policy is also a memory-based policy, which basically can be viewed as an algorithm that seeks to minimize the sim-to-real gap in an unknown underlying MDP in U. Therefore, we connect the sim-to-real gap of the base policy with the regret bound of the algorithms in infinite-horizon average-reward MDPs (Bartlett & Tewari, 2012;Fruit et al., 2018;Zhang & Ji, 2019). The proof overview is deferred to Section 6.
To illustrate the hardness of minimizing the worst-case gap, we prove the following lower bound on $\mathrm{Gap}(\pi)$, showing that any policy must suffer a gap of at least $\Omega(\sqrt{H})$.
Theorem 3. Under Assumption 1, suppose $A \geq 10$, $SA \geq M \geq 100$, $D \geq 20\log_A M$, and $H \geq DM$. For any history-dependent policy $\pi = \{\pi_h : \mathrm{traj}_h \to A\}_{h=1}^{H}$, there exists a set of $M$ MDPs $\mathcal{U} = \{\mathcal{M}_m\}_{m=1}^{M}$ and a choice of $\mathcal{M}^* \in \mathcal{U}$ such that $\mathrm{Gap}(\pi)$ is at least $\Omega(\sqrt{DMH})$.
The proof of Theorem 3 follows the idea of the lower bound proof for tabular MDPs (Jaksch et al., 2010), which we defer to Appendix G.2. This lower bound implies that Ω( √ H) sim-to-real gap is unavoidable for the policy π * DR when directly transferred to the real environment.
INFINITE SIMULATOR CLASS
In real-world scenarios, the MDP class is very likely to be extensively large. For instance, many physical parameters such as surface friction coefficients and robot joint damping coefficients are sampled uniformly from a continuous interval in the Dexterous Hand Manipulation algorithms (OpenAI et al., 2018). In these cases, the induced MDP set U is large and even infinite. A natural question is whether we can extend our analysis to the infinite simulator class case, and provide a corresponding sim-to-real gap.
Intuitively, since the domain randomization approach returns the optimal policy in the average sense, the policy $\pi^*_{DR}$ can perform badly in the real world $\mathcal{M}^*$ if most MDPs in the randomized set differ much from $\mathcal{M}^*$. In other words, $\mathcal{U}$ must be "smooth" near $\mathcal{M}^*$ for domain randomization to return a nontrivial policy. By "smoothness", we mean that there is a positive probability that the uniform distribution $\nu$ returns an MDP that is close to $\mathcal{M}^*$. This is because the probability that $\nu$ samples exactly $\mathcal{M}^*$ in an infinite simulator class is 0, so domain randomization cannot work at all if such smoothness does not hold.
Formally, we assume there is a distance measure $d(\mathcal{M}_1, \mathcal{M}_2)$ on $\mathcal{U}$ between two MDPs $\mathcal{M}_1$ and $\mathcal{M}_2$. Define the $\epsilon$-neighborhood $C_{\mathcal{M}^*,\epsilon}$ of $\mathcal{M}^*$ as $C_{\mathcal{M}^*,\epsilon} \stackrel{\text{def}}{=} \{\mathcal{M} \in \mathcal{U} : d(\mathcal{M}, \mathcal{M}^*) \leq \epsilon\}$.
The smoothness condition is formally stated as follows:

Assumption 3 (Smoothness near $\mathcal{M}^*$). There exists a positive real number $\epsilon_0$ and a Lipschitz constant $L$ such that, for the policy $\pi^*_{DR}$, the value function of any two MDPs in $C_{\mathcal{M}^*,\epsilon_0}$ is $L$-Lipschitz w.r.t. the distance function $d$, i.e.,
$$\left|V^{\pi^*_{DR}}_{\mathcal{M}_1,1}(s_1) - V^{\pi^*_{DR}}_{\mathcal{M}_2,1}(s_1)\right| \leq L \cdot d(\mathcal{M}_1, \mathcal{M}_2), \quad \forall \mathcal{M}_1, \mathcal{M}_2 \in C_{\mathcal{M}^*,\epsilon_0}. \quad (5)$$
For example, in the finite simulator class we can set $d(\mathcal{M}_1, \mathcal{M}_2) = \mathbb{I}[\mathcal{M}_1 \neq \mathcal{M}_2]$. For a complicated simulator class, we need to ensure that there exists some $d(\cdot, \cdot)$ for which $L$ is not large.
With Assumption 3, it is possible to compute the sim-to-real gap of π * DR . In the finite simulator class, we have shown that the gap depends on M polynomially, which can be viewed as the complexity of U. The question is, how do we measure the complexity of U when it is infinitely large?
Motivated by Ayoub et al. (2020), we consider the function class
$$\mathcal{F} = \left\{f_{\mathcal{M}}(s, a, \lambda) : S \times A \times \Lambda \to \mathbb{R} \text{ such that } f_{\mathcal{M}}(s, a, \lambda) = P_{\mathcal{M}}\lambda(s, a) \text{ for } \mathcal{M} \in \mathcal{U}, \lambda \in \Lambda\right\}, \quad (6)$$
where $\Lambda = \{\lambda^*_{\mathcal{M}} : \mathcal{M} \in \mathcal{U}\}$ is the set of optimal bias functions of the MDPs $\mathcal{M} \in \mathcal{U}$ in the infinite-horizon average-reward setting (Bartlett & Tewari, 2012). We note this function class is only used for analysis purposes to express our complexity measure; it does not affect the domain randomization algorithm. We use the $\epsilon$-log-covering number and the $\epsilon$-eluder dimension of $\mathcal{F}$ to characterize the complexity of the simulator class $\mathcal{U}$. In the setting of linear combined models (Ayoub et al., 2020), the $\epsilon$-log-covering number and the $\epsilon$-eluder dimension are $O(d \log(1/\epsilon))$, where $d$ is the dimension of the linear representation in linear combined models. For readers not familiar with the eluder dimension or infinite-horizon average-reward MDPs, please see Appendix A for preliminary explanations.
Here comes our bound on the sim-to-real gap for the infinite simulator class setting, which is proved in Appendix F.

Theorem 4. Under Assumptions 1 and 3, for any $0 \leq \epsilon < \epsilon_0$, the sim-to-real gap of the domain randomization policy $\pi^*_{DR}$ is at most
$$\mathrm{Gap}(\pi^*_{DR}) = O\left(\frac{D\sqrt{d_e H \log\left(H \cdot N(\mathcal{F}, 1/H)\right)}}{\nu\left(C_{\mathcal{M}^*,\epsilon}\right)} + L\epsilon\right). \quad (7)$$
Here $d_e$ denotes the eluder dimension of $\mathcal{F}$ and $N(\mathcal{F}, 1/H)$ its covering number. The proof overview can be found in Section 6. The main technique is again a reduction to the regret minimization problem in the infinite-horizon average-reward setting. We construct a base policy and show that its regret is only $\tilde{O}(\sqrt{H})$.
A key point to note is that our construction of the base policy also solves an open problem of designing efficient algorithms that achieve $\tilde{O}(\sqrt{T})$ regret in the infinite-horizon average-reward setting with general function approximation. This base policy is of independent interest.
To complement our positive results, we also provide a negative result that even if the MDPs in U have nice low-rank properties (e.g., the linear low-rank property (Jin et al., 2020b;Zhou et al., 2020)), the policy π * DR returned by the domain randomization oracle can still have Ω(H) sim-to-real gap when the simulator class is large and the smoothness condition (Assumption 3) does not hold. This explains the necessity of our preconditions. Please refer to Proposition 2 in Appendix G.3 for details.
PROOF OVERVIEW
In this section, we will give a short overview of our novel proof techniques for the results shown in section 5. The main proof technique is based on reducing the problem of bounding the sim-to-real gap to the problem of constructing base policies. In the settings without separation conditions, we further connect the construction of the base policies to the design of efficient learning algorithms for the infinite-horizon average-reward settings.
REDUCING TO CONSTRUCTING BASE POLICIES
Intuitively, if there exists a base policy $\bar{\pi} \in \Pi$ with a bounded sim-to-real gap, then the gap of $\pi^*_{DR}$ will not be too large, since $\pi^*_{DR}$ defined in Equation (2) is the policy with the maximum average value.

Lemma 1. Suppose there exists a policy $\bar{\pi} \in \Pi$ such that the sim-to-real gap of $\bar{\pi}$ for any MDP $\mathcal{M} \in \mathcal{U}$ satisfies $V^*_{\mathcal{M},1}(s_1) - V^{\bar{\pi}}_{\mathcal{M},1}(s_1) \leq C$. Then $\mathrm{Gap}(\pi^*_{DR}) \leq MC$ when $\mathcal{U}$ is a finite set with $|\mathcal{U}| = M$. Furthermore, when $\mathcal{U}$ is an infinite set satisfying the smoothness condition (Assumption 3), we have for any $0 < \epsilon < \epsilon_0$, $\mathrm{Gap}(\pi^*_{DR}) \leq C/\nu(C_{\mathcal{M}^*,\epsilon}) + L\epsilon$.

We defer the proof to Appendix B.1. With this reduction lemma, the remaining problem is as follows: suppose the real MDP $\mathcal{M}^*$ belongs to the MDP set $\mathcal{U}$, and we know the full information (transition matrix) of every MDP in $\mathcal{U}$. How do we design a history-dependent policy $\bar{\pi} \in \Pi$ with minimum sim-to-real gap $\max_{\mathcal{M} \in \mathcal{U}}\left(V^*_{\mathcal{M},1}(s_1) - V^{\bar{\pi}}_{\mathcal{M},1}(s_1)\right)$?
THE CONSTRUCTION OF THE BASE POLICIES
With separation conditions With the help of Lemma 1, we can bound the sim-to-real gap in the setting of a finite simulator class with the separation condition by constructing a history-dependent policy $\bar{\pi}$. The formal definition of the policy $\bar{\pi}$ can be found in Appendix C.1. The idea of the construction is based on elimination: the policy $\bar{\pi}$ explicitly collects samples on the "informative" state-action pairs and eliminates MDPs that are unlikely to be the real MDP from the candidate set. Once the agent identifies the real MDP representing the dynamics of the physical environment, it follows the optimal policy of that MDP until the end of the interaction.
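A simplified sketch of one elimination step for tabular candidates is given below; the tolerance and stopping rule are illustrative and do not reproduce the exact procedure of Appendix C.1:

```python
import numpy as np

def eliminate(candidates, s, a, next_states, tol):
    """One elimination step of the base-policy sketch.

    candidates:  dict mapping MDP id to its (S, A, S) transition tensor.
    next_states: observed next states after repeatedly visiting the
                 informative pair (s, a).
    Drop every MDP whose predicted next-state distribution is farther
    than tol (in L1) from the empirical one; once a single candidate
    remains, its optimal policy can be followed.
    """
    S = next(iter(candidates.values())).shape[0]
    empirical = np.bincount(next_states, minlength=S) / len(next_states)
    return {m: P for m, P in candidates.items()
            if np.abs(P[s, a] - empirical).sum() <= tol}
```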
Without separation conditions The main challenge in this setting is that we can no longer construct a policy $\bar{\pi}$ that "identifies" the real MDP using the approach from the setting with separation conditions. In fact, we may not even be able to "identify" the real MDP, since there can be MDPs in $\mathcal{U}$ that are arbitrarily close to the real MDP. Here, we use a different approach, which reduces the minimization of the sim-to-real gap of $\bar{\pi}$ to the regret minimization problem in infinite-horizon average-reward MDPs.
The infinite-horizon average-reward setting has been well-studied (e.g., Jaksch et al., 2010; Agrawal & Jia, 2017; Fruit et al., 2018; Wei et al., 2020). The main difference compared with the episodic setting is that the agent interacts with the environment for infinitely many steps. The gain of a policy is defined in the average manner. The value of a policy $\pi$ is defined as
$$\rho^{\pi}(s) = \mathbb{E}\left[\lim_{T \to \infty} \frac{1}{T}\sum_{t=1}^{T} R(s_t, \pi(s_t)) \,\middle|\, s_1 = s\right].$$
The optimal gain is defined as $\rho^* \stackrel{\text{def}}{=} \max_{s \in S}\max_{\pi} \rho^{\pi}(s)$, which is shown to be state-independent in Agrawal & Jia (2017), so we write $\rho^*$ for short. The regret in the infinite-horizon setting is defined as $\mathrm{Reg}(T) = \mathbb{E}\left[T\rho^* - \sum_{t=1}^{T} R(s_t, a_t)\right]$, where the expectation is over the randomness of the trajectories. A more detailed explanation of infinite-horizon average-reward MDPs can be found in Appendix A.1.
For an MDP $\mathcal{M} \in \mathcal{U}$, we can view it as a finite-horizon MDP with horizon $H$, or we can view it as an infinite-horizon MDP. This is because Assumption 1 ensures that the agent can travel to any state from any state $s_H$ encountered at the $H$-th step (this may not be the case in standard finite-horizon MDPs, where the states at the $H$-th level are often assumed to be terminating states). The connection between these two views (Lemma 2) indicates that if we can design an algorithm (i.e., a base policy) $\bar{\pi}$ in the infinite-horizon setting with regret $\mathrm{Reg}(H)$, then the sim-to-real gap of this algorithm in the episodic setting satisfies $\mathrm{Gap}(\bar{\pi}) = V^*_{\mathcal{M},1}(s_1) - V^{\bar{\pi}}_{\mathcal{M},1}(s_1) \leq \mathrm{Reg}(H) + D$. This lemma connects the sim-to-real gap of $\bar{\pi}$ in the finite-horizon setting to the regret in the infinite-horizon setting.
With the help of Lemmas 1 and 2, the remaining problem is to design an efficient exploration algorithm for infinite-horizon average-reward MDPs, given the knowledge that the real MDP $\mathcal{M}^*$ belongs to a known MDP set $\mathcal{U}$. Therefore, we propose two optimistic-exploration algorithms (Algorithm 3 and Algorithm 4) for the settings of finite and infinite simulator classes respectively. The formal definitions of the algorithms are deferred to Appendix C.2 and Appendix C.3. Note that our Algorithm 4 is the first efficient algorithm with $\tilde{O}(\sqrt{T})$ regret for infinite-horizon average-reward MDPs with general function approximation, which is of independent interest for efficient online exploration in reinforcement learning.
CONCLUSION
In this paper, we study the optimality of policies learned from domain randomization in sim-to-real transfer without real-world samples. We propose a novel formulation of sim-to-real transfer and view domain randomization as an oracle that returns the optimal policy of an LMDP with a uniform initialization distribution. Following this idea, we show that the policy $\pi^*_{DR}$ suffers only $o(H)$ loss compared with the optimal value function of the real environment when the simulator class is finite or satisfies a certain smoothness condition; thus this policy can perform well in long-horizon cases. We hope our formulation and analysis can provide insight for designing more efficient algorithms for sim-to-real transfer in the future.

A.1 INFINITE-HORIZON AVERAGE-REWARD MDPS

This setting has been widely studied (e.g., Jaksch et al., 2010; Wei et al., 2020). The main difference compared with the episodic setting is that the agent interacts with the environment for infinite steps instead of restarting every $H$ steps. The gain of a policy is defined in the average manner.

Definition 2 (Definition 4 in Agrawal & Jia (2017)). The gain $\rho^{\pi}(s)$ of a stationary policy $\pi$ from starting state $s_1 = s$ is defined as:
$$\rho^{\pi}(s) = \mathbb{E}\left[\lim_{T \to \infty} \frac{1}{T}\sum_{t=1}^{T} R(s_t, \pi(s_t)) \,\middle|\, s_1 = s\right] \quad (8)$$
In this setting, a common assumption is that the MDP is communicating (Assumption 1). Under this assumption, we have the following lemma.
Lemma 3. (Agrawal & Jia, 2017, Lemma 2.1) For a communicating MDP M with diameter D: (a)
The optimal gain ρ * is state-independent and is achieved by a deterministic stationary policy π * DR ; that is, there exists a deterministic policy π * such that
ρ * := max s ′ ∈S max π ρ π (s ′ ) = ρ π * (s), ∀s ∈ S (9) (b)
The optimal gain ρ * satisfies the following equation: (10) where P λ(s, a) = s ′ P (s ′ |s, a)λ(s ′ ), and λ * is the bias vector of the optimal policy π * DR satisfying 0 ≤ λ * (s) ≤ D.
ρ * = min λ∈R S max s,a [R(s, a) + P λ(s, a) − λ(s)] = max a [R(s, a) + P λ * (s, a) − λ * (s)] , ∀s
(11)
The regret minimization problem has been widely studied in this setting, with regret to be defined as Reg(T ) = E T ρ * − T t=1 R(s t , a t ) , where the expectation is over the randomness of the trajectories. For example, Jaksch et al. (2010) proposed an efficient algorithm called UCRL2, which achieves regret upper boundÕ(DS √ AT ). For notation convenience, we use P V (s, a) or P λ(s, a) as a shorthand of s ′ ∈S P (s ′ |s, a)V (s ′ ) or s ′ ∈S P (s ′ |s, a)λ(s ′ ).
A.2 ELUDER DIMENSION
Proposed by Russo & Van Roy (2013), eluder dimension has become a widely-used concept to characterize the complexity of different function classes in bandits and RL (Wang et al., 2020;Ayoub et al., 2020;Jin et al., 2021;Kong et al., 2021). In this work, we define eluder dimension to characterize the complexity of the function F :
F = {f M (s, a, λ) : S × A × Λ → R such that f M (s, a, λ) = P M λ(s, a) for M ∈ U, λ ∈ Λ} ,(12)
where Λ = {λ * M , M ∈ U} is the optimal bias functions of M ∈ U in the infinite-horizon averagereward setting (Bartlett & Tewari (2012)
= {(s i , a i , λ i )} n i=1 ⊂ S × A × Λ be a sequence of history samples. • A history sample (s, a, λ) ∈ S × A × Λ is ǫ-dependent on Z with respect to F if any f, f ′ ∈ F satisfying f − f ′ Z ≤ ǫ also satisfies f (s, a) − f ′ (s, a) ≤ ǫ. Here f − f ′ Z is a shorthand of (s,a,λ)∈Z (f − f ′ ) 2 (s, a, λ). • An (s, a, λ) is ǫ-independent of Z with respect to F if (s, a, λ) is not ǫ-dependent on Z.
• The ǫ-eluder dimension of a function class F is the length of the longest sequence of elements in S × A × Λ such that, for some ǫ ′ ≥ ǫ, every element is ǫ ′ -independent of its predecessors.
B OMITTED PROOF IN SECTION 6
B.1 PROOF OF LEMMA 1
Proof. We firstly study the case where U is a finite set with |U| = M . Forπ, we have
1 M M∈U V * M,1 (s 1 ) − Vπ M,1 (s 1 ) ≤ C.(13)
By the optimality of π * DR , we know that
1 M M∈U V π * DR M,1 (s 1 ) ≥ 1 M M∈U Vπ M,1 (s 1 ) .(14)
Therefore,
1 M M∈U V * M,1 (s 1 ) − V π * DR M,1 (s 1 ) ≤ C.(15)Since the gap V * M,1 (s 1 ) − V π * DR M,1 (s 1 ) ≥ 0 for any i ∈ [M ], we have 1 M V * M * ,1 (s 1 ) − V π * DR M * ,1 (s 1 ) ≤ C. That is, V * M * ,1 (s 1 ) − V π * DR M * ,1 (s 1 ) ≤ M C.(16)
For the case where U is an infinite set satisfying Assumption 3, by the optimality of π * DR , we have
E M∼ν V π * DR M,1 (s 1 ) ≥ E M∼ν Vπ M,1 (s 1 ) .(17)
Therefore,
E M∼ν(C M * ,ǫ) V * M * ,1 (s 1 ) − V π * DR M,1 (s 1 ) ≤ E M∼ν V * M * ,1 (s 1 ) − V π * DR M,1 (s 1 ) ≤ E M∼ν V * M * ,1 (s 1 ) − Vπ M,1 (s 1 ) .(18)
By Assumption 3, for any M ∈ C(M * , ǫ), we have
V π * DR M * ,1 (s 1 ) − V π * DR M,1 (s 1 ) ≤ Lǫ.(19)
Therefore, we have
ν (C M * ,ǫ ) V * M * ,1 (s 1 ) − V π * DR M * ,1 (s 1 ) − Lǫ ≤ E M∼ν(C M * ,ǫ) V * M * ,1 (s 1 ) − V π * DR M,1 (s 1 )(20)
Combining Inq 18 and Inq 20, we have
ν (C M * ,ǫ ) V * M * ,1 (s 1 ) − V π * DR M * ,1 (s 1 ) − Lǫ ≤ C,(21)
The lemma can be proved by reordering the above inequality.
B.2 PROOF OF LEMMA 2
Proof. For MDP M, denote π * in as the optimal policy in the infinite-horizon setting and {π * ep,h } H h=1 as the optimal policy in the episodic setting. By the optimality of π * ep , we have V * M,1 (s
1 ) = V π * ep M,1 (s 1 ) ≥ V π * in M,1 (s 1 ).
By the Bellman equation in the infinite-horizon setting, we know that
λ * M (s) + ρ * M = R(s, π * in (s)) + P M λ * M (s, π * in (s)), ∀s ∈ S(22)
For notation simplicity, we use d h (s 1 , π) to denote the state distribution at step h after starting from state s 1 at step 1 following policy π. From the above equation, we have
λ * M (s 1 ) + Hρ * M = H h=1 E s h ∼d h (s1,π * in ) R(s h , , π * in (s h )) + E sH+1∼dH+1(s1,π * in )) λ * M (s H+1 ). (23)
That is,
| H h=1 E s h ∼d h (s1,π * in ) R(s h , , π * in (s h )) − Hρ * M | = |λ * M (s 1 ) − E sH+1∼dH+1(s1,π * in ) λ * M (s H+1 )| ≤ D,(24)
where H h=1 E s h ∼d h (s1,π * in ) R(s h , , π * in (s h )) = V π * in M,1 (s 1 ). Therefore, we have Hρ * M − D ≤ V π * in M,1 (s 1 ) ≤ V * M,1 (s 1 ). For the second inequality, by the Bellman equation in the infinite-horizon setting, we have
λ * M (s 1 ) + Hρ * M ≥ H h=1 E s h ∼d h (s1,π * ep ) R(s h , , π * ep,h (s h )) + E sH+1∼dH+1(s1,π * ep ) λ * M (s H+1 ).(25)
That is,
H h=1 E s h ∼d h (s1,π * ep ) R(s h , , π * ep,h (s h )) − Hρ * M ≤ λ * M (s 1 ) − E sH+1∼dH+1(s1,π * ep ) λ * M (s H+1 ) ≤ D,(26)
where H h=1 E s h ∼d h (s1,π * ep ) R(s h , , π * ep,h (s h )) = V * M,1 (s 1 ).
C THE CONSTRUCTION OF LEARNING ALGORITHMS C.1 FINITE SIMULATOR CLASS WITH SEPARATION CONDITION
In this subsection, we explicitly define the base policyπ with sim-to-real gap guarantee under the separation condition. Note that a history-dependent policy for LMDPs can also be regarded as an for i = 1, · · · , M do Denote the current state as s init Run the policy π Mi (s init , s 0 ) for 2D steps, breaking the loop immediately once the agent enters state s 0 end for if the agent enters state s 0 then 10:
Execute a 0 , enter the next state s ′ . counter N (s 0 , a 0 ) = N (s 0 , a 0 ) + 1, H = H {s ′ } end if end while Output: history data H algorithm for finite-horizon MDPs. By deriving an upper bound of the sim-to-real gap forπ, we can upper bound Gap(π * DR , U) with Lemma 1. The policyπ is formally defined in Algorithm 1. There are two stages in Algorithm 1. In the first stage, the agent's goal is to quickly explore the environment and find the real MDP M * from the MDP set U. This stage contains at most M − 1 parts. In each part, the agent randomly selects two MDPs M 1 and M 2 from the remaining MDP set D. Since the agent knows the transition dynamics of M 1 and M 2 , it can find the most informative state-action pair (s 0 , a 0 ) with maximum totalvariation difference between P M1 (·|s 0 , a 0 ) and P M2 (·|s 0 , a 0 ). The algorithm calls Subroutine 2 to collect enough samples from (s 0 , a 0 ) pairs, and then eliminates the MDP with less likelihood. At the end of the first stage, the MDP set D is ensured to contain only one MDP M * with high probability. Therefore, the agent can directly execute the optimal policy for the real MDP till step H + 1 in the second stage.
Subroutine 2 is designed to collect enough samples from the given state-action pair (s 0 , a 0 ). The basic idea in Subroutine 2 is to quickly enter the state s 0 and take action a 0 , until the visitation counter N (s 0 , a 0 ) = n 0 . Denote π M (s, s ′ ) as the policy with the minimum expected travelling time E [T (s ′ | M, π, s)] for MDP M. Suppose the agent is currently at state s init and runs the policy π M * (s init , s 0 ) in the following steps. By Assumption 1 and Markov's inequality, the agent will enter state s 0 in 2D steps with probability at least 1/2. Therefore, in Subroutine 2, we runs the policy π Mi (s init , s 0 ) for 2D steps for i ∈ [M ] alternatively. This ensures that the agent can enter state s 0 in 2M D steps with probability at least 1/2.
Theorem 5 states an upper bound of the sim-to-real gap for Algorithm 1, which is proved in Appendix D.1.
Theorem 5. Suppose we useπ to denote the history-dependent policy represented by Algorithm 1. Under Assumption 1 and Assumption 2, for any M ∈ U, the sim-to-real gap of Algorithm 1 is at most
V * M,1 (s 1 ) − Vπ M,1 (s 1 ) ≤ O DM 2 log(M H) log 2 (SM H/δ) δ 4 .(27)
C.2 FINITE SIMULATOR CLASS WITHOUT SEPARATION CONDITION
In this subsection, we propose an efficient algorithm in the infinite-horizon average-reward setting for finite simulator class. Our algorithm is described in Algorithm 3. In episode k, the agent executes the optimal policy of the optimistic MDP M k with the maximum expected gain ρ * M k . Once the agent collects enough data and realizes that the current MDP M k is not M * that represents the dynamics of the real environment, the agent eliminates M k from the MDP set. Take action a h = π * M k (s h ), obtain the reward R(s h , a h ), and observe the next state s h+1 5: Set h 0 = h + 1, and k = k + 1.
if h t=h0 P M k λ * M k (s t , a t ) − λ * M k (s t+1 ) > D 2(h − h 0 ) log(2HM )
9:
end if 10: end for
To indicate the basic idea of the elimination condition defined in Line 5 of Algorithm 3, we briefly explain our regret analysis of Algorithm 3. Suppose the MDP M k selected in episode k satisfies the optimistic condition ρ *
P M k λ * M k (s h , a h ) − λ * M k (s h+1 ) .
If this term is relatively small, we can continue following the policy π * M k with little loss. Since λ * M k (s h+1 ) is an empirical sample of P M * λ * M k (s h , a h ), we can guarantee that M k is not M * with high probability if this term is relatively large.
Based on the above discussion, we can upper bound the regret of Algorithm 3. We defer the proof of Theorem 6 to Appendix E.1. (33)
C.3 INFINITE SIMULATOR CLASS
In this subsection, we propose a provably efficient model-based algorithm solving infinite-horizon average-reward MDPs with general function approximation. To the best of our knowledge, our result is the first efficient algorithm with near-optimal regret for infinite-horizon average-reward MDPs with general function approximation.
Considering the model class U which covers the true MDP M * , i.e. M * ∈ U, we define the function space Λ = {λ * M , M ∈ U}, and space X = S × A × Λ. We define the function space
F = {f M (s, a, λ) : X → R such that f M (s, a, λ) = P M λ(s, a) for M ∈ U, λ ∈ Λ} .(34)
Our algorithm, which is described in Algorithm 4, also follows the well-known principle of optimism in the face of uncertainty. In each episode k, we calculate the optimistic MDP M k with maximum expected gain ρ * M k . We execute the optimal policy of M * to interact with the environment and collect more samples. Once we have collected enough samples in episode k, we update the model class U k and compute the optimistic MDP for episode k + 1.
Compared with the setting of episodic MDP with general function approximation (Ayoub et al., 2020;Wang et al., 2020;Jin et al., 2021), the additional problem in the infinite-horizon setting is that the regret technically has linear dependence on the number of total episodes, or the number of steps that we update the optimistic model and the policy. This corresponds to the last term (KD) in Inq 32. Therefore, to design efficient algorithm with near-optimal regret in the infinite-horizon setting, the algorithm should maintain low-switching property (Bai et al., 2019;Kong et al., 2021). Taking inspiration from the recent work that studies efficient exploration with low switching cost in episodic setting (Kong et al., 2021), we define the importance score, sup f1,f2∈F f1−f2 2 Znew f1−f2 2 Z +α , as a measure of the importance for new samples collected in current episode, and only update the optimistic model and the policy when the importance score is greater than 1. Here f 1 − f 2 2 Z is a shorthand of (s,a,s ′ ,λ)∈Z (f 1 (s, a, λ) − f 2 (s, a, λ)) 2 .
Algorithm 4 General Optimistic Algorithm 1: Initialize: the MDP set U 1 = U, episode counter k = 1 2: Initialize: the history data set Z = ∅, Z new = ∅ 3: α = 4D 2 + 1, β = cD 2 log (H · N (F , 1/H)) for a constant c. Add the history data Z new to the set Z 10:
CalculateM k+1 = arg min M∈U (s h ,a h ,s h+1 ,λ h )∈Z (P M λ h (s h , a h ) − λ h (s h+1 )) 2 11: Update U k+1 = M ∈ U : (s h ,a h ,s h+1 ,λ h )∈Z P M − PM k+1 λ h (s h , a h ) 2 ≤ β 12: Compute M k+1 = arg max M∈U k+1 ρ ⋆ M
13:
Episode counter k = k + 1 14:
end if 15: end for
We state the regret upper bound of Algorithm 4 in Theorem 5, and defer the proof of the theorem to Appendix F.1.
Theorem 7. Under Assumption 1, the regret of Algorithm 4 is uppder bounded by
Reg(H) ≤ O D d e H log (H · N (F , 1/H)) ,(35)
where d e is the ǫ-eluder dimension of function class F with ǫ = 1 H , and N (F , 1/H) is the 1 Hcovering number of F w.r.t L ∞ norm.
For α > 0, we say the covering number N (F , α) of F w.r.t the L ∞ norm equals m if there is m functions in F such that any function in F is at most α away from one of these m functions in norm · ∞ . The · ∞ norm of function f is defined as f ∞ The formal definition ofπ is given in Algorithm 1. To upper bound the sim-to-real gap ofπ, we discuss on the following two benign properties of Algorithm 1 in Lemma 4 and Lemma 7. Lemma 4 states that the true MDP M * will never be eliminated from the MDP set D. Therefore, in stage 2, the agent will execute the optimal policy in the remaining steps with high probability. Lemma 7 states that the total number of steps in stage 1 is upper bounded byÕ( M 2 δ 4 ). This is where the final bound in Theorem 5 comes from.
Lemma 4. With probability at least 1 − 1 H , the true MDP M * will never be eliminated from the MDP set D in stage 1.
The while-loop in stage 1 will last for M − 1 times. To prove Lemma 4, we need to prove that, if the true MDP M * is selected in a certain loop, then (s,a,s ′ )∈HM 1 ,M 2
Lemma 5. Suppose H = {s ′ i } n0 i=1
is a set of n 0 = c0 log 2 (SMH/δ) log(MH) δ 4 independent samples from a given state-action pair (s 0 , a 0 ) and MDP M * for a large constant c 0 . Let M 1 denote another MDP satisfying (P M * − P M1 )(·|s 0 , a 0 ) 1 ≥ δ, then the following inequality holds with probability at least 1 − 1 MH :
s ′ ∈H P M * (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) > 1,(36)
Proof. The proof of Lemma 5 is inspired by the analysis in Kwon et al. (2021). To prove Inq 36, it is enough to show that
ln s ′ ∈H P M * (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) = s ′ ∈H ln P M * (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) > 0.(37)
holds with probability at least 1 − 1 MH .
Note that P M * (s ′ |s0,a0) PM 1 (s ′ |s0,a0) can be unbounded since P M1 (s ′ |s 0 , a 0 ) can be zero for certain s ′ . To tackle this issue, we defineP M1 for a sufficiently small α ≤ δ 4S :
P M1 (s ′ |s, a) = α + (1 − αS)P M1 (s ′ |s, a).(38)
We have
P M1 − P M1 (·|s, a) ≤ 2Sα ≤ δ 2 , thus P M1 − P M * (·|s, a) ≥ δ 2 . Also, we have ln 1 PM 1 (s ′ |s0,a0)
≤ ln(1/α) for any s ′ ∈ S. With the above definition, we can decompose Inq 37 into two terms:
s ′ ∈H ln P M * (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) = s ′ ∈H ln P M * (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) + s ′ ∈H ln P M1 (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) .(39)
Taking expectation over s ′ ∼ P M * (·|s, a) for the first term, we have
E n0 i=1
ln P M * (s ′ i |s 0 , a 0 ) P M1 (s ′ i |s 0 , a 0 ) = n0 i=1 s ′ P M * (s ′ |s 0 , a 0 ) ln P M * (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 )
=n 0 D KL P M * (s ′ |s 0 , a 0 )|P M1 (s ′ |s 0 , a 0 )
≥ n 0 δ 2 2 ,(41)
where the last inequality is due to Pinsker's inequality.
Lemma 6. (Lemma C.2 in Kwon et al. (2021)) Suppose X is an arbitrary discrete random variable on a finite support X . Then, ln(1/P (X)) is a sub-exponential random variable (Vershynin, 2010) with Orcliz norm ln(1/P (X)) φ1 = 1/e.
From the above Lemma, we know that bothP M1 (s ′ |s 0 , a 0 ) and P M * (s ′ |s 0 , a 0 ) are sub-exponential random variables. By Azuma-Hoeffing's inequality, we have with probability at least 1 − δ 0 /2,
s ′ ∈H ln 1 P M1 (s ′ |s 0 , a 0 ) ≥ E n0 i=1
ln 1 P M1 (s ′ i |s 0 , a 0 ) − log(1/α) 2n 0 log(2/δ 0 ). (43)
By Proposition 5.16 in Vershynin (2010), with probability at least 1 − δ 0 /2, s ′ ∈H ln (P M * (s ′ |s 0 , a 0 )) ≥ E n0 i=1 ln (P M * (s ′ i |s 0 , a 0 )) − n 0 log(2/δ 0 )/c,
for a certain constant c > 0. Therefore, we can lower bound the first term in Eqn 39, s ′ ∈H ln P M * (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) ≥ n 0 δ 2 2 − log(1/α) 2n 0 log(2/δ 0 ) − n 0 log(2/δ 0 )/c,
with probability at least 1 − δ 0 .
For the second term in Eqn 39, by the definition ofP M1 ,
s ′ ∈H ln P M1 (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) ≥ −2αSn 0(46)
Combining Inq 45 and Inq 46, we have s ′ ∈H ln P M * (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) ≥ n 0 δ 2 2 − log(1/α) 2n 0 log(2/δ 0 ) − n 0 log(2/δ 0 )/c − 2αSn 0
Setting α = δ 2 8S , δ 0 = 1 MH , and n 0 = c0 log 2 (SMH/δ) log(MH)
Given state s and s ′ , by Markov's inequality, we know that with probability at least 1 2 ,
T (s ′ | M * , π M * (s, s ′ ), s) ≤ 2D.(50)
Consider the following stochastic process: In each episode k, the agent starts from a state s k that is arbitrarily selected, and run the policy π M * (s k , s 0 ) for 2D steps on the MDP M * . The process terminates once the agent enters a certain state s 0 . By Inq 50, the probability that the process terminates within k episodes is at least 1 − 1 2 k . By the basic algebraic calculation, the expected stopping episode can be bounded by a constant 4. Now we return to the proof of Lemma 7. In Subroutine 2, we run policy π Mi for each MDP M i ∈ D alternately. By Lemma 4, the true MDP M * is always contained in the MDP set D. Therefore, the expected travelling time to enter state s 0 for n 0 times is bounded by n 0 · M · 8D. In stage 1, we call Subroutine 2 for M − 1 times, which means that the expected steps in stage 1 satisfies E[h 0 ] ≤ 8M 2 n 0 d = O( DM 2 log 2 (SMH/δ) log(MH) δ 4
).
By Lemma 1, the sim-to-real gap of policy π * DR is bounded by
Gap(π * DR , U) ≤ O DM 3 log 2 (SM H) log(M H) δ 4 .(62)
The first term in (71)
for some constant C > 0.
Our idea is to use this result to upper bound the number of total switching steps.
Let τ (k) be the first step of episode k. By the definition of the function class F , we have (f 1 − f 2 ) 2 (x t ) ≤ 4D 2 for any f 1 , f 2 ∈ F and x t . By the switching rule, we know that, once the agent starts a new episode after step τ (k + 1) − 1, we have
τ (k+1)−1 t=τ (k) (f 1 − f 2 ) 2 (x t ) ≤ τ (k)−1 t=1 (f 1 − f 2 ) 2 (x t ) + α + 4D 2 , ∀f 1 , f 2 , x t(77)
Therefore, we have
τ (k+1)−1 t=1 (f 1 − f 2 ) 2 (x t ) ≤ 2 τ (k)−1 t=1 (f 1 − f 2 ) 2 (x t ) + α + 4D 2 , ∀f 1 , f 2 , x t(78)
; Fruit et al. (2018); Zhang & Ji
ν(C M * ,ǫ ) is the probability of ν sampling a MDP in C M * ,ǫ , d e = dim E (F , 1/H) is the 1/Heluder dimension F , and N (F , 1/H) is the 1/H-covering number of F w.r.t. L ∞ norm.Theorem 4 is a generalization of Theorem 2, since we can reduce Theorem 4 to Theorem 2 by setting d(M 1 , M 2 ) = I[M 1 = M 2 ] and ǫ = 0, in which case ν(C M * ,ǫ ) = 1/M and d e ≤ M .
Lemma 2 .
2For a MDP M, let ρ * M and V * M,1 (s 1 ) to be the optimal expected gain in the infinitehorizon view and the optimal value function in the episodic view respectively. We have the following inequality: Hρ * M − D ≤ V * M,1 (s 1 ) ≤ Hρ * M + D.
-HORIZON AVERAGE-REWARD MDPS The infinite-horizon average-reward setting has been well-explored in the recent few years (e.g. Jaksch et al. (2010); Agrawal & Jia (2017); Fruit et al. (2018); Wei et al. (
; Fruit et al. (2018); Zhang & Ji (2019)). Definition 3. (Eluder dimension). Let ǫ ≥ 0 and Z
Initialize: the MDP set D = U, n 0 = c0 log 2 (SMH) log(MH) δ 4 for a constant c 0 2: ⊲ Stage 1: Explore and find the real MDP M * 3: while |D| ≥ 1 do 4: Randomly select two MDPs M 1 and M 2 from the MDP set D 5: Choose (s 0 , a 0 ) = arg max (s,a)∈S×A (P M1 − P M2 ) (· | s, parameter (s 0 , a 0 ) and n 0 to collect history samples H M1,M2 7:if ∃s ′ ∈ H M1,M2 , P M2 (s ′ |s 0 , a 0 ) = 0 or s ′ ∈HM 1 ,M 2 PM 1 (s ′ |s0,a0) PM 2 (s ′ |s0,a0) 12: end while 13: ⊲ Stage 2: Run the optimal policy of M * 14: DenoteM as the remaining MDP in the MDP set D 15: Run the optimal policy ofM for the remaining steps Algorithm 2 Subroutine: collecting data for (s 0 , a 0 ) Input: informative state-action pair (s 0 , a 0 ), required visitation count n 0 Initialize: counter N (s 0 , a 0 ) = 0, history data H = ∅ Denote π M (s, s ′ ) as the policy with the minimum expected travelling time E [T (s ′ | M, π, s)] for MDP M (Defined in Assumption 1) while N (s 0 , a 0 ) ≤ n 0 do 5:
Algorithm 3
3Optimistic Exploration 1: Initialize: the MDP set U 1 = U, the episode counter k = 1, h 0 = 1 2: Calculate M 1 = arg max M∈U1 ρ * M 3: for step h = 1, · · · , H do 4:
Theorem 6 .
6Under Assumption 1, the regret of Algorithm 3 is upper bounded by Reg(H) ≤ O D M H log(M H) .
def = max x∈X |f (x)|. D OMITTED PROOF FOR FINITE SIMULATOR CLASS WITH SEPARATION CONDITION D.1 PROOF OF THEOREM 5
P M * (s ′ |s 0 , a 0 ) P M1 (s ′ |s 0 , a 0 ) > 0(48)holds with probability at least 1 − 1 MH .Lemma 7. Suppose Stage 1 in Algorithm 1 ends inh 0 steps. We have E[h 0 ] ≤ O( DM 2 log 2 (SMH/δ) log(MH)δ 4 ), where the expectation is over all randomness in the algorithm and the environment. Proof. Recall that π M (s, s ′ ) is the policy with the minimum expected travelling time E [T (s ′ | M, π, s)] for MDP M. By Assumption 1, we have E [T (s ′ | M * , π M (s, s ′ ), s)] ≤ D.
M, 1 ≤
1* ,1 (s 1 ) ≤ O D M H log(M H) be proved by combining Theorem 6 , Lemma 1 and Lemma 2. By Theorem 6, for any M ∈ U, the policyπ represented by Algorithm 3 has regret bound Hρ * M − H h=1 R(s h , a h ) ≤ O D M H log(M H) . Taking expectation over {s h , a h } h∈[H] and combining the inequality with Lemma 2, we have for any M ∈ U. V * M,1 (s 1 ) − Vπ M,1 (s 1 ) ≤ O D M H log(M H) . (74) Then the theorem can be proved by Lemma 1. F OMITTED PROOF FOR INFINITE SIMULATOR CLASS F.1 PROOF OF THEOREM 7 Lemma 9. (Low Switching) The total number of episode K is bounded by K ≤ O(dim E (F , 1/H) log(D 2 H) log(H)) (75) Proof. By Lemma 5 of Kong et al. (2021), we know that Cdim E (F , 1/H) log(D 2 H) log(H)
8 ACKNOWLEDGMENTS
8Liwei Wang was supported by National Key R&D Program of China (2018YFB1402600), Exploratory Research Project of Zhejiang Lab (No. 2022RC0AN02), BJNSF (L172037), Project 2020BD006 supported by PKUBaidu Fund. Rémi Pautrat, Konstantinos Chatzilygeroudis, and Jean-Baptiste Mouret. Bayesian optimization with automatic prior selection for data-efficient direct policy search. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7571-7578. IEEE, 2018. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go without human knowledge. nature, 550(7676):354-359, 2017.Richard D Smallwood and Edward J Sondik. The optimal control of partially observable Markov processes over a finite horizon.Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. InXue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of
robotic control with dynamics randomization. In 2018 IEEE international conference on robotics
and automation (ICRA), pp. 3803-3810. IEEE, 2018.
Sören Pirk, Karol Hausman, Alexander Toshev, and Mohi Khansari. Modeling long-horizon tasks
as sequential interaction landscapes. arXiv preprint arXiv:2006.04843, 2020.
Samira Pouyanfar, Muneeb Saleem, Nikhil George, and Shu-Ching Chen. ROADS: Randomization
for obstacle avoidance and driving in simulation. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition Workshops, pp. 0-0, 2019.
Daniel Russo and Benjamin Van Roy. Eluder dimension and the sample complexity of optimistic
exploration. In NIPS, pp. 2256-2264. Citeseer, 2013.
Andrei A Rusu, Matej Večerík, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell.
Sim-to-real robot learning from pixels with progressive nets. In Conference on Robot Learning,
pp. 262-270. PMLR, 2017.
Fereshteh Sadeghi and Sergey Levine. Cad2rl: Real single-image flight without a single real image.
arXiv preprint arXiv:1611.04201, 2016.
Operations research, 21(5):1071-1088, 1973.
Lauren N Steimle, David L Kaufman, and Brian T Denton. Multi-model Markov decision processes.
IISE Transactions, pp. 1-16, 2021.
Jie Tan, Tingnan Zhang, Erwin Coumans, Atil Iscen, Yunfei Bai, Danijar Hafner, Steven Bohez, and
Vincent Vanhoucke. Sim-to-real: Learning agile locomotion for quadruped robots. arXiv preprint
arXiv:1804.10332, 2018.
2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp. 23-30.
IEEE, 2017.
Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint
arXiv:1011.3027, 2010.
Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Juny-
oung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster
level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.
Nikos Vlassis, Michael L Littman, and David Barber. On the computational complexity of stochastic
controller optimization in POMDPs. ACM Transactions on Computation Theory (TOCT), 4(4):
1-8, 2012.
Ruosong Wang, Ruslan Salakhutdinov, and Lin F Yang. Reinforcement learning with general
value function approximation: Provably efficient approach via bounded Eluder dimension. arXiv
preprint arXiv:2005.10804, 2020.
Chen-Yu Wei, Mehdi Jafarnia Jahromi, Haipeng Luo, Hiteshi Sharma, and Rahul Jain. Model-
free reinforcement learning in infinite-horizon average-reward Markov decision processes. In
International Conference on Machine Learning, pp. 10170-10180. PMLR, 2020.
Chen-Yu Wei, Mehdi Jafarnia Jahromi, Haipeng Luo, and Rahul Jain. Learning infinite-horizon
average-reward MDPs with linear function approximation. In International Conference on Artifi-
cial Intelligence and Statistics, pp. 3007-3015. PMLR, 2021.
Yi Xiong, Ningyuan Chen, Xuefeng Gao, and Xiang Zhou. Sublinear regret for learning POMDPs.
arXiv preprint arXiv:2107.03635, 2021.
Wenhao Yu, C Karen Liu, and Greg Turk. Policy transfer with strategy optimization. arXiv preprint
arXiv:1810.05751, 2018.
Zihan Zhang and Xiangyang Ji. Regret minimization for reinforcement learning by evaluating the
optimal bias function. arXiv preprint arXiv:1906.05110, 2019.
Yuren Zhong, Aniket Anand Deshmukh, and Clayton Scott. PAC reinforcement learning without
real-world feedback. arXiv preprint arXiv:1909.10449, 2019.
Dongruo Zhou, Quanquan Gu, and Csaba Szepesvari. Nearly minimax optimal reinforcement learn-
ing for linear mixture Markov decision processes. arXiv preprint arXiv:2012.08507, 2020.
4 :
4Compute M 1 = arg max M∈U1 ρ ⋆ M 5: for step h = 1, · · · , H do Take action a h = π * M k (s h ) inthe current state s h , and transit to state s h+1 Add (s h , a h , s h+1 , λ * M k ) to the set Z new6:
7:
8:
if importance score sup f1,f2∈F
f1−f2 2
Znew
f1−f2 2
Z +α ≥ 1 then
9:
M k ≥ ρ * M * , then the regret in H steps can be bounded as:Here we use K to denote the total number of episodes, and we use τ (k) to denote the first step of episode k. The first inequality is due to the optimistic condition ρ * M k ≥ ρ * M * . The first equality is due to the Bellman equation in the finite-horizon setting (Eqn 10). The last inequality is due to 0 ≤ λ * M (s) ≤ D. From the above inequality, we know that the regret in episode k depends on the summationProof. (Proof of Theorem 5) Recall that we use h 0 to denote the total steps in stage 1. Firstly, we prove that Gap(π, U) is upper bounded by O (E[h 0 ] + D). V * M * ,1 (s 1 ) − Vπ M * ,1 (s 1 ) (51)The outer expectation is over all possible h 0 , while the inner expectation is over the possible trajectories given fixed h 0 . By Lemma 4, we know thatπ = π * DR after h 0 steps with probability at least 1 − 1 H . If this high-probability event happens, the second part in the above inequality equals toWe can prove that this term is upper bounded by 2D.to denote the state distribution in step h after starting from state s in step h 0 + 1 following policy π.where the second equality is due to the Bellman equation in infinite-horizon setting (Eqn 10). Note that by the communicating property, we have 0 ≤ λ * M * (s) ≤ D for any s ∈ S. Therefore, we haveIf the high-probability event defined in Lemma 4 does not hold (This happens with probability atD.2 PROOF OF THEOREM 1Proof. The theorem can be proved by combining Theorem 5 and Lemma 1.By Theorem 5, we prove that the constructed policyπ satisfiesProof. For any fixed M ∈ U, and fixed step h ∈ [H], by Azuma's inequality, we have with probability at least 1 − 1Taking union bounds over all possible M and h, we know that the above event holds for all possible M and h with probability 1 − 1 MH . Under the above event, the true MDP M * will never be eliminated from the MDP set U k . Therefore, we have ρProof. (Proof of Theorem 6) By Lemma 8, we know that ρ * M k ≥ ρ * M * for any k ∈ [K]. We use τ (h) to denote the episode that step h belongs to. We can upper bound the regret in H steps as follows:By Azuma's inequality, we know thatholds with probability at least 1 − 1Published as a conference paper at ICLR 2022Now we upper bound the importance score in the switching step τ (k + 1) − 1.Suppose the number of episodes is at most K. If we set α = 4D 2 + 1, we haveBy the switching rule, we have sup f1,f2+α ≥ 1. Therefore, the LHS of the above inequality is exactly K. Thus we haveLemma 10. (Optimism) With probability at least 1 − 1 H , M * ∈ U k holds for any episode k ∈ [K].Proof. This lemma comes directly from Theorem 6 of Ayoub et al.(2020). Define the Filtration F = (F h ) h>0 so that F h−1 is generated by (s 1 , a 1 , λ 1 , · · · , s h , a h , λ h ). Proof. (Proof of Theorem 7) Let τ (k) be the first step of episode k. Under the high-probability event defined in Lemma 10, we can decompose the regret using the same trick in previous sections.where the first inequality is due to optimism condition in Lemma 10. The first equality is due to the Bellman equation 10 and λ h = λ * M k for τ (k) ≤ h ≤ τ (k + 1) − 1. The last inequality is due to 0 ≤ λ h (s) ≤ D for any s ∈ S. Now we bound the first two terms in Eqn 90. The second term can be regarded as a martingale difference sequence. By Azuma's inequality, with probability at least 1 − 1 H ,Now we focus on the upper bound of the first term in Eqn 90. 
Under the high-probability event defined in Lemma 10, the true model P is always in the model class U k . For episode k, from the construction of U k we know that anyMoreover, by the if condition in Line 8 of Alg. 4, we have for anySumming up the above two equations, we haveWe invoke Lemma 26 ofJin et al. (2021)Plugging the results back to Eqn 90, we haveBy Lemma 2, we have V * M * ,1 (s 1 ) ≤ Hρ * M * + D. Therefore, we havewith probability at least 1 − 2 MH . If the high-probability event doesn't holds (This happens with probability at most 2 H ), then the gap V * M * ,1 (s 1 ) − V π * DR M * ,1 (s 1 ) still can be bounded by H. Taking expectation over the trajectory {s h } h , we haveThen the theorem can be proved by Lemma 1.Proof. Consider the following construction of U. There are 3M + 1 states. There are M actions in the initial state s 0 , which is denoted as {a i } M i=1 . After taking action a i in state s 0 , the agent will transit to state s i,1 with probability 1. In state s i,1 for i ∈ [M ], the agent can only take action a 0 , and then transits to state s i,2 with probability p i , and transits to state s i,3 with probability 1 − p i . State {s i,2 } M i=1 and {s i,3 } M i=1 are all absorbing states. That is, the agent can only take one action a 0 in these states, and transits back to the current state with probability 1. The agent can only obtain reward 1 in state s i,2 for i ∈ [M ], and all the other states have zero rewards. Therefore, if the agent knows the transition dynamics of the MDP, it should take action a i with i = arg max i p i in state s 0 . Now we define the transition dynamics of each MDP M i . For each MDP M i ∈ U, we have p i = 1 and p j = 0 for all j ∈ [M ], j = i. Therefore, the agent cannot identify M * in state s 0 . The best policy in state s 0 for latent MDP U is to randomly take an action a i . In this case, the sim-to-real gap can be at least Ω(H) since the agent takes the wrong action in state s 0 with probability 1 − 1 M .Published as a conference paper at ICLR 2022G.2 PROOF OF THEOREM 3Proof. We first show that Ω( √ M H) holds with the hard instance for multi-armed bandit(Lattimore & Szepesvári, 2020). Consider a class of K-armed bandits instances with K = M . For the bandit instance M i , the expected reward of arm i is 1 2 + ǫ, while the expected rewards of other arms are 1 2 . Note that this is exactly the hard instance for K-armed bandits. Following the proof idea of the lower bound for multi-armed bandits, we know that the regret (sim-to-real gap) is at least Ω( √ M H).We restate the hard instance construction fromJaksch et al. (2010). This hard instance construction has also been applied to prove the lower bound in episodic setting(Jin et al., 2018). We firstly introduce the two-state MDP construction. In their construction, the reward does not depend on actions but states. State 1 always has reward 1 and state 0 always has reward 0. From state 1, any action takes the agent to state 0 with probability δ, and to state 1 with probability 1 − δ. In state 0, there is one action a * takes the agent to state 1 with probability δ + ǫ, and the other action a takes the agent to state 1 with probability δ. A standard Markov chain analysis shows that the stationary distribution of the optimal policy (that is, the one that takes action a * in state 0) has a probability of being in state 1 ofIn contrast, acting sub-optimally (that is taking action a in state 0) leads to a uniform distribution over the two states. 
The regret per time step of pulling a sub-optimal action is of order ǫ/δ.Consider O(S) copies of this two-state MDP where only one of the copies has such a good action a * . These copies are connected into a single MDP with an A-ary tree structure. In this construction, the agent needs to identify the optimal state-action pair over totally SA different choices. Setting δ = 1 D and ǫ = c SA T D , Jaksch et al.(2010)prove that the regret in the infinite-horizon setting is Ω( √ DSAT ).Our analysis follows the same idea of Jaksch et al.(2010). For the MDP instance M i , the optimal state-action pair is (s i , a i ) ((s i , a i ) = (s j , a j ), ∀i = j). With the knowledge of the transition dynamics of each M i , the agent needs to identify the optimal state-action pair over totally M different pairs in our setting. Therefore, we can similarly prove that the lower bound is Ω( √ DM H) following their analysis.G.3 LOWER BOUND IN THE LARGE SIMULATOR CLASSProposition 2. Suppose All MDPs in the MDP set U are linear mixture models(Zhou et al., 2020)sharing a common low dimensional representation with dimension d = O(log(M )), there exists a hard instance such that the sim-to-real gap of the policy π * DR returned by the domain randomization oracle can be still Ω(H) when M ≥ H.Proof. We can consider the following linear bandit instance as a special case. Suppose there are two actions with feature φ(a 1 ) = (1, 0) and φ(a 2 ) = (1, 1). In the MDP set M, there are M − 1 MDPs with parameter θ i = ( 1 2 , −p i ) with 1 4 < p i < 1 2 , i ∈ [M − 1], and one MDP with parameter θ M = ( 1 2 , 1 2 ). Suppose M = 4H + 5, the optimal policy of such an LMDP with uniform initialization will never pull the action a 2 , which can suffer Ω(H) sim-to-real gap in the MDP with parameter θ M . |
222,090,711 | EigenGame: PCA as a Nash Equilibrium | We present a novel view on principal component analysis (PCA) as a competitive game in which each approximate eigenvector is controlled by a player whose goal is to maximize their own utility function. We analyze the properties of this PCA game and the behavior of its gradient based updates. The resulting algorithm-which combines elements from Oja's rule with a generalized Gram-Schmidt orthogonalization-is naturally decentralized and hence parallelizable through message passing. We demonstrate the scalability of the algorithm with experiments on large image datasets and neural network activations. We discuss how this new view of PCA as a differentiable game can lead to further algorithmic developments and insights.Preprint. Under review. | [] | EigenGame: PCA as a Nash Equilibrium
Ian Gemp [email protected]
LondonUK
Brian Mcwilliams
LondonUK
Claire Vernade [email protected]
LondonUK
Thore Graepel
LondonUK
EigenGame: PCA as a Nash Equilibrium
We present a novel view on principal component analysis (PCA) as a competitive game in which each approximate eigenvector is controlled by a player whose goal is to maximize their own utility function. We analyze the properties of this PCA game and the behavior of its gradient based updates. The resulting algorithm-which combines elements from Oja's rule with a generalized Gram-Schmidt orthogonalization-is naturally decentralized and hence parallelizable through message passing. We demonstrate the scalability of the algorithm with experiments on large image datasets and neural network activations. We discuss how this new view of PCA as a differentiable game can lead to further algorithmic developments and insights.Preprint. Under review.
Introduction
The principal components of data are the vectors that align with the directions of maximum variance. These have two main purposes: a) as interpretable features and b) for data compression. Recent methods for principal component analysis (PCA) focus on the latter, explicitly stating objectives to find the k-dimensional subspace that captures maximum variance (e.g., [59]), and leaving the problem of rotating within this subspace to, for example, a more efficient downstream singular value decomposition (SVD) step [48] 1 . This point is subtle, yet critical. For example, any pair of two-dimensional, orthogonal vectors spans all of R 2 and, therefore, captures maximum variance of any two-dimensional dataset. However, for these vectors to be principal components, they must, in addition, align with the directions of maximum variance which depends on the covariance of the data. By learning the optimal subspace, rather than the principal components themselves, objectives focused on subspace error ignore the first purpose of PCA. In contrast, modern nonlinear representation learning techniques focus on learning features that are both disentangled (uncorrelated) and low dimensional [13,31,41,43,54].
It is well known that the PCA solution of the d-dimensional dataset X ∈ R n×d is given by the eigenvectors of X X or equivalently, the right singular vectors of X. Impractically, the cost of computing the full SVD scales with O(min{nd 2 , n 2 d})-time and O(nd)-space [56,59]. For moderately sized data, randomized methods can be used [26]. Beyond this, stochastic-or onlinemethods based on Oja's rule [47] or power iterations [52] are common. Another option is to use streaming k-PCA algorithms such as Frequent Directions (FD) [21] or Oja's algorithm [2] with storage complexity O(kd) 2 . Sampling or sketching methods also scale well, but again, focus on the top-k subspace [14,19,55].
In contrast to these approaches, we view each principal component (equivalently, eigenvector) as a player in a game whose objective is to maximize their own local utility function in competition with 1 After learning the top-k subspace V ∈ R d×k , the rotation can be recovered via an SVD of XV . 2 FD approximates the top-k subspace; Oja's algorithm approximates the top-k eigenvectors. other vectors. The proposed utility gradients are interpretable as a combination of Oja's rule and a generalized Gram-Schmidt process. We make the following contributions:
• A novel formulation of solving for the principal components as finding the Nash equilibrium of a suitable game,
• A sequential, globally convergent algorithm for approximating the Nash in the batch, nonstreaming setting,
• A decentralized version of the algorithm with an accompanying empirical analysis demonstrating the proposed approach as competitive with modern streaming k-PCA algorithms on synthetic and real data,
• In demonstration of the scaling of the approach, we compute the top-32 principal components of the matrix of RESNET-200 activations on the IMAGENET dataset (n ≈ 10 6 , d ≈ 20 · 10 6 ).
Each of these contributions is important. Novel formulations often lead to deeper understanding of problems, thereby, opening doors to improved techniques. In particular, k-player games are in general complex and hard to analyze. In contrast, PCA has been well-studied. By combining the two fields we hope to develop useful analytical tools. Our specific formulation is important because it obviates the need for any centralized orthonormalization step and lends itself naturally to decentralization. And lastly, theory and experiments support the viability of this approach for continued research.
Related work
PCA is a century-old problem and a massive literature exists and we only scratch the surface here (see e.g. [34] for a comprehensive overview from a statistical perspective and [24] for a numerical linear algebra perspective). When the dataset size allows it, the preferable solution is the SVD. For moderately sized datasets, SVD combined with randomized algorithms can be used to recover the top-k components [26].
Consider searching for the first eigenvalue of a symmetric matrix M . In neuroscience, Hebb's rule [28] refers to a connectionist rule that solves for the top components with additive updates of a vector v as v ← v + ηM v. Likewise, Oja's rule [47] refers to a similar update v ← v + η(I − vv )M v. In machine learning, using a normalization step of v ← v/||v|| with Hebb's rule is somewhat confusingly referred to as Oja's algorithm [56], the reason being that the subtractive term in Oja's rule can be viewed as a regularization term for implicitly enforcing the normalization. If a normalization step is added to Oja's rule, this is referred to as Krasulina's algorithm [36]. In the language of Riemannian manifolds, v/||v|| can be recognized as a retraction and (I − vv ) as projecting the gradient M v onto the tangent space of the sphere [1].
An extension of Krasulina's algorithm to the top-k setting, termed Matrix Krasulina [59], was recently proposed in the machine learning literature. This algorithm can be recognized as projecting the gradient onto the Stiefel manifold (the space of orthonormal matrices) followed by a QR step 3 (plus some minor sign accounting), which is a well known retraction.
Of these, Oja's algorithm has arguably been the most extensively studied. Recent approaches have augmented Oja's algorithm with variance reduction to improve convergence rate [58]. It is now known that the top-k subspace can be approximated to ρ accuracy in O(1/ρ) iterations with constants depending in particular on the spectral gap. Gap-free bounds also exist and require additional space, see [2] for a detailed discussion. For the top component (k = 1), there exist sharp convergence rates [27,33,57], later extended to k > 1 [2,60] with global convergence guarantees.
Maintaining orthonormality of the components via QR is computationally expensive. Amid and Warmuth [3] propose an alternative Krasulina method which does not require re-orthonormalization but instead requires inverting a k × k matrix; in a streaming setting restricted to minibatches of size 1 (X t ∈ R d ), Sherman-Morrison [24] can be used to efficiently replace the inversion step. Raja and Bajwa [50] develop a data-parallel distributed algorithm for the top eigenvector. In contrast, other methods extract the top components in sequence by solving for the ith component using an algorithm such as power iteration or Oja's, and then enforcing orthogonality by removing the learned subspace from the matrix, a process known as deflation. Alternatively, the deflation process may be intertwined with the learning of the top components. The generalized Hebbian algorithm [53] (GHA) works this way as do Lagrangian inspired formulations [22] as well as our own approach. We make the connection between GHA and our algorithm concrete in Prop. G.1. Note, however, that the GHA update is not the gradient of any utility (Prop. G.2) and therefore, lacks a clear game interpretation.
Our approach estimates the top-k principal components without reorthonormalization by means of a generalized Gram-Schmidt process. In Section 5, we prove theoretical properties of our proposed game and convergence of a sequential version of our algorithm to the actual principal components, not just the top-k subspace. This algorithm is sound but not decentralized. We propose a natural decentralized version in Algorithm 2 that allows us to achieve data and model parallelism and support this choice by demonstrating competitive performance on large scale experiments.
For context, Oja's algorithm converges to the actual principal components [2] and Matrix Krasulina's [59] converges to the top-k subspace. However, neither can be obviously decentralized. GHA [53] converges to the actual principal components asymptotically and can be decentralized. Each of these methods is applicable in the streaming k-PCA setting.
PCA as an Eigen-Game
We adhere to the following notation. Vectors and matrices meant to approximate principal components (equivalently eigenvectors) are designated with hats,v andV respectively, whereas true principal components are v and V . Subscripts indicate which eigenvalue a vector is associated with. For example, v i is the ith largest eigenvector. By an abuse of notation, v j<i refers to the set of vectors {v j |j ∈ {1, . . . , i − 1}} and are also referred to as the parents of v i (v i is their child). Sums over indices should be clear from context, e.g., j<i = i−1 j=1 . The inner product is written u, v = u v. We denote the unit sphere by S d−1 and simplex by ∆ d−1 in d-dimensional ambient space.
Outline of derivation As argued in the introduction, the PCA problem is often mis-interpreted as learning a projection of the data into a subspace that captures maximum variance (equiv. maximizing the trace of a suitable matrix R introduced below). This is in contrast to the original goal of learning the principal components. We first develop the intuition for deriving our utility functions by (i) showing that only maximizing the trace of R is not sufficient for recovering all principal components (equiv. eigenvectors), and (ii) showing that minimizing off-diagonal terms in R is a complementary objective to maximizing the trace and can recover all components. We then consider learning only the top-k and construct utilities that are consistent with findings in (i) and (ii), equal the true eigenvalues at the Nash of the game we construct, and result in a game that is amenable to analysis.
Derivation of player utilities. The eigenvalue problem for a symmetric matrix X X = M ∈ R d×d is to find a matrix of d orthonormal column vectors V (implies V is full-rank) such that M V = V Λ with Λ diagonal. Given a solution to this problem, the columns of V are known as eigenvectors and corresponding entries in Λ are eigenvalues. By left-multiplying by V and recalling V V = V V = I by orthonormality (i.e., V is unitary), we can rewrite the equality as
V M V = V V Λ unitary = Λ.(1)
LetV denote a guess or estimate of the true eigenvectors V and define R(V ) def =V MV . The PCA problem is often posed as maximizing the trace of R (equivalent to minimizing reconstruction error):
max V V =I i R ii = Tr(R) = Tr(V MV ) = Tr(VV M ) = Tr(M ) .(2)
Surprisingly, the objective in (2) is independent ofV , so it cannot be used to recover all (i.e., k = d) the eigenvectors of M -(i). Alternatively, Equation (1) implies the eigenvalue problem can be phrased as ensuring all off-diagonal terms of R are zero, thereby ensuring R is diagonal-(ii):
min V V =I i =j R 2 ij .(3)
It is worth further examining the entries of R in detail. Diagonal entries R ii = v i , Mv i are recognized as Rayleigh quotients because ||v i || = 1 by the constraints. Off-diagonal entries R ij = v i , Mv j measure alignment betweenv i andv j under a generalized inner product ·, · M .
So far, we have considered learning all the eigenvectors. If we repeat the logic for the top-k eigenvectors with k < d, then by Equation (1), R must still be diagonal. V is not square, so V V = I, but assuming V is orthonormal as before, we have V V = P is a projection matrix. Left-multiplying Equation (1) by V now reads (P M )V = V Λ so we are solving an eigenvalue problem for a subspace of M .
If we only desire the top-k eigenvectors, maximizing the trace encourages learning a subspace spanned by the top-k eigenvectors, but does not recover the eigenvectors themselves. On the other hand, Equation (3) places no preference on recovering large over small eigenvectors, but does enforce the columns ofV to actually be eigenvectors. The preceding exercise is intended to introduce minimizing the off-diagonal terms of R as a possible complementary objective for solving top-k PCA. Next, we will use these two objectives to construct utility functions for each eigenvectorv i .
We want to combine the objectives to take advantage of both their strengths. A valid proposal is
max V V =I i R ii − i =j R 2 ij .(4)
However, this objective ignores the natural hierarchy of the top-k eigenvectors. For example,v 1 is penalized for aligning withv k and vice versa, butv 1 , being the estimate of the largest eigenvector, should be free to search for the direction that captures the most variance independent of the locations of the other vectors. Instead, first consider solving for the top-1 eigenvector, v 1 , in which case,
R = [ v 1 , Mv 1 ] is a 1 × 1 matrix.
In this setting, Equation (3) is not applicable because there are no off-diagonal elements, so maxv 1v 1 =1 v 1 , Mv 1 is a sensible utility function forv 1 . If considering the top-2 eigenvectors,v 1 's utility remains as before, and we introduce a utility forv 2 . Equation (3) is now applicable, sov 2 's utility is
max v 2v 2 =1,v 1v 2=0 v 2 , Mv 2 − v 2 , Mv 1 2 v 2 , Mv 2(5)
where we have divided the off-diagonal penalty by v 2 , M v 2 so a) the two terms in Equation (5) are on a similar scale and b) for reasons that ease analysis. Additionally note that the constraintv 1v2 = 0 may be redundant at the optimum
(v * 1 = v 1 ,v * 2 = v 2 ) because the second term, v * 2 , Mv * 1 2 = v 2 , M v 1 2 = Λ 2 11 v 2 , v 1 2
, already penalizes such deviations (Λ ii is the ith largest eigenvector). These reasons motivate the following set of objectives (utilities), one for each vector i ∈ {1, . . . , k}:
max v iv i=1 u i (v i |v j<i ) =v i Mv i − j<i (v i Mv j ) 2 v j Mv j = ||Xv i || 2 − j<i Xv i , Xv j 2 Xv j , Xv j(6)
where the notation u i (a i |b) emphasizes that player i adjusts a i to maximize a utility conditioned on b.
It is interesting to note that by incorporating knowledge of the natural hierarchy, we are immediately led to constructing asymmetric utilities, and thereby, inspired to formulate the PCA problem as a game, rather than a direct optimization problem as in Equation (4).
Differentiable games The player utility functions are all differentiable. A differentiable game consists of k players, each with a differentiable optimization problem that depends on possibly all k players. The study of differentiable games has recently found application in machine learning for GANs [5,20], multi-agent reinforcement learning [38], and draws on a rich foundation in dynamical systems, variational inequalities, and game theory [45,46]. Our specific formulation has connections to resource congestion games [51]-here, resources are directions on the hypersphere and congestion is penalized through a generalized cosine distance.
A key concept in (differentiable) games is a Nash equilibrium. A Nash equilibrium specifies a variable for each player from which no player can unilaterally deviate and improve their outcome. In this case, V is a (strict-)Nash equilibrium if and only if for all i, u i (v i |v j<i ) > u i (z i |v j<i ) for all z i ∈ S d−1 . Theorem 3.1 (PCA Solution is the Unique strict-Nash Equilibrium). Assume that the top-k eigenvalues of X X are distinct. Then the top-k eigenvectors form the unique strict-Nash equilibrium of the proposed game in Equation (6). 4 The proof is deferred to Appendix H. Figure 1: Each player i's utility function depends on its parents represented here by a directed acyclic graph. Each parent must broadcast its vector, "location", down the hierarchy.
Solving for the Nash of a game is difficult in general (specifically, it belongs to the class of PPADcomplete problems [15,23]). However, because the game is hierarchical and each player's utility only depends on its parents, it is possible to construct a sequential algorithm that is convergent by solving each player's optimization problem in sequence. We elaborate in the next two sections.
Method
Utility gradient. In Section 3, we mentioned that normalizing the penalty term from Equation (5) had a motivation beyond scaling. Dividing by v j , Mv j results in the following gradient for player i:
∇v i u i (v i |v j<i ) = 2M v i − j<iv i Mv ĵ v j Mv jv j generalized Gram-Schmidt = 2X Xv i − j<i Xv i , Xv j Xv j , Xv j Xv j . (7)
The resulting gradient with normalized penalty term has an intuitive meaning. It consists of a single generalized Gram-Schmidt step followed by the standard matrix product found in power iteration and Oja's rule. Also, notice that applying the gradient as a fixed point operator in sequence (v i ← 1 2 ∇v i u i (v i |v j<i )) on M = I recovers the standard Gram-Schmidt procedure for orthogonalization.
A sequential algorithm. Each eigenvector can be learned by maximizing its utility. The vectors are constrained to the unit sphere, a non-convex Riemannian manifold, so we use Riemmanian gradient ascent with gradients given by Equation (7). Recall that each u i depends onv j<i . If any ofv j<i are being learned concurrently, thenv i is maximizing a non-stationary objective. Proving convergence in the non-stationary setting is very difficult. Instead, for completeness, we prove convergence assuming eachv i is learned in sequence. Algorithm 1 learnsv i given fixed parentsv j<i ; we present the convergence guarantee in Section 5 and details on setting ρ i and α in Appendix K.
Algorithm 1 EigenGame R -Sequential
Given: matrix X ∈ R n×d , initial vectorv 0 i ∈ S d−1 , learned approximate parentsv j<i , step size α, and maximum error tolerance
ρ i . v i ←v 0 i t i = 5 4 min(||∇v0 i u i ||/2, ρ i ) −2 for t = 1 : t i do ∇v i ← 2X Xv i − j<i Xvi,Xvj Xvj ,Xvj Xv j ∇ R vi ← ∇v i − ∇v i ,v i v î v i ←v i + α∇ R vî v i ←v i ||v i || end for returnv i
A decentralized algorithm. While Algorithm 1 enjoys a convergence guarantee, learning every parentv j<i before learningv i may be unnecessarily restrictive. Intuitively, as parents approach their respective optima, they become quasi-stationary, so we do not expect maximizing utilities in parallel to be problematic in practice. To this end, we propose running Algorithm 2 on eachv i in parallel visualized in Figure 2.
Algorithm 2 EigenGame R (EigenGame-update with ∇v i instead of ∇ R vi ) Given: data stream, X t ∈ R m×d , total iterations T , initial vectorv 0 i ∈ S d−1 , and step size α. v i ←v 0
i for t = 1 : T do ∇v i ← 2X t X tvi − j<i Xtvi,Xtvj Xtvj ,Xtvj X tvj ∇ R vi ← ∇v i − ∇v i ,v i v î v i ←v i + α∇ R vî v i ←v i ||v i || broadcast(v i ) end for returnv i v 1 v 2 v 3
True PCs In practice we can assign each eigenvector update to its own device (e.g. a GPU or TPU). Systems with fast interconnects may facilitate tens, hundreds or thousands 5 of accelerators to be used. In such settings, the overhead of broadcast(v i ) is minimal. We can also specify that the data stream is co-located with the update sov i updates with respect to its own X i,t . This is a standard paradigm for e.g. data-parallel distributed neural network training. We provide further details in Section 6.
Message Passing on a DAG. Our proposed utilities enforce a strict hierarchy on the eigenvectors. This is a simplification that both eases analysis (see Appendix I) and improves convergence 6 , however, it is not necessarily optimal. We assume vectors are initialized randomly on the sphere and, for instance, v k may be initialized closer to v 1 than evenv 1 and vice versa. The hierarchy shown in Figure 1 enforces a strict graph structure for broadcasting information of parents to the childrens' utilities.
Variations. We considered several variants of Equation (6). To our knowledge, our formulation is novel and uniquely extensible: notice the utility is composed entirely of inner products on $X\hat{v}_i$ terms, which can be replaced by more general function approximators $f_i(X)$, e.g., neural networks. The inner products themselves can be replaced by kernels. Other variants may solve PCA but may not be gradients of any function. One disadvantage of our formulation is that a naive estimation of the gradients in the stochastic setting is biased. This is mitigated with large batch sizes (see the experiments in Section 6 and further discussion in Appendix E). We leave reducing the bias to future work.
Convergence of EigenGame
Here, we first show that Equation (6) has a simple form such that any local minimum of $u_i$ is also a global minimum. Player $i$'s utility depends on its parents, so we next explain how error in the parents propagates to children through mis-specification of player $i$'s utility. Using the first result and accounting for this error, we are then able to give global, finite-sample convergence guarantees in the deterministic setting by leveraging recent non-convex Riemannian optimization theory.
Figure 3: (a) The longest streak of consecutive vectors with angular error less than $\frac{\pi}{8}$ radians is plotted vs. algorithm iterations for a matrix $M \in \mathbb{R}^{50\times50}$ with a spectrum decaying from 1000 to 1 linearly and exponentially. Average runtimes are reported in milliseconds next to the method names. We omit Krasulina's as it is only designed to find the top-$k$ subspace. Both EigenGame variants and GHA achieve similar asymptotes on the linear spectrum. (b) Longest streak and subspace distance on MNIST with average runtimes reported in seconds. (a,b) Learning rates were chosen from $\{10^{-3}, \ldots, 10^{-6}\}$ on 10 held-out runs. Solid lines denote results with the best performing learning rate. Dotted and dashed lines denote results using the best learning rate $\times 10$ and $\times 0.1$. All plots show means over 10 trials. Shading highlights $\pm$ standard error of the mean for the best learning rates.

The utility landscape and parent-to-child error propagation. Equation (6) is abstruse, but we prove that the shape of player $i$'s utility is simply sinusoidal in the angular deviation of $\hat{v}_i$ from the optimum. The amplitude of the sinusoid varies with the direction of the angular deviation along the sphere and is dependent on the accuracy of players $j < i$. In the special case where players $j < i$ have learned the top-$(i-1)$ eigenvectors exactly, player $i$'s utility simplifies (see Lemma J.1) to
$$u_i(\hat{v}_i, v_{j<i}) = \Lambda_{ii} - \sin^2(\theta_i)\Big[\Lambda_{ii} - \sum_{l>i} z_l \Lambda_{ll}\Big] \quad (8)$$
where $\theta_i$ is the angular deviation and $z \in \Delta^{d-1}$ parameterizes the deviation direction. Note that $\sin^2$ has period $\pi$ instead of $2\pi$, which simply reflects the fact that $v_i$ and $-v_i$ are both eigenvectors.
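As a quick sanity check of Equation (8), the following sketch compares the utility of a perturbed $\hat{v}_i$ against the closed form on a small diagonal example with exact parents. The setup (dimension, spectrum, deviation direction) is our own illustration, not from the paper.

```python
import numpy as np

d, i = 6, 2                                   # check player i = 2 (0-indexed)
lam = np.array([10., 8., 5., 3., 2., 1.])     # eigenvalues; M = diag(lam)
M = np.diag(lam)
V = np.eye(d)                                 # true eigenvectors: standard basis

theta = 0.3
delta = np.zeros(d); delta[4] = 1.0           # deviation direction, so z_4 = 1
v_hat = np.cos(theta) * V[:, i] + np.sin(theta) * delta

def utility(v, parents):
    u = v @ M @ v
    for p in parents:
        u -= (v @ M @ p) ** 2 / (p @ M @ p)
    return u

lhs = utility(v_hat, [V[:, 0], V[:, 1]])                 # exact parents v_1, v_2
rhs = lam[i] - np.sin(theta) ** 2 * (lam[i] - lam[4])    # Eq. (8) with z_4 = 1
print(np.isclose(lhs, rhs))                              # True
```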
An error propagation analysis reveals that it is critical to learn the parents to a given degree of accuracy. The angular distance between $v_i$ and the maximizer of player $i$'s utility with approximate parents has $\tan^{-1}$ dependence (i.e., a soft step-function; see Lemma J.5 and Figure 10 in Appendix J).

Theorem 5.1 (Global convergence). Algorithm 1 achieves finite sample convergence to within $\theta_{tol}$ angular error of the top-$k$ principal components, independent of initialization. Furthermore, if each $\hat{v}_i$ is initialized to within $\frac{\pi}{4}$ of $v_i$, Algorithm 1 returns the components with angular error less than $\theta_{tol}$ in
$$T = \mathcal{O}\Big(k\Big[\frac{(k-1)!}{\theta_{tol}}\prod_{i=1}^{k}\frac{16\Lambda_{11}}{g_i}\Big]^2\Big)$$
iterations. Proofs are deferred to Appendices K.4 and K.5.
Angular error is defined as the angle between $\hat{v}_i$ and $v_i$: $\theta_i = \sin^{-1}\big(\sqrt{1 - \langle v_i, \hat{v}_i\rangle^2}\big)$.
The first $k$ in the formula for $T$ appears from a naive summing of worst case bounds on the number of iterations required to learn each $\hat{v}_{j<k}$ individually. The constant 16 arises from the error propagation analysis; parent vectors $\hat{v}_{j<i}$ must be learned to under 1/16th of a canonical error threshold for the child $\hat{v}_i$, $\frac{g_i}{(i-1)\Lambda_{11}}$, where $g_i = \Lambda_{ii} - \Lambda_{i+1,i+1}$.
The Riemannian optimization theory we leverage dictates that $\frac{1}{\rho^2}$ iterations are required to meet an $\mathcal{O}(\rho)$ error threshold. This is why the squared inverse of the error threshold appears here. Breaking down the error threshold itself, the ratio $\Lambda_{11}/g_i$ says that more iterations are required to distinguish eigenvectors when the difference between them (summarized by the gap $g_i$) is small relative to the scale of the spectrum, $\Lambda_{11}$. The $(k-1)!$ term appears because learning smaller eigenvectors requires learning a much more accurate $\hat{v}_1$ higher up the DAG.
Lastly, the utility function for each $\hat{v}_i$ is sinusoidal, and it is possible that we initialize $\hat{v}_i$ with initial utility arbitrarily close to the trough (bottom) of the function where gradients are arbitrarily small. This is why the global convergence rate depends on the initialization in general. Note that Algorithm 1 effectively detects the trough by measuring the norm of the initial gradient ($\nabla_{\hat{v}_i^0} u_i$) and scales the number of required iterations appropriately. A complete theorem that considers the probability of initializing $\hat{v}_i$ within $\frac{\pi}{4}$ of $v_i$ is in Appendix K, but this possibility shrinks to zero in high dimensions. We would also like to highlight that these theoretical findings are actually strong relative to other claims. For example, the exponential convergence guarantee for Matrix Krasulina requires the initial guess at the eigenvectors to capture the top-$(k-1)$ subspace [59], which is unlikely when $d \gg k$. A similar condition is required in [58]. These guarantees are given for the mini-batch setting while ours is for the full-batch; however, we provide global convergence without restrictions on initialization. Improving our convergence rate to exponential is outside the scope of this work, but we characterize the shape of the utilities as sinusoidal in Equation (8). If the eigenvectors are guessed within $\frac{\pi}{4}$, they are within a region of their utility that is strongly concave. An exponential convergence rate in the full-batch setting can be obtained by using recent Riemannian acceleration techniques [40].

Figure 4: Top-8 principal components of the activations of a RESNET-200 on IMAGENET ordered block-wise by network topology (dimension of each block on the right y-axis). Block 1 is closest to the input and Block 5 is the output of the network. Color coding is based on relative variance between blocks across the top-8 PCs, from blue (low) to red (high).
Experiments
We compare our approach against GHA, Matrix Krasulina, and Oja's algorithm. We present both EigenGame and EigenGame$^R$, which projects the gradient onto the tangent space of the sphere each step. We measure performance of the methods in terms of principal component accuracy and subspace distance. We measure principal component accuracy by the number of consecutive components that are estimated within an angle of $\frac{\pi}{8}$ from ground truth. For example, if the angular errors of the $\hat{v}_i$'s returned by a method are, in order, $[\theta_1, \theta_2, \theta_3, \ldots] = [\frac{\pi}{16}, \frac{\pi}{4}, \frac{\pi}{10}, \ldots]$, then the method is credited with a streak of only 1 regardless of the errors $\theta_{i>2}$. For Matrix Krasulina, we first compute the optimal matching from $\hat{v}_i$ to ground truth before measuring angular error. We measure normalized subspace distance using $1 - \frac{1}{k}\mathrm{Tr}(U^* P) \in [0, 1]$ where $U^* = V V^\dagger$ and $P = \hat{V}\hat{V}^\dagger$ [59].
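A sketch of how these two metrics might be computed; the function names are ours, and for Matrix Krasulina one would add the optimal-matching step described above.

```python
import numpy as np

def angular_error(v_true, v_hat):
    # theta = arcsin(sqrt(1 - <v, v_hat>^2)); sign-invariant by construction
    c = np.clip(np.abs(v_true @ v_hat), 0.0, 1.0)
    return np.arcsin(np.sqrt(1.0 - c ** 2))

def longest_streak(V_true, V_hat, tol=np.pi / 8):
    # number of consecutive leading components within `tol` radians of truth
    streak = 0
    for v, v_hat in zip(V_true.T, V_hat.T):
        if angular_error(v, v_hat) > tol:
            break
        streak += 1
    return streak

def subspace_distance(V_true, V_hat):
    # 1 - Tr(U* P)/k with U* = V V^dagger and P = V_hat V_hat^dagger
    k = V_true.shape[1]
    U = V_true @ np.linalg.pinv(V_true)
    P = V_hat @ np.linalg.pinv(V_hat)
    return 1.0 - np.trace(U @ P) / k
```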
Synthetic data. Experiments on synthetic data demonstrate the viability of our approach (Figure 3a). Oja's algorithm performs best on the synthetic experiments because strictly enforcing orthogonalization with an expensive QR step greatly helps when solving for all eigenvectors. EigenGame is able to effectively parallelize this over $k$ machines, and the advantage of QR diminishes in Figure 3b. The remaining algorithms perform similarly on a linearly decaying spectrum; however, EigenGame performs better on an exponentially decaying spectrum, possibly due to instability of the Riemannian gradients near the equilibrium (see Appendix F for details). GHA and EigenGame$^R$ are equivalent under specific conditions (see Proposition G.1).
MNIST handwritten digits. We compare EigenGame against GHA, Matrix Krasulina, and Oja's algorithm on the MNIST dataset (Figure 3b). We flatten each image in the training set to obtain a $60{,}000 \times 784$ dimensional matrix. EigenGame is competitive with Oja's in a high batch size regime (1024 samples per minibatch). The performance gap between EigenGame and the other methods shrinks as the minibatch size is reduced (see Appendix E), expectedly due to biased gradients.
The principal components of RESNET-200 activations on IMAGENET are edge filters. A primary goal of PCA is to obtain interpretable low-dimensional representations. To this end, we present an example of using EigenGame to compute the top-32 principal components of the activations of a RESNET-200 trained on IMAGENET. We implemented a data-and-model parallel version of EigenGame in JAX [12] where each $\hat{v}_i$ is assigned to its own TPU [35]. Each device keeps a local copy of the RESNET parameters and the IMAGENET datastream. Sampling a minibatch (of size 128), computing the network activations, and updating $\hat{v}_i$ are all performed locally. The broadcast($\hat{v}_i$) step is handled by the pmap and lax.all_gather functions. Computing the top-32 principal components takes approximately nine hours on 32 TPUv3s. Figure 4 shows the top principal components of the activations of the trained network organized by network topology (consisting of five residual blocks). Note that EigenGame is not applied block-wise, but on all 20M dimensions. We do not assume independence between blocks, and each eigenvector has unit norm across all blocks. We observe that Block 1 (closest to the input) of PC 1 has very small magnitude activations relative to the other PCs. This is because PC 1 should capture the variance which discriminates most between the classes in the dataset. Since Block 1 is mainly concerned with learning low-level image filters, it stands to reason that although these are important for good performance, they do not necessarily extract abstract representations which are useful for classification. Conversely, we see that PC 1 has larger relative activations in the later blocks.
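The following is a minimal JAX sketch of this data-and-model parallel pattern, assuming one $\hat{v}_i$ per local device. It uses pmap and lax.all_gather as described in the text, but the function and variable names are ours; it is not the paper's released implementation, and it feeds precomputed feature batches rather than live network activations.

```python
import functools
import jax
import jax.numpy as jnp

K = jax.local_device_count()   # one eigenvector per accelerator
LR = 1e-3

@functools.partial(jax.pmap, axis_name="players")
def parallel_step(v_i, idx, x):
    # broadcast(v_i): gather every device's current vector, shape (K, d)
    V = jax.lax.all_gather(v_i, axis_name="players")
    Xv = x @ V.T                                         # (m, K): X v_j for all j
    Xvi = x @ v_i                                        # (m,)
    coeff = (Xv.T @ Xvi) / jnp.sum(Xv * Xv, axis=0)      # <Xv_i,Xv_j>/<Xv_j,Xv_j>
    coeff = jnp.where(jnp.arange(K) < idx, coeff, 0.0)   # only parents j < i
    grad = 2.0 * x.T @ (Xvi - Xv @ coeff)                # Equation (7)
    grad_r = grad - jnp.dot(grad, v_i) * v_i             # tangent-space projection
    v_new = v_i + LR * grad_r
    return v_new / jnp.linalg.norm(v_new)                # retraction to sphere
```

Each call consumes a per-device minibatch x of shape (K, m, d) with idx = jnp.arange(K); on a single-device machine the loop degenerates to learning only $\hat{v}_1$.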
We visualize the average principal activation in Block 1 in Figure 5. The higher PCs learn distinct filters (Gabor filters, Laplacian-of-Gaussian filters, edge filters in different orientations; cf. [8]). Figure 6 shows a scree plot of the Rayleigh quotients recovered by EigenGame and the respective utility achieved by each player. The two curves almost perfectly overlap. The mean relative magnitude of the penalty terms to the respective Rayleigh quotient in the utility is 0.025, indicating that the solutions of each player are close to orthogonal with respect to the generalized inner product (Equation (6)). This implies that the solutions are indeed eigenvectors. The scree plot has two distinct elbows at PC 2 and PC 6, corresponding to the differences in filters observed in Figure 5.
Conclusion
It seems easier to train a bi-directional LSTM with attention than to compute the SVD of a large matrix. -Chris Re NeurIPS 2017 Test-of-Time Award, Rahimi and Recht [49].
In this work we have motivated PCA from the perspective of a multi-player game. Based on this we developed a decentralized algorithm which enables truly large-scale principal components estimation. To demonstrate this we used EigenGame to analyze the behavior of a large neural network through the lens of PCA. To our knowledge this is the first analysis of its type and scale (for example [59] compute only 6 components of the d = 2300 output layer of VGG) and would be otherwise impractical with previous PCA approaches. Beyond this, EigenGame opens a variety of promising research directions.
Deep Variants. Player utilities are composed of $X\hat{v}_i$ projections that can be replaced with end-to-end trainable deep networks, $f_i(X|\text{weights})$. It is known that shallow autoencoders recover the top-$k$ subspace [4,11]. However, as we have shown, that is not equivalent to recovering the principal components, and, 30 years later, disentanglement is still a live research topic. What does our approach mean for (variational) autoencoders where disentanglement is still a challenge [41,43,54]?
Scale. In experiments, we broadcast across all edges in Figure 1 every iteration. Introducing lag or considering other random graph network structures may improve efficiency. Can we further reduce our memory footprint by storing only scalars of the losses (bandit feedback) and avoiding congestion through online bandit or reinforcement learning techniques? Our decentralized algorithm may have implications for federated and privacy preserving learning as well [9,29,30].
Games. EigenGame has a unique Nash equilibrium due to the fixed DAG structure, but vectors are initialized randomly, so $\hat{v}_k$ may start closer to $v_1$ than $\hat{v}_1$ does. Adapting the DAG could make sense, but might also introduce spurious fixed points or suboptimal Nash equilibria. Interesting connections exist between EigenGame and the algorithm of [20] for solving LQ-GAN [18]. Can these be leveraged to improve upon both? Might replacing vectors with populations accelerate extraction of the top PCs?
Core ML. This work generalizes to any symmetric positive definite matrix. Can it be extended to the asymmetric matrices [6,7] that arise in game theoretic analyses? EigenGame could be useful as a diagnostic or for accelerating training [16,37]; similarly, spectral normalization has shown to be a valuable tool for stabilizing GAN training [44]. Also, eigenvectors of the graph Laplacian provide features for RL problems [42]. Spectral PCA also shares deep connections with clustering [17].
Lastly, GANs [25] recently reformulated learning a generative model as a two-player zero-sum game. Here, we show how another fundamental unsupervised learning task can be formulated as a k-player game. While two-player, zero-sum games are well understood, research on k-player, general-sum games lies at the forefront in machine learning. We hope that marrying a fundamental, well-understood task in PCA with the relatively less understood domain of many player games will help advance techniques on both ends.
A Experiment Details
In the synthetic experiments, $\hat{V}$ is initialized randomly, so $M \in \mathbb{R}^{50\times50}$ is constructed as a diagonal matrix without loss of generality. The linear spectrum ranges from 1 to 1000 with equal spacing. The exponential spectrum ranges from $10^3$ to $10^0$ with equal spacing on the exponents.
A.1 Clarification of Oja Variants
As discussed in Section 2, it is easy to confuse the various Oja methods. In our experiments, Oja's algorithm refers to applying Hebb's rule $\hat{v}_i \leftarrow \hat{v}_i + \eta M\hat{v}_i$ followed by an orthonormalization step computed with QR as in Algorithm 3:

Algorithm 3 Oja's Algorithm
Given: data stream $X_t \in \mathbb{R}^{m\times d}$, total iterations $T$, initial $\hat{V}^0 \in S^{d-1}\times\ldots\times S^{d-1}$, step size $\eta$
  $\hat{V} \leftarrow \hat{V}^0$
  mask $\leftarrow$ LT($2I_k - 1_k$)
  for $t = 1:T$ do
    $\hat{V} \leftarrow \hat{V} + \eta X_t^\top X_t\hat{V}$
    $Q, R \leftarrow \mathrm{QR}(\hat{V})$
    $S \leftarrow \mathrm{sign}(\mathrm{sign}(\mathrm{diag}(R)) + 0.5)$
    $\hat{V} \leftarrow QS$
  end for
  return $\hat{V}$

where $1_k$ is a $k \times k$ matrix of all ones, LT returns the lower-triangular part of a matrix (including the diagonal), and
$$\mathrm{sign}(x) = \begin{cases} -1 & \text{if } x < 0 \\ 0 & \text{if } x = 0 \\ 1 & \text{if } x > 0. \end{cases}$$
Oja's algorithm is the standard nomenclature for this variant in the machine learning literature [2].
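A minimal NumPy sketch of Algorithm 3, assuming the minibatch stream is supplied as an iterable; the function name is ours.

```python
import numpy as np

def ojas(X_stream, V0, eta=1e-3):
    V = V0.copy()                                # (d, k), columns on unit sphere
    for X_t in X_stream:                         # minibatches X_t of shape (m, d)
        V = V + eta * X_t.T @ (X_t @ V)          # Hebb's rule
        Q, R = np.linalg.qr(V)                   # orthonormalize
        S = np.sign(np.sign(np.diag(R)) + 0.5)   # resolve QR sign ambiguity
        V = Q * S                                # flip column j where R_jj < 0
    return V
```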
In the scaled-down RESNET experiments (see Section D.3), we use Hebb's rule with deflation, also sometimes referred to as Oja's. Deflation is accomplished by directly subtracting out the parent vectors from the dataset. In detail, each batch of data samples, $X_t \in \mathbb{R}^{m\times d}$, is preprocessed as $X_{(i),t} \leftarrow X_t(I - \sum_{j<i}\hat{v}_j\hat{v}_j^\top)$. Then to learn each $\hat{v}_i$, we repeatedly apply Hebb's rule with $M_t = X_{(i),t}^\top X_{(i),t}$ and then set $\hat{v}_i \leftarrow \frac{\hat{v}_i}{||\hat{v}_i||}$ to project $\hat{v}_i$ back to the unit sphere. After several iterations $t$ and once $\hat{v}_i$'s Rayleigh quotient appears to have stabilized, we move on to $\hat{v}_{i+1}$.
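The deflation preprocessing might be sketched as follows (our own helper, not the paper's code):

```python
import numpy as np

def deflate(X_t, parents):
    # X_(i),t <- X_t (I - sum_{j<i} v_j v_j^T): remove learned directions
    P = np.eye(X_t.shape[1])
    for v in parents:
        P -= np.outer(v, v)
    return X_t @ P
```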
B EigenGame Vectorized for CPU
Algorithm 4 presents Algorithm 2 in a vectorized form for implementation on a CPU. LT returns the lower-triangular part of a matrix (including the diagonal). sum($A$, dim = 0) sums over the rows of $A$. norm($A$, dim = 0) returns an array with the $L_2$-norm of each column of $A$. $\odot$ denotes elementwise multiplication. $1_k$ is a square $k\times k$ matrix of all ones. $I_k$ is the $k\times k$ identity matrix. When dividing a matrix by a vector ($A/v$), we assume broadcasting. Specifically, $v$ is interpreted as a row-vector and stacked vertically to match the dimensions of $A$; the two matrices are then divided elementwise.
C Smallest Eigenvectors
EigenGame can be used to recover the $k$ smallest eigenvectors as well. Simply use EigenGame to estimate the top eigenvector with eigenvalue $\Lambda_{11}$. Then run EigenGame on the matrix $M' = \Lambda_{11}I - M$: for the smallest eigenvector $v_d$ of $M$,
$$M'v_d = \Lambda_{11}v_d - Mv_d = (\Lambda_{11} - \Lambda_{dd})v_d,$$
so the smallest eigenvectors of $M$ are the largest eigenvectors of $M'$.

Algorithm 4 EigenGame & EigenGame$^R$ -- Vectorized
Given: data stream $X_t \in \mathbb{R}^{m\times d}$, total iterations $T$, initial $\hat{V}^0 \in S^{d-1}\times\ldots\times S^{d-1}$, step size $\alpha$
  $\hat{V} \leftarrow \hat{V}^0$
  mask $\leftarrow$ LT($2I_k - 1_k$)
  for $t = 1:T$ do
    $R \leftarrow (X_t\hat{V})^\top(X_t\hat{V})$
    $R_{norm} \leftarrow R/\mathrm{diag}(R)$
    $G_s \leftarrow \hat{V}(R_{norm} \odot \text{mask})$
    $\nabla\hat{V} \leftarrow X_t^\top(X_t G_s)$
    $\nabla^R\hat{V} \leftarrow \nabla\hat{V} - \hat{V} \odot \mathrm{sum}(\nabla\hat{V} \odot \hat{V}, \text{dim} = 0)$
    $\hat{V} \leftarrow \hat{V} + \alpha\nabla^R\hat{V}$
    $\hat{V} \leftarrow \hat{V}/\mathrm{norm}(\hat{V}, \text{dim} = 0)$
  end for
  return $\hat{V}$
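For reference, a NumPy transcription of Algorithm 4, assuming the broadcasting conventions above; the riemannian flag selects between EigenGame ($\nabla\hat{V}$) and EigenGame$^R$ ($\nabla^R\hat{V}$), and the function name is ours.

```python
import numpy as np

def eigengame_vectorized(X_stream, V0, alpha=1e-3, riemannian=True):
    d, k = V0.shape
    V = V0.copy()
    mask = np.tril(2 * np.eye(k) - np.ones((k, k)))      # LT(2 I_k - 1_k)
    for X_t in X_stream:
        XV = X_t @ V                                     # (m, k)
        R = XV.T @ XV
        R_norm = R / np.diag(R)                          # divide column j by R_jj
        G_s = V @ (R_norm * mask)
        grad = X_t.T @ (X_t @ G_s)
        if riemannian:
            grad = grad - V * np.sum(grad * V, axis=0)   # tangent projection
        V = V + alpha * grad
        V = V / np.linalg.norm(V, axis=0)                # retraction per column
    return V
```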
D Frequent Directions
A reviewer from a previous submission of this work requested a comparison and discussion with Frequent Directions [21], another decentralized subspace-error minimizing k-PCA algorithm. Frequent Directions (FD) is a streaming algorithm that maintains an overcomplete sketch matrix with the goal of capturing the subspace of maximal variance within the span of its vectors. Each step of FD operates by first replacing a row of the sketch matrix with a single data sample. It then runs SVD on the sketch matrix and uses the resulting decomposition to construct a new sketch. Note that FD relies on SVD as a core inner step. In theory, EigenGame could replace SVD, however, we do not explore that direction here.
D.1 Recovering Principal Components from Principal Subspace
FD returns a sketch $B = \hat{V}^\top \in \mathbb{R}^{2l\times d}$ where $l \geq k$. The rows of $B$ are not principal components, but they should approximate the top-$k$ subspace of the dataset. To recover approximate principal components, the optimal rotation of the vectors can be computed with $Q \leftarrow \mathrm{SVD}(XB^\top)$. This can be shown by inspecting $R$ (as defined in Section 3) with rotated vectors:
$$(\hat{V}Q)^\top M(\hat{V}Q) = Q^\top\hat{V}^\top M\hat{V}Q = Q^\top(X\hat{V})^\top(X\hat{V})Q = Q^\top M'Q. \quad (9)$$
By inspection, the problem of computing the optimal $Q$ reduces to computing the eigenvectors of $M' = (X\hat{V})^\top(X\hat{V}) \in \mathbb{R}^{k\times k}$. This requires projecting the dataset into the principal subspace, $X\hat{V}$, to compute $M'$; however, this is typically a desired step anyway when performing PCA.
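A sketch of this recovery step in NumPy, under the assumptions above; the function name is ours, and the re-normalization of the recovered columns is our addition for numerical hygiene.

```python
import numpy as np

def recover_components(X, B, k):
    B = B / np.linalg.norm(B, axis=1, keepdims=True)   # normalize sketch rows
    Y = X @ B.T                                        # projected dataset
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)   # right-singular vectors
    Q = Vt.T                                           # optimal rotation, Eq. (9)
    V_hat = B.T @ Q                                    # rotated sketch vectors
    rayleigh = np.sum((Y @ Q) ** 2, axis=0)            # diag of (YQ)^T (YQ)
    V_hat = V_hat / np.linalg.norm(V_hat, axis=0)      # unit-norm columns
    return V_hat[:, :k], rayleigh[:k]                  # SVD order = descending
```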
D.2 Complexity Analysis
We base our analysis on Section 3.1 of [21], which discusses parallelizing FD. Let $b$ be the number of shards to split the original dataset $X \in \mathbb{R}^{n\times d}$ into, each shard being in $\mathbb{R}^{\frac{n}{b}\times d}$. Let $k$ be the number of principal components sought. Finally, let $l = k + \frac{1}{\epsilon}$ be the sketch size, where $\epsilon$ is a desired tolerance on the Frobenius norm of the subspace approximation error.
The runtime of FD is $\mathcal{O}(nld)$; call this $Anld$ for some constant $A$. To decentralize FD, [21] instructs to:

1. Split $X$ into $b$ shards and run FD on each individually in parallel.
   • total runtime: $A(\frac{n}{b})ld = Anld \cdot \frac{1}{b}$
   • output: $b$ sketches ($B_i \in \mathbb{R}^{2l\times d}$)
2. Merge the sketches and run FD on the merged sketch to produce sketch $B$.
   • total runtime: $A(2lb)ld = Anld \cdot \frac{2bl}{n}$
   • output: 1 sketch ($B \in \mathbb{R}^{2l\times d}$)

Finally, normalize the rows of $B$, project the dataset $Y \leftarrow XB^\top$, compute the right-singular vectors of the projected dataset, $Q \in \mathbb{R}^{2l\times2l} \leftarrow \mathrm{SVD}(Y)$, compute $\hat{V} \leftarrow B^\top Q$, and compute the corresponding Rayleigh quotients $\hat{V}^\top M\hat{V} = (YQ)^\top(YQ)$ to determine the top-$k$ eigenvectors with error within the desired tolerance. We assume this final step takes negligible runtime because we assume $2l \ll d$; however, for datasets with many samples (large $n$), this step could be non-negligible without further approximation.
Using the runtimes listed above, we can determine that the potential runtime multiplier from decentralization is $(\frac{1}{b} + \frac{2bl}{n})$, which is convex in $b$. If we minimize this w.r.t. $b$ for the optimal number of shards, we find $b^* = \sqrt{\frac{n}{2l}}$. Plugging this back in gives an optimal runtime multiplier of $2\sqrt{\frac{2l}{n}}$. The analysis above only considers one recursive step.
Step 1) can be decentralized as well. For simplicity, we assume the computation is dominated by Step 2), the merge step. Note these relaxations result in a lower bound on FD runtime, i.e., they favor FD in a comparison with EigenGame.
D.3 Small ImageNet Experiments
Consider running on a scaled-down RESNET-50 experiment which has approximately 1.2M images ($n = 1.2\times10^6$, 24TB) and searching for the top-25 eigenvectors ($k = 25$). Using a modest $\epsilon = \frac{0.25}{k}$ implies $l = 5k = 125$ with optimal shard count $b^* \approx 70$. Therefore, running FD on $\frac{n}{b}$ samples with a sketch size of 125 should give a rough lower bound on the runtime of an optimally decentralized FD implementation. The runtime obtained was 9 hours for FD vs. 2 hours for EigenGame, which actually processes the full dataset 3 times. The reason we run FD on a scaled-down RESNET-50 experiment as opposed to the RESNET-200 is that the algorithm requires a final SVD step to recover the actual eigenvectors, and we were not able to run SVD on a sketch of size $k \times d$ where $d = 20\times10^6$ for the full-scale experiment. That is to say, FD is not applicable in this extremely large data regime. In contrast, EigenGame handles this setting without modification.
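Plugging the numbers above into the formulas from Section D.2 gives a quick check (a toy computation of ours, not from the paper):

```python
import numpy as np

n, k = 1.2e6, 25
l = 5 * k                              # sketch size with eps = 0.25 / k
b_star = np.sqrt(n / (2 * l))          # minimizes 1/b + 2*b*l/n
multiplier = 2 * np.sqrt(2 * l / n)    # runtime multiplier at b = b_star
print(round(b_star), multiplier)       # ~69-70 shards, ~0.029 of serial runtime
```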
To obtain an approximate "ground truth" solution for the principal components, we run Oja's algorithm with a low learning rate and a batch size of 128 for 3 epochs to extract the first eigenvector. We find successive eigenvectors using deflation. By running each step for many iterations and monitoring the convergence of the Rayleigh quotient (eigenvalue) $\hat{v}_i^\top M\hat{v}_i$, we can control the quality of the recovered eigenvectors. This is the simplest and most reliable approach to creating ground truth on a problem where no solution already exists. See Section A.1 for further details.
E Gradient Bias
Figure 8 shows the performance of EigenGame degrades in the low batch size regime. This is expected because we use the same minibatch for all inner products in the gradient, which contains products and ratios of random variables. GHA, on the other hand, is linear in the matrix $M$ and as such is naturally unbiased. However, GHA does not appear to readily extend to more general function approximators, whereas EigenGame should. Instead we look to reduce the bias of EigenGame gradients using larger batch sizes (current hardware easily supports batches of 1024 for MNIST and 128 for IMAGENET). Further reducing the bias is left to future work.
F To project or not to project?
Projecting the update direction onto the unit sphere, as suggested by Riemannian optimization theory, can result in much larger update steps. This effect is due to the composition of the retraction ($z \leftarrow z/||z||$) and the update step ($z \leftarrow z + \Delta z$). Omitting the projection can actually mimic modulating the learning rate, decaying it near an equilibrium and improving stability.

Figure 9: (a) When $\hat{v}_i$ is near the optimum of its utility and its gradient is nearly orthogonal to the sphere, pointing directly away from the center (at $90°$), the combination of updating using the projected gradient ($\nabla^R$) and the retraction can result in a large update, possibly moving $\hat{v}_i$ away from the optimum. (b) Diagram presenting Riemannian optimization terminology. The retraction is not a projection in general, although our specific choice appears that way for the sphere. A retraction applied at $\hat{v}_i$ takes as input a scaled projected gradient and returns a vector on the manifold: $\hat{v}_i \leftarrow R_{\hat{v}_i}(\alpha\nabla^R)$.
G Theoretical comparison with GHA
Proposition G.1. When the first $i-1$ eigenvectors have been learned exactly, GHA on $\hat{v}_i$ is equivalent to projecting the first term in $\nabla_{\hat{v}_i}u_i$ onto the sphere, but omitting to project the second set of penalty terms.
Proof. The GHA update is
$$\Delta\hat{v}_i = 2\Big[M\hat{v}_i - (\hat{v}_i^\top M\hat{v}_i)\hat{v}_i - \sum_{j<i}(\hat{v}_i^\top M\hat{v}_j)\hat{v}_j\Big]. \quad (10)$$
Plugging $v_{j<i}$ in for $\hat{v}_{j<i}$ in the GHA update, we find
$$\Delta_i = 2\Big[M\hat{v}_i - (\hat{v}_i^\top M\hat{v}_i)\hat{v}_i - \sum_{j<i}(\hat{v}_i^\top Mv_j)v_j\Big] \quad (11)$$
$$= 2\Big[M\hat{v}_i - (\hat{v}_i^\top M\hat{v}_i)\hat{v}_i - \sum_{j<i}\Lambda_{jj}(\hat{v}_i^\top v_j)v_j\Big]. \quad (12)$$
Likewise, for the gradient with the first term projected onto the tangent space of the sphere:
$$2\Big[(I - \hat{v}_i\hat{v}_i^\top)M\hat{v}_i - M\sum_{j<i}\frac{\hat{v}_i^\top Mv_j}{v_j^\top Mv_j}v_j\Big] = 2\Big[(I - \hat{v}_i\hat{v}_i^\top)M\hat{v}_i - M\sum_{j<i}(\hat{v}_i^\top v_j)v_j\Big] \quad (13)$$
$$= 2\Big[M\hat{v}_i - (\hat{v}_i^\top M\hat{v}_i)\hat{v}_i - \sum_{j<i}\Lambda_{jj}(\hat{v}_i^\top v_j)v_j\Big]. \quad (14)$$
Proposition G.2. The GHA update for $\hat{v}_i$ is not the gradient of any function.

Proof. The Jacobian of $\Delta\hat{v}_i$ w.r.t. $\hat{v}_i$ is
$$\mathrm{Jac}(\Delta\hat{v}_i) = 2\Big[M - (\hat{v}_i^\top M\hat{v}_i)I - 2\hat{v}_i\hat{v}_i^\top M - \sum_{j<i}\hat{v}_j\hat{v}_j^\top M\Big]. \quad (15)$$
The sum of the $\hat{v}\hat{v}^\top M$ terms is not, in general, symmetric; therefore, the Jacobian is not symmetric. The Jacobian of a gradient is the Hessian, and the Hessian of a function is necessarily symmetric; therefore, the GHA update is not the gradient of any function.
G.1 Design Decisions
We made a number of algorithmic design decisions that led us to the proposed algorithm. The first to note is that a naive utility that simply subtracts off $\sum_{j<i}\langle\hat{v}_i, \hat{v}_j\rangle$ will not solve PCA. This is because large $\langle\hat{v}_i, M\hat{v}_i\rangle$ (read: eigenvalues) can drown out these penalties. The intuition is that including $M$ in the inner product gives the right boost to create a natural balance among the terms. Next, it is possible to formulate the utilities without normalizing the terms as we did; however, this is harder to analyze and is akin to minimizing $(\mathrm{err})^4$ instead of $(\mathrm{err})^2$, which generally has better convergence properties near optima. Also, while updates formed using the standard Euclidean Gram-Schmidt procedure will solve the PCA problem, they are not the gradients of any utility function. Lastly, our formulation consists entirely of generalized inner products: $\langle\hat{v}_i, M\hat{v}_j\rangle = \langle X\hat{v}_i, X\hat{v}_j\rangle$. Each $X\hat{v}_i$ can be thought of as a shallow function approximator with weights $\hat{v}_i$. This means that our formulation is readily extended to more general function approximation, i.e., $X\hat{v}_i \rightarrow f_i(X)$. Note that any formulation that operates on $\langle\hat{v}_i, \hat{v}_j\rangle$ instead is not easily generalized.
H Nash Proof
Let $\hat{V}$ be a matrix of arbitrary unit-length column vectors ($\hat{v}_j$) and let $M$ (symmetric) be diagonalized as $U\Lambda U^\top$ with $U$ a unitary matrix. Then,
$$R \stackrel{def}{=} \hat{V}^\top M\hat{V} = \hat{V}^\top U\Lambda U^\top\hat{V} = (U^\top\hat{V})^\top\Lambda(U^\top\hat{V}) = Z^\top\Lambda Z \quad (16)$$
where $Z$ is also a matrix of unit-length column vectors because unitary matrices preserve inner products ($\langle U^\top\hat{v}_i, U^\top\hat{v}_i\rangle = \hat{v}_i^\top UU^\top\hat{v}_i = \hat{v}_i^\top\hat{v}_i = 1$). Therefore, rather than considering the action of an arbitrary matrix $\hat{V}$ on $M$, we can consider the action of an arbitrary matrix $Z$ on $\Lambda$. This simplifies the analysis.
In light of this reduction, Equation (22) of Theorem H.1 can be rewritten as
$$u_i(\hat{v}_i|\hat{v}_{j<i}) = w^\top\Lambda_{jj\geq ii}w \quad (17)$$
$$= \hat{v}_i^\top\Lambda_{jj\geq ii}\hat{v}_i \quad (18)$$
because $V$ is the identity w.l.o.g. Therefore, player $i$'s problem is simply to find the maximum eigenvector of a transformed matrix $\Lambda_{jj\geq ii}$, i.e., $\Lambda$ with the first $i-1$ eigenvalues removed.

Theorem H.1 (PCA Solution is the Unique strict-Nash Equilibrium). Assume that the top-$k$ eigenvalues of $X^\top X$ are distinct. Then the top-$k$ eigenvectors form the unique strict-Nash equilibrium of the proposed game in Equation (6).
Proof. In what follows, let $p, q \in \{1, \ldots, d\}$ and $i \in \{1, \ldots, k\}$. We will prove the optimality of $v_i$ by induction. Clearly, $v_1$ is the optimum of $u_1$ because $u_1 = \langle v_1, Mv_1\rangle = \frac{\langle v_1, Mv_1\rangle}{\langle v_1, v_1\rangle} = \Lambda_{11}$ is the Rayleigh quotient, which is known to be maximized by the maximal eigenvalue [32]. Now, consider $\hat{v}_i = \sum_{p=1}^{d}w_pv_p$ as a linear combination of the true eigenvectors. To ensure $||\hat{v}_i|| = 1$, we require $||w|| = 1$. Then,
$$u_i(\hat{v}_i|v_{j<i}) = \hat{v}_i^\top M\hat{v}_i - \sum_{j<i}\frac{(\hat{v}_i^\top Mv_j)^2}{v_j^\top Mv_j} = \hat{v}_i^\top M\hat{v}_i - \sum_{j<i}\frac{(\hat{v}_i^\top Mv_j)^2}{\Lambda_{jj}} \quad (19)$$
$$= \sum_p\sum_q w_pw_q\,v_p^\top Mv_q - \sum_{j<i}\Big(\sum_p w_pv_p^\top Mv_j\Big)^2/\Lambda_{jj} \quad (20)$$
$$= \sum_p\sum_q w_pw_q\Lambda_{qq}\,v_p^\top v_q - \sum_{j<i}\Big(\sum_p w_p\Lambda_{jj}v_p^\top v_j\Big)^2/\Lambda_{jj} \quad (21)$$
$$= \sum_q w_q^2\Lambda_{qq} - \sum_{j<i}\Lambda_{jj}w_j^2 = \sum_{p\geq i}\Lambda_{pp}z_p \quad (22)$$
where $z_p = w_p^2$ and $z \in \Delta^{d-1}$, which is a linear optimization problem over the simplex. For distinct $\Lambda_{pp}$, $z^* = \arg\max(\Lambda_{pp\geq ii}) = e_i$ is unique. Assume each player $i$ plays $e_i$. Any player $j$ that unilaterally deviates from $e_j$ strictly decreases their utility; therefore, the Nash is unique up to a sign change due to $z^* = e_i = w_i^2$. This is expected as both $v_i$ and $-v_i$ are principal components.
I Without the Hierarchy
In Section 3, we defined utilities to respect the natural hierarchy of eigenvectors sorted by eigenvalue and mentioned that this eased analysis. Here, we provide further detail as to the difficulty of analyzing the game without the hierarchy. Consider the following alternative definition of the utilities:
$$u_i(\hat{v}_i|\hat{v}_{-i}) = \hat{v}_i^\top M\hat{v}_i - \sum_{j\neq i}\frac{(\hat{v}_i^\top M\hat{v}_j)^2}{\hat{v}_j^\top M\hat{v}_j} \quad (23)$$
where the sum is now over all $j \neq i$ instead of $j < i$ as in Equation (6). With this form, the game is now symmetric across all players $i$. Despite the symmetry of the game, we can easily rule out the existence of a symmetric Nash.

Proposition I.1. The EigenGame defined using the symmetric utilities in Equation (23) does not contain a symmetric Nash equilibrium (assuming $k \geq 2$ and $\mathrm{rank}(M) \geq 2$).
Proof by Contradiction. Assume a symmetric Nash exists, i.e., $\hat{v}_i = \hat{v}_j$ for all $i, j$. The utility of a symmetric Nash using Equation (23) is
$$u_i(\hat{v}_i|\hat{v}_{-i}) = (1 - (n-1))(\hat{v}_i^\top M\hat{v}_i) = (2 - n)(\hat{v}_i^\top M\hat{v}_i) \leq 0. \quad (24)$$
Consider a unilateral deviation of $\hat{v}_i$ to a direction orthogonal to $\hat{v}_i$, i.e., $\hat{v}_\perp \perp \hat{v}_i$, such that
$$u_i(\hat{v}_\perp, \hat{v}_{-i}) = \hat{v}_\perp^\top M\hat{v}_\perp > 0. \quad (25)$$
This utility is positive because $\mathrm{rank}(M) \geq 2$ and is therefore always greater than the supposed Nash utility. Therefore, there is no symmetric Nash.
We can also prove that the true PCA solution is a Nash of this version of EigenGame.

Proposition I.2. The top-$k$ eigenvectors of $M$ form a strict-Nash equilibrium of the EigenGame defined using the symmetric utilities in Equation (23) (assuming $\mathrm{rank}(M) \geq k$).
Proof. Let $\hat{v}_i = v_i$. We will assume this standard ordering; however, the proof follows through for any permutation of the eigenvectors. Clearly, the largest eigenvector is a best response to the spectrum because the penalty term (second term in Equation (23)) cannot be decreased below zero and the Rayleigh term (first term) is maximal, i.e., $v_1 = \arg\max_{\hat{v}_1}u_1(\hat{v}_1, v_{-1})$. So assume $v_i$ is another eigenvector and consider representing $\hat{v}_i$ as $\hat{v}_i = \sum_{p=1}^{d}w_pv_p$ as before in Section H. Repeating those same steps, we find
$$u_i(\hat{v}_i, v_{-i}) = \sum_q w_q^2\Lambda_{qq} - \sum_{j\neq i}\Lambda_{jj}w_j^2 = \Lambda_{ii}z_i \quad (26)$$
where $z_k = w_k^2$, $z \in \Delta^{d-1}$. Assuming $\Lambda_{ii} > 0$, this objective is uniquely maximized for $z_i = 1$ and $z_k = 0$ for all $k \neq i$. Therefore, $v_i = \arg\max_{\hat{v}_i}u_i(\hat{v}_i, v_{-i})$.
However, we were unable to prove that it is the only Nash. It is possible that other Nash equilibria exist. Instead of focusing on determining whether a second Nash equilibrium exists (which is NP-hard [15,23]), we learned through experiments that the EigenGame variant that incorporates knowledge of the hierarchy is much more performant. We leave determining the uniqueness of the PCA solution for the less performant variant as an academic exercise.
J Error Propagation
J.1 Generalities
Notation. We can parameterize a vector on the sphere using the Riemannian exponential map, Exp, applied to a vector deviation from an anchor point. Formally, let $\hat{v}_j = \mathrm{Exp}_{v_j}(\theta_j\Delta_j) = \cos(\theta_j)v_j + \sin(\theta_j)\Delta_j$ where $v_j$ is the $j$th largest eigenvector and $\Delta_j \in S^{d-1}$ is such that $\langle\Delta_j, v_j\rangle = 0$. Therefore, $\theta_j$ measures how far $\hat{v}_j$ deviates from $v_j$ in radians, and $\Delta_j$ denotes the direction of deviation.
Let $\Lambda_{ii}$ denote the $i$th largest eigenvalue and $v_i$ the associated eigenvector. Also define the eigenvalue gap $g_i = \Lambda_{ii} - \Lambda_{i+1,i+1}$. Finally, let $\kappa_i = \frac{\Lambda_{11}}{\Lambda_{ii}}$ denote the $i$th condition number. The following Lemma decomposes the utility of a player when the parents have learned the preceding eigenvectors perfectly.

Lemma J.1. Let $\hat{v}_i = \cos(\theta_i)v_i + \sin(\theta_i)\Delta_i$ without loss of generality. Then
$$u_i(\hat{v}_i, v_{j<i}) = u_i(v_i, v_{j<i}) - \sin^2(\theta_i)\Big[\Lambda_{ii} - \sum_{l>i}z_l\Lambda_{ll}\Big]. \quad (27)$$
Proof. Note that $\Delta_i$ can also be decomposed as $\Delta_i = \sum_{l=1}^{d}w_lv_l$, $||w|| = 1$, without loss of generality and that, by Theorem H.1, this implies $u_i(\Delta_i, v_{j<i}) = \sum_{l\geq i}z_l\Lambda_{ll}$. This can be simplified further because $\langle\Delta_i, v_i\rangle = 0$ by definition, which implies that $z_i = 0$. Therefore, more precisely, $u_i(\Delta_i, v_{j<i}) = \sum_{l>i}z_l\Lambda_{ll}$. Continuing, we find
$$u_i(\hat{v}_i, v_{j<i}) = \langle\hat{v}_i, \Lambda\hat{v}_i\rangle - \sum_{j<i}\frac{\langle\hat{v}_i, \Lambda v_j\rangle^2}{\langle v_j, \Lambda v_j\rangle} \quad (28)$$
$$= \langle\hat{v}_i, \Lambda\hat{v}_i\rangle - \sum_{j<i}\Lambda_{jj}\langle\hat{v}_i, v_j\rangle^2 \quad (29)$$
$$= \big(\cos^2(\theta_i)\Lambda_{ii} + \sin^2(\theta_i)\langle\Delta_i, \Lambda\Delta_i\rangle\big) - \sum_{j<i}\Lambda_{jj}\langle\cos(\theta_i)v_i + \sin(\theta_i)\Delta_i, v_j\rangle^2 \quad (30)$$
$$= \big(\cos^2(\theta_i)\Lambda_{ii} + \sin^2(\theta_i)\langle\Delta_i, \Lambda\Delta_i\rangle\big) - \sum_{j<i}\Lambda_{jj}\sin^2(\theta_i)\langle\Delta_i, v_j\rangle^2 \quad (31)$$
$$= \Lambda_{ii} - \sin^2(\theta_i)\Lambda_{ii} + \sin^2(\theta_i)\Big[\langle\Delta_i, \Lambda\Delta_i\rangle - \sum_{j<i}\Lambda_{jj}\langle\Delta_i, v_j\rangle^2\Big] \quad (32)$$
$$= u_i(v_i, v_{j<i}) - \sin^2(\theta_i)\big[\Lambda_{ii} - u_i(\Delta_i, v_{j<i})\big] \quad (33)$$
$$= u_i(v_i, v_{j<i}) - \sin^2(\theta_i)\Big[\Lambda_{ii} - \sum_{l>i}z_l\Lambda_{ll}\Big]. \quad \text{[Theorem H.1]} \quad (34)$$
J.2 Summary of Error Propagation Results
Player $i$'s utility is sinusoidal in the angular deviation $\theta_i$ from the optimum. The amplitude of the sinusoid varies with the direction of the angular deviation along the sphere and is dependent on the accuracy of players $j < i$. In the special case where players $j < i$ have learned the top-$(i-1)$ eigenvectors exactly, player $i$'s utility simplifies (see Lemma J.1) to
$$u_i(\hat{v}_i, v_{j<i}) = \Lambda_{ii} - \sin^2(\theta_i)\Big[\Lambda_{ii} - \sum_{l>i}z_l\Lambda_{ll}\Big]. \quad (35)$$
Note that $\sin^2$ has period $\pi$ as opposed to $2\pi$, which simply reflects the fact that $v_i$ and $-v_i$ are both eigenvectors.
The angular distance between $v_i$ and the maximizer of player $i$'s utility with approximate parents has $\tan^{-1}$ dependence (i.e., a soft step-function; see Lemma J.5). Figure 10 plots the dependence for a synthetic problem. This dependence reveals that there is an error threshold that players $j < i$ must fall below in order for player $i$ to accurately learn the $i$-th eigenvector.

Figure 10: Example 1 demonstrates that the angular error (x-axis) in the learned parents $\hat{v}_{j<i}$ must fall below a threshold (e.g., $\approx 18°$ here) in order for the maximizer of player $i$'s utility to lie near the true $i$th eigenvector (y-axis). The matrix $M$ for this example has a condition number $\kappa_i = \frac{\Lambda_{11}}{\Lambda_{ii}} = 10$.
J.3 Theorem and Proofs
In Theorem J.2, we prove that, given parents close enough to their corresponding true eigenvectors, the angular deviation of a local maximizer of a child's utility from the child's true eigenvector is below a derived threshold. In other words, given accurate parents, a child can successfully proceed to approximate its corresponding eigenvector (its utility is well posed). We prove this theorem in several steps.
First we show in Lemma J.3 that the child's utility function can be written as a composition of sinusoids with dependence on the angular deviation from the child's true eigenvector. The amplitude of the sinusoid depends on the directions in which the child and parents have deviated from their true eigenvectors along their spheres. We then simplify the composition of sinusoids to a single sinusoid in Lemma J.4. Any local max of a sinusoid is also a global max. Therefore, to upper bound the angular deviation of the child's local maximizer from its true corresponding eigenvector, we consider the worst case direction for the maximizer to deviate from the true eigenvector.

In Lemma J.5, we give a closed form solution for the angular deviation of a maximizer of a child's utility given any parents and deviation directions. This dependence is given by the arctan function, which resembles a soft step function with a linear regime for small angular deviations, followed by a step, and then another linear regime for large angular deviations. The argument of the arctan is a ratio of terms, each with dependence on the parents' angular deviations and directions of deviation. We establish two minor lemmas, Lemma J.6 and Lemma J.7, to help bound the denominator in Lemma J.8. We then tighten the bounds on the ratio assuming parents with error below a certain threshold ("left" of the step) in Lemmas J.9, J.10, and J.11. Finally, using these bounds on the argument to the arctan, we are able to bound the angular deviation of any maximizer of the child's utility in Theorem J.2 given any deviation direction for the child or parents.
Theorem J.2. Assume it is given that $|\theta_j| \leq \frac{c_ig_i}{(i-1)\Lambda_{11}} \leq \frac{1}{2}$ for all $j < i$ with $0 \leq c_i \leq \frac{1}{16}$. Then
$$|\theta_i^*| = \big|\arg\max_{\theta_i}u_i(\hat{v}_i(\theta_i, \Delta_i), \hat{v}_{j<i})\big| \leq 8c_i. \quad (36)$$
Proof. By Lemma J.11, $A < 0$ for $c_i < \frac{1}{8}$. Therefore, $|\theta_i^*| = \frac{1}{2}\tan^{-1}|\frac{B}{A}|$ by Lemma J.5. Also, note that for $z \leq \frac{1}{2}$, $\tan^{-1}(|z|) \leq |z|$. Setting $c_i \leq \frac{1}{16}$ ensures $z = |\frac{B}{A}| \leq \frac{1}{2}$. Then,
$$|\theta_i^*| = \frac{1}{2}\tan^{-1}\Big|\frac{B}{A}\Big| \leq \frac{1}{2}\Big|\frac{B}{A}\Big| \stackrel{\text{LJ.11}}{\leq} \frac{1}{2}\cdot\frac{8c_i}{1 - 8c_i} \leq 8c_i. \quad (37)$$
Lemma J.3. Let $\hat{v}_j = \cos(\theta_j)v_j + \sin(\theta_j)\Delta_j$ for all $j \leq i$ without loss of generality. Then
$$u_i(\hat{v}_i, \hat{v}_{j<i}) = A(\theta_j, \Delta_j, \Delta_i)\sin^2(\theta_i) - B(\theta_j, \Delta_j, \Delta_i)\frac{\sin(2\theta_i)}{2} + C(\theta_j, \Delta_j, \Delta_i) \quad (38)$$
where, writing $D_j = \Lambda_{jj}\cos^2(\theta_j) + ||\Delta_j||^2_\Lambda\sin^2(\theta_j)$ for the common denominator,
$$A(\theta_j, \Delta_j, \Delta_i) = ||\Delta_i||^2_\Lambda - \Lambda_{ii} \quad (39)$$
$$- \sum_{j<i}\frac{\Lambda^2_{jj}\cos^2(\theta_j)\langle\Delta_i, v_j\rangle^2 - \Lambda^2_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle^2 + \sin^2(\theta_j)\langle\Delta_i, \Lambda\Delta_j\rangle^2}{D_j} \quad (40)$$
$$- \sum_{j<i}\frac{\Lambda_{jj}\sin(2\theta_j)\langle\Delta_i, v_j\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (41)$$
$$B(\theta_j, \Delta_j, \Delta_i) = \sum_{j<i}\frac{\Lambda_{ii}\Lambda_{jj}\sin(2\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, v_j\rangle + 2\Lambda_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (42)$$
$$C(\theta_j, \Delta_j, \Delta_i) = \Lambda_{ii} - \sum_{j<i}\frac{\Lambda^2_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle^2}{D_j}. \quad (43)$$
We abbreviate the above to $A$, $B$, $C$ to avoid clutter in all upcoming statements and proofs. These functions depend on all variables except $\theta_i$.
Proof. Note that the true eigenvectors are orthogonal, so in what follows, any $\langle v_i, v_j\rangle = 0$ where $j \neq i$. Also, recall that $2\sin(z)\cos(z) = \sin(2z)$. We highlight some but not all such simplifications. Finally, we recognize $\langle\Delta_i, \Lambda\Delta_i\rangle = ||\Delta_i||^2_\Lambda$ as the generalized norm of $\Delta_i$, i.e., the Mahalanobis distance from the origin.
$$u_i(\hat{v}_i, \hat{v}_{j<i}) \quad (44)$$
$$= \langle\hat{v}_i, \Lambda\hat{v}_i\rangle - \sum_{j<i}\frac{\langle\hat{v}_i, \Lambda\hat{v}_j\rangle^2}{\langle\hat{v}_j, \Lambda\hat{v}_j\rangle} \quad (45)$$
$$= \langle\cos(\theta_i)v_i + \sin(\theta_i)\Delta_i,\ \Lambda(\cos(\theta_i)v_i + \sin(\theta_i)\Delta_i)\rangle - \sum_{j<i}\frac{\langle\cos(\theta_i)v_i + \sin(\theta_i)\Delta_i,\ \Lambda(\cos(\theta_j)v_j + \sin(\theta_j)\Delta_j)\rangle^2}{\langle\cos(\theta_j)v_j + \sin(\theta_j)\Delta_j,\ \Lambda(\cos(\theta_j)v_j + \sin(\theta_j)\Delta_j)\rangle} \quad (46)$$
$$= \Lambda_{ii}\cos^2(\theta_i) + \langle\Delta_i, \Lambda\Delta_i\rangle\sin^2(\theta_i) - \sum_{j<i}\frac{\langle\cos(\theta_i)v_i + \sin(\theta_i)\Delta_i,\ \Lambda(\cos(\theta_j)v_j + \sin(\theta_j)\Delta_j)\rangle^2}{D_j} \quad (47)$$
$$= \Lambda_{ii}\cos^2(\theta_i) + ||\Delta_i||^2_\Lambda\sin^2(\theta_i) - \sum_{j<i}\frac{\big[\Lambda_{jj}\sin(\theta_i)\cos(\theta_j)\langle\Delta_i, v_j\rangle + \Lambda_{ii}\sin(\theta_j)\cos(\theta_i)\langle\Delta_j, v_i\rangle + \sin(\theta_i)\sin(\theta_j)\langle\Delta_i, \Lambda\Delta_j\rangle\big]^2}{D_j}. \quad (48)$$
Developing the numerator of the fraction, we obtain terms in $\sin$ and in $\sin^2$ that we later regroup to obtain the result:
$$= \Lambda_{ii} - \Lambda_{ii}\sin^2(\theta_i) + ||\Delta_i||^2_\Lambda\sin^2(\theta_i) - \sum_{j<i}\frac{\Lambda^2_{jj}\sin^2(\theta_i)\cos^2(\theta_j)\langle\Delta_i, v_j\rangle^2 + \Lambda^2_{ii}\sin^2(\theta_j)\cos^2(\theta_i)\langle\Delta_j, v_i\rangle^2 + \sin^2(\theta_i)\sin^2(\theta_j)\langle\Delta_i, \Lambda\Delta_j\rangle^2}{D_j} \quad (49)$$
$$- 2\sum_{j<i}\frac{\Lambda_{ii}\Lambda_{jj}\sin(\theta_i)\sin(\theta_j)\cos(\theta_i)\cos(\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, v_j\rangle}{D_j} \quad (50)$$
$$- 2\sum_{j<i}\frac{\Lambda_{jj}\sin^2(\theta_i)\sin(\theta_j)\cos(\theta_j)\langle\Delta_i, v_j\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (51)$$
$$- 2\sum_{j<i}\frac{\Lambda_{ii}\sin(\theta_i)\cos(\theta_i)\sin^2(\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (52)$$
$$= \Lambda_{ii} - \Lambda_{ii}\sin^2(\theta_i) + ||\Delta_i||^2_\Lambda\sin^2(\theta_i) - \sum_{j<i}\frac{\Lambda^2_{jj}\sin^2(\theta_i)\cos^2(\theta_j)\langle\Delta_i, v_j\rangle^2 + \Lambda^2_{ii}\sin^2(\theta_j)\cos^2(\theta_i)\langle\Delta_j, v_i\rangle^2 + \sin^2(\theta_i)\sin^2(\theta_j)\langle\Delta_i, \Lambda\Delta_j\rangle^2}{D_j} \quad (53)$$
$$- \frac{1}{2}\sum_{j<i}\frac{\Lambda_{ii}\Lambda_{jj}\sin(2\theta_i)\sin(2\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, v_j\rangle}{D_j} \quad (54)$$
$$- \sum_{j<i}\frac{\Lambda_{jj}\sin^2(\theta_i)\sin(2\theta_j)\langle\Delta_i, v_j\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (55)$$
$$- \sum_{j<i}\frac{\Lambda_{ii}\sin(2\theta_i)\sin^2(\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j}. \quad (56)$$
Collecting terms, we find
$$u_i(\hat{v}_i, \hat{v}_{j<i}) \quad (57)$$
$$= \sin^2(\theta_i)\Big[||\Delta_i||^2_\Lambda - \Lambda_{ii} \quad (58)$$
$$- \sum_{j<i}\frac{\Lambda^2_{jj}\cos^2(\theta_j)\langle\Delta_i, v_j\rangle^2 - \Lambda^2_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle^2 + \sin^2(\theta_j)\langle\Delta_i, \Lambda\Delta_j\rangle^2}{D_j} \quad (59)$$
$$- \sum_{j<i}\frac{\Lambda_{jj}\sin(2\theta_j)\langle\Delta_i, v_j\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j}\Big] \quad (60)$$
$$- \frac{\sin(2\theta_i)}{2}\sum_{j<i}\frac{\Lambda_{ii}\Lambda_{jj}\sin(2\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, v_j\rangle + 2\Lambda_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (61)$$
$$+ \Lambda_{ii} - \sum_{j<i}\frac{\Lambda^2_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle^2}{D_j} \quad (62)$$
$$\stackrel{def}{=} A\sin^2(\theta_i) - B\frac{\sin(2\theta_i)}{2} + C. \quad (63)$$
Lemma J.4. The utility function along $\Delta_i$, $\theta_i \mapsto u_i(\hat{v}_i(\theta_i, \Delta_i), \hat{v}_{j<i})$, is sinusoidal with period $\pi$:
$$u_i(\hat{v}_i(\theta_i, \Delta_i), \hat{v}_{j<i}) = \frac{1}{2}\Big[\sqrt{A^2 + B^2}\cos(2\theta_i + \phi) + A + 2C\Big] \quad (64)$$
where $\phi = \tan^{-1}\frac{B}{A}$.
Proof. Starting from Lemma J.3, we find
$$u_i(\hat{v}_i(\theta_i, \Delta_i), \hat{v}_{j<i}) = A\sin^2(\theta_i) - B\frac{\sin(2\theta_i)}{2} + C \quad (65)$$
$$= A\frac{1 - \cos(2\theta_i)}{2} - B\frac{\sin(2\theta_i)}{2} + C \quad (66)$$
$$= \frac{1}{2}\big[-A\cos(2\theta_i) - B\sin(2\theta_i) + A + 2C\big] \quad (67)$$
$$= \frac{1}{2}\big[\sqrt{A^2 + B^2}\cos(2\theta_i + \phi) + A + 2C\big] \quad (68)$$
where $\phi = \tan^{-1}\frac{B}{A}$.
Lemma J.5. The angular deviation, $\theta_i$, of the vector that maximizes the mis-specified objective, $\arg\max_{\theta_i}u_i(\hat{v}_i(\theta_i, \Delta_i), \hat{v}_{j<i})$, is given by
$$|\theta_i^*| = \begin{cases}\frac{1}{2}\tan^{-1}\big|\frac{B}{A}\big| & \text{if } A < 0 \\ \frac{\pi}{4} & \text{if } A = 0 \\ \frac{1}{2}\big(\pi - \tan^{-1}\big|\frac{B}{A}\big|\big) & \text{if } A > 0\end{cases} \quad (69)$$
where A and B are given by Lemma J.3.
Proof. First, we identify the critical points:
$$\frac{\partial}{\partial\theta_i}u_i(\hat{v}_i, \hat{v}_{j<i}) = 2A\sin(\theta_i)\cos(\theta_i) - B\cos(2\theta_i) = 0 \quad (70)$$
$$\Leftrightarrow A\sin(2\theta_i) - B\cos(2\theta_i) = 0 \quad (71)$$
$$\Leftrightarrow \frac{1}{\cos(2\theta_i)}\big[\tan(2\theta_i)A - B\big] = 0 \quad (72)$$
$$\Leftrightarrow \tan(2\theta_i) = \frac{B}{A}. \quad (73)$$
Then we determine maxima vs. minima:
$$\frac{\partial^2}{\partial\theta_i^2}u_i(\hat{v}_i, \hat{v}_{j<i}) = 2\cos(2\theta_i)\big[B\tan(2\theta_i) + A\big] = 2\cos(2\theta_i)\Big[\frac{B^2}{A} + A\Big], \quad (74)$$
therefore, $\mathrm{sign}\big(\frac{\partial^2}{\partial\theta_i^2}u_i\big) = \mathrm{sign}(\cos(2\theta_i))\,\mathrm{sign}(A) < 0$ for $\theta_i$ to be a local maximum. If $A < 0$, then $\theta_i^*$ must lie within $[-\frac{\pi}{4}, \frac{\pi}{4}]$. If $A > 0$, then $\theta_i^*$ must lie within $[-\frac{\pi}{2}, -\frac{\pi}{4}]$ or $[\frac{\pi}{4}, \frac{\pi}{2}]$. By inspection, if $A = 0$, then $u_i$ is maximized at $\theta_i = -\frac{\pi}{4}\mathrm{sign}(B)$. In general, we are interested in the magnitude of $\theta_i$, not its sign.
Lemma J.6. The following relationship is useful for proving Lemma J.8:
$$\frac{b}{a + c} = \frac{b}{a}\Big(1 - \frac{c}{a + c}\Big). \quad (75)$$
Proof. Write
$$\frac{b}{a + c} = \frac{b}{a} + x \quad (76)$$
$$\Rightarrow x = \frac{b}{a + c} - \frac{b}{a} = b\Big(\frac{1}{a + c} - \frac{1}{a}\Big) \quad (77)$$
$$= b\,\frac{a - (a + c)}{a(a + c)} = -\frac{b}{a}\cdot\frac{c}{a + c}. \quad (78)$$

Lemma J.7. If $\langle\Delta_i, v_i\rangle = 0$, then $u_i(\Delta_i, v_{j<i}) \leq \Lambda_{i+1,i+1}$.
Proof. Recall the Nash proof in Appendix H:
$$u_i(\Delta_i, v_{j<i}) = \sum_{p\geq i}\Lambda_{pp}z_p \quad (79)$$
where $z_p = w_p^2$, $\Delta_i = \sum_{p=1}^{d}w_pv_p$, and $z \in \Delta^{d-1}$. The fact that $\langle\Delta_i, v_i\rangle = 0$ implies that $z_i = 0$. Therefore, the utility simplifies to
$$u_i(\Delta_i, v_{j<i}) = \sum_{p\geq i+1}\Lambda_{pp}z_p \quad (80)$$
which is upper bounded by $\Lambda_{i+1,i+1}$.
Lemma J.8. Assume $|\theta_j| \leq \epsilon$ for all $j < i$ (which implies $\sin^2(\theta_j) \leq \epsilon^2$). Then
$$A \leq -g_i + (i-1)(\Lambda_{11} + \Lambda_{ii})\frac{\epsilon^2}{1 - \epsilon^2} + 2(i-1)\Lambda_{11}\frac{\epsilon}{\sqrt{1 - \epsilon^2}}. \quad (81)$$
Proof.
$$A(\theta_{j<i}) = ||\Delta_i||^2_\Lambda - \Lambda_{ii} - \sum_{j<i}\frac{\Lambda^2_{jj}\cos^2(\theta_j)\langle\Delta_i, v_j\rangle^2 - \Lambda^2_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle^2 + \sin^2(\theta_j)\langle\Delta_i, \Lambda\Delta_j\rangle^2}{D_j} - \sum_{j<i}\frac{\Lambda_{jj}\sin(2\theta_j)\langle\Delta_i, v_j\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (82)$$
$$= \Big[||\Delta_i||^2_\Lambda - \sum_{j<i}\frac{\Lambda^2_{jj}\cos^2(\theta_j)\langle\Delta_i, v_j\rangle^2}{D_j}\Big] - \Lambda_{ii} + \sum_{j<i}\frac{\Lambda^2_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle^2 - \sin^2(\theta_j)\langle\Delta_i, \Lambda\Delta_j\rangle^2 - \Lambda_{jj}\sin(2\theta_j)\langle\Delta_i, v_j\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (83)$$
$$= \Big[||\Delta_i||^2_\Lambda - \sum_{j<i}\frac{\Lambda^2_{jj}\cos^2(\theta_j)\langle\Delta_i, v_j\rangle^2}{\Lambda_{jj}\cos^2(\theta_j)}\Big(1 - \frac{||\Delta_j||^2_\Lambda\sin^2(\theta_j)}{D_j}\Big)\Big] - \Lambda_{ii} + \sum_{j<i}\frac{\cdots}{D_j} \quad \text{[Lemma J.6]} \quad (84)$$
$$\leq ||\Delta_i||^2_\Lambda - \sum_{j<i}\Lambda_{jj}\langle\Delta_i, v_j\rangle^2 + \sum_{j<i}||\Delta_j||^2_\Lambda\sin^2(\theta_j)\frac{\langle\Delta_i, v_j\rangle^2}{\cos^2(\theta_j)} - \Lambda_{ii} + \sum_{j<i}\frac{\Lambda^2_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle^2 + 2\Lambda_{jj}\sin(\theta_j)\cos(\theta_j)|\langle\Delta_i, v_j\rangle||\langle\Delta_i, \Lambda\Delta_j\rangle|}{\Lambda_{jj}\cos^2(\theta_j)} \quad (85)$$
$$= u_i(\Delta_i, v_{j<i}) + \sum_{j<i}||\Delta_j||^2_\Lambda\sin^2(\theta_j)\frac{\langle\Delta_i, v_j\rangle^2}{\cos^2(\theta_j)} - \Lambda_{ii} + \sum_{j<i}\frac{\Lambda^2_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle^2 + 2\Lambda_{jj}\sin(\theta_j)\cos(\theta_j)|\langle\Delta_i, v_j\rangle||\langle\Delta_i, \Lambda\Delta_j\rangle|}{\Lambda_{jj}\cos^2(\theta_j)} \quad (86)$$
$$\stackrel{\text{[LJ.7]}}{\leq} \Lambda_{i+1,i+1} - \Lambda_{ii} + \sum_{j<i}||\Delta_j||^2_\Lambda\sin^2(\theta_j)\frac{\overbrace{\langle\Delta_i, v_j\rangle^2}^{\leq 1}}{\cos^2(\theta_j)} + \sum_{j<i}\frac{\Lambda^2_{ii}\sin^2(\theta_j)\overbrace{\langle\Delta_j, v_i\rangle^2}^{\leq 1} + 2\Lambda_{jj}\sin(\theta_j)\cos(\theta_j)|\langle\Delta_i, v_j\rangle||\langle\Delta_i, \Lambda\Delta_j\rangle|}{\Lambda_{jj}\cos^2(\theta_j)} \quad (87)$$
$$\leq \Lambda_{i+1,i+1} - \Lambda_{ii} + \sum_{j<i}\frac{\epsilon^2(\Lambda_{11} + \Lambda_{ii})}{\cos^2(\theta_j)} + \sum_{j<i}\frac{2\Lambda_{jj}\sin(\theta_j)\cos(\theta_j)|\langle\Delta_i, v_j\rangle||\langle\Delta_i, \Lambda\Delta_j\rangle|}{\Lambda_{jj}\cos^2(\theta_j)} \quad (88)$$
$$\leq \Lambda_{i+1,i+1} - \Lambda_{ii} + \sum_{j<i}\frac{\epsilon^2(\Lambda_{11} + \Lambda_{ii})}{\cos^2(\theta_j)} + \sum_{j<i}2\Lambda_{11}\frac{\sin(\theta_j)}{\cos(\theta_j)} \quad (89)$$
$$\leq \Lambda_{i+1,i+1} - \Lambda_{ii} + (i-1)(\Lambda_{11} + \Lambda_{ii})\frac{\epsilon^2}{1 - \epsilon^2} + 2(i-1)\Lambda_{11}\frac{\epsilon}{\sqrt{1 - \epsilon^2}}. \quad (90)$$
Note $\frac{\Lambda^2_{ii}}{\Lambda_{jj}} < \Lambda_{ii}$ because $\Lambda_{ii} < \Lambda_{jj}$ for all $j < i$.

Lemma J.9. Assume $\epsilon^2 \leq \frac{1}{2}$. Then
$$A \leq -g_i + 8(i-1)\Lambda_{11}\epsilon. \quad (91)$$
Proof. Assume $\epsilon^2 \leq \frac{1}{2}$ so that $\frac{\epsilon}{\sqrt{1 - \epsilon^2}} \leq 1$. Then
$$A \leq \Lambda_{i+1,i+1} - \Lambda_{ii} + (i-1)(\Lambda_{11} + \Lambda_{ii})\frac{\epsilon^2}{1 - \epsilon^2} + 2(i-1)\Lambda_{11}\frac{\epsilon}{\sqrt{1 - \epsilon^2}} \quad (92)$$
$$\leq -g_i + (i-1)\frac{\epsilon}{\sqrt{1 - \epsilon^2}}\big(3\Lambda_{11} + \Lambda_{ii}\big) \quad (93)$$
$$\leq -g_i + 4(i-1)\Lambda_{11}\frac{\epsilon}{\sqrt{1 - \epsilon^2}} \quad (94)$$
$$\leq -g_i + 8(i-1)\Lambda_{11}\epsilon. \quad (95)$$
Lemma J.10. Assume $\epsilon^2 \leq \frac{1}{2}$. Then
$$|B| \leq 8(i-1)\Lambda_{ii}\kappa_{i-1}\epsilon. \quad (96)$$
Proof.
$$|B| = \Big|\sum_{j<i}\frac{\Lambda_{ii}\Lambda_{jj}\sin(2\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, v_j\rangle + 2\Lambda_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j}\Big| \quad (97)$$
$$\leq \sum_{j<i}\frac{\Lambda_{ii}\Lambda_{jj}|\sin(2\theta_j)| + 2\Lambda_{ii}\sin^2(\theta_j)\Lambda_{11}}{\Lambda_{jj}\cos^2(\theta_j)} \quad (98)$$
$$= \sum_{j<i}\frac{2\Lambda_{ii}\Lambda_{jj}|\sin(\theta_j)||\cos(\theta_j)| + 2\Lambda_{ii}\sin^2(\theta_j)\Lambda_{11}}{\Lambda_{jj}\cos^2(\theta_j)} \quad (99)$$
$$\leq 2\sum_{j<i}\frac{\Lambda_{ii}\Lambda_{jj}\epsilon + \Lambda_{ii}\epsilon^2\Lambda_{11}}{\Lambda_{jj}(1 - \epsilon^2)} \quad (100)$$
$$= \frac{2\Lambda_{ii}\epsilon}{1 - \epsilon^2}\Big[(i-1) + \epsilon\sum_{j<i}\kappa_j\Big] \quad (101)$$
$$\leq 4\Lambda_{ii}\epsilon\big[(i-1) + \epsilon(i-1)\kappa_{i-1}\big] \quad (102)$$
$$= 4(i-1)\Lambda_{ii}\epsilon\big(1 + \epsilon\kappa_{i-1}\big) \quad (103)$$
$$\leq 4(i-1)\Lambda_{ii}\epsilon\Big(1 + \frac{1}{\sqrt{2}}\kappa_{i-1}\Big) \quad (104)$$
$$\leq 8(i-1)\Lambda_{ii}\kappa_{i-1}\epsilon. \quad (105)$$
Lemma J.11. Let $\epsilon_i = \frac{c_ig_i}{(i-1)\Lambda_{11}}$ with $c_i < \frac{1}{8}$. Then (i) $A \leq 0$, and (ii) $|\frac{B}{A}| \leq \frac{8c_i}{1 - 8c_i}$.

Proof. Plugging $\epsilon_i$ into Lemma J.9, we find
$$A \leq -g_i + 8(i-1)\Lambda_{11}\frac{c_ig_i}{(i-1)\Lambda_{11}} = -g_i + 8c_ig_i = (8c_i - 1)g_i. \quad (106)$$
Since we assumed $c_i < \frac{1}{8}$, this proves (i). Plugging $\epsilon_i$ into Lemma J.10 solves (ii):
$$\text{Equation (106)} \Rightarrow |A| \geq (1 - 8c_i)g_i \quad (107)$$
$$|B| \leq 8c_i(i-1)\Lambda_{ii}\kappa_{i-1}\frac{g_i}{(i-1)\Lambda_{11}} = 8c_ig_i\frac{\Lambda_{ii}}{\Lambda_{i-1,i-1}} \leq 8c_ig_i \quad (108)$$
$$\Rightarrow \Big|\frac{B}{A}\Big| \leq \frac{8c_i}{1 - 8c_i}. \quad (109)$$
Example 1. We construct the following example in order to concretely demonstrate the arctan dependence of a child ($\hat{v}_i$) on a parent ($v_1$ in this case). Let $\Delta_1 = v_i$, $\Delta_i = v_1$, $\Delta_{1<j<i} = v_{i+1}$, and constrain all parents to have error $\sin(\theta_j) = \epsilon$ for all $j < i$. Then the child's optimum has an angular deviation from the true eigenvector direction of
$$|\theta_i^*| = \begin{cases}\frac{1}{2}\tan^{-1}\big|\frac{B}{A}\big| & \text{if } A < 0 \\ \frac{\pi}{4} & \text{if } A = 0 \\ \frac{1}{2}\big(\pi - \tan^{-1}\big|\frac{B}{A}\big|\big) & \text{if } A > 0\end{cases} \quad (110)$$
where $\big|\frac{B}{A}\big| = \frac{2\epsilon\sqrt{1-\epsilon^2}}{|1 - \epsilon^2(\kappa_i + \frac{1}{\kappa_i})|}$.
Proof. Note that $\langle\Delta_i, v_j\rangle$ for $1 < j < i$, $\langle\Delta_{1<j<i}, v_i\rangle$, and $\langle\Delta_i, \Lambda\Delta_j\rangle$ all equal 0 by design, and $\langle\Delta_i, v_1\rangle = \langle\Delta_1, v_i\rangle = 1$. Plugging into Lemma J.3, all elements of the sums vanish for $j > 1$ and only the terms involving $\langle\Delta_1, v_i\rangle$ and $\langle\Delta_i, v_1\rangle$ survive for $j = 1$. We find
$$A = ||\Delta_i||^2_\Lambda - \Lambda_{ii} - \sum_{j<i}\frac{\Lambda^2_{jj}\cos^2(\theta_j)\langle\Delta_i, v_j\rangle^2 - \Lambda^2_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle^2 + \sin^2(\theta_j)\langle\Delta_i, \Lambda\Delta_j\rangle^2}{D_j} - \sum_{j<i}\frac{\Lambda_{jj}\sin(2\theta_j)\langle\Delta_i, v_j\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (111\text{--}113)$$
$$= \Lambda_{11} - \Lambda_{ii} - \frac{\Lambda^2_{11}(1-\epsilon^2) - \Lambda^2_{ii}\epsilon^2}{\Lambda_{11}(1-\epsilon^2) + \Lambda_{11}\epsilon^2} \quad (114)$$
$$= \Lambda_{11} - \Lambda_{ii} - \frac{\Lambda^2_{11}(1-\epsilon^2) - \Lambda^2_{ii}\epsilon^2}{\Lambda_{11}} \quad (115)$$
$$= \Lambda_{11} - \Lambda_{ii} - \Lambda_{11}(1-\epsilon^2) + \frac{\Lambda_{ii}}{\kappa_i}\epsilon^2 \quad (116)$$
$$= -\Lambda_{ii} + \epsilon^2\Big(\Lambda_{11} + \frac{\Lambda_{ii}}{\kappa_i}\Big) \quad (117)$$
and
$$B = \sum_{j<i}\frac{\Lambda_{ii}\Lambda_{jj}\sin(2\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, v_j\rangle + 2\Lambda_{ii}\sin^2(\theta_j)\langle\Delta_j, v_i\rangle\langle\Delta_i, \Lambda\Delta_j\rangle}{D_j} \quad (118)$$
$$= \frac{\Lambda_{ii}\Lambda_{11}\sin(2\theta_1)}{\Lambda_{11}\cos^2(\theta_1) + ||\Delta_1||^2_\Lambda\sin^2(\theta_1)} \quad (119)$$
$$= \frac{2\Lambda_{ii}\Lambda_{11}\epsilon\sqrt{1-\epsilon^2}}{\Lambda_{11}(1-\epsilon^2) + \Lambda_{11}\epsilon^2} \quad (120)$$
$$= 2\Lambda_{ii}\epsilon\sqrt{1-\epsilon^2}. \quad (121)$$
Then
$$\Big|\frac{B}{A}\Big| = \frac{2\Lambda_{ii}\epsilon\sqrt{1-\epsilon^2}}{\big|\Lambda_{ii} - \epsilon^2\big(\Lambda_{11} + \frac{\Lambda_{ii}}{\kappa_i}\big)\big|} = \frac{2\epsilon\sqrt{1-\epsilon^2}}{\big|1 - \epsilon^2\big(\kappa_i + \frac{1}{\kappa_i}\big)\big|}. \quad (122)$$
K Convergence Proof
K.1 Non-Convex Riemannian Optimization Theory
We repeat the non-convex Riemannian optimization rates here from [10] for convenience.

Lemma K.1. Under Assumptions K.2 and K.3, generic Riemannian descent (Algorithm 5) returns $x \in M$ satisfying $f(x) \leq f(x_0)$ and $||\nabla^R f(x)|| \leq \rho$ in at most
$$\frac{f(x_0) - f^*}{\xi}\cdot\frac{1}{\rho^2} \quad (123)$$
iterations, provided $\rho \leq \frac{\xi'}{\xi}$. If $\rho > \frac{\xi'}{\xi}$, at most $\frac{f(x_0) - f^*}{\xi'}\cdot\frac{1}{\rho}$ iterations are required.

Proof. See Theorem 2.5 in [10].

Assumption K.2. There exists $f^* > -\infty$ such that $f(x) \geq f^*$ for all $x \in M$. See Assumption 2.3 in [10].

Assumption K.3. There exist $\xi, \xi' > 0$ such that, for all $k \geq 0$, $f(x_k) - f(x_{k+1}) \geq \min(\xi||\nabla^R f(x_k)||, \xi')||\nabla^R f(x_k)||$. See Assumption 2.4 in [10].
Algorithm 5 Generic Riemannian descent algorithm
Given: $f: M \rightarrow \mathbb{R}$ differentiable, a retraction Retr on $M$, $x_0 \in M$, $\rho > 0$
  Init: $k \leftarrow 0$
  while $||\nabla^R f(x_k)|| > \rho$ do
    Pick $\eta_k \in T_{x_k}M$
    $x_{k+1} \leftarrow \mathrm{Retr}_{x_k}(\eta_k)$
    $k \leftarrow k + 1$
  end while
  return $x_k$
K.2 Convergence of EigenGame
Theorem K.4 provides an asymptotic convergence guarantee for Algorithm 1 (below) to recover the top-$k$ principal components. Assuming $\hat{v}_i$ is initialized within $\frac{\pi}{4}$ of $v_i$ for all $i \leq k$, Theorem K.5 provides a finite sample convergence rate. In particular, it specifies the total number of iterations required to learn the parents such that $\hat{v}_k$ can be learned within a desired tolerance.
The proof of Theorem K.4 proceeds in several steps. First, recall that player $i$'s utility is sinusoidal in its angular deviation from $v_i$ and is therefore, technically, non-concave, although it is simple in the sense that every local maximum is a global maximum (w.r.t. angular deviation). Also, note that our ascent is not performed on the natural parameters of the sphere ($\theta_i$ and $\Delta_i$), but rather on $\hat{v}_i$ directly with $\hat{v}_i \in S^{d-1}$, a Riemannian manifold.
We therefore leverage recent results in non-convex optimization, specifically minimization, for Riemannian manifolds [10], repeated here for convenience (see Lemma K.1). Note, we are maximizing a utility, so we simply flip the sign of our utility to apply this theory. The convergence rate guarantee given by this theory is for generic Riemannian descent with a constant step size, Algorithm 5, and makes two assumptions. One is a bound on the utility (Lemma K.2) and the other is a smoothness or Lipschitz condition (Lemma K.3). The convergence rate itself states the number of iterations required for the norm of the Riemannian gradient to fall below a given threshold. The theory also guarantees descent in that the solution returned by the algorithm will have lower loss (higher utility) than the vector passed to the algorithm.
The probability of sampling a vector $\hat{v}_i^0$ at angular deviation within $\phi$ of the maximizer is given by
$$P[|\theta_i^0 - \theta_i^*| \leq \phi] = I_{\sin^2(\phi)}\Big(\frac{d-1}{2}, \frac{1}{2}\Big) = \frac{\mathrm{Beta}\big(\sin^2\phi;\ \frac{d-1}{2}, \frac{1}{2}\big)}{\mathrm{Beta}\big(1;\ \frac{d-1}{2}, \frac{1}{2}\big)} \quad (124)$$
where Beta is the incomplete beta function, and $I$ is the normalized incomplete beta function [39]. This probability quickly approaches zero for $\phi < \frac{\pi}{2}$ as the dimension $d$ increases. Therefore, for large $d$, it becomes highly probable that $\hat{v}_i$ will be initialized near an angle $\frac{\pi}{2}$ from the true eigenvector; in other words, all points are far from each other in high dimensions. In this case, $\hat{v}_i$ lies near a trough of the sinusoidal utility where gradients are small. Without a bound on the minimum possible gradient norm, a finite sample rate cannot be constructed (how many iterations are required to escape the trough?). Therefore, we can only guarantee asymptotic convergence in this setting. Next, we consider the fortuitous case where all $\hat{v}_i$ have been initialized within $\frac{\pi}{4}$. This is both to obtain a convergence rate for this setting and to highlight the Big-O dependencies. Note that the utility is symmetric across $\frac{\pi}{4}$, and the number of iterations required to escape a trough and reach the $\frac{\pi}{4}$ mark is equal to the number of iterations required to ascend from $\frac{\pi}{4}$ to the same distance from the peak. In order to ensure this theory can provide meaningful bounds for EigenGame, we first show, assuming a child is within $\frac{\pi}{4}$ of its maximizer, that the norm of the Riemannian gradient bounds the angular deviation of the child from this maximizer.
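For intuition, Equation (124) can be evaluated with SciPy's regularized incomplete beta function; the helper name below is ours.

```python
import numpy as np
from scipy.special import betainc   # regularized incomplete beta I_x(a, b)

def init_prob(d, phi):
    """P[|theta^0 - theta^*| <= phi] on S^{d-1}, per Equation (124)."""
    return betainc((d - 1) / 2.0, 0.5, np.sin(phi) ** 2)

for d in (3, 50, 1000):
    print(d, init_prob(d, np.pi / 4))   # shrinks rapidly with dimension d
```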
To begin the proof, we relate the error in the parents to a bound on the ambient gradient in Lemma K.8. This bound is then tightened assuming parents with error below a certain threshold in Lemma K.9. Using the fact that $u_i = \hat{v}_i^\top\nabla_{\hat{v}_i}u_i$, this bound directly translates to a bound on the utility in Corollary K.9.1, thereby satisfying Assumption K.2. Again, given accurate parents, Lemma K.10 proves Assumption K.3 on smoothness is satisfied and derives some of the constants for the ultimate convergence rate.

Recall that we have so far been proving convergence to a local maximizer of a child's utility, which, assuming inaccurate parents, is not the same as the true eigenvector. Lemma K.11 upper bounds the angular deviation of an approximate maximizer from the true eigenvector using the angular deviation of a maximizer plus the approximate maximizer's approximation error. Lemma K.12 then provides the convergence rate for the child to approach the true eigenvector given accurate enough parents. Finally, Theorem K.4 compiles the chain of convergence rates leading up the DAG towards $\hat{v}_1$ and derives a convergence rate for child $k$ given that all previous parents have been learned to a high enough degree of accuracy. The number of iterations required for each parent in the chain is provided.
Theorem K.4. Assume all spectral gaps are positive, i.e., $g_i > 0$ for $i = 1, \ldots, k$. Let $\theta_k$ denote the angular distance (in radians) of $\hat{v}_k$ from the true eigenvector $v_k$. Let the maximum desired error for $\theta_k$ be $\theta_{tol} \leq 1$ radian. Then set $c_k = \frac{\theta_{tol}}{16}$, $\rho_k = \frac{g_k}{2\pi}\theta_{tol}$, and
$$\rho_i = \frac{g_ig_{i+1}}{2\pi i\Lambda_{11}}c_{i+1} \quad (125)$$
$$c_i \leq \frac{(i-1)!\prod_{j=i+1}^{k}g_j}{(16\Lambda_{11})^{k-i}(k-1)!}c_k \quad (126)$$
for $i < k$, where the $c_i$'s are dictated by each $\hat{v}_i$ to its parents and represent fractions of a canonical error threshold; for example, if $\hat{v}_k$ sets $c_k = \frac{1}{16}$, then this threshold gets communicated up the DAG to each parent, each time strengthening.
Consider learning the $\hat{v}_i$ by applying Algorithm 1 successively, i.e., learn $\hat{v}_1$, stop ascent, learn $\hat{v}_2$, and so on, each with step size $\frac{1}{2L}$ and corresponding $\rho_i$, where $L = 4\big(\Lambda_{11}k + (1 + \kappa_{k-1})\frac{g_k}{16}\big)$. Then the top-$k$ principal components will be returned, each within tolerance $\theta_{tol}$, in the limit.
Proof. In order to learn $\hat{v}_k$, we need $|\theta_j| \leq \frac{c_kg_k}{(k-1)\Lambda_{11}}$ with $c_k \leq \frac{1}{16}$ for all $j < k$. If this requirement is met, then by Lemma K.11, the angular error in $\hat{v}_k$ after running Riemannian gradient ascent is bounded as
$$|\theta_k| \leq \bar{\epsilon} + 8c_k \quad (127)$$
where $\bar{\epsilon}$ denotes the convergence error and the error propagated by the parents is $8c_k$. The quantity $\frac{g_k}{(k-1)\Lambda_{11}}$ in the parents' bound is small, so the parents must be very accurate to reduce the error propagated to the child. Each parent must then convey this information up the chain, strengthening the requirement each hop.
Let half the error in $|\theta_k|$ come from mis-specifying the utility with imperfect parents, $\hat{v}_{j<k}$, and the other half from convergence error. The error after learning $\hat{v}_{k-1}$ via Riemannian gradient ascent must be less than the threshold required for learning the $k$th eigenvector. Assuming $\hat{v}_{k-1}$'s parents have been learned accurately enough, $|\theta_{j<k-1}| \leq \frac{c_{k-1}g_{k-1}}{(k-2)\Lambda_{11}}$, and that the $\hat{v}_{j\leq k}$ were initialized within $\frac{\pi}{4}$ of their maximizers, we require:
$$|\theta_{k-1}| \stackrel{\text{LK.12}}{\leq} \frac{\pi}{g_{k-1}}\rho_{k-1} + 8c_{k-1} \leq \frac{c_kg_k}{(k-1)\Lambda_{11}}. \quad (128)$$
More generally, the error after learning $\hat{v}_{i-1}$ must be less than the threshold for learning any of its successors:
$$|\theta_{i-1}| \leq \frac{\pi}{g_{i-1}}\rho_{i-1} + 8c_{i-1} \leq \min_{i-1<l\leq k}\frac{c_lg_l}{(l-1)\Lambda_{11}}. \quad (129)$$
Assume for now that the arg min of the expression is $i$, the immediate child. First we bound the error from $\hat{v}_{i-1}$'s parents:
$$8c_{i-1} \leq \frac{c_ig_i}{2(i-1)\Lambda_{11}} \quad (130)$$
$$\Rightarrow c_{i-1} \leq \frac{c_ig_i}{16(i-1)\Lambda_{11}}. \quad (131)$$
Note the 2 in the denominator of Equation (130), which appears because we desired half the error to come from the parents (half is an arbitrary choice in the analysis). Continuing this process recursively implies
$$c_{i-2} \leq \frac{c_{i-1}g_{i-1}}{16(i-2)\Lambda_{11}} \leq \frac{c_ig_{i-1}g_i}{16^2(i-2)(i-1)\Lambda_{11}^2} \quad (132)$$
$$\Rightarrow c_{i-n} \leq \frac{(i-n-1)!\prod_{j=i-n+1}^{i}g_j}{(16\Lambda_{11})^n(i-1)!}c_i. \quad (133)$$
One can see that $c_{j<i}$ is strictly smaller than $c_i$ because each additional term added to the product is strictly less than 1; the assumption of the arg min above is therefore correct. In particular, this requires the first eigenvector to be learned to very high accuracy to enable learning the $k$th:
$$c_1 \leq \frac{\prod_{j=2}^{k}g_j}{(16\Lambda_{11})^{k-1}(k-1)!}c_k. \quad (134)$$
More generally,
$$c_i \leq \frac{(i-1)!\prod_{j=i+1}^{k}g_j}{(16\Lambda_{11})^{k-i}(k-1)!}c_k. \quad (135)$$
This completes the requirement for mitigating error in the parents.
The convergence error from gradient ascent must also be bounded as (again, note the 2)
$$\frac{\pi}{g_i}\rho_i \leq \frac{c_{i+1}g_{i+1}}{2i\Lambda_{11}} \quad (136)$$
$$\Rightarrow \rho_i \leq \frac{g_ig_{i+1}}{2\pi i\Lambda_{11}}c_{i+1} \quad (137)$$
which requires at most
$$t_i = 5\Big(\frac{\pi i\Lambda_{11}}{g_ig_{i+1}}\Big)^2\frac{1}{c_{i+1}^2} \quad (138)$$
iterations. Given that $\hat{v}_i$ is initialized within $\frac{\pi}{4}$ of its maximizer, it follows that learning each $\hat{v}_{j<k}$ consecutively via Riemannian gradient ascent for at most $\sum_{i=1}^{k-1}t_i$ iterations is sufficient for learning the $k$-th eigenvector. Riemannian gradient ascent on $\hat{v}_k$ then returns (Lemma K.12)
$$|\theta_k| \leq \frac{\pi}{g_k}\rho_k + 8c_k \leq \frac{\pi}{g_k}\rho_k + \frac{\theta_{tol}}{2} \quad (139)$$
after at most
$$t_k = \frac{5}{4}\cdot\frac{1}{\rho_k^2} = \frac{5\pi^2}{(\theta_{tol}\,g_k)^2} \quad (140)$$
iterations.
We can relax the assumption thatv i is initialized within π 4 of its maximizer and obtain global convergence. Assume that π 2 − |θ 0 i | ≤ π 4 and let ||∇v0 i || be the initial norm of the Riemannian gradient. The utility function u i (v i ,v j<i ) is symmetric across π 4 . Therefore, the number of iterations required to ascend to within π 4 is given by Lemma K.12:
t + i = 5 4 π g i 2 1 ( π 2 − |θ 0 i |) 2 .(141)
Alternatively, simply set the desired gradient norm to be less than the initial. This necessarily requires iterates to ascend to past π 4 . As long asv i is not initialized to exactly π 2 from the maximum (an event with Lebesgue measure 0), the ascent process will converge to the maximizer.
Theorem K.5. Apply the algorithm outlined in Theorem K.4 with the same assumptions. Then with probability
P [|θ 0 i − θ * i | ≤ π 4 ] = I 1 2 ( d − 1 2 , 1 2 ) (142)
where I is the normalized incomplete beta function, the max total number of iterations required for learning all vectors to adequate accuracy is
T k = O k (16Λ k 11 )(k − 1)! k j=1 g j 1 θ tol 2 .(143)
Discussion. In other words, assuming allv i are fortuitously initialized within π 4 of their maximizers, then we can state a finite sample convergence rate. The first k in the Big-O formula for total iterations appears simply from a naive summing of worst case bounds on the number of iterations required to learn eachv j<k individually. The constant 16 is a loose bound that arises from the error propagation analysis. Essentially, parent vectors,v j<i , must be learned to under 1 16 a canonical error threshold for the childv i , gi (i−1)Λ11 . The Riemannian optimization theory we leverage dictates that 1 ρ 2 i iterations are required to meet a O(ρ i ) error threshold. This is why the squared inverse of the error threshold appears here. Breaking down the error threshold itself, the ratio Λ11 gi says that more iterations are required to distinguish eigenvectors when the difference between them (summarized by the gap g i ) is small relative to the scale of the spectrum, Λ 11 . The (k − 1)! term appears because learning smaller eigenvectors requires learning a much more accuratev 1 higher up the chain.
Proof. Assumev i is sampled uniformly in S d−1 . Note this can be accomplished by normalizing a sample from a multivariate Gaussian. We will prove (i) the probability of the event thatv 0 i is within π 4 of the maximizer of u i (v i ,v j<i ), (ii) an upper bound on the number of iterations required to return allv i with angular error less than θ tol .
The probability of sampling a vectorv 0 i at angular deviation within π 4 of the maximizer is given by twice the probability of sampling from one of the spherical caps around v i or −v i . This probability is
P [|θ 0 i − θ * i | ≤ φ] = I sin 2 (φ) ( d − 1 2 , 1 2 ) = Beta(sin 2 (φ), d−1 2 , 1 2 ) Beta(1, d−1 2 , 1 2 )(144)
where Beta is the incomplete beta function, and I is the normalized incomplete beta function [39]. This probability quickly approaches zero for φ < π 2 as the dimension d increases. This proves (i). Plugging the bound on c i
c i ≤ (i − 1)! k j=i+1 g j (16Λ 11 ) k−i (k − 1)! c k(145)
into the bound on iterations t i = 5 πiΛ 11 g i g i+1
2 1 c 2 i+1(146)
we find
t i = 5 πiΛ 11 g i g i+1 2 (16Λ 11 ) 2(k−i−1) ((k − 1)!) 2 (i!) 2 k j=i+2 g 2 j 1 c 2 k (147) = 5π 2 16 2(k−i) Λ 2(k−i) 11 ((k − 1)!) 2 k j=i g 2 j ((i − 1)!) 2 1 (16c k ) 2 (148) ≤ 5π 2 (16Λ 11 ) k−1 (k − 1)! k j=1 g j 1 16c k 2 [Λ 11 ≥ g i ∀i] (149) = O (16Λ 11 ) k (k − 1)! k j=1 g j 1 16c k 2 (150)
which is now in a form independent of i (worst case). It can be shown that t k ≤ t 1 by taking their log and applying Jensen's inequality. The total iterations required for learningv j<k is at most k − 1 times this. Therefore,
T k = O k (16Λ 11 ) k (k − 1)! k j=1 g j 1 16c k 2 .
(151)
Lemma K.6. Assumev i is within π 4 of its maximizer, i.e., |θ i − θ * i | ≤ π 4 . Also, assume that |θ j<i | ≤ cigi (i−1)Λ11 ≤ 1 2 with 0 ≤ c i ≤ 1 16 . Then the norm of the Riemannian gradient of u i upper bounds this angular deviation:
|θ i − θ * i | ≤ π g i ||∇ R vi u i (v i ,v j<i )||.(152)
Proof. The Riemannian gradient measures how the utility u i changes while moving along the manifold. In contrast, the ambient gradient measures how u i changes while moving in ambient space, possibly off the manifold. Rather than bounding the angular deviation using the projection of the ambient gradient onto the tangent space of the manifold, (I −v iv i )∇v i u i , we instead reparameterizev i to ensure it lies on the manifold,v i = cos(θ i )v i + sin(θ i )∆ i where ∆ i is a unit vector and v i , ∆ i = 0. Computing gradients with respect to the new unconstrained arguments allows recovering a bound on the Riemannian gradient via a simple chain rule calculation.
We lower bound the norm of the Riemannian gradient as follows:
∂u i ∂θ i = ∇ R vi u i (v i ,v j<i ) ∂v ∂θ i (153) =⇒ || ∂u i ∂θ i || ≤ ||∇ R vi u i (v i ,v j<i |||| ∂v i ∂θ i || (154) =⇒ ||∇ R vi u i (v i ,v j<i || ≥ ||∂u i /∂θ i || ||∂v i /∂θ i || .(155)
Note that ||∂v i /∂θ i || = 1 by design. And the numerator can be bounded using Lemma J.4 as
||∂u i /∂θ i || = A 2 + B 2 | sin(2 θ i − θ * i ) |(156)
where θ * i = − φ 2 and φ = tan −1 B A . Furthermore, assume |θ i − θ * i | ≤ π 4 . Then
| sin(2 θ i − θ * i ) | ≥ 2 π θ i − θ * i ) .(157)≥ 2 π A 2 + B 2 |θ i − θ * i | (160) ≥ 2 π |A||θ i − θ * i | (161) LJ.11 ≥ 2 π (1 − 8c)g i |θ i − θ * i | (162) ≥ g i π |θ i − θ * i |(163)
completing the proof.
Lemma K.7. Let |θ j | ≤ < 1 for all j < i. Then the ratio of generalized inner products is bounded as
v i , Λv j v j , Λv j ≤ 1 + (1 + κ j ) √ 1 − 2 .(164)
Proof. We writev j≤i = cos(θ j )v j + sin(θ j )∆ j where ∆ j , v j = 0 without loss of generality. Note that |θ j | ≤ implies | sin(θ j )| ≤ . Then v i , Λv j v j , Λv j (165) = cos(θ i )v i + sin(θ i )∆ i , Λ cos(θ j )v j + sin(θ j )∆ j cos(θ j )v j + sin(θ j )∆ j , Λ cos(θ j )v j + sin(θ j )∆ j (166) = cos(θ i )v i + sin(θ i )∆ i , Λ cos(θ j )v j + sin(θ j )∆ j Λ jj cos(θ j ) 2 + ∆ j , Λ∆ j sin 2 (θ j ) (167) = Λ jj sin(θ i ) cos(θ j ) ∆ i , v j + Λ ii sin(θ j ) cos(θ i ) ∆ j , v i + sin(θ i ) sin(θ j ) ∆ i , Λ∆ j Λ jj cos(θ j ) 2 + ||∆ j || 2 Λ −1 sin 2 (θ j )
(168) ≤ Λ jj | sin(θ i )| √ 1 − 2 + Λ ii | cos(θ i )| + | sin(θ i )| Λ 11 Λ jj (1 − 2 ) (169) ≤ Λ jj √ 1 − 2 + Λ ii + Λ 11 Λ jj (1 − 2 )(170)= 1 √ 1 − 2 + Λ ii Λ jj + κ j √ 1 − 2(171)≤ 1 + (1 + κ j ) √ 1 − 2 .(172)
Lemma K.8 (Lipschitz Bound). Let |θ j | ≤ < 1 for all j < i. Then the norm of the ambient gradient of u i is bounded as
||∇v i u i (v i ,v j<i )|| ≤ 2Λ 11 1 + (i − 1) 1 + (1 + κ i−1 ) √ 1 − 2 .(173)
Proof. Let η = α∇ R vi u i = α(I −v iv i )∇v i u i , α > 0, andη = η ||η|| . Letv i =v i+η γ where γ = ||v i + η||.
u i (v i ) = 1 γ 2 (v i + η) Λ(v i + η) − j<i (v i + η) Λv j 2 v j Λv j (186) = 1 γ 2 v i Λv i − j<i (v i Λv j ) 2 v j Λv j + η Λη − j<i (η Λv j ) 2 v j Λv j + 2η Λv i − 2 j<i (v i Λv j )(η Λv j ) v j Λv j (187) = 1 γ 2 u i (v i ) + u i (η) + 2η ∇v i u i (v i ) (188) = 1 γ 2 u i (v i ) + ||η|| 2 u i (η) + 2η ∇v i u i (v i )(189)
The vectorsv i and ∇v i u i (v i ) define a 2-d plane in whichv i lies independent of the step size α. Therefore, we can consider gradients confined to a 2-d plane without loss of generality. Specifically, letv i = 0 1 and ∇ = ∇v i u i (v i ) = β cos(ψ) sin(ψ) . Then ∇ R = ∇ R vi u i (v i ) = β cos(ψ) 0 and γ = 1 + ||η|| 2 = 1 + α 2 β 2 cos 2 (ψ). Also, let z = β cos(ψ) and α < 1 Li (see Equation (179) for definition) which implies α 2 ||∇ R || 2 < 1. Then
u i (v i ) − u i (v i ) (190) = ≤0 ( 1 γ 2 − 1) u i (v i ) + 1 γ 2 (||η|| 2 u i (η) + 2η ∇v i u i (v i )) (191) CK.9.1 ≥ ( 1 γ 2 − 1)L i + 1 γ 2 (α 2 ||∇ R || 2 u i (η) + 2α∇ ∇ R ) (192) CK.9.1 ≥ ( 1 γ 2 − 1)L i + 1 γ 2 (2α∇ ∇ R + α 2 ||∇ R || 2 (−L i ))(193)
= ( 1 1 + α 2 β 2 cos 2 (ψ) − 1)L i + α 1 + α 2 β 2 cos 2 (ψ) (2 − αL i )β 2 cos 2 (ψ)
= ( 1 1 + α 2 z 2 − 1)L i + α(2 − αL i ) 1 + α 2 z 2 z 2 (195) = 1 1 + α 2 z 2 (L i − L i α 2 z 2 − L + α(2 − αL i )z 2 ) (196) = 1 1 + α 2 z 2 (−2L i α 2 z 2 + 2αz 2 ) (197)
= 2αz 2 1 + α 2 z 2 (1 − αL i ) > 0(198)
where the assumption that |θ j | ≤ cigi (i−1)Λ11 was used to leverage Corollary K.9.1. Let α = 1 2Li . Then ||η|| 2 = α 2 z 2 ≤ 1 4 and
u i (v i ) − u i (v i ) ≥ 2αz 2 1 + α 2 z 2 (1 − αL i ) (199) = 2α 2 z 2 1 + α 2 z 2 1 − αL i α (200) = 2L i α 2 z 2 1 + α 2 z 2 (201) = 2L i ||η|| 2 1 + ||η|| 2(202)
≥ min(ξ||η|| 2 , ξ ||η||)
with ξ = ξ = 8 5 L i .
Lemma K.11 (Approximate Optimization is Reasonable Given Accurate Parents). Assume |θ j | ≤ cigi (i−1)Λ11 ≤ 1 2 for all j < i with 0 ≤ c ≤ 1 16 , i.e., the parents have been learned accurately. Then for any approximate local maximizer (θ i ,∆ i ) of u i (v i (θ i , ∆ i ),v j<i ), if the angular deviation |θ i − θ * i | ≤¯ where θ * i forms the global max, |θ i | ≤¯ + 8c i (204)
whereθ i denotes the angular distance of the approximate local maximizer to the true eigenvector v i .
Proof. Note that the true eigenvector occurs atθ i = 0. The result follows directly from Theorem J.2:
|θ i | = |θ i − 0| ≤ |θ i − θ * i | + |θ * i − 0| ≤¯ + 8c i .(205)
Lemma K.12. Assumev i is initialized within π 4 of its maximizer and its parents are accurate enough, i.e., that |θ j<i | ≤ cigi (i−1)Λ11 ≤ 1 2 with 0 ≤ c i ≤ 1 16 . Let ρ i be the maximum tolerated error desired forv i . Then Riemannian gradient ascent returns
|θ i | ≤ π g i ρ i + 8c i(206)
after at most
5 4 · 1 ρ 2 i (207)
iterations.
Proof. Note that the assumptions of Lemma K.1 are met by Corollary K.9.1 and Lemma K.10 with ξ = ξ = 8 5 and Riemannian gradient ascent. Plugging into Lemma K.1 ensures that Riemannian gradient ascent returns unit vectorv i satisfying u(v i ) ≥ u(v 0 i ) and ||∇ R || ≤ ρ i in at most
u(v * i ) − u(v 0 i ) 8 5 L i · 1 ρ 2 i(208)
iterations (wherev i is initialized tov 0 i ). Additionally, note that for anyv i , u i (v * i ) − u i (v i ) ≤ 2L i where L i bounds the absolute value of the utility u i (see Corollary K.9.1) andv * i = arg max u i (v i ). Combining this with Lemma K.6 gives
|θ i − θ * i | ≤ π g i ρ i(209)
after at most
5 4 · 1 ρ 2 i(210)
iterations. Lastly, translating |θ i − θ * i | to |θ i | using Lemma K.11 gives the desired result.
Figure 2 :
2EigenGame guides eacĥ v i along the unit-sphere from to in parallel; M = diag([3, 2, 1]).
Figure 5 :
5Block 1 mean activation maps of the top-32 principal components of RESNET-200 on IMAGENET computed with EigenGame. of a pretrained RESNET-200 on the IMAGENET dataset. We concatenate the flattened activations from the output of each residual block resulting in a d = 20M dimensional vector representation for each of the 1.2M input images. It is not possible to store the entire 195TB matrix in memory, nor incrementally compute the Gram/covariance matrix.
Figure 6 :
6Top-32 Rayleigh quotients of the matrix of RESNET-200 activations recovered with EigenGame and their respective utilities.
Λ 11 I − M . The top-k eigenvectors of M ' are the bottom-k eigenvectors of M . For example, the dth eigenvector of M , v d , is the largest eigenvector of M :
Figure 7 :
7Comparison of mean activation maps between Oja's with deflation, EigenGame, and FD for a section of the top principal components of RESNET-50 on IMAGENET.
Figure 8 :
8(a) The longest streak of consecutive vectors with angular error less than π 8 radians is plotted vs algorithm iterations on MNIST for minibatch sizes of 256 and 512. Shaded regions highlight ± standard error of the mean for the best performing learning rates. Average runtimes are reported in seconds next to the method names. (b) Subspace distance on MNIST. (a,b) Learning rates were chosen from {10 −3 , . . . , 10 −6 } on 10 held out runs. All plots show means over 10 trials.
of v j < i to v i
Combining the results gives||∇ R vi u i (v i ,v j<i || ≥ ||∂u i /∂θ i || ||∂v/∂θ i || (158) = ||∂u i /∂θ i ||(159)
Most generalizations to the top-k involve adding an orthonormalization step.
Unique up to a sign change; this is expected as both vi and −vi represent the same principal component.
Systems such as Google Cloud offer a large number of accelerator cores https://cloud.google.com/ tpu/docs/system-architecture.6 EigenGame sans order learns max 1 PC and sans order+normalization 5 PCs on data inFigure 3a.7 EigenGame runtimes are longer than those of EigenGame R in the synthetic experiments despite strictly requiring fewer FLOPS; apparently this is due to low-level floating point arithmetic specific to the experiments.
A detailed discussion of Frequent Directions[21] can be found in Appendix D.
The activations in Block 1 result from convolving 64 filters over the layer's input. We take the mean over the 64 channels and plot the resulting 112 × 112 image.
Empirically, replacing ||vi|| = 1 with ||vi|| ≤ 1 does not harm performance while the latter is easier to enforce on neural networks for example[61].
AcknowledgementsWe are grateful to Trevor Cai for his help scaling the JAX implementation of EigenGame to handle the large IMAGENET experiment and to Daniele Calandriello for sharing his expert knowledge of related work and advice on revising parts of the manuscript.Proof. Starting with the gradient (Equation 7), we findThen the norm of the ambient gradient of u i is bounded asProof. Starting with Lemma K.8, we findCorollary K.9.1 (Bound on Utility). Assume |θ j | ≤ cigi (i−1)Λ11 ≤ 1 2 for all j < i with 0 ≤ c i ≤ 1 16 . Then the absolute value of the utility is bounded as followsthereby satisfying Assumption K.2.Lemma K.10. Assume |θ j | ≤ cigi (i−1)Λ11 ≤ 1 2 for all j < i with 0 ≤ c i ≤ 1 16 . Then Assumption K.3 is satisfied with ξ = ξ = 8 5 L i .
Optimization Algorithms on Matrix Manifolds. P-A Absil, Robert Mahony, Rodolphe Sepulchre, Princeton University PressP-A. Absil, Robert Mahony, and Rodolphe Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2009.
First efficient convergence for streaming k-PCA: a global, gap-free, and near-optimal rate. Zeyuan Allen, -Zhu , Yuanzhi Li, 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS). IEEEZeyuan Allen-Zhu and Yuanzhi Li. First efficient convergence for streaming k-PCA: a global, gap-free, and near-optimal rate. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 487-492. IEEE, 2017.
An implicit form of Krasulina's k-PCA update without the orthonormality constraint. Ehsan Amid, Manfred K Warmuth, arXiv:1909.04803arXiv preprintEhsan Amid and Manfred K Warmuth. An implicit form of Krasulina's k-PCA update without the orthonormality constraint. arXiv preprint arXiv:1909.04803, 2019.
Neural networks and principal component analysis: learning from examples without local minima. Pierre Baldi, Kurt Hornik, Neural Networks. 21Pierre Baldi and Kurt Hornik. Neural networks and principal component analysis: learning from examples without local minima. Neural Networks, 2(1):53-58, 1989.
David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, Thore Graepel, arXiv:1802.05642The mechanics of n-player differentiable games. arXiv preprintDavid Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of n-player differentiable games. arXiv preprint arXiv:1802.05642, 2018.
Re-evaluating evaluation. David Balduzzi, Karl Tuyls, Julien Perolat, Thore Graepel, Advances in Neural Information Processing Systems. David Balduzzi, Karl Tuyls, Julien Perolat, and Thore Graepel. Re-evaluating evaluation. In Advances in Neural Information Processing Systems, pages 3268-3279, 2018.
Open-ended learning in symmetric zero-sum games. David Balduzzi, Marta Garnelo, Yoram Bachrach, Wojciech M Czarnecki, Julien Perolat, Max Jaderberg, Thore Graepel, arXiv:1901.08106arXiv preprintDavid Balduzzi, Marta Garnelo, Yoram Bachrach, Wojciech M Czarnecki, Julien Perolat, Max Jaderberg, and Thore Graepel. Open-ended learning in symmetric zero-sum games. arXiv preprint arXiv:1901.08106, 2019.
The "independent components" of natural scenes are edge filters. J Anthony, Terrence J Bell, Sejnowski, Vision Research. 3723Anthony J Bell and Terrence J Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konecny, Stefano Mazzocchi, H Brendan Mcmahan, arXiv:1902.01046Towards federated learning at scale: system design. arXiv preprintKeith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konecny, Stefano Mazzocchi, H Brendan McMahan, et al. Towards federated learning at scale: system design. arXiv preprint arXiv:1902.01046, 2019.
Global rates of convergence for nonconvex optimization on manifolds. Nicolas Boumal, Pierre-Antoine Absil, Coralia Cartis, IMA Journal of Numerical Analysis. 391Nicolas Boumal, Pierre-Antoine Absil, and Coralia Cartis. Global rates of convergence for nonconvex optimization on manifolds. IMA Journal of Numerical Analysis, 39(1):1-33, 2019.
Auto-association by multilayer perceptrons and singular value decomposition. Hervé Bourlard, Yves Kamp, Biological Cybernetics. 594-5Hervé Bourlard and Yves Kamp. Auto-association by multilayer perceptrons and singular value decompo- sition. Biological Cybernetics, 59(4-5):291-294, 1988.
JAX: composable transformations of Python+NumPy programs. James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, Skye Wanderman-Milne, James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, and Skye Wanderman-Milne. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
Infogan: interpretable representation learning by information maximizing generative adversarial nets. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, Pieter Abbeel, Advances in Neural Information Processing Systems. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172-2180, 2016.
Input sparsity time low-rank approximation via ridge leverage score sampling. Cameron Michael B Cohen, Christopher Musco, Musco, Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete AlgorithmsSIAMMichael B Cohen, Cameron Musco, and Christopher Musco. Input sparsity time low-rank approximation via ridge leverage score sampling. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1758-1777. SIAM, 2017.
The complexity of computing a Nash equilibrium. Constantinos Daskalakis, W Paul, Christos H Goldberg, Papadimitriou, SIAM Journal on Computing. 391Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39(1):195-259, 2009.
Natural neural networks. Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, Advances in Neural Information Processing Systems. Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, et al. Natural neural networks. In Advances in Neural Information Processing Systems, pages 2071-2079, 2015.
K-means clustering via principal component analysis. Chris Ding, Xiaofeng He, Proceedings of the International Conference on Machine Learning. the International Conference on Machine Learning29Chris Ding and Xiaofeng He. K-means clustering via principal component analysis. In Proceedings of the International Conference on Machine Learning, page 29, 2004.
Understanding GANs: the LQG setting. Soheil Feizi, Farzan Farnia, Tony Ginart, David Tse, arXiv:1710.10793arXiv preprintSoheil Feizi, Farzan Farnia, Tony Ginart, and David Tse. Understanding GANs: the LQG setting. arXiv preprint arXiv:1710.10793, 2017.
Turning big data into tiny data: Constant-size coresets for k-means, PCA, and projective clustering. Dan Feldman, Melanie Schmidt, Christian Sohler, SIAM Journal on Computing. 493Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA, and projective clustering. SIAM Journal on Computing, 49(3):601-657, 2020.
Global convergence to the equilibrium of GANs using variational inequalities. Ian Gemp, Sridhar Mahadevan, arXiv:1808.01531arXiv preprintIan Gemp and Sridhar Mahadevan. Global convergence to the equilibrium of GANs using variational inequalities. arXiv preprint arXiv:1808.01531, 2018.
Frequent directions: simple and deterministic matrix sketching. Mina Ghashami, Edo Liberty, Jeff M Phillips, David P Woodruff, SIAM Journal on Computing. 455Mina Ghashami, Edo Liberty, Jeff M Phillips, and David P Woodruff. Frequent directions: simple and deterministic matrix sketching. SIAM Journal on Computing, 45(5):1762-1792, 2016.
Eigenvalue and generalized eigenvalue problems: Tutorial. Fakhri Benyamin Ghojogh, Mark Karray, Crowley, arXiv:1903.11240arXiv preprintBenyamin Ghojogh, Fakhri Karray, and Mark Crowley. Eigenvalue and generalized eigenvalue problems: Tutorial. arXiv preprint arXiv:1903.11240, 2019.
Nash and correlated equilibria: some complexity considerations. Itzhak Gilboa, Eitan Zemel, Games and Economic Behavior. 11Itzhak Gilboa and Eitan Zemel. Nash and correlated equilibria: some complexity considerations. Games and Economic Behavior, 1(1):80-93, 1989.
. H Gene, Charles F Van Golub, Loan, Matrix Computations. 3JHU pressGene H Golub and Charles F Van Loan. Matrix Computations, volume 3. JHU press, 2012.
Generative adversarial nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Advances in Neural Information Processing Systems. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. Nathan Halko, Joel A Per-Gunnar Martinsson, Tropp, SIAM review. 532Nathan Halko, Per-Gunnar Martinsson, and Joel A Tropp. Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217-288, 2011.
The noisy power method: a meta algorithm with applications. Moritz Hardt, Eric Price, Advances in Neural Information Processing Systems. Moritz Hardt and Eric Price. The noisy power method: a meta algorithm with applications. In Advances in Neural Information Processing Systems, pages 2861-2869, 2014.
The Organization of Behavior: A Neuropsychological Theory. Hebb Donald Olding, Psychology PressDonald Olding Hebb. The Organization of Behavior: A Neuropsychological Theory. Psychology Press, 2005.
Dual-loco: distributing statistical estimation using random projections. Christina Heinze, Brian Mcwilliams, Nicolai Meinshausen, Artificial Intelligence and Statistics. Christina Heinze, Brian McWilliams, and Nicolai Meinshausen. Dual-loco: distributing statistical estima- tion using random projections. In Artificial Intelligence and Statistics, pages 875-883, 2016.
Preserving privacy between features in distributed estimation. Christina Heinze-Deml, Brian Mcwilliams, Nicolai Meinshausen, Stat. 71189Christina Heinze-Deml, Brian McWilliams, and Nicolai Meinshausen. Preserving privacy between features in distributed estimation. Stat, 7(1):e189, 2018.
Beta-VAE: learning basic visual concepts with a constrained variational framework. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner, International Conference on Learning Representations. 26Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. Beta-VAE: learning basic visual concepts with a constrained variational framework. International Conference on Learning Representations, 2(5):6, 2017.
A Roger, Charles R Johnson Horn, Matrix Analysis. Cambridge University PressRoger A Horn and Charles R Johnson. Matrix Analysis. Cambridge University Press, 2012.
Streaming PCA: matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm. Prateek Jain, Chi Jin, M Sham, Praneeth Kakade, Aaron Netrapalli, Sidford, Conference on Learning Theory. Prateek Jain, Chi Jin, Sham M Kakade, Praneeth Netrapalli, and Aaron Sidford. Streaming PCA: matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm. In Conference on Learning Theory, pages 1147-1164, 2016.
Principal components in regression analysis. T Ian, Jolliffe, Principal Component Analysis. SpringerIan T Jolliffe. Principal components in regression analysis. In Principal Component Analysis. Springer, 2002.
In-datacenter performance analysis of a tensor processing unit. P Norman, Cliff Jouppi, Nishant Young, David Patil, Gaurav Patterson, Raminder Agrawal, Sarah Bajwa, Suresh Bates, Nan Bhatia, Al Boden, Borchers, Proceedings of the 44th Annual International Symposium on Computer Architecture. the 44th Annual International Symposium on Computer ArchitectureNorman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture, pages 1-12, 2017.
The method of stochastic approximation for the determination of the least eigenvalue of a symmetrical matrix. Tp Krasulina, USSR Computational Mathematics and Mathematical Physics. 96TP Krasulina. The method of stochastic approximation for the determination of the least eigenvalue of a symmetrical matrix. USSR Computational Mathematics and Mathematical Physics, 9(6):189-195, 1969.
Scalable adaptive stochastic optimization using random projections. Gabriel Krummenacher, Brian Mcwilliams, Yannic Kilcher, Joachim M Buhmann, Nicolai Meinshausen, Advances in Neural Information Processing Systems. Gabriel Krummenacher, Brian McWilliams, Yannic Kilcher, Joachim M Buhmann, and Nicolai Mein- shausen. Scalable adaptive stochastic optimization using random projections. In Advances in Neural Information Processing Systems, pages 1750-1758, 2016.
Alistair Letcher, Jakob Foerster, David Balduzzi, Tim Rocktäschel, Shimon Whiteson, arXiv:1811.08469Stable opponent shaping in differentiable games. arXiv preprintAlistair Letcher, Jakob Foerster, David Balduzzi, Tim Rocktäschel, and Shimon Whiteson. Stable opponent shaping in differentiable games. arXiv preprint arXiv:1811.08469, 2018.
Concise formulas for the area and volume of a hyperspherical cap. Shengqiao Li, Asian Journal of Mathematics and Statistics. 41Shengqiao Li. Concise formulas for the area and volume of a hyperspherical cap. Asian Journal of Mathematics and Statistics, 4(1):66-70, 2011.
Accelerated first-order methods for geodesically convex optimization on Riemannian manifolds. Yuanyuan Liu, Fanhua Shang, James Cheng, Hong Cheng, Licheng Jiao, Advances in Neural Information Processing Systems. Yuanyuan Liu, Fanhua Shang, James Cheng, Hong Cheng, and Licheng Jiao. Accelerated first-order meth- ods for geodesically convex optimization on Riemannian manifolds. In Advances in Neural Information Processing Systems, pages 4868-4877, 2017.
Challenging common assumptions in the unsupervised learning of disentangled representations. Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem, Proceedings of the International Conference on Machine Learning. the International Conference on Machine LearningFrancesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In Proceedings of the International Conference on Machine Learning, pages 4114-4124, 2019.
Proto-value functions: developmental reinforcement learning. Sridhar Mahadevan, Proceedings of the International Conference on Machine learning. the International Conference on Machine learningSridhar Mahadevan. Proto-value functions: developmental reinforcement learning. In Proceedings of the International Conference on Machine learning, pages 553-560, 2005.
Emile Mathieu, Tom Rainforth, Yee Whye Siddharth, Teh, arXiv:1812.02833Disentangling disentanglement in variational autoencoders. arXiv preprintEmile Mathieu, Tom Rainforth, N Siddharth, and Yee Whye Teh. Disentangling disentanglement in variational autoencoders. arXiv preprint arXiv:1812.02833, 2018.
Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida, arXiv:1802.05957Spectral normalization for generative adversarial networks. arXiv preprintTakeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
Projected Dynamical Systems and Variational Inequalities with Applications. Anna Nagurney, Ding Zhang, Springer Science & Business Media2Anna Nagurney and Ding Zhang. Projected Dynamical Systems and Variational Inequalities with Applica- tions, volume 2. Springer Science & Business Media, 2012.
Noam Nisan, Tim Roughgarden, Eva Tardos, Vijay V Vazirani, Algorithmic Game Theory. Cambridge University PressNoam Nisan, Tim Roughgarden, Eva Tardos, and Vijay V Vazirani. Algorithmic Game Theory. Cambridge University Press, 2007.
Simplified neuron model as a principal component analyzer. Erkki Oja, Journal of Mathematical Biology. 153Erkki Oja. Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15(3):267-273, 1982.
From principal subspaces to principal components with linear autoencoders. Elad Plaut, arXiv:1804.10253arXiv preprintElad Plaut. From principal subspaces to principal components with linear autoencoders. arXiv preprint arXiv:1804.10253, 2018.
Reflections on random kitchen sinks. Ali Rahimi, Benjamin Recht, Ali Rahimi and Benjamin Recht. Reflections on random kitchen sinks, 2017. URL http://www.argmin. net/2017/12/05/kitchen-sinks/.
Distributed stochastic algorithms for high-rate streaming principal component analysis. Haroon Raja, U Waheed, Bajwa, arXiv:2001.01017arXiv preprintHaroon Raja and Waheed U Bajwa. Distributed stochastic algorithms for high-rate streaming principal component analysis. arXiv preprint arXiv:2001.01017, 2020.
A class of games possessing pure-strategy Nash equilibria. W Robert, Rosenthal, International Journal of Game Theory. 21Robert W Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2(1):65-67, 1973.
Simultaneous iteration method for symmetric matrices. H Rutishauser, Handbook for Automatic Computation. SpringerH Rutishauser. Simultaneous iteration method for symmetric matrices. In Handbook for Automatic Computation, pages 284-302. Springer, 1971.
Optimal unsupervised learning in a single-layer linear feedforward neural network. D Terence, Sanger, Neural Networks. 26Terence D Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 2(6):459-473, 1989.
Learning interpretable disentangled representations using adversarial VAEs. Abouzar Mhd Hasan Sarhan, Nassir Eslami, Shadi Navab, Albarqouni, Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. SpringerMhd Hasan Sarhan, Abouzar Eslami, Nassir Navab, and Shadi Albarqouni. Learning interpretable disentangled representations using adversarial VAEs. In Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data, pages 37-44. Springer, 2019.
Improved approximation algorithms for large matrices via random projections. Tamas Sarlos, 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06). IEEETamas Sarlos. Improved approximation algorithms for large matrices via random projections. In 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), pages 143-152. IEEE, 2006.
A stochastic PCA and SVD algorithm with an exponential convergence rate. Ohad Shamir, Proceedings of the International Conference on Machine Learning. the International Conference on Machine LearningOhad Shamir. A stochastic PCA and SVD algorithm with an exponential convergence rate. In Proceedings of the International Conference on Machine Learning, pages 144-152, 2015.
Convergence of stochastic gradient descent for PCA. Ohad Shamir, Proceedings of the International Conference on Machine Learning. the International Conference on Machine LearningOhad Shamir. Convergence of stochastic gradient descent for PCA. In Proceedings of the International Conference on Machine Learning, pages 257-265, 2016.
Fast stochastic algorithms for SVD and PCA: Convergence properties and convexity. Ohad Shamir, Proceedings of the International Conference on Machine Learning. the International Conference on Machine LearningOhad Shamir. Fast stochastic algorithms for SVD and PCA: Convergence properties and convexity. In Proceedings of the International Conference on Machine Learning, pages 248-256, 2016.
Exponentially convergent stochastic k-PCA without variance reduction. Cheng Tang, Advances in Neural Information Processing Systems. Cheng Tang. Exponentially convergent stochastic k-PCA without variance reduction. In Advances in Neural Information Processing Systems, pages 12393-12404, 2019.
Averaging stochastic gradient descent on Riemannian manifolds. Nilesh Tripuraneni, Nicolas Flammarion, Francis Bach, Michael I Jordan , arXiv:1802.09128arXiv preprintNilesh Tripuraneni, Nicolas Flammarion, Francis Bach, and Michael I Jordan. Averaging stochastic gradient descent on Riemannian manifolds. arXiv preprint arXiv:1802.09128, 2018.
Lipschitz regularity of deep neural networks: analysis and efficient estimation. Aladin Virmaux, Kevin Scaman, Advances in Neural Information Processing Systems. Aladin Virmaux and Kevin Scaman. Lipschitz regularity of deep neural networks: analysis and efficient estimation. In Advances in Neural Information Processing Systems, pages 3835-3844, 2018. |
258,686,472 | Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs | Recent studies have shown that episodic reinforcement learning (RL) is no harder than bandits when the total reward is bounded by 1, and proved regret bounds that have a polylogarithmic dependence on the planning horizon H. However, it remains an open question that if such results can be carried over to adversarial RL, where the reward is adversarially chosen at each episode. In this paper, we answer this question affirmatively by proposing the first horizon-free policy search algorithm. To tackle the challenges caused by exploration and adversarially chosen reward, our algorithm employs (1) a variance-uncertainty-aware weighted least square estimator for the transition kernel; and (2) an occupancy measure-based technique for the online search of a stochastic policy. We show that our algorithm achieves an O (d + log(|S| 2 |A|)) √ K regret with full-information feedback 2 , where d is the dimension of a known feature mapping linearly parametrizing the unknown transition kernel of the MDP, K is the number of episodes, |S| and |A| are the cardinalities of the state and action spaces. We also provide hardness results and regret lower bounds to justify the near optimality of our algorithm and the unavoidability of log |S| and log |A| in the regret bound. 2 Here O(·) hides logarithmic factors of H, K and 1/δ. 1 arXiv:2305.08359v1 [cs.LG] 15 May 2023 with no polynomial dependence on H for both tabular MDPs (Zhang et al., 2021a) and RL with linear function approximation(Zhang et al., 2021b;Kim et al., 2022;Zhou and Gu, 2022). This suggests that episodic RL is no more difficult than contextual bandits (CB), which is equivalent to episodic RL with H = 1 and no state transition. Nevertheless, these horizon-free algorithms are only applicable to learning MDPs where the reward function is either fixed or stochastic, yet in many real-world scenarios, we have cope with the adversarially changing reward(Even-Dar et al., 2009;Yu et al., 2009;Zimin and Neu, 2013). However, little is known about horizon-free RL in adversarial MDPs. Thus, the following question remains open.Can we design near-optimal horizon-free RL algorithms under adversarial reward and unknown transition with function approximation employed?In this paper, we affirmatively answer the question in the setting of linear mixture MDPs with adversarial reward under full-information feedback(Cai et al., 2020;He et al., 2022b). We propose a new algorithm termed Horizon-Free Occupancy-Measure Guided Optimistic Policy Search (HF-O 2 PS). Following Cai et al. (2020); He et al. (2022b), we use online mirror descent (OMD) to update the policies and value-targeted regression (VTR) (Jia et al., 2020; Ayoub et al., 2020) to learn the transition. Nevertheless, we show that the value-function-based mirror descent inevitably introduces the polynomial dependency on the planning horizon H in the regret upper bound. To address this issue, inspired by Rosenberg and Mansour (2019a); Jin et al. (2020a) and Kalagarla et al. (2020), we use occupancy measure as a proxy of the policy and conduct OMD on the occupancy measures to update. Like Jin et al. (2020a), we maintain a confidence set of the transition kernel and utilize constrained OMD to handle the unknown transition. To achieve a horizon-free regret bound, we also extend the high-order moment estimator inZhou and Gu (2022)to stochastic policies and obtain a tighter Bernstein-type confidence set. 
The regret of our algorithm can be upper bounded by O(d √ K + d 2 ) in the first K episodes with high probability, where d is the dimension of the feature mapping. To the best of our knowledge, our algorithm is the first algorithm to achieve horizon-free regret in learning adversarial linear mixture MDPs. Our three main contributions are summarized as follows. • We propose a new algorithm, HF-O 2 PS, for linear mixture MDPs with adversarial reward uniformly bounded by 1/H. Compared to the previous works (e.g., Cai et al. (2020); He et al. (2022b)), HF-O 2 PS use occupancy-measure-based OMD rather than direct policy optimization. HF-O 2 PS also use the high-order moment estimator to further facilitate the learning of the transition kernel.• Our analysis shows that HF-O 2 PS achieves a regret bound O(d √ K + d 2 ), where K is the number of episodes and d is the dimension of the feature mapping. As far as we know, HF-O 2 PS is the first algorithm for adversarial RL achieving a horizon-free regret upper bound.• We also provide hardness results in addition to the regret upper bound. Our first lower bound shows that an unbounded |S| will result in a lower bound asymptotically linear in √ H, which justifies our assumption of |S| < ∞. We also provide a minimax lower bound of Ω(d √ K), which manifests the near optimality of HF-O 2 PS.Notation We denote scalars by lowercase letters, and denote vectors and matrices by lower and uppercase boldface letters respectively. We use [n] to denote the set {1, . . . , n}, and [n] for set | [
208910151,
203902511
] | Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs
Kaixuan Ji
Qingyue Zhao
Jiafan He
Weitong Zhang
¶
Quanquan Gu
Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs
Recent studies have shown that episodic reinforcement learning (RL) is no harder than bandits when the total reward is bounded by 1, and proved regret bounds that have a polylogarithmic dependence on the planning horizon H. However, it remains an open question that if such results can be carried over to adversarial RL, where the reward is adversarially chosen at each episode. In this paper, we answer this question affirmatively by proposing the first horizon-free policy search algorithm. To tackle the challenges caused by exploration and adversarially chosen reward, our algorithm employs (1) a variance-uncertainty-aware weighted least square estimator for the transition kernel; and (2) an occupancy measure-based technique for the online search of a stochastic policy. We show that our algorithm achieves an O (d + log(|S| 2 |A|)) √ K regret with full-information feedback 2 , where d is the dimension of a known feature mapping linearly parametrizing the unknown transition kernel of the MDP, K is the number of episodes, |S| and |A| are the cardinalities of the state and action spaces. We also provide hardness results and regret lower bounds to justify the near optimality of our algorithm and the unavoidability of log |S| and log |A| in the regret bound. 2 Here O(·) hides logarithmic factors of H, K and 1/δ. 1 arXiv:2305.08359v1 [cs.LG] 15 May 2023 with no polynomial dependence on H for both tabular MDPs (Zhang et al., 2021a) and RL with linear function approximation(Zhang et al., 2021b;Kim et al., 2022;Zhou and Gu, 2022). This suggests that episodic RL is no more difficult than contextual bandits (CB), which is equivalent to episodic RL with H = 1 and no state transition. Nevertheless, these horizon-free algorithms are only applicable to learning MDPs where the reward function is either fixed or stochastic, yet in many real-world scenarios, we have cope with the adversarially changing reward(Even-Dar et al., 2009;Yu et al., 2009;Zimin and Neu, 2013). However, little is known about horizon-free RL in adversarial MDPs. Thus, the following question remains open.Can we design near-optimal horizon-free RL algorithms under adversarial reward and unknown transition with function approximation employed?In this paper, we affirmatively answer the question in the setting of linear mixture MDPs with adversarial reward under full-information feedback(Cai et al., 2020;He et al., 2022b). We propose a new algorithm termed Horizon-Free Occupancy-Measure Guided Optimistic Policy Search (HF-O 2 PS). Following Cai et al. (2020); He et al. (2022b), we use online mirror descent (OMD) to update the policies and value-targeted regression (VTR) (Jia et al., 2020; Ayoub et al., 2020) to learn the transition. Nevertheless, we show that the value-function-based mirror descent inevitably introduces the polynomial dependency on the planning horizon H in the regret upper bound. To address this issue, inspired by Rosenberg and Mansour (2019a); Jin et al. (2020a) and Kalagarla et al. (2020), we use occupancy measure as a proxy of the policy and conduct OMD on the occupancy measures to update. Like Jin et al. (2020a), we maintain a confidence set of the transition kernel and utilize constrained OMD to handle the unknown transition. To achieve a horizon-free regret bound, we also extend the high-order moment estimator inZhou and Gu (2022)to stochastic policies and obtain a tighter Bernstein-type confidence set. 
The regret of our algorithm can be upper bounded by O(d √ K + d 2 ) in the first K episodes with high probability, where d is the dimension of the feature mapping. To the best of our knowledge, our algorithm is the first algorithm to achieve horizon-free regret in learning adversarial linear mixture MDPs. Our three main contributions are summarized as follows. • We propose a new algorithm, HF-O 2 PS, for linear mixture MDPs with adversarial reward uniformly bounded by 1/H. Compared to the previous works (e.g., Cai et al. (2020); He et al. (2022b)), HF-O 2 PS use occupancy-measure-based OMD rather than direct policy optimization. HF-O 2 PS also use the high-order moment estimator to further facilitate the learning of the transition kernel.• Our analysis shows that HF-O 2 PS achieves a regret bound O(d √ K + d 2 ), where K is the number of episodes and d is the dimension of the feature mapping. As far as we know, HF-O 2 PS is the first algorithm for adversarial RL achieving a horizon-free regret upper bound.• We also provide hardness results in addition to the regret upper bound. Our first lower bound shows that an unbounded |S| will result in a lower bound asymptotically linear in √ H, which justifies our assumption of |S| < ∞. We also provide a minimax lower bound of Ω(d √ K), which manifests the near optimality of HF-O 2 PS.Notation We denote scalars by lowercase letters, and denote vectors and matrices by lower and uppercase boldface letters respectively. We use [n] to denote the set {1, . . . , n}, and [n] for set
Introduction
Learning in episodic Markov Decision Processes (MDPs) (Altman, 1999;Dann and Brunskill, 2015;Neu and Pike-Burke, 2020) is a key problem in reinforcement learning (RL) (Szepesvári, 2010;Sutton and Barto, 2018), where an agent sequentially interacts with an environment with a fixed horizon length H. Each action a t the agent takes at state s t incurs some reward r(s t , a t ), and takes it into the next state s t+1 . The agent will restart in the same environment after every H time steps. Although the curse of horizon has been deemed as a challenge in episodic RL (Jiang and Agarwal, 2018), a recent line of works have developed near-optimal algorithms to achieve a regret {0, . . . , n − 1}. Given a R d×d Σ 0 and vector x ∈ R d , we denote the vector's L 2 -norm by x 2 and define x Σ = √
x Σx. For two sequences {a n } ∞ n=1 and {b n } ∞ n=1 that are positive, we say a n = O(b n ) if there exists an absolute constant C > 0 such that a n ≤ Cb n holds for all n ≥ 1, and say a n = Ω(b n ) if there exists an absolute constant C > 0 such that a n ≥ Cb n holds for all n ≥ 1. We say a n = Θ(b n ) if both a n = O(b n ) and a n = Ω(b n ) holds. We further use O(·) to hide the polylogarithmic factors. Let 1{·} denote the indicator function, and [x] [a,b] denote the truncation function x · 1{a ≤ x ≤ b} + a · 1{x < a} + b · 1{x > b} where a ≤ b ∈ R, x ∈ R. Let ∆(·) represent the probability simplex over a finite set.
Related Work
RL with linear function approximation To make MDPs with large state space amenable for provable RL, there has been an explosion of works relying on MDP classes with various linear structures (Jiang et al., 2017;Sun et al., 2019;Du et al., 2021;Jin et al., 2021). Among different assumptions made in recent work Wang et al., 2020b;Jin et al., 2020b;Du et al., 2019;Zanette et al., 2020;Ayoub et al., 2020;Jia et al., 2020;Weisz et al., 2021;Zhou et al., 2021;He et al., 2022b;Zhou and Gu, 2022;He et al., 2022a), we consider the linear mixture MDP setting Ayoub et al., 2020;Zhou et al., 2021;Zhang et al., 2021a;He et al., 2022b), where the transition kernel is a linear combination of d given models. More specifically, we focus on the adversarial linear mixture MDP of He et al. (2022b), whose approach is nearly minimax optimal but insufficient to obtain horizon-free regret, with a refined reward assumption. There is also a parallel line of work (Jin et al., 2020b;He et al., 2022a) investigating the linear MDP model of Jin et al. (2020b) with much larger degree of freedom, where the transition function and reward function are linear in a known state-action feature mapping respectively.
Horizon-free RL RL is once believed to be far more harder than contextual bandits. However, recent works has begun to overthrow this long-standing stereotype (Wang et al., 2020a). To achieve a fair comparison with CB, there are two assumptions. One is to assume the total reward is bounded by one in each episode (Jiang and Agarwal, 2018). Under such assumption, It is possible to obtain algorithms with entirely H-independent regret in the tabular setting (Zhang et al., 2021a(Zhang et al., , 2022. Zhang et al. (2021b); Kim et al. (2022); Chen et al. (2022); Zhou and Gu (2022) further proposed horizon-free algorithms for linear mixture MDPs and linear MDPs. Though near-optimal algorithms have been proposed to learn a Dirac policy with O(d √ K +d 2 ) regret under linear function approximation (Zhou and Gu, 2022), and similar regret guarantees with no poly(H) dependency has been established in various settings (Zhang et al., 2020;Tarbouriech et al., 2021;, any of the above work can not even learn a nontrivial categorical policy. Another assumption is to assume that the reward is uniformly bounded by 1/H (Assumption 2, Zhang et al., 2021a). We employ the later assumption to approach MDPs with large state space and adversarial reward and learn stochastic policies in a horizon-free manner.
RL with adversarial reward A long line of works (Even-Dar et al., 2009;Yu et al., 2009;Neu et al., 2010Neu et al., , 2012Zimin and Neu, 2013;Dick et al., 2014;Rosenberg and Mansour, 2019a;Cai et al., 2020;Jin et al., 2020a;Shani et al., 2020;Luo et al., 2021;He et al., 2022b) has studied RL with adversarial reward, where the reward is selected by the environment at the beginning of each episode. To cope with adversarial reward, there are generally two iterative schemes. The first scheme is the policy-optimization-based method (Neu et al., 2010;Cai et al., 2020;Luo et al., 2021;He et al., 2022b), where the policy is updated according to the estimated state-value function directly. Following this spirit, under bandit feedback, Neu et al. (2010) achieves a regret upper bound of O(T 2/3 ) with known transition, and Shani et al. (2020) achieves O( √ S 2 AH 4 K 2/3 ) regret under unknown transition. Under full-information feedback, Cai et al. (2020) establish the first sublinear regret guarantee and POWERS in He et al. (2022b) achieves a near-optimal O(dH √ K) regret for adversarial linear mixture MDPs. The second scheme is occupancy-measure-based method (Zimin and Neu, 2013;Rosenberg and Mansour, 2019a,b;Jin et al., 2020a;Luo et al., 2021;Neu and Olkhovskaya, 2021;Dai et al., 2022). The policy is updated under the guidance of optimization of occupancy measure. In particular, Zimin and Neu (2013)
Preliminaries
We study RL for episodic linear mixture MDPs with adversarial reward. We introduce the definitions and necessary assumptions as follows.
MDPs with adversarial reward
We denote a homogeneous, episodic MDP by a tuple M = M (S, A, H, {r k } k∈ [K] , P), where S is the state space and A is the action space, H is the length of the episode, r k : S × A → [0, 1/H] is the deterministic reward function at the k-th episode, and P(·|·, ·) is the transition kernel from a state-action pair to the next state. r k is adversarially chosen by the environment at the beginning of the k-th episode and revealed to the agent at the end of that episode. A policy π = {π h } H h=1 is a collection of functions π h : S → ∆(A).
Value function and regret
For (s, a) ∈ S × A, we define the action-value function Q π k,h and (state) value function V π k,h as follows:
Q π k,h (s, a) = r k (s, a) + E H h =h+1 r k (s h , a h ) s h = s, a h = a , V π k,h (s) = E a∼π h (·|s) Q π k,h (s, a) , V π k,H+1 (s) = 0.
Here in the definition of Q π k,h , we use E[·] to denote the expectation over the state-action sequences (s h , a h , s h+1 , a h+1 , .., s H , a H ), where s h = s, a h = a and s h +1 ∼ P h (·|s h , a h ), a h +1 ∼ π h +1 (·|s h +1 ) for all h = h, ...H − 1. For simplicity, for any function V : S → R, we denote where V 2 is a shorthand for the function whose value at state s is V (s) 2 . Using this notation, for policy π, we have the following Bellman equality Q π k,h (s, a) = r k (s, a) + [PV π k,h+1 ](s, a). In the online learning setting, the agent determines a policy π k and start from a fixed state s 1 at the beginning of episode k. Then at each stage h ∈ [H], the agent takes an action a h ∼ π k h (·|s k h ) and observes the next state s k h+1 ∼ P(·|s k h , a k h ). For the adversarial reward, the goal of RL is to minimize the expected regret, which is the expected loss of the algorithm relative to the best-fixed policy in hindsight (Cesa-Bianchi and Lugosi, 2006). We denote the optimal policy as π * = sup π K k=1 V π k,1 (s k 1 ). Then we have the following Bellman optimally equation
Q * k,h (s, a) = r k h (s, a) + [P h V * k,h ](s, a), where Q * k,h (s, a), V * k,h (s, a)
are the corresponding optimal action-value function and value function. Thus the expected regret can be written as:
Regret(K) = K k=1 V * k,1 (s k 1 ) − V π k k,1 (s k 1 ) .
In this paper, we focus on achieving a horizon-free bound on Regret(K). Two assumptions are crucial to this end. The first assumption assumes that there is no spiky reward in each episode. The next assumption assumes the transition kernel P enjoys a linear representation w.r.t. a triplet feature mapping. We define the linear mixture MDPs Ayoub et al., 2020;Zhou et al., 2021;Zhou and Gu, 2022) as follows. 1 Assumption 3.2 (Linear mixture MDP). A MDP M = (S, A, H, {r k } k∈ [K] , P) is called an episode B-bounded linear mixture MDP, if there exists a known feature mapping φ(s |s, a) : S × A × S → R d and an unknown vector θ * ∈ R d such that P(s |s, a) = φ(s |s, a), θ * for any state-action-next-state triplet (s, a, s ). We assume θ * 2 ≤ B and for any bounded function V : S → [0, 1] and any (s, a) ∈ S × A, we have ||φ V (s, a)|| 2 ≤ 1, where φ V (s, a) = s ∈S φ(s |s, a)V (s ).
Linear mixture MDPs have the following key properties. For any function V : S → R and any state-action pair (s, a) ∈ S × A, the conditional expectation of V over P(·|s, a) is a linear function of θ * , i.e., [PV ](s, a) = φ V (s, a), θ * . Meanwhile, the conditional variance of V over P(s, a) is quadratic in θ * , i.e., [VV ]
(s, a) = φ V 2 (s, a), θ * − [ φ V (s, a), θ * ] 2 .
Occupancy measure
We introduce the concept of occupancy measure (Altman, 1999;Jin et al., 2020a) as a proxy of the stochastic policy, which will be used in our algorithm design. The occupancy measure z π = {z π h : S × A × S → [0, 1]} H h=1 associated with a stochastic policy π and a transition function P is defined as z π h (s, a, s ; P) = E[1{s h = s, a h = a, s h+1 = s }|π, P].
A reasonable occupancy measure z π must satisfy the following constraints:
1 We inevitably only consider finite S and A due to technical reasons (see Section 5 for details). • Initial distribution for all (s, a) ∈ S × A:
z π 1 (s, a, s ) = π 1 (a|s) 1 {s = s 1 } P(s |s, a). (2019a)). If a set of functions z π = {z π h : S × A × S → [0, 1]} H h=1 satisfies (3.1) and (3.2), then it is a valid occupancy measure. This occupancy measure is associated with the following induced transition function P: We use z * to denote the occupancy measure induced by the optimal fixed-policy in hindsight, π * and the true transition function, P.
P h (s |s, a) = z π h (s, a, s ) s ∈S z π h (s, a, s ) ,(3.
The Proposed Algorithm
In this section, we demonstrate a horizon-free algorithm HF-O 2 PS for learning episodic linear mixture MDPs with adversarial reward. At a high level, in each episode, HF-O 2 PS can be divided into two steps. First, HF-O 2 PS updates the policy based on observed data. After that, HF-O 2 PS uses VTR Ayoub et al., 2020) to learn the linear mixture MDP. To achieve horizonfree, we use occupancy measure guided mirror descent rather than proximal policy optimization to update the policy, and adopt variance-uncertainty-aware linear regression estimator and high-order moment estimator to learn the MDP. Please refer to Section 6 for a detailed discussion on technical issues. H h=1 as uniform distribution and assume r 0 (·, ·) = 0.
2: For m ∈ [M ], set θ 1,m ← 0, Σ 0,H+1,m ← λI, b 0,H+1,m ← 0. Set V 1,H+1 (·) ← 0, C 1 ← {θ : θ ≤ β 1 } 3: for k = 1, . . . , K do 4:
Receive s k 1 .
5:
Set C k ← {θ : Σ 1/2 k,0 (θ − θ k,0 ) 2 ≤ β k }, D k as in (4.1) 6: π k ← Algorithm 2(z k−1 , D k , α) 7:
for h = 1, . . . , H do 8:
Take action a k h ∼ π k h (·|s k h ) and receive next state s k h+1 ∼ P h (·|s k h , a k h )
9:
Observe the adversarial reward function r k (·, ·) 10: end for 11:
for h = H, . . . , 1 do 12: To utilize learned information, we hope that the transition induced by the new occupancy measure is close to our estimation. Given the confidence set of θ * at the beginning of k-th episode, C k (Line 5, Algorithm 1), we construct the feasible domain of occupancy measure D k such that for all occupancy lies in D k , the transition it induced lies in the confidence set C k .
Set Q k,h (·, ·) ← r k (·, ·) + θ k,0 , φ V k,h+1 (·, ·) + β k Σ −1/2 k,0 φ V k,h+1 (·, ·) 2 [0,1] 13: Set V k,h (·) ← E a∼π k h (·|·) [Q k,h (·,For m ∈ [M ], denote φ k,h,m = φ V 2 m k,h+1 (s k h , a k h ). 18: Set {σ k,h,m } m∈[M ] ←Algorithm 3({φ k,h,m , θ k,m , Σ k,h,m , Σ k,m } m∈[M ] , β k , ξ,
Definition 4.1. Given the confidence set C_k of the parameter θ*, we define the feasible occupancy measure set D_k ⊆ R^{|S|²|A|} as follows:

D_k = { z_h(·,·,·) ∈ R^{|S|²|A|}, h ∈ [H] :
  z_h(·,·,·) ≥ 0;
  Σ_{a,s'} z_h(s, a, s') = Σ_{s',a} z_{h−1}(s', a, s), ∀(s, h) ∈ S × [2 : H];
  Σ_{a,s'} z_1(s, a, s') = 1{s = s_1}, ∀s ∈ S;
  ∀(s, a, h) ∈ S × A × [H] s.t. Σ_{y∈S} z_h(s, a, y) > 0, ∃θ̄_{s,a,h,k} ∈ C_k s.t.
    z_h(s, a, ·) / Σ_{y∈S} z_h(s, a, y) = ⟨θ̄_{s,a,h,k}, φ(·|s, a)⟩ }.   (4.1)
Algorithm 3 High-order moment estimator (HOME) (Zhou and Gu, 2022)
Require: Features {φ_{k,h,m}}_{m∈[M]}, vector estimators {θ_{k,m}}_{m∈[M]}, covariance matrices {Σ_{k,m}}_{m∈[M]} and {Σ_{k,h,m}}_{m∈[M]}, confidence radius β_k, ξ, γ
1: for m = 0, . . . , M − 2 do
2:  Set [V̄_{k,m} V^{2^m}_{k,h+1}](s^k_h, a^k_h) ← [⟨φ_{k,h,m+1}, θ_{k,m+1}⟩]_{[0,1]} − [⟨φ_{k,h,m}, θ_{k,m}⟩]²_{[0,1]}
3:  Set E_{k,h,m} ← min{1, 2β_k‖φ_{k,h,m}‖_{Σ^{−1}_{k,m}}} + min{1, β_k‖φ_{k,h,m+1}‖_{Σ^{−1}_{k,m+1}}}
4:  Set σ̄²_{k,h,m} ← max{[V̄_{k,m} V^{2^m}_{k,h+1}](s^k_h, a^k_h) + E_{k,h,m}, ξ², γ²‖φ_{k,h,m}‖_{Σ^{−1}_{k,h,m}}}
5: end for
6: Set σ̄²_{k,h,M−1} ← max{1, ξ², γ²‖φ_{k,h,M−1}‖_{Σ^{−1}_{k,h,M−1}}}
Ensure: {σ̄_{k,h,m}}_{m∈[M]}
Remark 4.2. In Definition 4.1, the second and third constraints follow (3.2) and (3.1), which together imply that the total probability mass of every z_h, i.e., Σ_{s,a,s'∈S×A×S} z_h(s, a, s'), is 1. Under linear function approximation, we want the induced transition to be generated by φ(·|·,·), which is enforced by the last constraint.
The following lemma shows that these domains are convex.
Lemma 4.3. For all k ∈ [K], D k is a convex set.
Since z^k_h(·,·,·) can be viewed as a probability distribution, we choose the standard mirror map Φ for the probability simplex, defined as follows:

Φ(z) = Σ_{h=1}^H Σ_{s,a,s'} z_h(s, a, s') (log z_h(s, a, s') − 1).   (4.2)
We also define the corresponding Bregman divergence D Φ :
D_Φ(x, y) = Φ(x) − Φ(y) − ⟨x − y, ∇Φ(y)⟩.
The following lemma shows that our mirror map is 1/H-strongly convex.
Lemma 4.4. Φ is 1/H-strongly convex on the "simplex" of occupancy measure with respect to · 1 , thus strongly convex on D k .
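To make the mirror map concrete, here is a small sketch (ours; the shapes are assumptions) that implements Φ and D_Φ for occupancy measures and numerically checks the 1/H-strong-convexity bound of Lemma 4.4:

```python
import numpy as np

rng = np.random.default_rng(2)
H, S, A = 4, 3, 2

def rand_occupancy():
    z = rng.random((H, S, A, S))
    return z / z.sum(axis=(1, 2, 3), keepdims=True)   # each layer sums to 1

def Phi(z):
    # Negative-entropy mirror map (4.2)
    return np.sum(z * (np.log(z) - 1.0))

def bregman(z, w):
    # D_Phi(z, w) = Phi(z) - Phi(w) - <z - w, grad Phi(w)>, with grad Phi(w) = log w
    return Phi(z) - Phi(w) - np.sum((z - w) * np.log(w))

z, w = rand_occupancy(), rand_occupancy()
lhs = bregman(z, w)                       # equals the sum of layer-wise KLs here
rhs = np.abs(z - w).sum() ** 2 / (2 * H)  # Pinsker-style lower bound (Lemma 4.4)
assert lhs >= rhs - 1e-12
```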
The basic idea of updating z k is to minimize the trade-off between the value-loss and the distance from the occupancy measure of last episode. Formally we have:
z k = arg min z∈D k α z k−1 , r k−1 + D Φ (z, z k−1 ), (4.3)
where α is the learning rate and the inner product is defined as follows:
⟨z, r⟩ = Σ_{(s,a,s',h) ∈ S×A×S×[H]} z_h(s, a, s') r(s, a).

Following Rosenberg and Mansour (2019a), we split (4.3) into the two-step optimization at Lines 2 and 3 of Algorithm 2. Then, by Lemma 3.3, we update the policy as follows:

π^k_h(a|s) = Σ_{s'} z^k_h(s, a, s') / Σ_{a',s'} z^k_h(s, a', s').

Algorithm 2 Mirror Descent on Occupancy Measure
Require: The occupancy measure z^{k−1} of the last iteration, constraint set D_k, learning rate α
1: for (h, s, a, s') ∈ [H] × S × A × S do
2:  Set w^k_h(s, a, s') ← z^{k−1}_h(s, a, s') exp{α r^{k−1}_h(s, a)}
3: Set z^k ← argmin_{z∈D_k} D_Φ(z, w^k)
4: Set π^k_h(a|s) ← Σ_x z^k_h(s, a, x) / Σ_{a',y} z^k_h(s, a', y)
5: end for
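The sketch below (illustrative only) mimics the two-step update of Algorithm 2. Note that the true Line 3 is a Bregman projection onto D_k (handled by Dykstra's algorithm in Appendix D); here we substitute a simple per-layer normalization, which is the exact KL projection onto the simplex but ignores the flow and confidence-set constraints:

```python
import numpy as np

rng = np.random.default_rng(3)
H, S, A, alpha = 4, 3, 2, 0.5

z_prev = rng.random((H, S, A, S))
z_prev /= z_prev.sum(axis=(1, 2, 3), keepdims=True)   # previous occupancy measure
r_prev = rng.random((H, S, A))                        # r^{k-1}_h(s, a)

w = z_prev * np.exp(alpha * r_prev)[..., None]        # Line 2 of Algorithm 2
z = w / w.sum(axis=(1, 2, 3), keepdims=True)          # simplified stand-in for Line 3

# Line 4: extract the policy pi^k_h(a|s) from the occupancy measure.
marg = z.sum(axis=3)                                  # sum over s'
pi = marg / marg.sum(axis=2, keepdims=True)           # normalize over actions
assert np.allclose(pi.sum(axis=2), 1.0)
```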
For the sake of simplicity, we use V̄_{k,1}(s_1) to denote Σ_{h,s,a,s'} z_h(s, a, s') r(s, a), which is the optimistic expected total reward given by the occupancy measure. After obtaining π^k, HF-O²PS chooses actions a^k_h based on the new policy π^k_h and observes the whole reward function r_k at the end of the episode.

Implementation detail of Line 3. Line 3 of Algorithm 2 is a Bregman projection step onto D_k, and whether it can be implemented efficiently is not obvious at first glance. Although such a projection cannot be formulated as a linear program, we can show that D_k is an intersection of convex sets with explicit linear or quadratic forms, over which the Bregman projection onto convex sets can be implemented efficiently by Dykstra's algorithm. Please refer to Appendix D for a detailed discussion.
VTR with high-order moment estimation
The second phase of HF-O²PS is to estimate the transition model ⟨φ, θ*⟩ and evaluate the policy π^k. In this step, we construct a variance-uncertainty-aware weighted least squares estimator (Zhou and Gu, 2022) and explicitly estimate higher moments of P (Zhang et al., 2021b; Zhou and Gu, 2022), which are polynomials of θ* under Assumption 3.2.
Concretely, we first compute the optimistic estimation of Q π k h (resp. V π k h ), Q k,h (resp. V k,h ), in a backward manner. Specifically, HF-O 2 PS computes the optimistic Q k,h and V k,h as:
Q_{k,h}(·,·) = [r_k(·,·) + ⟨θ_{k,0}, φ_{V_{k,h+1}}(·,·)⟩ + β_k‖Σ^{−1/2}_{k,0} φ_{V_{k,h+1}}(·,·)‖₂]_{[0,1]},
V_{k,h}(·) = E_{a∼π^k_h(·|·)}[Q_{k,h}(·, a)],
where θ k,0 is the 0-th estimator of θ * , Σ k,0 is the covariance matrix and β k is the radius of the confidence set defined as:
β_k = 12 √(d log(1 + kH/(ξ²dλ)) · log(32(log(γ²/ξ) + 1)k²H²/δ)) + 30 log(32(log(γ²/ξ) + 1)k²H²/δ)/γ² + √λ B.   (4.4)
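A minimal sketch of this optimistic backward pass, under assumed tabular shapes; beta, Sigma and the features below are stand-ins for the paper's estimated quantities, not a faithful implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
S, A, H, d, beta = 4, 3, 5, 3, 0.2

phi = rng.random((S, A, S, d)); phi /= phi.sum(axis=(2, 3), keepdims=True)
theta_hat = rng.random(d)                         # current estimate of theta*
M = rng.random((d, d))
Sigma_inv = np.linalg.inv(np.eye(d) + M @ M.T)    # inverse covariance (PSD)
r = rng.random((H, S, A)) / H                     # rewards obeying r <= 1/H
pi = rng.random((H, S, A)); pi /= pi.sum(axis=2, keepdims=True)

V = np.zeros(S)                                   # V_{k,H+1} = 0
for h in reversed(range(H)):
    phi_V = np.einsum("satk,t->sak", phi, V)      # phi_{V_{k,h+1}}(s, a)
    # Exploration bonus beta * ||Sigma^{-1/2} phi_V||_2 = beta * sqrt(phi_V' Sigma^{-1} phi_V)
    bonus = beta * np.sqrt(np.einsum("sak,kl,sal->sa", phi_V, Sigma_inv, phi_V))
    Q = np.clip(r[h] + phi_V @ theta_hat + bonus, 0.0, 1.0)   # [.]_{[0,1]}
    V = (pi[h] * Q).sum(axis=1)                   # expectation over a ~ pi_h(.|s)
```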
Then we estimate θ* via a weighted regression problem with predictor φ_{k,h,0} = φ_{V_{k,h+1}}(s^k_h, a^k_h) against response V_{k,h+1}(s^k_{h+1}). Specifically, θ_{k,0} is the solution to the VTR problem:
argmin_θ λ‖θ‖₂² + Σ_{j=1}^{k−1} Σ_{h=1}^H [⟨φ_{j,h,0}, θ⟩ − V_{j,h+1}(s^j_{h+1})]² / σ̄²_{j,h,0},

where the weight σ̄²_{j,h,0} is a high-probability upper bound on the conditional variance [VV_{j,h+1}](s^j_h, a^j_h). In detail, if [VV_{k,h+1}](s^k_h, a^k_h) could be computed exactly for a function V, we would set

σ̄²_{k,h,0} = max{[VV_{k,h+1}](s^k_h, a^k_h), ξ², γ²‖φ_{k,h,0}‖_{Σ^{−1}_{k,h,0}}},   (4.5)

where [VV_{k,h+1}](s^k_h, a^k_h) is the variance-aware term and γ²‖φ_{k,h,0}‖_{Σ^{−1}_{k,h,0}} is the uncertainty-aware term. However, since the true transition P is unknown, the true conditional variance is not available, so we replace [VV_{k,h+1}](s^k_h, a^k_h) in (4.5) with [V̄V_{k,h+1}](s^k_h, a^k_h) + E_{k,h,0}. Here E_{k,h,0} (Line 3 in Algorithm 3) is an error bound such that [V̄V_{k,h+1}](s^k_h, a^k_h) + E_{k,h,0} ≥ [VV_{k,h+1}](s^k_h, a^k_h) with high probability, and [V̄V_{k,h+1}](s^k_h, a^k_h) (Line 2 in Algorithm 3) is designed as [⟨φ_{k,h,1}, θ_{k,1}⟩]_{[0,1]} − [⟨φ_{k,h,0}, θ_{k,0}⟩]²_{[0,1]}, where φ_{k,h,1} = φ_{V²_{k,h+1}}(s^k_h, a^k_h) and θ_{k,1} is the solution to the σ̄²_{k,h,1}-weighted regression problem with predictor φ_{k,h,1} against response V²_{k,h+1}(s^k_{h+1}). Similar to the estimation procedure for θ_{k,0}, we set σ̄²_{k,h,1} based on [V̄V²_{k,h+1}](s^k_h, a^k_h) + E_{k,h,1}, which is an upper bound on [VV²_{k,h+1}](s^k_h, a^k_h) with high probability. Repeating this process, we recursively estimate the conditional 2^m-th moments of V_{k,h+1} via their variances in Algorithm 3, which we dub the high-order moment estimator.
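The core of each VTR step is a variance-weighted ridge regression, whose closed form matches the Σ and b updates in Algorithm 1. A self-contained sketch follows; the weights sigma_i below are placeholders for the σ̄_{k,h,m} produced by Algorithm 3, and the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
d, n, lam = 3, 200, 1.0

theta_star = rng.random(d)
phis = rng.random((n, d))
sigmas = 0.1 + rng.random(n)                       # per-sample weight proxies
ys = phis @ theta_star + sigmas * rng.standard_normal(n)

# theta = argmin lam*||theta||^2 + sum_i (<phi_i, theta> - y_i)^2 / sigma_i^2
Sigma = lam * np.eye(d)
b = np.zeros(d)
for phi, y, sigma in zip(phis, ys, sigmas):        # online rank-one updates
    Sigma += np.outer(phi, phi) / sigma**2
    b += phi * y / sigma**2
theta_hat = np.linalg.solve(Sigma, b)

print(np.linalg.norm(theta_hat - theta_star))      # small for large n
```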
Main Results
Regret upper bound for HF-O 2 PS
We first provide the regret bound for HF-O 2 PS.
Theorem 5.1. Set M = log₂(4KH), ξ = d/(KH), γ = 1/d^{1/4}, λ = d/B² and α = H/√K. For any δ > 0, with probability at least 1 − (3M + 2)δ, Algorithm 1 yields a regret bounded as follows:

Regret(K) = Õ((d + √(log(|S|²|A|))) √K + d²).   (5.1)

Remark 5.2. By omitting the logarithmic terms in (5.1), HF-O²PS achieves a horizon-free regret upper bound Õ(d√K + d²). Our regret bound is better than the Õ((H + d)√K + d²H) bound obtained by He et al. (2022b) when H = Ω(log|S|). Additionally, compared with the HF-UCRL-VTR⁺ algorithm proposed by Zhou and Gu (2022) for episodic linear mixture MDPs with fixed rewards, HF-O²PS provides robustness against adversarial rewards while keeping its regret upper bounded by Õ(d√K + d²).
Hardness Results
We also provide two regret lower bounds. The next theorem gives a regret lower bound of MDPs with known transition and adversarial reward.
Theorem 5.3. When H = 2H̄ for a positive integer H̄, for any algorithm and any given nonempty action space A, there exists an MDP satisfying Assumptions 3.1 and 3.2 with d = 1 and |S| = Θ(|A|^H) such that

lim_{H→∞} lim_{K→∞} E[Regret(K)] / √(HK log|A|) ≥ c₁ = 1/√2,   lim_{H→∞} lim_{K→∞} E[Regret(K)] / √(K log|S|) ≥ c₂ = 1/(2√2).
Remark 5.4. Theorem 5.3 indicates that even if the estimation error I₂ in (6.1) disappears, i.e., we are in the "learning-free" setting, the adversarial environment alone can introduce a √H dependency asymptotically when S is sufficiently large. Therefore, we can only expect a horizon-free algorithm whose regret upper bound depends at least on log|S|, log|A|, d, and K.
The following theorem provides another regret lower bound of learning homogeneous linear mixture MDPs with adversarial reward.
Theorem 5.5. Let B > 1 and K > max{3d², (d − 1)/(192(B − 1))}. For any algorithm, there exists a B-bounded adversarial MDP satisfying Assumptions 3.1 and 3.2 such that the expected regret E[Regret(K)] is lower bounded by d√K/(16√3).
Remark 5.6. Theorem 5.5 shows that when K is large enough, any algorithm for adversarial MDPs satisfying Assumptions 3.1 and 3.2 incurs regret at least Ω(d√K). Moreover, the regret lower bound in Theorem 5.5 matches the regret upper bound in Theorem 5.1, which suggests that HF-O²PS is near-optimal.
Proof Overview
In this section, we provide the proof sketch of Theorem 5.1 and illustrate the key technical issues.
Proof sketch of Theorem 5.1. First, we have the regret decomposition:
Σ_{k=1}^K [V*_{k,1}(s_1) − V^{π_k}_1(s_1)] = Σ_{k=1}^K [V*_{k,1}(s_1) − V̄_{k,1}(s_1)] (I₁) + Σ_{k=1}^K [V_{k,1}(s_1) − V^{π_k}_1(s_1)] (I₂) + Σ_{k=1}^K [V̄_{k,1}(s_1) − V_{k,1}(s_1)] (I₃).   (6.1)
Bounding I₁. I₁ is the regret of the policy update. By the standard regret analysis of OMD, the regret on a probability simplex is bounded by O(L√K), where L is an upper bound on the gradients and K is the number of iterations. In MDPs, we make H decisions in each episode, so the policy update can be seen as running mirror descent simultaneously on H simplexes, and the total regret is the sum of the regrets over the individual simplexes. Consequently, the regret upper bound is roughly O(HL̄√K), where L̄ is the average upper bound of the gradients over all the simplexes. In OPPO (Cai et al., 2020) and POWERS (He et al., 2022b), the policy is updated via proximal policy optimization: π^k_h(a|s) ∝ π^{k−1}_h(a|s) exp{αQ_{k−1,h}(s, a)}. Hence the gradients are Q_{k−1,h}(s, a), which, after averaging over h ∈ [H], give L̄ = O(1) and consequently a regret bound of O(H√K). To address this issue, we instead use r_k as the gradients, which is enabled by introducing an occupancy measure. By Assumption 3.1, the standard regret analysis of OMD then yields I₁ = Õ(√K).
Bounding I 2 . I 2 can be further decomposed into three major terms, the sum of bonus, transition noise and policy noise. Roughly, we have:
I₂ = Γ + Σ_{k=1}^K Σ_{h=2}^H [PV_{k,h}(s^k_{h−1}, a^k_{h−1}) − V_{k,h}(s^k_h)] (ii) transition noise
 + Σ_{k=1}^K Σ_{h=2}^H [E_{a∼π^k_h(·|s^k_h)}[Q_{k,h}(s^k_h, a)] − Q_{k,h}(s^k_h, a^k_h)] (iii) policy noise
 + Σ_{k=1}^K Σ_{h=1}^H [Q_{k,h}(s^k_h, a^k_h) − r(s^k_h, a^k_h) − PV_{k,h+1}(s^k_h, a^k_h)] bonus terms,
where Γ is defined as follows and can be bounded by Õ(√K) using the Azuma-Hoeffding inequality:
Γ = Σ_{k=1}^K [E_{a∼π^k_1(·|s^k_1)}[Q_{k,1}(s^k_1, a) | s^k_1] − Q_{k,1}(s^k_1, a^k_1)] + Σ_{k=1}^K [Σ_{h=1}^H r(s^k_h, a^k_h) − V^{π_k}_1(s^k_1)].
The standard estimation of the bonus term is to bound it with the total variance of transition noise (He et al., 2022b;Zhou et al., 2021) and then use total variance lemma (Jin et al., 2018, Lemma C.5). However, in our case, a naive adaptation of He et al. (2022b, Lemma 6.4) and total variance lemma results in an upper bound with √ KH-dependence. Also, the transition noise and policy noise can only be bounded using standard concentration inequalities, which results in another √ KH term.
To address these issues, we apply both a variance-uncertainty-aware linear regression estimator and a high-order moment estimator, which enable us to bound the bonus term and the transition noise recursively as in Zhou and Gu (2022). The biggest challenge is to tackle the randomness in π^k(·|·), i.e., term (iii), which would yield an upper bound of O(√(KH)) if we simply applied the Azuma-Hoeffding inequality. We follow the procedure for bounding (ii) in Zhou and Gu (2022), where the transition noise of order m is first bounded by the sum of the conditional variances [VV^{2^m}_{k,h}](s^k_{h−1}, a^k_{h−1}) using a martingale concentration inequality. The key step is then to bound the conditional variance by the higher-order transition noise as follows:
[VV^{2^m}_{k,h}](s^k_{h−1}, a^k_{h−1}) ≤ X(m) + [PV^{2^{m+1}}_{k,h}](s^k_{h−1}, a^k_{h−1}) − V^{2^{m+1}}_{k,h}(s^k_h) (transition noise of higher order) + V^{2^{m+1}}_{k,h}(s^k_h) − Q^{2^{m+1}}_{k,h}(s^k_h, a^k_h) (*),   (6.2)
where X(m) only depends on m, and the second term on the right-hand side is exactly the transition noise of the higher-order value function. For an argmax policy, (*) in (6.2) is 0, which indicates that the total variance can be bounded by the martingale differences of higher order. For the policy noise term, which does not appear in Zhou and Gu (2022), we first bound the martingale by the sum of its conditional variances. Then, we have:
E_{a∼π^k_h(·|s^k_h)}[Q^{2^{m+1}}_{k,h}(s^k_h, a)] − E²_{a∼π^k_h(·|s^k_h)}[Q^{2^m}_{k,h}(s^k_h, a)]
= E_{a∼π^k_h(·|s^k_h)}[Q^{2^{m+1}}_{k,h}(s^k_h, a)] − Q^{2^{m+1}}_{k,h}(s^k_h, a^k_h) + Q^{2^{m+1}}_{k,h}(s^k_h, a^k_h) − E²_{a∼π^k_h(·|s^k_h)}[Q^{2^m}_{k,h}(s^k_h, a)].   (6.3)
When the policy is stochastic, (6.2) still holds. Combining (6.2) with (6.3), we obtain the following:
E_{a∼π^k_h(·|s^k_h)}[Q^{2^{m+1}}_{k,h}(s^k_h, a)] − E²_{a∼π^k_h(·|s^k_h)}[Q^{2^m}_{k,h}(s^k_h, a)] + [VV^{2^m}_{k,h}](s^k_{h−1}, a^k_{h−1})
≤ X(m) + [PV^{2^{m+1}}_{k,h}](s^k_{h−1}, a^k_{h−1}) − V^{2^{m+1}}_{k,h}(s^k_h) (◇)
 + E_{a∼π^k_h(·|s^k_h)}[Q^{2^{m+1}}_{k,h}(s^k_h, a)] − Q^{2^{m+1}}_{k,h}(s^k_h, a^k_h) (*)
 + V^{2^{m+1}}_{k,h}(s^k_h) − E²_{a∼π^k_h(·|s^k_h)}[Q^{2^m}_{k,h}(s^k_h, a)] (**),
where (**) ≤ 0. This is nearly the same as (6.2) except for (**). Therefore, if we view the transition noise (◇) and the policy noise (*) as a single martingale, it can be bounded by the total noise of higher order exactly as in (6.2). The rest of the HOME framework in Zhou and Gu (2022) can be adapted without difficulty and yields an upper bound of Õ(d√K + d²).
Bounding I₃. I₃ is the gap between the optimistic value function derived from the occupancy-measure-guided policy update and the one derived from backward propagation (Line 13 of Algorithm 1). By Lemma 3.3, for each k ∈ [K] the occupancy measure {z^k_h}_{h=1}^H induces a new MDP and policy. Then z^k ∈ D_k implies that the induced transition still lies in the confidence set, and thus the corresponding value can also be bounded by Q_{k,h}(·, ·) and V_{k,h}(·). Formally, we have the following lemma:
Lemma 6.1. For all k ∈ [K], let V̄_{k,1}(s_1) be the optimistic value function given by the occupancy measure and V_{k,1}(s_1) the value function computed by backward propagation (Line 13). We have V̄_{k,1}(s_1) ≤ V_{k,1}(s_1), and thus I₃ ≤ 0.
Finally, combining the upper bounds of all three terms finishes our proof.
Conclusion
In this work, we considered learning homogeneous linear mixture MDPs with adversarial rewards. We proposed a new algorithm based on occupancy measures and a high-order moment estimator, and we showed that HF-O²PS achieves the near-optimal regret upper bound Õ(d√K + d²). To the best of our knowledge, our algorithm is the first horizon-free algorithm in this setting. Currently, our result requires the uniformly bounded reward assumption, i.e., Assumption 3.1; extending it to horizon-free algorithms that only require the total reward of each episode to be bounded by 1 is left as future work.
A Proof of Lemmas in Section 4 and Section 6
A.1 Proof of Lemma 4.3
Proof of Lemma 4.3. First, it is easy to verify that C_k is a convex set. Consider two occupancy measures z and w satisfying the constraints, and let t = (z + w)/2. It is easy to verify that t satisfies the other constraints, so we only need to show that t satisfies the last (linear-representation) constraint. Fix (s, a, h) ∈ S × A × [H]. If Σ_{s'∈S} z_h(s, a, s') = 0 or Σ_{s'∈S} w_h(s, a, s') = 0, then t obviously satisfies the last constraint, so we consider the case Σ_{s'∈S} z_h(s, a, s') > 0 and Σ_{s'∈S} w_h(s, a, s') > 0. In this case, there exist θ̄^z_{s,a,h,k}, θ̄^w_{s,a,h,k} ∈ C_k such that for all s' ∈ S,

z_h(s, a, s') / Σ_{y∈S} z_h(s, a, y) = ⟨θ̄^z_{s,a,h,k}, φ(s'|s, a)⟩,  w_h(s, a, s') / Σ_{y∈S} w_h(s, a, y) = ⟨θ̄^w_{s,a,h,k}, φ(s'|s, a)⟩.

Then, setting α_{s,a,h} = Σ_{y∈S} z_h(s, a, y) / Σ_{y∈S} [z_h(s, a, y) + w_h(s, a, y)], for any fixed s' we have

t_h(s, a, s') / Σ_{y∈S} t_h(s, a, y) = α_{s,a,h} · z_h(s, a, s') / Σ_{y∈S} z_h(s, a, y) + (1 − α_{s,a,h}) · w_h(s, a, s') / Σ_{y∈S} w_h(s, a, y)
= ⟨α_{s,a,h} θ̄^z_{s,a,h,k} + (1 − α_{s,a,h}) θ̄^w_{s,a,h,k}, φ(s'|s, a)⟩ =: ⟨θ̄^t_{s,a,h,k}, φ(s'|s, a)⟩.

Since C_k is convex, we have θ̄^t_{s,a,h,k} ∈ C_k, which completes the proof.
A.2 Proof of Lemma 4.4
Proof. Let z and w be two occupancy measures. Then we have

D_Φ(z‖w) = Σ_{h=1}^H Σ_{s,a,s'} z_h(s, a, s') log(z_h(s, a, s')/w_h(s, a, s')) ≥ (1/2) Σ_{h=1}^H ‖z_h − w_h‖₁² ≥ (1/2H) (Σ_{h=1}^H ‖z_h − w_h‖₁)² = (1/2H) ‖z − w‖₁²,

where the first inequality holds due to Pinsker's inequality and the second inequality holds due to the Cauchy-Schwarz inequality.
A.3 Proof of Lemma 6.1
Proof of Lemma 6.1. Given a set of occupancy measures, we define the respective transition as follows:

p̄^k_h(s'|s, a) = ⟨θ̄_{s,a,h,k}, φ(s'|s, a)⟩ = z^k_h(s, a, s') / Σ_{s''} z^k_h(s, a, s''), ∀(s, a, h) ∈ S × A × [H] s.t. Σ_{s''} z^k_h(s, a, s'') > 0.

Let us consider another MDP M̄_k = (S, A, H, {r_h}, {P̄_{k,h,s,a}}), where the state space, action space, horizon length and reward functions are the same as in the true MDP M, and P̄_{k,h,s,a}(·|·, ·) = p̄^k_h(·|·, ·). Our new MDP is a tabular one and its transition kernel differs from that of M. Consider running the first inner loop of our algorithm (Lines 11-14): since M and M̄_k share the same reward function, and the remaining terms do not depend on the true transition, the results on the two MDPs are the same. For the sake of simplicity, we (recursively) define the value functions on the imaginary MDP M̄_k:

V̄_{k,H+1}(s) = 0,  Q̄_{k,h}(s, a) = r_h(s, a) + ⟨θ̄_{s,a,h,k}, φ_{V̄_{k,h+1}}(s, a)⟩,  V̄_{k,h}(s) = E_{a∼π^k_h(a|s)}[Q̄_{k,h}(s, a)].

Then it is easy to verify that the V̄_{k,1}(s_1) computed from the occupancy measure coincides with the one computed in this way. We now prove the claim by induction. The conclusion trivially holds for n = H + 1. Suppose the statement holds for n = h + 1; then for n = h and each (s, a), since Q̄_{k,h}(s, a) ≤ 1, if Q_{k,h}(s, a) = 1 the proof is finished. Otherwise we have

Q_{k,h}(s, a) − Q̄_{k,h}(s, a) ≥ ⟨θ_{k,0}, φ_{V_{k,h+1}}(s, a)⟩ + β_k‖Σ^{−1/2}_{k,0} φ_{V_{k,h+1}}(s, a)‖₂ − ⟨θ̄_{s,a,h,k}, φ_{V_{k,h+1}}(s, a)⟩
= ⟨θ_{k,0} − θ̄_{s,a,h,k}, φ_{V_{k,h+1}}(s, a)⟩ + β_k‖Σ^{−1/2}_{k,0} φ_{V_{k,h+1}}(s, a)‖₂
≥ β_k‖Σ^{−1/2}_{k,0} φ_{V_{k,h+1}}(s, a)‖₂ − ‖Σ^{1/2}_{k,0}(θ_{k,0} − θ̄_{s,a,h,k})‖₂ ‖Σ^{−1/2}_{k,0} φ_{V_{k,h+1}}(s, a)‖₂
≥ 0,

where the first inequality holds by the inductive hypothesis, the second inequality holds due to the Cauchy-Schwarz inequality, and the third holds because θ̄_{s,a,h,k} ∈ C_k. By induction, we finish the proof.
B Proof of the Main Result
In this section, we provide the proof of Theorem 5.1. First, we define the σ-algebras generated by the random variables representing the transition noise and the stochastic policy noise. For k ∈ [K], h ∈ [H], let F_{k,h} denote the σ-algebra generated by all states and actions up to and including (s^k_h, a^k_h), i.e., by s¹₁, a¹₁, . . . , s¹_H, a¹_H, . . . , s^k_1, a^k_1, . . . , s^k_h, a^k_h, and let G_{k,h} denote the σ-algebra generated by the same sequence up to the state s^k_h (excluding a^k_h). For any (k, h) ∈ [K] × [H] and function f : S × A → R, we also define, for simplicity,

[J^k_h f](s) = E_{a∼π^k_h(·|s)}[f(s, a) | s].   (B.1)
B.1 Lemmas for self-concentration Martingales
In this section, we provide two results of self-concentration martingales, which are key to our proof.
Lemma B.1 (Lemma B.1, Zhou and Gu 2022). Let {σ_k, β_k}_{k≥1} be a sequence of non-negative numbers, ξ, γ > 0, and {x_k}_{k≥1} ⊂ R^d with ‖x_k‖₂ ≤ L. Let {Z_k}_{k≥1} and {σ̄_k}_{k≥1} be inductively defined in the following way: Z₁ = λI,

∀k ≥ 1, σ̄_k = max{σ_k, ξ, γ‖x_k‖^{1/2}_{Z_k^{−1}}},  Z_{k+1} = Z_k + x_k x_k^⊤ / σ̄²_k.

Let ι = log(1 + KL²/(dλξ²)). Then we have

Σ_{k=1}^K min{1, β_k‖x_k‖_{Z_k^{−1}}} ≤ 2dι + 2 max_{k∈[K]} β_k γ²dι + 2√(dι) · √(Σ_{k=1}^K β²_k(σ²_k + ξ²)).
Following Zhou and Gu (2022), we first prove that the vector θ* lies in the sequence of confidence sets, which implies that the estimates we obtain via the occupancy measure are optimistic and that the high-order moment estimates are close to their true values.

Lemma B.2 (Lemma C.1, Zhou and Gu 2022). Set {β_k}_{k≥1} as in (4.4). Then, with probability at least 1 − Mδ, we have for any k ∈ [K], h ∈ [H], m ∈ [M],

‖Σ^{1/2}_{k,m}(θ_{k,m} − θ*)‖₂ ≤ β_k,  |[V̄_{k,m} V^{2^m}_{k,h+1}](s^k_h, a^k_h) − [VV^{2^m}_{k,h+1}](s^k_h, a^k_h)| ≤ E_{k,h,m}.
Let E_B.2 denote the event described by Lemma B.2. The following lemma provides a high-probability bound on the estimation error terms.

Lemma B.3. On the event E_B.2, we have for any k ∈ [K], h ∈ [H],

Q_{k,h}(s^k_h, a^k_h) − r(s^k_h, a^k_h) − [PV_{k,h+1}](s^k_h, a^k_h) ≤ 2 min{1, β_k‖Σ^{−1/2}_{k,0} φ_{k,h,0}‖₂}.
Proof of Lemma B.3. The proof is almost the same as that of Lemma C.4 in Zhou and Gu (2022); one only needs to replace V_{k,h}(s^k_h) with Q_{k,h}(s^k_h, a^k_h).
B.2 Recursive bounds for stochastic policy
For any k ∈ [K], h ∈ [H], we define the indicator function I^k_h as

I^k_h := 1{∀m ∈ [M], det(Σ^{−1/2}_{k,m}) / det(Σ^{−1/2}_{k,h,m}) ≤ 4},

where I^k_h is clearly G_{k,h}-measurable and monotonically decreasing. For all m ∈ [M], we also define the following quantities:
R_m = Σ_{k=1}^K Σ_{h=1}^H I^k_h min{1, β_k‖Σ^{−1/2}_{k,m} φ_{k,h,m}‖₂},   (B.2)

A_m = Σ_{k=1}^K Σ_{h=1}^H I^k_h ([PV^{2^m}_{k,h+1}](s^k_h, a^k_h) − V^{2^m}_{k,h+1}(s^k_{h+1}) + [J^k_{h+1} Q^{2^m}_{k,h+1}](s^k_{h+1}) − Q^{2^m}_{k,h+1}(s^k_{h+1}, a^k_{h+1})),   (B.3)

S_m = Σ_{k=1}^K Σ_{h=1}^H I^k_h ([VV^{2^m}_{k,h+1}](s^k_h, a^k_h) + [J^k_{h+1} Q^{2^{m+1}}_{k,h+1}](s^k_{h+1}) − ([J^k_{h+1} Q^{2^m}_{k,h+1}](s^k_{h+1}))²),   (B.4)

G = Σ_{k=1}^K (1 − I^k_H).   (B.5)
Finally, for simplicity, we also define ι = log(1 + KH/(dλξ 2 )), (B.6) ζ = 4 log(4 log(KH)/δ). (B.7)
Remark B.4. Our definitions are nearly the same as in Zhou and Gu (2022), except that our algorithm uses stochastic policies, which induce one additional term each in A_m and S_m, corresponding to the random variable of the policy noise and its conditional variance. We now bound all of these quantities. The technique is essentially the same as in Zhou and Gu (2022); the only difference is that we need to handle the extra policy noise caused by the stochastic policy.
Lemma B.5. Let γ, ξ be as defined in Algorithm 1. Then, for m ∈ [M − 1], we have

R_m ≤ min{8dι + 8β_K γ²dι + 8β_K √(dι) · √(S_m + 4R_m + 2R_{m+1} + KHξ²), KH}.   (B.8)
We also have R M −1 ≤ KH.
Proof. For (k, h) such that I^k_h = 1, using Lemma C.2, we have

‖Σ^{−1/2}_{k,m} φ_{k,h,m}‖₂ ≤ ‖Σ^{−1/2}_{k,h,m} φ_{k,h,m}‖₂ · √(det(Σ^{−1}_{k,m}) / det(Σ^{−1}_{k,h,m})) ≤ 4‖Σ^{−1/2}_{k,h,m} φ_{k,h,m}‖₂.

Substituting the above inequality into (B.2), we have

R_m ≤ 4 Σ_{k=1}^K Σ_{h=1}^H min{1, I^k_h β_k‖Σ^{−1/2}_{k,h,m} φ_{k,h,m}‖₂},

where the right-hand side can be bounded by Lemma B.1 with β_{k,h} = I^k_h β_k, σ_{k,h} = σ̄_{k,h,m}, x_{k,h} = φ_{k,h,m} and Z_{k,h} = Σ_{k,h,m}:

Σ_{k=1}^K Σ_{h=1}^H min{1, I^k_h β_k‖Σ^{−1/2}_{k,h,m} φ_{k,h,m}‖₂} ≤ 2dι + 2β_K γ²dι + 2β_K √(dι) · √(Σ_{k,h} I^k_h ([VV^{2^m}_{k,h+1}](s^k_h, a^k_h) + 2E_{k,h,m}) + KHξ²)
≤ 2dι + 2β_K γ²dι + 2β_K √(dι) · √(Σ_{k,h} I^k_h [VV^{2^m}_{k,h+1}](s^k_h, a^k_h) + 4R_m + 2R_{m+1} + KHξ²).   (B.9)

Since Σ_{k,h} I^k_h ([J^k_{h+1} Q^{2^{m+1}}_{k,h+1}](s^k_{h+1}) − ([J^k_{h+1} Q^{2^m}_{k,h+1}](s^k_{h+1}))²) ≥ 0, substituting this into (B.4) gives

Σ_{k,h} I^k_h [VV^{2^m}_{k,h+1}](s^k_h, a^k_h) ≤ S_m.   (B.10)

Therefore, substituting (B.10) into (B.9), we have

R_m ≤ 8dι + 8β_K γ²dι + 8β_K √(dι) · √(S_m + 4R_m + 2R_{m+1} + KHξ²),

which completes the proof.
which completes the proof. Proof. The proof follows the proof of Lemma C.6 in Zhou and Gu (2022) and Lemma 25 in Zhang et al. (2021b). We have
S m = K k=1 H h=1 I k h VV 2 m k,h+1 (s k h , a k h ) + J k h+1 Q 2 m+1 k,h+1 (s k h+1 ) − J k h+1 Q 2 m k,h+1 (s k h+1 ) 2 = K k=1 H h=1 I k h PV 2 m+1 k,h+1 (s k h , a k h ) − Q 2 m+1 k,h+1 (s k h+1 , a k h+1 ) + K k=1 H h=1 I k h Q 2 m+1 k,h (s k h , a k h ) − ([PV 2 m k,h+1 ](s k h , a k h )) 2 + K k=1 H h=1 I k h [Q 2 m+1 k,h+1 (s k h+1 , a k h+1 ) − Q 2 m+1 k,h (s k h , a k h )] + K k=1 H h=1 I k h J k h+1 Q 2 m+1 k,h+1 (s k h+1 ) − J k h+1 Q 2 m k,h+1 (s k h+1 ) 2 = K k=1 H h=1 I k h [PV 2 m+1 k,h+1 (s k h , a k h ) − V 2 m+1 k,h+1 (s k h+1 ) + J k h+1 Q 2 m+1 k,h+1 (s k h+1 ) − Q 2 m+1 k,h+1 (s k h+1 )] A m+1 + K k=1 H h=1 I k h [Q 2 m+1 k,h (s k h , a k h ) − ([PV 2 m k,h+1 ](s k h , a k h )) 2 ] + K k=1 H h=1 I k h [Q 2 m+1 k,h+1 (s k h+1 , a k h+1 ) − Q 2 m+1 k,h (s k h , a k h )] + K k=1 H h=1 I k h V 2 m+1 k,h+1 (s k h+1 ) − [J k h+1 Q 2 m k,h+1 (s k h+1 )] 2 .
The first term here is exactly A m+1 , so we have
S m = A m+1 + K k=1 H h=1 I k h Q 2 m+1 k,h (s k h , a k h ) − ([PV 2 m k,h+1 ](s k h , a k h )) 2 + K k=1 H h=1 I k h [Q 2 m+1 k,h+1 (s k h+1 , a k h+1 ) − Q 2 m+1 k,h (s k h , a k h )] + K k=1 H h=1 I k h V 2 m+1 k,h+1 (s k h+1 ) − [J k h+1 Q 2 m k,h+1 (s k h+1 )] 2 ≤ A m+1 + K k=1 H h=1 I k h [Q 2 m+1 k,h (s k h , a k h ) − ([PV 2 m k,h+1 ](s k h , a k h )) 2 ] + K k=1 I k h k Q 2 m+1 k,h k +1 (s k h k +1 , a k h k +1 ) + K k=1 H h=1 I k h V 2 m+1 k,h+1 (s k h+1 ) − [J k h+1 Q 2 m k,h+1 (s k h+1 )] 2 ,
where h k is the largest index satisfying
I k h = 1. If h k < H, we have I k h k Q 2 m+1 k,h k +1 (s k h k +1 , a k h k +1 ) ≤ 1 = 1 − I k H and if h k = H, we have I k h k Q 2 m+1 k,h k +1 (s k h k +1 , a k h k +1 ) = 0 = 1 − I k H , so in both circumstances we have S m ≤ A m + K k=1 H h=1 I k h [Q 2 m+1 k,h (s k h , a k h ) − ([PV 2 m k,h+1 ](s k h , a k h )) 2 ] (ii) + K k=1 (1 − I k H ) G + K k=1 H h=1 I k h V 2 m+1 k,h+1 (s k h+1 ) − [J k h+1 Q 2 m k,h+1 (s k h+1 )] 2 (iv) . (B.11) For (ii) in (B.11), we have K k=1 H h=1 I k h Q 2 m+1 k,h (s k h , a k h ) − [PV 2 m k,h+1 ](s k h , a k h ) 2 ≤ K k=1 H h=1 I k h Q 2 m+1 k,h (s k h , a k h ) − [PV k,h+1 ](s k h , a k h ) I k h (Q k,h (s k h , a k h ) − [PV k,h+1 ](s k h , a k h )) m i=0 (Q 2 i k,h (s k h , a k h ) + ([PV k,h+1 ](s k h , a k h )) 2 i ) ≤2 m+1 K k=1 H h=1 I k h r k (s k h , a k h ) + 2 min{1, β k Σ −1/2 k,m φ k,h,0 2 } ≤2 m+1 (K + 2R 0 ),
where the first inequality holds by recursively using E[X²] ≥ (E[X])², the second holds due to Assumption 3.1, and the third holds due to Lemma B.3. It remains to bound the last term (iv) in (B.11). We have
K k=1 H h=1 I k h V 2 m+1 k,h+1 (s k h+1 ) − [J k h+1 Q 2 m k,h+1 (s k h+1 )] 2 = K k=1 H h=1 I k h (E a∼π k h+1 (·|s k h+1 ) [Q k,h+1 (s k h+1 , a)|s k h+1 ]) 2 m+1 − E 2 a∼π k h+1 (·|s k h+1 ) [Q 2 m k,h+1 (s k h+1 , a)|s k h+1 ] = K k=1 H h=1 (E a∼π k h+1 (·|G k,h+1 ) [I k h Q k,h+1 (s k h+1 , a)|G k,h+1 ]) 2 m+1 − E 2 a∼π k h+1 (·|s k h+1 ) [(I k h Q k,h+1 (s k h+1 , a)) 2 m |s k h+1 ] ≤0,
where the first equality holds due to the definition of V_{k,h+1}(s^k_{h+1}), the second holds because I^k_h is G_{k,h+1}-measurable, and the inequality holds due to E[X²] ≥ (E[X])². Combining the estimates of the four terms completes the proof.

Lemma B.7 (Lemma 25, Zhang et al. 2021b). We have P(E_B.7) > 1 − 2Mδ, where E_B.7 := {∀m ∈ [M], |A_m| ≤ min{√(2ζS_m) + ζ, 2KH}}.

Proof. The proof follows the proof of Lemma 25 in Zhang et al. (2021b). First, we define
x k,h = I k h PV 2 m k,h+1 (s k h , a k h ) − V 2 m k,h+1 (s k h+1 ) , y k,h = I k h J k h+1 Q 2 m k,h+1 (s k h+1 ) − Q 2 m k,h+1 (s k h+1 , a k h+1 ) .
Then x_{1,1}, y_{1,1}, . . . , x_{k,h}, y_{k,h} form a martingale difference sequence.
Thus we have E[x k,h |F k,h ] = E[y k,h |G k,h+1 ] = 0. We also have E[x 2 k,h |F k,h ] = I k h [VV 2 m k,h+1 (s k h , a k h )], E[y 2 k,h |G k,h+1 ] = I k h J k h+1 Q 2 m+1 k,h+1 (s k h+1 ) − J k h+1 Q 2 m k,h+1 (s k h+1 ) 2 .
Summing these terms over
[K] × [H] yields K k=1 H h=1 (E[x 2 k,h |F k,h ] + E[y 2 k,h |G k,h+1 ]) = S m . (B.12)
Therefore, by Lemma C.3, for each m ∈ [M ], with probability at least 1 − δ, we have
A m ≤ 2ζS m + ζ.
Taking a union bound over m ∈ [M] and using the fact that |x_{k,h}|, |y_{k,h}| ≤ 1 completes the proof.
Lemma B.8 (Lemma C.8, Zhou and Gu 2022). Let G be defined in (B.5), then we have G ≤ M dι/2.
Finally, we provide high-probability bounds for the two remaining martingales, both of which are direct applications of Lemma C.1.

Lemma B.9. With probability at least 1 − δ, we have

Σ_{k=1}^K (Σ_{h=1}^H r(s^k_h, a^k_h) − V^{π_k}_1(s^k_1)) ≤ √(2K log(1/δ)).
Lemma B.10. With probability at least 1 − δ, we have

Σ_{k=1}^K ([J^k_1 Q_{k,1}](s^k_1) − Q_{k,1}(s^k_1, a^k_1)) ≤ √(2K log(1/δ)).
We use E B.9 and E B.10 to denote the event described by the corresponding lemmas.
B.3 Proof of Theorem 5.1
Now we can prove our main result. First, we provide two theorems. The first theorem gives a horizon-free regret analysis of the high-order moment estimator.
Theorem B.11. Set M = log(4KH)/log 2. For any δ > 0, on the event E_B.2 ∩ E_B.7 ∩ E_B.9 ∩ E_B.10, we have

Σ_{k=1}^K (V_{k,1}(s_1) − V^{π_k}_1(s_1)) ≤ 2432 max{32β²_K dι, ζ} + 192(dι + β_K γ²dι + β_K √(dι) · √(Mdι/2 + KHξ²)) + Mdι/2 + 24(√(ζMdι) + ζ) + [2√(2 log(1/δ)) + 32 max{8β_K √(dι), 2ζ}] √(2K),

where ι and ζ are defined in (B.6) and (B.7). Moreover, setting ξ = d/(KH), γ = 1/d^{1/4} and λ = d/B² yields a bound I₂ = Õ(d√K + d²) with high probability.
Proof. All of the following arguments are on the event E_B.2 ∩ E_B.7 ∩ E_B.9 ∩ E_B.10. First, we have the decomposition for I₂; for all k, we define Q_{k,H+1}(s, a) = 0.
K k=1 V k,1 (s k 1 ) = K k=1 J k 1 Q k,1 (s k 1 ) − Q k,1 (s k 1 , a k 1 ) + K k=1 H h=1 Q k,h (s k h , a k h ) − Q k+1,h+1 (s k h+1 , a k h+1 ) = K k=1 J k 1 Q k,1 (s k 1 ) − Q k,1 (s k 1 , a k 1 ) + K k=1 H h=1 I k h Q k,h (s k h , a k h ) − Q k+1,h+1 (s k h+1 , a k h+1 ) + K k=1 H h=1 (1 − I k h ) Q k,h (s k h , a k h ) − Q k+1,h+1 (s k h+1 , a k h+1 ) ≤ K k=1 J k 1 Q k,1 (s k 1 ) − Q k,1 (s k 1 , a k 1 ) + K k=1 (1 − I k h k )Q k,h k (s k h k , a k h k ) + K k=1 H h=1 I k h Q k,h (s k h , a k h ) − Q k+1,h+1 (s k h+1 , a k h+1 ) ,
(B.13) where h k is the smallest number such that I k h k = 0. Then for the second term we have
K k=1 H h=1 I k h Q k,h (s k h , a k h ) − Q k+1,h+1 (s k h+1 , a k h+1 ) = K k=1 H h=1 I k h [r(s k h , a k h )] + K k=1 H h=1 I k h [Q k,h (s k h , a k h ) − r(s k h , a k h ) − PV k,h+1 (s k h , a k h )] + K k=1 H h=1 I k h [PV k,h+1 (s k h , a k h ) − V k,h+1 (s k h+1 ) + J k h+1 Q(s k h+1 ) − Q k,h+1 (s k h+1 , a k h+1 )] ≤ K k=1 H h=1 r(s k h , a k h ) + K k=1 H h=1 I k h [Q k,h (s k h , a k h ) − r(s k h , a k h ) − PV k,h+1 (s k h , a k h )] + A 0 .
Substituting the inequality above to (B.13), we have:
K k=1 V k,1 (s k 1 ) − V π k 1 (s k 1 ) ≤ K k=1 J k 1 Q k,1 (s k 1 ) − Q k,1 (s k 1 , a k 1 ) + K k=1 (1 − I k H ) + K k=1 H h=1 (r(s k h , a k h ) − V π k 1 (s k 1 ) + K k=1 H h=1 I k h [Q k,h (s k h , a k h ) − r(s k h , a k h ) − PV k,h+1 (s k h , a k h )] + A 0 ≤ 2 2K log(1/δ) + G + 2R 0 + A 0 ,
where the second inequality holds due to Lemma B.10, Lemma B.9 and Lemma B.3. Thus, we only need to bound 2R 0 + A 0 . We have
|A m | ≤ 2ζS m + ζ ≤ 2ζ(|A m+1 | + G + 2 m+1 (K + 2R 0 ) + ζ ≤ 2ζ |A m+1 | + 2 m+1 (K + 2R 0 ) + 2ζG + ζ,
where the first inequality holds due to Lemma B.7, the second inequality holds due to Lemma B.6 and the third holds due to
√(a + b) ≤ √a + √b.
We also have:
R m ≤ 8dι + 8 β K γ 2 dι + 8 β K √ dι S m + 4R m + 2R m+1 + KHα 2 ≤ 8 β K √ dι |A m+1 | + G + 2 m+1 (K + 2R 0 ) + 4R m + 2R m+1 + KHα 2 + 8dι + 8 β K γ 2 dι ≤ 8 β K √ dι |A m+1 | + 2 m+1 (K + 2R 0 ) + 4R m + 2R m+1 + 8dι + 8 β K γ 2 dι + 8 β K √ dι G + KHα 2 ,
where the first inequality holds due to Lemma B.5, the second holds due to Lemma B.6 and we denote I c = 8dι + 8 β K γ 2 dι + 8 β K √ dι √ G + KHα 2 + √ 2ζG + ζ. Combining the two estimations we have
|A m | + 2R m ≤ 2I c + √ 2 max{8 β K √ dι, 2ζ} |A m+1 | + ·2 m+1 (K + 2R 0 ) + 4|A m+1 | + 4 · 2 m+1 (K + 2R 0 ) + 16R m + 8R m+1 ≤ 2I c + 4 max{8 β K √ dι, 2ζ} |A m+1 | + 2R m+1 + |A m | + 2R m + 2 m+1 (K + 2R 0 + |A 0 |),
where the first inequality holds due to √a + √b ≤ √(2(a + b)). Then, by Lemma C.4, with a_m = |A_m| + 2R_m ≤ 4KH and M = log(4KH)/log 2, we have:
|A 0 | + 2R 0 ≤ 22 · 16 max{64 β K 2 dι, 2ζ} + 12I c + 4 · 4 max{8 β K √ dι, 2ζ} 2(K + 2R 0 + |A 0 |)
≤ 704 max{32 β K 2 dι, ζ} + 12(8dι + 8 β K γ 2 dι + 8 β K √ dι G + KHα 2 + 2ζG + ζ)
+ 16 max{8 β K √ dι, 2ζ} √ 2K + 16 √ 2 max{8 β K √ dι, 2ζ} 2R 0 + |A 0 |. By the fact that x ≤ a √ x + b ⇒ x ≤ 2a 2 + 2b, we have |A 0 | + 2R 0 ≤ 2432 max{32 β K 2 dι, ζ} + 24(8dι + 8 β K γ 2 dι + 8 β K √ dι G + KHα 2 + 2ζG + ζ) + 32 max{8 β K √ dι, 2ζ} √ 2K.
Bounding G by Lemma B.8, we have
K k=1 V k,1 (s k 1 ) − V π k 1 (s k 1 ) ≤ 2 2K log(1/δ) + G + 2R 0 + A 0 ≤ 2432 max{32 β K 2 dι, ζ} + 192(dι + β K γ 2 dι + β K √ dι M dι/2 + KHα 2 ) + M dι/2 + 24( ζM dι + ζ) + [2 2 log(1/δ) + 32 max{8 β K √ dι, 2ζ}] √ 2K,
which completes the proof.
Theorem B.12. On the event E_B.2, we have Σ_{k=1}^K (V*_{k,1}(s_1) − V̄_{k,1}(s_1)) ≤ H log(|S|²|A|)/α + Kα/(2H).
Proof. This follows the standard regret analysis of online mirror descent; the only difference from the standard argument is that we need to deal with the changing convex set, so we include the adapted proof for completeness. For the sake of brevity, we denote f_k(z) = Σ_{h,s,a,s'} z_h(s, a, s') r_k(s, a); then we have f_k(z*) = V*_{k,1}(s_1), f_k(z_k) = V̄_{k,1}(s_1), and ∇f_k(·) = (r^k_h(s, a))_{s,a,s',h}, where z* is the occupancy measure induced by π* and the true transition. Since θ* ∈ C_k for all k ∈ [1 : K], we know that z* ∈ D_k for all k. Then we have
f k (z * ) − f k (z k ) = ∇f k (z k ) (z * − z k ) = α −1 (∇Φ(w k+1 ) − ∇Φ(z k )) (z k − z * ) = α −1 (D Φ (z * ||z k ) + D Φ (z k ||w k+1 ) − D Φ (x * ||w k+1 )),
where the equities hold due to the update rule of mirror descent. Because D k+1 is convex and z * ∈ D k+1 , we have the first order optimality for z k+1 :
(∇Φ(z k+1 ) − ∇Φ(w k+1 )) (z k+1 − z * ) ≤ 0,
which can be written equivalently as the generalized Pythagorean inequality:
D_Φ(z*‖w_{k+1}) ≥ D_Φ(z*‖z_{k+1}) + D_Φ(z_{k+1}‖w_{k+1}).
(B.14)
Combining the two expression, we have
f k (z * ) − f k (z k ) ≤ α −1 (D Φ (z * ||z k ) − D Φ (z * ||z k+1 )) + α −1 (D Φ (z k ||w k+1 ) − D Φ (z k+1 ||w k+1 )).
For the second term, we have
D Φ (z k ||w k+1 ) − D Φ (z k+1 ||w k+1 ) = Φ(z k ) − Φ(z k+1 ) − ∇Φ(w k+1 ) (z k − z k+1 ) ≤ (∇Φ(z k ) − ∇Φ(w k+1 ) (z k − z k+1 ) − 1 2H z k − z k+1 2 1 = α∇f k (z k − z k+1 ) − 1 2H z k − z k+1 2 1 ≤ α H z k − z k+1 1 − 1 2H z k − z k+1 2 1 ≤ α 2 2H ,
where the first inequality holds due to Lemma 4.4, the second inequality holds due to r_k(·, ·) ≤ 1/H, and the third inequality holds due to an elementary quadratic inequality. Summing up over k, we have
K k=1 (f k (z * ) − f k (z k )) ≤ α −1 (D Φ (z * ||z 1 ) − D Φ (z * ||z K+1 )) + αK 2H ≤ D Φ (z * ||z 1 ) α + Kα 2H ≤ D Φ (z * ||w 1 ) α + Kα 2H ≤ H log S 2 A α + Kα 2H ,
where the third inequality holds due to the generalized Pythagorean inequality (B.14) and the fourth holds since w¹_h = z⁰_h is a uniform distribution on S × A × S. Now we are able to prove our main result.
Proof of Theorem 5.1. First we have the following regret decomposition:

Σ_{k=1}^K [V*_{k,1}(s_1) − V^{π_k}_1(s_1)] = Σ_{k=1}^K [V*_{k,1}(s_1) − V̄_{k,1}(s_1) + V̄_{k,1}(s_1) − V_{k,1}(s_1) + V_{k,1}(s_1) − V^{π_k}_1(s_1)]
≤ Σ_{k=1}^K [V*_{k,1}(s_1) − V̄_{k,1}(s_1)] (I₁) + Σ_{k=1}^K [V_{k,1}(s_1) − V^{π_k}_1(s_1)] (I₂),

where the inequality holds due to Lemma 6.1. Picking ξ = d/(KH), γ = 1/d^{1/4} and λ = d/B², by Theorem B.11 we know that I₂ = Õ(d√K + d²) on the event E_B.2 ∩ E_B.7 ∩ E_B.9 ∩ E_B.10. By Theorem B.12, we have

I₁ ≤ H log(|S|²|A|)/α + Kα/(2H).

Setting α = H/√K, combining the two bounds, and taking a union bound over the event E_B.2 ∩ E_B.7 ∩ E_B.9 ∩ E_B.10 completes the proof.
When H is even, Lemma B.13 also holds forM 1 := M 1 (S, A, H/2, {r k h }, P) with H replaced by H/2, where the S, A, and P from M 1 are restricted to the first H/2 time steps inM 1 and r k H/2 (·, ·) ∈ [0, 1] and the agent gets no reward in all the first H/2 − 1 time steps by construction. We can equivalently transformM 1 into a MDP M 2 satisfying Assumption 3.1 with planning horizon H as follows. We replace every node S [H/2 + 1, ·] in the (H/2 + 1)-th layer ofM 1 by a (H/2 + 1)layer complete |A|-way tree, and further assign the transition kernel of M 1 to this extendedM 1 . To obtain M 2 , a refined reward design is to assign zero reward for actions (edges) conducted in states in the first H/2 layers and we assign each edge (action) in this subtree with a reward r k H/2 (S [H/2, m] , A [H/2, m, n]) /H ∈ [0, 1/H] for any subtree rooted in S [H/2 + 1, (m − 1)|A| + n]. Such a construction yields M 2 (S, A, H, { r k h }, P), learning in which can similarly be reduced to the standard prediction with expert advice with |A| H/2 experts and K rounds. Therefore, Lemma B.13 also holds for M 2 with H replaced by H/2, yet the properties of the reward assignment in M 2 is strictly strong than Assumption 3.1 in that all the actions conducted from states in the same subtree rooted in the (H/2 + 1)-th layer causes the same reward.
Our goal is to claim a lower bound for a M 3.1 (S, A, H, { r k h }, P), which shares the same S, A, and P with M 1 but has its reward assignment generally satisfying Assumption 3.1, i.e. all actions taken from all states cause a reward r k h ∈ [0, 1/H]. Since M 2 is strictly a special case of M 3.1 , which implies that the asymptotic lower bound for M 3.1 can not be lower than that in Lemma B.13 up to a constant factor √ 2. Also, it is obvious that |S| = Θ(|A| H ) in a complete |A|-way tree with H + 1 layers.
B.5 Proof of Theorem 5.5
Proof of Theorem 5.5. The proof is almost identical to the proof of Theorem 5.4 in Zhou and Gu (2022). Consider the MDP M' = (S, A, H, r', P) constructed in Theorem 5.4 of Zhou and Gu (2022). We now consider a linear mixture MDP with adversarial reward M = (S, A, H, {r_k}_{k∈[K]}, P), where all elements except the reward functions are inherited from M', and we define r_k(·, ·) = r'(·, ·) for all k ∈ [K]. It is easy to verify that M satisfies Assumptions 3.1 and 3.2.
Since the adversarial reward functions are fixed, the optimal policy in hindsight for M is the optimal policy of M'. Thus, the adversarial MDP degenerates to a non-adversarial MDP, and the adversarial regret of any algorithm on M is identical to its non-adversarial regret on M'. By Theorem 5.4 in Zhou and Gu (2022), when K > max{3d², (d − 1)/(192(B − 1))}, for any algorithm there exists a B-bounded homogeneous linear mixture MDP with adversarial rewards such that the expected regret E[Regret(K)] is lower bounded by d√K/(16√3).
C Auxiliary Lemmas
Lemma C.1 (Azuma-Hoeffding inequality, Azuma 1967). Let M > 0 be a constant. Let {x_i}_{i=1}^n be a stochastic process and G_i = σ(x_1, . . . , x_i) the σ-algebra of x_1, . . . , x_i. Suppose E[x_i | G_{i−1}] = 0 and |x_i| ≤ M almost surely. Then, for any 0 < δ < 1, we have

P(Σ_{i=1}^n x_i ≤ M√(2n log(1/δ))) > 1 − δ.

Lemma C.2 (Lemma 12, Abbasi-Yadkori et al. 2011). Suppose A, B ∈ R^{d×d} are two positive definite matrices satisfying A ⪰ B. Then, for any x ∈ R^d, ‖x‖_A ≤ ‖x‖_B · √(det(A)/det(B)).

Lemma C.3 (Lemma 11, Zhang et al. 2021b). Let M > 0 be a constant. Let {x_i}_{i=1}^n be a stochastic process and G_i = σ(x_1, . . . , x_i) the σ-algebra of x_1, . . . , x_i. Suppose E[x_i | G_{i−1}] = 0, |x_i| ≤ M and E[x²_i | G_{i−1}] < ∞ almost surely. Then, for any δ, ε > 0,

P(Σ_{i=1}^n x_i ≤ 2√(2 log(1/δ) Σ_{i=1}^n E[x²_i | G_{i−1}]) + 2√(log(1/δ)) + 2M log(1/δ)) > 1 − 2(log(M²n/ε²) + 1)δ.
Lemma C.4 (Lemma 12, Zhang et al. 2021b). Let λ₁, λ₂, λ₄ > 0, λ₃ ≥ 1 and κ = max{log₂ λ₁, 1}. Let a₁, . . . , a_κ be non-negative real numbers such that a_i ≤ min{λ₁, λ₂√(a_i + a_{i+1} + 2^{i+1}λ₃) + λ₄} for any 1 ≤ i ≤ κ, and let a_{κ+1} = λ₁. Then we have a₁ ≤ 22λ₂² + 6λ₄ + 4λ₂√(2λ₃).
D Computational Issues of Line 3 in Algorithm 2

First, we provide a closed-form expression of the only implicit constraint in Definition 4.1.

Lemma D.1. For every (s, a, h) ∈ S × A × [H], let z_{h,s,a} denote the vector of occupancy measure z_h(s, a, ·), i.e., z_{h,s,a} = (z_h(s, a, s_{(1)}), . . . , z_h(s, a, s_{(|S|)}))^⊤, and let B_{s,a} ∈ R^{|S|×d} denote the matrix generated by stacking φ(·|s, a)^⊤, where {s_{(1)}, . . . , s_{(|S|)}} is an index set³ of all states. Then the only constraint in Definition 4.1 that explicitly involves θ̄_{s,a,h,k} is equivalent to the following closed form:

‖(B_{s,a} Σ^{−1/2}_{k,0})^† (z_{h,s,a} − ‖z_{h,s,a}‖₁ B_{s,a} θ_{k,0})‖₂ ≤ ‖z_{h,s,a}‖₁ β_k, ∀(s, a, h) ∈ S × A × [H].   (D.2)

Proof. Given (s, a, h) ∈ S × A × [H], if Σ_{s'∈S} z_h(s, a, s') = 0, then (D.2) is obviously satisfied, so we consider the case Σ_{s'∈S} z_h(s, a, s') > 0. Let p be the normalized vector, i.e., p_h(s, a, r) = z_h(s, a, r) / Σ_{s'∈S} z_h(s, a, s'). Then the new constraint is equivalent to

‖(B_{s,a} Σ^{−1/2}_{k,0})^† (p − B_{s,a} θ_{k,0})‖₂ ≤ β_k,   (D.3)

and the original constraint becomes: ∃θ̄ ∈ C_k s.t. p = B_{s,a} θ̄, which is equivalent to ∃θ̄ ∈ C_k s.t. p − B_{s,a} θ_{k,0} = B_{s,a} Σ^{−1/2}_{k,0} [Σ^{1/2}_{k,0}(θ̄ − θ_{k,0})]. By the definition of our confidence set, θ̄ ∈ C_k means ‖Σ^{1/2}_{k,0}(θ̄ − θ_{k,0})‖₂ ≤ β_k, so the above is the same as requiring that the following equation have a solution with norm at most β_k; in other words, the least-norm solution must have norm no bigger than β_k:

p − B_{s,a} θ_{k,0} = B_{s,a} Σ^{−1/2}_{k,0} x,   (D.4)

where x is the unknown variable. The least-norm solution of (D.4) is (B_{s,a} Σ^{−1/2}_{k,0})^† (p − B_{s,a} θ_{k,0}), which should therefore have norm no bigger than β_k, and this yields (D.3). We conclude that the two constraints are equivalent.

By Definition 4.1 and Lemma D.1, D_k can essentially be reformulated as the intersection of several "easier" closed convex sets:

D_k = {z_h(·,·,·) ∈ R^{|S|²|A|}, h ∈ [H] : z_h(·,·,·) ≥ 0} ∩ {the linear constraints (3.1)-(3.2) of Definition 4.1} ∩ ⋂_{(s,a,h)∈S×A×[H]} {z : ‖(B_{s,a} Σ^{−1/2}_{k,0})^† (z_{h,s,a} − ‖z_{h,s,a}‖₁ B_{s,a} θ_{k,0})‖₂ ≤ ‖z_{h,s,a}‖₁ β_k}.   (D.5)

Therefore, the best approximation problem w.r.t. the Bregman divergence⁴, i.e., Line 3 in Algorithm 2, can be cast as a projection onto convex sets under Bregman divergence (POCS; Bauschke and Borwein (1996)) problem. Since D_k is the intersection of several hyperplanes, halfspaces, and ellipsoids⁵, onto which (Bregman) projections are hopefully easier to conduct, the Dykstra algorithm with Bregman projections (Censor and Reich, 1998), which is verified to be convergent for general closed convex constraints (Bauschke and Lewis, 2000), can be utilized.

Algorithm 4 Dykstra algorithm with Bregman projections
Require: ε > 0; Φ as defined in (4.2), which is strictly convex; N closed convex sets C₁, . . . , C_N corresponding to the decomposition in (D.5), with C := ∩_i C_i ≠ ∅; x₀ ← w^k, where w^k is defined in Line 3 of Algorithm 2; q_{−(N−1)} := · · · := q_{−1} := q₀ := 0 ∈ R^{|S|²|A|H} serves as an auxiliary initialization.
1: repeat
2:  x_n ← (P_n ∘ ∇Φ*)(∇Φ(x_{n−1}) + q_{n−N})
3:  q_n ← ∇Φ(x_{n−1}) + q_{n−N} − ∇Φ(x_n)
4: until ‖x_n − x_{n−1}‖_TV ≤ ε

For the implementation of Line 2 in Algorithm 4, a specialized scheme may invoke projected gradient descent to deal with the information-projection subproblems onto hyperplanes and halfspaces, both of which are blessed with closed-form Euclidean projection formulas (see Lemma D.3), and invoke Frank-Wolfe to address the information-projection subproblems onto ellipsoids, which only requires an efficient implementation of a linear optimization problem over the quadratic constraint, in that linear optimization over an ellipsoid has a closed-form formula (see Lemma D.4).

Remark D.2. The number of variables in Line 3 of Algorithm 2 is of order O(|S|²|A|), and its dual problem cannot be much easier: the inequality constraints in (D.5) must be imposed for each (s, a, h), i.e., the unknown transition kernel incurs at least |S||A|H dual variables in the dual problem.

Lemma D.3. If A ∈ R^{m×n} is of full row rank, b ∈ R^m, c ∈ R^n\{0} and d ∈ R, then the orthogonal (Euclidean) projections of x ∈ R^n onto {x : Ax = b} and {x : c^⊤x ≤ d} are unique and have the closed-form solutions

Π_{{Ax=b}}(x) = x − A^⊤(AA^⊤)^{−1}(Ax − b),  Π_{{c^⊤x≤d}}(x) = x − (max{c^⊤x − d, 0}/‖c‖₂²) c.

Lemma D.4. If A ∈ S^n_{++}, then the linear optimization over the ellipsoid defined by A and x ∈ R^n,

max_y c^⊤y s.t. ‖y − x‖_{A^{−1}} ≤ 1,

has the unique closed-form solution y* = x + Ac/√(c^⊤Ac).
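To illustrate the projection machinery, the following sketch (ours; Euclidean rather than Bregman, so it is only an analogue of Algorithm 4) runs Dykstra's scheme on a hyperplane and a halfspace using the closed forms of Lemma D.3, and evaluates the closed-form optimizer of Lemma D.4:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
A = rng.random((2, n)); b = rng.random(2)        # hyperplane {x : Ax = b}
c = rng.random(n); d = 0.5                       # halfspace  {x : c.x <= d}

def proj_hyperplane(x):
    # Lemma D.3: x - A^T (A A^T)^{-1} (A x - b)
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def proj_halfspace(x):
    # Lemma D.3: subtract the violating component along c, if any
    return x - max(0.0, c @ x - d) / (c @ c) * c

projs = [proj_hyperplane, proj_halfspace]
x = rng.random(n)
q = [np.zeros(n) for _ in projs]                 # Dykstra correction terms
for _ in range(500):
    x_prev = x.copy()
    for i, P in enumerate(projs):
        y = x + q[i]
        x = P(y)
        q[i] = y - x
    if np.abs(x - x_prev).sum() < 1e-12:
        break
assert np.allclose(A @ x, b, atol=1e-6) and c @ x <= d + 1e-6

# Lemma D.4: argmax of c.y over the ellipsoid ||y - x||_{A_ell^{-1}} <= 1
M = rng.random((n, n)); A_ell = np.eye(n) + M @ M.T
y_star = x + A_ell @ c / np.sqrt(c @ A_ell @ c)
```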
² Here, in the constructions of this proof, we allow the reward function to be time-inhomogeneous: although in Assumption 3.1 we set the reward to be time-homogeneous for simplicity of notation, all the arguments in the proof of our regret upper bound naturally apply to the time-inhomogeneous case.
³ In this paper, s_i means the i-th state visited in an episode, while s_{(i)}, i = 1, . . . , |S|, is irrelevant to the episodic learning setting and only denotes the indexing order when we refer to the wildcard · ∈ S in vectorized notation.
⁴ In our case, it is just the information projection (Cover, 1999).
⁵ Rigorously speaking, (D.2) can be relaxed to an elliptical constraint, because we only care about z_{h,s,a} with ‖z_{h,s,a}‖₁ ≠ 0. For (h, s, a) with ‖z_{h,s,a}‖₁ = 0, the induced transition kernel P̄_h(·|s, a) can be any point of the unit simplex and need not follow (3.4) in Lemma 3.3.
Abbasi-Yadkori, Y., Pál, D. and Szepesvári, C. (2011). Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems 24.

Altman, E. (1999). Constrained Markov decision processes: stochastic modeling. Routledge.

Anonymous (2023). Learning adversarial linear mixture Markov decision processes with bandit feedback and unknown transition. Submitted to The Eleventh International Conference on Learning Representations. Under review.

Ayoub, A., Jia, Z., Szepesvari, C., Wang, M. and Yang, L. (2020). Model-based reinforcement learning with value-targeted regression. In International Conference on Machine Learning. PMLR.

Azuma, K. (1967). Weighted sums of certain dependent random variables. Tohoku Mathematical Journal, Second Series 19 357-367.

Bauschke, H. H. and Borwein, J. M. (1996). On projection algorithms for solving convex feasibility problems. SIAM Review 38 367-426.

Bauschke, H. H. and Lewis, A. S. (2000). Dykstra's algorithm with Bregman projections: A convergence proof. Optimization 48 409-427.

Cai, Q., Yang, Z., Jin, C. and Wang, Z. (2020). Provably efficient exploration in policy optimization. In International Conference on Machine Learning. PMLR.

Censor, Y. and Reich, S. (1998). The Dykstra algorithm with Bregman projections. Communications in Applied Analysis 2 407-420.

Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press.

Chen, L., Jain, R. and Luo, H. (2022). Improved no-regret algorithms for stochastic shortest path with linear MDP. In International Conference on Machine Learning. PMLR.

Cover, T. M. (1999). Elements of information theory. John Wiley & Sons.

Dai, Y., Luo, H. and Chen, L. (2022). Follow-the-perturbed-leader for adversarial Markov decision processes with bandit feedback. arXiv preprint arXiv:2205.13451.

Dann, C. and Brunskill, E. (2015). Sample complexity of episodic fixed-horizon reinforcement learning. Advances in Neural Information Processing Systems 28.

Dick, T., Gyorgy, A. and Szepesvari, C. (2014). Online learning in Markov decision processes with changing cost sequences. In International Conference on Machine Learning. PMLR.

Du, S., Kakade, S., Lee, J., Lovett, S., Mahajan, G., Sun, W. and Wang, R. (2021). Bilinear classes: A structural framework for provable generalization in RL. In International Conference on Machine Learning. PMLR.

Du, S. S., Kakade, S. M., Wang, R. and Yang, L. F. (2019). Is a good representation sufficient for sample efficient reinforcement learning? In International Conference on Learning Representations.

Even-Dar, E., Kakade, S. M. and Mansour, Y. (2009). Online Markov decision processes. Mathematics of Operations Research 34 726-736.

He, J., Zhao, H., Zhou, D. and Gu, Q. (2022a). Nearly minimax optimal reinforcement learning for linear Markov decision processes. arXiv preprint arXiv:2212.06132.

He, J., Zhou, D. and Gu, Q. (2022b). Near-optimal policy optimization algorithms for learning adversarial linear mixture MDPs. In International Conference on Artificial Intelligence and Statistics. PMLR.

Jia, Z., Yang, L., Szepesvari, C. and Wang, M. (2020). Model-based reinforcement learning with value-targeted regression. In Learning for Dynamics and Control. PMLR.

Jiang, N. and Agarwal, A. (2018). Open problem: The dependence of sample complexity lower bounds on planning horizon. In Conference On Learning Theory. PMLR.

Jiang, N., Krishnamurthy, A., Agarwal, A., Langford, J. and Schapire, R. E. (2017). Contextual decision processes with low Bellman rank are PAC-learnable. In International Conference on Machine Learning.

Jin, C., Allen-Zhu, Z., Bubeck, S. and Jordan, M. I. (2018). Is Q-learning provably efficient? Advances in Neural Information Processing Systems 31.

Jin, C., Jin, T., Luo, H., Sra, S. and Yu, T. (2020a). Learning adversarial Markov decision processes with bandit feedback and unknown transition. In International Conference on Machine Learning. PMLR.

Jin, C., Liu, Q. and Miryoosefi, S. (2021). Bellman eluder dimension: New rich classes of RL problems, and sample-efficient algorithms. Advances in Neural Information Processing Systems 34 13406-13418.

Jin, C., Yang, Z., Wang, Z. and Jordan, M. I. (2020b). Provably efficient reinforcement learning with linear function approximation. In Conference on Learning Theory. PMLR.

Kalagarla, K. C., Jain, R. and Nuzzo, P. (2020). A sample-efficient algorithm for episodic finite-horizon MDP with constraints. In AAAI Conference on Artificial Intelligence.

Kim, Y., Yang, I. and Jun, K.-S. (2022). Improved regret analysis for variance-adaptive linear bandits and horizon-free linear mixture MDPs. Advances in Neural Information Processing Systems 35 1060-1072.

Luo, H., Wei, C.-Y. and Lee, C.-W. (2021). Policy optimization in adversarial MDPs: Improved exploration via dilated bonuses. Advances in Neural Information Processing Systems 34 22931-22942.

Neu, G., Antos, A., György, A. and Szepesvári, C. (2010). Online Markov decision processes under bandit feedback. Advances in Neural Information Processing Systems 23.
The adversarial stochastic shortest path problem with unknown transition probabilities. G Neu, A Gyorgy, C Szepesvári, PMLRArtificial Intelligence and Statistics. Neu, G., Gyorgy, A. and Szepesvári, C. (2012). The adversarial stochastic shortest path problem with unknown transition probabilities. In Artificial Intelligence and Statistics. PMLR.
Online learning in mdps with linear function approximation and bandit feedback. G Neu, J Olkhovskaya, Advances in Neural Information Processing Systems. 34Neu, G. and Olkhovskaya, J. (2021). Online learning in mdps with linear function approximation and bandit feedback. Advances in Neural Information Processing Systems 34 10407-10417.
A unifying view of optimism in episodic reinforcement learning. G Neu, C Pike-Burke, Advances in Neural Information Processing Systems. 33Neu, G. and Pike-Burke, C. (2020). A unifying view of optimism in episodic reinforcement learning. Advances in Neural Information Processing Systems 33 1392-1403.
Online convex optimization in adversarial markov decision processes. A Rosenberg, Y Mansour, PMLRInternational Conference on Machine Learning. Rosenberg, A. and Mansour, Y. (2019a). Online convex optimization in adversarial markov decision processes. In International Conference on Machine Learning. PMLR.
Online stochastic shortest path with bandit feedback and unknown transition function. A Rosenberg, Y Mansour, Advances in Neural Information Processing Systems. 32Rosenberg, A. and Mansour, Y. (2019b). Online stochastic shortest path with bandit feedback and unknown transition function. Advances in Neural Information Processing Systems 32.
Optimistic policy optimization with bandit feedback. L Shani, Y Efroni, A Rosenberg, S Mannor, PMLRInternational Conference on Machine Learning. Shani, L., Efroni, Y., Rosenberg, A. and Mannor, S. (2020). Optimistic policy optimization with bandit feedback. In International Conference on Machine Learning. PMLR.
Model-based rl in contextual decision processes: Pac bounds and exponential improvements over model-free approaches. W Sun, N Jiang, A Krishnamurthy, A Agarwal, J Langford, PMLRConference on learning theory. Sun, W., Jiang, N., Krishnamurthy, A., Agarwal, A. and Langford, J. (2019). Model-based rl in contextual decision processes: Pac bounds and exponential improvements over model-free approaches. In Conference on learning theory. PMLR.
Reinforcement learning: An introduction. R S Sutton, A G Barto, MIT pressSutton, R. S. and Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.
Algorithms for reinforcement learning. C Szepesvári, Synthesis lectures on artificial intelligence and machine learning. 4Szepesvári, C. (2010). Algorithms for reinforcement learning. Synthesis lectures on artificial intelligence and machine learning 4 1-103.
Stochastic shortest path: Minimax, parameter-free and towards horizon-free regret. J Tarbouriech, R Zhou, S S Du, M Pirotta, M Valko, A Lazaric, Advances in Neural Information Processing Systems. 34Tarbouriech, J., Zhou, R., Du, S. S., Pirotta, M., Valko, M. and Lazaric, A. (2021). Stochastic shortest path: Minimax, parameter-free and towards horizon-free regret. Advances in Neural Information Processing Systems 34 6843-6855.
Is long horizon reinforcement learning more difficult than short horizon reinforcement learning?. R Wang, S S Du, L F Yang, S M Kakade, arXiv:2005.00527arXiv preprintWang, R., Du, S. S., Yang, L. F. and Kakade, S. M. (2020a). Is long horizon reinforcement learning more difficult than short horizon reinforcement learning? arXiv preprint arXiv:2005.00527 .
Optimism in reinforcement learning with generalized linear function approximation. Y Wang, R Wang, S S Du, A Krishnamurthy, International Conference on Learning Representations. Wang, Y., Wang, R., Du, S. S. and Krishnamurthy, A. (2020b). Optimism in reinforcement learning with generalized linear function approximation. In International Conference on Learning Representations.
Exponential lower bounds for planning in mdps with linearly-realizable optimal action-value functions. G Weisz, P Amortila, C Szepesvári, PMLRAlgorithmic Learning Theory. Weisz, G., Amortila, P. and Szepesvári, C. (2021). Exponential lower bounds for planning in mdps with linearly-realizable optimal action-value functions. In Algorithmic Learning Theory. PMLR.
Sample-optimal parametric q-learning using linearly additive features. L Yang, M Wang, PMLRInternational Conference on Machine Learning. Yang, L. and Wang, M. (2019). Sample-optimal parametric q-learning using linearly additive features. In International Conference on Machine Learning. PMLR.
Markov decision processes with arbitrary reward processes. J Y Yu, S Mannor, N Shimkin, Mathematics of Operations Research. 34Yu, J. Y., Mannor, S. and Shimkin, N. (2009). Markov decision processes with arbitrary reward processes. Mathematics of Operations Research 34 737-757.
Learning near optimal policies with low inherent bellman error. A Zanette, A Lazaric, M Kochenderfer, E Brunskill, PMLRInternational Conference on Machine Learning. Zanette, A., Lazaric, A., Kochenderfer, M. and Brunskill, E. (2020). Learning near optimal policies with low inherent bellman error. In International Conference on Machine Learning. PMLR.
Nearly minimax optimal reward-free reinforcement learning. Z Zhang, S S Du, X Ji, arXiv:2010.05901arXiv preprintZhang, Z., Du, S. S. and Ji, X. (2020). Nearly minimax optimal reward-free reinforcement learning. arXiv preprint arXiv:2010.05901 .
Is reinforcement learning more difficult than bandits? a near-optimal algorithm escaping the curse of horizon. Z Zhang, X Ji, S Du, PMLRConference on Learning Theory. Zhang, Z., Ji, X. and Du, S. (2021a). Is reinforcement learning more difficult than bandits? a near-optimal algorithm escaping the curse of horizon. In Conference on Learning Theory. PMLR.
Horizon-free reinforcement learning in polynomial time: the power of stationary policies. Z Zhang, X Ji, S Du, PMLRConference on Learning Theory. Zhang, Z., Ji, X. and Du, S. (2022). Horizon-free reinforcement learning in polynomial time: the power of stationary policies. In Conference on Learning Theory. PMLR.
Improved variance-aware confidence sets for linear bandits and linear mixture mdp. Z Zhang, J Yang, X Ji, S S Du, Advances in Neural Information Processing Systems. 34Zhang, Z., Yang, J., Ji, X. and Du, S. S. (2021b). Improved variance-aware confidence sets for linear bandits and linear mixture mdp. Advances in Neural Information Processing Systems 34 4342-4355.
Computationally efficient horizon-free reinforcement learning for linear mixture mdps. D Zhou, Q Gu, arXiv:2205.11507arXiv preprintZhou, D. and Gu, Q. (2022). Computationally efficient horizon-free reinforcement learning for linear mixture mdps. arXiv preprint arXiv:2205.11507 .
Nearly minimax optimal reinforcement learning for linear mixture markov decision processes. D Zhou, Q Gu, C Szepesvari, PMLRConference on Learning Theory. Zhou, D., Gu, Q. and Szepesvari, C. (2021). Nearly minimax optimal reinforcement learning for linear mixture markov decision processes. In Conference on Learning Theory. PMLR.
Horizon-free reinforcement learning for latent markov decision processes. R Zhou, R Wang, S S Du, arXiv:2210.11604arXiv preprintZhou, R., Wang, R. and Du, S. S. (2022). Horizon-free reinforcement learning for latent markov decision processes. arXiv preprint arXiv:2210.11604 .
Online learning in episodic markovian decision processes by relative entropy policy search. A Zimin, G Neu, Advances in neural information processing systems 26. Zimin, A. and Neu, G. (2013). Online learning in episodic markovian decision processes by relative entropy policy search. Advances in neural information processing systems 26. |
264,812,826 | WHEN DO PROMPTING AND PREFIX-TUNING WORK? A THEORY OF CAPABILITIES AND LIMITATIONS | Context-based fine-tuning methods, including prompting, in-context learning, soft prompting (also known as prompt tuning), and prefix-tuning, have gained popularity due to their ability to often match the performance of full fine-tuning with a fraction of the parameters. Despite their empirical successes, there is little theoretical understanding of how these techniques influence the internal computation of the model and their expressiveness limitations. We show that despite the continuous embedding space being more expressive than the discrete token space, soft prompting and prefix-tuning are strictly less expressive than full fine-tuning, even with the same number of learnable parameters. Concretely, context-based fine-tuning cannot change the relative attention pattern over the content and can only bias the outputs of an attention layer in a fixed direction. This suggests that while techniques like prompting, in-context learning, soft prompting, and prefix-tuning can effectively elicit skills present in the pretrained model, they cannot learn novel tasks that require new attention patterns. | [
259370753,
253510792,
237416585,
248780043,
257365136,
248780177,
230433941,
254043800,
238583580,
233296808,
233189563,
53080764,
220364230,
248405974,
235458009,
233231453,
52967399,
230437613,
67855860,
199552244
] | WHEN DO PROMPTING AND PREFIX-TUNING WORK? A THEORY OF CAPABILITIES AND LIMITATIONS
Aleksandar Petrov [email protected]
Department of Engineering Science
University of Oxford Oxford
United Kingdom
Philip H S Torr [email protected]
Department of Engineering Science
University of Oxford Oxford
United Kingdom
Adel Bibi [email protected]
Department of Engineering Science
University of Oxford Oxford
United Kingdom
WHEN DO PROMPTING AND PREFIX-TUNING WORK? A THEORY OF CAPABILITIES AND LIMITATIONS
arXiv:2310.19698v1 [cs.LG] 30 Oct 2023
Context-based fine-tuning methods, including prompting, in-context learning, soft prompting (also known as prompt tuning), and prefix-tuning, have gained popularity due to their ability to often match the performance of full fine-tuning with a fraction of the parameters. Despite their empirical successes, there is little theoretical understanding of how these techniques influence the internal computation of the model and their expressiveness limitations. We show that despite the continuous embedding space being more expressive than the discrete token space, soft prompting and prefix-tuning are strictly less expressive than full fine-tuning, even with the same number of learnable parameters. Concretely, context-based fine-tuning cannot change the relative attention pattern over the content and can only bias the outputs of an attention layer in a fixed direction. This suggests that while techniques like prompting, in-context learning, soft prompting, and prefix-tuning can effectively elicit skills present in the pretrained model, they cannot learn novel tasks that require new attention patterns.
INTRODUCTION
Language model advances are largely driven by larger models and more training data (Kaplan et al., 2020; Rae et al., 2021). Training cutting-edge models is out of reach for most academic researchers, small enterprises, and individuals, and it has become common to instead fine-tune open-source pretrained models (Devlin et al., 2019; Min et al., 2021). Yet, due to escalating computational demands, even fine-tuning of the larger models has become prohibitively expensive (Lialin et al., 2023).
As a result, there is an acute need for more efficient fine-tuning methods, either by sparsely modifying the parameters of the model or by modifying its input context. Examples of the first type include adapter modules, which introduce a few trainable layers to modify the behaviour of the frozen pretrained network (Rebuffi et al., 2017; Houlsby et al., 2019; Hu et al., 2023). One can also use low-rank updates, which also result in a reduced number of trainable parameters (Hu et al., 2021).
Context-based fine-tuning has been motivated by the success of few-shot and zero-shot learning (Wei et al., 2021; Kojima et al., 2022). The most popular context-based approach is prompting, where generation is conditioned on either human-crafted or automatically optimized tokens (Shin et al., 2020; Liu et al., 2023). In-context learning (prompting via providing input-label examples) is another widely used technique (Brown et al., 2020). Given the challenges of discrete optimization over tokens, there is a growing interest in methods that optimize real-valued embeddings (Lester et al., 2021). It is widely believed that these soft prompts offer greater expressiveness due to the expansive nature of continuous space. Furthermore, beyond only optimizing input embeddings, one can optimize the inputs of every attention layer (Li and Liang, 2021). This technique, prefix-tuning, has proven to be very successful and competitive with full fine-tuning (Liu et al., 2022).
While context-based fine-tuning approaches have witnessed impressive empirical successes and widespread adoption, we have little theoretical understanding of how they work. In this work, we analyse the influence of prompts and prefixes on the internal computations of a pretrained model and delineate their limitations. Specifically, we address the following questions:
1. Soft prompting and prefix-tuning are motivated by the embedding space being larger than the token space. However, can a transformer utilize the additional capacity? We show that with a careful choice of transformer weights, controlling a single embedding can generate any of the $V^N$ completions of $N$ tokens, while controlling a token can produce only $V$ completions, with $V$ being the vocabulary size. Thus, a transformer can indeed exploit the embedding space.
2. Since prefix-tuning is more expressive than prompting, is it as expressive as full fine-tuning? Despite the expressiveness of continuous space, prefix-tuning has structural limitations. A prefix cannot change the relative attention over the content tokens and can only bias the output of the attention block in a constant direction. In contrast, full fine-tuning can learn new attention patterns and arbitrarily modify attention block outputs, making it strictly more powerful.
3. If context-based fine-tuning methods suffer from such structural limitations, how come they have high empirical performance? We show that the prefix-induced bias can steer the model towards a pretraining task. Prefix-tuning can also combine skills picked up during pretraining to solve some new tasks similar to pretraining tasks. However, it cannot learn a completely new task. This is not simply because of the small number of learnable parameters: fine-tuning the same number of parameters can be sufficient to learn the novel task. Hence, context-based fine-tuning can elicit or combine pretrained model skills but cannot learn completely new behaviors.
BACKGROUND
THE TRANSFORMER ARCHITECTURE
We outline a formulation of a simplified transformer architecture (Vaswani et al., 2017). Assume that the model has vocabulary size $V$ (also referred to as the number of tokens). The input is a sequence $(x_1, \dots, x_p)$, $x_i \in \{1, \dots, V\}\ \forall i$. Each token is mapped to a $d_e$-dimensional vector that is the $x_i$-th column of an embedding matrix $E \in \mathbb{R}^{d_e \times V}$. The attention mechanism is position-invariant, so typically position encodings are added. For a model with maximum input length $N$ (context size), we use a one-hot position encoding $e_N(i)$ concatenated with the embedding. Therefore, the embedding for the $i$-th position provided to the first attention block would be $x_i = [E_{:,x_i}^\top, e_N^\top(i)]^\top$. A transformer consists of alternating attention blocks, which operate across the whole sequence, and Multi-Layer Perceptrons (MLPs), which operate on each individual element. Each attention block consists of $H$ heads. Each head $h$ is parameterized by query, key, and value matrices $W^h_Q, W^h_K \in \mathbb{R}^{k \times d_{in}}$, $W^h_V \in \mathbb{R}^{d_{out} \times d_{in}}$. The attention matrix $A^h \in \mathbb{R}^{p \times p}$ for head $h$ then has elements
$$A^h_{ij} = \frac{\exp\left(\frac{T}{\sqrt{k}} (W^h_Q x_i)^\top (W^h_K x_j)\right)}{\sum_{r=1}^{p} \exp\left(\frac{T}{\sqrt{k}} (W^h_Q x_i)^\top (W^h_K x_r)\right)}, \quad (1)$$
where $p \le N$ is the current length of the input and $T > 0$ is an inverse temperature parameter. Equation (1) is the softmax function, hence with a high enough $T$, it will result in an approximately one-hot encoding of the maximum $j$. The output of the attention block $\mathcal{A}$ with $H$ heads is then $(t_1, \dots, t_p)$, where each position $i$ is the sum of the attention-weighted values across all heads:
$$\mathcal{A}\left[(W^1_Q, \dots, W^H_Q), (W^1_K, \dots, W^H_K), (W^1_V, \dots, W^H_V)\right](x_1, \dots, x_p) = (t_1, \dots, t_p), \qquad t_i = \sum_{h=1}^{H} \sum_{j=1}^{p} A^h_{ij} W^h_V x_j. \quad (2)$$
A transformer then applies an MLP to each output of an attention block before passing them to the next attention block. We will consider linear layers $L[M, b](x) = Mx + b$ and ReLU-activated linear layers $\bar{L}[M, b](x) = \mathrm{ReLU}(Mx + b)$. When we compose attention blocks and linear or softmax layers, we will implicitly assume that the linear layer is applied to all positions of the sequence. Furthermore, we will use the then operator ($\triangleright$) for left-to-right function composition. Therefore, a transformer model predicting confidences over the vocabulary can, for example, be represented as:
$$(y_1, \dots, y_p) = \left(\mathcal{A}_1 \triangleright \bar{L}_{1,1} \triangleright L_{1,2} \triangleright \mathcal{A}_2 \triangleright \bar{L}_{2,1} \triangleright L_{2,2} \triangleright \mathrm{softmax}\right)\left(\begin{bmatrix} E_{:,x_1} \\ e_N(1) \end{bmatrix}, \dots, \begin{bmatrix} E_{:,x_p} \\ e_N(p) \end{bmatrix}\right), \quad (3)$$
where the output dimension of the last layer has to be $V$. The next token for a deterministic transformer is selected to be the largest logit of the last element: $x_{p+1} = \arg\max_{u \in \{1, \dots, V\}} y_{p,u}$. Given an input $(x_1, \dots, x_p)$, the model then autoregressively extends this sequence one token at a time, following Equation (3), either until the sequence reaches length $N$ or until a special termination token is generated.
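As a concrete reference, here is a minimal NumPy sketch (ours; all names and dimensions are illustrative choices, not from the paper's released code) of the attention block defined in Equations (1) and (2):

```python
import numpy as np

def attention_block(X, heads, T=1.0):
    """Multi-head attention following Eqs. (1)-(2).
    X: p x d_in array of inputs; heads: list of (W_Q, W_K, W_V) triples."""
    out = 0.0
    for W_Q, W_K, W_V in heads:
        k = W_Q.shape[0]
        scores = (T / np.sqrt(k)) * (X @ W_Q.T) @ (X @ W_K.T).T  # p x p scores
        A = np.exp(scores - scores.max(axis=1, keepdims=True))   # row-wise softmax, Eq. (1)
        A /= A.sum(axis=1, keepdims=True)
        out = out + A @ (X @ W_V.T)                              # attention-weighted values, Eq. (2)
    return out                                                   # p x d_out

rng = np.random.default_rng(0)
d_in, d_out, k, p, H = 8, 8, 4, 5, 2
heads = [(rng.normal(size=(k, d_in)), rng.normal(size=(k, d_in)),
          rng.normal(size=(d_out, d_in))) for _ in range(H)]
print(attention_block(rng.normal(size=(p, d_in)), heads, T=2.0).shape)  # (5, 8)
```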
A transformer has no separation between the system prompt $S$, the user-provided input $X$, and the autoregressively generated response $Y$. Thus, a sequence conditional on user input is denoted as $(S_1, \dots, S_{n_S}, X_1, \dots, X_{n_X}, Y_1, \dots, Y_{n_Y})$ and one without user input as $(S_1, \dots, S_{n_S}, Y_1, \dots, Y_{n_Y})$.
CONTEXT-BASED FINE-TUNING OF A PRETRAINED MODEL
We now define prompting, soft prompting and prefix-tuning with the previously introduced notation.
Prompting. The most frequently used context-based fine-tuning approach is prompting: prefixing the input $(X_1, \dots, X_{n_X})$ with a token sequence $S \in \{1, \dots, V\}^{n_S}$ to guide the model response: $(S_1, \dots, S_{n_S}, X_1, \dots, X_{n_X})$. This is how most people interact with language models such as ChatGPT.
Soft prompting. Soft prompting replaces the embeddings of the system input $E_{:,S_i}$ with learned vectors $s_i \in \mathbb{R}^{d_e}$ called virtual tokens (Hambardzumyan et al., 2021; Lester et al., 2021; Qin and Eisner, 2021). Hence, the input in Equation (3) is modified to be:
$$\left(\begin{bmatrix} s_1 \\ e_N(1) \end{bmatrix}, \dots, \begin{bmatrix} s_{n_S} \\ e_N(n_S) \end{bmatrix}, \begin{bmatrix} E_{:,X_1} \\ e_N(n_S + 1) \end{bmatrix}, \dots, \begin{bmatrix} E_{:,X_{n_X}} \\ e_N(n_S + n_X) \end{bmatrix}\right) \quad (4)$$
with $s_i$ chosen to maximize the likelihood of a target response $Y = (Y_1, \dots, Y_{n_Y})$, i.e., $\arg\max_{s_1, \dots, s_{n_S} \in \mathbb{R}^{d_e}} \sum_{j=1}^{n_Y} \log y_{n_S + n_X + j, Y_j}$, where the $y_{n_S + n_X + j}$ are autoregressively generated.
Prefix-tuning. Prefix-tuning applies soft prompting across the depth of the model (Li and Liang, 2021; Liu et al., 2022). The first $n_S$ positions for all attention blocks are learnable parameters, replacing the input $(x^l_1, \dots, x^l_{n_X})$ for layer $l$ with $(s^l_1, \dots, s^l_{n_S}, x^l_1, \dots, x^l_{n_X})$, where all $s^l_i$ constitute the prefix. Hence, prefix-tuning can be formulated as $\arg\max_{\{s^1_i, \dots, s^L_i\}_{i=1}^{n_S}} \sum_{j=1}^{n_Y} \log y_{n_S + n_X + j, Y_j}$. Prefix-tuning has been successful at fine-tuning models (Vu et al., 2022; Wu and Shi, 2022; Choi and Lee, 2023; Ouyang et al., 2023; Bai et al., 2023), leading to calls for language models provided as a service (La Malfa et al., 2023) to allow providing prefixes instead of prompts (Sun et al., 2022).
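To make the difference in learnable parameters explicit, here is a minimal PyTorch-style sketch (our illustration; the variable names are hypothetical, and we follow the formulation above rather than any particular library's API):

```python
import torch

d_e, n_S, L = 16, 4, 2   # embedding size, prefix length, number of layers (illustrative)

# Soft prompting: n_S learned virtual tokens replace the system-prompt embeddings (Eq. (4)).
soft_prompt = torch.nn.Parameter(torch.randn(n_S, d_e))

def soft_prompted_input(token_embeddings):           # token_embeddings: n_X x d_e
    return torch.cat([soft_prompt, token_embeddings], dim=0)

# Prefix-tuning: separate learned virtual inputs for *every* attention layer.
prefixes = [torch.nn.Parameter(torch.randn(n_S, d_e)) for _ in range(L)]

def prefix_tuned_layer_input(layer, hidden_states):  # hidden_states: x^l_1..x^l_{n_X}
    return torch.cat([prefixes[layer], hidden_states], dim=0)

# In both cases the pretrained weights stay frozen; only soft_prompt / prefixes receive
# gradients from the log-likelihood of the target response Y.
```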
Any token-based prompt $(S_1, \dots, S_{n_S})$ has a corresponding soft prompt ($s_i = E_{:,S_i}$) but the reverse does not hold. Similarly, every soft prompt $(s_1, \dots, s_{n_S})$ can be represented as a prefix by setting the deeper prefixes to be the values that the model would compute at these positions: $s^l_i = (\mathcal{A}_1 \triangleright \dots \triangleright L_{l-1,-1})\left([s_1^\top, e_N^\top(1)]^\top, \dots, [s_i^\top, e_N^\top(i)]^\top\right)$. The reverse also does not hold: there are prefixes that cannot be represented as a soft prompt. A hierarchy emerges: prompting < soft prompting < prefix-tuning, with prefix-tuning the most powerful of the three. Hence, we focus on examining its performance relative to full fine-tuning, but our findings also apply to prompting and soft prompting.
SOFT PROMPTING HAS MORE CAPACITY THAN PROMPTING
The success of soft prompting (and by extension, prefix-tuning) is commonly attributed to the larger capacity of the uncountably infinite embedding space compared to the finite token space. Yet, increased capacity from this larger domain is beneficial only if the model can utilize it. This section confirms that this is indeed the case by constructing a transformer that can generate exponentially more completions by varying a single virtual token than by varying a hard token.
Consider unconditional generation (representing a function with no inputs) with a single system token: $(Y_1, \dots, Y_N) = f(S_1) = f_{S_1}$. For a deterministic autoregressive function, there are a total of $V$ functions in this family, hence the upper bound on the number of outputs of length $N$ that one can generate by varying the first token $S_1$ is $V$: the first token fully determines the rest of the sequence. Generally, if one varies the first $n_S$ tokens, there are at most $V^{n_S}$ unique outputs. What if instead of the token $S_1$ we vary a single virtual token $s_1$: $(Y_1, \dots, Y_N) = f(s_1) = f_{s_1}$? This family of functions is indexed by a real vector and hence is infinite: in principle, one could generate all $V^N$ possible output sequences by only controlling $s_1$. Still, that does not mean that the transformer architecture can represent a function that achieves that in practice, i.e., it is not obvious if there is a surjective map from $\{f_{s_1} : s_1 \in \mathbb{R}^{d_e}\}$ to $\{1, \dots, V\}^N$. We demonstrate that, in fact, there is a transformer $f$ for which such a surjective map exists:

Theorem 1 (Exponential unconditional generation capacity of a single virtual token). For any $V, N > 0$, there exists a transformer with vocabulary size $V$, context size $N$, embedding size $d_e = N$, one attention layer with two heads and a three-layer MLP such that it generates any token sequence $(Y_1, \dots, Y_N) \in \{1, \dots, V\}^N$ when conditioned on the single virtual token $s_1 = \left(\frac{Y_1 - 1}{V}, \dots, \frac{Y_N - 1}{V}\right)$.
However, conditional generation is more interesting: given a user input $(X_1, \dots, X_{n_X})$, we want to generate a target response $(Y_1, \dots, Y_{n_Y})$. Even in the simple case of one system token, where the user provides one token and the model generates one token in response ($Y_1 = f(S_1, X_1) = f_{S_1}(X_1)$), we cannot control the response of the model to any user input with the system token. There are $V^V$ maps from $X_1$ to $Y_1$, but $S_1$ can take on only $V$ values: $|\{f_{S_1} : S_1 \in \{1, \dots, V\}\}| = V < V^V$. Hence, tokens cannot be used to specify an arbitrary map from user input to model output. However, a single virtual token can specify any of the $V^V$ maps, i.e., there exists a transformer $f_{s_1}(X_1)$ for which there is a surjective map from $\{f_{s_1} : s_1 \in \mathbb{R}^{d_e}\}$ to $\{1, \dots, V\}^{\{1, \dots, V\}}$.

Theorem 2 (Conditional generation capacity for a single virtual token ($n_X = n_Y = 1$)). For any $V > 0$, there exists a transformer with vocabulary size $V$, context size $N = 2$, embedding size $d_e = V$, one attention layer with two heads and a three-layer MLP that reproduces any map $m: \{1, \dots, V\} \to \{1, \dots, V\}$ from a user input token to a model response token when conditioned on a single virtual token $s_1 = \left(\frac{m(1)}{V}, \dots, \frac{m(V)}{V}\right)$. That is, by selecting $s_1$ we control the model response to any user input.
Theorem 2 builds on Theorem 1 by showing that soft prompting is also more expressive for governing the conditional behavior of a transformer model. This also holds for longer responses $n_Y > 1$, by increasing the length of the soft prompt, or longer user inputs $n_X > 1$, by increasing the depth of the model. We provide proofs in Appendix A, as well as working Python implementations. This section showed that soft prompting, and by implication, prefix-tuning, possess greater expressiveness than prompting. As we can fully determine the map from user input to model response using virtual tokens, our findings may appear to suggest that soft prompting is as powerful as full fine-tuning. However, this is not at all the case. There are structural constraints on the capabilities of soft prompting and prefix-tuning; they cannot facilitate the learning of an entirely new task. The following section elucidates this discrepancy and reconciles these seemingly contradictory results.
PREFIX-TUNING CAN ONLY BIAS THE OUTPUT OF AN ATTENTION HEAD
We just saw that soft prompting and prefix-tuning can fully control the conditional behavior of a transformer. However, that assumed a specific design for the network weights. Given a fixed pretrained model, as opposed to a manually crafted one, can prefix-tuning be considered equally powerful to full fine-tuning? In this section, we show that a prefix $S$ cannot change the relative attention over the content $X, Y$ and can only bias the attention block outputs in a subspace of rank $n_S$, the length of the prefix, making it strictly less powerful than full fine-tuning.
While full fine-tuning can alter the attention pattern of an attention head, prefix-tuning cannot. Recall the attention $A_{ij}$ that position $i$ gives to position $j$ for a trained transformer (Equation (1)):
$$A_{ij} = \frac{\exp\left(\frac{T}{\sqrt{k}} x_i^\top W_Q^\top W_K x_j\right)}{\sum_{r=1}^{p} \exp\left(\frac{T}{\sqrt{k}} x_i^\top W_Q^\top W_K x_r\right)} = \frac{\exp\left(\frac{T}{\sqrt{k}} x_i^\top H x_j\right)}{\sum_{r=1}^{p} \exp\left(\frac{T}{\sqrt{k}} x_i^\top H x_r\right)}, \quad (5)$$
where $H = W_Q^\top W_K$. Full fine-tuning can enact arbitrary changes to $W_Q$ and $W_K$ and hence, assuming the input does not change (e.g., at the first attention layer), we get the following attention:

$$A^{\mathrm{ft}}_{ij} = \frac{\exp\left(\frac{T}{\sqrt{k}} x_i^\top H x_j + \frac{T}{\sqrt{k}} x_i^\top \Delta H x_j\right)}{\sum_{r=1}^{p} \exp\left(\frac{T}{\sqrt{k}} x_i^\top H x_r + \frac{T}{\sqrt{k}} x_i^\top \Delta H x_r\right)},$$

where the changes to $W_Q$ and $W_K$ are folded into $\Delta H$. It is clear that by varying $\Delta H$, full fine-tuning can change the attention patterns arbitrarily. However, let us see how the attention is affected by the presence of a prefix. For now, assume we have a prefix of length one ($s_1$) at position 0.
$$A^{\mathrm{pt}}_{i0} = \frac{\exp\left(\frac{T}{\sqrt{k}} x_i^\top H s_1\right)}{\exp\left(\frac{T}{\sqrt{k}} x_i^\top H s_1\right) + \sum_{r=1}^{p} \exp\left(\frac{T}{\sqrt{k}} x_i^\top H x_r\right)}, \qquad A^{\mathrm{pt}}_{ij} = \frac{\exp\left(\frac{T}{\sqrt{k}} x_i^\top H x_j\right)}{\exp\left(\frac{T}{\sqrt{k}} x_i^\top H s_1\right) + \sum_{r=1}^{p} \exp\left(\frac{T}{\sqrt{k}} x_i^\top H x_r\right)} \quad \text{for } j \ge 1.$$
The numerator of $A^{\mathrm{pt}}_{ij}$ is the same as in Equation (5), i.e., the prefix does not affect it. It only adds the term $\exp(\frac{T}{\sqrt{k}} x_i^\top H s_1)$ to the denominator. Therefore, the attention position $i$ gives to the content positions $j \ge 1$ is simply scaled down by the attention it now gives to the prefix. If tomato attends the most to salad in a particular context, no prefix can change that. This becomes evident by rewriting $A^{\mathrm{pt}}_{ij}$ as the attention of the pretrained model scaled by the attention "stolen" by the prefix:
$$A^{\mathrm{pt}}_{ij} = A_{ij} \sum_{r=1}^{p} A^{\mathrm{pt}}_{ir} = A_{ij} \left(1 - A^{\mathrm{pt}}_{i0}\right). \quad (6)$$
Hence, prefix-tuning cannot affect the relative attention patterns across the content; it will only scale them down. In other words, one cannot modify what an attention head attends to via prefix-tuning.
Prefix-tuning only adds a bias to the attention block output. Let us see how this attention scaling down affects the output of the attention block. Following Equation (2), the outputs at position $i$ for the pretrained ($t_i$), the fully fine-tuned ($t^{\mathrm{ft}}_i$) and the prefix-tuned ($t^{\mathrm{pt}}_i$) models are as follows:
$$t_i = \sum_{j=1}^{p} A_{ij} W_V x_j, \qquad t^{\mathrm{ft}}_i = \sum_{j=1}^{p} A^{\mathrm{ft}}_{ij} (W_V + \Delta W_V) x_j,$$
$$t^{\mathrm{pt}}_i = A^{\mathrm{pt}}_{i0} W_V s_1 + \sum_{j=1}^{p} A^{\mathrm{pt}}_{ij} W_V x_j \overset{(6)}{=} A^{\mathrm{pt}}_{i0} W_V s_1 + \sum_{j=1}^{p} A_{ij} \left(1 - A^{\mathrm{pt}}_{i0}\right) W_V x_j = A^{\mathrm{pt}}_{i0} W_V s_1 + \left(1 - A^{\mathrm{pt}}_{i0}\right) t_i. \quad (7)$$
Hence, prefix-tuning only biases the pretrained attention block value at each position $i$ towards the constant vector $W_V s_1$, which is independent of the content $(x_1, \dots, x_p)$. The content only affects the scale $A^{\mathrm{pt}}_{i0}$ of the bias via the amount of attention on the prefix. This is in contrast with full fine-tuning, where $\Delta W_Q$, $\Delta W_K$ and $\Delta W_V$ allow for a content-dependent change of the attention and value computation. Note that these results also hold for suffix-tuning (placing the prefix after the input) but not for suffix soft-prompting.
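Since Equations (6) and (7) are exact algebraic identities, they can be checked numerically. The following NumPy sketch (ours) verifies both for a random single-head layer and a length-one prefix:

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, p, T = 4, 6, 5, 1.0
W_Q, W_K = rng.normal(size=(k, d)), rng.normal(size=(k, d))
W_V = rng.normal(size=(d, d))
X = rng.normal(size=(p, d))        # content positions x_1..x_p
s1 = rng.normal(size=d)            # a prefix of length one at position 0

def attn(queries, keys):
    scores = (T / np.sqrt(k)) * (queries @ W_Q.T) @ (keys @ W_K.T).T
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    return A / A.sum(axis=1, keepdims=True)

A = attn(X, X)                     # pretrained attention, Eq. (5)
Xp = np.vstack([s1, X])            # keys/values with the prefix prepended
A_pt = attn(X, Xp)                 # column 0 is the attention on the prefix

# Eq. (6): the prefix only rescales the pretrained attention pattern.
assert np.allclose(A_pt[:, 1:], A * (1 - A_pt[:, :1]))

# Eq. (7): the output is the pretrained output plus a bias along W_V s_1.
t = A @ X @ W_V.T
t_pt = A_pt @ Xp @ W_V.T
assert np.allclose(t_pt, A_pt[:, :1] * (W_V @ s1) + (1 - A_pt[:, :1]) * t)
print("Eqs. (6) and (7) hold")
```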
We validate that this indeed is the case when prefix-tuning real-world transformers. In Figures 6 and 7, we show that a prefix applied to LLaMA's first layer does not change the relative attention distribution over the content positions $X$ and results in a bias with a constant direction.
Longer prefixes define larger subspaces for the bias but are not fully utilized in practice. In the case of a longer prefix $(s_1, \dots, s_{n_S})$, the bias vector is in a subspace of dimensionality $n_S$:

$$t^{\mathrm{pt}}_i = \sum_{j=1}^{n_S} A^{\mathrm{pt}}_{i,S_j} W_V s_j + \left(1 - \sum_{j=1}^{n_S} A^{\mathrm{pt}}_{i,S_j}\right) t_i,$$

where $i$ goes over the content and $j$ over the prefix positions. Larger prefixes thus have a larger subspace in which to modify the attention block output. The specific direction is determined by the relative distribution of attention across the prefix positions. However, when we examine the distribution of attention across the prefix positions for various inputs, as in Appendix B, it appears that the prefixes do not span this subspace. Regardless of the input, the attention $A^{\mathrm{pt}}_{i,S_j}$ over the prefix positions remains nearly constant. Thus, prefix-tuning does not seem to make full use of the space that the vectors $W_V s_j$ span. We hypothesise that this is due to the two competing optimization goals for the vectors $s_j$: at the same time they need to "grab attention" when interacting with $W_K$ and determine the bias direction when multiplied with $W_V$.
So, is prefix-tuning equivalent to full fine-tuning, or is it less powerful than full fine-tuning? In Section 3, we showed that prefix-tuning, in principle, has a large capacity to influence the behavior of the model. But then, in this section, we showed that it has some severe limitations, including not being able to affect the attention pattern and only biasing the attention layer activations. These two results seem to be contradicting one another, so how do we reconcile them?
The constructions for the results in Section 3 (described in Appendix A) are simply an algorithm that extracts the completion from a lookup table encoded in the virtual tokens.

Figure 1: Attention patterns of a small transformer pretrained on sorting in ascending order. The model is given the prefix $S$ and user input $X$ and generates $Y$ autoregressively. We have highlighted the attention when the first response token $Y_1$ is being generated. Panel labels: Sorted ascending, Sorted descending, Not sorted. In-figure annotations: the pretrained model has seen only sorting in ascending order, so its single attention head first looks at the smaller numbers and then at the larger ones; full fine-tuning can change the query and key matrices and hence can lead to new attention patterns, in this case modifying the model to first look at the larger numbers; prefix-tuning, on the other hand, cannot change those matrices or the attention pattern and can only "steal" some attention, which is why the model still focuses on the zeros, as with the pretrained model. Full fine-tuning sorts in descending order but prefix-tuning cannot, as it cannot update the learned attention. Note how the relative attention of $X$ to $X$ in the left and right plots is exactly the same: the prefix cannot change the attention pattern for the same inputs. The relative attention of $X$ to $X$ in the center plot is very different because full fine-tuning can arbitrarily change $W_Q$ and $W_K$.

The attention patterns
are simply extracting the current position embedding and the virtual token, and hence the attention does not depend on the actual content of the tokens. There is no need to learn a new attention pattern to learn a different map from input to output. Furthermore, the virtual token designates the map precisely by acting as a bias. Therefore, the observations in these two sections do not contradict one another. Soft prompting and prefix-tuning can be on par with full fine-tuning but only in very limited circumstances: when all the knowledge is represented in the virtual token as a lookup table and the model simply extracts the relevant entry. Transformers do not behave like this in practice. Models are typically trained with token inputs rather than virtual tokens. Moreover, if we had a lookup table of the responses to each input, we would not need a learning algorithm in the first place.
Therefore, the limitations from this section hold for real-world pretrained transformers. Then how come prefix-tuning has been reported to achieve high accuracy and often to be competitive with full fine-tuning? The next section aims to explain when and why prefix-tuning can work in practice.
THE BIAS CAN ELICIT SKILLS FROM THE PRETRAINED MODEL
Pretraining potentially exposes a model to different classes of completions for the same token sequence. For a string like "I didn't enjoy the movie", the model may have seen completions such as "I found the acting to be sub par", "This is negative sentiment" or "Je n'ai pas aimé le film". Hence, a pretrained model could do text completion, sentiment analysis, or translation. However, the input does not fully determine the desired completion type and the model can generate any one of them. Hence, following our results from Section 4, we hypothesise that prefix-tuning cannot be used to gain new knowledge but can bring to the surface latent knowledge present in the pretrained model.

We test this hypothesis by constructing small transformers trained on one or a few tasks. We use a minimal transformer model (Karpathy, 2020) to show that prefix-tuning cannot learn a new task that full fine-tuning can, then that it can easily elicit a latent skill from pretraining, and finally how it can even learn some new tasks, provided they can be solved by combining pretraining skills.

Prefix-tuning cannot learn a new task requiring a different attention pattern. To check if prefix-tuning can learn a new task, we train a 1-layer, 1-head transformer to sort numbers into ascending order and then fine-tune it to sort in descending order (a minimal data sketch is given at the end of this subsection). During training, the model sees random sequences of 10 digits from 0 to 7 followed by their ascending sorted order. The pretrained accuracy (fully matching the sorted sequence) is 91%. Full fine-tuning on the descending task leads to 85% test accuracy, hence full fine-tuning successfully learns the new task. However, prefix-tuning with a prefix size $n_S = 1$ results in 0% accuracy, hence prefix-tuning fails to learn the new task at all.

The attention patterns in Figure 1 show why this is the case: the pretrained model learns to attend first to the smallest numbers and then to the larger ones. When fully fine-tuned, the attention patterns are reversed: they now first attend to the largest values. However, following Section 4, prefix-tuning cannot change the attention pattern over the input sequence and will still attend to the smallest values. Therefore, prefix-tuning indeed cannot learn a new task if it requires new attention patterns.

Prefix-tuning can elicit a skill from the pretrained model. The second part of our hypothesis was that prefix-tuning can elicit latent skills in the pretrained model. To test that, we pretrain a 1-layer, 4-head model with solutions sorted in ascending (↗) or descending (↘) order, or adding one (+1) or two (+2) to each element of the input sequence. Each solution is shown with 25% probability. The model has no indication of what the task is, hence, it assigns equal probability to all tasks, as shown in the first row in Table 2. Full fine-tuning for each task naturally results in high accuracy. However, prefix-tuning can also reach very high accuracy for all tasks: above 90%. Compared to the previous case, prefix-tuning is more successful here because the pretrained model contains the attention mechanisms for solving the four tasks, as shown in Figure 2.
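For concreteness, a minimal sketch (ours; the authors' exact data pipeline may differ) of the synthetic sorting data used in these experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_example(task="ascending", length=10, vocab=8):
    x = rng.integers(0, vocab, size=length)               # random digits from 0 to vocab-1
    y = np.sort(x) if task == "ascending" else np.sort(x)[::-1]
    return np.concatenate([x, y])                         # model completes x with y

print(make_example("ascending"))    # pretraining examples
print(make_example("descending"))   # fine-tuning examples for the new task
```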
If all a prefix does is bias the attention layer activations, how can it steer the model to collapse its distribution onto one task? This is likely due to the attention block solving all tasks in parallel and placing their solutions in different subspaces of the residual stream (intermediate representation, Elhage et al., 2021). As the MLP needs to select one solution to generate, a further indicator of the selected task (or lack of selection thereof) should also be represented. The bias induced by the prefix then acts on this "selection subspace" to nudge the MLP to select the desired solution.
This can be clearly seen from the activations of the attention layer at the last input position ($X_{n_X}$). This is the position where the task selection happens, as the first output element fully describes the task. Figure 3 shows plots of randomly selected dimensions of the residual stream with and without a prefix. The attention block activations of the pretrained model (without prefix) show no correlation with the output it is about to generate, demonstrating that the choice of completion is indeed random and is not determined by the attention block. However, the prefix-tuned activations for the same inputs are clustered as a result of the prefix-induced bias. This shows that the bias induced by the prefix acts as a "task selector" of the subspace of the residual stream specializing in the desired task.
Prefix-tuning can combine knowledge from pretraining tasks to solve new tasks. Prefix-tuning eliciting one type of completion learned in pretraining starts to explain its practical utility. Still, prefix-tuning seems to be successful also at tasks that the pretrained model has not seen. As we showed above, a model trained to sort in one order cannot be prefix-tuned to sort in the other. Then how is it possible for prefix-tuning to learn a new task? We posit that this can happen as long as the "skill" required to solve the new task is a combination of "skills" the pretrained model has seen. We test this by pretraining a 4-layer 4-head model with the same four tasks. We prefix-tune ($n_S = 10$) for two new tasks: incrementing the ascending sorted sequence (↗+1) and double histogram (mapping each element to the number of elements in the sequence with the same value, H). The pretrained model has not seen either task. Prefix-tuning results in 93% accuracy for ↗+1, which is a combination of the ↗ and +1 pretraining tasks, and just 1% for the H task, which requires different skills: finding other instances of the same token and counting. Note that H is not a hard task: it requires 2 layers and 2 heads to be solved exactly (Weiss et al., 2021). Therefore, prefix-tuning can indeed combine skills that the model has learned in order to solve a novel task but cannot learn a completely new task requiring new skills.
EFFECTS OF PREFIX-TUNING BEYOND THE SINGLE ATTENTION LAYER
Section 4 focused exclusively on a single attention layer. However, even if a prefix can only induce a bias on its output, this bias, when passed through the following MLP and subsequent attention layers, can exhibit increasingly complex behaviors. While it is not immediately obvious that the limitations discussed above translate to deeper networks, we provide evidence to this end. Section 6.1 shows that a prefix can change the attention pattern of the following attention layer, but only in a linear fashion, while full fine-tuning can also enact bilinear effects. Section 6.2 shows that while prefix-tuning can be thought of as neural network training, the representational capacity of the resulting architecture is severely limited. Therefore, prefix-tuning appears to be less expressive than full fine-tuning, even when both methods are given the same number of learnable parameters.
PREFIX-TUNING CAN CHANGE THE ATTENTION, ALBEIT THAT OF THE NEXT LAYER
Let us examine how the prefix of one attention layer affects the following attention layer. For simplicity, assume there are no MLPs between the attention layers, residual connections or layer norms: the output $t^{(1)}_i$ of the first layer is the input $x^{(2)}_i$ of the second. The pretrained outputs are

$$t^{(1)}_i = \sum_{j=1}^{p} A^{(1)}_{ij} W^{(1)}_V x^{(1)}_j,$$

resulting in the second-layer attention $\tilde{A}^{(2)}_{ij} = \frac{T}{\sqrt{k}}\, t^{(1)\top}_i H^{(2)} t^{(1)}_j$. Here $\tilde{A}_{ij}$ is the pre-softmax attention, i.e., $A_{ij} = \exp(\tilde{A}_{ij}) / \sum_{r=1}^{p} \exp(\tilde{A}_{ir})$. For prefix-tuning we then have:
$$t^{\mathrm{pt}(1)}_i = A^{\mathrm{pt}(1)}_{i0} W^{(1)}_V s^{(1)}_1 + \sum_{j=1}^{p} A^{\mathrm{pt}(1)}_{ij} W^{(1)}_V x^{(1)}_j \overset{(7)}{=} \underbrace{A^{\mathrm{pt}(1)}_{i0}}_{\alpha_i} \underbrace{W^{(1)}_V s^{(1)}_1}_{\mu} + \left(1 - A^{\mathrm{pt}(1)}_{i0}\right) t^{(1)}_i,$$

$$\tilde{A}^{\mathrm{pt}(2)}_{ij} = \frac{T}{\sqrt{k}}\, t^{\mathrm{pt}(1)\top}_i H^{(2)} t^{\mathrm{pt}(1)}_j = \frac{T}{\sqrt{k}} \Big( \underbrace{\alpha_i \alpha_j\, \mu^\top H^{(2)} \mu}_{\text{constant}} + \underbrace{\alpha_j (1 - \alpha_i)\, t^{(1)\top}_i H^{(2)} \mu}_{\text{depends only on } t^{(1)}_i} + \underbrace{\alpha_i (1 - \alpha_j)\, \mu^\top H^{(2)} t^{(1)}_j}_{\text{depends only on } t^{(1)}_j} + \underbrace{(1 - \alpha_i)(1 - \alpha_j)\, t^{(1)\top}_i H^{(2)} t^{(1)}_j}_{\text{pretrained attention } \tilde{A}^{(2)}_{ij}} \Big).$$
The presence of $\mu$ shows that the prefix of layer 1 can change the attention pattern of the following layer. This change is content-specific: the second and the third terms depend on the inputs, hence a simple bias can affect the attention when passed through MLPs and further attention blocks.
Compare with Equation (6), which showed that a prefix cannot change the attention of the same layer.

Figure 4: The prefix parameters $s^{(1)}, s^{(2)}, \dots$ have limited interaction with the inputs $x_1$ and $x_2$. The interaction of the prefix parameters with each input is only via the scalar attention, shown here with a light connection. The mixing of information between the inputs happens via residual connections with the pretrained fixed feature extraction and hence is not learnable. The MLP is also fixed and hence only acts as a multivariate activation function. This limited interaction explains why prefix-tuning struggles to learn new tasks even in deeper models.
Still, even considering this cross-layer effect, prefix-tuning is more limited in its expressiveness than full fine-tuning. While the second and the third terms are input-dependent, each depends on one input position only. The prefix does not change the bilinear dependency on both the query and the key. This is something that full fine-tuning can achieve:

$$\tilde{A}^{\mathrm{ft}(2)}_{ij} = \frac{T}{\sqrt{k}}\, t^{\mathrm{ft}(1)\top}_i \left(H^{(2)} + \Delta H^{(2)}\right) t^{\mathrm{ft}(1)}_j.$$
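The four-term decomposition of $\tilde{A}^{\mathrm{pt}(2)}_{ij}$ above is also an exact identity; the following NumPy sketch (ours) checks it for random $\mu$, $\alpha_i$ and $t^{(1)}_i$, taking $\frac{T}{\sqrt{k}} = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 6, 4
H2 = rng.normal(size=(d, d))            # H^(2) of the second layer
t1 = rng.normal(size=(p, d))            # pretrained first-layer outputs t^(1)_i
mu = rng.normal(size=d)                 # mu = W_V^(1) s_1^(1), the prefix-induced bias
alpha = rng.uniform(0.05, 0.5, size=p)  # alpha_i: attention position i gives the prefix

t1_pt = alpha[:, None] * mu + (1 - alpha)[:, None] * t1       # Eq. (7) per position
lhs = t1_pt @ H2 @ t1_pt.T                                    # pre-softmax attention

const = np.outer(alpha, alpha) * (mu @ H2 @ mu)               # constant term
row = ((1 - alpha) * (t1 @ H2 @ mu))[:, None] * alpha[None, :]     # depends only on t^(1)_i
col = alpha[:, None] * ((mu @ H2 @ t1.T) * (1 - alpha))[None, :]   # depends only on t^(1)_j
pre = np.outer(1 - alpha, 1 - alpha) * (t1 @ H2 @ t1.T)       # pretrained attention term
assert np.allclose(lhs, const + row + col + pre)
print("decomposition verified")
```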
EXPRESSIVITY OF PREFIX-TUNING ACROSS DEEPER MODELS
We considered the effect of the presence of the prefix in the first attention layer on the attention of the second. However, the effects become more complex as one adds more attention layers. We also ignored the MLPs in between, but in practice they can play an important role. Here, instead, we analyse prefix-tuning as learning a neural network. We argue that while the resulting architecture includes both linear operations and non-linear activations, the structure is unlikely to learn efficiently.
For simplicity, we will consider two inputs $x_1$ and $x_2$ and a single prefix $s$. The output of the attention head, parameterized by $s$, is then:
$$\mathcal{A}_s(x_1, x_2) = \langle y_1, y_2 \rangle, \qquad y_1 = \frac{\exp(x_1^\top H s)\, W_V s + \exp(x_1^\top H x_1)\, W_V x_1 + \exp(x_1^\top H x_2)\, W_V x_2}{C_1}, \quad (8)$$
$$y_2 = \frac{\exp(x_2^\top H s)\, W_V s + \exp(x_2^\top H x_1)\, W_V x_1 + \exp(x_2^\top H x_2)\, W_V x_2}{C_2},$$
where we have omitted the $\frac{T}{\sqrt{k}}$ factors and have folded the softmax normalization into $C_1$ and $C_2$. The layer inputs, pretrained parameters and learnable parameters are correspondingly highlighted. The attention head is clearly a non-linear operation. However, the learnable parameter $s$ participates only in the left term. It interacts with only one of the inputs at a time and only by computing a single scalar value $x_i^\top H s$. As we discussed above, the interaction between $x_1$ and $x_2$ is non-trainable and can be thought of as hard-coded feature extraction. Each of the outputs is then passed to a pretrained MLP, which can be thought of as an activation function. This would be a multivariate activation function, which, while unusual in contemporary practice, has been studied before (Solazzi and Uncini, 2004). Figure 4 illustrates the computation graph of the resulting neural network and shows that the only learnable interaction between the inputs is indirect. Therefore, prefix-tuning can be considered as learning a neural network where the only interaction between the inputs happens via non-learnable residual connections. Nevertheless, the alternating linear and non-linear operations are reminiscent of the standard neural network architecture and its universal approximation properties (Hassoun, 1995). That begs the question of whether the prefix-tuning architecture can be a universal approximator and whether it would be a parameter-efficient one.
An example of prefix-tuning failing to be a universal approximator. While we leave the formal analysis of the representational capacity of prefix-tuning as future work, we provide an example of pretrained parameters for which the architecture is not a universal approximator. As can be seen in Figure 4, all information must pass through the non-learnable MLPs. Thus, if the MLPs destroy all input information, there is no value for the prefixes $s^{(1)}, s^{(2)}, \dots$ that can change that fact. The MLPs can destroy all information if, e.g., one of their linear layers has a zero weight matrix. While this is unlikely to be learned with typical pretraining, it demonstrates that if prefix-tuning could be a universal approximator, that would pose specific requirements on the pretrained MLPs, and it is not clear whether pretraining would satisfy these requirements.
Even if prefix-tuning could be a universal approximator, it would not be parameter-efficient. Whether prefix-tuning is a universal approximator is not of practical relevance, as even if it were, it would be much less parameter-efficient than other comparable approaches. While Equation (8) and Figure 4 already hint at that, we designed an experiment to this end. Recall that our pretrained model in Section 5 failed to learn the double histogram task (H) as it required novel attention patterns. A rank-1 Low-Rank Adaptation (LoRA, Hu et al., 2021) applied only to the MLPs in a 4-layer 4-head model pretrained in the exact same way results in 92% accuracy on the H task. The number of parameters for the LoRA fine-tuning is exactly the same as for a prefix of size 12. However, as can be expected from the results in Section 5, training this prefix results in 0% accuracy. Therefore, prefix-tuning fails to learn a task that LoRA with the same number of parameters can learn.
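For reference, a rank-1 LoRA update to a frozen linear layer can be sketched as follows (our minimal illustration; real LoRA implementations additionally use scaling and zero-initialization of one factor):

```python
import torch

d_in, d_out = 16, 16
W = torch.randn(d_out, d_in)                  # frozen pretrained weight
a = torch.nn.Parameter(torch.randn(d_in))     # rank-1 factors: only a and b are trained,
b = torch.nn.Parameter(torch.randn(d_out))    # i.e., d_in + d_out learnable parameters

def lora_linear(x):                           # x: batch x d_in
    return x @ W.T + (x @ a)[:, None] * b     # (W + b a^T) x without materializing b a^T

print(lora_linear(torch.randn(3, d_in)).shape)  # torch.Size([3, 16])
```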
In conclusion, while prefix-tuning, prompting, soft prompting and in-context learning have complex effects in deeper models, the interaction of the learnable parameters and the model inputs likely still results in very limited expressiveness. In particular, we demonstrated that LoRA can be used to learn a completely new task while prefix-tuning with the exact same number of parameters cannot.
DISCUSSION AND RELATED WORKS
Understanding fine-tuning and prefix-tuning. Prior works show that prefixes have low intrinsic dimension, allowing transfer to similar tasks and initialization of prefixes for new tasks (Qin et al., 2021; Su et al., 2022; Zhong et al., 2022; Wang et al., 2022b; Zheng et al., 2023). In this work, we offered theoretical insights into their results: this subspace is the span of the prefix-induced bias. Another line of work shows that skills can be localized in the parameter space of pretrained models (Wang et al., 2022a; Panigrahi et al., 2023). Here, we showed that it is also possible to identify subspaces of the residual stream corresponding to individual tasks and select them via prefix-tuning.
Prompting and in-context learning. Prompting and in-context learning are a special case of prefix-tuning. Therefore, the limitations and mechanisms discussed in this work apply to prompting as well: prompts cannot change the distribution of attention of the first attention layer over the content following it and can only induce a bias on the output of this layer (Section 4). Even considering the cross-layer effects, a prompt is strictly less expressive than full fine-tuning (Section 6), and prompting is unlikely to enable the model to solve a completely new task. Our theory thus explains why Kossen et al. (2023) observed that in-context examples cannot overcome pre-training skills.
While context-based fine-tuning approaches cannot learn arbitrary new tasks, as shown in Section 5, they can leverage pre-trained skills. For instance, transformers can learn linear models in-context by mimicking gradient descent (Von Oswald et al., 2023) or by approximating matrix inversion (Akyürek et al., 2022). This is consistent with our theory: the prediction updates are enacted as biases in the activations of the attention block. Hence, despite the limitations discussed in this work, context-based methods can result in powerful fine-tuning if the pretrained model has "transferable skills" such as algorithmic fundamentals. Still, there are non-algorithmic applications where our theory predicts that in-context learning will fail, for example, translating to a language that the model has never seen before, even if a large number of translation pairs is provided in-context.
Implications for catastrophic forgetting and model alignment. The lack of expressiveness of context-based fine-tuning can be a feature: desirable properties will be maintained. Full fine-tuning can result in catastrophic forgetting (He et al., 2021b; Luo et al., 2023; Mukhoti et al., 2023). Our theory shows that context-based methods will not destroy pretrained skills. Model alignment poses the reverse problem: ensuring that the model cannot pick up undesirable skills during fine-tuning.
Our results show that prompting and prefix-tuning cannot steer the model towards new adversarial behaviors. Hence, the recent successes in adversarial prompting (Zou et al., 2023) indicate that current model alignment methods just mask the undesirable skills rather than removing them.
Implications for model interpretability. An open question for language model interpretability is whether attention is sufficient for explainability (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019). Section 5 points toward the negative: by interfering in the output of the attention layer with the bias induced by a prefix, we can change the behavior of the model without changing its attention. On the flip side, prefix-tuning can be used to understand what "skills" a model has: if prefix-tuning for a task fails, then the model likely lacks one of the key "skills" for that task.
Limitations. The present analysis is largely limited to prefixing with prompts, soft prompts and prefix-tuning. While our theoretical results hold for suffix-tuning, they do not necessarily apply to suffixing with prompts or soft prompts. That is because the deeper representations for prompt and soft prompt suffixes would depend on the previous positions. This does not apply to suffix-tuning, as it fixes all intermediate representations. Therefore, whether suffixing is more expressive than prefixing remains an open question. Separately, while we provided evidence towards context-based fine-tuning methods being parameter-inefficient learners, the formal analysis of the conditions under which they may be universal approximators remains an open question. Finally, we mostly considered simple toy problems. In practice, however, language models are pretrained on very large datasets and can pick up very complex behaviors. Hence, the extent to which the limitations we demonstrated apply to large-scale pretrained transformers also remains for future work.
CONCLUSION
This paper formally showed that fine-tuning techniques working in embedding space, such as soft prompting and prefix-tuning, are strictly more expressive than prompting, which operates in the discrete token space. However, we then demonstrated that despite this larger expressivity, prefix-tuning suffers from structural limitations that prevent it from learning new attention patterns. As a result, it can only bias the output of the attention layer in a direction from a subspace of rank equal to the size of the prefix. We showed that this results in practical limitations by constructing minimal transformers where prefix-tuning fails to solve a simple task. This result seems to be at odds with the empirical success of prefix-tuning. We provided explanations towards that. First, we showed that prefix-tuning can easily elicit a skill the pretrained model already has and can even learn a new task if it has picked up the skills to solve it during pretraining. Second, we showed that the effect of the prefix-induced bias is more complicated and powerful when combined with downstream non-linear operations. However, it appears to be still strictly less expressive than full fine-tuning.
REPRODUCIBILITY STATEMENT
In order to facilitate the reproduction of our empirical results, the validation of our theoretical results, and further study of the properties of context-based fine-tuning, we release all our code and resources used in this work. Furthermore, in Appendix A we offer explicit constructions of transformers with the properties discussed in Section 3. We also provide Python implementations of these constructions that validate their correctness.
A CONSTRUCTING TRANSFORMERS THAT UTILIZE THE CAPACITY OF THE EMBEDDING SPACE
A.1 UNCONDITIONAL GENERATION FOR A SINGLE VIRTUAL TOKEN
This section provides an explicit construction of a transformer with the properties described in Theorem 1. The goal is to construct a transformer that, by varying the choice of the virtual token, can generate any sequence of $N$ tokens.
First, we need to specify how we encode the target sequence $(Y_1, \ldots, Y_N)$ into the virtual token $s_1$. We choose the size of the embedding (and hence of $s_1$) to be $N$. This way, each element of $s_1$ can represent one position of the target sequence. We then represent the token value by discretizing each element of $s_1$ into $V$ levels:
$$s_1 = \left( \frac{Y_1 - 1}{V}, \ldots, \frac{Y_N - 1}{V} \right).$$
Note that this means that each element of $s_1$ is in $[0, 1)$.
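For concreteness, a minimal numpy sketch of this encoding (the toy values of $V$ and $N$ and all variable names are ours):

```python
import numpy as np

# Minimal sketch of the virtual-token encoding above (toy V and N).
V, N = 10, 5
Y = np.array([3, 7, 1, 9, 2])            # desired output sequence, token ids in 1..V
s1 = (Y - 1) / V                          # one element of s1 per target position
assert np.array_equal(np.round(s1 * V).astype(int) + 1, Y)   # decodes back to Y
```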
When predicting the token for the $(i+1)$-th position, the transformer needs to pick the $i$-th element of $s_1$, and then decode the corresponding value as a one-hot encoding representing the $Y_i$-th token.
We extract the $i$-th element of $s_1$ using one attention block with two heads. The fst head always looks at the first position, which is our virtual token $s_1$. For that purpose we create an attention head that always has $A^{\text{fst}}_{ij} = 1$ if $j = 1$ and $A^{\text{fst}}_{ij} = 0$ otherwise, together with a value matrix $W^{\text{fst}}_V$ that extracts the embedding. This is achieved with
$$W^{\text{fst}}_Q = [0_N, 1_N], \quad W^{\text{fst}}_K = [0_N, 1, 1_{N-1}], \quad W^{\text{fst}}_V = [I_N, 0_{N \times N}], \tag{9}$$
and a sufficiently high inverse temperature parameter $T$.
The pos head instead extracts the one-hot encoding of the current position. This can be done with an attention head that always attends only to the current position and a value matrix $W^{\text{pos}}_V$ that extracts the position embedding as a one-hot vector:
$$W^{\text{pos}}_Q = [0_{N \times N}, I_N], \quad W^{\text{pos}}_K = [0_{N \times N}, I_N], \quad W^{\text{pos}}_V = [0_{N \times N}, I_N]. \tag{10}$$
When the outputs of these two attention heads are summed, only the element of $s_1$ that corresponds to the current position will be at least 1. From Equation (2), the output at the $i$-th position of the attention block is:
$$t_i = \sum_{j=1}^{p} A^{\text{fst}}_{ij} x_j + \sum_{j=1}^{p} A^{\text{pos}}_{ij} e_N(j) = s_1 + e_N(i),$$
where $x_1 = s_1$ and $x_j = E_{:,Y_{j-1}}$ for $j > 1$.
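As a numerical check, a minimal numpy sketch of this attention block (we hard-code the attention patterns that the $W_Q$, $W_K$ pairs above produce in the high-temperature limit; variable names are ours):

```python
import numpy as np

V, N = 10, 5
Y = np.array([3, 7, 1, 9, 2])                      # target token ids in 1..V
s1 = (Y - 1) / V                                   # virtual token (see above)
E = np.random.randn(N, V)                          # any embedding matrix works here

# Token stream: position 1 holds s1, position j > 1 holds the embedding of Y_{j-1}.
X = np.stack([s1] + [E[:, Y[j - 1] - 1] for j in range(1, N)])

# 'fst' head: in the high-temperature limit it attends only to position 1.
A_fst = np.zeros((N, N)); A_fst[:, 0] = 1.0
# 'pos' head: attends only to the current position; its value is the one-hot position.
A_pos = np.eye(N)

t = A_fst @ X + A_pos @ np.eye(N)                  # t_i = s1 + e_N(i)
assert np.allclose(t, s1[None, :] + np.eye(N))
```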
We can extract the value of $s_1$ corresponding to the current position by subtracting 1 from the hidden state and applying a ReLU: $L_{\text{ex}} = L[I_N, -1_N]$. Now we are left with only one non-zero entry, and that is the one corresponding to the next token. We can retain only the non-zero entry if we simply sum all the entries of the hidden state with $L_{\text{sum}} = L[1_N^\top, 0]$. The final step is to map this scalar to a $V$-dimensional vector which has its maximum value at index $Y_i$. This task is equivalent to designing $V$ linear functions, each attaining its maximum at one of $0, 1/V, \ldots, (V-1)/V$. To construct this, we use the property of convex functions that their tangents always lie below the function. Therefore, given a convex function $\gamma(x)$, we construct the $i$-th linear function to be simply the tangent of $\gamma$ at $(i-1)/V$. If we take $\gamma(x) = (x - 1/2)^2$, this results in the following linear layer:
$$L_{\text{proj}} = L\left[ \left( \frac{2(1-1)}{V} - 1, \ldots, \frac{2(V-1)}{V} - 1 \right)^{\!\top}, \; \left( \frac{1}{4} - \frac{(1-1)^2}{V^2}, \ldots, \frac{1}{4} - \frac{(V-1)^2}{V^2} \right)^{\!\top} \right]. \tag{11}$$
Figure 5 shows the predictors for each individual token id.
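For concreteness, a minimal numpy sketch of the decoding pipeline $L_{\text{ex}} \to L_{\text{sum}} \to L_{\text{proj}}$, assuming the attention output $t_i = s_1 + e_N(i)$ derived above (variable names are ours):

```python
import numpy as np

V, N = 10, 5
Y = np.array([3, 7, 1, 9, 2])
s1 = (Y - 1) / V
t = s1[None, :] + np.eye(N)                        # attention block output t_i

h = np.maximum(t - 1.0, 0.0)                       # L_ex: subtract 1, then ReLU
v = h.sum(axis=1)                                  # L_sum: keeps the one surviving entry

# L_proj of Equation (11): the i-th predictor is the tangent of (x - 1/2)^2 at (i-1)/V.
i = np.arange(1, V + 1)
W = 2 * (i - 1) / V - 1                            # slopes
b = 0.25 - ((i - 1) ** 2) / V ** 2                 # intercepts
logits = v[:, None] * W[None, :] + b[None, :]      # (N, V) logits

assert np.array_equal(np.argmax(logits, axis=1) + 1, Y)   # argmax decodes Y_i
```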
With just two attention heads and three linear layers, the transformer
$$\mathcal{A}[(W^{\text{fst}}_Q, W^{\text{pos}}_Q), (W^{\text{fst}}_K, W^{\text{pos}}_K), (W^{\text{fst}}_V, W^{\text{pos}}_V)] \to L_{\text{ex}} \to L_{\text{sum}} \to L_{\text{proj}} \to \text{softmax}$$
achieves the upper bound of $V^N$ unique outputs by controlling a single virtual token at its input. Note that for this construction, the choice of embedding matrix $E \in \mathbb{R}^{N \times V}$ does not matter. The same transformer architecture can generate only $V$ unique outputs if we only control the first token instead. Therefore, it is indeed the case that the embedding space has exponentially more capacity for control than the token space. You can see this transformer implemented and running in practice in Section 2 of the constructions Jupyter notebook.

Figure 5: Illustration of the predictors for each token in the $L_{\text{proj}}$ linear layer for $V = 10$. The layer is constructed in such a way that the $i$-th token has the highest confidence when the input is $(i-1)/V$.
A.2 CONDITIONAL GENERATION FOR A SINGLE VIRTUAL TOKEN ($n_X = n_Y = 1$)
This section provides an explicit construction of a transformer with the properties described in Theorem 2. The goal is to construct a transformer that, by varying the choice of the virtual token, can cause the model to act as any map $m : [1, \ldots, V] \to [1, \ldots, V]$. In other words, by selecting the virtual token, we can fully control how the model will respond to any token the user may provide.
First, we need to specify how the map $m$ will be encoded in the virtual token $s_1$. We choose the embedding size $d_e$ to be $V$. Now we can use the same encoding scheme as before, but each element of $s_1$ now corresponds to a different user token, rather than to a position in the generated sequence:
$$s_1 = \left( \frac{m(1)}{V}, \ldots, \frac{m(V)}{V} \right).$$
Therefore, the first element of $s_1$ designates the response if the user provides token 1, the second element is the response to token 2, and so on.
Extracting the $Y_1$-th value from $s_1$ and decoding it can be done in a very similar way as in the unconditional case. The only difference is that instead of looking at the user input position, we look at its value. Take $E = I_V$ and $N = 2$.
Hence we have the following val head (differing only in the $W_V$ matrix from Equation (10)):
$$W^{\text{val}}_Q = [0_{2 \times V}, I_2], \quad W^{\text{val}}_K = [0_{2 \times V}, I_2], \quad W^{\text{val}}_V = [I_V, 0_{V \times 2}].$$
We also need the embedding of the first token, so we have a modified version of Equation (9):
$$W^{\text{fst}}_Q = [0_V, 1, 1], \quad W^{\text{fst}}_K = [0_V, 1, 0], \quad W^{\text{fst}}_V = [I_V, 0_{V \times 2}].$$
And hence the output of this attention block at the second position is:
$$t_2 = \sum_{j=1}^{2} A^{\text{fst}}_{2j} x_j + \sum_{j=1}^{2} A^{\text{val}}_{2j} W^{\text{val}}_V x_j = s_1 + e_V(Y_1).$$
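As a quick numerical check of this output and the subsequent extraction (a minimal numpy sketch; variable names are ours):

```python
import numpy as np

# t2 = s1 + e_V(Y1), then the usual subtract-1 / ReLU / sum extraction.
V = 10
rng = np.random.default_rng(0)
m = rng.integers(1, V + 1, size=V)       # the map; m[k-1] is the response to token k
s1 = m / V                               # virtual token s1 = (m(1)/V, ..., m(V)/V)
Y1 = 4                                   # token provided by the user

t2 = s1 + np.eye(V)[Y1 - 1]              # 'fst' head adds s1, 'val' head adds e_V(Y1)
value = np.maximum(t2 - 1.0, 0.0).sum()  # only the entry selected by Y1 survives
assert round(value * V) == m[Y1 - 1]     # decodes to m(Y1)
```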
Similarly to the unconditional case, only the entry of $t_2$ corresponding to the user token will have a value above 1, and that value will be $1 + m(x_1)/V$.

For two user-provided tokens ($n_X = 2$), we split the residual stream in two: the first half is used to extract and "unpack" the correct mapping from the second user token to the target token, while the second half of the residual stream is used to copy the second token value so that the second attention layer can use it to extract the target. As usual, the embedding matrix will be the identity matrix: $E = I_V$. Finally, for convenience, we will also use a dummy zero virtual token that we attend to when we want to not attend to anything. This results in context size $N = 4$ with the input being
$$\left( \begin{bmatrix} 0_V \\ e_N(1) \end{bmatrix}, \begin{bmatrix} s_1 \\ e_N(2) \end{bmatrix}, \begin{bmatrix} E_{:,X_1} \\ e_N(3) \end{bmatrix}, \begin{bmatrix} E_{:,X_2} \\ e_N(4) \end{bmatrix} \right) = \left( \begin{bmatrix} 0_V \\ e_N(1) \end{bmatrix}, \begin{bmatrix} s_1 \\ e_N(2) \end{bmatrix}, \begin{bmatrix} e_V(X_1) \\ e_N(3) \end{bmatrix}, \begin{bmatrix} e_V(X_2) \\ e_N(4) \end{bmatrix} \right).$$
We want the output at the last position to be the target $m(X_1, X_2)$, that is:
$$\arg\max_{u \in \{1, \ldots, V\}} y_{4,u} = m(X_1, X_2) \quad \text{for any } m, X_1, X_2.$$
The first attention block will have three attention heads.
As before, we want to extract the value of $s_1$ that corresponds to the first token the user provided ($X_1$) and place it in the first half of the residual stream. We want only the third position to do that, while the rest of the positions keep the first half of their residual stream at zero. Hence we have the following fst head:
$$W^{\text{fst}}_Q = \begin{bmatrix} 0_{2 \times V} & \begin{matrix} 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{matrix} \end{bmatrix}, \quad W^{\text{fst}}_K = \begin{bmatrix} 0_{2 \times V} & \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{matrix} \end{bmatrix}, \quad W^{\text{fst}}_V = \begin{bmatrix} I_V & 0_{V \times N} \\ 0_{V \times V} & 0_{V \times N} \end{bmatrix}.$$
The user1 head extracts the value of the first user-provided token ($X_1$) and also places it in the first half of the residual stream:
$$W^{\text{user1}}_Q = \begin{bmatrix} 0_{2 \times V} & \begin{matrix} 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{matrix} \end{bmatrix}, \quad W^{\text{user1}}_K = \begin{bmatrix} 0_{2 \times V} & \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{matrix} \end{bmatrix}, \quad W^{\text{user1}}_V = \begin{bmatrix} I_V & 0_{V \times N} \\ 0_{V \times V} & 0_{V \times N} \end{bmatrix}.$$
And the user2 head does the same for the value of the second user-provided token ($X_2$), placing it in the second half of the residual stream:
$$W^{\text{user2}}_Q = \begin{bmatrix} 0_{2 \times V} & \begin{matrix} 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \end{bmatrix}, \quad W^{\text{user2}}_K = \begin{bmatrix} 0_{2 \times V} & \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \end{bmatrix}, \quad W^{\text{user2}}_V = \begin{bmatrix} 0_{V \times V} & 0_{V \times N} \\ 2 I_V & 0_{V \times N} \end{bmatrix},$$
where the factor 2 is there because, as usual, the first linear layer will subtract 1 from everything in order to extract the value selected by the first token.
This linear layer looks as usual: $L_{\text{ex2}} = L[I_{2V}, -1_{2V}]$.
The result is that the first $V$ elements will be 0 except one, which designates which map from the second user token to the output we should use, and the second $V$ elements hold a one-hot encoding of the second user token. Constructing an MLP that unpacks the mapping can become quite involved, so we do not provide an explicit form for it. But from the universal approximation theorems and the finiteness of the domain and range, we know that such an MLP must exist. We thus designate by unpack the MLP that decodes the first half of the residual stream to $\left( \frac{m_{X_1}(1)}{V}, \ldots, \frac{m_{X_1}(V)}{V} \right)$ and keeps the second half unchanged.
And now, using two attention heads, the second attention block extracts the value of the above vector at the position designated by the second token, in a fashion not dissimilar to all the previous cases:
$$W^{\text{emb}}_Q = [0_V^\top, 1_V^\top], \quad W^{\text{emb}}_K = [1_V^\top, 0_V^\top], \quad W^{\text{emb}}_V = [I_V \;\; 0_{V \times V}],$$
$$W^{\text{user2'}}_Q = [0_V^\top, 1_V^\top], \quad W^{\text{user2'}}_K = [0_V^\top, 1_V^\top], \quad W^{\text{user2'}}_V = [0_{V \times V} \;\; I_V].$$
And finally, with $L_{\text{ex}} = L[I_V, -1_V]$, $L_{\text{sum}} = L[1_V^\top, 0]$, and the same projection head as before (Equation (11)), we get the target token.
The final transformer is then:
$$\mathcal{A}[(W^{\text{fst}}_Q, W^{\text{user1}}_Q, W^{\text{user2}}_Q), (W^{\text{fst}}_K, W^{\text{user1}}_K, W^{\text{user2}}_K), (W^{\text{fst}}_V, W^{\text{user1}}_V, W^{\text{user2}}_V)] \to L_{\text{ex2}} \to \text{unpack} \to \mathcal{A}[(W^{\text{emb}}_Q, W^{\text{user2'}}_Q), (W^{\text{emb}}_K, W^{\text{user2'}}_K), (W^{\text{emb}}_V, W^{\text{user2'}}_V)] \to L_{\text{ex}} \to L_{\text{sum}} \to L_{\text{proj}} \to \text{softmax}.$$
You can see this transformer implemented and running in practice in Section 5 of the constructions Jupyter notebook.
B ATTENTION DISTRIBUTION OVER THE PREFIX
As discussed in Section 4, longer prefixes define a subspace from which the bias for the attention block is selected. For a prefix of size $n_S$, this subspace is $n_S$-dimensional. Each prefix position $j$ defines a basis vector $W_V s_j$ for this subspace, while the attention $A^{\text{pt}}_{i,S_j}$ on this position determines how much of this basis component contributes to the bias.
In order to span the whole subspace and make full use of the capacity of the prefix, $A^{\text{pt}}_{i,S_j}$ should vary between 0 and 1 for different inputs. However, we observe that this does not happen in practice. Figure 8 shows the ranges of attention the different prefix positions take for the GPT-2 model (Radford et al., 2019). For layer 1, for example, the attention each prefix position gets is almost constant; hence, the effective subspace collapses and there is a single bias vector that is applied to the attention layer output, regardless of the user input X. Some other layers show slightly higher variation. For example, layer 3 has three prefix positions with large variations. Therefore, the effective bias subspace is 3-dimensional and the user input X governs which bias vector from this subspace will be selected.
Figure 2: Model pretrained on the four tasks. The four attention heads specialize in the skills necessary to solve these tasks: look at the elements in order, look first at the smallest elements, or first at the largest elements.
Figure 3: Attention block activations for ten sequences at the last input position (10) when pretrained on the four tasks. The left plot shows the pretrained activations $t_{10}$ are not predictive of the completion. The right plot shows prefixes cluster the activations $t^{\text{pt}}_{10}$. Connecting the pretrained and prefixed activations highlights the bias. No dimensionality reduction is used; the clustering is solely due to the prefixes.
Figure 4: Prefix-tuning as a neural network architecture. While linearities and non-linearities are present, the only learnable parameters $s^{(1)}, s^{(2)}, \ldots$ have limited interaction with the inputs $x_1$ and $x_2$. The interaction of the prefix parameters with each input is only via the scalar attention, shown here with a light connection. The mixing of information between the inputs happens via residual connections with the pretrained fixed feature extraction and hence is not learnable. The MLP is also fixed and hence only acts as a multivariate activation function. This limited interaction explains why prefix-tuning struggles to learn new tasks even in deeper models.
Figure 6: The attention of the twelfth head of the first layer of LLaMA (Touvron et al., 2023). The left plot shows the attention with a prefix of length one. The second plot shows the same attention but normalized such that the attention over the non-prefix positions sums to 1. The right plot shows the attention of the pre-trained model (without prefix). The center and the right plots are the same, illustrating that the presence of the prefix indeed only scales down the attention over the content (non-prefix positions) but does not change its relative distribution, providing empirical validation of Equation (6). The test sequence is "TABLE: Fourth Round Qualifying : NEW ENTRIES THIS ROUND : 24 TEXT: Fourth round qualifying had 24 new entries." from the DART table-to-text dataset (Nan et al., 2021).
Figure 8: The range of attention (1st to 99th percentile) for a single GPT-2 (Radford et al., 2019) prefix trained on the Emotion dataset (Saravia et al., 2018). The prefix is of size 10 ($n_S = 10$). This is the attention of the last user input token ($n_X$) because this is the position at which the class prediction is done. For illustration purposes, we have normalized the attention so that the attention over the 10 prefix positions sums to 1. The range of attention over the 10 positions for each layer is shown.
Table 1: A transformer pretrained on sorting in ascending order cannot be prefix-tuned to sort in descending order. 10 random seeds.

                          Ascending   Descending
Pretrain on asc.          91±5%       0±0%
Full fine-tune on desc.   0±0%        85±5%
Prefix-tune on desc.      0±0%        0±0%
Table 2: A transformer pretrained on several tasks can be prefix-tuned for one of them. 10 random seeds.

Accuracy on:        ↗        ↘        +1       +2
Pretrained          25±13%   25±12%   24±11%   22±7%
Prefix-tune on ↗    95±2%    0±0%     0±0%     0±0%
Prefix-tune on ↘    0±0%     90±3%    1±1%     1±1%
Prefix-tune on +1   0±0%     1±3%     95±6%    0±1%
Prefix-tune on +2   0±0%     0±0%     1±2%     98±5%
Table 3: Prefix-tuning can learn a new task requiring only pretraining skills (↗+1) but cannot learn a completely new task (H). Average accuracy over 3 seeds.

Accuracy on:         ↗      ↘      +1     +2     ↗+1    H
Pretrained           17%    23%    34%    25%    0%     0%
Prefix-tune on ↗     100%   0%     0%     0%     0%     0%
Prefix-tune on ↘     0%     100%   0%     0%     0%     0%
Prefix-tune on +1    0%     0%     100%   0%     0%     0%
Prefix-tune on +2    0%     0%     0%     100%   0%     0%
Prefix-tune on ↗+1   0%     0%     0%     0%     93%    0%
Prefix-tune on H     0%     0%     0%     0%     0%     1%
For the first block, $d_{\text{in}}$ must be $d_e + N$, but it may be different for the deeper blocks.
A causal model has $A_{ij} = 0$ for $j > i$. This does not affect our results, so we skip the masking step.
For example, LLaMA-7B (Touvron et al., 2023) has 24 unique completions when prompted with each of its 32 000 tokens, and we found a non-exhaustive set of 46 812 unique 10-token-long sequences by controlling the first virtual token. Hence, in practice, one can generate more outputs by soft prompting than by prompting.
He et al. (2021a) present a similar analysis but do not study the expressiveness of prefix-tuning.
A similar hypothesis has also been proposed by Reynolds and McDonell (2021) for fine-tuning in general.
ACKNOWLEDGEMENTS

This work is supported by a UKRI grant Turing AI Fellowship (EP/W002981/1) and the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (EP/S024050/1). AB has received funding from the Amazon Research Awards. We also thank the Royal Academy of Engineering and FiveAI.

We can now extract the one-hot representation of the target token using the same approach as before, just adjusting for the different hidden state size, and with the same projection head as before (Equation (11)). The final transformer is then the attention block followed by $L_{\text{ex}}$, $L_{\text{sum}}$, $L_{\text{proj}}$ and softmax. You can see this transformer implemented and running in practice in Section 3 of the constructions Jupyter notebook.

We can obtain longer responses via a simple extension. If the response length is $N_0$, then we can encode the map $m : [1, \ldots, V] \to [1, \ldots, V]^{N_0}$ in $N_0$ virtual tokens, each corresponding to one of the target positions. For this model we would then have $N = 2N_0$ and $d_e = V$. First, we need a head that always looks at the token provided by the user, which will be at position $N_0 + 1$. In order to consume the map at the right location, we also need to look at the embedding of the token $N_0$ positions before the one we are trying to generate. From here on, the decoding is exactly the same as in the $n_X = n_Y = 1$ case. You can see this transformer implemented and running in practice in Section 4 of the constructions Jupyter notebook.

Finally, we consider the case when the user input $X$ is longer. This is a bit more complicated because we need to search through a domain of size $V^V$. We will only consider the case with $n_X = 2$, where we need two attention layers. A similar approach can be used to construct deeper models for $n_X > 2$. Combining the strategy above for longer responses with the strategy here for longer user inputs allows us to construct transformers that map from arbitrary-length user strings to arbitrary-length responses. In order to encode a map $m : [1, \ldots, V]^2 \to [1, \ldots, V]$ into a single virtual token, we need a more involved construction than before. Similarly to how we discretized each element of the virtual token $s_1$ into $V$ levels before, we now discretize it into $V^V$ levels. Each one of these levels corresponds to one of the $V^V$ possible maps from the second user token to the response. The first user token is used to select the corresponding element of $s_1$. This scalar is then "unpacked" into a new vector of $V$ elements using the first attention block. The second user token then selects an element from this unpacked vector, which corresponds to the target token. We construct the virtual token so that its $f$-th element encodes $m_f$, where $m_f(x) = m(f, x)$ is the map from the second user token to the response when the first token is fixed to be $f$. An additional change from the previous constructions is that we divide the residual stream into two sections. This is in line with the theory that different parts of the residual stream specialize for different communication needs of different attention heads (Elhage et al., 2021).

Figure 7: The attention block activations of LLaMA (Touvron et al., 2023). The left plot shows the activations in the presence of the prefix. The right plot shows the activations $t_i$ of the pretrained model, scaled by one minus the attention that the prefix would take and then biased in the direction $W_V s_1$. The two plots are the same, illustrating that our theory, Equation (7) in particular, also holds for real-world large transformer models. The test sequence is the same as in Figure 6.
Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. 2022. What learning algorithm is in-context learning? Investigations with linear models. In International Conference on Learning Representations.

Jiaqi Bai, Zhao Yan, Jian Yang, Xinnian Liang, Hongcheng Guo, and Zhoujun Li. 2023. Knowprefix-tuning: A two-stage prefix-tuning framework for knowledge-grounded dialogue generation. arXiv preprint arXiv:2306.15430.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems.

Yunseok Choi and Jee-Hyong Lee. 2023. CodePrompt: Task-agnostic prefix tuning for program and language generation. In Findings of the Association for Computational Linguistics: ACL 2023.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).

Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. 2021. A mathematical framework for transformer circuits. Transformer Circuits Thread.

Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. 2021. WARP: Word-level Adversarial ReProgramming. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).

Mohamad H. Hassoun. 1995. Fundamentals of artificial neural networks. MIT Press.

Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2021a. Towards a unified view of parameter-efficient transfer learning. In International Conference on Learning Representations.

Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, and Fuchun Peng. 2021b. Analyzing the forgetting problem in pretrain-finetuning of open-domain dialogue response models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.

Zhiqiang Hu, Yihuai Lan, Lei Wang, Wanyu Xu, Ee-Peng Lim, Roy Ka-Wei Lee, Lidong Bing, and Soujanya Poria. 2023. LLM-Adapters: An adapter family for parameter-efficient fine-tuning of large language models. arXiv preprint arXiv:2304.01933.

Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.

Andrej Karpathy. 2020. minGPT. GitHub repository.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems.

Jannik Kossen, Tom Rainforth, and Yarin Gal. 2023. In-context learning in large language models learns label relationships but is not conventional learning. arXiv preprint arXiv:2307.12375.

Emanuele La Malfa, Aleksandar Petrov, Christoph Weinhuber, Simon Frieder, Ryan Burnell, Anthony G. Cohn, Nigel Shadbolt, and Michael Wooldridge. 2023. The ARRT of Language-Models-as-a-Service: Overview of a new paradigm and its challenges. arXiv preprint arXiv:2309.16573.

Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.

Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).

Vladislav Lialin, Vijeta Deshpande, and Anna Rumshisky. 2023. Scaling down to scale up: A guide to parameter-efficient fine-tuning. arXiv preprint arXiv:2303.15647.

Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys.

Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. P-Tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).

Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. 2023. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv preprint arXiv:2308.08747.

Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. 2021. Recent advances in natural language processing via large pre-trained language models: A survey. ACM Computing Surveys.

Jishnu Mukhoti, Yarin Gal, Philip H. S. Torr, and Puneet K. Dokania. 2023. Fine-tuning can cripple your foundation model; preserving features may be the solution. arXiv preprint arXiv:2308.13320.

Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. DART: Open-domain structured data record to text generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Yawen Ouyang, Yongchang Cao, Yuan Gao, Zhen Wu, Jianbing Zhang, and Xinyu Dai. 2023. On prefix-tuning for lightweight out-of-distribution detection. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, and Sanjeev Arora. 2023. Task-specific skill localization in fine-tuned language models. In International Conference on Machine Learning.

Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Jing Yi, Weize Chen, Zhiyuan Liu, Juanzi Li, and Lei Hou. 2021. Exploring universal intrinsic task subspace via prompt tuning. arXiv preprint arXiv:2110.07867.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, et al. 2021. Scaling language models: Methods, analysis & insights from training Gopher.

Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems.

Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems.

Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018. CARER: Contextualized affect representations for emotion recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.

Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Mirko Solazzi and Aurelio Uncini. 2004. Regularising neural networks using flexible multivariate activation function. Neural Networks, 17(2).

Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, and Jie Zhou. 2022. On transferability of prompt tuning for natural language processing. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, and Xipeng Qiu. 2022. Black-box tuning for language-model-as-a-service. In International Conference on Machine Learning.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems.

Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, Joao Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. 2023. Transformers learn in-context by gradient descent. In International Conference on Machine Learning.

Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou', and Daniel Cer. 2022. SPoT: Better frozen model adaptation through soft prompt transfer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

Xiaozhi Wang, Kaiyue Wen, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, and Juanzi Li. 2022a. Finding skill neurons in pre-trained transformer-based language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.

Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Huan Sun, and Yoon Kim. 2022b. Multitask prompt tuning enables parameter-efficient transfer learning. In International Conference on Learning Representations.

Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners. In International Conference on Learning Representations.

Gail Weiss, Yoav Goldberg, and Eran Yahav. 2021. Thinking like transformers. In International Conference on Machine Learning.

Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).

Hui Wu and Xiaodong Shi. 2022. Adversarial soft prompt tuning for cross-domain sentiment analysis. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

Yuanhang Zheng, Zhixing Tan, Peng Li, and Yang Liu. 2023. Black-box prompt tuning with subspace learning. arXiv preprint arXiv:2305.03518.

Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao. 2022. PANDA: Prompt transfer meets knowledge distillation for efficient model adaptation. arXiv preprint arXiv:2208.10160.

Andy Zou, Zifan Wang, Zico Kolter, and Matt Fredrikson. 2023. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043. |
219,558,792 | Dataset Condensation with Gradient Matching | Efficient training of deep neural networks is an increasingly important problem in the era of sophisticated architectures and large-scale datasets. This paper proposes a training set synthesis technique, called Dataset Condensation, that learns to produce a small set of informative samples for training deep neural networks from scratch in a small fraction of the required computational cost on the original data while achieving comparable results. We rigorously evaluate its performance in several computer vision benchmarks and show that it significantly outperforms the state-of-the-art methods. Finally we show promising applications of our method in continual learning and domain adaptation.Preprint. Under review. | [] | Dataset Condensation with Gradient Matching
Bo Zhao [email protected]
School of Informatics
The University of Edinburgh
Konda Reddy Mopuri
School of Informatics
The University of Edinburgh
Hakan Bilen [email protected]
School of Informatics
The University of Edinburgh
Dataset Condensation with Gradient Matching
Efficient training of deep neural networks is an increasingly important problem in the era of sophisticated architectures and large-scale datasets. This paper proposes a training set synthesis technique, called Dataset Condensation, that learns to produce a small set of informative samples for training deep neural networks from scratch in a small fraction of the required computational cost on the original data while achieving comparable results. We rigorously evaluate its performance in several computer vision benchmarks and show that it significantly outperforms the state-of-the-art methods. Finally we show promising applications of our method in continual learning and domain adaptation. Preprint. Under review.
Introduction
Training deep neural networks fast, accurately and efficiently is a long-standing goal in machine learning with many practical benefits. As the state-of-the-art in computer vision [1,2], natural language processing [3] and speech recognition [4,5] relies on more sophisticated deep networks and larger datasets, their training becomes more computationally expensive, memory and storage intensive, and even demands specialized equipment and infrastructure. Thus widespread adoption of the current state-of-the-art to different data and problems, and development of the future state-of-the-art, which typically requires training multiple models to validate various design decisions (e.g. architecture designs, hyperparameters, loss formulations), call for efficient training strategies.
A well-studied problem that aims at reducing training time by decreasing the number of models to be trained is hyperparameter optimization. The standard solutions include discretizing the parameter search space via a grid search, random sampling [6], evolutionary optimization [7], modeling the validation loss as a function of hyperparameters which can be fit to the results of previously explored ones to estimate the optimum [8,9], and utilizing reinforcement learning to maximize the expected accuracy of the selected hyperparameters [10]. However, these strategies are not directly applicable to evaluating more complex design decisions which cannot be considered hyperparameters.
This paper focuses on an orthogonal direction that aims at reducing the training time of each individual model by using a significantly smaller training set, such that the model learned on this set performs as closely as possible to the model trained on the entire dataset (illustrated in Figure 1(a)). A classical technique for reducing a large set of high-dimensional points to a small set, such that the reduced set approximately preserves a target property (e.g. width, diameter) of the original data, is coreset selection [11,12,13]. Recent examples of this approach can be found in active learning [14] and continual learning [15,16,17,18], where there is typically a fixed budget for labeling and storing training samples respectively. Sener and Savarese [14] propose an active learning method that picks data points to label from a large set by using a greedy K-center technique. Rebuffi et al. [15] and Castro et al. [17] use the herding selection strategy in [19] for each seen category, picking the samples closest to the category mean in a feature space to construct the rehearsal memory in continual learning. Aljundi et al. [18] cast sample selection as a constraint reduction problem and propose to select a diverse set of samples based on the similarity between their gradients. Toneva et al. [16] study the relation between sample selection and the rate of misclassification during training, and show that the least forgotten (i.e. misclassified) samples, a significant fraction of the training set, can be omitted while still maintaining good performance. However, these methods have two shortcomings: they rely on i) heuristics (e.g. picking cluster centers, diversity) that do not guarantee any optimum solution for the downstream task, and ii) the presence of representative samples. Motivated by these shortcomings, we focus on learning to synthesize new samples that are optimized to train neural networks for downstream tasks and are not limited to individual samples in the data. We show in section 3 that our method produces a more compact set and achieves better generalization performance than the coresets when a deep neural network is trained on them.
Our method is inspired by a recent work, Dataset Distillation (DD) [20], that goes beyond finding coresets by learning to synthesize a small set of informative images from large training data. In particular, the authors model the network parameters as a function of the synthetic training data and learn them by minimizing the training loss over the original training data w.r.t. the synthetic data. Like DD [20], our goal is also for a model trained on the synthesized images to achieve generalization performance comparable to a model trained on the original images (see Figure 1(a)). To this end, we propose a Dataset Condensation method to learn a small set of "condensed" synthetic samples such that a deep network trained on them obtains not only similar performance but also a solution close, in the network parameter space, to that of a network trained on the large training data. We formulate this goal as a minimization problem between two sets of gradients of the network parameters, computed for a training loss over a large fixed training set and over a learnable condensed set (see Figure 1(b)). Our method enables a more effective learning of synthetic images, and we show that neural networks trained on the condensed images outperform [20] by a wide margin in multiple computer vision benchmarks.
Our method is also related to knowledge distillation (KD) [21] techniques [22,23,24] that transfer the knowledge of an ensemble of models to a single one. Though our method can also be seen as KD from a teacher to a student, we distill the knowledge of a large training set into a small synthetic set, rather than knowledge between models. Recent zero-shot KD methods [25,26] aim to perform KD from a trained model in the absence of training data by generating synthetic data as an intermediate product for further use. Unlike them, our method does not require pretrained teacher models to provide the knowledge, i.e. to obtain the features and labels. Our method is also related to Generative Adversarial Networks [27,28,29] and Variational AutoEncoders [30] that synthesize high-fidelity samples by capturing the data distribution. In contrast, our goal is to generate informative samples for training deep neural networks rather than to produce "real-looking" samples. Finally, our method is loosely related to the methods [31,32,33] that recover images by projecting the feature activations back to the input pixel space [31], reconstruct the input image by matching the feature activations [32], or recover private training images from given training gradients [33]. Our goal, however, is to synthesize a set of condensed training images, not to recover the original training images.
In the remainder of this paper, we first review the problem of dataset condensation and introduce our method in section 2, present and analyze our results in several image recognition benchmarks in section 3.1, showcase applications in continual learning and few-shot domain adaptation in section 3.2, and conclude the paper with future directions in section 4.
Method
Dataset condensation
Suppose we are given a large dataset consisting of $m$ pairs of a training image and its class label, $\mathcal{T} = \{(x_i, y_i)\}_{i=1}^{m}$, where $x \in \mathcal{X} \subset \mathbb{R}^d$, $y \in \{1, \ldots, C\}$, $\mathcal{X}$ is a $d$-dimensional input space and $C$ is the number of classes. We wish to learn a differentiable function $\phi$ (i.e. a deep neural network) with parameters $\theta$ that correctly predicts labels of previously unseen images, i.e. $y = \phi_\theta(x)$. One can learn the parameters of this function by minimizing an empirical loss term over the training set:
$$\theta^{\mathcal{T}} = \arg\min_{\theta} \mathcal{L}^{\mathcal{T}}(\theta) \tag{1}$$
where $\mathcal{L}^{\mathcal{T}}(\theta) = \sum_{(x,y) \in \mathcal{T}} \ell(\phi_\theta(x), y)$, $\ell(\cdot, \cdot)$ is a task-specific loss (i.e. cross-entropy) and $\theta^{\mathcal{T}}$ is the minimizer of $\mathcal{L}^{\mathcal{T}}$. The generalization performance of the obtained model $\phi_{\theta^{\mathcal{T}}}$ can be written as $\mathbb{E}_{x \sim P_D}[\ell(\phi_{\theta^{\mathcal{T}}}(x), y)]$, where $P_D$ is the data distribution. Our goal is to generate a small set of condensed synthetic samples with their labels, $\mathcal{S} = \{(s_i, y_i)\}_{i=1}^{n}$, where $s \in \mathbb{R}^d$ and $y \in \mathcal{Y}$, $n \ll m$. Similar to eq. (1), once the condensed set is learned, one can train $\phi$ on it as follows:
$$\theta^{\mathcal{S}} = \arg\min_{\theta} \mathcal{L}^{\mathcal{S}}(\theta) \tag{2}$$
where $\mathcal{L}^{\mathcal{S}}(\theta) = \sum_{(s,y) \in \mathcal{S}} \ell(\phi_\theta(s), y)$ and $\theta^{\mathcal{S}}$ is the minimizer of $\mathcal{L}^{\mathcal{S}}$. As the synthetic set $\mathcal{S}$ is significantly smaller (e.g. by 2-3 orders of magnitude), we expect the optimization in eq. (2) to be significantly faster than that in eq. (1). We also wish the generalization performance of $\phi_{\theta^{\mathcal{S}}}$ to be comparable to that of $\phi_{\theta^{\mathcal{T}}}$, i.e. $\mathbb{E}_{x \sim P_D}[\ell(\phi_{\theta^{\mathcal{T}}}(x), y)] \simeq \mathbb{E}_{x \sim P_D}[\ell(\phi_{\theta^{\mathcal{S}}}(x), y)]$.
Discussion. The problem of achieving comparable generalization performance by training on the condensed data can be formulated in different ways. One approach, which is proposed in [20], is to pose the parameters $\theta^{\mathcal{S}}$ as a function of the synthetic data $\mathcal{S}$:
$$\mathcal{S}^* = \arg\min_{\mathcal{S}} \mathcal{L}^{\mathcal{T}}(\theta^{\mathcal{S}}(\mathcal{S})) \quad \text{subject to} \quad \theta^{\mathcal{S}}(\mathcal{S}) = \arg\min_{\theta} \mathcal{L}^{\mathcal{S}}(\theta). \tag{3}$$
The method aims to find the optimum set of synthetic images $\mathcal{S}^*$ such that the model $\phi_{\theta^{\mathcal{S}}}$ trained on them minimizes the empirical loss over the original training set. Optimizing eq. (3) involves a nested loop optimization and solving the inner loop for $\theta^{\mathcal{S}}(\mathcal{S})$ at each iteration to recover the gradients for $\mathcal{S}$, which requires a computationally expensive procedure: unrolling the recursive computation graph for $\mathcal{S}$ over multiple optimization steps for $\theta$ (see [34,35]). Hence, it does not scale to large models and/or accurate inner-loop optimizers with many steps. Next we propose an alternative formulation for dataset condensation.
Dataset condensation with parameter matching
Here we aim to learn $\mathcal{S}$ such that the model $\phi_{\theta^{\mathcal{S}}}$ trained on it achieves not only comparable generalization performance to $\phi_{\theta^{\mathcal{T}}}$ but also converges to a similar solution in the parameter space (i.e. $\theta^{\mathcal{S}} \approx \theta^{\mathcal{T}}$). Let $\phi_\theta$ be a locally smooth function$^1$; similar weights ($\theta^{\mathcal{S}} \approx \theta^{\mathcal{T}}$) imply similar mappings in a local neighborhood and thus similar generalization performance, i.e. $\mathbb{E}_{x \sim P_D}[\ell(\phi_{\theta^{\mathcal{T}}}(x), y)] \simeq \mathbb{E}_{x \sim P_D}[\ell(\phi_{\theta^{\mathcal{S}}}(x), y)]$. Now we can formulate this goal as
$$\min_{\mathcal{S}} D(\theta^{\mathcal{S}}, \theta^{\mathcal{T}}) \quad \text{subject to} \quad \theta^{\mathcal{S}}(\mathcal{S}) = \arg\min_{\theta} \mathcal{L}^{\mathcal{S}}(\theta) \tag{4}$$
where $\theta^{\mathcal{T}} = \arg\min_{\theta} \mathcal{L}^{\mathcal{T}}(\theta)$ and $D(\cdot, \cdot)$ is a distance function. In a deep neural network, $\theta^{\mathcal{T}}$ typically depends on its initial values $\theta_0$. However, the optimization in eq. (4) aims to obtain an optimum set of synthetic images only for one model $\phi_{\theta^{\mathcal{T}}}$ with the initialization $\theta_0$, while our actual goal is to generate samples that can work with a distribution of random initializations $P_{\theta_0}$. Thus we modify eq. (4) as follows:
$$\min_{\mathcal{S}} \mathbb{E}_{\theta_0 \sim P_{\theta_0}}\left[ D(\theta^{\mathcal{S}}(\theta_0), \theta^{\mathcal{T}}(\theta_0)) \right] \quad \text{subject to} \quad \theta^{\mathcal{S}}(\mathcal{S}) = \arg\min_{\theta} \mathcal{L}^{\mathcal{S}}(\theta(\theta_0)) \tag{5}$$
where $\theta^{\mathcal{T}} = \arg\min_{\theta} \mathcal{L}^{\mathcal{T}}(\theta(\theta_0))$. For brevity, we use only $\theta^{\mathcal{S}}$ and $\theta^{\mathcal{T}}$ to indicate $\theta^{\mathcal{S}}(\theta_0)$ and $\theta^{\mathcal{T}}(\theta_0)$ respectively in the next sections. The standard approach to solving eq. (5) employs implicit differentiation (see [35] for details), which involves solving an inner-loop optimization for $\theta^{\mathcal{S}}$. As the inner-loop optimization $\theta^{\mathcal{S}}(\mathcal{S}) = \arg\min_{\theta} \mathcal{L}^{\mathcal{S}}(\theta)$ can be computationally expensive in the case of large-scale models, one can adopt the back-optimization approach in [35], which re-defines $\theta^{\mathcal{S}}$ as the output of an incomplete optimization:
$$\theta^{\mathcal{S}}(\mathcal{S}) = \texttt{opt-alg}_{\theta}(\mathcal{L}^{\mathcal{S}}(\theta), \varsigma) \tag{6}$$
where $\texttt{opt-alg}$ is a specific optimization procedure with a fixed number of steps ($\varsigma$).
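As a concrete illustration, a minimal PyTorch sketch of such an incomplete optimizer (the function name and the toy loss are ours):

```python
import torch

# A fixed number of plain gradient descent steps, i.e. an incomplete inner
# optimization, in the spirit of opt-alg in Eq. (6).
def opt_alg(loss_fn, theta, steps, lr):
    for _ in range(steps):
        loss = loss_fn(theta)
        (grad,) = torch.autograd.grad(loss, theta)
        theta = theta - lr * grad        # out-of-place update keeps theta differentiable
    return theta

# Example: a few steps on a toy quadratic loss.
theta0 = torch.zeros(3, requires_grad=True)
theta_s = opt_alg(lambda th: ((th - 1.0) ** 2).sum(), theta0, steps=5, lr=0.1)
```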
In practice, $\theta^{\mathcal{T}}$ for different initializations can be trained first in an offline stage and then used as the target parameter vector in eq. (5). However, there are two potential issues with learning to regress $\theta^{\mathcal{T}}$ as the target vector. First, the distance between $\theta^{\mathcal{T}}$ and intermediate values of $\theta^{\mathcal{S}}$ can be too big in a parameter space with multiple local minima traps along the path, and thus it can be too challenging to reach. Second, $\texttt{opt-alg}$ involves a limited number of optimization steps as a trade-off between speed and accuracy, which may not be sufficient for reaching the optimal solution. These problems are similar to those of [20], as both involve parameterizing $\theta^{\mathcal{S}}$ with $\mathcal{S}$ and $\theta_0$.
Dataset condensation with curriculum gradient matching
As such, we propose a curriculum-based approach to address the above-mentioned challenges. The key idea is that we wish $\theta^{\mathcal{S}}$ not only to be close to the final $\theta^{\mathcal{T}}$ but also to follow a similar path to $\theta^{\mathcal{T}}$ throughout the optimization. While this can restrict the optimization dynamics for $\theta$, we argue that it also enables a more guided optimization and an effective use of the incomplete optimizer. We can now rewrite eq. (5) as a sum of subproblems:
$$\min_{\mathcal{S}} \mathbb{E}_{\theta_0 \sim P_{\theta_0}}\left[ \sum_{t=1}^{T} D(\theta^{\mathcal{S}}_t, \theta^{\mathcal{T}}_t) \right] \ \text{subject to} \ \theta^{\mathcal{S}}_t(\mathcal{S}) = \texttt{opt-alg}_{\theta}(\mathcal{L}^{\mathcal{S}}(\theta^{\mathcal{S}}_{t-1}), \varsigma^{\mathcal{S}}) \ \text{and} \ \theta^{\mathcal{T}}_t = \texttt{opt-alg}_{\theta}(\mathcal{L}^{\mathcal{T}}(\theta^{\mathcal{T}}_{t-1}), \varsigma^{\mathcal{T}}) \tag{7}$$
where $T$ is the number of iterations, and $\varsigma^{\mathcal{S}}$ and $\varsigma^{\mathcal{T}}$ are the numbers of optimization steps for $\theta^{\mathcal{S}}$ and $\theta^{\mathcal{T}}$ respectively. In words, we wish to generate a set of condensed samples $\mathcal{S}$ such that the network parameters trained on them ($\theta^{\mathcal{S}}_t$) are similar to the ones trained on the original training set ($\theta^{\mathcal{T}}_t$) at each iteration $t$. We find in our preliminary experiments that $\theta^{\mathcal{S}}_t$, which is parameterized with $\mathcal{S}$, can successfully track $\theta^{\mathcal{T}}_t$ by updating $\mathcal{S}$, driving $D(\theta^{\mathcal{S}}_{t-1}, \theta^{\mathcal{T}}_{t-1})$ close to zero. In the case of one-step gradient descent optimization for $\texttt{opt-alg}$, we have the following update rule:
$$\theta^{\mathcal{S}}_t \leftarrow \theta^{\mathcal{S}}_{t-1} - \eta_\theta \nabla_\theta \mathcal{L}^{\mathcal{S}}(\theta^{\mathcal{S}}_{t-1}) \quad \text{and} \quad \theta^{\mathcal{T}}_t \leftarrow \theta^{\mathcal{T}}_{t-1} - \eta_\theta \nabla_\theta \mathcal{L}^{\mathcal{T}}(\theta^{\mathcal{T}}_{t-1}), \tag{8}$$
where $\eta_\theta$ is the learning rate. Based on our observation ($D(\theta^{\mathcal{S}}_{t-1}, \theta^{\mathcal{T}}_{t-1}) \approx 0$), we simplify the formulation in eq. (7) by replacing $\theta^{\mathcal{T}}_{t-1}$ with $\theta^{\mathcal{S}}_{t-1}$ and use a single symbol $\theta$ to denote $\theta^{\mathcal{S}}$ in the rest of the paper:
$$\min_{\mathcal{S}} \mathbb{E}_{\theta_0 \sim P_{\theta_0}}\left[ \sum_{t=1}^{T} D\big(\nabla_\theta \mathcal{L}^{\mathcal{S}}(\theta_{t-1}), \nabla_\theta \mathcal{L}^{\mathcal{T}}(\theta_{t-1})\big) \right]. \tag{9}$$
We now have a single deep network with parameters $\theta$ trained on the synthetic set $\mathcal{S}$, which is optimized such that the distance between the gradients of the loss over the training samples $\mathcal{L}^{\mathcal{T}}$ w.r.t. $\theta$ and the gradients of the loss over the condensed samples $\mathcal{L}^{\mathcal{S}}$ w.r.t. $\theta$ is minimized. In words, our goal reduces to matching the gradients for the real and synthetic training losses w.r.t. $\theta$ via updating the condensed samples. This approximation has the key advantage over the one in eq. (5), and also over [20], that it does not require the expensive unrolling of the recursive computation graph over the previous parameters $\{\theta_0, \ldots, \theta_{t-1}\}$. The important consequence is that the optimization is significantly faster and more memory efficient, and thus scales up to state-of-the-art deep neural networks (e.g. ResNet [2]).
Generating labeled synthetic samples. The set of synthetic data contains pairs of a sample and its label $(s, y)$ that can, in theory, be jointly learned by optimizing eq. (9). However, their joint optimization is challenging, as the content of the samples depends on their label and vice-versa. Thus in our experiments we learn to synthesize images for fixed labels, e.g. one synthetic image per class.
Algorithm. We depict the optimization details in Alg. 1. At the outer level, the optimization contains a loop over random weight initializations, as we want the condensed samples to train previously unseen models. Once $\theta$ (which denotes $\theta^{\mathcal{S}}$ for brevity) is randomly initialized, we use $\phi_\theta$ to first compute the loss over the training samples ($\mathcal{L}^{\mathcal{T}}$) and the synthetic samples ($\mathcal{L}^{\mathcal{S}}$) and their gradients w.r.t. $\theta$, then optimize the synthetic samples to match the gradients $\nabla_\theta \mathcal{L}^{\mathcal{S}}$ to $\nabla_\theta \mathcal{L}^{\mathcal{T}}$ by applying $\varsigma_{\mathcal{S}}$ gradient descent steps with learning rate $\eta_{\mathcal{S}}$. We use stochastic gradient descent for both $\texttt{opt-alg}_\theta$ and $\texttt{opt-alg}_{\mathcal{S}}$. Next we train $\theta$ on the synthetic samples by minimizing the loss $\mathcal{L}^{\mathcal{S}}$ with learning rate $\eta_\theta$ for $\varsigma_\theta$ steps. Note that each real and synthetic batch sampled from $\mathcal{T}$ and $\mathcal{S}$ contains samples from a single class, and the synthetic data for each class are updated individually at each iteration ($t$), for the following reasons: i) this reduces the memory use at train time; ii) imitating the mean gradients w.r.t. the data from a single class is easier than for multiple classes.
Algorithm 1: Dataset condensation with gradient matching

Input: Training set $\mathcal{T}$
Required: randomly initialized set of synthetic samples $\mathcal{S}$, probability distribution over randomly initialized weights $P_{\theta_0}$, deep neural network $\phi_\theta$, number of outer-loop steps $K$, number of inner-loop steps $T$, numbers of steps $\varsigma_\theta$ and $\varsigma_{\mathcal{S}}$ for updating weights and synthetic samples in each inner-loop step respectively, learning rates $\eta_\theta$ and $\eta_{\mathcal{S}}$ for updating weights and synthetic samples.

1: for $k = 0, \cdots, K-1$ do
2:   Initialize $\theta_0 \sim P_{\theta_0}$
3:   for $t = 0, \cdots, T-1$ do
4:     Sample a minibatch $B^{\mathcal{T}} \sim \mathcal{T}$ and $B^{\mathcal{S}} \sim \mathcal{S}$
5:     Compute $\mathcal{L}^{\mathcal{T}} = \sum_{(x,y) \in B^{\mathcal{T}}} \ell(\phi_{\theta_t}(x), y)$ and $\mathcal{L}^{\mathcal{S}} = \sum_{(s,y) \in B^{\mathcal{S}}} \ell(\phi_{\theta_t}(s), y)$
6:     Update synthetic samples: $\mathcal{S} \leftarrow \texttt{opt-alg}_{\mathcal{S}}(D(\nabla_\theta \mathcal{L}^{\mathcal{S}}(\theta_t), \nabla_\theta \mathcal{L}^{\mathcal{T}}(\theta_t)), \varsigma_{\mathcal{S}}, \eta_{\mathcal{S}})$
7:     Update model parameters: $\theta_{t+1} \leftarrow \texttt{opt-alg}_\theta(\mathcal{L}^{\mathcal{S}}(\theta_t), \varsigma_\theta, \eta_\theta)$
Output: $\mathcal{S}$

Gradient matching loss. The matching loss $D(\cdot, \cdot)$ in eq. (9) quantifies the distance between the gradients for $\mathcal{L}^{\mathcal{S}}$ and $\mathcal{L}^{\mathcal{T}}$ w.r.t. $\theta$. When $\phi_\theta$ is a multi-layered neural network, the gradients correspond to a set of learnable 2-dimensional (out × in) weights at each fully connected (FC) layer and 4-dimensional ones (out × in × h × w) at each convolutional layer, where out, in, h, w are the number of output channels, number of input channels, kernel height and kernel width respectively. The matching loss can be decomposed into a sum of layerwise losses as
$$D(\nabla_\theta \mathcal{L}^{\mathcal{S}}, \nabla_\theta \mathcal{L}^{\mathcal{T}}) = \sum_{l=1}^{L} d(\nabla_{\theta^{(l)}} \mathcal{L}^{\mathcal{S}}, \nabla_{\theta^{(l)}} \mathcal{L}^{\mathcal{T}})$$
where $l$ is the layer index, $L$ is the number of layers with weights, and
$$d(A, B) = \sum_{i=1}^{\text{out}} \left( 1 - \frac{A_{i\cdot} \cdot B_{i\cdot}}{\|A_{i\cdot}\| \, \|B_{i\cdot}\|} \right) \tag{10}$$
where $A_{i\cdot}$ and $B_{i\cdot}$ are the flattened vectors of gradients corresponding to each output node $i$, i.e. in-dimensional for FC weights and (in × h × w)-dimensional for convolutional weights. In contrast to previous works [39,18,33] that ignore the layer-wise structure by flattening the tensors over all layers into one vector and then computing the distance between the two vectors, we group them for each output node. We found that this is a better distance for gradient matching and that it enables using a single learning rate across all layers in our preliminary experiments.
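To make the procedure concrete, here is a minimal, self-contained PyTorch sketch of the matching loss in eq. (10) and the inner loop of Alg. 1. The tiny MLP, the random stand-in data, and all hyperparameter values are ours for illustration; this is not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def match_loss(g_syn, g_real, eps=1e-8):
    """Layer-wise gradient distance of Eq. (10): cosine distance per output node."""
    loss = 0.0
    for gs, gr in zip(g_syn, g_real):
        gs = gs.reshape(gs.shape[0], -1)    # (out, in) / (out, in*h*w) / (out, 1)
        gr = gr.reshape(gr.shape[0], -1)
        cos = (gs * gr).sum(1) / (gs.norm(dim=1) * gr.norm(dim=1) + eps)
        loss = loss + (1.0 - cos).sum()
    return loss

C, d = 3, 8                                     # classes, input dimension (toy)
real_x = torch.randn(C, 100, d)                 # 100 stand-in "real" samples per class
syn = torch.randn(C, 1, d, requires_grad=True)  # 1 condensed sample per class
syn_y = torch.arange(C)
opt_syn = torch.optim.SGD([syn], lr=0.1)        # opt-alg_S

for k in range(3):                              # outer loop over initializations
    net = torch.nn.Sequential(torch.nn.Linear(d, 16), torch.nn.ReLU(),
                              torch.nn.Linear(16, C))
    opt_theta = torch.optim.SGD(net.parameters(), lr=0.05)  # opt-alg_theta
    for t in range(10):                         # inner loop
        for c in range(C):                      # per-class gradient matching
            y = torch.full((100,), c)
            g_real = torch.autograd.grad(
                F.cross_entropy(net(real_x[c]), y), net.parameters())
            g_syn = torch.autograd.grad(
                F.cross_entropy(net(syn[c]), syn_y[c:c + 1]),
                net.parameters(), create_graph=True)
            opt_syn.zero_grad()
            match_loss(g_syn, [g.detach() for g in g_real]).backward()
            opt_syn.step()
        # then train theta on the updated synthetic set (varsigma_theta steps)
        for _ in range(5):
            opt_theta.zero_grad()
            F.cross_entropy(net(syn.detach().reshape(-1, d)), syn_y).backward()
            opt_theta.step()
```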
Experiments
Dataset condensation
First we evaluate classification performance with the condensed images on four standard benchmark datasets: digit recognition on MNIST [40] and SVHN [41], and object classification on FashionMNIST [42] and CIFAR10 [43]. We test our method using six standard deep network architectures: MLP, ConvNet [44], LeNet [40], AlexNet [1], VGG-11 [45] and ResNet-18 [2]. MLP is a multilayer perceptron with two nonlinear hidden layers, each with 128 units. ConvNet is a commonly used modular architecture in few-shot learning [46,47,44] that includes 3 blocks, each with 128 filters, followed by InstanceNorm [48], ReLU and AvgPooling modules. The final block is followed by a linear classifier. More details about the datasets and networks can be found in the supplementary.
The pipeline for dataset condensation has two stages: learning the condensed images (denoted as Synth) and training classifiers from scratch on them (denoted as Cls). In the coreset baselines, the coreset is selected in the first stage. Note that the model architectures used in the two stages might be different. We investigate two settings: 1 and 10 image/class learning, which means that the synthetic set or coreset contains 1 and 10 images per class respectively. Each method is run 5 times, and 5 synthetic sets are generated in the first stage. Each generated synthetic set is evaluated on 20 randomly initialized models in the second stage, which amounts to evaluating 100 models in the second stage. In all experiments, we report the mean and standard deviation of these 100 testing results.
Baselines. We compare our method to four coreset selection baselines (Random, Herding, K-Center and Forgetting) and also to dataset distillation (DD) [20]. In Random, the training samples are randomly selected as the coreset. The Herding baseline, which selects the samples closest to the cluster center, is based on the method of [19] and is used in [15,17,49,50]. K-Center [51,14] selects multiple center points such that the largest distance between a data point and its nearest center is minimized. For Herding and K-Center, we use models trained on the whole dataset to extract features and compute the $\ell_2$ distance to the centers; a sketch of a common herding variant is given below. The Forgetting method [16] considers those training samples which are easy to forget during training as the important ones, and uses forgetting statistics to measure importance. The forgetting statistics are obtained during the training process [16].
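A minimal numpy sketch of one common herding variant (iCaRL-style greedy mean matching); it assumes per-class features have already been extracted by a trained model, and is our illustration rather than the exact selection code used here:

```python
import numpy as np

def herding_select(features, k):
    """Greedily pick k samples whose running mean best matches the class mean."""
    mu = features.mean(axis=0)
    selected, acc = [], np.zeros_like(mu)
    for t in range(1, k + 1):
        # distance of each candidate's resulting running mean to the class mean
        dists = np.linalg.norm(mu[None] - (acc[None] + features) / t, axis=1)
        if selected:
            dists[selected] = np.inf       # never pick the same sample twice
        i = int(np.argmin(dists))
        selected.append(i)
        acc += features[i]
    return selected

coreset_idx = herding_select(np.random.randn(100, 16), k=10)   # toy features
```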
Comparison to coreset methods. We first compare our method to the coreset baselines on MNIST, FashionMNIST, SVHN and CIFAR10 in Table 1, using the default ConvNet, in terms of classification accuracy. Whole dataset indicates a model trained on the whole training set, which serves as an approximate upper-bound performance. First, we observe that our method significantly outperforms all the baselines and, in the case of 10 images per class, achieves a result (97.4%) comparable to the upper bound (99.6%) on MNIST, which uses two orders of magnitude more training images per class (6000). We also obtain promising results on FashionMNIST; however, the gap between our method and the upper bound is bigger on SVHN and CIFAR10, which contain more diverse images with varying foregrounds and backgrounds. We also observe that (i) the random selection baseline is competitive with the other coreset methods in the case of 10 images per class and (ii) the herding method is on average the best coreset technique. We visualize the synthetic images produced by our method under the 1 image/class setting in Figure 2. Interestingly, they are interpretable and look like "prototypes" of each class.
Comparison to Dataset Distillation [20]. Unlike the setting in Table 1, DD [20] reports results only for 10 images per class on MNIST and CIFAR10 over LeNet and AlexCifarNet (a customized AlexNet). For a fair comparison, we strictly follow the experimental setting in [20], use the same architectures, and report our and their original results in Table 3. Our method achieves significantly better performance than DD on both MNIST and CIFAR10. In particular, on MNIST, we obtain 5% better accuracy with only 1 synthetic sample per class than DD with 10 per class. In addition, our method obtains consistent results over multiple runs, with a standard deviation of only 0.6% on MNIST, while DD's performance varies significantly over different runs (8.1%). Finally, our method trains 2 times faster than DD and requires 50% less memory in the CIFAR10 experiments. More detailed runtime and qualitative comparisons can be found in the supplementary.

Cross-architecture generalization. Another key advantage of our method is that the condensed images learned using one architecture can be used to train another unseen one. Here we learn to generate 1 condensed image per class for MNIST over a diverse set of networks including MLP, ConvNet [44], LeNet [40], AlexNet [1], VGG-11 [45] and ResNet-18 [2] (see Table 2). Once the condensed sets are synthesized, we train every network on all the sets separately from scratch and evaluate their cross-architecture performance in terms of classification accuracy on the MNIST test set. Table 2 shows that the condensed images, especially the ones that are trained with convolutional networks, perform well and are thus architecture generic. MLP-generated images do not work well for training convolutional architectures, which is possibly due to the mismatch between the translation invariance properties of MLPs and convolutional networks. Interestingly, MLP achieves better performance with convolutional-network-generated images than with the MLP-generated ones. The best results are obtained in most cases with ResNet-generated images and ConvNet or ResNet as classifiers, which is in line with the performances of the architectures when trained on the original dataset.
Number of condensed images. We also study the relation between the number of condensed images per class and the test performance of a ConvNet trained on them for MNIST, FashionMNIST, SVHN and CIFAR10 in Figure 3, in absolute and relative terms (normalized by the upper bound). While increasing the number of condensed images improves the accuracies on all benchmarks and further closes the gap to the upper-bound performance, especially on MNIST and FashionMNIST, the performance saturates around 50 images/class. We plan to investigate effective regularization strategies on the condensed images to further improve performance in the future.

Activation, normalization & pooling. We also study the effect of various activation (sigmoid, ReLU [52, 53], leaky ReLU [54]), pooling (max, average) and normalization (batch [55], group [56], layer [57], instance norm [48]) functions and make the following observations: (i) leaky ReLU over ReLU and average pooling over max pooling enable learning better condensed images, as they allow for denser gradient flow; (ii) instance normalization obtains better classification performance than its alternatives when used in networks that are trained on a small set of condensed images. We refer to the supplementary for detailed results and discussion.
Applications
Continual Learning. First we apply our method to a continual-learning scenario [15, 17] where new tasks are learned incrementally and the goal is to preserve performance on the old tasks while learning the new ones. In particular, we build our model on the E2E method [17], which uses a limited-budget rehearsal memory (we consider 10 images/class here) to keep representative samples from the old tasks and knowledge distillation (KD) to regularize the network's output w.r.t. previous predictions. We replace its sample selection mechanism (herding) with ours, such that a set of condensed images is generated and stored in the memory, keep the rest of the model the same, and evaluate this model on the task-incremental learning problem on the digit recognition datasets SVHN [41], MNIST [40] and USPS [58], in that order. MNIST and USPS images are reshaped to 32 × 32 RGB images. We compare our method to E2E [17], depicted as herding in Figure 4, with and without KD regularization. The experiment contains 3 incremental training stages (SVHN→MNIST→USPS), and testing accuracies are computed by averaging over the test sets of the previous and current tasks after each stage. The desired outcome is high mean classification accuracy at T3. We show that our method achieves better performance than E2E in both settings and that the condensed images are more informative than the ones sampled by herding. In particular, our method outperforms E2E by a large margin (2.3% at T3) when KD is not employed.
Few-shot domain adaptation. Next we apply our method to a supervised domain adaptation scenario [59] where the goal is to efficiently transfer a model trained on a large labeled source domain to a small target domain with few selected or synthesized samples. To this end, we use a simple but effective transfer learning technique, finetuning, which trains the pretrained network on the few given samples. We report results on two domain adaptation scenarios: from MNIST (RGB) to SVHN, and from SVHN to MNIST (RGB). Table 4 depicts the adaptation results using the ConvNet architecture for all methods. We compare our method to the selected coreset methods and also to the pretrained network without finetuning (No adaptation). We show that our method synthesizes more informative images and thus enables better learning of the target task. We also run an additional experiment with the LeNet architecture to compare against DD [20], which reports results only for SVHN to MNIST with 10 images per class. Our method obtains 93.7±0.8% and outperforms DD (85.2±4.7%) in this setting, even though DD requires many pretrained models to synthesize images while ours employs randomly initialized ones.
Conclusion
In this paper, we have proposed a dataset condensation method that learns to synthesize a small set of informative images. These images can be used to train neural networks 2-3 orders of magnitude faster with a modest drop in performance. Our method outperforms the state of the art significantly on multiple benchmarks and also obtains promising results in continual learning and few-shot adaptation. As future work, we plan to explore the use of condensed images in more diverse and thus challenging datasets such as ImageNet [60], which contain higher-resolution images with larger variations in pose, object appearance, and background.
Acknowledgment. We thank Iain Murray and Oisin Mac Aodha for their valuable feedback.
A Implementation details
In this part, we explain the implementation details for the dataset condensation, continual learning and few-shot domain adaptation experiments.
Dataset condensation. The presented experiments involve tuning six hyperparameters: the number of outer-loop steps $K$ and inner-loop steps $T$, the learning rate $\eta_S$ and number of optimization steps $\varsigma_S$ for the condensed samples, and the learning rate $\eta_\theta$ and number of optimization steps $\varsigma_\theta$ for the model weights. In all experiments, we set $K = 1000$, $\eta_S = 0.1$, $\eta_\theta = 0.01$, $\varsigma_S = 1$ and employ stochastic gradient descent (SGD) as the optimizer. The only exception is that we set $\eta_S$ to 0.01 for synthesizing data with MLP in the cross-architecture experiments (Table 2), as MLP requires a slightly different treatment. Note that while $K$ is the maximum number of outer-loop steps, the optimization early-stops automatically if it converges before $K$ steps. For the remaining hyperparameters, we use two different sets, one for each of the 1 and 10 image(s)/class settings. We set $T = 1$, $\varsigma_\theta = 1$ for 1 image/class and $T = 10$, $\varsigma_\theta = 50$ for 10 images/class. Note that when $T = 1$, it is not required to update the model parameters (see Step 8 in Algorithm 1), as this model is not further used. For those experiments where more than 10 images/class are synthesized, we set $T$ to the number of synthetic images per class and $\varsigma_\theta = 500/T$, e.g. $T = 20$, $\varsigma_\theta = 25$ for 20 images/class.
The minibatches (Step 5 in Algorithm 1) that are sampled at each inner iteration contain samples from a single class at a time. Specifically, we randomly sample 256 real images of a class $c$ as a batch to calculate the mean gradient $\nabla_\theta \mathcal{L}^{\mathcal{T}_c}(\theta_t)$ and match it with the mean gradient $\nabla_\theta \mathcal{L}^{\mathcal{S}_c}(\theta_t)$ averaged over the condensed samples of the corresponding class. We train the condensed samples class-by-class in every inner loop.
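A minimal PyTorch sketch of this per-class matching step is given below. It assumes `x_syn` is a leaf tensor with `requires_grad=True` holding the condensed samples of one class and `loss_fn` is, e.g., cross-entropy; the squared-error distance between gradients is an illustrative stand-in for the matching loss, and the names (`grad_match_step`, `eta_S`) are ours rather than a verbatim transcription of Algorithm 1.

```python
import torch

def grad_match_step(net, loss_fn, x_real, y_real, x_syn, y_syn, eta_S):
    """Match the mean gradient on a real class batch with the mean gradient
    on the synthetic samples of the same class, then take one SGD step on
    the synthetic images (model weights are updated elsewhere)."""
    params = [p for p in net.parameters() if p.requires_grad]
    g_real = torch.autograd.grad(loss_fn(net(x_real), y_real), params)
    g_real = [g.detach() for g in g_real]               # constant matching target
    g_syn = torch.autograd.grad(loss_fn(net(x_syn), y_syn),
                                params, create_graph=True)  # keep graph for d(match)/d(x_syn)
    match = sum(((a - b) ** 2).sum() for a, b in zip(g_syn, g_real))
    g_img, = torch.autograd.grad(match, x_syn)
    with torch.no_grad():
        x_syn -= eta_S * g_img                          # update the condensed images
    return match.item()
```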
In all experiments, we use the standard train/test splits of the datasets; the train/test statistics are shown in Table T1. We apply data augmentation (crop, scale and rotate) only for experiments (coreset methods and ours) on the MNIST dataset. The only exception is that we also use data augmentation when comparing to DD [20] on CIFAR10 with AlexCifarNet, as data augmentation is also used in [20].

In the first stage (while training the condensed images), we use Batch Normalization in the VGG and ResNet networks. For a reliable estimate of the running mean and variance, we sample many real training images to estimate the running mean and variance and then freeze them ahead of Step 7. In the second stage (while training a deep network on the condensed set), we replace Batch Normalization layers with Instance Normalization in VGG and ResNet, since batch statistics are not reliable when training networks on few condensed images. Another minor modification that we apply to the standard ResNet architecture in the first stage is replacing the stride-2 convolutions with stride-1 convolutions coupled with an average pooling layer. We observe that this change enables more detailed (per-pixel) gradients w.r.t. the condensed images and leads to better condensed images.
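One way to realize this last modification in PyTorch is sketched below; this is a rough utility of our own, and it swaps in freshly initialized stride-1 convolutions, which is acceptable here because the first-stage networks are randomly initialized anyway.

```python
import torch.nn as nn

def destride(module: nn.Module) -> None:
    """Recursively replace stride-2 convolutions with stride-1 convolutions
    followed by 2x2 average pooling, so that gradients w.r.t. the synthetic
    input are computed for every pixel rather than every other one."""
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d) and child.stride == (2, 2):
            conv = nn.Conv2d(child.in_channels, child.out_channels,
                             child.kernel_size, stride=1,
                             padding=child.padding, bias=child.bias is not None)
            setattr(module, name, nn.Sequential(conv, nn.AvgPool2d(2)))
        else:
            destride(child)
```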
Continual learning. In this experiment, we focus on task-incremental learning on SVHN, MNIST and USPS in the given order. The three tasks share the same label space but have significantly different image statistics. The images of the three datasets are reshaped to 32 × 32 RGB for standardization. We use the standard splits for the training sets and randomly sample 2,000 test images for each dataset to obtain a balanced evaluation over the three datasets. Thus each model is tested on a growing test set with 2,000, 4,000 and 6,000 images at the three stages respectively. We use the default ConvNet in this experiment and set the weight of the distillation loss to 1.0 and the temperature to 2. We run 5,000 and 500 iterations for training and balanced finetuning as in [17], with learning rates 0.01 and 0.001 respectively. We run 5 experiments and report the mean and standard deviation in Figure 4.

Few-shot domain adaptation. We reshape MNIST data to 32 × 32 RGB images in this experiment so that MNIST and SVHN images have the same dimensionality and the same architecture (ConvNet) can be used for both. The learning rate for finetuning is initialized to 0.01 and then lowered to 0.001 after 300 iterations. We run each experiment 5 times with different random initializations and report the mean and standard deviation.
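For reference, a sketch of the temperature-scaled distillation term with the stated settings (weight 1.0, temperature 2) used in the continual-learning setup above; we write it as a KL divergence, which differs from the softened cross-entropy in [17] only by a term that is constant in the model parameters.

```python
import torch.nn.functional as F

def kd_loss(logits, old_logits, T: float = 2.0):
    """Distillation term: soften current and stored logits with temperature T,
    penalize their divergence, and rescale by T^2 (the usual correction for
    the temperature's effect on gradient magnitudes)."""
    log_p = F.log_softmax(logits / T, dim=1)
    q = F.softmax(old_logits / T, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)

# total loss per batch (distillation weight 1.0):
#   F.cross_entropy(logits, y) + 1.0 * kd_loss(logits, old_logits)
```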
B Further analysis
Next we provide additional results on the computational cost of training deep networks on various condensed sets of images, ablative studies over various deep network layers (including activation, pooling and normalization functions) and over the depth and width of the deep network architecture, and an additional qualitative analysis of the learned condensed images.
Computational cost. In Table T2 we report the number of training images and the training times for training a ConvNet from scratch, both for standard training on the whole dataset and for our method on 1 and 10 condensed image(s)/class, in MNIST, FashionMNIST, SVHN and CIFAR10. All the experiments are run for 100 epochs (just enough for convergence in all settings) on an NVIDIA GTX 1080-Ti GPU, and the training times are averaged over 10 experiments. No data augmentation is applied. We see in Table T2 that training on the condensed sets is about 2 to 3 orders of magnitude faster than standard training on the whole data.

Ablation study on activation functions. Here we study the use of three activation functions (Sigmoid, ReLU, and LeakyReLU with negative slope 0.01) in two stages: when training condensed images (denoted as Synth) and when training a ConvNet from scratch on the learned condensed images (denoted as Cls). The experiments are conducted on MNIST for the 1 condensed image/class setting. Table T3 shows that all three activation functions are good for the first stage, i.e. generating good condensed images; however, Sigmoid performs poorly in the second stage, i.e. learning a classifier on the condensed images: its testing accuracies are lower than those of ReLU and LeakyReLU by around 5%. This suggests that ReLU can provide sufficiently informative gradients for learning condensed images, even though the gradient of ReLU w.r.t. its input is typically sparse.
Ablation study on pooling functions. Next we investigate the performance of two pooling functions (average pooling and max pooling), as well as no pooling, for 1 image/class dataset condensation with the ConvNet architecture on MNIST, in terms of classification accuracy. Table T4 shows that max and average pooling both perform significantly better than no pooling (None) when used in the second stage. The best testing accuracy (91.7±0.5%) is obtained when the condensed samples are trained and tested on models with average pooling, possibly because average pooling provides more informative and smooth gradients for the whole image rather than only for its discriminative parts.
Ablation study on normalization functions. Next we study the performance of five normalization options (no normalization, Batch [55], Layer [57], Instance [48] and Group Normalization [56], with the number of groups set to four) for 1 image/class dataset condensation with the ConvNet architecture on MNIST, in terms of classification accuracy. Table T5 shows that the normalization layer has little influence when learning the condensed set, while the choice of normalization layer is important for training networks on the condensed set. LayerNorm and GroupNorm have similar performance, and InstanceNorm is the best choice for training a model on condensed images. BatchNorm obtains lower performance, similar to None (no normalization), as it is known to perform poorly when training models on few condensed samples, as also observed in [56]. Note that Batch Normalization does not allow for stable training in the first stage (Synth); we therefore replace its running mean and variance for each batch with those of randomly sampled real training images.
Ablation study on network depth and width. Here we study the effect of network depth and width for 1 image/class dataset condensation with the ConvNet architecture on MNIST in terms of classification accuracy. To this end we conduct multiple experiments varying the depth and width of the ConvNets that are trained to synthesize condensed images and of those that are trained to classify testing data, and report the results in Table T6 and Table T7. In Table T6, we observe that deeper ConvNets with more blocks generate better condensed images, which result in better classification performance when a network is trained on them, while the ConvNet with 3 blocks performs best as a classifier. Interestingly, Table T7 shows that the best results are obtained with the classifier that has 128 filters at each block, while the network width (number of filters at each block) used in generation has little overall impact on the final classification performance.
Further qualitative analysis. We first depict the condensed images learned on the MNIST, FashionMNIST, SVHN and CIFAR10 datasets in one experiment using the default ConvNet in the 10 images/class setting in Figure F5. Interestingly, the 10 images/class results in Figure F5 are diverse and cover the main variations, while the condensed images for the 1 image/class setting (see Figure 2) look like the "prototype" of each class. For example, in Figure F5 (a), the ten images of "four" show ten different styles. The ten "bag" images in Figure F5 (b) are significantly different from each other, e.g. "wallet" (1st row), "shopping bag" (3rd row), "handbag" (8th row) and "schoolbag" (10th row). Figure F5 (c) also shows diverse house numbers with different shapes, colors and shadows, and different poses of a "horse" have been learned in Figure F5 (d).
C Further comparison to DD [20]
Next we compare our method to DD [20], first quantitatively in terms of cross-architecture generalization, then qualitatively in terms of synthetic image quality, and finally in terms of the computational load for training synthetic images. Note that in these experiments we obtain the results for DD using the original source code provided by its authors.
Generalization ability comparison. Here we compare the generalization ability across different deep network architectures to DD. To this end, we use the synthesized 10 images/class data learned with LeNet on MNIST to train MLP, ConvNet, LeNet, AlexNet, VGG11 and ResNet18, and report the results in Table T8. We see that the condensed set produced by our method achieves good classification performance with all architectures, while the synthetic set produced by DD performs poorly when used to train some architectures, e.g. AlexNet, VGG and ResNet. Note that DD generates learning rates to be used at every training step in addition to the synthetic data, in contrast to our method, which does not learn learning rates for specific training steps. Although the tied learning rates improve the performance of DD while training a network on the synthetic data, our method outperforms it in this experiment.
Qualitative comparison. We also provide a qualitative comparison to DD in terms of image quality in Figure F6. Both synthetic sets are trained with LeNet on MNIST and AlexCifarNet on CIFAR10. Our method produces more interpretable and realistic images than DD, although that is not our goal. The MNIST images produced by DD are noisy, and the CIFAR10 images produced by DD do not show any clear structure of the corresponding class. In contrast, the MNIST and CIFAR10 images produced by our method are both visually meaningful and diverse.
Training memory and time. One advantage of our method is that we decouple the model weights from their previous states in training, while DD requires maintaining the recursive computation graph, which is not scalable to large models or to inner-loop optimizers with many steps. Hence, our method requires less training time and memory. We compare the training time and memory cost required by DD and our method on one NVIDIA GTX 1080-Ti GPU. Table T9 shows that our method requires significantly less memory and training time than DD: 653 MB vs. 785 MB and 46 vs. 160 minutes on MNIST, and 1445 MB vs. 3211 MB and 105 vs. 214 minutes on CIFAR10.
Figure 1: Dataset Condensation (left) aims to generate a small set of synthetic images that can match the performance of a network trained on a large image dataset. Our method (right) realizes this goal by learning a synthetic set such that a deep network trained on it and on the large set produces similar gradients w.r.t. the parameters. The synthetic data can later be used to train a network from scratch in a fraction of the original computational load. CE denotes Cross-Entropy.

Figure 3: Absolute and relative testing accuracies for varying the number of condensed images/class for MNIST, FashionMNIST, SVHN and CIFAR10. The relative accuracy is the ratio to the upper bound, i.e. training with the whole dataset.

Figure 4: Continual learning performance in accuracy (%). Herding denotes the original E2E [17]. T1, T2, T3 are three learning stages. The performance at each stage is the mean testing accuracy on all learned tasks.

Computational cost. A key advantage of our method is the significant reduction in the size of the training set and thus in training time. Once the condensed images are learned, one can train a network (from scratch) on them in a fraction of the training time taken by the original data. For instance, the full-dataset training times for learning a ConvNet (from scratch) on MNIST, FashionMNIST, SVHN and CIFAR10 on an NVIDIA GTX 1080-Ti GPU are respectively 773, 776, 931 and 644 seconds (all above 10 minutes). However, the training times for 1 and 10 condensed images/class are only 0.6 and 1.3 seconds respectively for all datasets, a remarkable reduction of 2-3 orders of magnitude. More details can be found in the supplementary.

Figure F5: The synthetic images for MNIST, FashionMNIST, SVHN and CIFAR10 produced by our method with ConvNet under the 10 images/class setting.

Figure F6: Qualitative comparison between the condensed images produced by DD and ours under the 10 images/class setting. LeNet and AlexCifarNet are used for MNIST and CIFAR10 respectively.
We use a modular ConvNet architecture with $D$ duplicate blocks; each block has a convolutional layer with $W$ ($3 \times 3$) filters, a normalization layer $N$, an activation layer $A$ and a pooling layer $P$, denoted as $[W, N, A, P] \times D$. The default ConvNet (unless specified otherwise) has 3 blocks, each with 128 filters, instance normalization, ReLU activation and average pooling.

Table 1: The performance comparison to coreset methods. This table shows the testing accuracies (%) of different methods on four datasets. ConvNet is used for training and testing. Img/Cls: image(s) per class; Ratio (%): the ratio of condensed images to the whole training set.

Dataset | Img/Cls | Ratio (%) | Random | Herding | K-Center | Forgetting | Ours | Whole Dataset
MNIST | 1 | 0.017 | 64.9±3.5 | 89.2±1.6 | 89.3±1.5 | 35.5±5.6 | 91.7±0.5 | 99.6±0.0
MNIST | 10 | 0.17 | 95.1±0.9 | 93.7±0.3 | 84.4±1.7 | 68.1±3.3 | 97.4±0.2 | 99.6±0.0
FashionMNIST | 1 | 0.017 | 51.4±3.8 | 67.0±1.9 | 66.9±1.8 | 42.0±5.5 | 70.5±0.6 | 93.5±0.1
FashionMNIST | 10 | 0.17 | 73.8±0.7 | 71.1±0.7 | 54.7±1.5 | 53.9±2.0 | 82.3±0.4 | 93.5±0.1
SVHN | 1 | 0.014 | 14.6±1.6 | 20.9±1.3 | 21.0±1.5 | 12.1±1.7 | 31.2±1.4 | 95.4±0.1
SVHN | 10 | 0.14 | 35.1±4.1 | 50.5±3.3 | 14.0±1.3 | 16.8±1.2 | 76.1±0.6 | 95.4±0.1
CIFAR10 | 1 | 0.02 | 14.4±2.0 | 21.5±1.2 | 21.5±1.3 | 13.5±1.2 | 28.3±0.5 | 84.8±0.1
CIFAR10 | 10 | 0.2 | 26.0±1.2 | 31.6±0.7 | 14.7±0.9 | 23.3±1.0 | 44.9±0.5 | 84.8±0.1
Figure 2: Visualization of condensed 1 image/class with ConvNet for MNIST, FashionMNIST, SVHN and CIFAR10.
Table 2: Cross-architecture performance in accuracy (%) for condensed 1 image/class in MNIST.

Synth\Cls | MLP | ConvNet | LeNet | AlexNet | VGG | ResNet
MLP | 70.5±1.2 | 63.9±6.5 | 77.3±5.8 | 70.9±11.6 | 53.2±7.0 | 80.9±3.6
ConvNet | 69.6±1.6 | 91.6±0.5 | 85.3±1.8 | 85.1±3.0 | 83.4±1.8 | 90.0±0.8
LeNet | 71.0±1.6 | 90.3±1.2 | 85.0±1.7 | 84.7±2.4 | 80.3±2.7 | 89.0±0.8
AlexNet | 72.1±1.7 | 87.5±1.6 | 84.0±2.8 | 82.7±2.9 | 81.2±3.0 | 88.9±1.1
VGG | 70.3±1.6 | 90.1±0.7 | 83.9±2.7 | 83.4±3.7 | 81.7±2.6 | 89.1±0.9
ResNet | 73.6±1.2 | 91.6±0.5 | 86.4±1.5 | 85.4±1.9 | 83.4±2.4 | 89.4±0.9
Table 3: Comparison to DD [20] in terms of classification accuracy (%).
Table 4: Few-shot domain adaptation in terms of classification accuracy (%). I/C: image(s) per class. M: MNIST. S: SVHN.

Task | I/C | Random | Herding | K-Center | Forgetting | Ours | No Adapt
S→M | 1 | 75.7±4.5 | 93.5±0.8 | 93.1±1.1 | 41.5±6.4 | 94.4±0.4 | 76.5±1.7
S→M | 10 | 97.0±0.5 | 95.8±0.4 | 90.8±0.7 | 74.8±1.8 | 97.7±0.2 | 76.5±1.7
M→S | 1 | 17.2±2.2 | 25.0±3.2 | 25.0±3.2 | 12.8±1.4 | 35.7±2.1 | 31.0±1.2
M→S | 10 | 56.1±1.8 | 61.7±2.1 | 19.0±1.6 | 32.3±2.2 | 74.8±0.8 | 31.0±1.2
Table T1: Train/test statistics for MNIST, FashionMNIST, SVHN, CIFAR10 and USPS datasets.
Table T3: Cross-activation experiments in accuracy (%) for 1 condensed image/class in MNIST.

Table T4: Cross-pooling experiments in accuracy (%) for 1 condensed image/class in MNIST.

Synth\Cls | None | MaxPooling | AvgPooling
None | 78.7±3.0 | 80.8±3.5 | 88.3±1.0
MaxPooling | 81.2±2.8 | 89.5±1.1 | 91.1±0.6
AvgPooling | 81.8±2.9 | 90.2±0.8 | 91.7±0.5
Table T2: Training time for a randomly initialized ConvNet on the whole dataset and on our condensed set.
Table T5: Cross-normalization experiments in accuracy (%) for 1 condensed image/class in MNIST.

Synth\Cls | None | BatchNorm | LayerNorm | InstanceNorm | GroupNorm
None | 79.0±2.2 | 80.8±2.0 | 85.8±1.7 | 90.7±0.7 | 85.9±1.7
BatchNorm | 78.6±2.1 | 80.7±1.8 | 85.7±1.6 | 90.9±0.6 | 85.9±1.5
LayerNorm | 81.2±1.8 | 78.6±3.0 | 87.4±1.3 | 90.7±0.7 | 87.3±1.4
InstanceNorm | 72.9±7.1 | 56.7±6.5 | 82.7±5.3 | 91.7±0.5 | 84.3±4.2
GroupNorm | 79.5±2.1 | 81.8±2.3 | 87.3±1.2 | 91.6±0.5 | 87.2±1.2
Table T6: Cross-depth performance in accuracy (%) for 1 condensed image/class in MNIST.

Synth\Cls | 1 | 2 | 3 | 4
1 | 61.3±3.5 | 78.2±3.0 | 77.1±4.0 | 76.4±3.5
2 | 78.3±2.3 | 89.0±0.8 | 91.0±0.6 | 89.4±0.8
3 | 81.6±1.5 | 89.8±0.8 | 91.7±0.5 | 90.4±0.6
4 | 82.5±1.3 | 89.9±0.8 | 91.9±0.5 | 90.6±0.4
Table T7: Cross-width performance in accuracy (%) for 1 condensed image/class in MNIST.

Synth\Cls | 32 | 64 | 128 | 256
32 | 90.6±0.8 | 91.4±0.5 | 91.5±0.5 | 91.3±0.6
64 | 91.0±0.8 | 91.6±0.6 | 91.8±0.5 | 91.4±0.6
128 | 90.8±0.7 | 91.5±0.6 | 91.7±0.5 | 91.2±0.7
256 | 91.0±0.7 | 91.6±0.6 | 91.7±0.5 | 91.4±0.5
Table T8: Generalization ability comparison to DD. The 10 condensed images per class are trained with LeNet and tested on various architectures. It shows that the condensed images generated by our method have better generalization ability.

Method | MLP | ConvNet | LeNet | AlexNet | VGG | ResNet
DD | 72.7±2.8 | 77.6±2.9 | 79.5±8.1 | 51.3±19.9 | 11.4±2.6 | 63.6±12.7
Ours | 83.0±2.5 | 92.9±0.5 | 93.9±0.6 | 90.6±1.9 | 92.9±0.5 | 94.5±0.4
Table T9: Time and memory use for training DD and our method in the 10 images/class setting.

Method | Dataset | Architecture | Memory (MB) | Time (min) | Test Acc.
DD | MNIST | LeNet | 785 | 160 | 79.5±8.1
Ours | MNIST | LeNet | 653 | 46 | 93.9±0.6
DD | CIFAR10 | AlexCifarNet | 3211 | 214 | 36.8±1.2
Ours | CIFAR10 | AlexCifarNet | 1445 | 105 | 39.1±1.2
Local smoothness is frequently used to obtain explicit first-order local approximations in deep networks (e.g. see[36,37,38]).
References

[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[3] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[4] Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. Deep Speech 2: End-to-end speech recognition in English and Mandarin. In International Conference on Machine Learning, pages 173-182, 2016.
[5] George Saon, Gakuto Kurata, Tom Sercu, Kartik Audhkhasi, Samuel Thomas, Dimitrios Dimitriadis, Xiaodong Cui, Bhuvana Ramabhadran, Michael Picheny, Lynn-Li Lim, et al. English conversational telephone speech recognition by humans and machines. arXiv preprint arXiv:1703.02136, 2017.
[6] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281-305, 2012.
[7] Nikolaus Hansen, Sibylle D Müller, and Petros Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation, 11(1):1-18, 2003.
[8] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In International Conference on Learning and Intelligent Optimization, pages 507-523. Springer, 2011.
[9] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951-2959, 2012.
[10] Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
[11] Pankaj K Agarwal, Sariel Har-Peled, and Kasturi R Varadarajan. Approximating extent measures of points. Journal of the ACM (JACM), 51(4):606-635, 2004.
[12] Sariel Har-Peled and Soham Mazumdar. On coresets for k-means and k-median clustering. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, pages 291-300, 2004.
[13] Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1434-1453. SIAM, 2013.
[14] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In ICLR, 2018.
[15] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2001-2010, 2017.
[16] Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. In ICLR, 2019.
[17] Francisco M Castro, Manuel J Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 233-248, 2018.
[18] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, pages 11816-11825, 2019.
[19] Max Welling. Herding dynamical weights to learn. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1121-1128. ACM, 2009.
[20] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018.
[21] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
[22] Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 535-541, 2006.
[23] Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pages 2654-2662, 2014.
[24] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
[25] Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner. Data-free knowledge distillation for deep neural networks. In LLD Workshop at Neural Information Processing Systems (NIPS), 2017.
[26] Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, Venkatesh Babu Radhakrishnan, and Anirban Chakraborty. Zero-shot knowledge distillation in deep networks. In Proceedings of the 36th International Conference on Machine Learning, 2019.
[27] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[28] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[29] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[30] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[31] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818-833. Springer, 2014.
[32] Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5188-5196, 2015.
[33] Ligeng Zhu, Zhijian Liu, and Song Han. Deep leakage from gradients. In Advances in Neural Information Processing Systems, pages 14747-14756, 2019.
[34] Kegan G G Samuel and Marshall F Tappen. Learning optimized MAP estimates in continuously-valued MRF models. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 477-484. IEEE, 2009.
[35] Justin Domke. Generic methods for optimization-based modeling. In Artificial Intelligence and Statistics, pages 318-326, 2012.
[36] Salah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sampling contractive auto-encoders. arXiv preprint arXiv:1206.6434, 2012.
[37] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[38] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning, pages 1885-1894. JMLR.org, 2017.
[39] David Lopez-Paz et al. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pages 6467-6476, 2017.
[40] Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[41] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
[42] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
[43] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
[44] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4367-4375, 2018.
[45] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[46] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077-4087, 2017.
[47] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630-3638, 2016.
[48] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
[49] Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. Large scale incremental learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 374-382, 2019.
[50] Eden Belouadah and Adrian Popescu. ScaIL: Classifier weights scaling for class incremental learning. In The IEEE Winter Conference on Applications of Computer Vision, pages 1266-1275, 2020.
[51] G W Wolf. Facility location: Concepts, models, algorithms and case studies. 2011.
[52] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807-814, 2010.
[53] Matthew D Zeiler, Marc'Aurelio Ranzato, Rajat Monga, Mark Z Mao, Kyeongcheol Yang, Quoc V Le, Patrick Nguyen, Andrew W Senior, Vincent Vanhoucke, Jeffrey Dean, and Geoffrey E Hinton. On rectified linear units for speech processing. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 3517-3521, 2013.
[54] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In International Conference on Machine Learning (ICML), 2013.
[55] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[56] Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pages 3-19, 2018.
[57] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[58] Jonathan J Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550-554, 1994.
[59] Saeid Motiian, Quinn Jones, Seyed Iranmanesh, and Gianfranco Doretto. Few-shot adversarial domain adaptation. In Advances in Neural Information Processing Systems, pages 6670-6680, 2017.
[60] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pages 248-255. IEEE, 2009. |
17,220,670 | LIE-ACCESS NEURAL TURING MACHINES | External neural memory structures have recently become a popular tool for algorithmic deep learning (Graves et al., 2014; Weston et al., 2014). These models generally utilize differentiable versions of traditional discrete memory-access structures (random access, stacks, tapes) to provide the storage necessary for computational tasks. In this work, we argue that these neural memory systems lack specific structure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm, memory is accessed using a continuous head in a key-space manifold. The head is moved via Lie group actions, such as shifts or rotations, generated by a controller, and memory access is performed by linear smoothing in key space. We argue that Lie groups provide a natural generalization of discrete memory structures, such as Turing machines, as they provide inverse and identity operators while maintaining differentiability. To experiment with this approach, we implement a simplified Lie-access neural Turing machine (LANTM) with different Lie groups. We find that this approach is able to perform well on a range of algorithmic tasks. Recent work on neural Turing machines (NTMs) (Graves et al., 2014) and memory networks (MemNNs) (Weston et al., 2014) has repopularized the use of explicit external memory in neural networks and demonstrated that these networks can be effectively trained in an end-to-end fashion. These methods have been successfully applied to question answering (Weston et al., 2014; Grefenstette et al., 2015; Joulin & Mikolov, 2015), machine translation (Kalchbrenner et al., 2015), and other tasks. This methodology has the potential to extend deep networks in a general-purpose way beyond the limitations of fixed-length encodings such as standard recurrent neural networks (RNNs). A shared theme in many of these works (and earlier exploration of neural memory) is to re-frame traditional memory access paradigms to be continuous and possibly differentiable to allow for backpropagation. In MemNNs, traditional random-access memory is replaced with a ranking approach that finds the most likely memory. In the work of Grefenstette et al. (2015), classical stack-, queue-, and deque-based memories are replaced by soft-differentiable stack, queue, and deque data structures. In NTMs, sequential local-access memory is simulated by an explicit tape data structure. This work questions the assumption that neural memory should mimic the structure of traditional discrete memory. We argue that a neural memory should provide the following: (A) differentiability for end-to-end training and (B) robust relative indexing (perhaps in addition to random access). Surprisingly, many neural memory systems fail one of these conditions, either lacking Criterion B, discussed below, or employing extensions like REINFORCE to work around lack of differentiability (Zaremba & Sutskever, 2015). We propose instead a class of memory access techniques based around Lie groups, i.e. groups with differentiable operations, which provide a natural structure for neural memory access. By definition, their differentiability satisfies the concerns of Criterion A. Additionally, the group axioms provide identity, invertibility, and associativity, all of which are desirable properties for a relative indexing scheme (Criterion B), and all of which are satisfied by standard Turing machines.
Notably though, | [] | LIE-ACCESS NEURAL TURING MACHINES
Greg Yang gyang@college
Harvard University, Cambridge, MA 02138, USA
Alexander M Rush [email protected]
Harvard University, Cambridge, MA 02138, USA
Published as a conference paper at ICLR 2017
External neural memory structures have recently become a popular tool for algorithmic deep learning (Graves et al., 2014; Weston et al., 2014). These models generally utilize differentiable versions of traditional discrete memory-access structures (random access, stacks, tapes) to provide the storage necessary for computational tasks. In this work, we argue that these neural memory systems lack specific structure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm, memory is accessed using a continuous head in a key-space manifold. The head is moved via Lie group actions, such as shifts or rotations, generated by a controller, and memory access is performed by linear smoothing in key space. We argue that Lie groups provide a natural generalization of discrete memory structures, such as Turing machines, as they provide inverse and identity operators while maintaining differentiability. To experiment with this approach, we implement a simplified Lie-access neural Turing machine (LANTM) with different Lie groups. We find that this approach is able to perform well on a range of algorithmic tasks.

INTRODUCTION

Recent work on neural Turing machines (NTMs) (Graves et al., 2014) and memory networks (MemNNs) (Weston et al., 2014) has repopularized the use of explicit external memory in neural networks and demonstrated that these networks can be effectively trained in an end-to-end fashion. These methods have been successfully applied to question answering (Weston et al., 2014; Grefenstette et al., 2015; Joulin & Mikolov, 2015), machine translation (Kalchbrenner et al., 2015), and other tasks. This methodology has the potential to extend deep networks in a general-purpose way beyond the limitations of fixed-length encodings such as standard recurrent neural networks (RNNs).

A shared theme in many of these works (and earlier exploration of neural memory) is to re-frame traditional memory access paradigms to be continuous and possibly differentiable to allow for backpropagation. In MemNNs, traditional random-access memory is replaced with a ranking approach that finds the most likely memory. In the work of Grefenstette et al. (2015), classical stack-, queue-, and deque-based memories are replaced by soft-differentiable stack, queue, and deque data structures. In NTMs, sequential local-access memory is simulated by an explicit tape data structure.

This work questions the assumption that neural memory should mimic the structure of traditional discrete memory. We argue that a neural memory should provide the following: (A) differentiability for end-to-end training and (B) robust relative indexing (perhaps in addition to random access). Surprisingly, many neural memory systems fail one of these conditions, either lacking Criterion B, discussed below, or employing extensions like REINFORCE to work around lack of differentiability (Zaremba & Sutskever, 2015).

We propose instead a class of memory access techniques based around Lie groups, i.e. groups with differentiable operations, which provide a natural structure for neural memory access. By definition, their differentiability satisfies the concerns of Criterion A. Additionally, the group axioms provide identity, invertibility, and associativity, all of which are desirable properties for a relative indexing scheme (Criterion B), and all of which are satisfied by standard Turing machines. Notably though, simple group properties like invertibility are not satisfied by neural Turing machines, differentiable neural computers, or even by simple soft-tape machines. In short, in our method, we construct memory systems with keys placed on a manifold, and where relative access operations are provided by Lie groups.
To experiment with this approach, we implement a neural Turing machine with an LSTM controller and several versions of Lie-access memory, which we call Lie-access neural Turing machines (LANTM). The details of these models are exhibited in Section 4. Our main experimental results are presented in Section 5. The LANTM model is able to learn non-trivial algorithmic tasks such as copying and permuting sequences with higher accuracy than more traditional memory-based approaches, and significantly better than fixed-memory LSTM models. The memory structures and key transformations learned by the model resemble interesting continuous-space representations of traditional discrete memory data structures.
BACKGROUND: RECURRENT NEURAL NETWORKS WITH MEMORY
This work focuses particularly on recurrent neural network (RNN) controllers of abstract neural memories. Formally, an RNN is a differentiable function $\mathrm{RNN}: \mathcal{X} \times \mathcal{H} \to \mathcal{H}$, where $\mathcal{X}$ is an arbitrary input space and $\mathcal{H}$ is the hidden state space. On input $(x^{(1)}, \ldots, x^{(T)}) \in \mathcal{X}^T$ and with initial state $h^{(0)} \in \mathcal{H}$, the RNN produces states $h^{(1)}, \ldots, h^{(T)}$ based on the recurrence $h^{(t)} := \mathrm{RNN}(x^{(t)}, h^{(t-1)})$. These states can be used for downstream tasks, for example sequence prediction, which produces outputs $(y^{(1)}, \ldots, y^{(T)})$ based on an additional transformation and prediction layer $y^{(t)} = F(h^{(t)})$, such as a linear layer followed by a softmax. RNNs can be trained end-to-end by backpropagation through time (BPTT) (Werbos, 1990). In practice, we use long short-term memory (LSTM) RNNs (Hochreiter & Schmidhuber, 1997). An LSTM's hidden state consists of two variables $(c^{(t)}, h^{(t)})$, where $h^{(t)}$ is also the output to the external world; we however use the above notation for simplicity.
An RNN can also serve as the controller for an external memory system (Graves et al., 2014; Grefenstette et al., 2015; Zaremba & Sutskever, 2015), which enables: (1) the entire system to carry state over time from both the RNN and the external memory, and (2) the RNN controller to collect readings from and compute additional instructions to the external memory. Formally, we extend the recurrence to

$$h^{(t)} := \mathrm{RNN}([x^{(t)}; \rho^{(t-1)}], h^{(t-1)}), \qquad \Sigma^{(t)}, \rho^{(t)} := \mathrm{RW}(\Sigma^{(t-1)}, h^{(t)}),$$
where $\Sigma$ is the abstract memory state, $\rho^{(t)}$ is the value read from memory, and $h$ is used as an abstract controller command to a read/write function $\mathrm{RW}$. Writing occurs in the mutation of $\Sigma$ at each time step. Throughout this work, $\Sigma$ will take the form of an ordered set $\{(k_i, v_i, s_i)\}_i$ where $k_i \in \mathcal{K}$ is an arbitrary key, $v_i \in \mathbb{R}^m$ is a memory value, and $s_i \in \mathbb{R}_+$ is a memory strength.
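A skeletal PyTorch rendering of this controller recurrence is below; the read/write function `rw` is left abstract, and the class and argument names are ours.

```python
import torch
import torch.nn as nn

class MemoryController(nn.Module):
    """Runs h(t) = RNN([x(t); rho(t-1)], h(t-1)) and
    (Sigma(t), rho(t)) = RW(Sigma(t-1), h(t)) over a sequence."""
    def __init__(self, x_dim, h_dim, m_dim, rw):
        super().__init__()
        self.cell = nn.LSTMCell(x_dim + m_dim, h_dim)
        self.rw = rw                        # abstract read/write function

    def forward(self, xs, sigma, rho, state=None):
        outs = []
        for x in xs:                        # xs: iterable of (batch, x_dim) tensors
            h, c = self.cell(torch.cat([x, rho], dim=-1), state)
            state = (h, c)
            sigma, rho = self.rw(sigma, h)  # mutate memory, obtain new read value
            outs.append(h)
        return outs, sigma, rho, state
```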
In order for the model to be trainable with backpropagation, the memory function RW must also be differentiable. Several forms of differentiable memory have been proposed in the literature. We begin by describing two simple forms: (neural) random-access memory and (neural) tape-based memory. For this section, we focus on the read step and assume Σ is fixed.
Random-Access Memory. Random-access memory consists of using a now-standard attention mechanism or MemNN to read a memory (our description follows Miller et al. (2016)). The controller hidden state is used to output a random-access pointer $q(h)$ that determines a weighting of memory vectors via dot products with the corresponding keys. This weighting in turn determines the read values via linear smoothing based on a function $w$,

$$w_i(q, \Sigma) := \frac{s_i \exp\langle q, k_i\rangle}{\sum_j s_j \exp\langle q, k_j\rangle}, \qquad \rho := \sum_i w_i(q(h), \Sigma)\, v_i.$$
The final read memory is based on how "close" the read pointer was to each of the keys, where closeness in key space is determined by w.
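In code, this read amounts to a strength-weighted softmax over dot products followed by a weighted sum of values; a minimal unbatched sketch (our own naming):

```python
import torch

def random_access_read(query, keys, values, strengths):
    """rho = sum_i w_i v_i with w_i proportional to s_i * exp(<q, k_i>)."""
    scores = keys @ query              # <q, k_i> for every key i
    scores = scores - scores.max()     # stability shift; cancels after normalization
    w = strengths * torch.exp(scores)
    w = w / w.sum()
    return w @ values                  # the read vector rho
```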
Tape-Based Memory. Neural memories can also be extended to support relative access by maintaining read state. Following notation from Turing machines, we call this state the head, $q$. In the simplest case the recurrence now has the form $\Sigma', q', \rho = \mathrm{RW}(\Sigma, q, h)$, and this can be extended to support multiple heads.
In the simplest case of soft tape-based memory (a naive version of the much more complicated neural Turing machine), the keys k i indicate one-hot positions along a tape with k i = δ i . The head q is a probability distribution over tape positions. It determines the read value by directly specifying the weights. The controller can only "shift" the head by outputting a kernel K(h) = (K −1 , K 0 , K +1 ) in the probability simplex ∆ 2 and applying convolution.
q′(q, h) := q * K(h), i.e., q′_j = q_{j−1} K_{+1} + q_j K_0 + q_{j+1} K_{−1}.
We can view this as the soft version of a single-step discrete Turing machine where the kernel can softly shift the "head" of the machine one to the left, one to the right, or remain in the same location. The value returned can then be computed with linear smoothing as above,
w_i(q, Σ) := s_i ⟨q, k_i⟩ / ∑_j s_j ⟨q, k_j⟩,    ρ := ∑_i w_i(q′(q, h), Σ) v_i.
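The convolutional head shift can be written compactly; the following sketch is ours, using zero padding at the tape ends rather than a circular tape:

```python
import torch
import torch.nn.functional as F

def shift_head(q, kernel):
    """Soft tape shift q' = q * K(h) with kernel = (K_{-1}, K_0, K_{+1}),
    so that q'_j = q_{j-1} K_{+1} + q_j K_0 + q_{j+1} K_{-1}."""
    q = q.view(1, 1, -1)
    # conv1d is cross-correlation, so reverse the kernel to convolve.
    k = torch.flip(kernel, dims=[0]).view(1, 1, 3)
    return F.conv1d(q, k, padding=1).view(-1)
```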
LIE GROUPS FOR MEMORY
Let us now take a brief digression and consider the standard (non-neural) Turing machine (TM) and the movement of its head over a tape. A TM has a head q ∈ Z indicating the position on a tape. Between reads, the head can move any number of steps left or right. Moving a + b steps and then c steps eventually puts the head at the same location as moving a steps and then b + c steps -i.e. the head movement is associative. In addition, the machine should be able to reverse a head shift, for example, in a stack simulation algorithm, going from push to pop -i.e. each head movement should also have a corresponding inverse. Finally, the head should also be allowed to stay put, for example, to read a single data item and use it for multiple time points, an identity.
These movements correspond directly to group actions: the possible head movements should be associative, and contain inverse and identity elements. This group acts on the set of possible head locations. In a TM, the set of Z-valued head movement acts on the set of locations on the Z-indexed infinite tape. By our reasoning above, if a Turing machine is to store data contents at points in a general space K (instead of an infinite Z-indexed tape), then its head movements should form a group and act on K via group actions.
For a neural memory system, we desire the network to be (almost everywhere) differentiable. The notion of "differentiable" groups is well-studied in mathematics, where they are known as Lie groups, and "differentiable group actions" are correspondingly called Lie group actions. In our case, using Lie group actions as generalized head movements on a general key space (more accurately, manifolds) would most importantly mean that we can take derivatives of these movements and perform the usual backpropagation algorithm.
LIE-ACCESS NEURAL TURING MACHINES
These properties motivate us to propose Lie access as an alternative formalism to popular neural memory systems, such as probabilistic tapes, which surprisingly do not satisfy invertibility and often do not provide an identity. 2 Our Lie-access memory will consist of a set of points in a manifold K.
2 The Markov kernel convolutional soft head shift mechanism proposed in Graves et al. (2014) and sketched in Section 2 does not in general have inverses. Indeed, the authors reported problems with the soft head losing "sharpness" over time, which they dealt with by sharpening coefficients. In the followup work, Graves et al. (2016) utilize a temporal memory link matrix for actions. They note, "the operation Lw smoothly shifts the focus forwards to the locations written ... whereas L w shifts the focus backwards" but do not enforce this as a true inverse. They also explicitly do not include an identity, noting "Self-links are excluded (the diagonal of the link matrix is always 0)"; however, they could ignore the link matrix with an interpolation gate, which in effect acts as the identity.
We replace the discrete head with a continuous head q ∈ K. The head moves based on a set of Lie group actions a ∈ A generated by the controller. To read memories, we will rely on a distance measure in this space, d : K × K → R ≥0 . 3 Together these properties describe a general class of possible neural memory architectures.
Formally a Lie-access neural Turing machine (LANTM) computes the following function,
Σ′, q′, q (w)′, ρ := RW(Σ, q, q (w) , h),
where q, q (w) ∈ K are resp. read and write heads, and Σ is the memory itself. We implement Σ, as above, as a weighted dictionary
Σ = {(k i , v i , s i )} i .
ADDRESSING PROCEDURE
The LANTM maintains a read head q which at every step is first updated to q′ and then used to read from the memory table. This update occurs by selecting a Lie group action from A which then acts smoothly on the key space K. We parametrize the action transformation a : H → A by the hidden state to produce the Lie action a(h) ∈ A. In the simplest case, the head is then updated based on this action (here · denotes group action): q′ := a(h) · q.
For instance, consider two possible Lie groups:
(1) A shift group R 2 acting additively on R 2 . This means that A = R 2 so that a(h) = (α, β) acts upon a head q = (x, y) by,
a(h) · q = (α, β) + (x, y) = (x + α, y + β).
(2) A rotation group SO(3) acting on the sphere S 2 = {v ∈ R 3 : v = 1}. Each rotation can be described by its axis ξ (a unit vector) and angle θ. An action (ξ, θ) · q is just the appropriate rotation of the point q, and is given by Rodrigues' rotation formula,
a(h) · q = (ξ, θ) · q = q cos θ + (ξ × q) sin θ + ξ ⟨ξ, q⟩ (1 − cos θ).
Here × denotes cross product.
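Both example actions are a few lines of NumPy; this sketch is ours and only spells out the two formulas above:

```python
import numpy as np

def shift_action(a, q):
    """Shift group R^2 acting additively: (α, β) + (x, y)."""
    return q + a

def rotate_action(xi, theta, q):
    """SO(3) acting on S^2 via Rodrigues' rotation formula:
    q' = q cosθ + (ξ × q) sinθ + ξ ⟨ξ, q⟩ (1 − cosθ), with ξ a unit axis."""
    return (q * np.cos(theta)
            + np.cross(xi, q) * np.sin(theta)
            + xi * np.dot(xi, q) * (1.0 - np.cos(theta)))
```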
READING AND WRITING MEMORIES
Recall that memories are stored in Σ, each with a key k i , memory vector v i , and strength s i , and that memories are read using linear smoothing over vectors based on a key weighting function w, ρ := ∑_i w_i(q′, Σ) v_i . While there are many possible weighting schemes, we use one based on the distance of each memory address from the head in key-space, assuming a metric d on K. We consider two different weighting functions: (1) inverse-square and (2) softmax. The first uses a polynomial law and the second an annealed softmax of the squared distances:
w (1)_i(q, Σ) := s_i d(q, k_i)^{−2} / ∑_j s_j d(q, k_j)^{−2},    w (2)_i(q, Σ, T) := s_i exp(−d(q, k_i)²/T) / ∑_j s_j exp(−d(q, k_j)²/T),
where we use the convention that it takes the limit value when q → k i and T is a temperature that represents the certainty of its reading, i.e. higher T creates more uniform w.
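For concreteness, a NumPy sketch of the two weighting schemes (ours; the small eps implementing the q → k_i limit is an implementation choice):

```python
import numpy as np

def inverse_square_weights(q, keys, strengths, eps=1e-12):
    """w_i ∝ s_i · d(q, k_i)^{-2}; eps guards the q -> k_i limit."""
    d2 = np.sum((keys - q) ** 2, axis=1) + eps
    w = strengths / d2
    return w / w.sum()

def softmax_weights(q, keys, strengths, T):
    """w_i ∝ s_i · exp(-d(q, k_i)^2 / T); larger T gives flatter weights."""
    logits = np.log(strengths) - np.sum((keys - q) ** 2, axis=1) / T
    logits -= logits.max()  # numerical stability
    w = np.exp(logits)
    return w / w.sum()
```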
The writing procedure is similar to reading. The LANTM maintains a separate write head q (w) that moves analogously to the read head, i.e., with action function a (w)(h) and updated value q (w)′. At each call to RW, a new memory is automatically appended to Σ with k = q (w)′. The corresponding memory vector v and strength s are created by MLPs v(h) ∈ R m and s(h) ∈ [0, 1] taking h as input. After writing, the new memory set is,
Σ′ := Σ ∪ {(q (w)′, v(h), s(h))}.
No explicit erase mechanism is provided, but to erase a memory (k, v, s), the controller may in theory write (k, −v, s).
COMBINING WITH RANDOM ACCESS
Finally we combine this relative addressing procedure with direct random access to give the model the ability for absolute address access. We do this by outputting an absolute address each step and simply interpolating with our current head. Write t(h) ∈ [0, 1] for the interpolation gate and q̃(h) ∈ K for our proposed random-access layer. For key space manifolds K like R n , 4 there is a well-defined straight-line interpolation between two points, so we can set
q′ := a · (t q̃ + (1 − t) q),
where we have omitted the implied dependence on h. For other manifolds like the spheres S n that have well-behaved projection functions π : R n → S n , we can just project the straight-line interpolation to the sphere:
q′ := a · π(t q̃ + (1 − t) q).
In the case of a sphere S n , π is just L 2 -normalization. 5
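Putting the interpolation and projection together, a hedged sketch of the head update (the gate t, proposal q̃, and Lie action are assumed to be produced by the controller):

```python
import numpy as np

def updated_head(q, q_tilde, t, action, project=None):
    """q' = a · π(t·q̃ + (1−t)·q): gate t blends the current head q with the
    random-access proposal q̃, π projects back onto the manifold (identity
    for R^n, L2-normalization for S^n), and `action` applies the Lie action."""
    mix = t * q_tilde + (1.0 - t) * q
    if project is not None:
        mix = project(mix)
    return action(mix)

# Example projection for the sphere S^n:
sphere_proj = lambda v: v / np.linalg.norm(v)
```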
EXPERIMENTS
We experiment with Lie-access memory on a variety of algorithmic learning tasks. We are particularly interested in: (a) how Lie-access memory can be trained, (b) whether it can be effectively utilized for algorithmic learning, and (c) what internal structures the model learns compared to systems based directly on soft discrete memory. In particular Lie access is not equipped with an explicit stack or tape, so it would need to learn continuous patterns that capture these properties.
Setup.
Our experiments utilize an LSTM controller in a version of the encoder-decoder setup (Sutskever et al., 2014), i.e. an encoding input pass followed by a decoding output pass. The encoder reads and writes memories at each step; the decoder only reads memories. The encoder is given ⟨s⟩, followed by the input sequence, and then ⟨/s⟩ to terminate input. The decoder is not re-fed its output or the correct symbol, i.e. we do not use teacher forcing, so x (t) is a fixed placeholder input symbol. The decoder must correctly emit an end-of-output symbol ⟨/e⟩ to terminate.
Models and Baselines. We implement three main baseline models including: (a) a standard LSTM encoder-decoder, without explicit external memory, (b) a random-access memory network, RAM, using the key-value formulation as described in the background, roughly analogous to an attention-based encoder-decoder, and (c) an interpolation of a RAM/tape-based memory network as described in the background, i.e. a highly simplified version of a true NTM (Graves et al., 2014) with a sharpening parameter. Our models include four versions of Lie-access memory. The main model, LANTM, has an LSTM controller, with a shift group A = R 2 acting additively on key space K = R 2 . We also consider a model SLANTM with spherical memory, utilizing a rotation group A = SO(3) acting on keys in the sphere K = S 2 . For both of the models, the distance function d is the Euclidean (L 2 ) distance, and we experiment with smoothing using inverse-square (default) and with an annealed softmax. 6
Model Setup. For all tasks, the LSTM baseline has 1 to 4 layers, each with 256 cells. Each of the other models has a single-layer, 50-cell LSTM controller, with memory width (i.e. the size of each memory vector) 20. Other parameters such as learning rate, decay, and initialization are found through grid search. Further hyperparameter details are given in the appendix.
Tasks. Our experiments are on a series of algorithmic tasks shown in Table 1a. The COPY, REVERSE, and BIGRAM FLIP tasks are based on Grefenstette et al. (2015); the DOUBLE and INTERLEAVED ADD tasks are designed in a similar vein. Additionally we also include three harder tasks: ODD FIRST, REPEAT COPY, and PRIORITY SORT. In ODD FIRST, the model must output the odd-indexed elements first, followed by the even-indexed elements. In REPEAT COPY, each model must repeat a sequence of length 20, N times. In PRIORITY SORT, each item of the input sequence is given a priority, and the model must output them in priority order. We train each model in two regimes, one with a small number of samples (16K) and one with a large number of samples (320K). In the former case, the samples are iterated through 20 times, while in the latter, the samples are iterated through only once. Thus in both regimes, the total training times are the same. Training is done by minimizing negative log likelihood with RMSProp.
Prediction is performed via argmax/greedy prediction at each step. To evaluate the performance of the models, we compute the fraction of tokens correctly predicted and the fraction of all answers completely correctly predicted, respectively called fine and coarse scores. We assess the models on 3.2K randomly generated out-of-sample 2x length examples, i.e. with sequence lengths 2k (or repeat number 2N in the case of REPEAT COPY) to test the generalization of the system. More precisely, for all tasks other than repeat copy, during training, the length k is varied in the interval [l k , u k ] (as shown in Table 1a). During test time, the length k is varied in the range [u k + 1, 2u k ]. For repeat copy, the repetition number N is varied similarly, instead of k.
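A sketch of how the two scores could be computed (the paper does not state how a length mismatch between prediction and target is penalized; counting positions beyond the shorter sequence as errors is our assumption):

```python
def fine_and_coarse(predictions, targets):
    """Fraction of tokens predicted correctly (fine) and fraction of
    answers predicted completely correctly (coarse)."""
    tok_ok = tok_total = seq_ok = 0
    for pred, tgt in zip(predictions, targets):
        tok_ok += sum(p == t for p, t in zip(pred, tgt))
        tok_total += max(len(pred), len(tgt))  # assumption: extras count as wrong
        seq_ok += int(list(pred) == list(tgt))
    return tok_ok / tok_total, seq_ok / len(targets)
```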
Results.
Main results comparing the different memory systems and read computations on a series of tasks are shown in Table 1b. Consistent with previous work the fixed-memory LSTM system fails consistently when required to generalize to the 2x samples, unable to solve any 2x problem correctly, and only able to predict at most ∼ 50% of the symbols for all tasks except interleaved addition, regardless of training regime. The RAM (attention-based) and the RAM/tape hybrid are much stronger baselines, answering more than 50% of the characters correctly for all but the 6-ODD FIRST task. Perhaps surprisingly, RAM and RAM/tape learned the 7-REPEAT COPY task with almost perfect generalization scores when trained in the large sample regime. In general, it does not seem that the simple tape memory confers much advantage to the RAM model, as the generalization performances of both models are similar for the most part, which motivates more advanced NTM enhancements beyond sharpening.
The last four columns illustrate the performance of the LANTM models. We found the inverse-square LANTM and SLANTM models to be the most effective, achieving > 90% generalization accuracy on most tasks, and together they solve all of the tasks here with > 90% coarse score. In particular, LANTM is able to solve the 6-ODD FIRST problem when no other model can correctly solve 20% of the 2x instances; SLANTM on the other hand is the only Lie-access model able to solve the 7-REPEAT COPY problem.

Task | Input | Output | Size k | |V|
1 - COPY | a1 a2 a3 · · · ak | a1 a2 a3 · · · ak | [2, 64] | 128
2 - REVERSE | a1 a2 a3 · · · ak | ak ak−1 ak−2 · · · a1 | [2, 64] | 128
3 - BIGRAM FLIP | a1 a2 a3 a4 · · · a2k−1 a2k | a2 a1 a4 a3 · · · a2k a2k−1 | [1, 16] | 128
4 - DOUBLE | a1 a2 · · · ak | 2 × |ak · · · a1| | [2, 40] | 10
5 - INTERLEAVED ADD | a1 a2 a3 a4 · · · a2k−1 a2k | |a2k a2k−2 · · · a2| + |a2k−1 · · · a1| | [2, 16] | 10
6 - ODD FIRST | a1 a2 a3 a4 · · · a2k−1 a2k | a1 a3 · · · a2k−1 a2 a4 · · · a2k | [1, 16] | 128
7 - REPEAT COPY | ⟨N⟩ a1 · · · a20 | a1 · · · a20 · · · a1 · · · a20 (N times) | N ∈ [1, 5] | 128
8 - PRIORITY SORT | ⟨5⟩ a5 ⟨2⟩ a2 ⟨9⟩ a9 · · · | a1 a2 a3 · · · ak | [2, 10] | 128

(a) Task descriptions and parameters. |ak · · · a1| means the decimal number represented by decimal digits ak · · · a1. Arithmetic tasks have all numbers formatted with the least significant digits on the left and with zero padding. The DOUBLE task takes an integer x ∈ [0, 10^k) padded to k digits and outputs 2x, zero padded to k + 1 digits. The INTERLEAVED ADD task takes two integers x, y ∈ [0, 10^k) padded to k digits and interleaved, forming a length-2k input sequence, and outputs x + y zero padded to k + 1 digits. The last two tasks use numbers in unary format: ⟨N⟩ is shorthand for a length-N sequence of a special symbol @, encoding N in unary, e.g., ⟨3⟩ = @@@.
The best Lie access model trained with the small sample regime beats or is competitive with any of the baseline trained under the large sample regime. In all tasks other than 7-REPEAT COPY, the gap in the coarse score between the best Lie access model in small sample regime and the best baseline in any sample regime is ≥ 70%. However, in most cases, training under the large sample regime does not improve much. For a few tasks, small sample regime actually produces a model with better generalization than large sample regime. We observed in these instances, the generalization error curve under a large sample regime reaches an optimum at around 2/3 to 3/4 of training time, and then increases almost monotonically from there. Thus, the model likely has found an algorithm that works only for the training sizes; in particular, this phenomenon does not seem to be due to lack of training time.
DISCUSSION
Qualitative Analysis. We did further visual analysis of the different Lie-access techniques to see how the models were learning the underlying tasks, and to verify that they were using the relative addressing scheme. Figure 2 shows two diagrams of the LANTM model on the tasks of priority sort and repeat copy. Figure 3 shows two diagrams of the SLANTM model for the same two tasks.

Figure 2: Analysis of the LANTM model. (a) PCA projection from key space R 2 to 1D of the memories Σ and read heads q of LANTM for the unary 8-PRIORITY SORT task. In this task, the encoder reads a priority, encoded in unary, and then a value; the decoder must output these values in priority order. In this example the sequence is [@, @, 79, @, @, @, @, 98, @, 5, @, @, @, 107, @, 119], where the special symbol @ is a unary encoding of the priority. From top to bottom, each row indicates the movement of the encoder write head q (w) as it is fed each input character. Fill indicates the strength s_i of the memory write (black indicates high strength). The position of a dot within its row indicates the PCA projection of the key k_i. The last line indicates the movement of the decoder read head q. Interestingly, we note that, instead of writing to memory, the controller remembers the item 119 itself. (b) Raw coordinates in key space R 2 of writes (red) and reads (blue) from LANTM on 7-REPEAT COPY. The red line indicates the writes, which occur along a straight line during the encoding phase. The blue line indicates the reads, which zip back and forth in the process of copying the input sequence 6 times.
Figure 3: Analysis of the SLANTM model. (a) PCA projection from the spherical key space S 2 to 2D of the memories Σ and read heads q of SLANTM for the task of 7-REPEAT COPY (panel titles: Enc. Writes and Dec. Reads). Here the model is to repeatedly output the sequence 10 times. Input is 10 repetitions of the special symbol @ followed by [28, 74, 43, 102, 88, 39, ...]. Left: the positions of the write head q (w) during the encoding phase. Fill indicates strength s_i (black means high strength); the number indicates the character stored. SLANTM traverses a circle clockwise starting at point 28, and stores data at regular intervals. Right: the positions of the read head q during the decoding phase. Starting from the blue dot, the reads move clockwise around the sphere, and end at the red dot. For the sake of clarity, read positions are indicated by bends in the blue line, instead of by dots. Intriguingly, the model implements a cyclic list data structure, taking advantage of the spherical structure of the memory. (b) Raw coordinates in key space S 2 of writes (red) and reads (blue) from SLANTM on a non-unary encoded variant of the priority sort task. The red line indicates the movements of the write head q (w), which places points along a sub-manifold of K (an arc of S 2) during the encoding phase. Notably, this movement is not sequential but random-access, so as to store elements in correct priority order. The blue line indicates the simple traversal of this arc during decoding.
Figure 4 shows the memory access pattern of LANTM on the 6-ODD FIRST task. Additionally, animations tracing the evolution of the memory access pattern of models over training time can be found at http://nlp.seas.harvard.edu/lantm. They demonstrate that the models indeed learn relative addressing and internally are constructing geometric data structures to solve these algorithmic tasks.
Unbounded storage One possible criticism of the LANTM framework could be that the amount of information stored increases linearly with time, which limits the usefulness of this framework for long timescale tasks. This is indeed the case with our implementations, but need not be the case in general. There can be many ways of limiting physical memory usage. For example, a simple way is to discard the least recently used memory, as in the work of Graves et al. (2016) and Gulcehre et al. (2016). Another way is to approximate with fixed number of bits the read function that takes a head position and returns the read value. For example, noting that this function is a rational function on the head position, keys, and memory vectors, we can approximate the numerators and denominators with a fixed degree polynomial.
Content addressing Our Lie-access framework is not mutually exclusive with content-addressing methods. For example, in each of our implementations, we could have the controllers output both a position in the key space and a content addresser of the same size as the memory vectors, and interpolate the read values from Lie access with the read values from content addressing.
CONCLUSION
This paper introduces Lie-access memory as an alternative neural memory access paradigm, and explores several different implementations of this approach. LANTMs follow similar axioms as discrete Turing machines while providing differentiability. Experiments show that simple models can learn algorithmic tasks. Internally, these models naturally learn equivalents of standard data structures like stacks and cyclic lists. In future work we hope to experiment with more groups and to scale these methods to more difficult reasoning tasks. For instance, we hope to build a general-purpose encoder-decoder model for tasks like question answering and machine translation that makes use of differentiable relative-addressing schemes to replace RAM-style attention.
Appendices
A EXPERIMENTAL DETAILS
We obtain our results by performing a grid search over the hyperparameters specified in Table A.1 and also over seeds 1 to 3, and take the best scores. We bound the norm of the LANTM head shifts by 1, whereas we try both bounding and not bounding the angle of rotation in our grid for SLANTM. We initialize the Lie access models to favor Lie access over random access through the interpolation mechanism discussed in section 4.3.
The RAM model read mechanism is as discussed in section 2, and writing is done by appending new (k, v, s) tuples to the memory Σ. The only addition to this model in RAM/tape is that left and right keys are now computed using a shifted convolution with the read weights:

k_L := ∑_i w_{i+1} k_i,    k_R := ∑_i w_{i−1} k_i,
and these keys k_L and k_R are available (along with the random-access key output by the controller) to the controller on the next turn to select from via interpolation. We also considered weight sharpening in the RAM/tape model according to Graves et al. (2014): the controller can output a sharpening coefficient γ ≥ 1 each turn, so that the final weights are w̃_i = w_i^γ / ∑_j w_j^γ. We included this as a feature to grid search over. We found that weight sharpening only confers a small advantage over vanilla on the COPY, BIGRAM FLIP, and DOUBLE tasks, but deteriorates performance on all other tasks.
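For reference, a NumPy sketch of the two RAM/tape additions just described (the wrap-around boundary handling via np.roll is our simplification):

```python
import numpy as np

def shifted_keys(w, keys):
    """k_L = Σ_i w_{i+1} k_i and k_R = Σ_i w_{i-1} k_i via shifted read
    weights; w has shape (n,), keys has shape (n, d)."""
    k_left = np.roll(w, -1) @ keys   # pairs w_{i+1} with k_i
    k_right = np.roll(w, 1) @ keys   # pairs w_{i-1} with k_i
    return k_left, k_right

def sharpen(w, gamma):
    """Weight sharpening of Graves et al. (2014): w̃_i = w_i^γ / Σ_j w_j^γ."""
    w = w ** gamma
    return w / w.sum()
```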
B ACTION INTERPOLATION
We also experimented with adding an interpolation between the last action a (t−1) and a candidate action a(h) via a gate r(h) ∈ [0, 1] to produce the final action a (t) . Then the final equation for the new head is q′ := a (t) · π(t q̃ + (1 − t) q).
This allows the controller to easily move in "a straight line" by just saturating both t and r.
For example, for the translation group we have straight-line interpolation, a (t) := r a + (1 − r) a (t−1) . For the rotation group SO(3), each rotation is represented by its axis ξ ∈ S 2 and angle θ ∈ (−π, π], and we just interpolate each separately: ξ (t) := π(r ξ + (1 − r) ξ (t−1) ) and θ (t) := r θ + (1 − r) θ (t−1) , where π is L 2 -normalization. 7 We perform the same experiments, with the same grid as specified in the last section, and with the initial action interpolation gates biased toward the previous action. The results are given in Table B.2. Figure B.1 shows action interpolation's impact on performance. Most notably, interpolation seems to improve performance of most models in the 5-INTERLEAVED ADD task and of the spherical memory models in the 6-ODD FIRST task, but causes failure to learn in many situations, most significantly, the failure of LANTM to learn 6-ODD FIRST.

Table B.2: Comparison between scores of models with and without action interpolation. Numbers represent the accuracy percentages on the fine/coarse evaluations on the out-of-sample 2× tasks. The S and L columns resp. indicate small and large sample training regimes. The symbol ★ indicates exact 100% accuracy (fine scores above 99.5 are not rounded up). Each entry is of the format A:B/C:D, where A and C are respectively the fine and coarse scores of the model without action interpolation (same as in Table 1b), and B and D are those for the model with action interpolation.
Figure 1: Retrieval of value from memory via a key. Weightings with unit sum are assigned to different memories depending on the distances from the addresses to the read key. Linear smoothing over values is used to emit the final read value. Both inverse-square and softmax schemes follow this method, but differ in their computations of the weightings.
Table 1b: Main results. Numbers represent the accuracy percentages on the fine/coarse evaluations on the out-of-sample 2× tasks. The S and L columns resp. indicate small and large sample training regimes. The symbol ★ indicates exact 100% accuracy (fine scores above 99.5 are not rounded up). Baselines are described in the body. LANTM and SLANTM use the inverse-square weighting scheme while LANTM-s and SLANTM-s use softmax. The best scores, if not 100% (denoted by stars), are bolded for each of the small and large sample regimes.
Figure 4: Memory access pattern of LANTM on 6-ODD FIRST. Left: in the middle of training. LANTM learns to store data in a zigzag such that odd-indexed items fall on one side and even-indexed items fall on the other; however, reading is only half correct. Right: after training. During reading, the model simply reads the odd-indexed items in a straight line, followed by the even-indexed items in a parallel line.
Figure B.1: The additive difference between the fine (left) and coarse (right) scores of models without action interpolation vs. models with action interpolation. A positive value means the model without interpolation performs better. For each model, the left column displays the difference in the small sample regime, while the right column displays the difference in the large sample regime.
Table A.1: Parameter grid for grid search.

Model | rnn size | embed | decay delay | init | learning rate | key dim | custom
LANTM(-s) | 50 × 1 | 14 | {300, 600} | {1, *} | {1, 2, 4}e-2 | 2 | -
SLANTM(-s) | 50 × 1 | 14 | {300, 600} | {1, *} | {1, 2, 4}e-2 | 3 | ∠ bound
RAM(/tape) | 50 × 1 | 14 | {300, 600} | {1, *} | {1, 2, 4}e-2 | {2, 20} | sharpen
LSTM | 256 × {1 to 4} | 128 | {500, 700} | * | 2e-{1 to 4} | - | -

LANTM(-s) means LANTM with invnorm or softmax; similarly for SLANTM(-s). RAM(/tape) means the RAM and hybrid RAM/tape models. Initialization: both initialization options set the forget gate of the LSTMs to 1. The number 1 in the init column means initialization of all other parameters uniformly from [−1, 1]. The symbol * in the init column means initialization of all linear layers was done using the torch default, which initializes weights uniformly from (−κ, κ), where κ is (input size)^{−1/2}. For models with memory, this means that the LSTM input-to-hidden layer is initialized approximately from [−0.07, 0.07] (other than the forget gate). Angle bound is a setting only available in SLANTM; if angle bound is true, we bound the angle of rotation by a learnable magnitude value. Sharpening is a setting only available in RAM/tape, and it works as explained in the main text.
Our implementations are available at https://github.com/harvardnlp/lie-access-memory
This metric should satisfy a compatibility relation with the Lie group action. When points x, y ∈ X are simultaneously moved by the same Lie group action v, their distance should stay the same (One possible mathematical formalization is that X should be a Riemannian manifold and the Lie group should be a subgroup of X's isometry group.): d(vx, vy) = d(x, y). This condition ensures that if the machine writes a sequence of data along a "straight line" at points x, vx, v 2 x, . . . , v k x, then it can read the same sequence by emitting a read location y close to x and then follow the "v-trail" y, vy, v 2 y, . . . , v k y.
Or in general, manifolds with convex embeddings in R n . 5 Technically, in the sphere case, dom π = R d − {0}. But in practice one almost never gets 0 from a straight-line interpolation, so computationally this makes little difference.
Note that the read weight calculation of a SLANTM with softmax is essentially the same as the RAM model: for head q, exp(−d(q, k_i)²/T) = exp(−∥q − k_i∥²/T) = exp(−(2 − 2⟨q, k_i⟩)/T), where the last equality comes from ∥q∥ = ∥k_i∥ = 1 (the key space is on the sphere). Therefore the weights w_i = s_i exp(−d(q, k_i)²/T) / ∑_j s_j exp(−d(q, k_j)²/T) = s_i exp(2⟨q, k_i⟩/T) / ∑_j s_j exp(2⟨q, k_j⟩/T), which is the RAM weighting scheme.
There is, in fact, a canonical way to interpolate the most common Lie groups, including all of the groups mentioned above, based on the exponential map and the Baker-Campbell-Hausdorff formula (Lee, 2012), but the details are outside the scope of this paper and the computational cost, while acceptable in control theory settings, is too hefty for us. Interested readers are referred to Shingel (2009) and Marthinsen (1999).
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing Machines. arXiv:1410.5401 [cs], October 2014. URL http://arxiv.org/abs/1410.5401.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476, 2016.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to Transduce with Unbounded Memory. arXiv:1506.02516 [cs], June 2015. URL http://arxiv.org/abs/1506.02516.

Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes. arXiv:1607.00036 [cs], June 2016. URL http://arxiv.org/abs/1607.00036.

Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, November 1997. doi: 10.1162/neco.1997.9.8.1735.

Armand Joulin and Tomas Mikolov. Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets. arXiv:1503.01007 [cs], March 2015. URL http://arxiv.org/abs/1503.01007.

Łukasz Kaiser and Ilya Sutskever. Neural GPUs Learn Algorithms. arXiv:1511.08228 [cs], November 2015. URL http://arxiv.org/abs/1511.08228.

Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid Long Short-Term Memory. arXiv:1507.01526 [cs], July 2015. URL http://arxiv.org/abs/1507.01526.

Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, and Richard Socher. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing. arXiv:1506.07285 [cs], June 2015. URL http://arxiv.org/abs/1506.07285.

Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural Random-Access Machines. arXiv:1511.06392 [cs], November 2015. URL http://arxiv.org/abs/1511.06392.

John Lee. Introduction to Smooth Manifolds. Number 218 in Graduate Texts in Mathematics. Springer, 2nd edition, 2012. ISBN 978-1-4419-9981-8.

A. Marthinsen. Interpolation in Lie Groups. SIAM Journal on Numerical Analysis, 37(1):269-285, January 1999. doi: 10.1137/S0036142998338861.

Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126, 2016. URL http://arxiv.org/abs/1606.03126.

Tatiana Shingel. Interpolation in special orthogonal groups. IMA Journal of Numerical Analysis, 29(3):731-745, July 2009. doi: 10.1093/imanum/drn033.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-To-End Memory Networks. arXiv:1503.08895 [cs], March 2015. URL http://arxiv.org/abs/1503.08895.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to Sequence Learning with Neural Networks. arXiv:1409.3215 [cs], September 2014. URL http://arxiv.org/abs/1409.3215.

Paul J. Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550-1560, 1990.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory Networks. arXiv:1410.3916 [cs, stat], October 2014. URL http://arxiv.org/abs/1410.3916.

Wojciech Zaremba and Ilya Sutskever. Reinforcement Learning Neural Turing Machines - Revised. arXiv:1505.00521 [cs], May 2015. URL http://arxiv.org/abs/1505.00521.
257,219,732 | On Differentially Private Federated Linear Contextual Bandits | We consider cross-silo federated linear contextual bandit (LCB) problem under differential privacy, where multiple silos (agents) interact with the local users and communicate via a central server to realize collaboration while without sacrificing each user's privacy. We identify three issues in the state-of-the-art: (i) failure of claimed privacy protection and (ii) incorrect regret bound due to noise miscalculation and (iii) ungrounded communication cost. To resolve these issues, we take a two-step principled approach. First, we design an algorithmic framework consisting of a generic federated LCB algorithm and flexible privacy protocols. Then, leveraging the proposed framework, we study federated LCBs under two different privacy constraints. We first establish privacy and regret guarantees under silo-level local differential privacy, which fix the issues present in state-of-the-art algorithm. To further improve the regret performance, we next consider shuffle model of differential privacy, under which we show that our algorithm can achieve nearly "optimal" regret without a trusted server. We accomplish this via two different schemes -one relies on a new result on privacy amplification via shuffling for DP mechanisms and another one leverages the integration of a shuffle protocol for vector sum into the tree-based mechanism, both of which might be of independent interest. Finally, we support our theoretical results with numerical evaluations over contextual bandit instances generated from both synthetic and real-life data. | [] | On Differentially Private Federated Linear Contextual Bandits
31 May 2023
Xingyu Zhou [email protected]
Wayne State University
Detroit, USA
Sayak Ray Chowdhury
Microsoft Research
Bengaluru, Karnataka, India
On Differentially Private Federated Linear Contextual Bandits
We consider the cross-silo federated linear contextual bandit (LCB) problem under differential privacy, where multiple silos (agents) interact with their local users and communicate via a central server to realize collaboration without sacrificing each user's privacy. We identify three issues in the state-of-the-art: (i) failure of the claimed privacy protection, (ii) an incorrect regret bound due to noise miscalculation, and (iii) an ungrounded communication cost. To resolve these issues, we take a two-step principled approach. First, we design an algorithmic framework consisting of a generic federated LCB algorithm and flexible privacy protocols. Then, leveraging the proposed framework, we study federated LCBs under two different privacy constraints. We first establish privacy and regret guarantees under silo-level local differential privacy, which fix the issues present in the state-of-the-art algorithm. To further improve the regret performance, we next consider the shuffle model of differential privacy, under which we show that our algorithm can achieve nearly "optimal" regret without a trusted server. We accomplish this via two different schemes: one relies on a new result on privacy amplification via shuffling for DP mechanisms, and the other leverages the integration of a shuffle protocol for vector sums into the tree-based mechanism, both of which might be of independent interest. Finally, we support our theoretical results with numerical evaluations over contextual bandit instances generated from both synthetic and real-life data.
We consider the classic cross-silo Federated Learning (FL) paradigm [KMABBBBCCC+21] applied to linear contextual bandits (LCB). In this setting, a set of M local silos or agents (e.g., hospitals) communicate with a central server to learn about the unknown bandit parameter (e.g., hidden vector representing values of the user for different medicines). In particular, at each round t ∈ [T ], each local agent i ∈ [M ] receives a new user (e.g., patient) with context information c t,i ∈ C i (e.g., age, gender, medical history), recommends an action a t,i ∈ K i (e.g., a choice of medicine), and then it observes a real-valued reward y t,i (e.g., effectiveness of the prescribed medicine). In linear contextual bandits, the reward y t,i is a linear function of the unknown bandit parameter θ * ∈ R d corrupted by i.i.d mean-zero observation noise η t,i , i.e., y t,i = ⟨x t,i , θ * ⟩ + η t,i , where x t,i = ϕ i (c t,i , a t,i ) and ϕ i : C i × K i → R d is a known function that maps a context-action pair to a d-dimensional real-valued feature vector. The goal of federated LCB is to minimize the cumulative group pseudo-regret defined as
R_M(T) = ∑_{i=1}^{M} ∑_{t=1}^{T} ( max_{a ∈ K_i} ⟨ϕ_i(c_{t,i}, a), θ*⟩ − ⟨x_{t,i}, θ*⟩ ).
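For offline evaluation, the group pseudo-regret can be computed directly from the features of all candidate actions; the following NumPy sketch (names and data layout are ours) is only for illustration, since θ* is unknown to the learner:

```python
import numpy as np

def group_pseudo_regret(Phi, theta_star, chosen):
    """R_M(T) = Σ_i Σ_t (max_a ⟨φ_i(c_{t,i}, a), θ*⟩ − ⟨x_{t,i}, θ*⟩).
    Phi[i][t]: (num_actions, d) features φ_i(c_{t,i}, a) for all actions;
    chosen[i][t]: index of the played action a_{t,i}."""
    regret = 0.0
    for i, silo in enumerate(Phi):
        for t, feats in enumerate(silo):
            expected = feats @ theta_star  # expected reward of each action
            regret += expected.max() - expected[chosen[i][t]]
    return regret
```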
To achieve the goal, as in standard cross-silo FL, the agents are allowed to communicate with the central server following a star-shaped communication, i.e., each agent can communicate with the server by uploading and downloading data, but agents cannot communicate with each other directly. However, the communication process (i.e., both data and schedule) could also possibly incur privacy leakage for each user t at each silo i, e.g., the sensitive context information c t,i and reward y t,i . To address this privacy risk, we resort to differential privacy [DR14], a principled way to prove privacy guarantee against adversaries with arbitrary auxiliary information. In standard cross-device FL, the notion of privacy is often the client-level DP, which protects the identity of each participating client or device. However, it has limitations in the setting of cross-silo FL, where the protection targets are users (e.g., patients) rather than participating silos or agents (e.g., hospitals). Also, in order to adopt client-level DP to cross-silo FL, one needs the server and other silos to be trustworthy, which is often not the case. Hence, recent studies [LR21; LGR22; LHWS22; DPZRT18] on cross-silo federated supervised learning have converged to a new privacy notion, which requires that for each silo, all of its communication during the entire process is private ("indistinguishable") with respect to change of one local user of its own. This allows one to protect each user within each silo without trustworthy server and other silos. In this paper, we adapt it to the setting of cross-silo federated contextual bandits and call it silo-level LDP.
Dubey and Pentland [DP20] adopt a similar but somewhat weaker notion of privacy called Federated DP and take the first step to tackle this important problem of private and federated linear contextual bandits (LCBs). In fact, the performance guarantees presented by the authors are currently the state-of-the-art for this problem. The proposed algorithm claims to protect the privacy of each user at each silo. Furthermore, given a privacy budget ε > 0, the claimed regret bound is Õ(√(MT)/ε) with only O(M log T) communication cost, which matches the regret of a super single agent that plays for a total of MT rounds. Unfortunately, in spite of being the state-of-the-art, the aforementioned privacy, regret and communication cost guarantees have fundamental gaps, as discussed below.
Our Contributions
Identify privacy, regret and communication gaps in state-of-the-art [DP20]. In Section 4, we first show that the proposed algorithm in [DP20] could leak privacy from the side channel of its adaptive communication schedule, which depends on users' non-private local data. Next, we identify a mistake in the total injected privacy noise in the current regret analysis. Accounting for this miscalculation, the correct regret bound would amount to O(M^{3/4}√T/ε), which is an M^{1/4} factor higher than the claimed one, and does not match the regret performance of the super agent. Finally, we observe that due to the presence of privacy noise, its current analysis for O(M log T) communication cost no longer holds. To resolve these issues, we take the following two-step principled approach:
(i) design a generic algorithmic and analytical framework. In Section 5, we propose a generic federated LCB algorithm along with a flexible privacy protocol. Our algorithm adopts a fixed-batch schedule (rather than an adaptive one in [DP20]) that helps avoid privacy leakage from the side channel, as well as subtleties in communication analysis. Our privacy protocol builds on a distributed version of the celebrated tree-based algorithm [CSS11; DNPR10], enabling us to provide different privacy guarantees in a unified way. We further show that our algorithm enjoys a simple and generic analytical regret bound that only depends on the total amount of injected privacy noise under the required privacy constraints.
(ii) prove performance guarantees under different privacy notions. We build upon the above framework to study federated LCBs under two different privacy constraints. In Section 6.1, we consider silo-level LDP (a stronger notion of privacy than the Federated DP of [DP20]) and establish a privacy guarantee with a correct regret bound of O(M^{3/4}√T/ε) and communication cost O(√(MT)), hence fixing the gaps in [DP20]. Next, to match the regret of a super single agent, we consider shuffle DP (SDP) [CSUZZ19] in Section 6.2 and establish a regret bound of Õ(√(MT)/ε). We provide two different techniques to achieve this: one that relies on a new result on privacy amplification via shuffling for DP mechanisms and the other that integrates a shuffle protocol for vector sums [CJMP21] into the tree-based mechanism, both of which might be of independent interest. In Section 7, we support our theoretical results with simulations on contextual bandit instances generated from synthetic and real-life data.
Related Work
Private bandit learning has recently received increasing attention under various notions of DP. For multi-armed bandits (MAB) where rewards are the sensitive data, different DP models including the central model [MT15; AB22; SS19], local model [RZLS20] and distributed model [CZ22a; TKMS21] have been studied. Among them, we note that [CZ22a] also presents optimal private regret bounds under the above three DP models while only relying on discrete privacy noise, hence avoiding the privacy leakage of continuous privacy noise on finite computers due to floating point arithmetic. For linear bandits (without context protection), [LZJ22] establishes the first near-optimal private regret bounds for central, local, and shuffle models of approximate DP. The same problem has also been studied under pure-DP in [HGFD22]. In the specific case of linear contextual bandits, where both the contexts and rewards need to be protected, there is a recent line of work under the central [SS18], local [ZCHLW20] and shuffle [CZ22b; GCPP22; TKMS23] models of DP. Private bandit learning has also been studied beyond linear settings, such as kernel bandits [ZT21; Dub21; LZJ23].
All the above papers consider learning by a single agent. To the best of our knowledge, Dubey and Pentland [DP20] is the first to consider cross-silo federated linear contextual bandits (LCBs). Nonprivate federated or distributed LCBs have also been well studied [WHCW20; HWMG22; HWYS21]. One common goal is to match the regret achieved by a super single agent that plays M T rounds while keeping communication among agents as low as possible. Our work shares the same spirit in that we aim to match the regret achieved by a super single agent under differential privacy.
Broadly speaking, our work also draws inspiration from recent advances in private cross-silo federated supervised learning [LR21;LHWS22]. In particular, our silo-level local and shuffle DP definitions for federated LCBs in the main paper can be viewed as counterparts of the ones proposed for cross-silo federated supervised learning (see, e.g., Lowy and Razaviyayn [LR21]).
Differential Privacy in Federated LCBs
We now formally introduce differential privacy in cross-silo federated contextual bandits. Let a dataset D i at each silo i be given by a sequence of T unique users U 1,i , . . . , U T,i . Each user U t,i is identified by her context information c t,i as well as reward responses she would give to all possible actions recommended to her. We say two datasets D i and D ′ i at silo i are adjacent if they differ exactly in one participating user, i.e., U τ,i ̸ = U ′ τ,i for some τ ∈ [T ] and U s,i = U ′ s,i for all s ̸ = τ . Silo-level local differential privacy (LDP). Consider a multi-round, cross-silo federated learning algorithm Q. At each round t, each silo i communicates a randomized message Z t i of its data D i to the server, which may depend (due to collaboration) on previous randomized messages Z 1 j , . . . , Z t−1 j from all other silos j ̸ = i. We allow Z t i to be empty if there is no communication at round t. Let Z i = (Z 1 i , . . . , Z T i ) denote the full transcript of silo i's communications with the server over T rounds and Q i the induced local mechanism in this process. Note that Z i is a realization of random messages generated according to the local mechanism Q i . We denote by Z −i = (Z 1 , . . . , Z i−1 , Z i+1 , . . . , Z M ) the full transcripts of all but silo i. We assume that Z i is conditionally independent of D j for all j ̸ = i given D i and Z −i . With this notation, we have the following definition of silo-level LDP.
Definition 3.1 (Silo-level LDP). A cross-silo federated learning algorithm Q with M silos is said to be
{(ε_i, δ_i)}_{i∈[M]} silo-level LDP if for each silo i ∈ [M], it holds that

P_{Q_i}(Z_i ∈ E_i | D_i, Z_{−i}) ≤ e^{ε_i} · P_{Q_i}(Z_i ∈ E_i | D′_i, Z_{−i}) + δ_i,
for all adjacent datasets D i and D ′ i , and for all events E i in the range of Q i . If ε i = ε and δ i = δ for all i ∈ [M ], we simply say Q is (ε, δ)-silo-level LDP.
Roughly speaking, a silo-level LDP algorithm protects the privacy of each individual user (e.g., patient) within each silo in the sense that an adversary (which could either be the central server or other silos) cannot infer too much about any individual's sensitive information (e.g., context and reward) or determine whether an individual participated in the learning process. 1

Remark 3.2 (Federated DP vs. Silo-level LDP). Dubey and Pentland [DP20] consider a privacy notion called Federated DP (Fed-DP in short). As summarized in [DP20], Fed-DP requires "the action chosen by any agent must be sufficiently impervious (in probability) to any single pair (x, y) from any other agent". Both silo-level LDP and Fed-DP are item-level DP as the neighboring relationship is defined by differing in one participating user. The key here is to note that silo-level LDP implies Fed-DP by the post-processing property of DP, and thus it is a stronger notion of privacy. In fact, Dubey and Pentland [DP20] claim to achieve Fed-DP by relying on privatizing the communicated data from each silo. However, as we shall see in Section 4, its proposed algorithm fails to privatize the adaptive synchronization schedule, which is the key reason behind privacy leakage in their algorithm.
Shuffle differential privacy (SDP). Next, we consider the notion of SDP [CSUZZ19], which builds upon a trusted third-party (shuffler) to amplify privacy. This provides us with the possibility to achieve a better regret compared to the one under silo-level LDP while still without a trusted server. Under the shuffle model of DP in FL, each silo i ∈ [M ] first applies a local randomizer R to its raw local data and sends the randomized output to a shuffler S. The shuffler S permutes all the messages from all M silos uniformly at random and sends those to the central server. Roughly speaking, SDP requires all the messages sent by the shuffler to be private ("indistinguishable") with respect to a single user change among all M T users. This item-level DP is defined formally as follows.
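A minimal sketch of the shuffler S (ours): it simply forgets which silo each locally randomized message came from:

```python
import random

def shuffle_messages(silo_messages, seed=0):
    """Collect the locally randomized messages from all M silos and output
    them in a uniformly random order, so the server cannot attribute any
    message to its silo of origin."""
    flat = [msg for batch in silo_messages for msg in batch]
    random.Random(seed).shuffle(flat)
    return flat
```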
Definition 3.3 (SDP). Consider a cross-silo federated learning algorithm Q that induces a (randomized) mechanism M whose output is the collection of all messages sent by the shuffler during the entire learning process. Then, the algorithm Q is said to be (ε, δ)-SDP if
P(M(D) ∈ E) ≤ e^ε · P(M(D′) ∈ E) + δ,
for all E in the range of M and for all adjacent datasets D = (D 1 , . . . , D M ) and
D′ = (D′_1, . . . , D′_M) such that ∑_{i=1}^{M} ∑_{t=1}^{T} 1{U_{t,i} ≠ U′_{t,i}} = 1.
Privacy, Regret and Communication Gaps in State-of-the-Art
In this section, we discuss the gaps present in privacy, regret and communication cost guarantees of the state-of-the-art algorithm proposed in Dubey and Pentland [DP20].
Gap in Privacy Analysis
We take a two-step approach to demonstrate the privacy issue in [DP20]. To start with, we argue that the proposed technique (i.e., Algorithm 1 in [DP20]) fails to achieve silo-level LDP due to privacy leakage through the side channel of communication schedule (i.e., when the agents communicate with the server).
The key issue is that the adaptive communication schedule in their proposed algorithm depends on users' non-private data. This fact can be utilized by an adversary or malicious silo j to infer another silo i's users' sensitive information, which violates the requirement of silo-level LDP. Specifically, in the proposed algorithm of [DP20], all silos communicate with the server (which is termed as synchronous setting) if
∃ some silo i ∈ [M] : f(X_i, Z) > 0,    (1)
where f is some function, X i is the non-private local data of silo i since the last synchronization and Z is all previously synchronized data. Crucially, the form of f and the rule (1) are public information, known to all silos even before the algorithm starts. This local and non-private data-dependent communication rule in (1) causes privacy leakage, as illustrated below with a toy example.
Example 4.1 (Privacy leakage). Consider there are two silos i and j following the algorithm in [DP20]. After the first round, X i in (1) includes the data of the first user in silo i (say Alice), X j includes the data of the first user in silo j (say Bob) and Z is empty (zero). Let communication be triggered at the end of first round and assume f (X j , 0) ≤ 0. Since the rule (1) is public, silo j can infer that f (X i , 0) > 0, i.e. the communication is triggered by silo i. Since f is also public knowledge, silo j can utilize this to infer some property of X i .
Hence, by observing the communication signal only (even without looking at the data), silo j can infer some sensitive data of Alice. 2
The above example demonstrates that the proposed algorithm in [DP20] does not satisfy silo-level LDP, implying (i) their current proof for Fed-DP guarantee via post-processing of silo-level LDP does not hold anymore and (ii) Fed-DP is weak privacy protection. However, it does not necessarily imply that this algorithm also does not satisfy the weaker notion of Fed-DP (as considered in [DP20]). Nevertheless, one can show that this algorithm indeed also fails to guarantee Fed-DP by leveraging Example 4.1.
To see this, recall the definition of Fed-DP from Remark 3.2. In the context of Example 4.1, it translates to silo j selecting similar actions for its users when a single user in silo i changes. Specifically, if the first user in silo i changes from Alice to say, Tracy, Fed-DP mandates that all T actions suggested by silo j to its local T users remain "indistinguishable". This, in turn, implies that the communicated data from silo i must remain "indistinguishable" at silo j for each t ∈ [T ]. This is because the actions at silo j are chosen deterministically based on its local data as well as communicated data from silo i, and the local data at silo j remains unchanged. However, in Algorithm 1 of [DP20], the communicated data from silo i is not guaranteed to remain "indistinguishable" as synchronization depends on non-private local data (e.g. X i in (1)). In other words, without additional privacy noise added to X i in (1), the change from Alice to Tracy could affect the existence of synchronization at round t ≥ 1 a lot. Consequently, under these two neighboring situations (e.g. Alice vs. Tracy), the communicated data from silo i could differ significantly at round t + 1. As a result, the action chosen at round t + 1 in silo j can be totally different, which violates the Fed-DP definition. This holds true even if silo i injects noise while communicating its data (as done in Algorithm 1 of [DP20]) due to a large change of non-private communicated data (see Appendix A).
Gaps in Regret and Communication Analysis
We now turn to the regret and communication analysis of [DP20], which has fundamental gaps that lead to incorrect conclusions in the end. First, the reported cost of privacy in the regret bound is Õ(√(MT)/ε) (ignoring dependence on dimension d for simplicity), which leads to the (incorrect) conclusion that federated LCBs across M silos under silo-level LDP can achieve the same order of regret as a super single agent that plays MT rounds. However, in the proposed analysis, the total amount of injected privacy noise is miscalculated. In particular, the variance of the total noise needs to be Mσ² rather than the proposed value of σ². This comes from the fact that each silo injects Gaussian noise with variance σ² when sending out local data and hence the total amount of noise at the server is Mσ². Accounting for this correction, the cost of privacy becomes O(M^{3/4}√T/ε), which is an O(M^{1/4}) factor worse than the claimed cost. Hence, we conclude that Algorithm 1 in [DP20] cannot achieve the same order of regret as a super single agent. Second, the proposed analysis in [DP20] to show O(log T) communication cost for the data-adaptive schedule (1) under privacy constraint essentially follows from the non-private analysis of [WHCW20]. Unfortunately, due to additional privacy noise, this direct approach no longer holds, and hence the reported logarithmic communication cost stands ungrounded (see Appendix A for more details on this).

Algorithm 1 (Private Federated LinUCB), per-round steps at each agent i:
5: Receive context c_{t,i}; compute V_{t,i} = λI + W_syn + W_i and θ̂_{t,i} = V_{t,i}^{−1}(U_syn + U_i)
6: Play action a_{t,i} = argmax_{a ∈ K_i} ⟨ϕ_i(c_{t,i}, a), θ̂_{t,i}⟩ + β_{t,i} ∥ϕ_i(c_{t,i}, a)∥_{V_{t,i}^{−1}}; observe reward y_{t,i}
7: Set x_{t,i} = ϕ_i(c_{t,i}, a_{t,i}), U_i = U_i + x_{t,i} y_{t,i} and W_i = W_i + x_{t,i} x_{t,i}^⊤
Our Approach
To address all three issues in [DP20], we introduce a generic algorithm for private and federated linear contextual bandits (Algorithm 1) along with a flexible privacy protocol (Algorithm 2), which not only allows us to present the correct privacy, regret, and communication results under silo-level LDP (and hence under Fed-DP) (Section 6.1), but also helps us achieve the same order of regret as a super single agent under SDP (Section 6.2). Throughout the paper, we make the following assumptions.
Algorithm: Private Federated LinUCB
We build upon the celebrated LinUCB algorithm [APS11] by adopting a fixed-batch schedule for synchronization among agents and designing a privacy protocol P (Algorithm 2) for both silo-level LDP and SDP. At each round t, each agent i recommends an action $a_{t,i}$ to each local user following the optimism in the face of uncertainty principle. First, the agent computes a local estimate $\hat\theta_{t,i}$ based on all the data available to her, which includes previously synchronized data from all agents as well as her own new local data (line 5 of Algorithm 1). Then, the action $a_{t,i}$ is selected based on the LinUCB decision rule (line 6), where a proper radius $\beta_{t,i}$ is chosen to balance exploration and exploitation. After observing the reward $y_{t,i}$, each agent accumulates her own local data (bias vector $x_{t,i}y_{t,i}$ and covariance matrix $x_{t,i}x_{t,i}^\top$) and stores them in $U_i$ and $W_i$, respectively (line 7). A communication between the agents and the central server is triggered whenever a batch ends; we assume w.l.o.g. that the total number of rounds T is divisible by the batch size B (line 9). During this process, a protocol P = (R, S, A) assists in aggregating local data among all agents while guaranteeing the privacy properties (to be discussed in detail shortly). After communication, each agent receives the latest synchronized data $W_{syn}$, $U_{syn}$ from the server (line 17). Here, for any t = kB, k ∈ [T/B], $W_{syn}$ represents a noisy version of all covariance matrices up to round t from all agents (i.e., $\sum_{i=1}^{M}\sum_{s=1}^{t} x_{s,i}x_{s,i}^\top$) and, similarly, $U_{syn}$ represents a noisy version of all bias vectors $\sum_{i=1}^{M}\sum_{s=1}^{t} x_{s,i}y_{s,i}$. Finally, each agent resets $W_i$ and $U_i$ so that they can be used to accumulate new local data for the next batch. Note that Algorithm 1 uses a fixed-batch (data-independent) communication schedule rather than the adaptive, data-dependent one in [DP20]. This allows us to resolve the privacy and communication issues of [DP20] (to be discussed in Section 6). A sketch of one agent's round is given below.
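To make the per-round computation concrete, here is a minimal, illustrative Python sketch of lines 5-7 of Algorithm 1. The class name, the fixed exploration radius β, and the toy data are our own choices rather than the paper's specification, and the communication/privacy protocol is omitted entirely:

```python
# Sketch of one agent's round in Algorithm 1 (lines 5-7): build the local
# estimate from synchronized + fresh local statistics, act by LinUCB, and
# accumulate the new bias vector and covariance matrix.
import numpy as np

class Agent:
    def __init__(self, d, lam=1.0, beta=1.0):
        self.d, self.lam, self.beta = d, lam, beta
        self.W_syn = np.zeros((d, d))  # noisy synchronized covariance from server
        self.U_syn = np.zeros(d)       # noisy synchronized bias vector from server
        self.W = np.zeros((d, d))      # fresh local covariance since last sync
        self.U = np.zeros(d)           # fresh local bias vector since last sync

    def act(self, features):
        """features: (num_actions, d) array, one feature row per arm."""
        V = self.lam * np.eye(self.d) + self.W_syn + self.W
        V_inv = np.linalg.inv(V)
        theta = V_inv @ (self.U_syn + self.U)               # line 5
        # LinUCB score: estimated reward + bonus ||phi||_{V^{-1}}  (line 6)
        bonus = np.sqrt(np.einsum("ad,dk,ak->a", features, V_inv, features))
        return int(np.argmax(features @ theta + self.beta * bonus))

    def update(self, x, y):                                  # line 7
        self.U += x * y
        self.W += np.outer(x, x)

rng = np.random.default_rng(0)
agent = Agent(d=5)
feats = rng.normal(size=(10, 5))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
a = agent.act(feats)
agent.update(feats[a], y=float(rng.normal()))
```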
Privacy Protocol
We now turn to our privacy protocol P (Algorithm 2), which helps to aggregate data among all agents under privacy constraints. The key component of P is a distributed version of the classic tree-based algorithm, which was originally designed for the continual release of private sum statistics [CSS11; DNPR10]. That is, given a stream of (multivariate) data γ = (γ_1, . . . , γ_K), one aims to release $s_k = \sum_{l=1}^{k} \gamma_l$ privately for all k ∈ [K]. The tree-based mechanism constructs a complete binary tree T in an online manner. The leaf nodes contain the data γ_1 to γ_K, and each internal node contains the sum of all leaf nodes in its sub-tree; see Fig. 1 for an illustration. For any newly arrived data γ_k, it releases only one tree node privately, which corresponds to a noisy partial sum (p-sum) between two time indices. As an example, take k = 6, so the new arrival is γ_6. The tree-based mechanism first computes the p-sum $\Sigma[5, 6] = \gamma_5 + \gamma_6$ (line 6 in Algorithm 2). Then, it adds Gaussian noise with appropriate variance $\sigma_0^2$ to $\Sigma[5, 6]$ and releases the noisy p-sum (line 7). Finally, to compute the prefix sum statistic $\Sigma[1, 6]$ privately, it simply adds the noisy p-sums for $\Sigma[1, 4]$ and $\Sigma[5, 6]$. The reasons behind releasing and aggregating p-sums are that (i) each data point γ_k affects at most 1 + log K p-sums (useful for privacy) and (ii) each sum statistic $\Sigma[1, k]$ involves at most 1 + log k p-sums (useful for utility). A toy implementation is sketched below.
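The following toy Python implementation sketches this release schedule, assuming Gaussian noise $\mathcal{N}(0, \sigma_0^2 I)$ per node as in Algorithm 2 (function names and data are ours). At each step k it computes exactly one p-sum, indexed by the lowest set bit of k, and assembles each private prefix sum from one noisy p-sum per set bit of k:

```python
# Toy binary-tree (p-sum) mechanism: releases one noisy p-sum per step and
# reconstructs private prefix sums from at most 1 + log2(k) noisy nodes.
import numpy as np

def tree_release(stream, sigma0, rng):
    K, d = len(stream), len(stream[0])
    alpha, alpha_hat, prefix = {}, {}, []
    for k in range(1, K + 1):
        i_k = (k & -k).bit_length() - 1      # index of the lowest set bit of k
        # p-sum covering (k - 2**i_k, k]: lower-level p-sums plus the new item
        alpha[i_k] = sum(alpha[j] for j in range(i_k)) + stream[k - 1]
        alpha_hat[i_k] = alpha[i_k] + rng.normal(0, sigma0, size=d)
        # private prefix sum s_k: add one noisy p-sum per set bit of k
        prefix.append(sum(alpha_hat[j] for j in range(K.bit_length() + 1)
                          if (k >> j) & 1))
    return prefix

rng = np.random.default_rng(0)
data = [np.ones(3) for _ in range(8)]
print(tree_release(data, sigma0=0.1, rng=rng)[-1])  # close to [8, 8, 8]
```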
Algorithm 2 P, a privacy protocol used in Algorithm 1
1: Procedure: Local Randomizer R at each agent
2: // Input: stream data (γ_1, . . . , γ_K), ε > 0, δ ∈ (0, 1]
...

Figure 1: The binary tree used by P. Each leaf node is the stream data and each internal node is a p-sum $\Sigma[i, j] = \sum_{l=i}^{j} \gamma_l$. The green node corresponds to the newly computed p-sum at each k, i.e., $\widehat{\alpha}_{i_k}$ in Algorithm 2.
Our privacy protocol P = (R, S, A) breaks the above classic mechanism of releasing and aggregating p-sums into a local randomizer R at each agent and an analyzer A at the server, while allowing for a possible shuffler in between to amplify privacy. For each k, the local randomizer R at each agent computes and releases the noisy p-sum to a third party S (lines 4-7). S can either be a shuffler that permutes the data uniformly at random (for SDP) or simply an identity mapping (for silo-level LDP). It receives a total of M noisy p-sums, one from each agent, and sends them to the central server. The analyzer A at the server first adds these M new noisy p-sums to synchronize them (line 13). It then privately releases the synchronized prefix sum by adding up all relevant synchronized p-sums, as discussed in the paragraph above (line 14). Finally, we employ P in Algorithm 1 by observing that the local data $\gamma_{k,i}$ for batch k and agent i consists of bias vectors $\gamma^{\mathrm{bias}}_{k,i} = \sum_{t=(k-1)B+1}^{kB} x_{t,i}y_{t,i}$ and covariance matrices $\gamma^{\mathrm{cov}}_{k,i} = \sum_{t=(k-1)B+1}^{kB} x_{t,i}x_{t,i}^\top$, which are stored in $U_i$ and $W_i$, respectively. We denote the randomizer and analyzer for bias vectors as $R^{\mathrm{bias}}$ and $A^{\mathrm{bias}}$, and those for covariance matrices as $R^{\mathrm{cov}}$ and $A^{\mathrm{cov}}$ in Algorithm 1.
Remark 5.2 (Sensitivity vs. norm). Although the $\ell_2$ norm of each $\gamma_k$ in Algorithm 1 scales linearly with the batch size B, its sensitivity is only one: changing one user's data changes the Euclidean norm of the vector $\gamma^{\mathrm{bias}}_{k,i}$ and the Frobenius norm of the matrix $\gamma^{\mathrm{cov}}_{k,i}$ by at most one, due to our boundedness assumption. It is this sensitivity that determines the noise level needed for privacy in Algorithm 2.
Theoretical Results
We now show that our generic algorithmic framework (Algorithms 1 and 2) enables us to establish regret bounds for federated LCBs under both silo-level LDP and SDP in a simple and unified way. Proofs of all results are deferred to Appendices D and E due to space constraints.
Federated LCBs under Silo-level LDP
We first present the performance of Algorithm 1 under silo-level LDP, hence fixing the privacy, regret and communication issues of the state-of-the-art algorithm in [DP20]. The key idea is to inject Gaussian noise with proper variance ($\sigma_0^2$ in Algorithm 2) when releasing a p-sum, such that the collection of all released p-sums up to any batch k ∈ [K] is (ε, δ)-DP for every agent i ∈ [M]. Then, by Definition 3.1, the algorithm achieves silo-level LDP. Note that in this case there is no shuffler, which is equivalent to the third party S in P being an identity mapping, denoted by I. The following result states this formally.
Theorem 6.1 (Performance under silo-level LDP). Fix batch size B and privacy budgets ε > 0, δ ∈ (0, 1). Let P = (R, I, A) be the protocol given by Algorithm 2 with parameter $\sigma_0^2 = 8\kappa\cdot\frac{\log(2/\delta)+\varepsilon}{\varepsilon^2}$, where κ = 1 + log(T/B). Then, under Assumption 5.1, Algorithm 1 instantiated with P satisfies (ε, δ)-silo-level LDP. Moreover, for any α ∈ (0, 1], there exist choices of λ and $\{\beta_{t,i}\}_{t,i}$ such that, with probability at least 1 − α, it enjoys the group regret
$$\mathrm{Reg}_M(T) = O\Big(dMB\log T + d\sqrt{MT}\log(MT/\alpha)\Big) + O\Big(\frac{\sqrt{T}(Md)^{3/4}\log^{1/4}(1/\delta)}{\sqrt{\varepsilon}}\log^{1/4}\Big(\frac{T}{B\alpha}\Big)\Big).$$
The first term in the above regret bound doesn't depend on the privacy budgets ε, δ, and serves as a representative regret bound for federated LCBs without privacy constraints. The second term is the dominant one, which depends on ε, δ and denotes the cost of privacy due to the injected noise.
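To make the calibration concrete, the following small computation evaluates $\sigma_0^2$ from Theorem 6.1 and the resulting worst-case server-side variance $M(1+\log(T/B))\sigma_0^2$ used in the regret analysis (see Appendix D.1); the numeric values are arbitrary examples and we assume natural logarithms:

```python
# Noise calibration under silo-level LDP (Theorem 6.1): per-p-sum variance
# sigma0^2 = 8*kappa*(log(2/delta)+eps)/eps^2, with kappa = 1 + log(T/B);
# each prefix sum aggregates at most kappa noisy p-sums from each of M silos.
import math

def sigma0_sq(eps, delta, T, B):
    kappa = 1 + math.log(T / B)
    return 8 * kappa * (math.log(2 / delta) + eps) / eps**2

eps, delta, T, B, M = 1.0, 1e-4, 10_000, 25, 10
kappa = 1 + math.log(T / B)
s0 = sigma0_sq(eps, delta, T, B)
print(f"per-p-sum variance sigma0^2 = {s0:.1f}")
print(f"worst-case server-side variance M*kappa*sigma0^2 = {M * kappa * s0:.1f}")
```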
Corollary 6.2. Setting $B = \sqrt{T/M}$, Algorithm 1 achieves $O\Big(d\sqrt{MT} + \frac{\sqrt{T}(Md)^{3/4}\log^{1/4}(1/\delta)}{\sqrt{\varepsilon}}\Big)$ group regret, with $\sqrt{MT}$ synchronizations in total, under (ε, δ)-silo-level LDP.
Comparisons with related work. First, we avoid the privacy leakage and the gap in the communication analysis of [DP20] by adopting a data-independent synchronization schedule. This, however, leads to an $O(\sqrt{T})$ communication cost rather than the reported $O(\log T)$ cost of [DP20]. It remains open to design a data-adaptive communication schedule with a correct performance analysis (see Appendix C for more details). We also show that the privacy cost scales as $O(M^{3/4})$ with the number of agents M, correcting the reported $\sqrt{M}$ scaling of [DP20]. Next, we compare our result with that of a (super) single agent running for MT rounds under the central model of DP (i.e., where the central server is trusted), which serves as a benchmark for our results. As shown in [SS18; CZ22b], the total regret for such a single agent is $O\Big(d\sqrt{MT} + \frac{\sqrt{MT}\,d^{3/4}\log^{1/4}(1/\delta)}{\sqrt{\varepsilon}}\Big)$. Comparing this bound with Corollary 6.2, we observe that the privacy cost of federated LCBs under silo-level LDP is a multiplicative $M^{1/4}$ factor higher than that of a super agent under central DP. This observation motivates us to consider SDP in the next section.
Federated LCBs under SDP
We now close the above $M^{1/4}$ gap in the privacy cost under silo-level LDP relative to what a super single agent achieves with a trusted central server. To do so, we consider federated LCBs under SDP, which retains the appealing feature of silo-level LDP that the central server need not be trusted. Thanks to our flexible privacy protocol P, the only changes needed compared to silo-level LDP are the introduction of a shuffler S to amplify privacy and a corresponding adjustment of the privacy noise $\sigma_0^2$.
Theorem 6.3 (Performance under SDP via amplification). Fix batch size B and let κ = 1 + log(T/B). Let P = (R, S, A) be the protocol given by Algorithm 2. Then, under Assumption 5.1, there exist constants $C_1, C_2 > 0$ such that for any $\varepsilon \le \frac{\sqrt{\kappa}}{C_1 T\sqrt{M}}$ and $\delta \le \frac{\kappa}{C_2 T}$, Algorithm 1 instantiated with P and $\sigma_0^2 = O\Big(\frac{2\kappa\log(1/\delta)\log(\kappa/(\delta T))\log(M\kappa/\delta)}{\varepsilon^2 M}\Big)$ satisfies (ε, δ)-SDP. Moreover, for any α ∈ (0, 1], there exist choices of λ and $\{\beta_{t,i}\}_{t,i}$ such that, with probability at least 1 − α, it enjoys the group regret
$$\mathrm{Reg}_M(T) = O\Big(dMB\log T + d\sqrt{MT}\log(MT/\alpha)\Big) + O\Big(\frac{d^{3/4}\sqrt{MT}\log^{3/4}(M\kappa/\delta)}{\sqrt{\varepsilon}}\log^{1/4}\Big(\frac{T}{B\alpha}\Big)\Big).$$
Corollary 6.4. Setting $B = \sqrt{T/M}$, Algorithm 1 achieves $O\Big(d\sqrt{MT} + \frac{d^{3/4}\sqrt{MT}\log^{3/4}(M\kappa/\delta)}{\sqrt{\varepsilon}}\Big)$ group regret, with $\sqrt{MT}$ synchronizations in total, under (ε, δ)-SDP.
Corollary 6.4 asserts that the privacy cost of federated LCBs under SDP matches that of a super single agent under central DP (up to a log factor in T, M, δ).
Comparison with existing SDP analysis. A crucial observation here is that the above result doesn't directly follow from existing amplification lemmas. In particular, prior results on privacy amplification [FMT22; EFMRTT19; CSUZZ19; BBGN19] show that shuffling the outputs of M (ε, δ)-LDP algorithms achieves roughly a $1/\sqrt{M}$ factor amplification in privacy for small ε, which is the key to closing the aforementioned gap in privacy cost. However, these amplification results apply only when each mechanism is LDP in the standard sense, i.e., when it operates on a dataset of size n = 1. This doesn't hold in our case, since the dataset at each silo is a stream of T points. Lowy and Razaviyayn [LR21] adopt group privacy to handle the case of n > 1, which can amplify any general DP mechanism but comes at the expense of a large increase in δ. To avoid this, we prove a new amplification lemma specific to Gaussian DP mechanisms operating on datasets of size n > 1. This helps us achieve the required $1/\sqrt{M}$ amplification in ε while keeping the increase in δ in check. The key idea behind our new lemma is to directly analyze the sensitivity when creating "clones" as in [FMT22], now accounting for the fact that all n > 1 points can differ (see Appendix E for a formal statement of the lemma).
SDP guarantee for a wide range of privacy parameters
One limitation of attaining SDP via amplification is that the privacy guarantee holds only for small values of ε, δ (see Theorem 6.3). In this section, we propose an alternative privacy protocol to resolve this limitation. The new protocol leverages the same binary tree structure as in Algorithm 2 for releasing and aggregating p-sums, but it employs different local randomizers and analyzers for computing (noisy) synchronized p-sums of bias vectors and covariance matrices ($\widehat{\alpha}_{i_k}$ in Algorithm 2). Specifically, it applies the vector sum mechanism $P_{Vec}$ of [CJMP21], which essentially takes n vectors as input and outputs their noisy sum. Here, privacy is ensured by injecting suitable binomial noise into a fixed-point encoding of each vector entry, where the noise level depends on ε, δ and n.
In our case, one cannot directly aggregate the M p-sums using $P_{Vec}$ with n = M. This is because each p-sum would then have a large norm (O(T) in the worst case), which would introduce a large amount of privacy noise (cf. Theorem 3.2 in [CJMP21]), resulting in worse utility (regret). Instead, we first expand each p-sum into the O(B) data points (bias vectors and covariance matrices) it comprises, each with O(1) norm, where B is the size of each batch. Then, we aggregate all n = O(BM) of those data points using the $P_{Vec}$ mechanism (one instance each for bias vectors and covariance matrices). For example, consider summing bias vectors during batch k = 6 and refer back to Fig. 1 for an illustration. Here, the p-sum for each agent is given by $\Sigma[5, 6] = \gamma_5 + \gamma_6$ (see line 6 in Algorithm 2), whose expansion results in 2B bias vectors (B each for batches 5 and 6). A noisy sum of n = 2BM bias vectors is then computed using $P_{Vec}$. We denote the entire mechanism by $P^T_{Vec}$ (see Algorithm 5 in Appendix E.2 for pseudo-code and a complete description; an illustrative sketch of the expansion step is given below). Now, the key intuition behind using $P_{Vec}$ as a building block is that it allows us to compute private vector sums under the shuffle model using nearly the same amount of noise as in the central model. In other words, it "simulates" the privacy noise introduced in vector summation under the central model using a shuffler. This, in turn, helps us match the regret of a super single agent under central DP while guaranteeing (strictly stronger) SDP. Specifically, we have the same order of regret as in Theorem 6.3, but now it holds for a wide range of privacy budgets ε, δ, as presented formally below.
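The sketch below illustrates the expansion step; `vector_sum_mechanism` is only a hypothetical stand-in for the $P_{Vec}$ mechanism of [CJMP21] (whose actual interface we do not reproduce), and all other names are ours:

```python
# Expansion step of P^T_Vec: rather than feeding one large-norm p-sum to the
# vector-sum mechanism, expand it into its O(B) constituent bias vectors,
# each of O(1) norm; those points are what get randomized and shuffled.
import numpy as np

def psum_batches(k):
    """Batches covered by the p-sum computed at step k (lowest-set-bit rule)."""
    width = k & -k                       # 2**i_k
    return list(range(k - width + 1, k + 1))

def expand_psum(local_xy, k, B):
    """local_xy[t] = (x_t, y_t); return the bias vectors inside the p-sum at k."""
    return [x * y
            for batch in psum_batches(k)
            for x, y in local_xy[(batch - 1) * B: batch * B]]

def vector_sum_mechanism(points, sigma):     # hypothetical stand-in for P_Vec
    rng = np.random.default_rng(0)
    return sum(points) + rng.normal(0, sigma, size=points[0].shape)

B, d = 4, 3
rng = np.random.default_rng(1)
data = [(v / np.linalg.norm(v), float(rng.uniform(-1, 1)))
        for v in rng.normal(size=(6 * B, d))]
print(psum_batches(6))                                   # -> [5, 6]
print(vector_sum_mechanism(expand_psum(data, 6, B), sigma=0.1))
```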
Theorem 6.5 (Performance under SDP via vector sum). Fix batch size B and let κ = 1 + log(T/B). Let $P^T_{Vec}$ be the privacy protocol given by Algorithm 5. Then, under Assumption 5.1, there exist parameter choices of $P^T_{Vec}$ such that for any $\varepsilon \le 60\sqrt{2\kappa\log(2/\delta)}$ and δ ≤ 1, Algorithm 1 instantiated with $P^T_{Vec}$ satisfies (ε, δ)-SDP. Moreover, for any α ∈ (0, 1], there exist choices of λ and $\{\beta_{t,i}\}_{t,i}$ such that, with probability at least 1 − α, it enjoys the group regret
$$\mathrm{Reg}_M(T) = O\Big(dMB\log T + d\sqrt{MT}\log(MT/\alpha)\Big) + O\Big(\frac{d^{3/4}\sqrt{MT}\log^{3/4}(\kappa d^2/\delta)}{\sqrt{\varepsilon}}\log^{1/4}\Big(\frac{T}{B\alpha}\Big)\Big).$$
Remark 6.6 (Importance of communicating p-sums). One of our key techniques for closing the regret gap under SDP is to communicate and shuffle only the p-sums rather than the prefix sums. With this, we ensure that each data point (bias vector/covariance matrix) participates in at most log K shuffle mechanisms (rather than in O(K) mechanisms, as would be the case if we communicated and shuffled prefix sums). This helps us keep the final privacy cost in check after adaptive composition. In other words, one cannot simply use shuffling to amplify the privacy of the proposed algorithm in [DP20] to close the regret gap (even ignoring its privacy and communication issues), since it communicates prefix sums at each synchronization. This again highlights the algorithmic novelty of our privacy protocols (Algorithms 2 and 5), which could be of independent interest. See Appendix E for further details.
Key Techniques: Overview
Our first key tool is a generic regret bound for Algorithm 1 under a mild condition on the injected noise. Let t = kB, and let $N_{t,i}$, $n_{t,i}$ denote the total noise injected up to the k-th communication by agent i into the covariance matrix $\sum_{s=1}^{t} x_{s,i}x_{s,i}^\top$ and the bias vector $\sum_{s=1}^{t} x_{s,i}y_{s,i}$, respectively. Moreover, let (i) $\sum_{i=1}^{M} n_{t,i}$ be a random vector whose entries are independent, mean zero, sub-Gaussian with variance at most $\sigma_1^2$, and (ii) $\sum_{i=1}^{M} N_{t,i}$ be a random symmetric matrix whose entries on and above the diagonal are independent sub-Gaussian random variables with variance at most $\sigma_2^2$. Let $\sigma = \max\{\sigma_1, \sigma_2\}$. Then, we have the following result.
Lemma 6.7 (Informal regret bound). With high probability, the regret of Algorithm 1 satisfies
$$\mathrm{Reg}_M(T) = O\big(dMB + d\sqrt{MT} + \sqrt{\sigma MT}\,d^{3/4}\big).$$
Armed with the above lemma, one only needs to determine the noise variance $\sigma^2$ under the different privacy constraints. For silo-level LDP, we use concentrated differential privacy [BS16] to obtain tighter privacy accounting. In this process, we also utilize the nice properties of the tree-based mechanism. The final total noise level is $\sigma^2 = 8M\kappa^2\cdot\frac{\log(2/\delta)+\varepsilon}{\varepsilon^2}$ with κ := 1 + log K. For SDP, the key idea is to leverage the privacy amplification of $1/\sqrt{M}$ by shuffling. Hence, the noise variance injected by each agent is roughly 1/M of the noise under LDP. By Lemma 6.7, we thus shave an $M^{1/4}$ factor off the regret under silo-level LDP. However, as mentioned before, the key technique to achieve this is our new amplification lemma in Appendix E. For SDP via vector sum, we utilize the property of $P_{Vec}$ to compute each noisy synchronized p-sum under SDP using a noise level of $O(\kappa/\varepsilon^2)$ (here we use the fact that each data point participates at most κ times, together with advanced composition). Then, by the binary tree structure again, each private prefix sum requires at most κ noisy synchronized p-sums. Thus, the total amount of noise is $O(\kappa^2/\varepsilon^2)$.
Simulation Results
We evaluate the regret performance of Algorithm 1 under silo-level LDP and SDP, which we abbreviate as LDP-FedLinUCB and SDP-FedLinUCB, respectively. We fix the confidence level α = 0.01 and batch size B = 25, and study comparative performance under varying privacy budgets ε, δ. We plot the time-averaged group regret $\mathrm{Reg}_M(T)/T$ in Figure 2, averaging results over 25 parallel runs. Our simulations are proof-of-concept only; we do not tune any hyperparameters.
Synthetic bandit instance. We simulate an LCB instance with a parameter θ* of dimension d = 10 and $|\mathcal{K}_i| = 100$ actions for each of the M agents. Similar to Vaswani et al. [VMDK20], we generate θ* and the feature vectors by sampling (d−1)-dimensional vectors of norm $1/\sqrt{2}$ uniformly at random and appending a $1/\sqrt{2}$ entry. Rewards are corrupted with Gaussian N(0, 0.25) noise (a sketch of this construction is given after the next paragraph).

Real-data bandit instance. We generate bandit instances from the Microsoft Learning to Rank dataset [QL13]. Queries form the contexts c and actions a are the available documents. The dataset contains 10K queries, each with up to 908 judged documents, where the query-document pairs are judged on a 3-point scale, rel(c, a) ∈ {0, 1, 2}. Each pair (c, a) has a feature vector ϕ(c, a), which is partitioned into title and body features of dimensions 57 and 78, respectively. We first train a lasso regression model on the title features to predict relevances from ϕ, and take this model as the bandit parameter θ* with d = 57 (a similar experiment with body features is reported in Appendix G). Next, we divide the queries equally into M = 10 agents and assign the corresponding feature vectors to the agents. This way, we obtain a federated LCB instance with 10 agents, each with number of actions $|\mathcal{K}_i| \le 908$.
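The synthetic instance construction can be sketched as follows; the constants (norm $1/\sqrt{2}$, appended $1/\sqrt{2}$ entry, N(0, 0.25) reward noise) follow the text above, while the helper names and toy sizes are ours:

```python
# Synthetic LCB instance: theta* and features are (d-1)-dimensional vectors
# of norm 1/sqrt(2) sampled uniformly, with a 1/sqrt(2) entry appended;
# rewards get Gaussian noise of variance 0.25 (std 0.5).
import numpy as np

def sample_unit(rng, dim, norm):
    v = rng.normal(size=dim)
    return norm * v / np.linalg.norm(v)

def make_instance(rng, d=10, num_agents=100, num_actions=100):
    c = 1 / np.sqrt(2)
    theta = np.append(sample_unit(rng, d - 1, c), c)
    feats = np.stack([
        np.stack([np.append(sample_unit(rng, d - 1, c), c)
                  for _ in range(num_actions)])
        for _ in range(num_agents)])            # shape: (agents, actions, d)
    return theta, feats

def reward(rng, theta, x):
    return float(x @ theta) + float(rng.normal(0, 0.5))

rng = np.random.default_rng(0)
theta, feats = make_instance(rng, num_agents=3, num_actions=5)
print(reward(rng, theta, feats[0, 0]))
```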
Observations. In sub-figure (a), we compare the performance of LDP-FedLinUCB and SDP-FedLinUCB (with the amplification-based privacy protocol P) on the synthetic Gaussian bandit instance with M = 100 agents under privacy budget δ = 0.0001 and ε = 0.001 or 0.0001. We observe that the regret of SDP-FedLinUCB is less than that of LDP-FedLinUCB for both values of ε, which is consistent with our theoretical results. Here, we only work with small privacy budgets, since the privacy guarantee of Theorem 6.3 holds only for ε, δ ≪ 1. In sub-figure (b), by contrast, we consider larger privacy budgets as allowed by Theorem 6.5 (e.g., ε = 0.2, δ = 0.1) and compare the regret of LDP-FedLinUCB and SDP-FedLinUCB (with the vector-sum based privacy protocol $P^T_{Vec}$). As expected, here also the regret of SDP-FedLinUCB decreases faster than that of LDP-FedLinUCB.
Next, we benchmark the performance of Algorithm 1 under silo-level LDP (i.e., LDP-FedLinUCB) against a non-private federated LCB algorithm with a fixed communication schedule, which we build upon the algorithm of Abbasi-Yadkori et al. [APS11] and refer to as FedLinUCB. In sub-figure (c), we demonstrate the cost of privacy under silo-level LDP on the real-data bandit instance by varying ε in the set {0.2, 1, 5} while keeping δ fixed at 0.1. We observe that the regret of LDP-FedLinUCB decreases and approaches that of FedLinUCB as ε increases (i.e., as the level of privacy protection decreases). A similar regret behavior is observed under SDP (postponed to Appendix G).

Figure 2: Time-averaged group regret of LDP-FedLinUCB, SDP-FedLinUCB and FedLinUCB (non-private) under varying privacy budgets ε, δ on (a, b) synthetic Gaussian bandit instance and (c) bandit instance generated from MSLR-WEB10K Learning to Rank dataset.
Concluding Remarks
Silo-level LDP/SDP vs. other privacy notions. It is helpful to compare silo-level LDP and SDP with other existing privacy notions. In Appendix F.1, we compare them with the standard local model, central model and shuffle model for single-agent LCBs. As a by-product, via a simple tweak of Algorithm 1, we also show how to achieve a slightly stronger privacy guarantee than silo-level LDP, in the sense that the action selection is then also based on private data only. With this, we can protect not only against collusion among other silos (as in silo-level LDP), but also against collusion among users within the same silo (as in standard central DP).

Non-unique users. In this theoretical work, we assume that all MT users are unique. In practice, the same user may participate in multiple rounds within the same silo or across different silos. For example, the same patient can have multiple medical tests at the same hospital or across different types of hospitals. We discuss how to handle non-unique users in Appendix F.3.
Acknowledgements
XZ is supported in part by NSF CNS-2153220. XZ would like to thank Abhimanyu Dubey for discussions on the work [DP20]. XZ would also like to thank Andrew Lowy and Ziyu Liu for insightful discussions on the privacy notion for cross-silo federated learning. XZ would also thank Vitaly Feldman and Audra McMillan for the discussion on some subtleties behind "hiding among the clones".
[DNPR10]
C. Dwork, M. Naor, T. Pitassi, and G. N. Rothblum. "Differential privacy under continual observation". In: Proceedings of the forty-second ACM symposium on Theory of computing. 2010, pp. 715-724.
[DP20]
A. Dubey and A. Pentland. "Differentially-private federated linear bandits". In: Advances in Neural Information Processing Systems 33 (2020), pp. 6003-6014.
[DPZRT18] R. Dobbe, Y. Pu, J. Zhu, K. Ramchandran, and C. Tomlin. "Customized local differential privacy for multi-agent distributed optimization". In: arXiv preprint arXiv:1806.06035 (2018).
[DR14] C. Dwork and A. Roth. "The algorithmic foundations of differential privacy". In: Foundations and Trends in Theoretical Computer Science 9.3-4 (2014), pp. 211-407.

A.1 More on violation of silo-level LDP

As shown in the main paper, Algorithm 1 in [DP20] does not satisfy silo-level LDP. To give a more concrete illustration of the privacy leakage, we now specify the form of f, the local data $X_i$ and the synchronized data Z in (1) according to [DP20]. In particular, a communication is triggered at round t if, for any silo i, it holds that
$$(t - t')\,\log\frac{\det\big(Z + \sum_{s=t'+1}^{t} x_{s,i}x_{s,i}^\top + \lambda_{\min} I\big)}{\det(Z + \lambda_{\min} I)} > D, \qquad (2)$$
where t′ is the latest synchronization time before t, Z is the collection of all synchronized (private) covariance matrices up to time t′, $\lambda_{\min} > 0$ is a regularization constant (which depends on the privacy budgets ε, δ), and D > 0 is a suitable threshold (which depends on the number of silos M).
With the above explicit form in hand, we can give a more concrete discussion of Example 4.1. A communication is triggered at round t = 1 if $\det(x_{1,m}x_{1,m}^\top + \lambda_{\min}I) > \det(\lambda_{\min}I)\,e^D$ holds for any silo m. This implies that $(\lambda_{\min} + \|x_{1,m}\|^2)\lambda_{\min}^{d-1} > e^D\lambda_{\min}^d$, which, in turn, yields $\|x_{1,m}\|^2 > \lambda_{\min}(e^D - 1) =: C$. Now, if $\|x_{1,j}\|^2 \le C$, then silo j immediately knows that $\|x_{1,i}\|^2 > C$, where C is a known constant. Since $x_{1,i}$ contains the context information of the user (Alice), this norm condition could immediately reveal that some specific features in the context vector are active (e.g., Alice has both diabetes and heart disease), thus leaking Alice's private and sensitive information to silo j.
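A small numeric sanity check of this argument is given below; the values of $\lambda_{\min}$ and D are arbitrary illustrations, not from the paper:

```python
# Check: with the determinant trigger at t = 1, a synchronization occurs iff
# ||x_{1,m}||^2 > lambda_min * (e^D - 1) =: C, so observing sync / no-sync
# reveals on which side of the known constant C Alice's context lies.
import numpy as np

lam_min, D, d = 0.5, 0.1, 4
C = lam_min * (np.exp(D) - 1)

def sync_triggered(x):
    lhs = np.linalg.det(np.outer(x, x) + lam_min * np.eye(d))
    return lhs > np.linalg.det(lam_min * np.eye(d)) * np.exp(D)

for norm_sq in (0.5 * C, 2 * C):
    x = np.sqrt(norm_sq / d) * np.ones(d)       # vector with ||x||^2 = norm_sq
    print(f"||x||^2 = {norm_sq:.4f}, C = {C:.4f}, sync: {sync_triggered(x)}")
```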
Remark A.1. The above result has two implications: (i) the current proof strategy for the Fed-DP guarantee in [DP20] does not hold, since it essentially relies on post-processing of DP through silo-level LDP; (ii) Fed-DP can fail to capture a reasonable adversary model in cross-silo federated LCBs. That is, even if Algorithm 1 in [DP20] satisfied Fed-DP, it still could not protect Alice's information from being inferred by a malicious silo (a typical adversary model in cross-silo FL). Thus, we believe that silo-level LDP is a more appropriate privacy notion for cross-silo federated LCBs.
A.2 More on violation of Fed-DP
As shown in the main paper, Algorithm 1 in [DP20] also does not satisfy its weaker notion of Fed-DP. To give a more concrete illustration, recall Example 4.1 and define $m_{i,j}$ as the message/data sent from silo i to silo j after round t = 1. Suppose that in the case of Alice there is no synchronization, and hence $m_{i,j} = 0$. On the other hand, in the case of Tracy (i.e., when the first user at silo i changes from Alice to Tracy), suppose synchronization is triggered by silo i via rule (1) due to Tracy's data. Then, according to [DP20], $m_{i,j} = x_{1,i}y_{1,i} + N$ (considering the bias vector here), where N is the noise injected when silo i sends out its data. Now, by the requirement of Fed-DP, the recommended action at silo j in round t = 2 needs to be "similar" or "indistinguishable" in probability under the change from Alice to Tracy. Note that silo j chooses its action at round t = 2 based on its local data (which is unchanged) and $m_{i,j}$, via a deterministic selection rule (i.e., LinUCB) in Algorithm 1 of [DP20]. Thus, Fed-DP essentially requires $m_{i,j}$ to be close in probability when Alice changes to Tracy, which is clearly not the case (i.e., 0 vs. $x_{1,i}y_{1,i} + N$). Thus, Algorithm 1 in [DP20] also fails Fed-DP.
Remark A.2. One can also view this from the following perspective: the non-private data-dependent sync rule (i.e., (2)) in [DP20] impacts the communicated messages/data as well, which cannot be made private merely by injecting noise when sending out data. As a remedy, a possible approach is to use private (noisy) data in rule (2) when determining synchronization (while still injecting noise when sending out data). As a result, whether a synchronization occurs would be "indistinguishable" under Alice or Tracy, and hence $m_{i,j}$ would now be similar. However, this approach still suffers from the gap in the communication cost analysis (see below), and moreover it incurs new challenges in the regret analysis; see Appendix C for a detailed discussion of this approach.
A.3 More on communication cost analysis
The current analysis in [DP20] (cf. Proposition 5) for communication cost (i.e., how many rounds of communication within T ) essentially follows the approach in the non-private work [WHCW20] (cf. proof of Theorem 4). However, due to additional privacy noise injected into the communicated data, one key step of the approach in [WHCW20] fails in the private case. In the following, we first point out the issue using notations in [DP20].
The key issue in its current proof of Proposition 5 in [DP20] is that
$$\log\frac{\det(S_{i,t+n'})}{\det(S_{i,t})} > \frac{D}{n'} \qquad (3)$$
which appears right above Eq. 4 in [DP20] does not hold. More specifically, [t, t + n ′ ] is the i-th interval between two communication steps and S i,t , S i,t+n ′ are corresponding synchronized private matrices. At the time t + n ′ , we know (2) is satisfied by some silo (say j ∈ [M ]), since there is a new synchronization. In the non-private case, S i,t+n ′ simply includes some additional local covariance matrices from silos other than j, which are positive semi-definite (PSD). As a result, (3) holds. However, in the private case, S i,t+n ′ includes the private messages from silos other than j, which may not be positive semi-definite (PSD), since there are some new covariance matrices as well as new Gaussian privacy noise (which could be negative definite). Thus, (3) may not hold anymore.
B A Generic Regret Analysis for Algorithm 1
In this section, we formally establish Lemma 6.7, i.e., our generic regret bound for Algorithm 1 under a sub-Gaussian noise condition. To this end, let us first recall the following notation. Fix B, T ∈ N and let K = T/B be the total number of communication steps. For all i ∈ [M] and all t = kB, k ∈ [K], we let $N_{t,i} = W_{t,i} - \sum_{s=1}^{t} x_{s,i}x_{s,i}^\top$ and $n_{t,i} = U_{t,i} - \sum_{s=1}^{t} x_{s,i}y_{s,i}$ be the cumulative injected noise up to the k-th communication by agent i. We further let $H_t := \lambda I_d + \sum_{i\in[M]} N_{t,i}$ and $h_t := \sum_{i\in[M]} n_{t,i}$.

Assumption B.1 (Regularity). Fix any α ∈ (0, 1]. With probability at least 1 − α, $H_t$ is positive definite and there exist constants $\lambda_{\max}$, $\lambda_{\min}$ and ν depending on α such that for all t = kB, k ∈ [K],
$$\|H_t\| \le \lambda_{\max}, \qquad \|H_t^{-1}\| \le 1/\lambda_{\min}, \qquad \|h_t\|_{H_t^{-1}} \le \nu.$$
With the above regularity assumption and the boundedness in Assumption 5.1, we first establish the following general regret bound for Algorithm 1, which can be viewed as a direct generalization of the results in [SS18; CZ22b] to the federated case.
Lemma B.2. Let Assumptions B.1 and 5.1 hold. Fix any α ∈ (0, 1]. There exist choices of λ and $\{\beta_{t,i}\}_{t\in[T],i\in[M]}$ such that, with probability at least 1 − α, the group regret of Algorithm 1 satisfies
$$\mathrm{Reg}_M(T) = O\Big(\beta_T\sqrt{dMT\log\Big(1+\frac{MT}{d\lambda_{\min}}\Big)}\Big) + O\Big(MBd\log\Big(1+\frac{MT}{d\lambda_{\min}}\Big)\Big),$$
where $\beta_T := \sqrt{2\log\frac{2}{\alpha} + d\log\big(1+\frac{MT}{d\lambda_{\min}}\big)} + \sqrt{\lambda_{\max}} + \nu$.
Lemma 6.7 is a corollary of the above result, which holds by bounding $\lambda_{\max}$, $\lambda_{\min}$, ν under sub-Gaussian privacy noise.

Assumption B.3 (Sub-Gaussian private noise). There exist constants $\tilde\sigma_1$ and $\tilde\sigma_2$ such that for all t = kB, k ∈ [K]: (i) $\sum_{i=1}^{M} n_{t,i}$ is a random vector whose entries are independent, mean zero, sub-Gaussian with variance at most $\tilde\sigma_1^2$, and (ii) $\sum_{i=1}^{M} N_{t,i}$ is a random symmetric matrix whose entries on and above the diagonal are independent sub-Gaussian random variables with variance at most $\tilde\sigma_2^2$. Let $\tilde\sigma^2 = \max\{\tilde\sigma_1^2, \tilde\sigma_2^2\}$.

Now, we are ready to state the formal version of Lemma 6.7 as follows.

Lemma B.4 (Formal version of Lemma 6.7). Let Assumptions B.3 and 5.1 hold. Fix any α ∈ (0, 1]. There exist choices of λ and $\{\beta_{t,i}\}_{t,i}$ such that the group regret of Algorithm 1 satisfies $\mathrm{Reg}_M(T) = \tilde{O}\big(dMB + d\sqrt{MT} + \sqrt{\tilde\sigma MT}\,d^{3/4}\big)$ with probability at least 1 − α.
B.1 Proofs
Proof of Lemma B.2. We divide the proof into the following six steps. Let E be the event given in Assumption B.1, which holds with probability at least 1 − α. In the following, we condition on E.
Step 1: Concentration. In this step, we show that, with high probability, $\|\theta^* - \hat\theta_{t,i}\|_{V_{t,i}} \le \beta_{t,i}$ for all i ∈ [M]. Fix an agent i ∈ [M] and t ∈ [T], and let $t_{last}$ be the latest communication round of all agents before t. By the update rule, we have
$$\hat\theta_{t,i} = V_{t,i}^{-1}(U_{syn} + U_i) = V_{t,i}^{-1}\Big(\sum_{j=1}^{M}\sum_{s=1}^{t_{last}} x_{s,j}y_{s,j} + \sum_{j=1}^{M} n_{t_{last},j} + \sum_{s=t_{last}+1}^{t-1} x_{s,i}y_{s,i}\Big)$$
$$= \Big(\lambda I + \sum_{j=1}^{M}\sum_{s=1}^{t_{last}} x_{s,j}x_{s,j}^\top + \sum_{j=1}^{M} N_{t_{last},j} + \sum_{s=t_{last}+1}^{t-1} x_{s,i}x_{s,i}^\top\Big)^{-1}\Big(\sum_{j=1}^{M}\sum_{s=1}^{t_{last}} x_{s,j}y_{s,j} + \sum_{j=1}^{M} n_{t_{last},j} + \sum_{s=t_{last}+1}^{t-1} x_{s,i}y_{s,i}\Big).$$
By the linear reward model $y_{s,j} = \langle x_{s,j}, \theta^*\rangle + \eta_{s,j}$ for all j ∈ [M] and elementary algebra, we have
$$\theta^* - \hat\theta_{t,i} = V_{t,i}^{-1}\Big(H_{t_{last}}\theta^* - \sum_{j=1}^{M}\sum_{s=1}^{t_{last}} x_{s,j}\eta_{s,j} - \sum_{s=t_{last}+1}^{t-1} x_{s,i}\eta_{s,i} - h_{t_{last}}\Big),$$
where we recall that $H_{t_{last}} = \lambda I + \sum_{j=1}^{M} N_{t_{last},j}$ and $h_{t_{last}} = \sum_{j=1}^{M} n_{t_{last},j}$. Thus, multiplying both sides by $V_{t,i}^{1/2}$ yields
$$\|\theta^* - \hat\theta_{t,i}\|_{V_{t,i}} \le \Big\|\sum_{j=1}^{M}\sum_{s=1}^{t_{last}} x_{s,j}\eta_{s,j} + \sum_{s=t_{last}+1}^{t-1} x_{s,i}\eta_{s,i}\Big\|_{V_{t,i}^{-1}} + \|H_{t_{last}}\theta^*\|_{V_{t,i}^{-1}} + \|h_{t_{last}}\|_{V_{t,i}^{-1}}$$
$$\overset{(a)}{\le} \Big\|\sum_{j=1}^{M}\sum_{s=1}^{t_{last}} x_{s,j}\eta_{s,j} + \sum_{s=t_{last}+1}^{t-1} x_{s,i}\eta_{s,i}\Big\|_{(G_{t,i}+\lambda_{\min}I)^{-1}} + \|\theta^*\|_{H_{t_{last}}} + \|h_{t_{last}}\|_{H_{t_{last}}^{-1}}$$
$$\overset{(b)}{\le} \Big\|\sum_{j=1}^{M}\sum_{s=1}^{t_{last}} x_{s,j}\eta_{s,j} + \sum_{s=t_{last}+1}^{t-1} x_{s,i}\eta_{s,i}\Big\|_{(G_{t,i}+\lambda_{\min}I)^{-1}} + \sqrt{\lambda_{\max}} + \nu,$$
where (a) holds by $V_{t,i} \succeq H_{t_{last}}$ and $V_{t,i} \succeq G_{t,i}+\lambda_{\min}I$ with $G_{t,i} := \sum_{j=1}^{M}\sum_{s=1}^{t_{last}} x_{s,j}x_{s,j}^\top + \sum_{s=t_{last}+1}^{t-1} x_{s,i}x_{s,i}^\top$, and (b) holds under the event E. Bounding the remaining self-normalized noise term via standard concentration and putting everything together, we have, with probability at least 1 − 2α, for all i ∈ [M] and all t ∈ [T],
$$\|\theta^* - \hat\theta_{t,i}\|_{V_{t,i}} \le \beta_{t,i} = \beta_t, \quad \text{where } \beta_t := \sqrt{2\log\frac{1}{\alpha} + d\log\Big(1+\frac{Mt}{d\lambda_{\min}}\Big)} + \sqrt{\lambda_{\max}} + \nu. \qquad (4)$$
Step 2: Per-step regret. With the above concentration result, based on our UCB policy for choosing the action, we have the classic bound on the per-step regret r t,i , that is, with probability at least 1 − 2α
$$r_{t,i} = \langle\theta^*, x^*_{t,i}\rangle - \langle\theta^*, x_{t,i}\rangle \overset{(a)}{=} \big(\langle\theta^*, x^*_{t,i}\rangle - \mathrm{UCB}_{t,i}(x^*_{t,i})\big) + \big(\mathrm{UCB}_{t,i}(x^*_{t,i}) - \mathrm{UCB}_{t,i}(x_{t,i})\big) + \big(\mathrm{UCB}_{t,i}(x_{t,i}) - \langle\theta^*, x_{t,i}\rangle\big)$$
$$\overset{(b)}{\le} 0 + 0 + 2\beta_{t,i}\|x_{t,i}\|_{V_{t,i}^{-1}} \le 2\beta_T\|x_{t,i}\|_{V_{t,i}^{-1}},$$
where in (a) we let $\mathrm{UCB}_{t,i}(x) := \langle\hat\theta_{t,i}, x\rangle + \beta_{t,i}\|x\|_{V_{t,i}^{-1}}$; (b) holds by optimism of the UCB (from the concentration result), the greedy action selection, and the concentration result again.
Step 3: Regret decomposition into good and bad epochs. In Algorithm 1, at the end of each synchronization time t = kB for k ∈ [K], all the agents communicate with the server by uploading private statistics and downloading the aggregated ones. We then divide the time horizon T into epochs by the communication (sync) rounds. In particular, the k-th epoch contains the rounds in $(t_{k-1}, t_k]$, where $t_k = kB$ is the k-th sync round. We define $V_k := \lambda_{\min} I + \sum_{i=1}^{M}\sum_{t=1}^{t_k} x_{t,i}x_{t,i}^\top$, i.e., all the data at the end of the k-th communication plus a regularizer. Then, we say that the k-th epoch is a "good" epoch if
$$\frac{\det(V_k)}{\det(V_{k-1})} \le 2;$$
otherwise it is a "bad" epoch. Thus, we can divide the group regret into two terms:
$$\mathrm{Reg}_M(T) = \sum_{i\in[M]}\sum_{t\in\text{good epochs}} r_{t,i} + \sum_{i\in[M]}\sum_{t\in\text{bad epochs}} r_{t,i}.$$
Step 4: Bound the regret in good epochs. To this end, we introduce an imaginary single agent that pulls all the MT actions in the following order: $x_{1,1}, x_{1,2}, \ldots, x_{1,M}, x_{2,1}, \ldots, x_{2,M}, \ldots, x_{T,1}, \ldots, x_{T,M}$. We define a corresponding imaginary design matrix $\bar V_{t,i} = \lambda_{\min} I + \sum_{p<t,\, q\in[M]} x_{p,q}x_{p,q}^\top + \sum_{p=t,\, q<i} x_{p,q}x_{p,q}^\top$, i.e., the design matrix right before $x_{t,i}$. The key reason behind this construction is that one can now use the standard result (the elliptical potential lemma, cf. Lemma 11 in [APS11]) to bound the summation of bonus terms, i.e., $\sum_{t,i}\|x_{t,i}\|_{\bar V_{t,i}^{-1}}$. Suppose that t ∈ [T] is within the k-th epoch. One key property we will use is that for all i, $V_k \succeq \bar V_{t,i}$ and $G_{t,i} + \lambda_{\min} I \succeq V_{k-1}$, which holds simply by their definitions. This property enables us to see that for any t in a good epoch, $\det(\bar V_{t,i})/\det(G_{t,i} + \lambda_{\min} I) \le 2$. This is important since, by the standard "determinant trick", we have
$$\|x_{t,i}\|_{(G_{t,i}+\lambda_{\min}I)^{-1}} \le \sqrt{2}\,\|x_{t,i}\|_{\bar V_{t,i}^{-1}}. \qquad (5)$$
In particular, this follows from Lemma 12 in [APS11]: for two positive definite matrices $A, B \in \mathbb{R}^{d\times d}$ satisfying $A \succeq B$ and any $x \in \mathbb{R}^d$, we have $\|x\|_A \le \|x\|_B\cdot\sqrt{\det(A)/\det(B)}$. Note that here we also use $\det(A) = 1/\det(A^{-1})$. Hence, we can bound the regret in good epochs as follows.
$$\sum_{i\in[M]}\sum_{t\in\text{good epochs}} r_{t,i} \overset{(a)}{\le} \sum_{i\in[M]}\sum_{t\in\text{good epochs}} \min\{2\beta_T\|x_{t,i}\|_{V_{t,i}^{-1}}, 1\} \overset{(b)}{\le} \sum_{i\in[M]}\sum_{t\in\text{good epochs}} \min\{2\beta_T\|x_{t,i}\|_{(G_{t,i}+\lambda_{\min}I)^{-1}}, 1\}$$
$$\overset{(c)}{\le} \sum_{i\in[M]}\sum_{t\in\text{good epochs}} \min\{2\sqrt{2}\beta_T\|x_{t,i}\|_{\bar V_{t,i}^{-1}}, 1\} \overset{(d)}{\le} \sum_{i\in[M]}\sum_{t\in\text{good epochs}} 2\sqrt{2}\beta_T\min\{\|x_{t,i}\|_{\bar V_{t,i}^{-1}}, 1\}$$
$$\le \sum_{i\in[M]}\sum_{t\in[T]} 2\sqrt{2}\beta_T\min\{\|x_{t,i}\|_{\bar V_{t,i}^{-1}}, 1\} \overset{(e)}{\le} O\Big(\beta_T\sqrt{dMT\log\Big(1+\frac{MT}{d\lambda_{\min}}\Big)}\Big), \qquad (6)$$
where (a) holds by the per-step regret bound in Step 2 and the boundedness of reward; (b) follows from the fact that $V_{t,i} \succeq G_{t,i} + \lambda_{\min} I$ under event E; (c) holds by (5) when t is in a good epoch; (d) is true since $\beta_T \ge 1$; (e) holds by the elliptical potential lemma (cf. Lemma 11 in [APS11]).
Step 5: Bound the regret in bad epochs. Let $T_{bad}$ be the total number of rounds in all bad epochs. Thus, the total number of bad rounds across all agents is $M\cdot T_{bad}$. As a result, the cumulative group regret in all these bad rounds is upper bounded by $M\cdot T_{bad}$ due to the boundedness of reward.
We are left to bound $T_{bad}$. It suffices to bound $N_{bad}$, the total number of bad epochs; then $T_{bad} = N_{bad}\cdot B$, where B is the fixed batch size. To this end, recall that K = T/B and define $\Psi := \{k \in [K] : \log\det(V_k) - \log\det(V_{k-1}) > \log 2\}$, i.e., $N_{bad} = |\Psi|$. Thus, we have
$$\log 2\cdot|\Psi| \le \sum_{k\in\Psi}\big(\log\det(V_k) - \log\det(V_{k-1})\big) \le \sum_{k\in[K]}\big(\log\det(V_k) - \log\det(V_{k-1})\big) \le d\log\Big(1+\frac{MT}{d\lambda_{\min}}\Big).$$
Hence, we have $N_{bad} = |\Psi| \le \frac{d}{\log 2}\log\big(1+\frac{MT}{d\lambda_{\min}}\big)$. Thus, we can bound the regret in bad epochs as follows.
$$\sum_{i\in[M]}\sum_{t\in\text{bad epochs}} r_{t,i} \le M\cdot T_{bad} = M\cdot B\cdot N_{bad} \le M\cdot B\cdot\frac{d}{\log 2}\log\Big(1+\frac{MT}{d\lambda_{\min}}\Big). \qquad (7)$$
Step 6: Putting everything together. Substituting the total regret in good epochs given by (6) and the total regret in bad epochs given by (7) into the regret decomposition in Step 3 yields the final cumulative group regret
$$\mathrm{Reg}_M(T) = O\Big(\beta_T\sqrt{dMT\log\Big(1+\frac{MT}{d\lambda_{\min}}\Big)}\Big) + O\Big(MBd\log\Big(1+\frac{MT}{d\lambda_{\min}}\Big)\Big),$$
where $\beta_T := \sqrt{2\log\frac{1}{\alpha} + d\log\big(1+\frac{MT}{d\lambda_{\min}}\big)} + \sqrt{\lambda_{\max}} + \nu$. Finally, taking a union bound, we have the required result.

Now, we turn to the proof of Lemma B.4, which is an application of Lemma B.2 we just proved.
Proof of Lemma B.4. Thanks to Lemma B.2, we only need to determine the three constants $\lambda_{\max}$, $\lambda_{\min}$ and ν under the sub-Gaussian private-noise assumption (Assumption B.3). To this end, we resort to concentration bounds for sub-Gaussian random vectors and matrices.
To start with, under (i) in Assumption B.3, by the concentration bound for the norm of a vector with sub-Gaussian entries (cf. Theorem 3.1.1 in [Ver18]) and a union bound over all communication rounds, we have for all t = kB with k ∈ [T/B] and any α ∈ (0, 1], with probability at least 1 − α/2, for some absolute constant $c_1$,
$$\Big\|\sum_{i=1}^{M} n_{t,i}\Big\| = \|h_t\| \le \Sigma_n := c_1\cdot\tilde\sigma_1\cdot\big(\sqrt{d} + \sqrt{\log(T/(\alpha B))}\big).$$
By (ii) in Assumption B.3, the concentration bound for the norm of a sub-Gaussian symmetric random matrix (cf. Corollary 4.4.8 in [Ver18]) and a union bound over all communication rounds, we have for all t = kB with k ∈ [T/B] and any α ∈ (0, 1], with probability at least 1 − α/2,
$$\Big\|\sum_{i=1}^{M} N_{t,i}\Big\| \le \Sigma_N := c_2\cdot\tilde\sigma_2\cdot\big(\sqrt{d} + \sqrt{\log(T/(\alpha B))}\big)$$
for some absolute constant $c_2$. Thus, if we choose $\lambda = 2\Sigma_N$, we have $\|H_t\| = \big\|\lambda I_d + \sum_{i=1}^{M} N_{t,i}\big\| \le 3\Sigma_N$, i.e., $\lambda_{\max} = 3\Sigma_N$ and $\lambda_{\min} = \Sigma_N$. Finally, to determine ν, we note that
$$\|h_t\|_{H_t^{-1}} \le \frac{1}{\sqrt{\lambda_{\min}}}\|h_t\| \le c\cdot\sqrt{\tilde\sigma}\cdot\big(\sqrt{d} + \sqrt{\log(T/(\alpha B))}\big)^{1/2} := \nu,$$
where $\tilde\sigma = \max\{\tilde\sigma_1, \tilde\sigma_2\}$. The final regret bound is obtained by plugging the three values into the result given by Lemma B.2.
C Discussion on Private Adaptive Communication
In the main paper and Appendix A, we have pointed out that the gap in the privacy guarantee of Algorithm 1 in [DP20] stems from its adaptive communication schedule, which leaks privacy through its dependence on non-private data. As mentioned in Remark A.1, one possible fix is to use private data to determine the synchronization in (2). This resolves the privacy issue. However, the same issue in the communication cost remains (due to privacy noise), and hence the O(log T) communication bound does not hold. Moreover, this new approach also leads to new challenges in the regret analysis, compared with both its current analysis in [DP20] and the standard one in [WHCW20].
To better illustrate the new challenges, let us restate Algorithm 1 in [DP20] using our notations and first focus on how to establish the regret based on its current adaptive schedule (which has the issue of privacy leakage). After we have a better understanding of the idea, we will see how new challenges come up when one uses private data for an adaptive schedule.
As shown in Algorithm 3, the key difference compared with our fixed-batch schedule is the data-adaptive synchronization condition in line 9. Note that we only focus on silo-level LDP and use PRIVATIZER to represent a general protocol that can privatize the communicated data (e.g., P or the standard tree-based algorithm in [DP20]).
Algorithm 3 Restatement of Algorithm 1 in [DP20] under our notation (key steps)
5: Compute $V_{t,i} = \lambda I + W_{syn} + W_i$ and $\hat\theta_{t,i} = V_{t,i}^{-1}(U_{syn} + U_i)$
6: Play arm $a_{t,i} = \mathrm{argmax}_{a\in\mathcal{K}_i}\,\langle\phi_i(c_{t,i}, a), \hat\theta_{t,i}\rangle + \beta_{t,i}\|\phi_i(c_{t,i}, a)\|_{V_{t,i}^{-1}}$ and set $x_{t,i} = \phi_i(c_{t,i}, a_{t,i})$
7: Observe reward $y_{t,i}$
8: Update $W_i = W_i + x_{t,i}x_{t,i}^\top$, $U_i = U_i + x_{t,i}y_{t,i}$
9: if $\log\det(V_{t,i} + x_{t,i}x_{t,i}^\top) - \log\det(V_{last}) > \frac{D}{t - t_{last}}$ then
10: Send a signal to the server to start a synchronization round
...
16: Receive $W_{syn}$ and $U_{syn}$ from the server
17: Reset $W_i = 0$, $U_i = 0$, $t_{last} = t$ and $V_{last} = \lambda I + W_{syn}$

C.1 Regret Analysis under the Non-Private Adaptive Schedule

In this section, we demonstrate the key step in establishing the regret with the non-private adaptive communication schedule in Algorithm 3 (i.e., line 9). It turns out that the regret analysis is very similar to our proof of Lemma B.2 for the fixed-batch case; the only key difference lies in Step 5, when bounding the regret in bad epochs. The main idea behind adaptive communication is: whenever the accumulated local regret at any agent exceeds a threshold, a synchronization is required to keep the data homogeneous among agents. This idea is directly reflected in the following analysis.
Bound the regret in bad epochs (adaptive communication case). Consider an arbitrary bad epoch k, i.e., $(t_{k-1}, t_k]$, where $t_k$ is the round of the k-th communication. For all i, we want to bound the total regret in $(t_{k-1}, t_k]$, denoted by $R_i^k$; that is, the local regret between any two communications (in a bad epoch) should not be too large. For now, suppose we already have such a bound U (which will be achieved by adaptive communication later), i.e., $R_i^k \le U$ for all i, k. Then we can easily bound the total regret in bad epochs. To see this, recall that $\Psi := \{k \in [K] : \log\det(V_k) - \log\det(V_{k-1}) > \log 2\}$, i.e., $N_{bad} = |\Psi|$; we have
$$\sum_{i}\sum_{t\in\text{bad epochs}} r_{t,i} = \sum_{i}\sum_{k\in\Psi} R_i^k = O(|\Psi| M U).$$
Plugging in $N_{bad} = |\Psi| \le \frac{d}{\log 2}\log\big(1+\frac{MT}{d\lambda}\big)$, we obtain the total regret for bad epochs. Now, we are only left to find U. Here is where the adaptive schedule of the algorithm comes in. First, note that
$$\sum_{t_{k-1}<t<t_k} r_{t,i} \overset{(a)}{\le} \sum_{t_{k-1}<t<t_k} \min\{2\beta_T\|x_{t,i}\|_{V_{t,i}^{-1}}, 1\} \qquad (8)$$
$$\overset{(b)}{\le} O\Big(\beta_T\sqrt{(t_k - t_{k-1})\log\frac{\det V_{t_k,i}}{\det V_{last}}}\Big) \qquad (9)$$
$$\overset{(c)}{\le} O\big(\beta_T\sqrt{D}\big),$$
where (a) holds by the boundedness of reward; (b) follows from the elliptical potential lemma, i.e., $V_{last}$ is PSD under event E and $V_{t,i} = V_{t-1,i} + x_{t-1,i}x_{t-1,i}^\top$ for all $t \in (t_{k-1}, t_k)$; (c) holds by the adaptive schedule in line 9 of Algorithm 3. As a result, we have
$$R_i^k \le O\big(\beta_T\sqrt{D}\big) + 1,$$
where the regret at round t k is at most 1 by the boundedness of reward. With a proper choice of D, one can obtain a final regret bound.
C.2 Challenges in Regret Analysis under Private Adaptive Schedule
Now, we discuss the new challenges that arise when one uses private data for the adaptive communication schedule. In this case, one needs to first privatize the new local Gram matrices (e.g., $\sum_{s=t_{last}+1}^{t} x_{s,i}x_{s,i}^\top$) before they are used in the determinant condition. This can be done by using the standard tree-based algorithm with each data point being $x_{s,i}x_{s,i}^\top$. With this additional step, the determinant condition becomes
$$\log\det(\widetilde{V}_{t,i}) - \log\det(V_{last}) > \frac{D}{t - t_{last}}, \qquad (10)$$
where $\widetilde{V}_{t,i} := V_{last} + \sum_{s=t_{last}+1}^{t} x_{s,i}x_{s,i}^\top + N^{loc}_{t,i}$ and $N^{loc}_{t,i}$ is the new local noise injected for the private schedule up to time t. Now suppose one uses (10) to determine $t_k$. Then, it does not imply that (9) is upper bounded by $\beta_T\sqrt{D}$. That is, $\frac{\det(\widetilde{V}_{t,i})}{\det(V_{last})} \le D'$ does not necessarily mean that $\frac{\det\big(V_{last} + \sum_{s=t_{last}+1}^{t} x_{s,i}x_{s,i}^\top\big)}{\det(V_{last})} \le D'$.
One may try to work around (8) by first using $G_{t,i} + \lambda_{\min}I$ to lower bound $V_{t,i}$. Then, (9) becomes $O\Big(\beta_T\sqrt{(t_k - t_{k-1})\log\frac{\det(G_{t_k,i}+\lambda_{\min}I)}{\det(G_{t_{k-1},i}+\lambda_{\min}I)}}\Big)$, which again cannot be bounded based on the rule given by (10).
To see this, note that $\frac{\det(\widetilde{V}_{t_k-1,i})}{\det(V_{last})} \le D'$ only implies that $\frac{\det(G_{t_k,i}+\lambda_{\min}I)}{\det(G_{t_{k-1},i}+\lambda_{\max}I)} \le D'$.
D Additional Details on Federated LCBs under Silo-Level LDP
In this section, we provide details for Section 6.1. In particular, we present the proof for Theorem 6.1 and the alternative privacy protocol for silo-level LDP.
D.1 Proof of Theorem 6.1
Proof of Theorem 6.1. Privacy. We only need to show that P in Algorithm 2, with a proper choice of $\sigma_0$, satisfies (ε, δ)-DP for all k ∈ [K], which implies that the full transcript of the communication in Algorithm 1 is private for any local agent i.
First, recall that the (multivariate) Gaussian mechanism satisfies zero-concentrated differential privacy (zCDP) [BS16]. In particular, by [BS16, Lemma 2.5], the computation of each node (p-sum) in the tree is ρ-zCDP with $\rho = \frac{L^2}{2\sigma_0^2}$. Then, from the construction of the binary tree in P, one can easily see that any single data point $\gamma_i$ (for all i ∈ [K]) impacts at most 1 + log(K) nodes. Thus, by adaptive composition of zCDP (cf. Lemma 2.3 in [BS16]), the entire release of all p-sums is $(1 + \log K)\rho$-zCDP. Finally, we use the conversion lemma from zCDP to approximate DP (cf. Proposition 1.3 in [BS16]): $\rho_0$-zCDP implies $\big(\varepsilon = \rho_0 + 2\sqrt{\rho_0\log(1/\delta)},\, \delta\big)$-DP for all δ > 0. In other words, to achieve a given (ε, δ)-DP, it suffices to achieve $\rho_0$-zCDP with $\rho_0 = f(\varepsilon, \delta) := \big(\sqrt{\log(1/\delta) + \varepsilon} - \sqrt{\log(1/\delta)}\big)^2$. In our case, we have $\rho_0 = (1 + \log K)\rho = (1 + \log K)\frac{L^2}{2\sigma_0^2}$, and hence $\sigma_0^2 = \frac{(1+\log K)L^2}{2\rho_0} = \frac{(1+\log K)L^2}{2f(\varepsilon,\delta)}$. To simplify, one can lower bound $f(\varepsilon, \delta)$ by $\frac{\varepsilon^2}{4\log(1/\delta)+4\varepsilon}$ (cf. Remark 15 in [Ste22]). Therefore, to obtain (ε, δ)-DP, it suffices to set $\sigma_0^2 = 2L^2\cdot\frac{(1+\log K)(\log(1/\delta)+\varepsilon)}{\varepsilon^2}$. Note that there are two streams of data in Algorithm 1, and hence it suffices to ensure that each of them is (ε/2, δ/2)-DP. This gives the final noise level $\sigma_0^2 = 8\cdot\frac{(1+\log K)(\log(2/\delta)+\varepsilon)}{\varepsilon^2}$ (note that by the boundedness assumption, L = 1 in our case).
Regret. In order to establish the regret bound, thanks to Lemma B.4, we only need to determine the maximum noise level in the learning process. Recall that $\sigma_0^2 = 8\cdot\frac{(1+\log K)(\log(2/\delta)+\varepsilon)}{\varepsilon^2}$ is the noise level for both streams (i.e., $\gamma^{\mathrm{bias}}$ and $\gamma^{\mathrm{cov}}$). Now, by the construction of the binary tree in P, each prefix sum $\Sigma[1, k]$ involves at most 1 + log(k) tree nodes. Thus, the noise levels in $n_{t,i}$ and $N_{t,i}$ are upper bounded by $(1 + \log K)\sigma_0^2$. As a result, the overall noise level across all M silos is upper bounded by $\sigma^2_{total} = M(1 + \log K)\sigma_0^2$. Finally, setting $\tilde\sigma^2$ in Lemma B.4 to be the noise level $\sigma^2_{total}$ yields the required result.
D.2 Alternative Privacy Protocol for Silo-Level LDP
For silo-level LDP, each local randomizer can simply be the standard tree-based algorithm, i.e., it releases the prefix sum at each communication step k (rather than the p-sum as in Algorithm 2). The analyzer then becomes a simple aggregation. As before, no shuffler is required in this case. This alternative protocol is given by Algorithm 4, which is essentially the main protocol used in [DP20].
It can be seen that both the privacy and regret guarantees under this $P_{alt}$ are the same as in Theorem 6.1. For privacy, the prefix sum is a post-processing of the p-sums; since we have already shown in the proof of Theorem 6.1 that the entire release of p-sums is private, the same holds for the prefix sums. Meanwhile, the total noise level at the server is the same as before. Thus, by Lemma B.4, we obtain the same regret bound.
E Additional Details on Federated LCBs under SDP
In this section, we provide more detailed discussions on SDP and present the proof for Theorem 6.3 (SDP via amplification lemma) and Theorem 6.5 (SDP via vector sum).
First, let us start with some general discussions.

Importance of communicating p-sums. For SDP, it is important to communicate p-sums rather than prefix sums. Communicating noisy p-sums in our privacy protocol P, rather than noisy prefix sums (i.e., the sums from the beginning, as done in [DP20]), plays a key role in achieving optimal regret with shuffling. To see this, note that both approaches can guarantee silo-level LDP. By our new amplification lemma, the privacy guarantee can be amplified by $1/\sqrt{M}$ in ε for each of the K shuffled outputs, where K = T/B is the total number of communication rounds. Now, if the prefix sum is released to the shuffler, then any single data point participates in at most K shuffle mechanisms, which would blow up ε by a factor of $O(\sqrt{K})$ and would eventually lead to a $K^{1/4}$ factor blow-up in regret due to privacy. Similarly, if we apply $P_{Vec}$ to the data points in the prefix sum, then again a single data point can participate in up to K shuffled outputs.

Algorithm 4 $P_{alt}$, an alternative privacy protocol for silo-level LDP
1: Procedure: Local Randomizer R at each agent
2: // Input: stream data (γ_1, . . . , γ_K), ε > 0, δ ∈ (0, 1]
3: for k = 1, . . . , K do
4: Express k in binary form: $k = \sum_j \mathrm{Bin}_j(k)\cdot 2^j$
5: Find the index of the first one $i_k := \min\{j : \mathrm{Bin}_j(k) = 1\}$
6: Compute p-sum $\alpha_{i_k} = \sum_{j<i_k} \alpha_j + \gamma_k$
7: Add noise to the p-sum: $\widehat{\alpha}_{i_k} = \alpha_{i_k} + \mathcal{N}(0, \sigma_0^2 I)$
8: Output private prefix sum $\widehat{s}_k = \sum_{j:\mathrm{Bin}_j(k)=1} \widehat{\alpha}_j$
9: end for
10: end procedure
11: Procedure: Analyzer A at the server
...
13: Output $y = \sum_{i\in[M]} y_i$
14: end procedure
On the other hand, if only noisy p-sums are released for shuffling at each communication round k ∈ [K] (as in our protocol P), or if only the data points in each p-sum are used in $P_{Vec}$ (as in our protocol $P^T_{Vec}$), then, due to the binary-tree structure, each data point participates in at most log K shuffled mechanisms, which leads to only an $O(\sqrt{\log K})$ blow-up of ε, as the check below illustrates; this allows us to achieve the desired $O(\sqrt{MT})$ regret scaling and close the gap present under silo-level LDP.
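The following quick numeric check illustrates the participation bound; the counting rule encodes that the p-sum computed at step j covers the batches $j - 2^{i_j} + 1, \ldots, j$:

```python
# Each batch k's data enters the p-sum computed at step j iff
# j - (j & -j) + 1 <= k <= j; the number of such j is at most 1 + log2(K).
K = 64  # K is a power of two here, so log2(K) = K.bit_length() - 1

def participation(k):
    return sum(1 for j in range(1, K + 1) if j - (j & -j) + 1 <= k <= j)

assert all(participation(k) <= 1 + (K.bit_length() - 1) for k in range(1, K + 1))
print("max participation:", max(participation(k) for k in range(1, K + 1)))  # 7
```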
Remark E.1 (Shuffled tree-based mechanism). Both the protocol P in Algorithm 2, together with our new amplification lemma, and the protocol $P^T_{Vec}$ in Algorithm 5 can be treated as black-box methods that integrate shuffling into the tree-based mechanism while providing formal guarantees for the continual release of sum statistics. Hence, they can be applied to other federated online learning problems beyond contextual bandits.
E.1 Amplification lemma for SDP
We first formally introduce our new amplification lemma, which is the key to our analysis, as mentioned in the main paper.
The motivation for our new amplification result is two-fold: (i) Existing results on privacy amplification via shuffling (e.g., [FMT22; EFMRTT19; CSUZZ19; BBGN19]) are only limited to the standard LDP case, i.e., each local dataset has size n = 1, which is not applicable in our case where each silo runs a DP (rather than LDP) mechanism over a dataset of size n = T ; (ii) Although a recent work [LR21] establishes a general amplification result for the case of n > 1, it introduces a very large value for the final δ that scales linearly with n due to group privacy.
We first present the key intuition behind our new lemma. Essentially, as in [LR21], we follow the nice idea of hiding among the clones introduced in [FMT22]. That is, the output from silo 2 to n can be similar to that of silo 1 by the property of DP (i.e., creating clones). The key difference between n = 1 and n > 1 is that in the latter case, the similarity distance between the output of silo 1 and j (j > 1) will be larger as in this case all n > 1 data points among two silos could be different. To capture this, [LR21] resorts to group privacy for general DP local randomizers. 6 However, group privacy for approximate DP will introduce a large value for δ. Thus, since we know that each local randomizer in our case is the Gaussian mechanism, we can capture the similarity of outputs between silo 1 and j (j > 1) by directly bounding the sensitivity. This helps to avoid the large value for the final δ. Specifically, we have the following result, which can be viewed as a refinement of Theorem D.5 in [LR21] when specified to the Gaussian mechanism. We follow the notations in [LR21] for easy comparison.
Lemma E.2 (Amplification lemma for the Gaussian mechanism). Let $X = (X_1, \cdots, X_N) \in \mathcal{X}^{N\times n}$ be a distributed data set, i.e., N silos each with n data points. Let r ∈ N and let $R^{(i)}_r(Z, \cdot) : \mathcal{X}^n \to \mathcal{Z} := \mathbb{R}^d$ be a Gaussian mechanism satisfying $(\varepsilon^r_0, \delta^r_0)$-DP, $\varepsilon^r_0 \in (0, 1)$ (see footnote 7), for all $Z = Z^{(1:N)}_{(1:r-1)} \in \mathcal{Z}^{(r-1)\times N}$ and $i \in [N]$, where $\mathcal{X}$ is an arbitrary set. Suppose that for all i,
$$\max_{\text{any pair }(X, X')} \big\|R^{(i)}_r(Z, X) - R^{(i)}_r(Z, X')\big\| \le n\cdot\max_{\text{adjacent pair }(X, X')} \big\|R^{(i)}_r(Z, X) - R^{(i)}_r(Z, X')\big\|$$
(see footnote 8). Given $Z = Z^{(1:N)}_{(1:r-1)}$, consider the shuffled algorithm $A^r_s : \mathcal{X}^{n\times N} \times \mathcal{Z}^{(r-1)\times N} \to \mathcal{Z}^N$ that first samples a random permutation π of [N] and then computes $Z_r = (Z^{(1)}_r, \cdots, Z^{(N)}_r)$, where $Z^{(i)}_r := R^{(i)}_r(Z, X_{\pi(i)})$. Then, for any δ ∈ [0, 1] such that $\varepsilon^r_0 \le \frac{1}{n}\ln\big(\frac{N}{16\log(2/\delta)}\big)$, $A^r_s$ is $(\varepsilon^r, \delta^r)$-DP, where
$$\varepsilon^r := \ln\Big(1 + \frac{e^{\varepsilon^r_0}-1}{e^{\varepsilon^r_0}+1}\Big(\frac{8\sqrt{e^{n\varepsilon^r_0}\log(4/\delta)}}{\sqrt{N}} + \frac{8e^{n\varepsilon^r_0}}{N}\Big)\Big), \qquad \delta^r := \frac{e^{\varepsilon^r_0}-1}{e^{\varepsilon^r_0}+1}\delta + N(e^{\varepsilon^r}+1)(1+e^{-\varepsilon^r_0}/2)\delta^r_0.$$
If $\varepsilon^r_0 \le 1/n$, choosing $\delta = Nn\delta^r_0$ yields $\varepsilon^r = O\Big(\frac{\varepsilon^r_0\sqrt{\log(1/(nN\delta^r_0))}}{\sqrt{N}}\Big)$ and $\delta^r = O(N\delta^r_0)$, where $\delta^r_0 \le 1/(Nn)$.
E.2 Vector Sum Protocol for SDP
One limitation of our first scheme for SDP is that the privacy guarantee holds only for very small values of ε. This comes from two factors: first, the standard $1/\sqrt{M}$ amplification result requires the local privacy budget to be close to one; second, the local dataset can be of size n = T, which further reduces the range of valid ε.
In this section, we give the vector sum protocol of [CJMP21] for easy reference. Let us also give a concrete example illustrating how to combine Algorithm 6 with Algorithm 5. Consider a fixed k = 6. Then, for each agent, we have $\alpha_{i_6} = \gamma_5 + \gamma_6$. In the case of summing bias vectors, for agent i ∈ [M], $\gamma_5 = \sum_{t=4B+1}^{5B} x_{t,i}y_{t,i}$ and $\gamma_6 = \sum_{t=5B+1}^{6B} x_{t,i}y_{t,i}$. Then, $D_6$ consists of 2B data points, each of which is a single bias vector. Now, $R_{vec}$ and $A_{vec}$ (together with the shuffler) work together to compute the noisy sum of 2B·M data points. In particular, denoting the whole process by $P_{vec}$, we have $\widehat{\alpha}_{i_6} = P_{vec}(D^M_6)$, where $D^M_6$ is the data set consisting of n = 2B·M data points, each of them a single bias vector.
6 This is because [LR21] mainly focuses on the lower bound, where one needs to be general enough to handle any mechanism. 7 Note that the standard Gaussian mechanism only applies to the regime ε < 1; in our case, $\varepsilon^r_0$ is often less than 1. The Gaussian mechanism also works in the regime ε > 1, in which case $\sigma^2 \approx 1/\varepsilon$ rather than $1/\varepsilon^2$; with minor adjustments to the final $\varepsilon^r$, our proof extends. 8 This is w.l.o.g.; one can easily generalize it to any upper bound that is a function of n. 9 In our application, each data point is a bias vector or a covariance matrix. See Appendix E.2 for a concrete example.
Algorithm 5 $P^T_{Vec}$, another privacy protocol used in Algorithm 1
1: Procedure: Local Randomizer R at each agent
2: // Input: stream data (γ_1, . . . , γ_K), privacy budgets ε > 0, δ ∈ (0, 1]
3: for k = 1, . . . , K do
4: Express k in binary form: $k = \sum_j \mathrm{Bin}_j(k)\cdot 2^j$
5: Find the index of the first one $i_k = \min\{j : \mathrm{Bin}_j(k) = 1\}$
6: Let $D_k$ be the set of all data points (see footnote 9) that contribute to $\alpha_{i_k} = \sum_{j<i_k} \alpha_j + \gamma_k$
7: Output $y_k = R_{Vec}(D_k)$ // apply $R_{Vec}$ in Algorithm 6 to each data point
8: end for
9: end procedure
10: Procedure: Analyzer A at the server
...
14: Output: $\widehat{s}_k = \sum_{j:\mathrm{Bin}_j(k)=1} \widehat{\alpha}_j$
15: end for
Next, we present more details on the implementation, i.e., the parameter choices of g, b, p. Consider k = 6 again as an example. In this case, the total number of data points that participate in $P_{vec}$ is n = 2B·M. Then, according to the proof of Theorem C.1 in [CZ22b], we have
$$g = \max\{2\sqrt{n}, d, 4\}, \qquad b = 24\cdot 10^4\cdot g^2\cdot\frac{\log^2\big(\frac{4(d^2+1)}{\delta}\big)}{\varepsilon^2 n}, \qquad p = 1/4.$$

Algorithm 6 $P_{vec}$, a shuffle protocol for vector summation [CJMP21]
1: Input: Database of d-dimensional vectors $X = (x_1, \cdots, x_n)$; privacy parameters ε, δ; L.
...
Run the analyzer on coordinate j's messages: $z_j \leftarrow A_{1D}(y_j)$

Proof of Theorem 6.5. Privacy. One can show that for any $\varepsilon \in \big(0, 30\sqrt{2\kappa\log(2/\delta)}\big)$ and δ ∈ (0, 1), there exist parameters for $P_{Vec}$ such that the entire calculation of noisy p-sums is (ε, δ)-SDP. Since we have two streams of data (bias and covariance), we conclude that for any $\varepsilon \in \big(0, 60\sqrt{2\kappa\log(2/\delta)}\big)$ and δ ∈ (0, 1), there exist parameters for $P_{Vec}$ such that Algorithm 1 with $P^T_{Vec}$ satisfies (ε, δ)-SDP.
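For illustration, the small computation below instantiates these parameter choices for the k = 6 example with n = 2BM; the expression for b follows our reading of the display above and should be checked against the proof of Theorem C.1 in [CZ22b] before any serious use:

```python
# Hypothetical instantiation of the P_vec parameters (g, b, p) for n = 2BM.
import math

def pvec_params(n, d, eps, delta):
    g = max(math.ceil(2 * math.sqrt(n)), d, 4)
    b = 24e4 * g**2 * math.log(4 * (d**2 + 1) / delta) ** 2 / (eps**2 * n)
    return g, math.ceil(b), 0.25

B, M, d = 25, 10, 5
n = 2 * B * M
print(pvec_params(n, d, eps=1.0, delta=1e-4))
```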
Regret. By the same analysis as in the proof of Theorem 3.5 in [CZ22b], the noise injected in each calculation of a noisy synchronized p-sum is sub-Gaussian with variance at most $\hat\sigma^2 = O\Big(\frac{\log^2(d^2/\delta_0)}{\varepsilon_0^2}\Big) = O\Big(\frac{\kappa\log(1/\delta)\log^2(d^2\kappa/\delta)}{\varepsilon^2}\Big)$. Now, by the binary tree structure, each prefix sum involves at most κ p-sums. Hence, the overall noise level is upper bounded by $\sigma^2_{total} = \kappa\hat\sigma^2$. Finally, setting $\tilde\sigma^2$ in Lemma B.4 to be the noise level $\sigma^2_{total}$ yields the required result.

Now, we provide a proof of the amplification lemma (Lemma E.2) for completeness. We follow the same idea as in [FMT22] and [LR21]. For easy comparison, we use the same notation as in [LR21] and highlight the key differences.
Proof of Lemma E.2. Let $X^0, X^1 \in \mathcal{X}^{n \times N}$ be adjacent distributed datasets (i.e., $\sum_{i=1}^{N}\sum_{j=1}^{n} \mathbb{1}\{x_{i,j} \neq x'_{i,j}\} = 1$). Assume WLOG that $X^0 = (X_1^0, X_2, \cdots, X_N)$ and $X^1 = (X_1^1, X_2, \cdots, X_N)$, where $X_1^0 = (x_{1,0}, x_{1,2}, \cdots, x_{1,n}) \neq (x_{1,1}, x_{1,2}, \cdots, x_{1,n}) = X_1^1$. We can also assume WLOG that $X_j \notin \{X_1^0, X_1^1\}$ for all $j \in \{2, \cdots, N\}$ by redefining $X$ and $R$ if necessary. Fix $i \in [N]$, $r \in [R]$, $Z = Z_{1:r-1} = Z^{(1:N)}_{1:r-1} \in \mathcal{Z}^{(r-1) \times N}$; denote $R(X) := R_r^{(i)}(Z, X)$ for $X \in \mathcal{X}^n$, and $A_s(X) := A_s^r(Z_{1:r-1}, X)$. Draw $\pi$ uniformly from the set of permutations of $[N]$. Now, since $R$ is $(\varepsilon_0^r, \delta_0^r)$-DP, $R(X_1^1) \simeq_{(\varepsilon_0^r, \delta_0^r)} R(X_1^0)$, so by [LR21, Lemma D.12], there exists a local randomizer $R'$ such that
$$R'(X_1^1) \simeq_{(\varepsilon_0^r, 0)} R(X_1^0) \quad \text{and} \quad TV(R'(X_1^1), R(X_1^1)) \le \delta_0^r.$$
Hence, by [LR21, Lemma D.8], there exist distributions $U(X_1^0)$ and $U(X_1^1)$ such that
$$R(X_1^0) = \frac{e^{\varepsilon_0^r}}{e^{\varepsilon_0^r}+1}\,U(X_1^0) + \frac{1}{e^{\varepsilon_0^r}+1}\,U(X_1^1) \tag{11}$$
and
$$R'(X_1^1) = \frac{1}{e^{\varepsilon_0^r}+1}\,U(X_1^0) + \frac{e^{\varepsilon_0^r}}{e^{\varepsilon_0^r}+1}\,U(X_1^1). \tag{12}$$
Here, we diverge from the proof in [LR21]. We denote $\tilde{\varepsilon}_0 := n\varepsilon_0^r$ and $\tilde{\delta}_0 := \delta_0^r$. Then, by the assumption on $R(X)$, for any $X$, we have $R(X) \simeq_{(\tilde{\varepsilon}_0, \tilde{\delta}_0)} R(X_1^0)$ and $R(X) \simeq_{(\tilde{\varepsilon}_0, \tilde{\delta}_0)} R(X_1^1)$. This is because, by the assumption, when the dataset changes from any $X$ to $X_1^0$ (or $X_1^1$), the total change in terms of $\ell_2$ norm can be $n$ times that under an adjacent pair. Thus, one has to scale $\varepsilon_0^r$ by $n$ while keeping the same $\delta_0^r$. Now, we resume the same idea as in [LR21]. By convexity of the hockey-stick divergence and the above result, we have $R(X) \simeq_{(\tilde{\varepsilon}_0, \tilde{\delta}_0)} \frac{1}{2}(R(X_1^0) + R(X_1^1)) := \rho$ for all $X \in \mathcal{X}^n$. That is, $R$ is $(\tilde{\varepsilon}_0, \tilde{\delta}_0)$ deletion group DP for groups of size $n$ with reference distribution $\rho$. Thus, by [LR21, Lemma D.11], there exists a local randomizer $R''$ such that $R''(X)$ and $\rho$ are $(\tilde{\varepsilon}_0, 0)$ indistinguishable and $TV(R''(X), R(X)) \le \tilde{\delta}_0$ for all $X$. Then, by the definition of $(\tilde{\varepsilon}_0, 0)$ indistinguishability, for all $X$ there exists a "left-over" distribution $LO(X)$ such that
$$R''(X) = \frac{1}{e^{\tilde{\varepsilon}_0}}\rho + (1 - 1/e^{\tilde{\varepsilon}_0})LO(X) = \frac{1}{2e^{\tilde{\varepsilon}_0}}\big(R(X_1^0) + R(X_1^1)\big) + (1 - 1/e^{\tilde{\varepsilon}_0})LO(X).$$
Now, define a randomizer $L$ by $L(X_1^0) := R(X_1^0)$, $L(X_1^1) := R'(X_1^1)$, and
$$L(X) := \frac{1}{2e^{\tilde{\varepsilon}_0}}R(X_1^0) + \frac{1}{2e^{\tilde{\varepsilon}_0}}R'(X_1^1) + (1 - 1/e^{\tilde{\varepsilon}_0})LO(X) = \frac{1}{2e^{\tilde{\varepsilon}_0}}U(X_1^0) + \frac{1}{2e^{\tilde{\varepsilon}_0}}U(X_1^1) + (1 - 1/e^{\tilde{\varepsilon}_0})LO(X) \tag{13}$$
for all $X \in \mathcal{X}^n \setminus \{X_1^0, X_1^1\}$. (The equality follows from (11) and (12).) Note that $TV(R(X_1^0), L(X_1^0)) = 0$, $TV(R(X_1^1), L(X_1^1)) \le \delta_0^r$, and for all $X \in \mathcal{X}^n \setminus \{X_1^0, X_1^1\}$,
$$TV(R(X), L(X)) \le TV(R(X), R''(X)) + TV(R''(X), L(X)) \le \tilde{\delta}_0 + \frac{1}{2e^{\tilde{\varepsilon}_0}}TV(R'(X_1^1), R(X_1^1)) = \Big(1 + \frac{1}{2e^{n\varepsilon_0^r}}\Big)\delta_0^r.$$
Keeping $r$ fixed (omitting $r$ superscripts everywhere), for any $i \in [N]$ and $Z := Z_{1:r-1} \in \mathcal{Z}^{(r-1) \times N}$, let $L^{(i)}(Z, \cdot)$, $U^{(i)}(Z, \cdot)$, and $LO^{(i)}(Z, \cdot)$ denote the randomizers resulting from the process described above. Let $A_L : \mathcal{X}^{n \times N} \to \mathcal{Z}^N$ be defined exactly the same way as $A_s^r := A_s$ (same $\pi$) but with the randomizers $R^{(i)}$ replaced by $L^{(i)}$. Since $A_s$ applies each randomizer $R^{(i)}$ exactly once and $R^{(1)}(Z, X_{\pi(1)}), \cdots, R^{(N)}(Z, X_{\pi(N)})$ are independent (conditional on $Z = Z_{1:r-1}$)$^{10}$, we have
$$TV(A_s(X^0), A_L(X^0)) \le N\Big(1 + \frac{1}{2e^{n\varepsilon_0^r}}\Big)\delta_0^r \quad \text{and} \quad TV(A_s(X^1), A_L(X^1)) \le N\Big(1 + \frac{1}{2e^{n\varepsilon_0^r}}\Big)\delta_0^r.$$
Now, we claim that $A_L(X^0)$ and $A_L(X^1)$ are $(\varepsilon^r, \delta)$ indistinguishable for any $\delta \ge 2e^{-Ne^{-n\varepsilon_0^r}/16}$. Observe that this claim implies that $A_s(X^0)$ and $A_s(X^1)$ are $(\varepsilon^r, \delta^r)$ indistinguishable by [LR21, Lemma D.13] (with $P' := A_L(X^0)$, $Q' := A_L(X^1)$, $P := A_s(X^0)$, $Q := A_s(X^1)$). Therefore, it only remains to prove the claim, i.e., to show that $D_{e^{\varepsilon^r}}(A_L(X^0), A_L(X^1)) \le \delta$ for any $\delta \ge 2e^{-Ne^{-n\varepsilon_0^r}/16}$.
Now, define
$$L_U^{(i)}(Z, X) := \begin{cases} U^{(i)}(Z, X_1^0) & \text{if } X = X_1^0, \\ U^{(i)}(Z, X_1^1) & \text{if } X = X_1^1, \\ L^{(i)}(Z, X) & \text{otherwise.} \end{cases}$$
For any inputs $Z, X$, let $A_U(Z, X)$ be defined exactly the same as $A_s(Z, X)$ (same $\pi$) but with the randomizers $R^{(i)}$ replaced by $L_U^{(i)}$. Then by (11) and (12),
$$A_L(X^0) = \frac{e^{\varepsilon_0^r}}{e^{\varepsilon_0^r}+1}A_U(X^0) + \frac{1}{e^{\varepsilon_0^r}+1}A_U(X^1) \quad \text{and} \quad A_L(X^1) = \frac{1}{e^{\varepsilon_0^r}+1}A_U(X^0) + \frac{e^{\varepsilon_0^r}}{e^{\varepsilon_0^r}+1}A_U(X^1). \tag{14}$$
Then by (13), for any $X \in \mathcal{X}^n \setminus \{X_1^0, X_1^1\}$ and any $Z = Z_{1:r-1} \in \mathcal{Z}^{(r-1) \times N}$, we have
$$L_U^{(i)}(Z, X) = \frac{1}{2e^{\tilde{\varepsilon}_0}}L_U^{(i)}(Z, X_1^0) + \frac{1}{2e^{\tilde{\varepsilon}_0}}L_U^{(i)}(Z, X_1^1) + (1 - e^{-\tilde{\varepsilon}_0})LO^{(i)}(Z, X).$$
Hence, [LR21, Lemma D.10] (with $p := e^{-\tilde{\varepsilon}_0} = e^{-n\varepsilon_0^r}$) implies that $A_U(X^0)$ and $A_U(X^1)$ are
$$\left(\log\Big(1 + \frac{8\sqrt{e^{\tilde{\varepsilon}_0}\ln(4/\delta)}}{\sqrt{N}} + \frac{8e^{\tilde{\varepsilon}_0}}{N}\Big),\ \delta\right)$$
indistinguishable for any $\delta \ge 2e^{-Ne^{-n\varepsilon_0^r}/16}$. Here, we also slightly diverge from [LR21]. Instead of using [LR21, Lemma D.14]$^{11}$, we can directly follow the proofs of Lemma 3.5 and Lemma 2.3 in [FMT22] to establish our claim that $A_L(X^0)$ and $A_L(X^1)$ are indistinguishable (hence the final result). Here, we also slightly improve the $\delta$ term compared to [FMT22] by applying amplification via sub-sampling to the $\delta$ term as well. In particular, the key step is to rewrite (14) as follows (with $T := \frac{1}{2}(A_U(X^0) + A_U(X^1))$):
$$A_L(X^0) = \frac{2}{e^{\varepsilon_0^r}+1}T + \frac{e^{\varepsilon_0^r}-1}{e^{\varepsilon_0^r}+1}A_U(X^0) \quad \text{and} \quad A_L(X^1) = \frac{2}{e^{\varepsilon_0^r}+1}T + \frac{e^{\varepsilon_0^r}-1}{e^{\varepsilon_0^r}+1}A_U(X^1). \tag{15}$$
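As a quick symbolic sanity check (ours, written in Python with SymPy), (15) is an exact algebraic rewriting of the mixture form (14):

```python
from sympy import symbols, simplify, exp

eps0, A0, A1 = symbols('eps0 A0 A1')
e = exp(eps0)
T = (A0 + A1) / 2

lhs = e / (e + 1) * A0 + 1 / (e + 1) * A1        # mixture form (14)
rhs = 2 / (e + 1) * T + (e - 1) / (e + 1) * A0   # sub-sampling form (15)
print(simplify(lhs - rhs))                        # prints 0
```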
Thus, by the convexity of the hockey-stick divergence and Lemma 2.3 in [FMT22], we have that $A_L(X^0)$ and $A_L(X^1)$ are
$$\left(\log\Big(1 + \frac{e^{\varepsilon_0^r}-1}{e^{\varepsilon_0^r}+1}\Big(\frac{8\sqrt{e^{\tilde{\varepsilon}_0}\ln(4/\delta)}}{\sqrt{N}} + \frac{8e^{\tilde{\varepsilon}_0}}{N}\Big)\Big),\ \frac{e^{\varepsilon_0^r}-1}{e^{\varepsilon_0^r}+1}\,\delta\right)$$
indistinguishable for any $\delta \ge 2e^{-Ne^{-n\varepsilon_0^r}/16}$. As described before, this leads to the result that $A_s(X^0)$ and $A_s(X^1)$ are $(\varepsilon^r, \delta^r)$ indistinguishable by [LR21, Lemma D.13] (original result in Lemma 3.17 of [DR14]), with (noting that $\tilde{\varepsilon}_0 = n\varepsilon_0^r$)
$$\varepsilon^r := \ln\left(1 + \frac{e^{\varepsilon_0^r}-1}{e^{\varepsilon_0^r}+1}\Big(\frac{8\sqrt{e^{n\varepsilon_0^r}\ln(4/\delta)}}{\sqrt{N}} + \frac{8e^{n\varepsilon_0^r}}{N}\Big)\right), \qquad \delta^r := \frac{e^{\varepsilon_0^r}-1}{e^{\varepsilon_0^r}+1}\,\delta + N(e^{\varepsilon^r}+1)(1 + e^{-\varepsilon_0^r}/2)\,\delta_0^r.$$
F Further Discussions
In this section, we provide more details on our privacy notion and algorithm design.
F.1 Silo-level LDP/SDP vs. Other Privacy Notions
In this section, we compare our silo-level LDP and SDP with standard privacy notions for single-agent LCBs, including the local, central, and shuffle models of DP, respectively.

Silo-level LDP vs. single-agent local DP. Under standard LDP for single-agent LCBs [ZCHLW20; DJW13; ZT21], each user only trusts herself and hence privatizes her response before sending it to the agent. In contrast, under silo-level LDP, each local user trusts the local silo (agent), which aligns with the practical situation of cross-silo FL, e.g., patients often trust their local hospitals. In such cases, standard LDP becomes unnecessarily stringent, hindering performance/regret and making it less appealing for cross-silo federated LCBs.
Silo-level LDP vs. single-agent central DP. The comparison with standard central DP for single-agent LCBs (e.g., [SS18]) is delicate. We first note that under both notions, users trust the agent and the privacy burden lies with the agent. Under standard central DP, the agent uses private statistics up to round $t$ to choose the action at each round $t$, which ensures that any other user $t' \neq t$ cannot infer too much about user $t$'s information by observing the actions on rounds $t' \neq t$ (i.e., joint differential privacy (JDP) [KPRU14])$^{12}$. On the other hand, silo-level LDP does not necessarily require each agent (silo) to use private statistics to recommend actions to users within the silo. Instead, it only requires the agent to privatize its sent messages (both schedule and content). Thus, silo-level LDP may not protect a user $t$ from the colluding of all other users within the same silo. In other words, the adversary model for silo-level LDP is that the adversary could be any other silo or the central server, rather than other users within the same silo. Note that the same adversary model is assumed in a similar notion for federated supervised learning (e.g., inter-silo record-level differential privacy (ISRL-DP) in [LR21]). In fact, with a minor tweak of our Algorithm 1, one can achieve a slightly stronger notion of privacy than silo-level LDP, in that it can now protect against both other silos/the server and users within the same silo. The key idea is exactly that each agent will only use private statistics to recommend actions; see Appendix F.2.
Silo-level LDP vs. Federated DP in [DP20]. In [DP20], the authors define the so-called notion of federated DP for federated LCBs, which essentially means that "the action chosen by any agent must be sufficiently impervious (in probability) to any single data from any other agent". This privacy guarantee is directly implied by our silo-level LDP. In fact, in order to show such a privacy guarantee, [DP20] basically tried to show that the outgoing communication is private, which is the idea of silo-level LDP. However, as mentioned in the main paper, [DP20] only privatizes the communicated data and fails to privatize the communication schedule, which leads to privacy leakage. Moreover, as already mentioned in Remark A.1, Fed-DP fails to protect a user's privacy even under a reasonable adversary model. Thus, we believe that silo-level LDP is a better option for federated LCBs.
SDP vs. single-agent shuffle DP. Under single-agent shuffle DP [CZ22b; TKMS23], the shuffler takes as input a batch of users' data (i.e., from $t_1$ to $t_2$), which enables it to achieve a regret of $O(T^{3/5})$ (vs. $O(T^{3/4})$ regret under the local model and $O(\sqrt{T})$ regret under the central model). In contrast, under our SDP, the shuffler takes as input the DP outputs from all $M$ agents. Roughly speaking, single-agent shuffle DP aims to amplify the privacy dependence on $T$, while our SDP amplifies privacy over $M$. Due to this, single-agent shuffle DP can directly apply a standard amplification lemma (e.g., [FMT22]) or a shuffle protocol (e.g., [CJMP21]) that works well with an LDP mechanism at each user (i.e., the size of the local dataset is $n = 1$). In contrast, in order to realize amplification over $M$ agents' DP outputs, we have to carefully modify the standard amplification lemma to handle the fact that each local mechanism now operates on $n > 1$ data points, which is one of the key motivations for our new amplification lemma.
F.2 A Simple Tweak of Algorithm 1 for a Stronger Privacy Guarantee
As discussed in the last subsection, the adversary model behind silo-level LDP only includes other silos and the central server, i.e., excluding adversary users within the same silo. Thus, for silo-level LDP, Algorithm 1 can use non-private data to recommend actions within a batch (e.g., V t,i includes non-private recent local bias vectors and covariance matrices). If one is also interested in protecting against adversary users within the same silo, a simple tweak of Algorithm 1 suffices.
As shown in Algorithm 8, the only difference is that a lazy update of $\theta_{t,i}$ is adopted (line 5), i.e., it is computed using private data only, without any dependence on new non-private local data. In fact, the same regret bound as in Theorem 6.1 can be achieved for this new algorithm (though the empirical performance could be worse due to the lazy update). In the following, we highlight the key changes in the regret analysis. It basically follows the six steps in the proof of Lemma B.2. One can now define a mapping $\kappa(t)$ that maps any $t \in [T]$ to the most recent communication round. That is, for any $t \in [t_{k-1}, t_k]$, where $t_k = kB$ is the $k$-th communication round, we have $\kappa(t) = t_{k-1}$. Then, one can replace all $t$ in $V_{t,i}$ and $G_{t,i}$ by $\kappa(t)$. The main difference that needs a check is Step 4, when bounding the regret in good epochs. The key is again to establish a similar form to (5). To this end, note that for all $t \in [t_{k-1}, t_k]$, $V_k \succeq V_{t,i}$ and $G_{\kappa(t),i} + \lambda_{\min} I = V_{k-1}$, which enables us to obtain $\|x_{t,i}\|_{(G_{\kappa(t),i} + \lambda_{\min} I)^{-1}} \le \sqrt{2}\,\|x_{t,i}\|_{V_{t,i}^{-1}}$. Following the same analysis yields the desired regret bound.
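As a small illustration of the lazy schedule, the following Python sketch computes the mapping $\kappa(t)$ defined above and a LinUCB-style index from synchronized statistics only. The function names are ours, and the index formula assumes the standard LinUCB rule with confidence radius $\beta$; this is a sketch, not the paper's implementation.

```python
import numpy as np

def kappa(t, B):
    """Most recent communication round, i.e., kappa(t) = t_{k-1}
    for t in (t_{k-1}, t_k] with t_k = k * B."""
    return ((t - 1) // B) * B if t >= 1 else 0

def lazy_linucb_index(phi, theta, V_inv, beta):
    """LinUCB index computed only from the private synchronized statistics
    (theta and V_inv are refreshed only at rounds kappa(t))."""
    return float(phi @ theta + beta * np.sqrt(phi @ V_inv @ phi))

print([kappa(t, B=5) for t in range(1, 12)])  # [0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 10]
```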
Assumption 5.1 (Boundedness [SS18; CZ22b]). The rewards are bounded, i.e., $y_{t,i} \in [0, 1]$ for all $t \in [T]$ and $i \in [M]$. Moreover, the parameter vector and the context-action features have bounded norms, i.e., $\|\theta^*\|_2 \le 1$ and $\sup_{c,a} \|\phi_i(c, a)\|_2 \le 1$ for all $i \in [M]$.

$P$, the privacy protocol for silo-level LDP used in Algorithm 1 (recoverable lines):
4: Express $k$ in binary form: $k = \sum_j \mathrm{Bin}_j(k) \cdot 2^j$
5: Find index of first one: $i_k = \min\{j : \mathrm{Bin}_j(k) = 1\}$
6: Compute p-sum $\alpha_{i_k} = \sum_{j < i_k} \alpha_j + \gamma_k$
7: Output $\tilde{\alpha}_k = \alpha_{i_k} + N(0, \sigma^2 I)$
9: Procedure: Analyzer A at server
10: // Input: data from S: $(\tilde{\alpha}_{k,1}, \ldots, \tilde{\alpha}_{k,M})$, $k \in [K]$
11: for $k = 1, \ldots, K$ do
12:   Express $k$ in binary and find index of first one $i_k$
13:   Add noisy p-sums of all agents: $\tilde{\alpha}_{i_k} = \sum_{i=1}^{M} \tilde{\alpha}_{k,i}$

Figure 1: Illustration of the tree-based algorithm (leaves $\gamma_1, \ldots, \gamma_K$; internal nodes are p-sums).

Figure 2: Comparison of time-average group regret for LDP-FedLinUCB (silo-level LDP), SDP-FedLinUCB (shuffle model) and …

Future work. One immediate future work is to reduce communication costs by overcoming the challenges in private adaptive communication; see Appendix C. Another direction could be to consider similar cross-silo federated learning for private RL, especially with linear function approximation (cf. [Zho22]).

Lemma B.4 (Formal statement of Lemma 6.7). Let Assumptions B.3 and 5.1 hold. Fix time horizon $T \in \mathbb{N}$, batch size $B \in [T]$, and confidence level $\alpha \in (0, 1]$. Set $\lambda = \Theta(\max\{1, \sigma(\sqrt{d} + \log(T/(B\alpha)))\})$ and $\beta_{t,i} = \sqrt{2\log\frac{2}{\alpha} + d\log\big(1 + \frac{Mt}{d\lambda}\big)} + \sqrt{\lambda}$ for all $i \in [M]$. Then, Algorithm 1 achieves group regret
$$\mathrm{Reg}_M(T) = O\big(dMB\log T + d\sqrt{MT}\log(MT/\alpha)\big) + O\big(\sqrt{\sigma MT}\log(MT)\, d^{3/4}\log^{1/4}(T/(B\alpha))\big).$$

… (i.e., the non-private Gram matrix) under event $\mathcal{E}$; (b) holds by the boundedness of $\theta^*$ and event $\mathcal{E}$. For the remaining first term, we can use the self-normalized inequality (cf. Theorem 1 in [APS11]) with a proper filtration$^4$. In particular, we have for any $\alpha \in (0, 1]$, with probability at least $1 - \alpha$, for all $t \in [T]$, … $\sqrt{\frac{\det(G_{t,i} + \lambda_{\min} I)}{\det(\lambda_{\min} I)}}$. Now, using the trace-determinant lemma (cf. Lemma 10 in [APS11]) and the boundedness condition on $\|x_{s,j}\|$ for all $s \in [T]$ and $j \in [M]$, we have $\det(G_{t,i} + \lambda_{\min} I) \le \big(\lambda_{\min} + \frac{Mt}{d}\big)^d$.

Algorithm 3 (restatement of the algorithm in [DP20]; recoverable lines):
1: Parameters: Adaptive communication parameter $D$, regularization $\lambda > 0$, confidence radii $\{\beta_{t,i}\}_{t\in[T],i\in[M]}$, feature map $\phi_i : C_i \times K_i \to \mathbb{R}^d$, privacy budgets $\varepsilon > 0$, $\delta \in [0, 1]$.
2: Initialize: For all $i \in [M]$, $W_i = 0$, $U_i = 0$, PRIVATIZER with $\varepsilon, \delta$, $W_{syn} = 0$, $U_{syn} = 0$
3: for $t = 1, \ldots, T$ do
4:   for each agent $i = 1, \ldots, M$ do
…

… $\sqrt{K}$) (by advanced composition [DR14]). This …

Algorithm 4 $P_{alt}$, an alternative privacy protocol for silo-level LDP (recoverable lines):
1: Procedure: Local Randomizer R
2: // Input: stream data $\gamma = (\gamma_i)_{i \in [K]}$; privacy parameters $\varepsilon, \delta$; Output: …
12: Procedure: Analyzer A // Input: a collection of $M$ data points, $y = \{y_i\}_{i \in [M]}$; Output: aggregated sum
13: …

Figure 3: Comparison of time-average group regret for FedLinUCB (non-private) and LDP-FedLinUCB (i.e., under silo-level LDP) on (a, b) a synthetic Gaussian bandit instance and (c, d) a bandit instance generated from the MSLR-WEB10K Learning to Rank dataset.

Algorithm 1 (main protocol):
1: Parameters: Batch size $B \in \mathbb{N}$, regularization $\lambda > 0$, confidence radii $\{\beta_{t,i}\}_{t\in[T],i\in[M]}$, feature map $\phi_i : C_i \times K_i \to \mathbb{R}^d$, privacy protocol $P = (R, S, A)$
2: Initialize: $W_i = 0$, $U_i = 0$ for all agents $i \in [M]$; $W_{syn} = 0$, $U_{syn} = 0$
3: for $t = 1, \ldots, T$ do
4:   for each agent $i = 1, \ldots, M$ do
5:     Compute $V_{t,i}$ and $\theta_{t,i}$ from $W_{syn}, U_{syn}$ and the recent local statistics $W_i, U_i$ (cf. Appendix F.2)
6:     Play arm $a_{t,i}$
7:     Observe reward $y_{t,i}$
8:   end for
9:   if $t \bmod B = 0$ then
10:    // Local randomizer R at all agents $i \in [M]$
11:    Send randomized messages $R^{bias}_{t,i} = R^{bias}(U_i)$ and $R^{cov}_{t,i} = R^{cov}(W_i)$ to S
12:    // Third party S
13:    Shuffle (or not) all messages $S^{bias}_t = S(\{R^{bias}_{t,i}\}_{i\in[M]})$ and $S^{cov}_t = S(\{R^{cov}_{t,i}\}_{i\in[M]})$
14:    // Analyzer A at the server
15:    Compute private synchronized statistics $U_{syn} = A^{bias}(S^{bias}_t)$ and $W_{syn} = A^{cov}(S^{cov}_t)$
16:    // All agents $i \in [M]$
17:    Receive $W_{syn}$ and $U_{syn}$ from the server and reset $W_i = 0$, $U_i = 0$
18:  end if
19: end for
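To make the batched communication pattern of Algorithm 1 concrete, here is a minimal Python sketch of the randomize-shuffle-analyze synchronization step (lines 9-18). All names (Agent, sync_step) and the toy identity randomizer/shuffler are ours for illustration; this is a schematic under those assumptions, not the paper's implementation.

```python
import numpy as np

class Agent:
    def __init__(self, d):
        self.U_local = np.zeros(d)        # running sum of x_t * y_t since last sync
        self.W_local = np.zeros((d, d))   # running sum of x_t x_t^T since last sync

def sync_step(t, B, agents, randomize, shuffle, analyze, server):
    """Every B rounds: each agent randomizes its batch statistics, a third
    party shuffles (or simply forwards) them, the server's analyzer produces
    the private synchronized statistics, and agents reset local buffers."""
    if t % B != 0:
        return
    s_bias = shuffle([randomize(a.U_local) for a in agents])  # R^bias, then S
    s_cov = shuffle([randomize(a.W_local) for a in agents])   # R^cov, then S
    server["U_syn"] = analyze(s_bias)                          # analyzer A
    server["W_syn"] = analyze(s_cov)
    for a in agents:                                           # all agents resync
        a.U_local[:] = 0.0
        a.W_local[:] = 0.0

# toy instantiation with an identity randomizer/shuffler and a summing analyzer
agents = [Agent(d=3) for _ in range(4)]
server = {"U_syn": None, "W_syn": None}
for a in agents:
    a.U_local += 1.0
sync_step(t=10, B=5, agents=agents, randomize=lambda m: m.copy(),
          shuffle=lambda ms: ms, analyze=lambda ms: sum(ms), server=server)
print(server["U_syn"])  # sum of the four agents' bias statistics: [4. 4. 4.]
```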
[DR14] C. Dwork and A. Roth. "The algorithmic foundations of differential privacy". In: Found. Trends Theor. Comput. Sci. 9.3-4 (2014), pp. 211-407.
[Dub21]
A. Dubey. "No-regret algorithms for private gaussian process bandit optimization".
In: International Conference on Artificial Intelligence and Statistics. PMLR. 2021,
pp. 2062-2070.
[EFMRTT19] Ú. Erlingsson, V. Feldman, I. Mironov, A. Raghunathan, K. Talwar, and A. Thakurta.
"Amplification by shuffling: From local to central differential privacy via anonymity".
In: Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algo-
rithms. SIAM. 2019, pp. 2468-2479.
[FMT22] V. Feldman, A. McMillan, and K. Talwar. "Hiding among the clones: A simple and nearly optimal analysis of privacy amplification by shuffling". In: 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS). IEEE. 2022, pp. 954-964.
[GCPP22]
E. Garcelon, K. Chaudhuri, V. Perchet, and M. Pirotta. "Privacy Amplification
via Shuffling for Linear Contextual Bandits". In: International Conference on
Algorithmic Learning Theory. PMLR. 2022, pp. 381-407.
[HGFD22]
O. A. Hanna, A. M. Girgis, C. Fragouli, and S. Diggavi. "Differentially Private
Stochastic Linear Bandits:(Almost) for Free". In: arXiv preprint arXiv:2207.03445
(2022).
[HWMG22]
J. He, T. Wang, Y. Min, and Q. Gu. "A Simple and Provably Efficient Algo-
rithm for Asynchronous Federated Contextual Linear Bandits". In: arXiv preprint
arXiv:2207.03106 (2022).
[HWYS21]
R. Huang, W. Wu, J. Yang, and C. Shen. "Federated linear contextual bandits". In:
Advances in Neural Information Processing Systems 34 (2021), pp. 27057-27068.
[KMABBBBCCC+21] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji,
K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, et al. "Advances and open
problems in federated learning". In: Foundations and Trends® in Machine Learning
14.1-2 (2021), pp. 1-210.
[KPRU14]
M. Kearns, M. Pai, A. Roth, and J. Ullman. "Mechanism design in large games:
Incentives and privacy". In: Proceedings of the 5th conference on Innovations in
theoretical computer science. 2014, pp. 403-410.
[LGR22]
A. Lowy, A. Ghafelebashi, and M. Razaviyayn. "Private Non-Convex Federated
Learning Without a Trusted Server". In: arXiv preprint arXiv:2203.06735 (2022).
[Zho22]
X. Zhou. "Differentially Private Reinforcement Learning with Linear Function
Approximation". In: Proc. ACM Meas. Anal. Comput. Syst. 6.1 (2022).
[ZT21]
X. Zhou and J. Tan. "Local Differential Privacy for Bayesian Optimization". In: Pro-
ceedings of the AAAI Conference on Artificial Intelligence 35.12 (2021), pp. 11152-
11159.
A More Discussions on Gaps in SOTA

In this section, we provide more details on the current gaps in [DP20], especially on privacy violation and communication cost. It turns out that both gaps come from the fact that an adaptive communication schedule is employed in [DP20].
This is indeed a notion of item-level DP. It appears under different names in prior work, e.g., silo-specific sample-level DP[LHWS22], inter-silo record-level DP[LR21]. A comparison of this notion of privacy with standard local DP, central DP and shuffle DP for single-agent LCBs is presented in Appendix F.1.
In fact, given the specific form of $f$ in [DP20], silo $j$ gets to know that $\log\det(I + \lambda_{\min}^{-1} x_{1,i} x_{1,i}^\top) > D$, where $\lambda_{\min} > 0$ is a regularizer (which depends on privacy budgets $\varepsilon, \delta$) and $D > 0$ is some suitable threshold (see Appendix A for the specific form of $f$). This in turn implies that $\|x_{1,i}\| > C$, where $C$ is some constant. Since $x_{1,i}$ contains the context information of the user, this information could immediately reveal that some specific features in the context vector are active, which can be inferred by the adversary silo (e.g., silo $j$).
There is some minor issue in the form of f in [DP20]. The correct one is given by our restatement of their Algorithm 1, see line 9 in Algorithm 3.
In particular, by the i.i.d. noise assumption across time and agents, one can simply construct the filtration sequentially across agents and rounds, which enlarges the single-agent filtration by a factor of $M$.
There is another subtle but important difference, which lies in the construction of the filtration that is required to apply the standard self-normalized inequality to establish the concentration result. We believe that one cannot directly use the standard filtration (e.g., [APS11]) in the adaptive case, and hence more care is indeed required.
10. This follows from the assumption that $R^{(i)}(Z_{1:r-1}, X)$ is conditionally independent of $X'$ given $Z_{1:r-1}$ for all $Z_{1:r-1}$ and $X \neq X'$.
11. We think that its restatement of [FMT22, Lemma 2.3] is not correct (which can be easily fixed, though).
12. As shown in [SS18], the JDP relaxation is necessary for achieving sub-linear regret for LCBs under the central model.
13. All existing non-private federated LCB algorithms (e.g., Wang et al. [WHCW20]) adopt adaptive communication. We refrain from comparing with those to maintain consistency in presentation.
Algorithm 7 $P_{1D}$, a shuffle protocol for summing scalars [CJMP21]
1: Input: Scalar database $X = (x_1, \cdots, x_n) \in [0, L]^n$; $g, b \in \mathbb{N}$; $p \in (0, \frac{1}{2})$.
2: procedure: Local Randomizer $R_{1D}(x_i)$
3:   $\bar{x}_i \leftarrow \lfloor x_i g / L \rfloor$
4:   Sample rounding value $\eta_1 \sim \mathrm{Ber}(x_i g/L - \bar{x}_i)$
5:   Set $\hat{x}_i \leftarrow \bar{x}_i + \eta_1$
6:   Sample privacy noise value $\eta_2 \sim \mathrm{Bin}(b, p)$
7:   Report $y_i \in \{0, 1\}^{g+b}$ containing $\hat{x}_i + \eta_2$ copies of 1 and $g + b - (\hat{x}_i + \eta_2)$ copies of 0
8: end procedure
9: procedure: Analyzer $A_{1D}(S(y_1, \ldots, y_n))$
10: …

E.3 Proofs

First, we present the proof of Theorem 6.3.

Proof of Theorem 6.3. Privacy. In this proof, we directly work on approximate DP. By the boundedness assumption and the Gaussian mechanism, we have that with $\sigma_0^2 = 2L^2\log(1.25/\bar{\delta}_0)/\bar{\varepsilon}_0^2$, each output of $R$ is $(\bar{\varepsilon}_0, \bar{\delta}_0)$-DP. Here we note that in our case, $N = M$ and $n = T$, where $n = T$ follows from the fact that there exists an $\alpha_i$ in the tree that corresponds to the sum of $T$ data points. Moreover, since the same mechanism is run at all silos, shuffling-then-privatizing is the same as first privatizing and then shuffling the outputs. Next, we apply the advanced composition theorem (cf. Theorem 3.20 in [DR14]). In particular, by the binary tree structure, each data point is involved only $\kappa := 1 + \log(K)$ times in the output of $R$. Thus, to achieve $(\varepsilon, \delta)$-DP, it suffices to have $\bar{\varepsilon} = \frac{\varepsilon}{2\sqrt{2\kappa\log(2/\delta)}}$ and $\bar{\delta} = \frac{\delta}{2\kappa}$. Using all these equations, we can solve for $\bar{\varepsilon}_0 = C_1 \cdot \ldots$ and $\bar{\delta}_0 = C_2 \cdot \frac{\delta}{M\kappa}$, for some constants $C_1 > 0$ and $C_2 > 0$. To satisfy the conditions on $\bar{\varepsilon}_0$ and $\bar{\delta}_0$, we have $\varepsilon \le \ldots$ With the choice of $\bar{\varepsilon}_0$ and $\bar{\delta}_0$, we have the noise variance. Thus, we can apply $P$ to the bias and covariance terms (with $L = 1$), respectively.

Regret. Again, we simply resort to our Lemma B.4 for the regret analysis. In particular, we only need to determine the maximum noise level in the learning process. Note that $\sigma_0^2 = O\big(\frac{2L^2\kappa\log(1/\delta)\log(\kappa/(\delta T))\log(M\kappa/\delta)}{\varepsilon^2 M}\big)$ is the noise level injected for both the bias and covariance terms. Now, by the construction of the binary tree in $P$, one can see that each prefix sum only involves at most $1 + \log(k)$ tree nodes. As a result, the overall noise level across all $M$ silos is upper bounded by $\sigma^2_{\mathrm{total}} = M\kappa\sigma_0^2$. Finally, setting $\sigma^2$ in Lemma B.4 to be the noise level $\sigma^2_{\mathrm{total}}$ yields the required result.

Now, we prove Theorem 6.5.

Proof of Theorem 6.5. Privacy. For each calculation of the noisy synchronized p-sum, there exist parameters for $P_{Vec}$ such that it satisfies $(\bar{\varepsilon}_0, \bar{\delta}_0)$-SDP, where $\bar{\varepsilon}_0 \in (0, 15]$ and $\bar{\delta}_0 \in (0, 1/2)$ (see Lemma 3.1 in [CJMP21] or Theorem 3.5 in [CZ22b]). Then, by the binary tree structure, each single data point (bias vector or covariance matrix) only participates in at most $\kappa := 1 + \log(K)$ runs of $P_{Vec}$. Thus, to achieve $(\varepsilon, \delta)$-DP, it suffices to have $\bar{\varepsilon}_0 = \frac{\varepsilon}{2\sqrt{2\kappa\log(2/\delta)}}$ and $\bar{\delta}_0 = \frac{\delta}{2\kappa}$ by the advanced composition theorem. Thus, for any $\varepsilon \in (0, 30\sqrt{2\kappa\log(2/\delta)})$ and $\delta \in (0, 1)$, there exist parameters for $P_{Vec}$ such that the entire calculation of noisy p-sums is $(\varepsilon, \delta)$-SDP. Since we have two streams of data (bias and covariance), we finally have that for any $\varepsilon \in (0, 60\sqrt{2\kappa\log(2/\delta)})$ and $\delta \in (0, 1)$, there exist parameters for $P_{Vec}$ such that Algorithm 1 with $P^T_{Vec}$ satisfies $(\varepsilon, \delta)$-SDP.

F.3 Non-unique Users

In the main paper, we assume all users across all silos and $T$ rounds are unique. Here, we briefly discuss how to handle the case of non-unique users.

• The same user appears multiple times in the same silo. One example of this could be one patient visiting the same hospital multiple times. In such cases, one needs to carefully apply group privacy or other techniques (e.g., [CZ22b]) to characterize the privacy loss of these returning users.

• The same user appears multiple times across different silos. One example of this could be one patient who has multiple records across different hospitals.
Then, one needs to use adaptive advanced composition to characterize the privacy loss of these returning users.

G Additional Details on Simulation Results

In Figure 3, we compare the regret performance of LDP-FedLinUCB with FedLinUCB under varying privacy budgets.$^{13}$ In sub-figure (a), we plot results for $\delta = 0.1$ and varying levels of $\varepsilon \in \{0.2, 1, 5\}$ on a synthetic Gaussian bandit instance, while in sub-figure (b), we plot results for $\varepsilon = 5$ and varying levels of $\delta \in \{0.1, 0.01, 0.001\}$. In sub-figure (c), we plot results for $\delta = 0.1$ and varying levels of $\varepsilon \in \{0.2, 1, 5\}$ on a bandit instance generated from MSLR-WEB10K data by training a lasso model on body features ($d = 78$). In all these plots, we observe that the regret of LDP-FedLinUCB decreases and comes closer to that of FedLinUCB as $\varepsilon, \delta$ increase (i.e., as the level of privacy protection decreases), which supports our theoretical results. Here, we don't compare SDP-FedLinUCB (with privacy amplification) since its privacy guarantee holds for $\varepsilon, \delta \ll 1$. Instead, we do so in sub-figure (d) with $\varepsilon = \delta = 0.0001$. Here also, we observe a drop in the regret of SDP-FedLinUCB compared to that of LDP-FedLinUCB.
[AB22] A. Azize and D. Basu. "When Privacy Meets Partial Information: A Refined Analysis of Differentially Private Bandits". In: arXiv preprint arXiv:2209.02570 (2022).
[APS11] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. "Improved algorithms for linear stochastic bandits". In: Advances in Neural Information Processing Systems 24 (2011).
[BBGN19] B. Balle, J. Bell, A. Gascón, and K. Nissim. "The privacy blanket of the shuffle model". In: Annual International Cryptology Conference. Springer. 2019, pp. 638-667.
[BS16] M. Bun and T. Steinke. "Concentrated differential privacy: Simplifications, extensions, and lower bounds". In: Theory of Cryptography Conference. Springer. 2016, pp. 635-658.
[CJMP21] A. Cheu, M. Joseph, J. Mao, and B. Peng. "Shuffle private stochastic convex optimization". In: arXiv preprint arXiv:2106.09805 (2021).
[CSS11] T.-H. H. Chan, E. Shi, and D. Song. "Private and continual release of statistics". In: ACM Transactions on Information and System Security (TISSEC) 14.3 (2011), pp. 1-24.
[CSUZZ19] A. Cheu, A. Smith, J. Ullman, D. Zeber, and M. Zhilyaev. "Distributed differential privacy via shuffling". In: Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer. 2019, pp. 375-403.
[CZ22a] S. R. Chowdhury and X. Zhou. "Distributed Differential Privacy in Multi-Armed Bandits". In: arXiv preprint arXiv:2206.05772 (2022).
[CZ22b] S. R. Chowdhury and X. Zhou. "Shuffle Private Linear Contextual Bandits". In: Proceedings of the 39th International Conference on Machine Learning. PMLR, 2022, pp. 3984-4009.
[DJW13] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. "Local privacy and statistical minimax rates". In: 2013 IEEE 54th Annual Symposium on Foundations of Computer Science. IEEE. 2013, pp. 429-438.
[LHWS22] Z. Liu, S. Hu, Z. S. Wu, and V. Smith. "On Privacy and Personalization in Cross-Silo Federated Learning". In: arXiv preprint arXiv:2206.07902 (2022).
[LR21] A. Lowy and M. Razaviyayn. "Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses". In: arXiv preprint arXiv:2106.09779 (2021).
[LZJ22] F. Li, X. Zhou, and B. Ji. "Differentially Private Linear Bandits with Partial Distributed Feedback". In: arXiv preprint arXiv:2207.05827 (2022).
[LZJ23] F. Li, X. Zhou, and B. Ji. "(Private) Kernelized Bandits with Distributed Biased Feedback". In: Proceedings of the ACM on Measurement and Analysis of Computing Systems 7.1 (2023), pp. 1-47.
[MT15] N. Mishra and A. Thakurta. "(Nearly) optimal differentially private stochastic multi-arm bandits". In: Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence. 2015, pp. 592-601.
[QL13] T. Qin and T. Liu. "Introducing LETOR 4.0 Datasets". In: CoRR abs/1306.2597 (2013). URL: http://arxiv.org/abs/1306.2597.
[RZLS20] W. Ren, X. Zhou, J. Liu, and N. B. Shroff. "Multi-armed bandits with local differential privacy". In: arXiv preprint arXiv:2007.03121 (2020).
[SS18] R. Shariff and O. Sheffet. "Differentially private contextual linear bandits". In: Advances in Neural Information Processing Systems 31 (2018).
[SS19] T. Sajed and O. Sheffet. "An optimal private stochastic-mab algorithm based on optimal private stopping rule". In: International Conference on Machine Learning. PMLR. 2019, pp. 5579-5588.
[Ste22] T. Steinke. "Composition of Differential Privacy & Privacy Amplification by Subsampling". In: arXiv preprint arXiv:2210.00597 (2022).
[TKMS21] J. Tenenbaum, H. Kaplan, Y. Mansour, and U. Stemmer. "Differentially private multi-armed bandits in the shuffle model". In: Advances in Neural Information Processing Systems 34 (2021).
[TKMS23] J. Tenenbaum, H. Kaplan, Y. Mansour, and U. Stemmer. "Concurrent Shuffle Differential Privacy Under Continual Observation". In: arXiv preprint arXiv:2301.12535 (2023).
[Ver18] R. Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Vol. 47. Cambridge University Press, 2018.
[VMDK20] S. Vaswani, A. Mehrabian, A. Durand, and B. Kveton. "Old Dog Learns New Tricks: Randomized UCB for Bandit Problems". In: International Conference on Artificial Intelligence and Statistics. PMLR. 2020, pp. 1988-1998.
[WHCW20] Y. Wang, J. Hu, X. Chen, and L. Wang. "Distributed bandit learning: How much communication is needed to achieve (near) optimal regret". In: ICLR (2020).
[ZCHLW20] K. Zheng, T. Cai, W. Huang, Z. Li, and L. Wang. "Locally differentially private (contextual) bandits learning". In: Advances in Neural Information Processing Systems 33 (2020), pp. 12300-12310.
Algorithm 8 Priv-FedLinUCB-Lazy
1: Parameters: Batch size $B \in \mathbb{N}$, regularization $\lambda > 0$, confidence radii $\{\beta_{t,i}\}_{t\in[T],i\in[M]}$, feature map $\phi_i : C_i \times K_i \to \mathbb{R}^d$, privacy protocol $P = (R, S, A)$
2: Initialize: For all $i \in [M]$, $W_i = 0$, $U_i = 0$, $W_{syn} = 0$, $U_{syn} = 0$
3: for $t = 1, \ldots, T$ do
4:   for each agent $i = 1, \ldots, M$ do
5:     $V_{t,i} = \lambda I + W_{syn}$, $\theta_{t,i} = V_{t,i}^{-1} U_{syn}$
6:     Play arm $a_{t,i} = \arg\max_a \ldots$
7:     Observe reward $y_{t,i}$
8:     …
(the remaining steps coincide with those of Algorithm 1)
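Below is a minimal Python sketch of Algorithm 8's lazy action selection (lines 5-6). The UCB index formula is an assumption on our part, following the standard LinUCB rule (line 6 is truncated in the source); all function names are ours.

```python
import numpy as np

def lazy_choose_action(features, U_syn, W_syn, lam, beta):
    """Lazy update: theta and V use ONLY the private synchronized statistics
    (W_syn, U_syn), so chosen actions depend on no in-batch raw local data."""
    d = features.shape[1]
    V = lam * np.eye(d) + W_syn            # V_{t,i} = lambda*I + W_syn
    theta = np.linalg.solve(V, U_syn)      # theta_{t,i} = V^{-1} U_syn
    V_inv = np.linalg.inv(V)
    # assumed standard LinUCB index: <phi, theta> + beta * ||phi||_{V^{-1}}
    quad = np.sum((features @ V_inv) * features, axis=1)
    ucb = features @ theta + beta * np.sqrt(quad)
    return int(np.argmax(ucb))

rng = np.random.default_rng(1)
d, K = 4, 6
features = rng.normal(size=(K, d))         # candidate arms' feature vectors
a = lazy_choose_action(features, U_syn=np.zeros(d), W_syn=np.zeros((d, d)),
                       lam=1.0, beta=1.0)
print("chosen arm:", a)
```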
254,854,553 | TEXTGRAD: ADVANCING ROBUSTNESS EVALUATION IN NLP BY GRADIENT-DRIVEN OPTIMIZATION | Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP). However, in contrast to the computer vision (CV) domain where the first-order projected gradient descent (PGD) is used as the benchmark approach to generate adversarial examples for robustness evaluation, there lacks a principled first-order gradient-based robustness evaluation framework in NLP. The emerging optimization challenges lie in 1) the discrete nature of textual inputs together with the strong coupling between the perturbation location and the actual content, and 2) the additional constraint that the perturbed text should be fluent and achieve a low perplexity under a language model. These challenges make the development of PGD-like NLP attacks difficult. To bridge the gap, we propose TEXTGRAD, a new attack generator using gradient-driven optimization, supporting high-accuracy and high-quality assessment of adversarial robustness in NLP. Specifically, we address the aforementioned challenges in a unified optimization framework. And we develop an effective convex relaxation method to co-optimize the continuously-relaxed site selection and perturbation variables, and leverage an effective sampling method to establish an accurate mapping from the continuous optimization variables to the discrete textual perturbations. Moreover, as a first-order attack generation method, TEXTGRAD can be baked in adversarial training to further improve the robustness of NLP models. Extensive experiments are provided to demonstrate the effectiveness of TEXTGRAD not only in attack generation for robustness evaluation but also in adversarial defense. From the attack perspective, we show that TEXTGRAD achieves remarkable improvements in both the attack success rate and the perplexity score over five state-of-the-art baselines. From the defense perspective, TEXTGRAD-enabled adversarial training yields the most robust NLP model against a wide spectrum of NLP attacks. * Contributed equally. | [
222133259,
990233,
202888986,
236477723,
196202909,
51928102,
233423658,
3432876,
3488815,
221739314,
220714040,
222140859,
210164926,
1671874,
5034059,
216036179,
231798234,
5076191,
52967399,
21698802,
196183669
] | TEXTGRAD: ADVANCING ROBUSTNESS EVALUATION IN NLP BY GRADIENT-DRIVEN OPTIMIZATION
Bairu Hou
UC Santa Barbara
Jinghan Jia
Michigan State University
Yihua Zhang
Michigan State University
Guanhua Zhang
UC Santa Barbara
Yang Zhang
MIT-IBM Watson AI Lab
Sijia Liu
Michigan State University
MIT-IBM Watson AI Lab
Shiyu Chang
UC Santa Barbara
TEXTGRAD: ADVANCING ROBUSTNESS EVALUATION IN NLP BY GRADIENT-DRIVEN OPTIMIZATION
Preprint. Under review.
Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP). However, in contrast to the computer vision (CV) domain where the first-order projected gradient descent (PGD) is used as the benchmark approach to generate adversarial examples for robustness evaluation, there lacks a principled first-order gradient-based robustness evaluation framework in NLP. The emerging optimization challenges lie in 1) the discrete nature of textual inputs together with the strong coupling between the perturbation location and the actual content, and 2) the additional constraint that the perturbed text should be fluent and achieve a low perplexity under a language model. These challenges make the development of PGD-like NLP attacks difficult. To bridge the gap, we propose TEXTGRAD, a new attack generator using gradient-driven optimization, supporting high-accuracy and high-quality assessment of adversarial robustness in NLP. Specifically, we address the aforementioned challenges in a unified optimization framework. And we develop an effective convex relaxation method to co-optimize the continuously-relaxed site selection and perturbation variables, and leverage an effective sampling method to establish an accurate mapping from the continuous optimization variables to the discrete textual perturbations. Moreover, as a first-order attack generation method, TEXTGRAD can be baked in adversarial training to further improve the robustness of NLP models. Extensive experiments are provided to demonstrate the effectiveness of TEXTGRAD not only in attack generation for robustness evaluation but also in adversarial defense. From the attack perspective, we show that TEXTGRAD achieves remarkable improvements in both the attack success rate and the perplexity score over five state-of-the-art baselines. From the defense perspective, TEXTGRAD-enabled adversarial training yields the most robust NLP model against a wide spectrum of NLP attacks. * Contributed equally.
INTRODUCTION
The assessment of adversarial robustness of machine learning (ML) models has received increasing research attention because of their vulnerability to adversarial input perturbations (known as adversarial attacks) (Goodfellow et al., 2014;Carlini & Wagner, 2017;Papernot et al., 2016). Among a variety of robustness evaluation methods, gradient-based adversarial attack generation makes a tremendous success in the computer vision (CV) domain (Croce & Hein, 2020;Dong et al., 2020). For example, the projected gradient descent (PGD)-based methods have been widely used to benchmark the adversarial robustness of CV models (Madry et al., 2018;Zhang et al., 2019b;Shafahi et al., 2019;Wong et al., 2020;Zhang et al., 2019a;Athalye et al., 2018). However, in the natural language processing (NLP) area, the predominant robustness evaluation tool belongs to query-based attack generation methods Garg & Ramakrishnan, 2020;Li et al., 2019), which do not make the full use of gradient information.
Yet, the (query-based) mainstream of NLP robustness evaluation suffers several limitations. First, these query-based attack methods could be prone to generating ambiguous or invalid adversarial textual inputs (Wang et al., 2021), most of which change the original semantics and could even Table 1: Effectiveness of TEXTGRAD at-a-glance on the SST-2 dataset (Socher et al., 2013) against 5 NLP attack baselines. Each attack method is categorized by the attack principle (gradient-based vs. query-based), and is evaluated at three aspects: attack success rate (ASR), adversarial texts quality (in terms of language model perplexity), and runtime efficiency (averaged runtime for attack generation in seconds). Two types of victim models are considered, i.e., realizations of BERT achieved by standard training (ST) and adversarial training (AT), respectively. Here AT integrates TEXTFOOLER with standard training. Across models, higher ASR, lower perplexity, and lower runtime indicate stronger attack. The best performance is highlighted in bold per metric. mislead human annotators. Second, the query-based methods could be hardly integrated with the first-order optimization-based model training recipe, and thus makes it difficult to develop adversarial training-based defenses (Madry et al., 2018;Athalye et al., 2018). Even though some first-order optimization-based NLP attack generation methods were developed in the literature, they often come with poor attack effectiveness (Ebrahimi et al., 2018) or high computational cost (Guo et al., 2021), leaving the question of whether the best optimization framework for NLP attack generation is found. The most relevant work to ours is GBDA attack (Guo et al., 2021), which perturbs each token in the input by sampling substitutes from the whole vocabulary of victim model. The sample distribution is optimized using gradients to generate adversarial examples, but yields low computational efficiency and high memory cost. Inspired by above, we ask: How to develop a principled gradient-based attack framework in NLP, like PGD in CV?
The main challenges for leveraging gradients to generate adversarial attacks in NLP lie in two aspects. First, the discrete nature of texts makes it difficult to directly employ the gradient information on the inputs. Different from perturbing pixels in imagery data, adversarial perturbations in an textual input need to optimize over the discrete space of words and tokens. Second, the fluency requirement of texts imposes another constraint for optimization. In contrast to p -norm constrained attacks in CV, adversarial examples in NLP are required to keep a low perplexity score. The above two obstacles make the design of gradient-based attack generation method in NLP highly non-trivial.
To bridge the adversarial learning gap between CV and NLP, we develop a novel adversarial attack method, termed TEXTGRAD, by peering into gradient-driven optimization principles needed for effective attack generation in NLP. Specifically, we propose a convex relaxation method to co-optimize the perturbation position selection and token modification. To overcome the discrete optimization difficulty, we propose an effective sampling strategy to enable an accurate mapping from the continuous optimization space to the discrete textual perturbations. We further leverage a perplexity-driven loss to optimize the fluency of the generated adversarial examples. In Table 1, we highlight the attack improvement brought by TEXTGRAD over some widely-used NLP attack baselines. More thorough experiment results will be provided in Sec. 5.
Our contribution. We propose TEXTGRAD, a novel first-order gradient-driven adversarial attack method, which takes a firm step to fill the vacancy of a principled PGD-based robustness evaluation framework in NLP. We identify a few missing optimization principles to boost the power of gradient-based NLP attacks, such as convex relaxation, sampling-based continuous-to-discrete mapping, and site-token co-optimization. We also show that TEXTGRAD is easily integrated with adversarial training and enables effective defenses against adversarial attacks in NLP. Lastly, we conduct thorough experiments to demonstrate the superiority of TEXTGRAD to existing baselines in both adversarial attack generation and adversarial defense.
BACKGROUND AND RELATED WORK
Adversarial attacks in CV. Gradient information has played an important role in generating adversarial examples, i.e., human-imperceptible perturbed inputs that can mislead models, in the CV area (Goodfellow et al., 2014;Carlini & Wagner, 2017;Croce & Hein, 2020;Madry et al., 2018;Zhang et al., 2019b;Kurakin et al., 2016;Xu et al., 2019a;. PGD attack is one of the most popular adversarial attack methods (Madry et al., 2018;Croce & Hein, 2020), which makes use of the firstorder gradient to generate perturbations on the inputs and has achieved great success in attacking CV models with a low computational cost. Besides, PGD is also a powerful method to generate transfer attacks against unseen victim models (Szegedy et al., 2013;Liu et al., 2016;Moosavi-Dezfooli et al., 2017). Even in the black-box scenario (i.e., without having access to model parameters), PGD is a principled framework to generate black-box attacks by leveraging function query-based gradient estimates (Cheng et al., 2018) or gradients of surrogate models Dong et al., 2018;Xie et al., 2019;Zou et al., 2020;.
Adversarial attacks in NLP. Different from attacks against CV models, gradient-based attack generation methods are less popular in the NLP domain. HOTFLIP (Ebrahimi et al., 2018) is one of the most representative gradient-based attack methods by leveraging gradients to estimate the impact of character and/or word-level substitutions on NLP models. However, HOTFLIP neglects the optimality of site selection in the discrete character/token space and ignores the constraint on the post-attacking text fluency (to preserve readability) . By contrast, this work cooptimizes the selections of perturbation sites and tokens, and leverages a perplexity-guided loss to maintain the fluency of adversarial texts. Another attack method GBDA (Guo et al., 2021) models the token replacement operation as a probability distribution which is optimized using gradients. However, acquiring this probability distribution is accompanied with high computation and memory costs. By contrast, our work can achieve comparable or better performance with higher efficiency.
The mainstream of robustness evaluation in NLP in fact belongs to query-based methods Garg & Ramakrishnan, 2020;Li et al., 2019;Wang et al., 2021;Li et al., 2021b). Many current word-level query-based attack methods adopt a two-phase framework (Zang et al., 2020) including (1) generating candidate substitutions for each word in the input sentence and (2) replacing original words with the found candidates for attack generation.
Phase (1) aims at generating semantically-preserving candidate substitutions for each word in the original sentence. Genetic Attack (GA) (Alzantot et al., 2018) uses the word embedding distance to select the candidate substitutes and filter out the candidates with an extra language model. Such strategy is also adopted in many other attack methods like and Ebrahimi et al. (2018). PWWS (Ren et al., 2019) adopts WordNet (Miller, 1995) for candidate substitution generation. Similarly, PSO Attack (Zang et al., 2020) employs a knowledge base known as HowNet (Qi et al., 2019) to craft the candidate substitutions. Along with the development of the large pre-trained language models, Garg & Ramakrishnan (2020) and Li et al. (2020) propose to utilize mask language models such as BERT (Devlin et al., 2019) to predict the candidate substitutions. In TEXTGRAD, we also adopt the pre-trained language models to generate candidate substitutions.
Phase (2) requires the adversary to find a substitution combination from the candidates obtained in phase (1) to fool the victim model. A widely-used searching strategy is greedy search Garg & Ramakrishnan, 2020;Li et al., 2019), where each candidate is first ranked based on its impact on model prediction, and then the top candidate for each word is selected as the substitution. Another popular searching strategy is leveraging populationbased methods, such as genetic algorithms and particle swarm optimization algorithms (Kennedy & Eberhart, 1995), to determine substitutions (Zang et al., 2020;Alzantot et al., 2018). Despite effectiveness, these query-based attack are prone to generating invalid or ambiguous adversarial examples (Wang et al., 2021) which change the original semantics and could even mislead humans.
To overcome these problems, we propose TEXTGRAD, which leverages gradient information to co-optimize the perturbation position and token selection subject to a sentence-fluency constraint. We will empirically show that TEXTGRAD yields a better attack success rate with lower sentence perplexity compared to the state-of-the-art query-based attack methods.
Adversarial training. Adversarial training (AT) (Goodfellow et al., 2014;Madry et al., 2018) has been shown as an effective solution to improving robustness of deep models against adversarial attacks (Athalye et al., 2018). AT, built upon min-max optimization, has also inspired a large number of robust training approaches, ranging from supervised learning (Wong et al., 2020;Zhang et al., submitted to NeurIPS, 2021) to semi-supervised learning (Zhang et al., 2019b;Carmon et al., 2019), and further to self-supervised learning ). Yet, the aforementioned literature focuses on AT for vision applications. Despite some explorations, AT for NLP models is generally under-explored. In Zang et al., 2020;Alzantot et al., 2018;Meng & Wattenhofer, 2020;Li et al., 2021a), adversarial data are generated offline and then integrated with the vanilla model training. In Li et al. (2021b); ; Dong et al. (2021); , the min-max based AT is adopted, but the adversarial attack generation step (i.e., the inner maximization step) is conducted on the embedding space rather than the input space. As a result, both methods are not effective to defend against strong NLP attacks like TEXTGRAD.
MATHEMATICAL FORMULATION OF NLP ATTACKS
In this section, we start with a formal setup of NLP attack generation by considering two optimization tasks simultaneously: (a) (token-wise) attack site selection, and (b) textual perturbation via token replacement. Based on this setup, we will then propose a generic discrete optimization framework that allows for first-order gradient-based optimization.
Problem setup. Throughout the paper, we will focus on the task of text classification, where M(x) is a victim model targeted by an adversary to perturb its input x. Let x = [x 1 , x 2 , . . . , x L ] ∈ N L be the input sequence, where x i ∈ {0, 1, . . . , |V | − 1} is the index of ith token, V is the vocabulary table, and |V | refers to the size of the vocabulary table.
From the prior knowledge perspective, we assume that an adversary has access to the victim model (i.e., white-box attack), similar to many existing adversarial attack generation setups in both CV and NLP domains (Carlini & Wagner, 2017;Croce & Hein, 2020;Madry et al., 2018;Szegedy et al., 2013). Besides, the adversary has prior knowledge on a set of token candidates for substitution at each position, denoted by
s i = {s i1 , s i2 , . . . , s im } at site i, where s ij ∈ {0, 1, . . . , |V | − 1}
denotes the index of the jth candidate token that the adversary can be used to replace the ith token in x. Here m is the maximum number of candidate tokens.
From the attack manipulation perspective, the adversary has two goals: determining the optimal attack site as well as seeking out the optimal substitute for the original token. Given this, we introduce site selection variables z = [z 1 , . . . , z L ] to encode the optimized attack site, where z i ∈ {0, 1} becomes 1 if the token site i is selected and 0 otherwise. In this regard, an attack budget is given by the number of modified token sites, 1 T z ≤ k, where k is the upper bound of the budget. We next introduce token-wise replacement variables u i = [u i1 , . . . , u im ], associated with candidates in s i , where u ij ∈ {0, 1}, and 1 T u i = 1 if z i = 1. Then, the ith input token x i will be replaced by the candidate expressed byŝ i (u i ; s i ) = j (u ij · s ij ). Please note that there is only one candidate token will be selected (constrained through 1 T u i = 1). For ease of presentation, we will useŝ to denote the replacement-enabled perturbed input with the same length of x.
In a nutshell, an NLP attack can be described as the perturbed input example together with site selection and replacement constraints (cstr):
Perturbed input: x adv (z, u; x, s) = (1 − z) • x + z •ŝ Discrete variables: z ∈ {0, 1} L , ui ∈ {0, 1} m , ∀i Site selection cstr: 1 T z ≤ k, Replacement cstr: 1 T ui = 1, ∀i,(1)
where • denotes the element-wise multiplication. For ease of notation, u and s are introduced to denote the sets of {u i } and {s i } respectively, and the adversarial example x adv is a function of site selection and token replacement variables (z and u) as well as the prior knowledge on the input example x and the inventory of candidate tokens s. Based on (1), we will next formulate the optimization problem for generating NLP attacks.
Discrete optimization problem with convex constraints. The main difficulty of formulating the NLP attack problem suited for efficient optimization is the presence of discrete (non-convex) constraints. To circumvent this difficulty, a common way is to relax discrete constraints into their convex counterparts (Boyd et al., 2004). However, this leads to an inexact problem formulation. To close this gap, we propose the following problem formulation with continuous convex constraints and an attack loss built upon the discrete projection operation:
minimizẽ z,ũ atk (x adv (B(z), B(ũ); x, s)) subject to C1 :z ∈ [0, 1], 1 Tz ≤ k, C2 :ũ ∈ [0, 1], 1 Tũ i = 1, ∀i,(2)
where most notations are consistent with (1), B is the projection operation that projects the continuous variables onto the Boolean set, i.e., z ∈ {0, 1} L and u i ∈ {0, 1} m in (1), and we usez andũ i as the continuous relaxation of z and u i . As will be evident later, an efficient projection operation can be achieved by randomized sampling. The graceful feature of problem (2) is that the constraint sets C 1 and C 2 are convex, given by the intersection of a continuous box and affine inequality/equalities.
OPTIMIZATION FRAMEWORK OF NLP ATTACKS
In this section, we present the details of gradient-driven first-order optimization that can be successfully applied to generating NLP attacks. Similar to the attack benchmark-projected gradient descent (PGD) attack Madry et al. (2018)-used for generating adversarial perturbations of imagery data, we propose the PGD attack framework for NLP models. We will illustrate the design of PGD-based NLP attack framework from four dimensions: 1) acquisition of prior knowledge on inventory of candidate tokens, 2) attack loss type, 3) regularization scheme to minimize the perplexity of perturbed texts, and 4) input gradient computation.
Prior knowledge acquisition: candidate tokens for substitution. We first tackle how to generate the candidate tokens used for input perturbation. Inspired by BERT-ATTACK (Li et al., 2020) and BAE-R (Garg & Ramakrishnan, 2020), we employ pre-trained masked language models (Devlin et al., 2019; Lan et al., 2020), denoted as G, to generate the candidate substitutions. Specifically, given the input sequence x, we first feed it into G to obtain the token prediction probability at each position without masking the input, and then take the top-m tokens at each position as the candidates. Note that obtaining the token predictions at every position does not require masking the input: using the original sentence as the input makes the computation more efficient, since a single forward pass yields predictions for all positions. It has also been shown in prior work that candidates generated this way are more semantically consistent with the original tokens than those obtained using actual "[mask]" tokens. Note that TEXTGRAD is compatible with any other method for generating candidate tokens, making it general for practical usage.
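A minimal sketch of this single-forward-pass candidate generation is below; the checkpoint name and top-m value are illustrative assumptions (the paper matches the masked LM to the victim encoder).

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def candidate_tokens(sentence, m=50):
    """Top-m substitution candidates per position, from one unmasked forward pass."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**enc).logits                    # (1, L, vocab_size)
    top_ids = logits.topk(m, dim=-1).indices[0]       # (L, m) candidate ids per position
    return enc.input_ids[0], top_ids

ids, cands = candidate_tokens("the movie was surprisingly good")
print(tokenizer.convert_ids_to_tokens(cands[5][:5].tolist()))  # candidates for "good"
```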
Determination of attack loss. Most existing NLP attack generation methods (Alzantot et al., 2018; Li et al., 2021a) use the negative cross-entropy (CE) loss as the attack objective. However, the CE loss hardly tells whether or not an attack succeeds, and it hampers optimization efficiency when the attack objective is regularized by another textual fluency objective (introduced later). Our rationale is that, intuitively, a sentence with more aggressive textual perturbations typically yields a higher attack success rate but a larger deviation from its original form, and is thus likely to be less fluent.
A desirable loss for designing NLP attacks should indicate the attack status (failure vs. success) and automatically adjust the optimization focus between the success of an attack and the fluency of the perturbed text. Spurred by the above, we choose the C&W-type attack loss (Carlini & Wagner, 2017):
$$\ell_{\mathrm{atk}}(x_{\mathrm{adv}}) = \max\Big\{Z_{t_0}(x_{\mathrm{adv}}) - \max_{i \neq t_0} Z_i(x_{\mathrm{adv}}),\ 0\Big\}, \tag{3}$$
where $x_{\mathrm{adv}}$ was introduced in (2), $t_0$ denotes the class predicted by the victim model on the original input x, and $Z_i$ denotes the prediction logit of class i. In (3), the difference $Z_{t_0}(x_{\mathrm{adv}}) - \max_{i \neq t_0} Z_i(x_{\mathrm{adv}})$ characterizes the confidence gap between the original prediction and the incorrect prediction induced by adversarial perturbations. The key advantages of using (3) for NLP attack generation are two-fold. First, the success of $x_{\mathrm{adv}}$ (whose prediction is distinct from the original model prediction) is precisely reflected by a zero loss value, i.e., $\ell_{\mathrm{atk}}(x_{\mathrm{adv}}) = 0$. Second, the attack loss (3) has a self-assessment ability, since it is automatically de-activated once the attack succeeds, i.e., $Z_{t_0}(x_{\mathrm{adv}}) \le \max_{i \neq t_0} Z_i(x_{\mathrm{adv}})$. This advantage facilitates a graceful balance between the attack success rate and the perplexity of the perturbed texts.
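In code, the loss in (3) reduces to a clamped logit margin; the logits below are hypothetical numbers used purely for illustration.

```python
import torch

def cw_attack_loss(logits, t0):
    """C&W-style margin loss of Eq. (3); exactly zero iff the attack has succeeded.

    logits: (num_classes,) victim-model logits Z(x_adv)
    t0: class originally predicted on the clean input x
    """
    others = logits.clone()
    others[t0] = float("-inf")                 # exclude the original class
    margin = logits[t0] - others.max()         # gap Z_t0 - max_{i != t0} Z_i
    return torch.clamp(margin, min=0.0)        # de-activated once the gap turns negative

print(cw_attack_loss(torch.tensor([2.0, 1.2, -0.3]), t0=0))  # tensor(0.8000): not fooled yet
print(cw_attack_loss(torch.tensor([0.5, 1.2, -0.3]), t0=0))  # tensor(0.): attack succeeded
```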
Text fluency regularization. We next propose a differentiable text fluency regularizer to be jointly optimized with the C&W attack loss,
$$\ell_{\mathrm{reg}} = \sum_i z_i \sum_j u_{ij}\big(\ell_{\mathrm{mlm}}(s_{ij}) - \ell_{\mathrm{mlm}}(x_i)\big) = \sum_i z_i \sum_j u_{ij}\, \ell_{\mathrm{mlm}}(s_{ij}) - \sum_i z_i\, \ell_{\mathrm{mlm}}(x_i), \tag{4}$$
where the last equality holds since $\sum_j u_{ij} = 1$. Here $\ell_{\mathrm{mlm}}(\cdot)$ denotes the masked language modeling loss (Devlin et al., 2019), which is widely used for measuring the contribution of a word to sentence fluency. For example, $\ell_{\mathrm{mlm}}(s_{ij})$ measures the fluency of the new sentence after replacing the ith position with its jth candidate; a smaller $\ell_{\mathrm{mlm}}(s_{ij})$ indicates better sentence fluency after the replacement. We also compute the masked language modeling loss $\ell_{\mathrm{mlm}}(x_i)$ for the ith original token, so that (4) minimizes the increment of the masked language modeling loss after replacement.
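One common way to obtain the per-token term $\ell_{\mathrm{mlm}}$ is to mask the position and score how well a token fits the surrounding context; the sketch below follows that recipe and is an assumption on our part rather than the paper's exact scoring code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def l_mlm(input_ids, pos, token_id):
    """Masked-LM loss of placing `token_id` at `pos` (lower = more fluent fit)."""
    masked = input_ids.clone()
    masked[0, pos] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = mlm(input_ids=masked).logits          # (1, L, vocab_size)
    return -logits[0, pos].log_softmax(dim=-1)[token_id]

enc = tokenizer("the movie was surprisingly good", return_tensors="pt")
orig = l_mlm(enc.input_ids, 5, enc.input_ids[0, 5].item())               # "good"
cand = l_mlm(enc.input_ids, 5, tokenizer.convert_tokens_to_ids("nice"))  # a candidate
print(float(cand - orig))   # the increment penalized by Eq. (4) for this substitution
```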
Input gradient calculation. The availability of the gradient of the attack objective function is the precondition for establishing a PGD-based attack framework. However, the presence of the Boolean operation B(·) in (2) prevents direct gradient calculation. To overcome this challenge, we prepend a randomized sampling step to the gradient computation. The rationale is that the continuously relaxed variables $\tilde{z}$ and $\tilde{u}$ in (2) can be viewed as (element-wise) site selection and token substitution probabilities. In this regard, given the continuous values $\tilde{z} = \tilde{z}_{t-1}$ and $\tilde{u} = \tilde{u}_{t-1}$ obtained at the previous PGD iteration (t − 1), for the current iteration t we can realize B(·) through the following Monte Carlo sampling step:
$$[B^{(r)}(\tilde{z}_{t-1})]_i = \begin{cases} 1 & \text{with probability } \tilde{z}_{t-1,i} \\ 0 & \text{with probability } 1 - \tilde{z}_{t-1,i}, \end{cases} \tag{5}$$
where $[B^{(r)}(\tilde{z}_{t-1})]_i$ denotes the ith element realization of $B(\tilde{z}_{t-1})$ at the rth random trial. We use R to denote the total number of sampling rounds. The above sampling strategy can be defined analogously to realize $B^{(r)}(\tilde{u}_{t-1})$. It is worth noting that a large R reduces the variance of the random realizations of $B(\tilde{z}_{t-1})$ and can further help reduce the gradient variance. Our empirical experiments show that R = 20 suffices to warrant satisfactory attack performance. Based on (5), the gradient of the attack objective function in (2) is given by
$$g_{1,t} := \frac{1}{R}\sum_{r=1}^{R} \nabla_{\tilde{z}}\, \ell_{\mathrm{atk}}\big(x_{\mathrm{adv}}(z^{(r)}, u^{(r)}; x, s)\big), \qquad g_{2,t} := \frac{1}{R}\sum_{r=1}^{R} \nabla_{\tilde{u}}\, \ell_{\mathrm{atk}}\big(x_{\mathrm{adv}}(z^{(r)}, u^{(r)}; x, s)\big), \tag{6}$$
where $z^{(r)} = B^{(r)}(\tilde{z}_{t-1})$ and $u^{(r)} = B^{(r)}(\tilde{u}_{t-1})$, and $g_{1,t}$ and $g_{2,t}$ correspond to the variables $\tilde{z}$ and $\tilde{u}$, respectively. Our gradient estimation over the discrete space is similar in spirit to the straight-through estimator (Bengio et al., 2013) and the Gumbel-Softmax method (Jang et al., 2016).
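The sampling-plus-averaging of (5)-(6) can be realized with a straight-through path so that autograd produces gradients with respect to the continuous probabilities; the sketch below is our own realization of this idea, with a toy differentiable loss standing in for $\ell_{\mathrm{atk}}$.

```python
import torch

def sample_st(p, R):
    """R Bernoulli realizations of Eq. (5), with a straight-through backward path
    so that gradients flow to the continuous probabilities p (spirit of Eq. (6))."""
    hard = torch.bernoulli(p.detach().expand(R, *p.shape))  # forward: discrete masks
    return hard + p - p.detach()                            # backward: identity w.r.t. p

def mc_gradient(loss_fn, p, R=20):
    """Monte Carlo gradient estimate g_t = (1/R) * sum_r grad_p loss(B^(r)(p))."""
    p = p.clone().requires_grad_(True)
    loss_fn(sample_st(p, R)).mean().backward()              # average over the R trials
    return p.grad

# Toy stand-in loss (linear in z) just to exercise the estimator.
z_tilde = torch.tensor([0.2, 0.8, 0.5])
g = mc_gradient(lambda z: (z * torch.tensor([1.0, -2.0, 0.5])).sum(dim=-1), z_tilde)
print(g)   # ~[1.0, -2.0, 0.5]: matches the exact gradient of the linear loss
```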
Projected gradient descent (PGD) framework. Based on the C&W attack loss (3), the text fluency regularization (4), and the input gradient formula (6), we are now able to develop the PGD optimization method to solve problem (2). At the tth iteration, PGD is given by
$$\tilde{z}_t = \Pi_{C_1}\big(\tilde{z}_{t-1} - \eta_z\, g_{1,t}\big), \qquad \tilde{u}_t = \Pi_{C_2}\big(\tilde{u}_{t-1} - \eta_u\, g_{2,t}\big), \tag{7}$$
where $\Pi_C$ denotes the Euclidean projection onto the constraint set C, i.e., $\Pi_C(a) = \arg\min_{x \in C} \|x - a\|_2^2$, and the constraint sets $C_1$ and $C_2$ were defined in (2). Due to the special structures of these constraints, closed forms of the projection operations $\Pi_{C_1}$ and $\Pi_{C_2}$ are attainable and are given in Proposition A.1 in the appendix. Using the optimization framework above, the empirical convergence of PGD is relatively fast: as will be shown later, 5-step PGD is sufficient for our algorithm to outperform all other baseline methods.
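For concreteness, one PGD update on the site-selection variables is sketched below, with the $C_1$ projection realized by bisection in the spirit of Proposition A.1 (the u-side update with $\Pi_{C_2}$ is analogous; see the appendix sketch). Step size, budget, and dimensions are illustrative.

```python
import torch

def proj_C1(v, k, iters=30):
    """Euclidean projection onto C1 = {z in [0,1]^L : 1^T z <= k} (Prop. A.1)."""
    z = v.clamp(0.0, 1.0)
    if z.sum() <= k:                      # first case: box projection already feasible
        return z
    lo, hi = 0.0, float(v.max())          # otherwise bisect on the shift mu > 0
    for _ in range(iters):                # find mu with 1^T P_box(v - mu * 1) = k
        mu = 0.5 * (lo + hi)
        lo, hi = (mu, hi) if (v - mu).clamp(0, 1).sum() > k else (lo, mu)
    return (v - hi).clamp(0.0, 1.0)

def pgd_step_z(z, g1, k, lr=0.8):
    """One update of the site-selection variables in Eq. (7)."""
    return proj_C1(z - lr * g1, k)

z = pgd_step_z(torch.rand(12), torch.randn(12), k=3)
assert z.min() >= 0 and z.max() <= 1 and z.sum() <= 3 + 1e-4   # feasible for C1
```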
EXPERIMENTS
EXPERIMENT SETUP
Datasets and attack baselines. We mainly consider the following tasks¹: SST-2 (Socher et al., 2013) for sentiment analysis; MNLI (Williams et al., 2018), RTE (Wang et al., 2018), and QNLI (Wang et al., 2018) for natural language inference; and AG News (Zhang et al., 2015) for text classification. We compare our proposed TEXTGRAD method with the following state-of-the-art white-box and black-box NLP attack baselines: TEXTFOOLER, BERT-ATTACK, BAE-R (Garg & Ramakrishnan, 2020), HOTFLIP (Ebrahimi et al., 2018), BBA (Lee et al., 2022), and GBDA (Guo et al., 2021). We also include a greedy search-based method termed GREEDY, which combines the candidate generation method used in ours with the Greedy-WIR search strategy in Morris et al. (2020). Since the candidate substitute set is the same for GREEDY and TEXTGRAD, we can better demonstrate the advantage of TEXTGRAD over baselines that use greedy search to craft adversarial examples. We follow the benchmark attack setting in (Wang et al., 2021; Li et al., 2021b). For the baselines, the attack budget is set to 25% of the total number of words in a sentence. Since TEXTGRAD modifies the sentence at the token level, we set the maximum number of tokens that TEXTGRAD can modify to 25% of the total number of words in a sentence. By doing so, TEXTGRAD uses the same or a smaller attack budget than the baselines, ensuring a fair comparison. More details about the attack implementations can be found in Appendix B.

Victim models. We consider two classes of victim models in our experiments, namely conventionally trained standard models and robustly trained models with awareness of adversarial attacks. In the robust training paradigm, we consider Adversarial Data Augmentation (ADA), Mixup-based Adversarial Data Augmentation (MIXADA) (Si et al., 2021), PGD-AT (Madry et al., 2018), FREE-LB, INFOBERT, and ASCC (Dong et al., 2021). Notably, except for ADA and MIXADA, these robust training methods impose adversarial perturbations in the (continuous) embedding space. Following (Li et al., 2021b), we remove the ℓ₂ perturbation constraint when training with PGD-AT and FREE-LB. All the victim models are fine-tuned from three popular NLP encoders, i.e., BERT-base (Devlin et al., 2019), RoBERTa-large, and ALBERT-xxlarge-v2 (Lan et al., 2020). When attacking these models, TEXTGRAD uses the corresponding masked language model to generate candidate substitutes. We also follow the best training settings in (Li et al., 2021b).
Evaluation metrics. First, the attack success rate (ASR) measures attack effectiveness, given by the number of examples that are successfully attacked over the total number of attacked examples. Second, perplexity (PPL) measures the quality of the generated adversarial texts. We use the pre-trained GPT-2 XL (Radford et al., 2019) language model for PPL evaluation.
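Both metrics are straightforward to compute; a sketch is below, loading the small "gpt2" checkpoint for brevity where the evaluation above uses the XL variant ("gpt2-xl" on the HuggingFace hub).

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")     # "gpt2-xl" for the paper's setting
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    """exp of the mean token-level negative log-likelihood under the language model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss              # HF shifts the labels internally
    return torch.exp(loss).item()

def asr(n_success, n_attacked):
    """Attack success rate (%): successfully attacked / total attacked examples."""
    return 100.0 * n_success / n_attacked

print(perplexity("the movie was surprisingly good"), asr(93, 100))
```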
EXPERIMENT RESULTS
Attack performance on normally trained standard models. In Table 2, we compare the attack performance (in terms of ASR and PPL) of TEXTGRAD with 7 NLP attack baselines across 5 datasets and 3 victim models that are normally trained.² As we can see, TEXTGRAD yields a better ASR than all the other baselines, leading to an improvement of at least 3% in nearly all settings. Apart from our method, no baseline wins consistently either across dataset types or across model types. From the PPL perspective, TEXTGRAD outperforms nearly all the baselines when attacking BERT and RoBERTa. For the victim model ALBERT, TEXTGRAD yields the second or third best PPL, with a small gap to the best PPL result. Additionally, the ASR improvement gained by TEXTGRAD remains significant in all settings when attacking ALBERT, which has shown good adversarial robustness in several past robustness evaluations (Wang et al., 2021).
Attack performance on robustly trained models. Table 3 demonstrates the attack effectiveness of TEXTGRAD against robustly trained models. To make a thorough assessment, attack methods are compared under 6 robust BERT models obtained using 6 robust training methods on the SST-2 and RTE datasets. As we can see, TEXTGRAD yields the best ASR in all settings. Among the baselines, BBA and GBDA seem to outperform the others when attacking robust models. However, compared to TEXTGRAD, there remains an ASR gap of over 4% in most cases. From the PPL perspective, TEXTGRAD achieves at least the second best performance. It is worth noting that the considered test-time attacks (including TEXTGRAD and the baselines) are not seen during robust training. Therefore, none of the robust models is truly robust when facing unforeseen attacks. In particular, TEXTGRAD can easily break these defenses on SST-2, as evidenced by achieving at least 89% ASR.
Sensitivity analysis of attack iterations and attack budget. We further study (a) the effect of the number of attack iterations and (b) the effect of the attack budget k in (2). Results associated with setups (a) and (b) are presented in Figure 1 and Table 4, respectively. In both cases, we compare the performance of TEXTGRAD with that of the baselines when attacking a normally trained BERT model. As shown in Figure 1, the ASR of TEXTGRAD increases as the number of attack iterations increases. Moreover, TEXTGRAD achieves a substantial ASR improvement over the best baseline by taking only a very small number of iterations (fewer than 8 in all cases). As shown in Table 4, the ASR of TEXTGRAD increases as the attack budget k increases. Additionally, even when k is small (e.g., k = 5% of the number of words), TEXTGRAD still significantly outperforms the baselines in ASR.
Other experiment results. In Appendices C-E, we further include attack transferability, an ablation study, and a human evaluation. In a nutshell, we show that TEXTGRAD achieves graceful attack transferability. The optimization techniques of random restarting and randomized sampling help boost the attack performance. In the human evaluation, TEXTGRAD outperforms BAE-R, the baseline that performs best in the automatic quality evaluation.
ADVERSARIAL TRAINING WITH TEXTGRAD
In this section, we empirically show that TEXTGRAD-based AT (termed TEXTGRAD-AT) provides an effective adversarial defense compared to other AT baselines in NLP. We focus on robustifying a BERT model on the SST-2 dataset, and set the train-time iteration number of TEXTGRAD to 5 and the restart number to 1. During evaluation, we set TEXTGRAD to 20 attack iterations and 10 restarts. Table 5 makes a detailed comparison between TEXTGRAD-AT and 6 baselines, including (1) standard training (ST), (2) PGD-AT (Madry et al., 2018), (3) FREE-LB (Zhu et al., 2019), (4) INFOBERT, (5) ASCC (Dong et al., 2021), and (6) TEXTFOOLER-AT. We remark that the AT variants (2)-(5) generate adversarial perturbations against the continuous feature representations of input texts rather than the raw inputs at the training phase. By contrast, ours and TEXTFOOLER-AT generate adversarial examples in the discrete input space. As will be evident later, TEXTFOOLER-AT is also the most competitive baseline to ours. Besides TEXTGRAD-AT, we also propose TEXTGRAD-TRADES by integrating TEXTGRAD with TRADES (Zhang et al., 2019b), which is commonly used to optimize the tradeoff between accuracy and robustness. See more implementation details in Appendix B. At testing time, four types of NLP attacks (TEXTGRAD, TEXTFOOLER, BERT-ATTACK, and BAE-R) are used to evaluate the robust accuracy of the models acquired by the aforementioned training methods.
Our key observations from Table 5 are summarized below. First, the models trained by TEXTGRAD-AT and TEXTGRAD-TRADES achieve the best robust accuracy in nearly all settings. The only exception is the case of TEXTFOOLER-AT vs. the TEXTFOOLER attack, since TEXTFOOLER is an unseen attack for our approaches but is known to TEXTFOOLER-AT during training. By contrast, when non-TEXTGRAD and non-TEXTFOOLER attacks are used for robustness evaluation, our methods outperform TEXTFOOLER-AT by a substantial margin (e.g., the robustness enhancement under BAE-R). Second, we evaluate the performance of different robust training methods under the AdvGLUE (Wang et al., 2021) benchmark. We observe that TEXTGRAD-AT and TEXTGRAD-TRADES perform better than baselines (2)-(5), while TEXTFOOLER-AT is slightly better than ours. However, this is predictable since AdvGLUE uses TEXTFOOLER as one of the attack methods to generate its adversarial dataset (Wang et al., 2021). As shown by the average robust accuracy in Table 5, our methods are the only ones that defend against a wide spectrum of attack types. Third, our methods trade an accuracy drop for a robustness improvement; however, the latter is much more significant than the former. For example, none of the baselines can defend against TEXTGRAD, while our improvement in robust accuracy is over 35% compared to TEXTFOOLER-AT. We further show in Appendix F that robust accuracy converges quickly within two epochs of training, demonstrating that TEXTGRAD-AT can be used to enhance the robustness of downstream tasks in an efficient manner.
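Schematically, TEXTGRAD-AT plugs the attack into the usual adversarial-training loop; the sketch below is our own shorthand (5 inner attack steps, as above), with every name an illustrative placeholder rather than the released training script.

```python
import torch

def textgrad_at_epoch(model, loader, optimizer, attack,
                      criterion=torch.nn.CrossEntropyLoss()):
    """One epoch of TEXTGRAD-AT: inner maximization by the attack, outer minimization
    on the resulting adversarial examples. `attack(model, x, y)` is assumed to run a
    few PGD steps of TEXTGRAD and return perturbed token ids."""
    model.train()
    for x, y in loader:
        x_adv = attack(model, x, y)            # craft adversarial text for this batch
        optimizer.zero_grad()
        loss = criterion(model(x_adv), y)      # train on the adversarial examples
        # TEXTGRAD-TRADES would instead mix a clean-data term with a KL divergence
        # between clean and adversarial predictions (Zhang et al., 2019b).
        loss.backward()
        optimizer.step()
```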
CONCLUSION
In this paper, we propose TEXTGRAD, an effective attack method based on gradient-driven optimization in NLP. TEXTGRAD not only achieves remarkable improvements in robustness evaluation but also boosts the robustness of NLP models through adversarial training. In the future, we will consider how to generalize TEXTGRAD to other types of perturbations, such as word insertion or deletion, to further improve the attack performance.
A PROJECTION OPERATIONS IN PGD FRAMEWORK
The projection operations $\Pi_{C_1}$ and $\Pi_{C_2}$ used in the PGD update (7) are given below:
Proposition A.1. Given $C_1 = \{\tilde{z} \mid 1^T \tilde{z} \le k,\ \tilde{z} \in [0, 1]^L\}$, the projection operation for a point $\tilde{z}$ with respect to $C_1$ is
$$\Pi_{C_1}(\tilde{z}) = \begin{cases} P_{[0,1]}[\tilde{z}] & \text{if } 1^T P_{[0,1]}[\tilde{z}] \le k, \\ P_{[0,1]}[\tilde{z} - \mu 1] & \text{if } \mu > 0 \text{ and } 1^T P_{[0,1]}[\tilde{z} - \mu 1] = k, \end{cases} \tag{8}$$
and for $C_2 = \{\tilde{u}_i \mid 1^T \tilde{u}_i = 1,\ \tilde{u}_i \in [0, 1]^m\}$, the projection operation for a point $\tilde{u}_i$ with respect to $C_2$ is
$$\Pi_{C_2}[\tilde{u}_i] = P_{[0,1]}[\tilde{u}_i - \upsilon 1], \tag{9}$$
where $\upsilon$ is the root of $1^T P_{[0,1]}[\tilde{u}_i - \upsilon 1] = 1$, and $P_{[0,1]}(\cdot)$ is the element-wise projection operation
$$P_{[0,1]}(x) = \begin{cases} x & \text{if } x \in [0, 1], \\ 0 & \text{if } x < 0, \\ 1 & \text{if } x > 1. \end{cases}$$
A similar derivation of the projection has been shown in (Xu et al., 2019b).
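A bisection sketch of the $\Pi_{C_2}$ case of Proposition A.1 is given below (the root $\upsilon$ of the monotone equation is found numerically; $\Pi_{C_1}$ admits the same treatment for its shift $\mu$). This is our own numerical realization, not code from the released repository.

```python
import torch

def P_box(x):
    """Element-wise projection onto [0, 1], as defined in Proposition A.1."""
    return x.clamp(0.0, 1.0)

def proj_C2(u, iters=40):
    """Projection onto C2 = {u in [0,1]^m : 1^T u = 1} per Eq. (9), finding the
    root upsilon of 1^T P_box(u - upsilon * 1) = 1 by bisection (the left-hand
    side is monotonically non-increasing in upsilon)."""
    lo, hi = float(u.min()) - 1.0, float(u.max())   # brackets: sum >= 1 at lo, 0 at hi
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        lo, hi = (nu, hi) if P_box(u - nu).sum() > 1 else (lo, nu)
    return P_box(u - 0.5 * (lo + hi))

u = proj_C2(torch.randn(8))
print(u.min() >= 0, u.max() <= 1, float(u.sum()))   # inside the box, sum ~ 1.0
```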
B IMPLEMENTATION DETAILS
In this section, we include the implementation details of victim models, hyper-parameters for baselines, and training details.
Training of victim models. We run our experiments on a Tesla V100 GPU with 16GB memory. We fine-tune the pre-trained BERT-base-uncased model on each dataset with a batch size of 32 and a learning rate of 2e-5 for 5 epochs. For RoBERTa-large and ALBERT-xxlarge-v2, we use a batch size of 16 and a learning rate of 1e-5. For the robust models, we use the implementation of (Li et al., 2021b). Each model is trained for 10 epochs with a learning rate of 2e-5 and a batch size of 32. Specifically, for ADA and MIXADA, we perturb the whole training dataset for augmentation. For MIXADA, we use α = 2.0 in the beta distribution and mix the hidden representations sampled from layers {7, 9, 12}. For PGD-AT, we use a step size of α = 3e-2 and m = 5 steps for both the SST-2 and RTE datasets. For FREE-LB, we use a step size of α = 1e-1 and m = 2 steps on the SST-2 dataset, and α = 3e-2, m = 3 on the RTE dataset. For INFOBERT, the step size α is 4e-2 and the number of steps is 3 for both datasets. Finally, we use α = 10, β = 4 and run for 5 steps to generate perturbations for ASCC. For datasets whose test-set labels are not available (MNLI, RTE, QNLI), we randomly sample 10% of the training set as the validation set and use the original validation set for testing. For the AG News dataset, where no validation set is available, we generate the validation set in the same way.
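For reference, the BERT-base recipe above corresponds to a standard HuggingFace fine-tuning script along the following lines; the dataset loading, column names, and output path are illustrative rather than taken from the released code.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("glue", "sst2").map(
    lambda b: tok(b["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)
args = TrainingArguments("victim-bert-sst2",              # illustrative output path
                         per_device_train_batch_size=32,  # batch size 32 for BERT-base
                         learning_rate=2e-5,              # 1e-5 for RoBERTa/ALBERT
                         num_train_epochs=5)
Trainer(model=model, args=args, train_dataset=ds["train"],
        eval_dataset=ds["validation"]).train()
```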
Attack Implementation. Regarding the hyper-parameters of TEXTGRAD, we utilize 20-step PGD for optimization and fix the number of sampling rounds R in each iteration to 20. We adopt a learning rate of 0.8 for both $\tilde{z}$ and $\tilde{u}$, and normalize the gradients $g_{1,t}$ and $g_{2,t}$ to unit norm before the descent step. After PGD optimization, we sample 20 different (z, u) pairs for a single input x. To determine the final adversarial example of x, we select, among all successful attacks, the one with the minimum PPL measured by the GPT-2 language model (Radford et al., 2019). Although such post-processing via multiple sampling introduces additional computational overhead, it ensures the high quality of the generated attacks. If all 20 sampled attacks fail to fool the victim model, we allow TEXTGRAD to restart the attack process with a different initialization of $\tilde{z}$ and $\tilde{u}$. Restarting with different initializations is a standard setting used in white-box PGD attacks on imagery data.
Here, we set the maximum number of restarts to 10. However, we empirically find that most attacks succeed with a single restart or without any restart; the average number of restarts in our experiments is around 1.6-1.8. To provide the query-based baseline approaches with a large enough query budget, we set their maximum number of queries to 2000.
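Put together, the sampling/restart logic reads roughly as follows, where every callable is an illustrative placeholder for a component described above.

```python
def attack_with_restarts(x, run_pgd, sample_attack, ppl,
                         n_restarts=10, n_samples=20):
    """Outer attack driver: optimize, sample discrete attacks, keep the most fluent hit.

    run_pgd(x): 20-step PGD from a fresh random initialization of (z~, u~)
    sample_attack(x, z, u): one discrete adversarial candidate, or None if it fails
    ppl(text): GPT-2 perplexity used to rank successful candidates
    """
    for _ in range(n_restarts):
        z_tilde, u_tilde = run_pgd(x)
        hits = [adv for adv in (sample_attack(x, z_tilde, u_tilde)
                                for _ in range(n_samples)) if adv is not None]
        if hits:
            return min(hits, key=ppl)        # lowest-perplexity successful attack
    return None                              # every restart failed
```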
Attack parameters
We use the implementation of the TextAttack (Morris et al., 2020) library for TEXTFOOLER, BERT-ATTACK, BAE-R, and BBA. The number of candidate substitutions per word is 50 for TEXTFOOLER and BAE-R. For BERT-ATTACK, we set this value to 48, following its default setting. For BBA, we use the same candidate substitution generation method as TEXTFOOLER, which is consistent with the original paper. We use our candidate substitute generation method for HOTFLIP and GREEDY, and the number of candidate substitution tokens for these two baselines is also 50. For the natural language inference datasets (MNLI-m, QNLI, RTE), we restrict all the attackers to perturbing only the hypothesis.
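As an illustration of how such baselines are driven through TextAttack, a sketch is below; the fine-tuned checkpoint name is a public HuggingFace hub model chosen for the example, not necessarily the one used in our experiments.

```python
import transformers
import textattack
from textattack.attack_recipes import TextFoolerJin2019

name = "textattack/bert-base-uncased-SST-2"       # illustrative victim checkpoint
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tok = transformers.AutoTokenizer.from_pretrained(name)

wrapper = textattack.models.wrappers.HuggingFaceModelWrapper(model, tok)
attack = TextFoolerJin2019.build(wrapper)          # the TEXTFOOLER recipe

result = attack.attack("the movie was surprisingly good", 1)  # (text, gold label)
print(type(result).__name__)                       # e.g., SuccessfulAttackResult
```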
GBDA purely relies on soft constraints during the attack instead of hard attack budgets (i.e., the maximum perturbation, which is 25% of the total number of words in a sentence for all the other methods, including TEXTGRAD). Furthermore, the soft constraints in GBDA rely on an extra causal language model (such as GPT-2 (Radford et al., 2019)) during attacking. When attacking masked language models such as BERT, RoBERTa, and ALBERT, one needs to train a causal language model that shares the same vocabulary as the victim model, making the method less flexible. Since the official implementation of GBDA only provides a causal language model with the same vocabulary table as BERT, it only supports attacking BERT and cannot be used to evaluate the robustness of RoBERTa/ALBERT in our experiments in Table 2. Therefore, we only report its results when the victim model is BERT. We use the default hyper-parameters of GBDA from the original paper (100 attack iterations with batch size 10 and learning rate 0.3; λ_perp = 1). For λ_sim, we follow the paper's setting and cross-validate it in the range [20, 200]. For the SST-2 and MNLI-m datasets, we use λ_sim = 50; for the other datasets we find the attack example quality is very low given a small λ_sim, so we use λ_sim = 200.
For TEXTGRAD, we also apply additional attack constraints. First, we conduct part-of-speech checking before attacking, and only nouns, verbs, adjectives, and adverbs can be replaced. Second, after generating candidate substitutions, we use WordNet (Miller, 1995) to filter out antonyms of the original words to avoid invalid substitutions. Third, stop-words are not considered during attacking: we neither substitute original words that are stop-words nor replace original words with stop-words. Finally, considering the tokenization operation in pre-trained language models, we filter out sub-words from the generated candidate substitutions before attacking to further improve the quality of the adversarial examples.
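The last three rules can be sketched with NLTK as follows; the exact filters in the released code may differ in detail.

```python
import nltk
from nltk.corpus import stopwords, wordnet as wn

nltk.download("wordnet"); nltk.download("stopwords")   # one-time corpus downloads
STOP = set(stopwords.words("english"))

def antonyms(word):
    """All WordNet antonym lemmas of `word` (used to reject invalid substitutions)."""
    return {a.name().lower() for syn in wn.synsets(word)
            for lem in syn.lemmas() for a in lem.antonyms()}

def keep_candidate(orig, cand):
    """Drop stop-words, WordNet antonyms of the original word, and sub-word pieces."""
    return (orig.lower() not in STOP and cand.lower() not in STOP
            and cand.lower() not in antonyms(orig)
            and not cand.startswith("##"))             # BERT WordPiece continuation

print(keep_candidate("good", "bad"))    # False: "bad" is an antonym of "good"
print(keep_candidate("good", "nice"))   # True
```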
C ATTACK TRANSFERABILITY
We compare the attack transferability of various attack methods in this section. Table 6 compares the attack transferability for different pairs of a source victim model used for attack generation and a target model used for transfer-attack evaluation, where the considered models (BERT, RoBERTa, and ALBERT) are normally trained on SST-2. As we can see, transfer attacks commonly lead to ASR degradation. However, compared to the baselines, TEXTGRAD yields better transfer-attack performance in nearly all cases. It is also worth noting that there is a large ASR drop when NLP attacks transfer to an unseen model; the question of how to improve the transferability of NLP attacks thus requires more in-depth future study.

Table 6: NLP attack transferability. Attacks generated from the source victim models (BERT, RoBERTa, and ALBERT) are evaluated on the same set of models. The experiment is conducted on the SST-2 dataset and all the models are normally trained.

D ABLATION STUDIES

Table 7 demonstrates the usefulness of the proposed optimization tricks: random restarting and randomized sampling. The former has been commonly used in generating adversarial images (Madry et al., 2018), and the latter is critical for calculating input gradients. As we can see, both optimization tricks play positive roles in boosting the attack performance of TEXTGRAD. However, the use of randomized sampling seems more vital, as evidenced by the larger ASR drop (6%-14%) when running TEXTGRAD without randomized sampling.
E PRELIMINARY HUMAN EVALUATION
Besides automatic evaluation, we also conduct a human evaluation to justify the validity of adversarial examples. Here an adversarial example is regarded as valid when its label annotated by a human annotator is consistent with the ground-truth label of the corresponding original example. During the evaluation, each annotator is asked to classify the sentiment labels (positive/negative) of the given sentences, thus requiring no domain knowledge. Note that the ground-truth label is not provided. The following instruction is displayed to the annotator during annotation:
Please classify the sentiment of the movie review displayed above. Answer 0 if you think the sentiment in that movie review is negative and answer 1 if positive.
Your answer (0/1) :
Our method is compared with BAE-R, the baseline with the best attack quality according to Tables 2 and 3. Specifically, we randomly sample 100 examples from the SST-2 dataset that are successfully attacked by both TEXTGRAD and BAE-R, resulting in 200 adversarial examples in total. These adversarial examples are randomly shuffled and annotated by four annotators. We compute the validity ratio according to the annotations of each annotator as well as the average validity ratio. The average validity ratios are 43.5% for TEXTGRAD and 39.75% for BAE-R, showing that the quality of the adversarial examples generated by TEXTGRAD is slightly better than that of the baseline. We will conduct larger-scale human evaluations in the next step.

F CONVERGENCE BEHAVIOR OF TEXTGRAD-AT

In Figure 2, we further show the convergence trajectory of using TEXTGRAD-AT to fine-tune a pre-trained BERT, given by clean accuracy and robust accuracy vs. the training epoch number. We observe that the test-time clean accuracy decreases as the training epoch number increases, implying the accuracy-robustness tradeoff. We also note that the test-time robust accuracy quickly converges within two epochs. This suggests that, benefiting from a large-scale pre-trained model, TEXTGRAD-AT can be used to enhance the robustness of downstream tasks in an efficient manner.
G LIMITATION AND SOCIETAL IMPACT
While the proposed TEXTGRAD can both improve robustness evaluation and boost the adversarial robustness of NLP models, we acknowledge that there are still some limitations to be addressed in the future. First, TEXTGRAD crafts adversarial examples by synonym substitution; it cannot handle other perturbations such as word insertion, word deletion, and sentence-level modification. We hope to extend our convex-relaxation optimization to more perturbation types in the future to further promote performance. Second, how to ensemble TEXTGRAD with other types of attacks (for example, black-box baselines) to form a more powerful attack remains unexplored. Given the success of AutoAttack (Croce & Hein, 2020) in computer vision, it is also plausible to build such an ensemble attack in NLP.
With the wide application of large pre-trained NLP models, their vulnerability also raises concerns. We acknowledge that TEXTGRAD may be employed to craft textual adversarial examples in real life, resulting in security concerns and negative outcomes. However, one can also adopt TEXTGRAD for robustness evaluation and adversarial training so as to improve the security and trustworthiness of NLP systems.
Figure 1: ASR of TEXTGRAD with different attack iteration numbers. We attack the standard BERT model with TEXTGRAD on different datasets. For each dataset, we show the ASR curve of TEXTGRAD w.r.t. different iteration numbers (the orange curves) as well as the ASR of the best query-based baseline on the corresponding dataset from Table 2 (the dashed lines). TEXTGRAD beats the best baseline with only 5 iterations on all datasets.
Figure 2: Clean accuracy and robust accuracy on the SST-2 train/test sets during adversarial training.
Table 2: Performance of the proposed TEXTGRAD attack method and baseline methods against normally trained victim models. The performance is measured by attack success rate (ASR) as well as perplexity (PPL) across different datasets and model architectures. A more powerful attack method is expected to have a higher (↑) ASR and lower (↓) PPL. The best results under each metric are highlighted in bold and the second best are underlined.

| Dataset | Attack Method | BERT ASR | BERT PPL | RoBERTa ASR | RoBERTa PPL | ALBERT ASR | ALBERT PPL |
|---|---|---|---|---|---|---|---|
| SST-2 | TEXTFOOLER | 82.84 | 431.44 | 69.38 | 483.29 | 69.77 | 536.34 |
| SST-2 | BERT-ATTACK | 86.44 | 410.72 | 79.34 | 435.29 | 82.73 | 419.78 |
| SST-2 | BAE-R | 86.62 | 286.63 | 85.92 | 300.08 | 85.80 | 293.29 |
| SST-2 | GREEDY | 87.79 | 427.73 | 91.12 | 408.12 | 88.47 | 432.29 |
| SST-2 | HOTFLIP | 56.07 | 277.79 | 23.30 | 196.81 | 18.35 | 293.53 |
| SST-2 | BBA | 85.96 | 421.00 | 81.51 | 532.69 | 81.19 | 461.60 |
| SST-2 | GBDA | 85.70 | 314.00 | - | - | - | - |
| SST-2 | TEXTGRAD | 93.51 | 266.41 | 96.45 | 274.90 | 93.51 | 313.64 |
| MNLI-m | TEXTFOOLER | 74.82 | 320.47 | 67.33 | 314.11 | 66.02 | 322.00 |
| MNLI-m | BERT-ATTACK | 88.77 | 234.53 | 86.78 | 241.25 | 85.16 | 246.35 |
| MNLI-m | BAE-R | 87.00 | 196.20 | 84.56 | 191.29 | 85.39 | 223.62 |
| MNLI-m | GREEDY | 88.17 | 263.47 | 90.43 | 265.53 | 88.66 | 272.94 |
| MNLI-m | HOTFLIP | 54.44 | 276.32 | 26.36 | 204.72 | 29.14 | 316.93 |
| MNLI-m | BBA | 82.86 | 346.60 | 78.44 | 329.63 | 77.84 | 423.60 |
| MNLI-m | GBDA | 93.37 | 290.41 | - | - | - | - |
| MNLI-m | TEXTGRAD | 94.08 | 193.42 | 95.44 | 211.58 | 94.44 | 264.07 |
| RTE | TEXTFOOLER | 59.55 | 402.44 | 62.50 | 319.64 | 74.31 | 344.40 |
| RTE | BERT-ATTACK | 64.61 | 329.30 | 74.57 | 279.86 | 79.17 | 343.69 |
| RTE | BAE-R | 65.73 | 239.68 | 71.21 | 221.44 | 81.94 | 317.96 |
| RTE | GREEDY | 60.67 | 501.40 | 78.81 | 228.03 | 82.52 | 517.97 |
| RTE | HOTFLIP | 45.51 | 318.13 | 70.25 | 184.47 | 34.97 | 801.32 |
| RTE | BBA | 60.67 | 361.30 | 69.16 | 239.41 | 70.62 | 418.07 |
| RTE | GBDA | 68.20 | 471.20 | - | - | - | - |
| RTE | TEXTGRAD | 71.91 | 202.96 | 83.90 | 140.51 | 87.41 | 378.07 |
| QNLI | TEXTFOOLER | 53.55 | 399.90 | 48.17 | 398.45 | 58.34 | 451.02 |
| QNLI | BERT-ATTACK | 63.86 | 384.28 | 60.11 | 376.25 | 64.31 | 411.93 |
| QNLI | BAE-R | 62.31 | 324.14 | 60.86 | 309.45 | 62.81 | 324.14 |
| QNLI | GREEDY | 67.95 | 443.61 | 63.74 | 462.42 | 62.71 | 379.64 |
| QNLI | HOTFLIP | 48.07 | 301.35 | 49.91 | 313.47 | 44.27 | 383.42 |
| QNLI | BBA | 60.31 | 498.74 | 59.12 | 429.77 | 58.74 | 461.55 |
| QNLI | GBDA | 63.52 | 1473.15 | - | - | - | - |
| QNLI | TEXTGRAD | 70.48 | 297.59 | 68.00 | 297.16 | 72.43 | 333.24 |
| AG News | TEXTFOOLER | 59.43 | 486.53 | 60.77 | 427.19 | 64.37 | 475.62 |
| AG News | BERT-ATTACK | 62.41 | 560.90 | 63.08 | 513.24 | 68.42 | 496.92 |
| AG News | BAE-R | 67.97 | 519.42 | 68.79 | 527.68 | 72.73 | 374.35 |
| AG News | GREEDY | 60.35 | 523.64 | 62.35 | 579.84 | 73.25 | 375.42 |
| AG News | HOTFLIP | 49.27 | 397.60 | 52.08 | 375.46 | 55.41 | 431.36 |
| AG News | BBA | 74.70 | 147.22 | 66.03 | 157.49 | 55.61 | 323.71 |
| AG News | GBDA | 70.28 | 456.60 | - | - | - | - |
| AG News | TEXTGRAD | 74.51 | 303.21 | 75.19 | 303.91 | 85.12 | 397.93 |
Table 3: Performance of attack methods against robustly trained victim models. Different robustified versions of BERT are obtained using different adversarial defenses, including ADA, MIXADA (Si et al., 2021), PGD-AT (Madry et al., 2018), FREE-LB, INFOBERT, and ASCC (Dong et al., 2021). Two datasets, SST-2 and RTE, are considered. The attack performance is measured by ASR and PPL. The best results under each metric (corresponding to each column) are highlighted in bold and the second best results are underlined.
Table 4: Effect of the attack budget k on the ASR of TEXTGRAD. Evaluation is performed under the standard BERT model on SST-2. Recall that the attack budget constrains the ratio of words allowed to be modified in a sentence; here k = x% is short for k = x% of the number of words in a sentence. (Budgets considered: k = 5%, 10%, 15%, 20%, 25%, 30%.)
Table 5: Robustness evaluation of different adversarial training methods on the SST-2 dataset. The performance is measured by clean accuracy (%) and robust accuracy (%) under different attack types. We also report the average robust accuracy (Avg RA) over different attack types.

| Model | Clean Accuracy | TEXTGRAD | TEXTFOOLER | BERT-ATTACK | BAE-R | AdvGLUE | Avg RA |
|---|---|---|---|---|---|---|---|
| ST | 93.14 | 6.04 | 15.98 | 12.63 | 12.47 | 30.55 | 15.53 |
| PGD-AT | 92.31 | 10.15 | 23.72 | 12.52 | 15.49 | 36.80 | 19.73 |
| FREE-LB | 93.52 | 7.30 | 26.19 | 14.66 | 18.84 | 27.77 | 18.96 |
| INFOBERT | 92.86 | 6.47 | 18.40 | 9.28 | 12.80 | 28.47 | 15.08 |
| ASCC | 87.94 | 7.02 | 14.55 | 14.66 | 14.55 | 40.27 | 18.21 |
| TEXTFOOLER-AT | 88.16 | 16.03 | 49.70 | 20.81 | 19.82 | 54.86 | 32.24 |
| TEXTGRAD-AT | 80.40 | 50.58 | 33.94 | 27.62 | 41.13 | 53.47 | 41.34 |
| TEXTGRAD-TRADES | 81.49 | 55.91 | 35.53 | 32.02 | 39.43 | 51.38 | 42.85 |
Table 7: Sensitivity analysis of random restarts and sampling. TEXTGRAD is conducted over a normally trained BERT model on SST-2.

| Attack Method | SST-2 | MNLI-m | RTE | QNLI | AG News |
|---|---|---|---|---|---|
| TEXTGRAD | 93.51 | 94.08 | 71.91 | 70.48 | 74.51 |
| w.o. restart | 90.27 | 91.90 | 67.42 | 65.26 | 68.87 |
| w.o. restart & sampling | 78.30 | 85.44 | 58.99 | 55.05 | 54.23 |
¹ Code is available at https://github.com/UCSB-NLP-Chang/TextGrad
² The official implementation of GBDA does not support attacking RoBERTa/ALBERT. See Appendix B for detailed explanations.
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2890-2896, 2018.
Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (S&P), pp. 39-57. IEEE, 2017.
Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. In Advances in Neural Information Processing Systems, pp. 11190-11201, 2019.
Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, and Zhangyang Wang. Adversarial robustness: From self-supervised pre-training to fine-tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 699-708, 2020.
Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. arXiv preprint arXiv:1807.04457, 2018.
Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Improving black-box adversarial attacks with a transfer-based prior. arXiv preprint arXiv:1906.06919, 2019.
Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning, pp. 2206-2216. PMLR, 2020.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, 2019.
Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, and Hong Liu. Towards robustness against natural language word substitutions. In 9th International Conference on Learning Representations, ICLR 2021. OpenReview.net, 2021.
Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185-9193, 2018.
Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Evading defenses to transferable adversarial examples by translation-invariant attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4312-4321, 2019.
Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, and Jun Zhu. Benchmarking adversarial robustness on image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 321-331, 2020.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 31-36, 2018.
Siddhant Garg and Goutham Ramakrishnan. BAE: BERT-based adversarial examples for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 6174-6181, 2020.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, and Douwe Kiela. Gradient-based adversarial attacks against text transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 5747-5757, 2021.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 8018-8025, 2020.
James Kennedy and Russell Eberhart. Particle swarm optimization. In Proceedings of ICNN'95 - International Conference on Neural Networks, volume 4, pp. 1942-1948. IEEE, 1995.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations, 2020.
Deokjae Lee, Seungyong Moon, Junhyeok Lee, and Hyun Oh Song. Query-efficient and scalable black-box adversarial attacks on discrete sequential data via bayesian optimization. In International Conference on Machine Learning, pp. 12478-12497. PMLR, 2022.
Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and William B Dolan. Contextualized perturbation for textual adversarial attack. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 5053-5069, 2021a.
J Li, S Ji, T Du, B Li, and T Wang. TextBugger: Generating adversarial text against real-world applications. In 26th Annual Network and Distributed System Security Symposium, 2019.
Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. BERT-ATTACK: Adversarial attack against BERT using BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 6193-6202, 2020.
Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, and Cho-Jui Hsieh. Searching for an effective defender: Benchmarking defense against adversarial word substitution. arXiv preprint arXiv:2108.12777, 2021b.
Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
Zhao Meng and Roger Wattenhofer. A geometry-inspired attack for generating natural language adversarial examples. In Proceedings of the 28th International Conference on Computational Linguistics, pp. 6679-6689, 2020.
George A Miller. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41, 1995.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1765-1773, 2017.
John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 119-126, 2020.
Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387. IEEE, 2016.
Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Qiang Dong, Maosong Sun, and Zhendong Dong. OpenHowNet: An open sememe-based lexical knowledge base. arXiv preprint arXiv:1901.09957, 2019.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. Generating natural language adversarial examples through probability weighted word saliency. In ACL, 2019.
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! arXiv preprint arXiv:1904.12843, 2019.
Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. Better robustness by more coverage: Adversarial and mixup data augmentation for robust finetuning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 1569-1576. Association for Computational Linguistics, 2021.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642, 2013.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations, 2018.
Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu. InfoBERT: Improving robustness of language models from an information theoretic perspective. In International Conference on Learning Representations, 2020.
Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, and Bo Li. Adversarial GLUE: A multi-task benchmark for robustness evaluation of language models. arXiv preprint arXiv:2111.02840, 2021.
Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112-1122, 2018.
Eric Wong, Leslie Rice, and J. Zico Kolter. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations, 2020.
Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L Yuille. Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2730-2739, 2019.
Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, and Anil Jain. Adversarial attacks and defenses in images, graphs and text: A review. arXiv preprint arXiv:1909.08072, 2019a.
Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. Topology attack and defense for graph neural networks: An optimization perspective. In International Joint Conference on Artificial Intelligence (IJCAI), 2019b.
Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, and Xue Lin. Structured adversarial attack: Towards general implementation and better interpretability. In International Conference on Learning Representations, 2019c.
Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6066-6080, 2020.
Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, and Bin Dong. You only propagate once: Accelerating adversarial training via maximal principle. arXiv preprint arXiv:1905.00877, 2019a.
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, 2019b.
Huangzhao Zhang, Hao Zhou, Ning Miao, and Lei Li. Generating fluent adversarial examples for natural languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5564-5569, 2019c.
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. Advances in Neural Information Processing Systems, 28:649-657, 2015.
Yihua Zhang, Guanhua Zhang, Mingyi Hong, Shiyu Chang, and Sijia Liu. Revisiting and advancing adversarial training through the lens of bi-level optimization. Submitted to NeurIPS, 2021.
Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. FreeLB: Enhanced adversarial training for natural language understanding. In International Conference on Learning Representations, 2019.
Junhua Zou, Zhisong Pan, Junyang Qiu, Xin Liu, Ting Rui, and Wei Li. Improving the transferability of adversarial examples with resized-diverse-inputs, diversity-ensemble and region fitting. In European Conference on Computer Vision, pp. 563-579. Springer, 2020.
252,531,169 | TEXT SUMMARIZATION WITH ORACLE EXPECTATION | Extractive summarization produces summaries by identifying and concatenating the most important sentences in a document. Since most summarization datasets do not come with gold labels indicating whether document sentences are summary-worthy, different labeling algorithms have been proposed to extrapolate oracle extracts for model training. In this work, we identify two flaws with the widely used greedy labeling approach: it delivers suboptimal and deterministic oracles. To alleviate both issues, we propose a simple yet effective labeling algorithm that creates soft, expectation-based sentence labels. We define a new learning objective for extractive summarization which incorporates learning signals from multiple oracle summaries and prove it is equivalent to estimating the oracle expectation for each document sentence. Without any architectural modifications, the proposed labeling scheme achieves superior performance on a variety of summarization benchmarks across domains and languages, in both supervised and zero-shot settings. 1 Our code and models can be found at https://github.com/yumoxu/oreo. | [
248426882,
223953416,
219036690,
67855635,
3510042,
174799390,
53083054,
8015669,
201304248,
3470398,
218571335,
1499080,
244714571,
204960716,
226283859,
216915028,
53295957,
215828313,
248835547,
13747425,
202542690,
52967399,
11212020,
215768182,
207880568
] | TEXT SUMMARIZATION WITH ORACLE EXPECTATION
Yumo Xu [email protected]
Institute for Language, Cognition and Computation School of Informatics
University of Edinburgh
10 Crichton Street, EH8 9AB, Edinburgh
Mirella Lapata
Institute for Language, Cognition and Computation School of Informatics
University of Edinburgh
10 Crichton Street, EH8 9AB, Edinburgh
TEXT SUMMARIZATION WITH ORACLE EXPECTATION
Extractive summarization produces summaries by identifying and concatenating the most important sentences in a document. Since most summarization datasets do not come with gold labels indicating whether document sentences are summary-worthy, different labeling algorithms have been proposed to extrapolate oracle extracts for model training. In this work, we identify two flaws with the widely used greedy labeling approach: it delivers suboptimal and deterministic oracles. To alleviate both issues, we propose a simple yet effective labeling algorithm that creates soft, expectation-based sentence labels. We define a new learning objective for extractive summarization which incorporates learning signals from multiple oracle summaries and prove it is equivalent to estimating the oracle expectation for each document sentence. Without any architectural modifications, the proposed labeling scheme achieves superior performance on a variety of summarization benchmarks across domains and languages, in both supervised and zero-shot settings. 1 Our code and models can be found at https://github.com/yumoxu/oreo.
INTRODUCTION
Summarization is the process of condensing a source text into a shorter version while preserving its information content. Thanks to neural encoder-decoder models (Bahdanau et al., 2015; Sutskever et al., 2014), Transformer-based architectures (Vaswani et al., 2017), and large-scale pretraining (Liu & Lapata, 2019; Zhang et al., 2020; Lewis et al., 2020), the past few years have witnessed a huge leap forward in summarization technology. Abstractive methods fluently paraphrase the main content of the input, using a vocabulary different from the original document, while extractive approaches are less creative - they produce summaries by identifying and subsequently concatenating the most important sentences in a document - but manage to avoid hallucinations, false statements and inconsistencies.
Neural extractive summarization is typically formulated as a sequence labeling problem (Cheng & Lapata, 2016), assuming access to (binary) labels indicating whether a document sentence should be in the summary. In contrast to the plethora of datasets (see Section 5 for examples) available for abstractive summarization (typically thousands of document-abstract pairs), there are no large-scale datasets with gold sentence labels for extractive summarization. Oracle labels are thus extrapolated from abstracts via heuristics, amongst which greedy search (Nallapati et al., 2017) is the most popular by far (Liu & Lapata, 2019; Xu et al., 2020; Dou et al., 2021; Jia et al., 2022).
In this work we challenge received wisdom and rethink whether greedy search is the best way to create sentence labels for extractive summarization. Specifically, we highlight two flaws with greedy labeling: (1) the search procedure is suboptimal, i.e., it does not guarantee a global optimum for the search objective, and (2) greedy oracles are deterministic, i.e., they yield a single reference extract for any given input by associating sentences in the document with its corresponding abstract.
Perhaps an obvious solution to the suboptimality problem would be to look for oracle summaries following a procedure based on beam search. Although beam search finds better oracles, we empirically observe that summarization models trained on these do not consistently improve over greedy oracles, possibly due to the higher risk of under-fitting (Narayan et al., 2018a): there are too few positive labels. Moreover, beam search would also create deterministic oracles. A summarization system trained on either greedy or beam oracles is optimized by maximizing the likelihood of a single oracle summary. This ignores the fact that there can be multiple valid summaries for an article; in other words, the summary hypothesis space is a naturally multi-modal probability distribution. We illustrate this point in Table 1.

Table 1: Sentence labels for a CNN/DM article according to different labeling schemes. Only the first 10 document sentences are shown. Greedy and Beam create oracle summaries (i.e., sentences with label 1) with greedy and beam search, respectively. OREO, our labeling algorithm, incorporates information from multiple summary hypotheses shown in the bar chart (R is the mean of ROUGE-1 and ROUGE-2). OREO assigns high scores to sentences 2 and 4, which contain an important named entity, Jasmine Coleman, and location, Croydon, South East London. In comparison, greedy and beam labeling consider only one oracle summary, and assign zero to sentences 2 or 4, failing to capture that these are informative and should probably be included in the summary.

| ID | Document Sentence | Greedy | Beam | OREO |
|---|---|---|---|---|
| 1 | Jasmine Coleman, 12, has been found safe and well some 50 miles from her home. | 0 | 1 | 0.568 |
| 2 | A 12-year-old girl who went missing from her family home at 2 AM amid fears she was driven away by an "older man" has been found safe and well. | 1 | 0 | 1.000 |
| 3 | Jasmine Coleman was reported as missing this morning after disappearing from her home in Lancing, West Sussex. | 0 | 0 | 0.429 |
| 4 | The child was found this afternoon following a police appeal some 50 miles away in Croydon, South East London. | … | … | … |
| … | In it, she was described as fair with long, blonde hair and as having possibly been wearing black riding trousers and a polo shirt or a paisley pattern dress. | 0 | 0 | 0.000 |
| 10 | On Saturday afternoon the force confirmed she had been found safe and well in Croydon but could not confirm the circumstances under which police located her. | 0 | 0 | 0.000 |

Reference Summary
• Jasmine Coleman disappeared from her home at around 2 AM this morning.
• Police believed she may have been driven towards London by an older man.
• She has been found safe and well in Croydon, South East London today.

(Bar chart omitted: R of the top-16 beams, beam size k = 256.)
In this paper we define a new learning objective for extractive summarization which promotes nondeterministic learning in the summary hypothesis space, and introduce OREO, ORacle ExpectatiOn labeling, as a simple yet effective sentence labeling scheme. We prove the equivalence between estimating OREO labels and optimizing the proposed learning objective. As a result, it is sufficient for current models to be trained on OREO labels without requiring any architectural changes.
Extensive experiments on summarization benchmarks show that OREO outperforms comparison labeling schemes in both supervised and zero-shot settings, including cross-domain and cross-lingual tasks. Additionally, we showcase that extracts created by OREO can better guide the learning and inference of a generative system, facilitating the generation of higher-quality abstracts. We further analyze OREO's behavior by measuring attainable summary knowledge at inference time, and demonstrate it is superior to related deterministic and soft labeling schemes, which we argue contributes to its consistent performance gain across summarization tasks.
RELATED WORK

Narayan et al. (2018a) were among the first to discuss problematic aspects of sentence labeling schemes for extractive summarization. They argue that labeling sentences individually as in Cheng & Lapata (2016) often generates too many positive labels, which leads to overfitting, while a model trained on greedy labels (Nallapati et al., 2017) underfits the data. Although extractive performance can be boosted via finetuning pretrained encoders with greedy labels (Liu & Lapata, 2019), Zhong et al. (2020) show that reranking summary candidates constructed from greedy predictions can further improve summary quality. This demonstrates that the underfitting problem caused by greedy labels still exists even when pretrained models are used. Issues with greedy labeling have also been discussed from the perspective of data bias, including lead bias (Nenkova, 2005; Kedzie et al., 2018; Grenander et al., 2019) - greedy labels display a bias towards lead sentences in news text and systems trained on them do not easily transfer to other domains - and monolingual bias (Jia et al., 2022) - greedy labels created for one language (e.g., English) do not transfer to a different language.
The idea of learning from multiple references has found application in various tasks including dialog response generation (Gao et al., 2019), machine translation (Khayrallah et al., 2020), and question answering (Zhu et al., 2020). Summarization datasets with multiple references are not generally available for model training, but a few have been manually created for system evaluation (Dang, 2005). In extractive summarization, gold references in the form of sentence labels do not usually exist, and learning from multiple references has not been yet explored. In this work, we use beam search to create multiple high-quality oracle summaries, from which summary-level supervision is aggregated into sentence labels to promote multi-reference learning for extractive summarization.
PROBLEM FORMULATION
Let $D = \{x_i\}_{i=1}^{m}$ denote a document consisting of sentences $x_i$. An extractive summarizer produces a summary hypothesis that represents salient information by identifying a subset of sentences $\hat{Y} = \{\hat{y}_j\}_{j=1}^{n}$, $n \ll m$, within $D$. In practice, ROUGE (Lin & Hovy, 2003), an automatic metric based on lexical overlap, is commonly adopted to evaluate $R(\hat{Y}, S)$, the quality of $\hat{Y}$ against gold summary reference $S$.
Following previous work (Cheng & Lapata, 2016; Nallapati et al., 2017; Narayan et al., 2018a; Liu & Lapata, 2019), we conceptualize extractive summarization as a sequence labeling task, and aim to build a system that estimates the summary-worthiness of each sentence in a non-autoregressive manner. As mentioned earlier, sentence labels need to be extrapolated first to train an extractive system, since existing datasets are label-free: they only contain document-abstract pairs. BERTSUM (Liu & Lapata, 2019) is a popular extractive model and representative of the approach sketched above. Built on top of BERT (Devlin et al., 2019), it adds a two-layer Transformer (Vaswani et al., 2017) for sentence representation and a sigmoid layer for summary prediction. During inference, document sentences $\{x_i\}_{i=1}^{m}$ are ranked based on their estimated scores, and summary $\{\hat{y}_j\}_{j=1}^{n}$ is identified. The number of sentences $n$ to be included in the summary is often pre-defined and fixed.
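For concreteness, a minimal sketch of this ranking-and-selection step (names are illustrative; the actual BERTSUM implementation may differ):

```python
# Rank sentences by the model's estimated scores and keep the top-n as the
# extract; `scores` is assumed to hold one relevance score per sentence.
def select_summary(scores, n):
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:n])  # restore document order for the final extract
```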
FROM EXISTING LABELING SCHEMES TO OREO
Early labeling methods create sentence labels $\ell_i$ by evaluating the similarity of $x_i$ against reference summary $S$ through various heuristics $h(\cdot)$, $\ell_i \overset{\text{def}}{=} h(x_i, S)$, including ROUGE (Lin & Hovy, 2003) and rule-based features such as sentence and paragraph position information and the number of mentioned entities (Woodsend & Lapata, 2010; Cheng & Lapata, 2016). These methods obtain local labels, as they assume a sentence can be classified as summary-worthy on its own, without taking the summary context into account.
However, model evaluation does not operate at the sentence-level, as a sentence has to form a summary hypothesis Y together with other candidates (Narayan et al., 2018a). The aim of extractive summarization is to deliver a high-quality summary hypothesis, i.e., a good set of sentences rather than a set of good sentences. A sentence might achieve a high score on its own but contribute little to a summary hypothesis, e.g., due to redundancy. An alternative is to obtain global labels, based on whether a sentence occurs within the optimal set of sentences which collectively achieve the highest score according to some evaluation metric like ROUGE (Lin & Hovy, 2003):
$$\ell_i \overset{\text{def}}{=} \mathbb{1}(x_i \in Y^*) \quad \text{where} \quad Y^* = \arg\max_{Y \in \mathcal{C}(D)} R(Y, S) \tag{1}$$
where $|\mathcal{C}(D)| = \binom{m}{n}$ is the hypothesis combinatorial space. As Equation (1) is computationally intractable, in practice it is approximated by further conditioning on a heuristic search space $\mathcal{S}$ such that $Y^* \approx \arg\max_{Y \in \mathcal{S}(D)} R(Y, S)$, and the approximated $Y^*$ is usually called an oracle summary. A widely adopted approximation is greedy labeling (Nallapati et al., 2017; Narayan et al., 2018a; Liu & Lapata, 2019; Zhong et al., 2020), which uses greedy search to maximize $R$ at each step of sentence selection (the algorithm stops when $R$ can no longer increase or the maximum number of steps is reached). We present the greedy labeling algorithm in Appendix A.
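As a rough illustration, the following Python sketch mirrors this greedy procedure; `rouge(indices, reference)` is an assumed helper that scores a candidate extract (e.g., the mean of ROUGE-1 and ROUGE-2), not part of any particular library:

```python
def greedy_labels(num_sents, reference, rouge, max_sents):
    """Hard 0/1 labels for a greedily built oracle summary."""
    selected, best = [], 0.0
    for _ in range(max_sents):
        candidates = [i for i in range(num_sents) if i not in selected]
        if not candidates:
            break
        # Try adding each remaining sentence and keep the best-scoring one.
        score, idx = max((rouge(selected + [i], reference), i) for i in candidates)
        if score <= best:  # R can no longer increase: stop early
            break
        selected.append(idx)
        best = score
    return [1 if i in selected else 0 for i in range(num_sents)]
```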
While significantly reducing complexity, greedy labeling does not guarantee a global optimum. To find better summaries to serve as oracles, we propose to replace greedy search with beam search which we refer to as beam labeling. We empirically find that around 8%-20% of (greedy) labels can be potentially improved with beam search (when setting the beam size to 256; see Appendix B for details). However, having better labels does not necessarily translate to performance improvements, and we discuss why this is the case next.
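Beam labeling then amounts to keeping only the single top-ranked beam as the oracle; a hedged sketch, assuming `beams` is a list of (sentence-index set, R score) pairs returned by beam search:

```python
def beam_labels(beams, num_sents):
    top, _ = max(beams, key=lambda pair: pair[1])  # single best hypothesis
    return [1 if i in top else 0 for i in range(num_sents)]
```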
OREO: ESTIMATING ORACLE EXPECTATION
Extractive summarization models are typically trained to optimize $\max p_\theta(Y^* \mid D)$, where the best hypothesis $Y^*$ can be approximated with greedy or beam search. This learning objective maximizes the probability at a single point $Y^*$, and assigns zero probability to other summary hypotheses $\hat{Y}$, regardless of their quality. We note that this formulation leads to a discrepancy between how the model is optimized and how the labels against which this optimization takes place are obtained. Given an input document, sequence labeling summarization models assume conditional independence at sentence-level inference, while in greedy labeling, each step in the process of maximizing $R(Y^*, S)$ conditions on the outcomes of previous steps. From an optimization perspective, this mismatch renders fitting a non-autoregressive sequence labeler difficult for two reasons: (1) learning to search and maximizing the likelihood at $Y^*$ is challenging, and so the model tends to underfit $Y^*$ (Narayan et al., 2018a), and (2) probabilities at other $Y$ with high evaluation scores remain under-estimated and uncalibrated due to supervision sparsity. Simply replacing greedy search with beam search does not resolve these optimization challenges, as point estimation is still performed.
A solution is to evaluate summary hypotheses during training and reward the model accordingly (Narayan et al., 2018a). However, this is non-trivial, as the metric $R$ is usually non-differentiable, and it is computationally expensive to sample from a large combinatorial hypothesis space, e.g., with Reinforcement Learning (Sutton & Barto, 1998). Rather than changing the training of the model, in this work we study how to derive a better sentence labeling algorithm that leads to a better optimization objective.
Specifically, we wish to incorporate multiple high-quality hypotheses as oracle summaries into the learning objective. Our key assumption is that extractive oracles are non-deterministic, but drawn from a distribution $p(Y^* \mid D, S)$. We thus formulate the objective for extractive summarization as:
$$\max \; \mathbb{E}_{Y^* \sim p(Y^* \mid D, S)} \big[ R(Y^*, S)\, p_\theta(Y^* \mid D) \big]. \tag{2}$$
Under this formulation, an optimized model is expected to assign high probability $p_\theta(Y \mid D)$ when there exists an oracle summary with high probability and a high score according to some quality evaluation metric.
From the perspective of sentence labeling, we note that a candidate $x_i$ relates to the summarization task through the oracle summary space $\mathcal{Y}$. As $\mathcal{Y}$ is a combinatorial space, the mapping $x_i \to Y^*$ is one-to-many. Therefore, we can compute the probability for each candidate to be selected via marginalization:

$$p(x_i \mid D, S) = \sum_{Y^* \in \mathcal{Y}} p(x_i, Y^* \mid D, S) = \sum_{Y^* \in \mathcal{Y}} p(x_i \mid Y^*, D)\, p(Y^* \mid D, S). \tag{3}$$
To connect marginalization in Equation (3) with the summarization objective in Equation (2), we further incorporate hypothesis evaluation $R(Y^*, S)$, and define the summary-worthiness of a sentence $x_i$ as the expectation of its associated oracle evaluation:
$$\ell_i \overset{\text{def}}{=} \sum_{Y^* \in \mathcal{Y}} R(Y^*, S)\, p(x_i \mid Y^*, D)\, p(Y^* \mid D, S) = \mathbb{E}_{\underbrace{Y^* \sim p(Y^* \mid D, S)}_{\text{oracle distribution}}} \big[ \underbrace{R(Y^*, S)}_{\text{oracle evaluation}}\, \underbrace{p(x_i \mid Y^*, D)}_{\text{oracle membership}} \big] \tag{4}$$
Algorithm 1 Labeling with Oracle Expectation

 1: function OREO(n, k, p)            ▷ Max number of sentences in a summary, beam size, and oracle distribution
 2:     Initialize beam B
 3:     for j ← 1 to n do
 4:         B ← STEP(j, B)
 5:     Initialize ℓ_i to 0, ∀i        ▷ Pre-scaled expectation
 6:     for (b, r) ∈ B do
 7:         for x_i ∈ b do
 8:             ℓ_i ← ℓ_i + r · p_b
 9:     ℓ ← MAXMINSCALE(ℓ)
10:     return ℓ
11: end function

 1: function STEP(j, B)               ▷ Step and beam
 2:     Initialize visited paths V
 3:     for (b, v) ∈ B do
 4:         if |b| < j then
 5:             continue               ▷ Skip early-stopped hypotheses
 6:         for i ← 1 to |D| do
 7:             b′ ← SORT(b + {i})
 8:             if b′ not in V then
 9:                 v′ ← ROUGE(b′)
10:                 if v′ > v then
11:                     B ← B + {(b′, v′)}
12:                 V ← V + {b′}
13:     return TOP-k(B)               ▷ Pruned beam
14: end function

where the oracle membership $p(x_i \mid Y^*, D)$ is identical to $y_i^* = \mathbb{1}(x_i \in Y^*)$
and the oracle distribution will be discussed in Section 4.3. Given a sequence labeling model θ, maximizing the oracle expectation for all input sentences is equivalent to the objective in Equation (2). We present the proof in Appendix C.
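A minimal sketch of the expectation step in Algorithm 1 (the beam search itself is omitted); `beams` and `probs` are assumed inputs holding the (hypothesis, R) pairs and their heuristic oracle probabilities:

```python
def oreo_labels(beams, probs, num_sents):
    """Soft labels: expected oracle evaluation per sentence, max-min scaled."""
    ell = [0.0] * num_sents
    for (hyp, r), p in zip(beams, probs):
        for i in hyp:          # oracle membership p(x_i | Y*, D) = 1
            ell[i] += r * p    # accumulate R(Y*, S) * p(Y* | D, S)
    lo, hi = min(ell), max(ell)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in ell]
```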
To be compatible with the standard sequence labeling architecture for extractive summarization, we perform MLE with a cross-entropy loss:
$$\min \mathcal{L}(\theta) = \min \sum_{i=1}^{m} \operatorname{CrossEntropy}\big( \bar{\ell}(x_i),\, p_\theta(x_i \mid D, S) \big) \tag{5}$$

where the scaled expectation $\bar{\ell}(x_i) = (\ell_i - \ell_{\min}) / (\ell_{\max} - \ell_{\min})$ constitutes the final sentence labels.
The details of oracle expectation labeling are given in Algorithm 1.
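In PyTorch terms, Equation (5) reduces to the usual per-sentence binary cross-entropy, only with soft targets; a hedged sketch (not the authors' exact code):

```python
import torch
import torch.nn.functional as F

def oreo_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # `labels` are the scaled expectations in [0, 1]; with hard 0/1 labels
    # this reduces to the standard BERTSUM training objective.
    return F.binary_cross_entropy_with_logits(logits, labels)
```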
COMPARISON WITH EXISTING LABELING ALGORITHMS
OREO creates soft (continuous) sentence labels, i.e., it incorporates summary-level evaluation while maintaining low sparsity. A detailed comparison with other labeling algorithms is provided in Table 3. Equation (4) also bears a resemblance to the RL objective used in Narayan et al. (2018a): $\max \mathbb{E}_{\hat{Y} \sim p_\theta(\hat{Y} \mid D)} [R(\hat{Y}, S)]$. Narayan et al. (2018a) evaluate summary hypotheses directly, while Equation (4) estimates sentence-level membership. This is a consequence of the nature of the sequence labeler, which does not explicitly represent the summary hypothesis space (which is combinatorial), and therefore supervision is delegated to sentences rather than summaries. By maximizing sentence-level likelihood, estimations for the associated hypotheses are updated, albeit indirectly. Narayan et al. (2018a) employ REINFORCE (Williams, 1992), an on-policy RL algorithm that samples from a model during training, while in Equation (4), samples are drawn from the non-parametrized oracle distribution $p(Y^* \mid D, S)$. We provide offline supervision in the form of static sample labels. In contrast to the online reward in RL, offline labels can be reused during the entire course of training and are therefore more sample efficient. Our labeling scheme can be seen as a type of offline bandit learning (Nguyen-Tang et al., 2022). While offline RL has been recently applied to abstractive summarization (Pang & He, 2021), it remains under-explored in extractive summarization.
THE ORACLE DISTRIBUTION
We derive the oracle distribution $p(Y^* \mid D)$ heuristically, bearing in mind that: (a) we have no prior knowledge as to which hypothesis is more or less likely to be an oracle summary, and therefore assume the oracle distribution to be uniform over a large hypothesis space; and (b) it is desirable for $p(Y^* \mid D)$ to positively correlate with $R(Y^*, S)$, and we expect this correlation to become stronger over the course of optimization. In practice, we use beam search (with beam size $k \ll |\mathcal{Y}|$) to find potential oracle summaries, and adopt a uniform distribution over top-ranked beams:

$$p(Y^* \mid D) \sim \mathcal{U}(1, t),$$

where $t < k$ is a hyper-parameter which we optimize on a development set. To further incorporate other oracle features, we also experimented with several weight annealing mechanisms over top beams as determined by $R$, our summary quality evaluation metric (see Appendix D for details).
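A sketch of this uniform choice over the top-t beams (beams assumed sorted by R in descending order):

```python
def oracle_probs(num_beams, t):
    t = min(t, num_beams)
    # U(1, t): equal mass on the top-t beams, zero on the rest.
    return [1.0 / t if rank < t else 0.0 for rank in range(num_beams)]
```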
EXPERIMENTS
SUPERVISED EXTRACTIVE SUMMARIZATION
We conducted all extractive summarization experiments with BERTSUM (Liu & Lapata, 2019), the neural summarization architecture introduced in Section 3. We opted for BERTSUM due to its simplicity and popularity in a wide range of summarization tasks (Zhong et al., 2020; Xu & Lapata, 2021; Jia et al., 2022). We nevertheless note that OREO is model-agnostic and can also be applied to more complex architectures. For a fair comparison between different labeling schemes, we follow the standard training configuration used in Liu & Lapata (2019) without any additional hyper-parameter optimization (e.g., for our specific labeling scheme). We set $R$, the summary quality evaluation metric, to the mean of ROUGE-1 and ROUGE-2. We report experiments on a variety of summarization datasets including CNN/DM (Hermann et al., 2015), XSum (Narayan et al., 2018b), Multi-News (Fabbri et al., 2019), Reddit (Kim et al., 2019), and WikiHow (Koupaee & Wang, 2018). Detailed statistics are shown in Table 2.
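One plausible implementation of this metric, using the rouge-score package (the paper does not prescribe a specific library, so treat this as an assumption):

```python
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)

def mean_rouge(candidate: str, reference: str) -> float:
    # RougeScorer.score takes (target, prediction) and returns F-scores.
    scores = _scorer.score(reference, candidate)
    return (scores["rouge1"].fmeasure + scores["rouge2"].fmeasure) / 2.0
```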
Our results are presented in Table 4. In the first block, we report the performance of the LEAD baseline, which considers the first k sentences in a document as the summary (see the last row in Table 2), and MATCHSUM (Zhong et al., 2020), a state-of-the-art system which performs summary reranking with another BERT model. The second block reports ORACLE performance with greedy labels, beam labels (k = 256), and OREO labels (k = 256, t = 16; we take the top-n sentences with non-zero scores). See Appendix E for the labeling hyperparameters k and t for each dataset, and for more details on experimental settings. The third block reports BERTSUM performance with different labeling schemes.
Although beam ORACLE is superior to greedy ORACLE and raises the upper bound, the overall performance of BERTSUM optimized on beam labels does not significantly improve upon its greedy counterpart. In fact, performance drops drastically on WikiHow. OREO shows inferior ORACLE results as it considers multiple top-ranked beams and is therefore not bound-preserving (see Section 5.5 for detailed analysis). However, BERTSUM trained with OREO labels consistently outperforms a model trained with beam labels, and achieves a 0.18-0.89 ROUGE-L improvement on different benchmarks compared to greedy labeling. Although BERTSUM with OREO still falls short of the state-of-the-art, we show that one-stage summarization modeling can be enhanced with better labeling, which can potentially serve as a foundation for more complex reranking methods.
ZERO-SHOT CROSS-DOMAIN EXTRACTIVE SUMMARIZATION
We further examine the generalization capability of models trained with OREO labels in a zero-shot setting. Specifically, we evaluate a model trained on CNN/DM against XSum, another news summarization dataset with shorter summaries (at most 2 sentences), and against Reddit and WikiHow, which represent entirely different domains (discussion forums and instructional text) and topics. Table 5 summarizes our results. Models trained with OREO perform on par with greedy labeling in-domain but display stronger generalization cross-domain. Greedy labels are more prone to lead bias: they deem as summary-worthy sentences mostly from the beginning of the document. Such bias is present in news datasets like CNN/DM but does not transfer to other domains like social media or Wikipedia. OREO alleviates this bias and performs better out-of-domain. As shown in Figures 1(a) and 1(b), OREO is less concentrated on lead sentences in news text.
ZERO-SHOT CROSS-LINGUAL EXTRACTIVE SUMMARIZATION
We next investigate the generalization capabilities of our approach in a cross-lingual setting. We use English data for model training and report zero-shot results on a variety of languages from the MLSum dataset (Scialom et al., 2020; see Table 2 for detailed statistics). Following Jia et al. (2022), we augment English articles with word replacement during training (Qin et al., 2021), using the MUSE (Lample et al., 2018) bilingual dictionary to align multilingual representations. We adopt a word replacement rate of 0.5. BERTSUM was initialized with XLM-R base (Conneau et al., 2020), a cross-lingual pretrained model (see XLS in Table 6). 2
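A hedged sketch of the word-replacement step, assuming `bidict` maps an English word to its candidate translations (e.g., loaded from a MUSE dictionary); this is our reading of the augmentation, not the authors' exact code:

```python
import random

def replace_words(tokens, bidict, rate=0.5):
    # Swap each token for a random dictionary translation with probability
    # `rate`; tokens without a dictionary entry are kept unchanged.
    return [
        random.choice(bidict[tok]) if tok in bidict and random.random() < rate else tok
        for tok in tokens
    ]
```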
The first block in Table 6 reports the results of a supervised XLS model which has access to training data for all languages; NLS is the zero-shot state-of-the-art system of Jia et al. (2022); their approach creates multiple sets of greedy labels with different machine translation methods and adopts a neural architecture to learn weights for the obtained label sets. The second block presents the results of a zero-shot XLS model with different labeling schemes. As can be seen, OREO labels on Spanish, French, and Russian are on par with greedy labeling. Systems trained with greedy labels exhibit less cross-lingual generalization on German and Turkish, while OREO improves system performance on German by 2.72 ROUGE-L points and on Turkish by 2.19. Previous work (Jia et al., 2022) shows that cross-lingual performance correlates with lead bias in the target language. For example, Turkish articles are less lead-biased than Russian in MLSum, and thus benefit more from better sentence labeling. OREO trails behind NLS, which is not surprising as the latter model benefits from more resources, i.e., machine translation and XLM-R large, and a more complex network architecture.
SUPERVISED ABSTRACTIVE SUMMARIZATION
We further assessed whether the proposed labeling scheme is of benefit to abstractive summarization. We experimented with GSUM (Dou et al., 2021), a state-of-the-art abstractive system that takes extractive summaries as additional input to guide the generation of document abstracts. During training, GSUM uses extractive oracles as guidance, while at inference time guidance is provided by summary hypotheses produced by a trained extractive system. We initialized GSUM with BART (Lewis et al., 2020), and used BERTSUM as the guidance model optimized with different labeling schemes (i.e., greedy, beam and OREO).
Abstractive summarization results are shown in Table 7. The first block shows the performance of BART (Lewis et al., 2020), which serves as a baseline. In the second block, we report the performance of GSUM (Dou et al., 2021) with greedy labels (default) in addition to beam- and OREO-based variants. As we can see, while beam labeling performs on par with its greedy counterpart, OREO guidance boosts performance by 0.37 ROUGE-L points over vanilla GSUM. We conclude that abstractive systems can also benefit from our expectation-based labeling algorithm, without any modeling changes or hyperparameter optimization. More results with varied guidance settings can be found in Appendix F. Examples of system output are shown in Appendix G.
COMPARISON WITH BOUND-PRESERVING METHODS
Let $\{z_i^*\}_{i=1}^{m}$, $z_i^* \in \{0, 1\}$, denote a multi-hot representation of an oracle summary. We define a labeling function as bound-preserving if there exists a constant $\gamma \in [0, 1]$ so that the condition $\mathbb{1}(\bar{\ell}(x_i) > \gamma) = z_i^*, \forall i$ holds. Intuitively, the condition holds if and only if the top-ranked sentences remain identical. Bound preservation of soft labels guarantees that the performance upper bound of a summarization system trained on soft labels equals that of a system trained on their original hard labels, e.g., obtained via greedy and beam search. For instance, label smoothing (Szegedy et al., 2016), a common technique for training deep neural networks, is bound-preserving. In contrast, OREO is generally not bound-preserving for either beam or greedy oracles. 3 To further analyse this property of soft labels, we propose ORMAX as a bound-preserving variant, by replacing the expectation with a max operator: $\ell_i \overset{\text{def}}{=} \max_{Y^* \sim p(Y^* \mid D, S)} \big[ p(x_i \mid Y^*)\, R(Y^*, S) \big]$. Compared to OREO, ORMAX incorporates multiple oracles while additionally preserving the upper bound of beam labels. 4

Table 8 shows the performance of label smoothing and ORMAX for extractive (first block) and abstractive (second block) summarization. Although label smoothing has been successfully applied to discriminative (Szegedy et al., 2016) and generative NLP tasks (Chen et al., 2018; Lewis et al., 2020), the soft labels it creates do not yield better results than their original hard labels in extractive summarization. Label smoothing performs implicit model calibration (Müller et al., 2019), which can potentially improve sentence ranking and selection at inference; however, it also imposes regularization in neural network training (Szegedy et al., 2016), which may render it less effective for extractive summarization, where there is a higher risk of underfitting (Narayan et al., 2018a). On the other hand, ORMAX performs on par with OREO on abstractive summarization, while it underperforms on extractive summarization. Although bound preservation is, intuitively, desirable, our experimental results suggest that it is neither a necessary nor a sufficient condition to produce a well-optimized summarization system.
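A minimal sketch of ORMAX, with the same assumed `beams`/`probs` inputs as the OREO sketch above; only the accumulation changes from a sum to a max:

```python
def ormax_labels(beams, probs, num_sents):
    ell = [0.0] * num_sents
    for (hyp, r), p in zip(beams, probs):
        for i in hyp:
            ell[i] = max(ell[i], p * r)  # keep the best associated oracle
    return ell
```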
PERFORMANCE ANALYSIS
Previous experiments revealed that oracles are not necessarily indicative of model performance (see Tables 4 and 8), due to the discrepancy between model optimization and sentence labeling, as discussed in Section 4.1. To further understand how different labeling schemes (and the oracles based on them) influence model performance, we quantify this discrepancy via a sampling-based method which simulates sentence selection for a sequence labeling model at inference.
Bearing in mind that a non-autoregressive sequence labeling model performs conditionally independent predictions and selects a fixed number of sentences $n$, we construct summary hypotheses $\hat{Y} = \{\hat{y}_j\}_{j=1}^{n}$ by drawing independent sentence samples, and measure the extent to which a model can attain summary-relevant knowledge (ASK) as:
$$\operatorname{ASK} \overset{\text{def}}{=} \mathbb{E}_{\{\hat{y}_j\}_{j=1}^{n} \sim p(x_i \mid D, S)} \big[ R(\{\hat{y}_j\}_{j=1}^{n}, S) \big] \quad \text{where} \quad p(x_i \mid D, S) = \frac{\ell_i}{\sum_{i=1}^{m} \ell_i} \tag{6}$$
Note that $p(x_i \mid D, S)$ is shaped by the soft labels, and thus results in varied sentence/summary samples for different labeling schemes. The comparison in Figure 2 explains why we observe performance gains from OREO despite it obtaining the lowest upper-bound performance. The upper bound considers only the best-case scenario at inference, ignoring the fact that some summary knowledge encoded in sentence labels can be hard or impossible to attain, e.g., when sentences in the oracle summary are highly dependent (and it is therefore challenging to select them jointly with a model making independent predictions), or the oracle summary contains fewer than $n$ sentences (which again entails that information is missing). Compared to other labeling schemes, OREO captures richer summary information that is attainable for sequence labeling models, narrowing the distance between BOUND and ASK. Consistent with our analysis, systems trained on OREO perform robustly on a wide variety of summarization tasks.
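A Monte Carlo sketch of Equation (6); `rouge(indices)` is an assumed helper scoring an index set against reference S, and sampling with replacement followed by deduplication is one of several reasonable readings of "independent sentence samples":

```python
import random

def attainable_summary_knowledge(labels, rouge, n, num_samples=1000):
    total = sum(labels)  # assumes at least one non-zero label
    probs = [l / total for l in labels]  # p(x_i | D, S) shaped by soft labels
    indices = list(range(len(labels)))
    scores = []
    for _ in range(num_samples):
        hyp = random.choices(indices, weights=probs, k=n)
        scores.append(rouge(sorted(set(hyp))))
    return sum(scores) / len(scores)
```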
CONCLUSIONS AND FUTURE WORK
We provided a comprehensive analysis of existing labeling schemes for extractive summarization, and identified two flaws in greedy labeling, namely that it delivers suboptimal and deterministic labels. We proposed a novel optimization objective to learn from multiple oracle summaries, which can be instantiated by a labeling scheme based on oracle expectation. Experimental results show that the proposed scheme achieves substantial improvements across domains and languages, without any architectural modifications. Our framework is agnostic to the labeling metric $R$; however, an important future direction is to incorporate different learning signals and provide sentence labels with more desirable properties, such as query relevance (Xu & Lapata, 2022) and faithfulness (Durmus et al., 2020). We would also like to parametrize the oracle distribution and estimate it from data, so as to derive even more accurate sentence labels.
A GREEDY SEARCH ALGORITHM
Algorithm 2 Labeling with Greedy Search

 1: function GREEDY(n)            ▷ Max number of sentences in a summary
 2:     Initialize hypothesis b to empty
 3:     for j ← 1 to n do
 4:         Initialize hypothesis score v to 0
 5:         for i ← 1 to |D| do
 6:             b′ ← b + {i}, v′ ← ROUGE(b′)
 7:             if v′ > v then
 8:                 b ← b′, v ← v′
 9:         if v = 0 then          ▷ Early stop
10:             break
11:     return b
12: end function

B QUALITY OF ORACLE LABELS

Quality of Greedy Labels. Figure 3 shows the distribution of greedy oracles over position in beam search results, as ranked by R (k = 256). As we can see, around 8%-20% of greedy labels are not top-ranked, and can therefore be potentially improved with beam labels. However, as shown in our experimental results, this improvement does not lead to a summarization system that is consistently better across different tasks.

Effects of Beam Size. As beam search does not guarantee a global optimum either, we further calculate $R(Y^*)/R(Y^*_{256})$ to evaluate the relative quality of the top beam $Y^*$ (the top beam found with varied beam sizes), compared against $Y^*_{256}$ (the top beam found with beam size 256). Figure 4 shows that the quality of the top beam converges when the beam size increases to 64. However, as oracles are not necessarily indicative of actual model performance (see Section 5.6 for details), we view beam size as a hyperparameter for optimization.
C EQUIVALENCE PROOF
Given input document $D$, sentence-level inference of a non-autoregressive summarization model $\theta$ is conditionally independent, and the likelihood of an oracle summary $Y^* = \{y_i^*\}_{i=1}^{m}$ (in its multi-hot representation over $m$ document sentences) is calculated as:

$$p_\theta(Y^* \mid D) = \prod_{i=1}^{m} p_\theta(x_i = y_i^* \mid D). \tag{7}$$
In this case, we show that maximizing the oracle expectation for all sentences is equivalent to the objective in Equation (2):
$$\begin{aligned}
\max \prod_{i=1}^{m} p_\theta(x_i = \ell_i \mid D, S) &= \prod_{i=1}^{m} \mathbb{E}_{Y^* \sim p(Y^* \mid D, S)} \big[ R(Y^*, S)\, p_\theta(x_i \mid Y^*, D) \big] && \text{(plug in OREO)} \\
&= \mathbb{E}_{Y^* \sim p(Y^* \mid D, S)} \Big[ R(Y^*, S) \prod_{i=1}^{m} p_\theta(x_i = y_i^* \mid D) \Big] && \text{(take out } \mathbb{E}\text{)} \\
&= \mathbb{E}_{Y^* \sim p(Y^* \mid D, S)} \big[ R(Y^*, S)\, p_\theta(Y^* \mid D) \big]. && \text{(apply Eq. (7))}
\end{aligned} \tag{8}$$
We note that our objective in Equation (2) serves as a lower bound of the classic extractive summarization objective $\max p_\theta(Y^* \mid D)$ weighted by oracle evaluation:

$$\begin{aligned}
&\max \; \mathbb{E}_{Y^* \sim p(Y^* \mid D, S)} \big[ R(Y^*, S)\, p_\theta(Y^* \mid D) \big] && (9) \\
&= \sum_{Y^* \in \mathcal{Y}} p(Y^* \mid D, S)\, R(Y^*, S)\, p_\theta(Y^* \mid D) && (10) \\
&\le R(Y^*_{\text{best}}, S)\, p_\theta(Y^*_{\text{best}} \mid D) \quad \text{where } Y^*_{\text{best}} = \arg\max_{Y^* \in \mathcal{Y}} R(Y^*, S) && (11) \\
&\propto p_\theta(Y^*_{\text{best}} \mid D) && (12)
\end{aligned}$$
The equality holds only if the oracle distribution $p(Y^* \mid D, S)$ is a Dirac delta distribution $\delta(Y^* - Y^*_{\text{best}})$.

D EFFECTS OF ORACLE DISTRIBUTION

We devise and experiment with several oracle distributions that assign non-uniform probability to the top $t$ beams:

1. Annealing over Rank $A_r$ decreases the unnormalized weight from 1 to 0 over top beams, assuming that the oracle distribution positively correlates with hypothesis rank.

2. Annealing over Quality $A_q$ sets the unnormalized weight for a top beam $Y^*$ to $R(Y^*)$, assuming that the oracle distribution positively correlates with hypothesis evaluation score.
3. Annealing over Locality $A_\ell$ defines the locality of a hypothesis $Y^*$ to be proportional to the mean of sentence-level scores $R(y_j^*, S), y_j^* \in Y^*$. This is based on the assumption that a hypothesis is more local if its sentences are, by themselves, high-scoring. Hypothetically, these sentences stand a higher chance of being selected by a non-autoregressive sequence labeling model, which presumably focuses more on their individual features rather than collective information (Zhong et al., 2020).

4. Annealing over Position Rank $A_p$ decreases the unnormalized weight from 1 to 0 over top beams reversely ranked by their position in the original document, assuming that the oracle distribution positively correlates with document position.

Table 9 presents our results on the CNN/DM validation set. As we can see, the above-mentioned distributions do not yield better results than simply adopting a uniform distribution (third row). We believe this is because hand-crafted distributions are all associated with strong assumptions, which may not be valid for real-world summarization data. However, we note that most of these distributions still manage to outperform greedy labels, showing consistent gains when considering information from multiple oracles.
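As one concrete example, a sketch of the rank-annealing variant $A_r$ (the other variants differ only in how the unnormalized weights are set):

```python
def rank_annealed_probs(t):
    # Unnormalized weights decay linearly from 1 toward 0 over the top-t
    # beams, then are normalized into a distribution.
    weights = [1.0 - rank / t for rank in range(t)]
    z = sum(weights)
    return [w / z for w in weights]
```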
Apart from heuristic oracle distributions, we could also learn a parametrized distribution from data.
For instance, a model with a uniform oracle distribution could be trained to derive a potentially more accurate estimation from its predictions. A new set of sentence labels would then be calculated with Equation (5), and used to improve the optimization of a new model.

E DETAILS OF EXPERIMENTAL SETTINGS

Labeling hyperparameters (beam size k and top beam size t) for each dataset are listed in Table 10. For abstractive summarization training, we used one sample for each GPU and accumulated gradients every 32 steps. We fine-tuned all models on CNN/Daily Mail with a learning rate of $3 \times 10^{-5}$ for 20,000 optimization steps and a warmup of 500 steps. Following the evaluation steps in BART (Lewis et al., 2020) and GSUM (Dou et al., 2021), we used files2rouge 5 to evaluate abstractive summaries.
F EXTENDED RESULTS
G SYSTEM OUTPUT
Document:
The largest single high-definition map of mysterious dark matter has been produced. It is the first in a series of maps of the cosmos that will eventually allow a 3D view of dark matter across one eighth of the night sky. And the map should allow astronomers to study how galaxies formed in the universe. University of Manchester researchers have revealed an HD dark matter map (shown). It shows clumps of mystery particles across 0.4 per cent of the sky. The goal is to eventually map 12.5 per cent over five years. Red here shows more dark matter, and blue shows less. The moon is shown top left for scale. A team from the University of Manchester, led by Dr Sarah Bridle, has spent the past two years measuring the shapes of galaxies used to construct the map. And the map was released today at the April meeting of the American Physical Society in Baltimore, Maryland. It was created using data captured by the Dark Energy Camera, a 570-megapixel imaging device that is the primary instrument for the Dark Energy Survey (DES) led by Fermilab in the US...
Reference Summary: University of Manchester researchers reveal HD dark matter map. It shows clumps of mystery particles across 0.4 per cent of the sky. The goal is to eventually map 12.5 per cent over five years. Results could help reveal how galaxies form in the universe.
Extractive Summarization with BERTSUM

Greedy: University of Manchester researchers have revealed an HD dark matter map (shown). It shows clumps of mystery particles across 0.4 per cent of the sky. It is the first in a series of maps of the cosmos that will eventually allow a 3d view of dark matter across one eighth of the night sky.
Beam: University of Manchester researchers have revealed an HD dark matter map (shown). The goal is to eventually map 12.5 per cent over five years. It is the first in a series of maps of the cosmos that will eventually allow a 3d view of dark matter across one eighth of the night sky.
OREO: University of Manchester researchers have revealed an HD dark matter map (shown). It shows clumps of mystery particles across 0.4 per cent of the sky. The goal is to eventually map 12.5 per cent over five years.
Abstractive Summarization with GSUM

Greedy: University of Manchester researchers have revealed an HD dark matter map. It shows clumps of mystery particles across 0.4 per cent of the sky. The goal is to eventually map 12.5 per cent over five years. It is the first in a series of maps of the cosmos that will eventually allow a 3D view of dark matter across one eighth of the night sky. And the map should allow astronomers to study how galaxies formed in the universe.
Beam: University of Manchester researchers have revealed an HD dark matter map. It shows clumps of mystery particles across 0.4 per cent of the sky. The goal is to eventually map 12.5 per cent over five years. It is the first in a series of maps of the cosmos that will eventually allow a 3D view of dark matter across one eighth of the night sky. And the map should allow astronomers to study how galaxies formed in the universe.
OREO: University of Manchester researchers have revealed an HD dark matter map of the cosmos. It shows clumps of mystery particles across 0.4 per cent of the sky. Red here shows more dark matter, and blue shows less. The goal is to eventually map 12.5 per cent over five years. And the map should allow astronomers to study how galaxies formed in the universe.

Table 13: Examples of system output on the CNN/DM test set. We illustrate differences among labeling algorithms with a sentence from the reference summary labeled in red. BERTSUM trained with OREO labels includes the sentence in its extract. In contrast, Greedy selects a suboptimal, verbose sentence highlighted in blue, potentially due to its position at the beginning of the original document (lead bias). The Beam extract includes both sentences and is therefore the most redundant. Using these extracts as inference guidance, GSUM creates abstractive summaries which for Greedy and Beam are identically verbose, while the OREO summary is more concise.

Figure 5: Illustration of beam search (left; early stopped at j = 2), ranked beams (middle), and ranked sentences (right). We show sentences (in squares) and their scores (in brackets) under different labeling algorithms. For OREO, there does not exist a γ that halves the ranked list in a way that the top half is identical to the sentences selected by the top beam.
Figure 1: Distribution of label values over sentence positions in documents (development set).

Figure 2: Upper bound and attainable summary knowledge captured by sentence labeling methods (CNN/DM validation set), for Label Smoothing (+LS), ORMAX, and OREO.

Figure 4: Relative quality of best beams for beam labeling; results shown for different beam sizes across three validation sets.
Table 2: Datasets for monolingual and cross-lingual (last column) summarization. Compression rate denotes the number of sentences extracted to form a summary; † denotes that trigram blocking (Liu & Lapata, 2019) was applied in sentence selection for redundancy removal.

| Datasets | CNN/DM | XSum | Multi-News | Reddit | WikiHow | MLSum |
|---|---|---|---|---|---|---|
| Language | En | En | En | En | En | En/De/Es/Fr/Ru/Tr |
| Domain | Newswire | Newswire | Newswire | Social Media | Wikipedia | Newswire |
| #Train | 287,084 | 203,028 | 44,972 | 41,675 | 168,126 | 287,227 (En) |
| #Validation | 13,367 | 11,273 | 5,622 | 645 | 6,000 | 13,368 (En) |
| #Test | 11,489 | 11,332 | 5,622 | 645 | 6,000 | 53,981 (Non-En) |
| #Compression Rate | 3† | 2 | 9 | 2 | 4† | 2† |
Table 3: Sentence labeling schemes for extractive summarization. Sum refers to summary-level evaluation. m, n, and k respectively denote document size, summary size, and beam size.

| Scheme | Sum | Sparsity | Complexity |
|---|---|---|---|
| Local | ✗ | Medium | O(m) |
| Global | ✓ | High | O(m! / (n!(m−n)!)) |
| Greedy | ✓ | High | O(nm log m) |
| Beam (ours) | ✓ | High | O(nmk log(mk)) |
| OREO (ours) | ✓ | Low | O(nmk log(mk)) |
Table 4: Extractive performance (test set, ROUGE-L) on CNN/DM (CD), XSum (XS), Multi-News (MN), Reddit (RD), and WikiHow (WH). We highlight highest and lowest scores.

| Systems | CD | XS | MN | RD | WH |
|---|---|---|---|---|---|
| LEAD | 36.67 | 14.79 | 38.97 | 14.34 | 23.24 |
| MATCHSUM | 40.38 | 18.41 | 41.89 | 20.13 | 29.58 |
| ORACLE | | | | | |
| Greedy | 48.87 | 23.57 | 44.27 | 28.98 | 32.68 |
| Beam | 52.86 | 23.71 | 46.40 | 29.11 | 36.51 |
| OREO | 50.08 | 20.07 | 46.14 | 24.55 | 34.28 |
| BERTSUM | | | | | |
| Greedy | 39.56 | 17.16 | 41.53 | 19.11 | 28.24 |
| Beam | 39.66 | 17.66 | 41.50 | 19.81 | 25.71 |
| OREO | 39.96 | 17.81 | 41.71 | 20.02 | 28.46 |
Table 5: Cross-domain performance for models trained on CNN/DM (zero-shot; ROUGE-L). We highlight highest and lowest performance.

| CNN/DM → | XS | RD | WH |
|---|---|---|---|
| BERTSUM | | | |
| Greedy | 15.62 | 17.06 | 25.39 |
| Beam | 15.62 | 17.64 | 24.77 |
| OREO | 15.58 | 17.71 | 25.62 |
Table 6: Zero-shot cross-lingual summarization on MLSum (test set, ROUGE-L). Systems with * are supervised. Systems with † use XLM-R large.

| Systems | De | Es | Fr | Ru | Tr | AVG |
|---|---|---|---|---|---|---|
| XLS*† | 41.28 | 21.99 | 24.12 | 10.44 | 33.29 | 26.22 |
| NLS† | 34.95 | 21.20 | 23.59 | 10.13 | 31.49 | 24.27 |
| XLS | | | | | | |
| Greedy | 28.75 | 20.83 | 23.10 | 9.43 | 29.52 | 22.33 |
| Beam | 26.43 | 20.90 | 23.41 | 9.42 | 29.80 | 21.99 |
| OREO | 31.47 | 20.84 | 23.10 | 9.44 | 31.71 | 23.31 |
Table 7: Results for abstractive summarization on CNN/DM (test set). R-1/2/L is shorthand for ROUGE.

| Systems | R-1 | R-2 | R-L |
|---|---|---|---|
| BART | 44.16 | 21.28 | 40.90 |
| GSUM | | | |
| Greedy | 44.40 | 21.52 | 41.23 |
| Beam | 44.41 | 21.55 | 41.26 |
| OREO | 44.81 | 21.83 | 41.60 |
Table 8: Comparison of OREO to bound-preserving labeling (CNN/DM test set). Results shown for extractive (BERTSUM) and abstractive (GSUM) summarization. LS refers to Label Smoothing (α optimized between {0.1, 0.2, 0.3}). R-1/2/L is shorthand for ROUGE.

| Systems | R-1 | R-2 | R-L |
|---|---|---|---|
| BERTSUM | | | |
| Greedy | 43.18 | 20.16 | 39.56 |
| Greedy+LS | ↑0.03 | ↓0.01 | ↑0.03 |
| Beam | 43.25 | 20.14 | 39.66 |
| Beam+LS | ↑0.00 | ↓0.02 | ↑0.01 |
| OREO | 43.58 | 20.43 | 39.96 |
| ORMAX | ↓0.20 | ↓0.27 | ↓0.20 |
| GSUM | | | |
| OREO | 44.81 | 21.83 | 41.60 |
| ORMAX | ↑0.04 | ↓0.02 | ↓0.00 |
Table 9: OREO results with different oracle distributions on the CNN/DM validation set.

| Systems | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|
| BERTSUM | | | |
| Greedy | 44.00 | 20.73 | 40.45 |
| OREO, U(1, t) | 44.26 | 20.92 | 40.69 |
| OREO, A_r(1, 16) | 44.06 | 20.76 | 40.50 |
| OREO, A_q(1, 16) | 44.17 | 20.83 | 40.60 |
| OREO, A_ℓ(1, 16) | 44.03 | 20.73 | 40.42 |
| OREO, A_p(1, 16) | 43.91 | 20.45 | 40.26 |
Table 10: Hyperparameters for supervised training of BERTSUM on five summarization datasets.

| Monolingual | CNN/DM | XSum | Multi-News | Reddit | WikiHow |
|---|---|---|---|---|---|
| Beam size k | 256 | 16 | 16 | 256 | 16 |
| Oracle distribution t | 16 | 16 | 16 | 32 | 16 |
Table 11: Cross-lingual zero-shot performance on test sets of MLSum (ROUGE-L).

| Systems | De | Es | Fr | Ru | Tr | AVG |
|---|---|---|---|---|---|---|
| MBERTSUM | | | | | | |
| Greedy | 22.68 | 20.44 | 22.70 | 8.71 | 27.89 | 20.48 |
| Beam | 28.36 | 20.55 | 22.74 | 9.30 | 29.38 | 22.07 |
| OREO | 29.13 | 20.62 | 22.82 | 9.13 | 30.78 | 22.50 |

Cross-Lingual Results. We further initialize BERTSUM with mBERT (Devlin et al., 2019). As we can see in Table 11, MBERTSUM finetuned with greedy labels shows inferior performance across languages. Nevertheless, OREO leads to substantial performance gains on both German and Turkish (we observe a similar trend when BERTSUM is initialized with XLM-R).
Table 12: Results for abstractive summarization on the CNN/DM test set.

| Systems | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|
| GSUM trained with greedy ORACLE guidance | | | |
| Greedy | 44.40 | 21.52 | 41.23 |
| Beam | 44.41 | 21.55 | 41.26 |
| OREO | 44.48 | 21.60 | 41.33 |
| GSUM trained with OREO ORACLE guidance | | | |
| Greedy | 44.66 | 21.72 | 41.43 |
| Beam | 44.68 | 21.74 | 41.47 |
| OREO | 44.81 | 21.83 | 41.60 |

Abstractive Results. We show extended abstractive results in Table 12. The first block shows the performance of vanilla GSUM (Dou et al., 2021), which uses greedy extractive oracles as guidance during training. During inference, we compare three types of extractive guidance produced by BERTSUM trained with greedy, beam, and OREO labels. Despite the training-testing discrepancy in guidance, extractive guidance with OREO labels helps generate better downstream abstractive summaries. To further validate the effectiveness of OREO on the optimization of GSUM, we trained GSUM with OREO ORACLE guidance; the results are shown in the second block. As we can see, adopting OREO guidance during training further boosts system performance, while the system using OREO for both training and testing achieves the best results, i.e., a 0.37 ROUGE-L improvement over GSUM.
2 We also experimented with mBERT (Devlin et al., 2019) and achieved similar results. See Appendix F.
3 Under two special cases our labeling scheme is bound-preserving: (1) with beam size k = 1, OREO is equivalent to greedy labeling, and (2) with top beam size t = 1, OREO is equivalent to beam labeling.

4 ORMAX is trivially bound-preserving since sentences selected by the top-ranked beam receive the highest score, and the top-ranked beam can be reconstructed from the top-ranked sentences. We illustrate the difference between OREO and bound-preserving methods in Figure 5 in Appendix H.
5 https://github.com/pltrdy/files2rouge
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, May 2015.

Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. The best of both worlds: Combining recent advances in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 76-86, Melbourne, Australia, July 2018.

Jianpeng Cheng and Mirella Lapata. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pp. 484-494, Berlin, Germany, 2016.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 8440-8451, Online, July 2020.

Hoa Trang Dang. Overview of DUC 2005. In Proceedings of the 2005 Document Understanding Conference, pp. 1-12, Vancouver, Canada, October 2005.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171-4186, Minneapolis, Minnesota, 2019.

Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. GSum: A general framework for guided neural abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4830-4842, Online, June 2021.

Esin Durmus, He He, and Mona Diab. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5055-5070, Online, July 2020.

Alexander Richard Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. Multi-News: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1074-1084, Florence, Italy, August 2019.

Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. Jointly optimizing diversity and relevance in neural response generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1229-1238, Minneapolis, Minnesota, June 2019.

Matt Grenander, Yue Dong, Jackie Chi Kit Cheung, and Annie Louis. Countering the effects of lead bias in news summarization via multi-stage training and auxiliary losses. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 6019-6024, Hong Kong, China, November 2019.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems, pp. 1693-1701, Cambridge, MA, USA, 2015.

Ruipeng Jia, Xingxing Zhang, Yanan Cao, Zheng Lin, Shi Wang, and Furu Wei. Neural label search for zero-shot multi-lingual extractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 561-570, Dublin, Ireland, May 2022.

Chris Kedzie, Kathleen McKeown, and Hal Daumé III. Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1818-1828, Brussels, Belgium, October-November 2018.
Simulated multiple reference training improves low-resource machine translation. Huda Khayrallah, Brian Thompson, Matt Post, Philipp Koehn, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineHuda Khayrallah, Brian Thompson, Matt Post, and Philipp Koehn. Simulated multiple reference training improves low-resource machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 82-89, Online, November 2020.
Abstractive summarization of Reddit posts with multi-level memory networks. Byeongchang Kim, Hyunwoo Kim, Gunhee Kim, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, Minnesota1Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. Abstractive summarization of Reddit posts with multi-level memory networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, June 2019.
Mahnaz Koupaee, William Yang Wang, arXiv:1810.09305Wikihow: A large scale text summarization dataset. arXiv preprintMahnaz Koupaee and William Yang Wang. Wikihow: A large scale text summarization dataset. arXiv preprint arXiv:1810.09305, 2018.
Word translation without parallel data. Guillaume Lample, Alexis Conneau, Marc'aurelio Ranzato, Ludovic Denoyer, Hervé Jégou, Proceedings of the 6th International Conference on Learning Representations. the 6th International Conference on Learning RepresentationsVancouver, BC, CanadaGuillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada, April -May 2018.
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineMike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871-7880, Online, July 2020.
Automatic evaluation of summaries using n-gram co-occurrence statistics. Chin-Yew Lin, Eduard Hovy, Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational LinguisticsEdmonton, CanadaChin-Yew Lin and Eduard Hovy. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pp. 71-78, Edmonton, Canada, 2003.
Text summarization with pretrained encoders. Yang Liu, Mirella Lapata, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaYang Liu and Mirella Lapata. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 3730-3740, Hong Kong, China, November 2019.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, Roberta, arXiv:1907.11692A robustly optimized bert pretraining approach. arXiv preprintYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
When does label smoothing help?. Rafael Müller, Simon Kornblith, Geoffrey E Hinton, Advances in neural information processing systems. 32Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? Advances in neural information processing systems, 32, 2019.
Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. Ramesh Nallapati, Feifei Zhai, Bowen Zhou, Proceedings of the 31st AAAI Conference on Artificial Intelligence. the 31st AAAI Conference on Artificial IntelligenceSan Francisco, California, USARamesh Nallapati, Feifei Zhai, and Bowen Zhou. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pp. 3075-3081, San Francisco, California, USA, 2017.
Ranking sentences for extractive summarization with reinforcement learning. Shashi Narayan, Shay B Cohen, Mirella Lapata, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaLong Papers1Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1747-1759, New Orleans, Louisiana, 2018a.
Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. Shashi Narayan, Shay B Cohen, Mirella Lapata, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumShashi Narayan, Shay B. Cohen, and Mirella Lapata. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1797-1807, Brussels, Belgium, October-November 2018b.
Automatic text summarization of newswire: Lessons learned from the document understanding conference. Ani Nenkova, Proceedings, The Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Conference. Manuela M. Veloso and Subbarao KambhampatiThe Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence ConferencePittsburgh, Pennsylvania, USAAni Nenkova. Automatic text summarization of newswire: Lessons learned from the document understanding conference. In Manuela M. Veloso and Subbarao Kambhampati (eds.), Proceedings, The Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Conference, pp. 1436-1441, Pittsburgh, Pennsylvania, USA, July 2005.
Offline neural contextual bandits: Pessimism, optimization and generalization. Thanh Nguyen-Tang, A Tuan Gupta, Svetha Nguyen, Venkatesh, Proceedings of the 10th International Conference on Learning Representations. the 10th International Conference on Learning RepresentationsOnlineThanh Nguyen-Tang, Sunil Gupta, A. Tuan Nguyen, and Svetha Venkatesh. Offline neural contextual bandits: Pessimism, optimization and generalization. In Proceedings of the 10th International Conference on Learning Representations, Online, April 2022.
Text generation by learning from demonstrations. Yuanzhe Richard, He Pang, He, Proceedings of the 9th International Conference on Learning Representations. the 9th International Conference on Learning RepresentationsOnlineRichard Yuanzhe Pang and He He. Text generation by learning from demonstrations. In Proceedings of the 9th International Conference on Learning Representations, Online, May 2021.
CoSDA-ML: multi-lingual code-switching data augmentation for zero-shot cross-lingual nlp. Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che, Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence. the Twenty-Ninth International Conference on International Joint Conferences on Artificial IntelligenceLibo Qin, Minheng Ni, Yue Zhang, and Wanxiang Che. CoSDA-ML: multi-lingual code-switching data augmentation for zero-shot cross-lingual nlp. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pp. 3853-3860, 2021.
MLSUM: The multilingual summarization corpus. Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language ProcessingOnlineThomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. MLSUM: The multilingual summarization corpus. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pp. 8051-8067, Online, November 2020.
Sequence to sequence learning with neural networks. Ilya Sutskever, Oriol Vinyals, Quoc V Le, Advances in Neural Information Processing Systems. Curran Associates, Inc27Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pp. 3104-3112. Curran Associates, Inc., 2014.
Reinforcement learning: An introduction. S Richard, Andrew G Sutton, Barto, MIT pressRichard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1998.
Rethinking the inception architecture for computer vision. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, Zbigniew Wojna, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionChristian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826, 2016.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Illia Kaiser, Polosukhin, Advances in Neural Information Processing Systems. Curran Associates, Inc30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30, pp. 5998-6008. Curran Associates, Inc., 2017.
Simple statistical gradient-following algorithms for connectionist reinforcement learning. J Ronald, Williams, Machine learning. 83Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3):229-256, 1992.
Automatic generation of story highlights. Kristian Woodsend, Mirella Lapata, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. the 48th Annual Meeting of the Association for Computational LinguisticsUppsala, SwedenKristian Woodsend and Mirella Lapata. Automatic generation of story highlights. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp. 565-574, Uppsala, Sweden, July 2010.
Discourse-aware neural extractive text summarization. Jiacheng Xu, Zhe Gan, Yu Cheng, Jingjing Liu, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineJiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. Discourse-aware neural extractive text sum- marization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5021-5031, Online, July 2020.
Generating query focused summaries from query-free resources. Yumo Xu, Mirella Lapata, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnlineLong Papers1Yumo Xu and Mirella Lapata. Generating query focused summaries from query-free resources. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 6096-6109, Online, August 2021.
Document summarization with latent queries. Yumo Xu, Mirella Lapata, 10.1162/tacl_a_00480Transactions of the Association for Computational Linguistics. 10Yumo Xu and Mirella Lapata. Document summarization with latent queries. Transactions of the Association for Computational Linguistics, 10:1-16, 2022. doi: 10.1162/tacl_a_00480.
PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter Liu, Proceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine Learning119Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 11328- 11339, July 2020.
Extractive summarization as text matching. Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineMing Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. Extractive summarization as text matching. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6197-6208, Online, July 2020.
Question answering with long multiple-span answers. Ming Zhu, Aman Ahuja, Da-Cheng Juan, Wei Wei, Chandan K Reddy, Findings of the Association for Computational Linguistics: EMNLP 2020. OnlineMing Zhu, Aman Ahuja, Da-Cheng Juan, Wei Wei, and Chandan K. Reddy. Question answering with long multiple-span answers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 3840-3849, Online, November 2020.
Monolingual Extractive Summarization We used three GeForce RTX 2080 GPUs for model training and bert.base in our experiments. We refer interested readers to Liu & Lapata (2019) for detailed training configurations, which are identical to ours. Following Liu & Lapata (2019), we used the Python package pyrouge for calculating ROUGE. For our proposed labeling methods, we searched over the following (k, t) pairs: (256, 32), (256, 16), (256, 8), (32, 32), (16, 16), (8, 8).
We show the best-performing hyperparameter combinations for each dataset in Table 10. We used standard parameter settings for all experiments: ROUGE-1.5.5.pl -c 95 -m -r 1000 -n 2 -a. We used the datasets as preprocessed by Zhong et al. (2020), which can be accessed at: https://github.com/maszhongming/matchsum.
Cross-Lingual Extractive Summarization In our cross-lingual experiments we used four GeForce RTX 2080 GPUs for model training with xlmr.base and mbert.base. In particular, interval embeddings (Liu & Lapata, 2019) were used in MBERTSUM but not in XLS, since XLM-R removes segment embeddings from the input following RoBERTa (Liu et al., 2019). We refer readers to Jia et al. (2022) for details on the training configuration; we made minimal adjustments to adapt to our training environment, i.e., no training hyperparameters were specifically optimized for our method. We set the batch size to 4 and accumulated gradients every 32 steps. Following Jia et al. (2022), we used a word replacement rate of 0.5 to learn cross-lingual representation alignment. We fine-tuned models on the English data with a learning rate of 2 × 10^-3 for 50,000 optimization steps and 10,000 warm-up steps. Following Jia et al. (2022), we used the Python package spacy for non-English hypothesis/reference tokenization, and pyrouge for ROUGE calculation.
Abstractive Summarization In our abstractive summarization experiments we used four GeForce RTX 2080 GPUs for model training with bart.large; the latter was also used in our baseline BART system and to initialize GSUM. Due to GPU memory limitations, we set the maximum length of an input document to 640 tokens (with the excess clipped) and used half-precision floats for efficient training.
245,837,268 | THE EFFECTS OF REWARD MISSPECIFICATION: MAPPING AND MITIGATING MISALIGNED MODELS | Reward hacking-where RL agents exploit gaps in misspecified reward functions-has been widely observed, but not yet systematically studied. To understand how reward hacking arises, we construct four RL environments with misspecified rewards. We investigate reward hacking as a function of agent capabilities: model capacity, action space resolution, observation space noise, and training time. More capable agents often exploit reward misspecifications, achieving higher proxy reward and lower true reward than less capable agents. Moreover, we find instances of phase transitions: capability thresholds at which the agent's behavior qualitatively shifts, leading to a sharp decrease in the true reward. Such phase transitions pose challenges to monitoring the safety of ML systems. To address this, we propose an anomaly detection task for aberrant policies and offer several baseline detectors. | [
21850704,
28202810,
13046179
] | THE EFFECTS OF REWARD MISSPECIFICATION: MAPPING AND MITIGATING MISALIGNED MODELS
Alexander Pan
Caltech and UC Berkeley
Kush Bhatia
UC Berkeley
Jacob Steinhardt
UC Berkeley
THE EFFECTS OF REWARD MISSPECIFICATION: MAPPING AND MITIGATING MISALIGNED MODELS
Reward hacking-where RL agents exploit gaps in misspecified reward functions-has been widely observed, but not yet systematically studied. To understand how reward hacking arises, we construct four RL environments with misspecified rewards. We investigate reward hacking as a function of agent capabilities: model capacity, action space resolution, observation space noise, and training time. More capable agents often exploit reward misspecifications, achieving higher proxy reward and lower true reward than less capable agents. Moreover, we find instances of phase transitions: capability thresholds at which the agent's behavior qualitatively shifts, leading to a sharp decrease in the true reward. Such phase transitions pose challenges to monitoring the safety of ML systems. To address this, we propose an anomaly detection task for aberrant policies and offer several baseline detectors.
Figure 1: An example of reward hacking when cars merge onto a highway. A human-driver model controls the grey cars and an RL policy controls the red car. The RL agent observes positions and velocities of nearby cars (including itself) and adjusts its acceleration to maximize the proxy reward. At first glance, both the proxy reward and true reward appear to incentivize fast traffic flow. However, smaller policy models allow the red car to merge, whereas larger policy models exploit the misspecification by stopping the red car. When the red car stops merging, the mean velocity increases (merging slows down the more numerous grey cars). However, the mean commute time also increases (the red car is stuck). This exemplifies a phase transition: the qualitative behavior of the agent shifts as the model size increases.
More worryingly, we observe several instances of phase transitions. In a phase transition, the more capable model pursues a qualitatively different policy that sharply decreases the true reward. Figure 1 illustrates one example: An RL agent regulating traffic learns to stop any cars from merging onto the highway in order to maintain a high average velocity of the cars on the straightaway.
Since there is little prior warning of phase transitions, they pose a challenge to monitoring the safety of ML systems. Spurred by this challenge, we propose an anomaly detection task (Hendrycks & Gimpel, 2017;Tack et al., 2020): Can we detect when the true reward starts to drop, while maintaining a low false positive rate in benign cases? We instantiate our proposed task, POLYNOMALY, for the traffic and COVID environments (Section 5). Given a trusted policy with moderate performance, one must detect whether a given policy is aberrant. We provide several baseline anomaly detectors for this task and release our data at https://github.com/aypan17/reward-misspecification.
RELATED WORK
Previous works have focused on classifying different types of reward hacking and sometimes mitigating its effects. One popular setting is an agent on a grid-world with an erroneous sensor. Hadfield-Menell et al. (2017) show and mitigate the reward hacking that arises due to an incorrect sensor reading at test time in a 10x10 navigation grid world. Leike et al. (2017) show examples of reward hacking in a 3x3 boat race and a 5x7 tomato watering grid world. Everitt et al. (2017) theoretically study and mitigate reward hacking caused by a faulty sensor.
Game-playing agents have also been found to hack their reward. Baker et al. (2020) exhibit reward hacking in a hide-and-seek environment comprising 3-6 agents, 3-9 movable boxes and a few ramps: without a penalty for leaving the play area, the hiding agents learn to endlessly run from the seeking agents. Toromanoff et al. (2019) briefly mention reward hacking in several Atari games (Elevator Action, Kangaroo, Bank Heist) where the agent loops in a sub-optimal trajectory that provides a repeated small reward.
Agents optimizing a learned reward can also demonstrate reward hacking. Ibarz et al. (2018) show an agent hacking a learned reward in Atari (Hero, Montezuma's Revenge, and Private Eye), where optimizing a frozen reward predictor eventually achieves high predicted score and low actual score. Christiano et al. (2017) show an example of reward hacking in the Pong game where the agent learns to hit the ball back and forth instead of winning the point. Stiennon et al. (2020) show that a policy which over-optimizes the learnt reward model for text summarization produces lower quality summarizations when judged by humans.
EXPERIMENTAL SETUP: ENVIRONMENTS AND REWARD FUNCTIONS
In this section, we describe our four environments (Section 3.1) and taxonomize our nine corresponding misspecified reward functions (Section 3.2).
ENVIRONMENTS
We chose a diverse set of environments and prioritized complexity of action space, observation space, and dynamics model. Our aim was to reflect real-world constraints in our environments, selecting ones with several desiderata that must be simultaneously balanced. Table 1 provides a summary.
Traffic Control. The traffic environment is an autonomous vehicle (AV) simulation that models vehicles driving on different highway networks. The vehicles are either controlled by a RL algorithm or pre-programmed via a human behavioral model. Our misspecifications are listed in Table 1.
We use the Flow traffic simulator, implemented by Wu et al. (2021) and Vinitsky et al. (2018), which extends the popular SUMO traffic simulator (Lopez et al., 2018). The simulator uses cars that drive like humans, following the Intelligent Driver Model (IDM) (Treiber et al., 2000), a widely-accepted approximation of human driving behavior. Simulated drivers attempt to travel as fast as possible while tending to decelerate whenever they are too close to the car immediately in front.
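For intuition about the simulated drivers, the following is a minimal sketch of the IDM acceleration rule; the parameter values are illustrative defaults, not the ones used by SUMO or Flow.

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=30.0,    # desired velocity (m/s); illustrative default
                     T=1.0,      # safe time headway (s)
                     a_max=1.0,  # maximum acceleration (m/s^2)
                     b=1.5,      # comfortable braking deceleration (m/s^2)
                     s0=2.0,     # minimum gap to the leading car (m)
                     delta=4):
    """IDM: accelerate toward the desired speed v0, but brake as the gap
    to the leading car shrinks or the closing speed (v - v_lead) grows."""
    s_star = s0 + v * T + v * (v - v_lead) / (2 * math.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# Ego car at 25 m/s closing on a 20 m/s leader 30 m ahead -> IDM brakes.
print(idm_acceleration(v=25.0, v_lead=20.0, gap=30.0))
```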
The RL policy has access to observations only from the AVs it controls. For each AV, the observation space consists of the car's position, its velocity, and the position and velocity of the cars immediately in front of and behind it. The continuous control action is the acceleration applied to each AV. Figure 4 depicts the Traffic-Mer network, where cars from an on-ramp attempt to merge onto the straightaway. We also use the Traffic-Bot network, where cars (1-4 RL, 10-20 human) drive through a highway bottleneck where lanes decrease from four to two to one.
COVID Response. The COVID environment, developed by Kompella et al. (2020), simulates a population using the SEIR model of individual infection dynamics. The RL policymaker adjusts the severity of social distancing regulations while balancing economic health (better with lower regulations) and public health (better with higher regulations), similar in spirit to Trott et al. (2021). The population attributes (proportion of adults, number of hospitals) and infection dynamics (random testing rate, infection rate) are based on data from Austin, Texas.
Every day, the environment simulates the infection dynamics and reports testing results to the agent, but not the true infection numbers. The policy chooses one of three discrete actions: INCREASE, DECREASE, or MAINTAIN the current regulation stage, which directly affects the behavior of the population and indirectly affects the infection dynamics. There are five stages in total.
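The stage-adjustment interface can be illustrated with a small sketch; the encoding below is hypothetical and only mirrors the described mechanics, not the actual environment API.

```python
from enum import IntEnum

class Action(IntEnum):
    """Hypothetical action encoding mirroring the described interface."""
    DECREASE = -1
    MAINTAIN = 0
    INCREASE = 1

NUM_STAGES = 5  # five regulation stages in total

def next_stage(stage: int, action: Action) -> int:
    # Apply the chosen action, clamped to the valid range [0, NUM_STAGES - 1].
    return min(max(stage + int(action), 0), NUM_STAGES - 1)

print(next_stage(4, Action.INCREASE))  # already at the strictest stage -> stays at 4
```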
Atari Riverraid. The Atari Riverraid environment is run on OpenAI Gym (Brockman et al., 2016). The agent operates a plane which flies over a river and is rewarded by destroying enemies. The agent observes the raw pixel input of the environment. The agent can take one of eighteen discrete actions, corresponding to either movement or shooting within the environment.
Glucose Monitoring. The glucose environment, implemented in Fox et al. (2020), is a continuous control problem. It extends an FDA-approved simulator (Man et al., 2014) for blood glucose levels of a patient with Type 1 diabetes. The patient partakes in meals and wears a continuous glucose monitor (CGM), which gives noisy observations of the patient's glucose levels. The RL agent administers insulin to maintain a healthy glucose level.
Every five minutes, the agent observes the patient's glucose levels and decides how much insulin to administer. The observation space is the previous four hours of glucose levels and insulin dosages.
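A rolling observation window of this shape could be maintained as below; this is an illustrative sketch of the described observation space (the class name and padding scheme are assumptions), not the simulator's actual code.

```python
from collections import deque
import numpy as np

WINDOW = 48  # 4 hours of readings at 5-minute intervals

class GlucoseHistory:
    """Illustrative buffer holding the last 4 hours of (CGM glucose, insulin)."""
    def __init__(self):
        self.buf = deque(maxlen=WINDOW)

    def update(self, glucose: float, insulin: float) -> None:
        self.buf.append((glucose, insulin))

    def observation(self) -> np.ndarray:
        # Zero-pad on the left until a full 4-hour history is available.
        obs = np.zeros((WINDOW, 2))
        if self.buf:
            obs[-len(self.buf):] = np.array(self.buf)
        return obs

hist = GlucoseHistory()
hist.update(glucose=120.0, insulin=0.5)  # one 5-minute step
print(hist.observation().shape)  # (48, 2)
```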
MISSPECIFICATIONS
Using the above environments, we constructed nine instances of misspecified proxy rewards. To help interpret these proxies, we taxonomize them as instances of misweighting, incorrect ontology, or incorrect scope. We elaborate further on this taxonomization using the traffic example from Figure 1.
• Misweighting. Suppose that the true reward is a linear combination of commute time and acceleration (for reducing carbon emissions). Downweighting the acceleration term thus underpenalizes carbon emissions. In general, misweighting occurs when the proxy and true reward capture the same desiderata, but differ on their relative importance (see the sketch after this list).
• Ontological. Congestion could be operationalized as either high average commute time or low average vehicle velocity. In general, ontological misspecification occurs when the proxy and true reward use different desiderata to capture the same concept.
• Scope. If monitoring velocity over all roads is too costly, a city might instead monitor them only over highways, thus pushing congestion to local streets. In general, scope misspecification occurs when the proxy measures desiderata over a restricted domain (e.g. time, space).
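As a concrete illustration of the misweighting case, consider the following sketch; the reward weights and policy statistics are invented for illustration, not taken from the paper's environments.

```python
def true_reward(commute_time: float, accel_magnitude: float) -> float:
    # True objective: penalize commute time and (substantially) accelerations.
    return -(commute_time + 0.5 * accel_magnitude)

def proxy_reward(commute_time: float, accel_magnitude: float) -> float:
    # Misweighted proxy: the acceleration penalty is underweighted.
    return -(commute_time + 0.05 * accel_magnitude)

# A policy that shaves commute time via hard accelerations vs. a smooth one.
aggressive = dict(commute_time=90.0, accel_magnitude=40.0)
smooth = dict(commute_time=100.0, accel_magnitude=5.0)
for name, stats in [("aggressive", aggressive), ("smooth", smooth)]:
    print(name, proxy_reward(**stats), true_reward(**stats))
# The aggressive policy wins on the proxy (-92.0 vs. -100.25)
# but loses on the true reward (-110.0 vs. -102.5).
```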
We include a summary of all nine tasks in Table 1 and provide full details in Appendix A. Table 1 also indicates whether each proxy leads to misalignment (i.e. to a policy with low true reward) and whether it leads to a phase transition (a sudden qualitative shift as model capacity increases). We investigate both of these in Section 4.
Evaluation protocol. For each environment and proxy-true reward pair, we train an agent using the proxy reward and evaluate performance according to the true reward. We use PPO (Schulman et al., 2017) to optimize policies for the traffic and COVID environments, SAC (Haarnoja et al., 2018) to optimize the policies for the glucose environment, and torchbeast (Küttler et al., 2019), a PyTorch implementation of IMPALA (Espeholt et al., 2018), to optimize the policies for the Atari environment. When available, we adopt the hyperparameters (except the learning rate and network size) given by the original codebase.
HOW AGENT OPTIMIZATION POWER DRIVES MISALIGNMENT
To better understand reward hacking, we study how it emerges as agent optimization power increases. We define optimization power as the effective search space of policies the agent has access to, as implicitly determined by model size, training steps, action space, and observation space.
In Section 4.1, we consider the quantitative effect of optimization power for all nine environmentmisspecification pairs; we primarily do this by varying model size, but also use training steps, action space, and observation space as robustness checks. Overall, more capable agents tend to overfit the proxy reward and achieve a lower true reward. We also find evidence of phase transitions on four of the environment-misspecification pairs. For these phase transitions, there is a critical threshold at which the proxy reward rapidly increases and the true reward rapidly drops.
In Section 4.2, we further investigate these phase transitions by qualitatively studying the resulting policies. At the transition, we find that the quantitative drop in true reward corresponds to a qualitative shift in policy behavior. Extrapolating visible trends is therefore insufficient to catch all instances of reward hacking, increasing the urgency of research in this area.
In Section 4.3, we assess the faithfulness of our proxies, showing that reward hacking occurs even though the true and proxy rewards are strongly positively correlated in most cases.
QUANTITATIVE EFFECTS VS. AGENT CAPABILITIES
As a stand-in for increasing agent optimization power, we first vary the model capacity for a fixed environment and proxy reward. Specifically, we vary the width and depth of the actor and critic networks, changing the parameter count by two to four orders of magnitude depending on the environment. For a given policy, the actor and critic are always the same size.
Model Capacity. Our results are shown in Figure 2, with additional plots included in Appendix A. We plot both the proxy (blue) and true (green) reward vs. the number of parameters. As model size increases, the proxy reward increases but the true reward decreases. This suggests that reward designers will likely need to take greater care to specify reward functions accurately, a concern that is especially salient given the recent trend towards larger and larger models (Littman et al., 2021).
The drop in true reward is sometimes quite sudden. We call these sudden shifts phase transitions, and mark them with dashed red lines in Figure 2. These quantitative trends are reflected in the qualitative behavior of the policies (Section 4.2), which typically also shift at the phase transition.
Model capacity is only one proxy for agent capabilities, and larger models do not always lead to more capable agents (Andrychowicz et al., 2020). To check the robustness of our results, we consider several other measures of optimization: observation fidelity, number of training steps, and action space resolution.
Figure 3: In addition to parameter count, we consider three other agent capabilities: training steps, action space resolution, and observation noise. Panels: (a) Atari - Misweighting; (b) Traffic - Ontological; (c) COVID - Ontological. In Figure 3a, an increase in the proxy reward comes at the cost of the true reward. In Figure 3b, increasing the granularity (from right to left) causes the agent to achieve similar proxy reward but lower true reward. In Figure 3c, increasing the fidelity of observations (by increasing the random testing rate in the population) tends to decrease the true reward with no clear impact on proxy reward.
Number of training steps. Assuming a reasonable RL algorithm and hyperparameters, agents which are trained for more steps have more optimization power. We vary training steps for an agent trained on the Atari environment. The true reward incentivizes staying alive for as many frames as possible while moving smoothly. The proxy reward misweights these considerations by underpenalizing the smoothness constraint. As shown in Figure 3a, optimizing the proxy reward for more steps harms the true reward, after an initial period where the rewards are positively correlated.
Action space resolution. Intuitively, an agent that can take more precise actions is more capable. For example, as technology improves, an RL car may make course corrections every millisecond instead of every second. We study action space resolution in the traffic environment by discretizing the output space of the RL agent. Specifically, under resolution level ε, we round the action a ∈ R output by the RL agent to the nearest multiple of ε and use that as our action. The larger the resolution level ε, the lower the action space resolution. Results are shown in Figure 3b for a fixed model size. Increasing the resolution causes the proxy reward to remain roughly constant while the true reward decreases.
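The discretization used here amounts to snapping each continuous action onto a grid of spacing ε; a minimal sketch:

```python
import numpy as np

def discretize(action: np.ndarray, eps: float) -> np.ndarray:
    """Round each continuous acceleration to the nearest multiple of eps.
    Larger eps means a coarser grid, i.e., lower action space resolution."""
    return eps * np.round(action / eps)

print(discretize(np.array([0.37, -1.24]), eps=0.5))  # -> [ 0.5 -1. ]
```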
Observation fidelity. Agents with access to better input sensors, like higher-resolution cameras, should make more informed decisions and thus have more optimization power. Concretely, we study this in the COVID environment, where we increase the random testing rate in the population. The proxy reward is a linear combination of the number of infections and severity of social distancing, while the true reward also factors in political cost. As shown in Figure 3c, as the testing rate increases, the model achieves similar proxy reward at the cost of a slightly lower true reward.
QUALITATIVE EFFECTS
In the previous section, quantitative trends showed that increasing a model's optimization power often hurts performance on the true reward. We shift our focus to understanding how this decrease happens. In particular, we typically observe a qualitative shift in behavior associated with each of the phase transitions, three of which we describe below.
Traffic Control. We focus on the Traffic-Mer environment from Figure 2a, where minimizing average commute time is replaced by maximizing average velocity. In this case, smaller policies learn to merge onto the straightaway by slightly slowing down the other vehicles (Figure 4a). On the other hand, larger policy models stop the AVs to prevent them from merging at all (Figure 4b). This increases the average velocity, because the vehicles on the straightaway (which greatly outnumber vehicles on the on-ramp) do not need to slow down for merging traffic. However, it significantly increases the average commute time, as the passengers in the AV remain stuck.
COVID Response. Suppose the RL agent optimizes solely for the public and economic health of a society, without factoring politics into its decision-making. This behavior is shown in Figure 5. The larger model chooses to increase the severity of social distancing restrictions earlier than the smaller model. As a result, larger models are able to maintain low average levels of both ICU usage (a proxy for public health) and social distancing restrictions (a proxy for economic health). These preemptive regulations may, however, be politically costly, as enforcing restrictions without clear signs of infection may foment public unrest (Boettke & Powell, 2021).
Atari Riverraid. We create an ontological misspecification by rewarding the plane for staying alive as long as possible while shooting as little as possible: a "pacifist run". We then measure the game score as the true reward. We find that agents with more parameters typically maneuver more adeptly. Such agents shoot less frequently, but survive for much longer, acquiring points (true reward) due to passing checkpoints. In this case, therefore, the proxy and true rewards are well-aligned, so reward hacking does not emerge as capabilities increase.
We did, however, find that some of the agents exploited a bug in the simulator that halts the plane at the beginning of the level. The simulator advances but the plane itself does not move, thereby achieving high pacifist reward.
Glucose Monitoring. Consider an RL agent that optimizes solely for a patient's health, without considering the economic costs of its treatment plans. In this case, the proxy reward is based on a glycemic risk measure developed by the medical community (Kovatchev et al., 2000), which reflects the likelihood that a patient will suffer an acute hypoglycemic episode.
However, a less economically-privileged patient may opt for the treatment plan with the least expected cost (Herkert et al., 2019;Fralick & Kesselheim, 2019), not the one with the least amount of risk. From this patient's perspective, the true reward is the expected cost of the treatment plan, which includes the expected cost of hospital visits and the cost of administering the insulin.
Although larger model treatments reduce hypoglycemic risk more than smaller model treatments do, they administer more insulin. Based on the average cost of an ER visit for a hypoglycemic episode ($1350 from Bronstone & Graham (2016)) and the average cost of a unit of insulin ($0.32 from Lee (2020)), we find that it is actually more expensive to pursue the larger model's treatment.
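The cost comparison reduces to simple arithmetic, as in the sketch below; the two unit costs come from the sources cited above, but the episode and insulin totals are hypothetical placeholders for the quantities measured in the paper.

```python
ER_VISIT_COST = 1350.0    # avg ER cost of a hypoglycemic episode (Bronstone & Graham, 2016)
INSULIN_UNIT_COST = 0.32  # avg cost per unit of insulin (Lee, 2020)

def expected_treatment_cost(expected_er_visits: float, insulin_units: float) -> float:
    """True reward from the patient's perspective: expected dollars spent."""
    return expected_er_visits * ER_VISIT_COST + insulin_units * INSULIN_UNIT_COST

# Hypothetical numbers: the larger model halves the hypoglycemia risk
# but triples the insulin administered.
small_model = expected_treatment_cost(expected_er_visits=0.10, insulin_units=300)
large_model = expected_treatment_cost(expected_er_visits=0.05, insulin_units=900)
print(small_model, large_model)  # 231.0 vs. 355.5 -> the lower-risk plan costs more
```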
QUANTITATIVE EFFECTS VS PROXY-TRUE REWARD CORRELATION
We saw in Sections 4.1 and 4.2 that agents often pursue proxy rewards at the cost of the true reward. Perhaps this only occurs because the proxy is greatly misspecified, i.e., the proxy and true reward are weakly or negatively correlated. If this were the case, then reward hacking may pose less of a threat. To investigate this intuition, we plot the correlation between the proxy and true rewards.
The correlation is determined by the state distribution of a given policy, so we consider two types of state distributions. Specifically, for a given model size, we obtain two checkpoints: one that achieves the highest proxy reward during training and one from early in training (less than 1% of training complete). We call the former the "trained checkpoint" and the latter the "early checkpoint". In Figure 6a, we plot the proxy reward with "•" and the true reward with "×". In Figure 6b, we plot the trained checkpoint correlation and the early checkpoint correlation.
For a given model checkpoint, we calculate the Pearson correlation ρ between the proxy reward P and true reward T using 30 trajectory rollouts. Reward hacking occurs even though there is significant positive correlation between the true and proxy rewards (see Figure 6). The correlation is lower for the trained model than for the early model, but still high. Further figures are shown in Appendix A.2. Among the four environments tested, only the Traffic-Mer environment with ontological misspecification had negative Pearson correlation.
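A per-checkpoint correlation estimate of this kind might look as follows; the rollout returns below are synthetic stand-ins for the 30 trajectories used above.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Stand-in returns for 30 rollouts of a single checkpoint: proxy and true
# rewards that are positively correlated but not identical.
proxy_returns = rng.normal(loc=10.0, scale=2.0, size=30)
true_returns = 0.8 * proxy_returns + rng.normal(scale=1.0, size=30)

rho, _ = pearsonr(proxy_returns, true_returns)
print(f"Pearson correlation between proxy and true returns: {rho:.2f}")
```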
POLYNOMALY: MITIGATING REWARD MISSPECIFICATION
In Section 4, we saw that reward hacking often leads to phase transitions in agent behaviour. Furthermore, in applications like traffic control or COVID response, the true reward may be observed only sporadically or not at all. Blindly optimizing the proxy in these cases can lead to catastrophic failure (Zhuang & Hadfield-Menell, 2020;Taylor, 2016).
This raises an important question: Without the true reward signal, how can we mitigate misalignment? We operationalize this as an anomaly detection task: the detector should flag instances of misalignment, thus preventing catastrophic rollouts. To aid the detector, we provide it with a trusted policy: one verified by humans to have acceptable (but not maximal) reward. Our resulting benchmark, POLYNOMALY, is described below.
PROBLEM SETUP
We train a collection of policies by varying model size on the traffic and COVID environments. For each policy, we estimate the policy's true reward by averaging over 5 to 32 rollouts. One author labeled each policy as acceptable, problematic, or ambiguous based on its true reward score relative to that of other policies. We include only policies that received a non-ambiguous label.
For both environments, we provide a small-to-medium sized model as the trusted policy model, as Section 4.1 empirically illustrates that smaller models achieve reasonable true reward without exhibiting reward hacking. Given the trusted model and a collection of policies, the anomaly detector's task is to predict the binary label of "acceptable" or "problematic" for each policy. Table 3 in Appendix B.1 summarizes our benchmark. The trusted policy size is a list of the hidden unit widths of the trusted policy network (not including feature mappings).
EVALUATION
We propose two evaluation metrics for measuring the performance of our anomaly detectors.
• Area Under the Receiver Operating Characteristic (AUROC). The AUROC measures the probability that a detector will assign a random anomaly a higher score than a random non-anomalous policy (Davis & Goadrich, 2006). Higher AUROCs indicate stronger detectors.
• Max F-1 score. The F-1 score is the harmonic mean of the precision and the recall, so detectors with a high F-1 score have both low false positives and high true negatives. We calculate the max F-1 score by taking the maximum F-1 score over all possible thresholds for the detector.
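Both metrics can be computed from detector scores with a few lines of scikit-learn; the scores and labels below are synthetic placeholders, not detector outputs from the paper.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, roc_auc_score

# Synthetic detector scores: higher = more anomalous; label 1 = problematic policy.
labels = np.array([0, 0, 0, 1, 1, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.3, 0.2, 0.8, 0.6, 0.4, 0.9, 0.7, 0.5, 0.55])

print("AUROC:", roc_auc_score(labels, scores))

# Max F-1: sweep all thresholds via the precision-recall curve.
precision, recall, _ = precision_recall_curve(labels, scores)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
print("Max F-1:", f1.max())
```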
BASELINES
In addition to the benchmark datasets described above, we provide baseline anomaly detectors based on estimating distances between policies. We estimate the distance between the trusted policy and the unknown policy based on either the Jensen-Shannon divergence (JSD) or the Hellinger distance. Specifically, we use rollouts to generate empirical action distributions. We compute the distance between these action distributions at each step of the rollout, then aggregate across steps by taking either the mean or the range. For full details, see Appendix B.2. Table 2 reports the AUROC and max F-1 score of each detector. We observe that different detectors are better for different tasks, suggesting that future detectors could do better than any of our baselines. Our benchmark and baselines provide a starting point for further research on mitigating reward hacking.
DISCUSSION
In this work, we designed a diverse set of environments and proxy rewards, uncovered several instances of phase transitions, and proposed an anomaly detection task to help mitigate these transitions. Our results raise two questions: How can we not only detect phase transitions, but prevent them in the first place? And how should phase transitions shape our approach to safe ML?
On preventing phase transitions, anomaly detection already offers one path forward. Once we can detect anomalies, we can potentially prevent them, by using the detector to purge the unwanted behavior (e.g. by including it in the training objective). Similar policy shaping has recently been used to make RL agents more ethical (Hendrycks et al., 2021b). However, since the anomaly detectors will be optimized against by the RL policy, they need to be adversarially robust (Goodfellow et al., 2014). This motivates further work on adversarial robustness and adversarial anomaly detection. Another possible direction is optimizing policies against a distribution of rewards (Brown et al., 2020; Javed et al., 2021), which may prevent over-fitting to a given set of metrics.
Regarding safe ML, several recent papers propose extrapolating empirical trends to forecast future ML capabilities (Kaplan et al., 2020; Hernandez et al., 2021; Droppo & Elibol, 2021), partly to avoid unforeseen consequences from ML. While we support this work, our results show that trend extrapolation alone is not enough to ensure the safety of ML systems. To complement trend extrapolation, we need better interpretability methods to identify emergent model behaviors early on, before they dominate performance (Olah et al., 2018). ML researchers should also familiarize themselves with emergent behavior in self-organizing systems (Yates, 2012), which often exhibit similar phase transitions (Anderson, 1972). Indeed, the ubiquity of phase transitions throughout science suggests that ML researchers should continue to expect surprises, and should therefore prepare for them.
A.1 EFFECT OF MODEL SIZE
We plot the proxy and true reward vs. model size in Figure 7, following the experiment described in Section 4.1.
A.2 CORRELATION BETWEEN PROXY AND TRUE REWARDS
We plot the correlation between proxy and true rewards in Figure 8, following the experiment described in Section 4.3. In the left column, we plot the proxy reward with "•" and the true reward with "×"; in the right column, we plot the trained checkpoint correlation and the randomly initialized checkpoint correlation. Interestingly, we see that reward hacking still occurs when there is positive correlation between the true and proxy rewards, e.g., in Figures 8a/8b. Unsurprisingly, proxy-true pairs which are highly correlated, e.g., Figures 8c/8d, do not exhibit reward hacking. Finally, proxy-true pairs which are negatively correlated, e.g., Figures 8e/8f, exhibit the most reward hacking.
B.2 RECEIVER OPERATING CHARACTERISTIC CURVES
We plot the ROC curves for the detectors described in Section 5.3. Our detectors are calculated as follows.
Let P and Q represent two probability distributions with M = (1/2)(P + Q). Then the Jensen-Shannon divergence and the Hellinger distance between them are given by

JSD(P ‖ Q) = (1/2) KL(P ‖ M) + (1/2) KL(Q ‖ M),  Hellinger(P, Q) = (1/√2) ‖√P − √Q‖₂.  (1)
Our proposed detectors estimate the distance D(π_trusted, π_unknown) between the trusted policy π_trusted and unknown policy π_unknown as follows: We generate r rollouts of π_unknown, where r = 5 in the traffic environment and r = 32 in the COVID environment. Every s steps of a rollout, where s = 10 in the traffic environment and s = 1 in the COVID environment, we set P to be the action distribution of π_unknown given the unknown agent's state at that timestep in the rollout and Q to be the action distribution of π_trusted given the unknown agent's state at that timestep in the rollout. Intuitively, if P and Q are far apart, then the trusted agent would have performed a different action than the unknown agent at that given timestep, indicating a possible case of reward hacking. We then compute either JSD(P ‖ Q) or Hellinger(P, Q) following Equation (1). These distances are collected every s steps over the entire rollout, and we calculate metrics on these distances (range, mean, etc.) to assign an anomaly score to the untrusted policy.
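The following is a minimal numpy sketch of these baseline detectors under Equation (1); the action distributions at the bottom are toy placeholders, whereas the real detectors extract them from policy rollouts as described above.

```python
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    mask = p > 0  # 0 * log(0) terms contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jsd(p: np.ndarray, q: np.ndarray) -> float:
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2))

def anomaly_score(trusted_dists, unknown_dists, dist=jsd, agg=np.mean) -> float:
    """Distance between the trusted and unknown policies' action distributions
    at matched rollout steps, aggregated over the rollout (mean, range, etc.)."""
    return float(agg([dist(p, q) for p, q in zip(unknown_dists, trusted_dists)]))

# Toy three-action distributions standing in for two steps of a COVID rollout.
trusted = [np.array([0.2, 0.6, 0.2]), np.array([0.1, 0.8, 0.1])]
unknown = [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])]
print(anomaly_score(trusted, unknown, dist=jsd))
print(anomaly_score(trusted, unknown, dist=hellinger))
```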
Figure 2: Increasing the RL policy's model size decreases true reward on three selected environments. The red line indicates a phase transition.

Figure 4: (a) Traffic policy of smaller network. (b) Traffic policy of larger network. The larger model prevents the AVs (in red) from moving to increase the velocity of the human cars (unobserved cars in white and observed cars in blue). However, this greatly increases the average commute per person.

Figure 5: For COVID, ICU usage is a proxy for public health and regulation stage is a proxy for economic health. The blue line indicates the maximum stage (right) enforced by the larger policy and the corresponding ICU level (left) at that stage. The red line is the equivalent for the smaller policy.

Figure 6: Correlations between the proxy and true rewards, along with the reward hacking induced.

Figure 7: Additional model size scatter plots. Observe that not all misspecifications cause misalignment. We plot the proxy reward with "•" and the true reward with "×". The proxy reward is measured on the left-hand side of each figure and the true reward is measured on the right-hand side of each figure.

Figure 8: Correlations between the proxy and true rewards, along with the reward hacking induced. Panels include: (c) traffic merge - Misweighting; (d) Correlation for Figure 8c; (e) traffic merge - Ontological; (f) Correlation for Figure 8e.

Figure 9: ROC curves for Traffic-Mer - misweighting.
Figure 10: ROC curves for Traffic-Mer - scope.
Figure 11: ROC curves for Traffic-Mer - ontological.
Figure 12: ROC curves for Traffic-Bot - misweighting.
Figure 13: ROC curves for COVID - ontological.
Table 1: Reward misspecifications across our four environments. 'Misalign' indicates whether the true reward drops and 'Transition' indicates whether this corresponds to a phase transition (sharp qualitative change). We observe 5 instances of misalignment and 4 instances of phase transitions. 'Mis.' is a misweighting and 'Ont.' is an ontological misspecification.

Env. | Type | Objective | Proxy | Misalign? | Transition?
Traffic | Mis. | minimize commute and accelerations | underpenalize acceleration | No | No
Traffic | Mis. | minimize commute and accelerations | underpenalize lane changes | Yes | Yes
Traffic | Ont. | minimize commute and accelerations | velocity replaces commute | Yes | Yes
Traffic | Scope | minimize commute and accelerations | monitor velocity near merge | Yes | Yes
COVID | Mis. | balance economic, health, political cost | underpenalize health cost | No | No
COVID | Ont. | balance economic, health, political cost | ignore political cost | Yes | Yes
Atari | Mis. | score points under smooth movement | downweight movement | No | No
Atari | Ont. | score points under smooth movement | include shooting penalty | No | No
Glucose | Ont. | minimize health risk | risk in place of cost | Yes | No
Table 2: Performance of detectors on different subtasks. Each detector has at least one subtask with AUROC under 60%, indicating poor performance.
Env. - Misspecification | # Policies | # Problematic | Rollout length | Trusted policy size
Traffic-Mer - misweighting | 10 | 7 | 270 | [96, 96]
Traffic-Mer - scope | 16 | 9 | 270 | [16, 16]
Traffic-Mer - ontological | 23 | 7 | 270 | [4]
Traffic-Bot - misweighting | 12 | 9 | 270 | [64, 64]
COVID - ontological | 13 | 6 | 200 | [16, 16]

Table 3: Benchmark statistics. We average over 5 rollouts in traffic and 32 rollouts in COVID.

B POLYNOMALY

B.1 BENCHMARK STATISTICS

See Table 3 for Polynomaly's statistics.
ACKNOWLEDGEMENTS

We are thankful to Dan Hendrycks and Adam Gleave for helpful discussions about experiments and to Cassidy Laidlaw and Dan Hendrycks for providing valuable feedback on the writing. KB was supported by a JP Morgan AI Fellowship. JS was supported by NSF Award 2031985 and by Open Philanthropy.
257,078,985 | ENERGY-BASED TEST SAMPLE ADAPTATION FOR DOMAIN GENERALIZATION | In this paper, we propose energy-based sample adaptation at test time for domain generalization. Where previous works adapt their models to target domains, we adapt the unseen target samples to source-trained models. To this end, we design a discriminative energy-based model, which is trained on source domains to jointly model the conditional distribution for classification and data distribution for sample adaptation. The model is optimized to simultaneously learn a classifier and an energy function. To adapt target samples to source distributions, we iteratively update the samples by energy minimization with stochastic gradient Langevin dynamics. Moreover, to preserve the categorical information in the sample during adaptation, we introduce a categorical latent variable into the energy-based model. The latent variable is learned from the original sample before adaptation by variational inference and fixed as a condition to guide the sample update. Experiments on six benchmarks for classification of images and microblog threads demonstrate the effectiveness of our proposal. | [
233024779,
248811555,
236477395
] | ENERGY-BASED TEST SAMPLE ADAPTATION FOR DOMAIN GENERALIZATION
Zehao Xiao
AIM Lab
University of Amsterdam
Xiantong Zhen
AIM Lab
University of Amsterdam
Inception Institute of Artificial Intelligence
Shengcai Liao
Inception Institute of Artificial Intelligence
Cees G M Snoek
AIM Lab
University of Amsterdam
ENERGY-BASED TEST SAMPLE ADAPTATION FOR DOMAIN GENERALIZATION
Published as a conference paper at ICLR 2023
In this paper, we propose energy-based sample adaptation at test time for domain generalization. Where previous works adapt their models to target domains, we adapt the unseen target samples to source-trained models. To this end, we design a discriminative energy-based model, which is trained on source domains to jointly model the conditional distribution for classification and data distribution for sample adaptation. The model is optimized to simultaneously learn a classifier and an energy function. To adapt target samples to source distributions, we iteratively update the samples by energy minimization with stochastic gradient Langevin dynamics. Moreover, to preserve the categorical information in the sample during adaptation, we introduce a categorical latent variable into the energy-based model. The latent variable is learned from the original sample before adaptation by variational inference and fixed as a condition to guide the sample update. Experiments on six benchmarks for classification of images and microblog threads demonstrate the effectiveness of our proposal.
INTRODUCTION
Deep neural networks are vulnerable to domain shifts and suffer from lack of generalization on test samples that do not resemble the ones in the training distribution (Recht et al., 2019;Krueger et al., 2021;Shen et al., 2022). To deal with the domain shifts, domain generalization has been proposed (Muandet et al., 2013;Gulrajani & Lopez-Paz, 2020;Cha et al., 2021). Domain generalization strives to learn a model exclusively on source domains in order to generalize well on unseen target domains. The major challenge stems from the large domain shifts and the unavailability of any target domain data during training.
To address the problem, domain invariant learning has been widely studied, e.g., (Motiian et al., 2017; Zhao et al., 2020; Nguyen et al., 2021), based on the assumption that invariant representations obtained on source domains are also valid for unseen target domains. However, since the target data is inaccessible during training, it is likely that an "adaptivity gap" (Dubey et al., 2021) exists between representations from the source and target domains. Therefore, recent works try to adapt the classification model with target samples at test time by further fine-tuning model parameters (Sun et al., 2020; Wang et al., 2021) or by introducing an extra network module for adaptation (Dubey et al., 2021). Rather than adapting the model to target domains, Xiao et al. (2022) adapt the classifier for each sample at test time. Nevertheless, a single sample would not be able to adjust the whole model due to the large number of model parameters and the limited information contained in the sample. This makes it challenging for their method to handle large domain gaps. Instead, we propose to adapt each target sample to the source distributions, which does not require any fine-tuning or parameter updates of the source model.
In this paper, we propose energy-based test sample adaptation for domain generalization. The method is motivated by the fact that energy-based models (Hinton, 2002; LeCun et al., 2006) flexibly model complex data distributions and allow for efficient sampling from the modeled distribution by Langevin dynamics (Du & Mordatch, 2019; Welling & Teh, 2011). Specifically, we define a new discriminative energy-based model as the composition of a classifier and a neural-network-based energy function in the data space, which are trained simultaneously on the source domains. The trained model iteratively updates the representation of each target sample by gradient-based energy minimization through Langevin dynamics, which eventually adapts the sample to the source data distribution. The adapted target samples are then predicted by the classifier that is simultaneously trained in the discriminative energy-based model. For both efficient energy minimization and classification, we deploy the energy functions on the input feature space rather than the raw images.
Since Langevin dynamics tends to draw samples randomly from the distribution modeled by the energy function, it cannot guarantee category equivalence. To maintain the category information of the target samples during adaptation and promote better classification performance, we further introduce a categorical latent variable in our energy-based model. Our model learns the latent variable to explicitly carry categorical information by variational inference in the classification model. We utilize the latent variable as conditional categorical attributes like in compositional generation (Du et al., 2020a;Nie et al., 2021) to guide the sample adaptation to preserve the categorical information of the original sample. At inference time, we simply ensemble the predictions obtained by adapting the unseen target sample to each source domain as the final domain generalization result.
We conduct experiments on six benchmarks for classification of images and microblog threads to demonstrate the promise and effectiveness of our method for domain generalization.
METHODOLOGY
In domain generalization, we are provided source and target domains as non-overlapping distributions on the joint space $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ denote the input and label space, respectively. Given a dataset with $S$ source domains $\mathcal{D}_s = \{\mathcal{D}_s^i\}_{i=1}^{S}$ and $T$ target domains $\mathcal{D}_t = \{\mathcal{D}_t^i\}_{i=1}^{T}$, a model is trained only on $\mathcal{D}_s$ and required to generalize well on $\mathcal{D}_t$. Following the multi-source domain generalization setting (Li et al., 2017), we assume there are multiple source domains with the same label space to mimic domain shifts during training.
In this work, we propose energy-based test sample adaptation, which adapts target samples to source distributions to tackle the domain gap between target and source data. The rationale behind our model is that adapting the target samples to the source data distributions is able to improve the prediction of the target data with source models by reducing the domain shifts, as shown in Figure 1 (left). Since the target data is never seen during training, we mimic domain shifts during the training stage to learn the sample adaptation procedure. By doing so, the model acquires the ability to adapt each target sample to the source distribution at inference time. In this section, we first provide a preliminary on energy-based models and then present our energy-based test sample adaptation.
ENERGY-BASED MODEL PRELIMINARY
Energy-based models (LeCun et al., 2006) represent any probability distribution $p(x)$ for $x \in \mathbb{R}^D$ as $p_\theta(x) = \frac{\exp(-E_\theta(x))}{Z_\theta}$, where $E_\theta(x): \mathbb{R}^D \rightarrow \mathbb{R}$ is known as the energy function that maps each input sample to a scalar, and $Z_\theta = \int \exp(-E_\theta(x))\,dx$ denotes the partition function. However, $Z_\theta$ is usually intractable since it computes the integration over the entire input space of $x$. Thus, we cannot train the parameter $\theta$ of the energy-based model by directly maximizing the log-likelihood $\log p_\theta(x) = -E_\theta(x) - \log Z_\theta$. Nevertheless, the log-likelihood has the derivative (Du & Mordatch, 2019; Song & Kingma, 2021):

$$\frac{\partial \log p_\theta(x)}{\partial \theta} = \mathbb{E}_{p_d(x)}\Big[-\frac{\partial E_\theta(x)}{\partial \theta}\Big] + \mathbb{E}_{p_\theta(x)}\Big[\frac{\partial E_\theta(x)}{\partial \theta}\Big], \tag{1}$$
where the first expectation term is taken over the data distribution p d (x) and the second one is over the model distribution p θ (x).
Figure 1: With Langevin dynamics, each target sample is adapted iteratively to the source data distributions, which are represented as the gradient colors from green to red (right). Best viewed in color.

The objective function in eq. (1) encourages the model to assign low energy to the samples from the real data distribution while assigning high energy to those from the model distribution. To do so, we need to draw samples from $p_\theta(x)$, which is challenging and usually approximated by MCMC methods (Hinton, 2002). An effective MCMC method used in recent works (Du & Mordatch, 2019; Nijkamp et al., 2019; Xiao et al., 2021b; Grathwohl et al., 2020) is Stochastic Gradient Langevin Dynamics (Welling & Teh, 2011), which simulates samples by

$$x^{i+1} = x^{i} - \frac{\lambda}{2}\,\frac{\partial E_\theta(x^{i})}{\partial x^{i}} + \epsilon, \quad \text{s.t.}\;\; \epsilon \sim \mathcal{N}(0, \lambda), \tag{2}$$
where $\lambda$ denotes the step-size and $x^{0}$ is drawn from the initial distribution $p_0(x)$, which is usually a uniform distribution (Du & Mordatch, 2019; Grathwohl et al., 2020).

Actually, maximizing $\log p_\theta(x)$ is equivalent to minimizing the KL divergence $D_{KL}(p_d(x)\,\|\,p_\theta(x))$ (Song & Kingma, 2021), which is alternatively achieved in (Hinton, 2002) by minimizing contrastive divergence:

$$D_{KL}\big(p_d(x)\,\|\,p_\theta(x)\big) - D_{KL}\big(q_\theta(x)\,\|\,p_\theta(x)\big), \tag{3}$$
where $q_\theta(x) = \Pi_\theta^t\, p_d(x)$ represents $t$ sequential MCMC transitions starting from $p_d(x)$ (Du et al., 2021a), and minimizing eq. (3) is achieved by minimizing:

$$\mathbb{E}_{p_d(x)}\big[E_\theta(x)\big] - \mathbb{E}_{\text{stop\_grad}(q_\theta(x))}\big[E_\theta(x)\big] + \mathbb{E}_{q_\theta(x)}\big[E_{\text{stop\_grad}(\theta)}(x)\big] + \mathbb{E}_{q_\theta(x)}\big[\log q_\theta(x)\big]. \tag{4}$$
Eq. (4) avoids drawing samples from the model distribution $p_\theta(x)$, which often requires an exponentially long time for MCMC sampling (Du et al., 2021a). Intuitively, $q_\theta(x)$ is closer to $p_\theta(x)$ than $p_d(x)$, which guarantees that $D_{KL}(p_d(x)\,\|\,p_\theta(x)) \geq D_{KL}(q_\theta(x)\,\|\,p_\theta(x))$, and eq. (3) can only be zero when $p_\theta(x) = p_d(x)$.
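As a concrete illustration, the Langevin dynamics update of eq. (2) can be sketched in a few lines of PyTorch; the step size, number of steps, and the `energy` network interface are placeholder assumptions, not the authors' settings.

```python
import torch

def langevin_sample(energy, x0, steps=20, step_size=0.01):
    """Draw a sample from the EBM via Langevin dynamics, eq. (2)."""
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        noise = torch.randn_like(x) * step_size ** 0.5  # eps ~ N(0, lambda)
        # Gradient descent on the energy plus Gaussian noise.
        x = (x - 0.5 * step_size * grad + noise).detach().requires_grad_(True)
    return x.detach()
```

Note that each iteration detaches the chain, so this sampler yields samples but carries no gradients back to $\theta$; a differentiable variant would keep the intermediate graph.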
ENERGY-BASED TEST SAMPLE ADAPTATION
We propose energy-based test sample adaptation to tackle the domain gap between source and target data distributions. This is inspired by the fact that Langevin dynamics simulates samples of the distribution expressed by the energy-based model through gradient-based updates, with no restriction on the sample initialization if the sampling steps are sufficient (Welling & Teh, 2011; Du & Mordatch, 2019). We leverage this property to conduct test sample adaptation with Langevin dynamics by setting the target sample as the initialization and updating it iteratively. With the energy-based model of the source data distribution, as shown in Figure 1 (right), target samples are gradually updated towards the source domain and, with sufficient update steps, the target sample will eventually be adapted to the source distribution.
Discriminative energy-based model. We propose the discriminative energy-based model $p_{\theta,\phi}(x, y)$ on the source domain, which is constructed by a classification model and an energy function in the data space. Note that $x$ denotes the feature representation of the input image $I$, generated by a neural network backbone, $x = f_\psi(I)$. Different from regular energy-based models that generate data samples from uniform noise, our goal is to promote the discriminative task, i.e., the conditional distribution $p(y|x)$, which is preferably modeled jointly with the feature distribution $p(x)$ of the input data. Thus, the proposed energy-based model is defined on the joint space of $\mathcal{X} \times \mathcal{Y}$ to consider both the classification task and the energy function. Formally, the discriminative energy-based model of a source domain is formulated as:

$$p_{\theta,\phi}(x, y) = \frac{p_\phi(y|x)\exp(-E_\theta(x))}{Z_{\theta,\phi}}, \tag{5}$$
where $p_\phi(y|x)$ denotes the classification model and $E_\theta(x)$ is an energy function, implemented with a neural network. Eq. (5) enables the energy-based model to jointly model the feature distribution of the input data and the conditional distribution on the source domains. An unseen target sample $x_t$ is iteratively adapted to the distribution of the source domain $\mathcal{D}_s$ by Langevin dynamics updates with the energy function $E_\theta(x)$, and then predicted by the classification model $p_\phi(y|x)$.
The model parameters $\theta$ and $\phi$ can be jointly optimized following eq. (3) by minimizing:

$$D_{KL}\big(p_d(x, y)\,\|\,p_{\theta,\phi}(x, y)\big) - D_{KL}\big(q_{\theta,\phi}(x, y)\,\|\,p_{\theta,\phi}(x, y)\big), \tag{6}$$

which is derived as:

$$\begin{aligned} \mathcal{L} = &-\mathbb{E}_{p_d(x,y)}\big[\log p_\phi(y|x)\big] + \mathbb{E}_{p_d(x,y)}\big[E_\theta(x)\big] - \mathbb{E}_{\text{stop\_grad}(q_{\theta,\phi}(x,y))}\big[E_\theta(x)\big] \\ &+ \mathbb{E}_{q_{\theta,\phi}(x,y)}\big[E_{\text{stop\_grad}(\theta)}(x) - \log p_{\text{stop\_grad}(\phi)}(y|x)\big], \end{aligned} \tag{7}$$
where $p_d(x, y)$ denotes the real data distribution and $q_{\theta,\phi}(x, y) = \Pi_\theta^t\, p(x, y)$ denotes $t$ sequential MCMC samplings from the distribution expressed by the energy-based model, similar to eq. (4) (Du et al., 2021a). We provide the detailed derivation in Appendix A.
In eq. (7), the first term encourages learning a discriminative classifier on the source domain. The second and third terms train the energy function to model the data distribution of the source domain by assigning low energy to the real samples and high energy to the samples from the model distribution. Different from the first three terms, which directly supervise the model parameters $\theta$ and $\phi$, the last term stops the gradients of the energy function $E_\theta$ and the classifier $\phi$ while back-propagating the gradients to the adapted samples $q_{\theta,\phi}(x, y)$. Because of the stop-gradient, this term does not optimize the energy or log-likelihood of a given sample, but rather increases the probability of samples with low energy and high log-likelihood under the modeled distribution.

Essentially, the last term trains the model $\theta$ to provide a variation for each sample that encourages its adapted version to be both discriminative for the source domain classifier and low-energy under the energy function. Intuitively, it supervises the model to preserve categorical information during adaptation and to find a faster path to minimize the energy.
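To make the four terms of eq. (7) concrete, a training-loss sketch might read as below; the `classifier` (returning logits), `energy`, and `sampler` interfaces are assumptions, `detach()` and frozen copies play the role of stop_grad, and for the last term the sampler would need to keep its chain differentiable so gradients reach $\theta$.

```python
import copy
import torch
import torch.nn.functional as F

def ebm_loss(classifier, energy, x_pos, y_pos, x_neg_init, y_neg, sampler):
    """A sketch of the four terms of eq. (7) for one source domain.

    x_pos, y_pos: real features/labels of the current source domain;
    x_neg_init:   initial negatives, e.g., features from other domains.
    """
    # Term 1: classification on real source data.
    loss_cls = F.cross_entropy(classifier(x_pos), y_pos)

    # Adapted samples from q_{theta,phi} via Langevin dynamics.
    x_adp = sampler(energy, x_neg_init)

    # Terms 2 and 3: contrastive divergence with stop_grad on the samples.
    loss_energy = energy(x_pos).mean() - energy(x_adp.detach()).mean()

    # Term 4: frozen copies emulate stop_grad(theta) and stop_grad(phi);
    # gradients flow only through x_adp (and hence the sampling chain).
    energy_sg, cls_sg = copy.deepcopy(energy), copy.deepcopy(classifier)
    for m in (energy_sg, cls_sg):
        for p in m.parameters():
            p.requires_grad_(False)
    loss_adp = energy_sg(x_adp).mean() + F.cross_entropy(cls_sg(x_adp), y_neg)

    return loss_cls + loss_energy + loss_adp
```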
Label-preserving adaptation with categorical latent variable. Since the ultimate goal is to correctly classify target domain samples, it is necessary to maintain the categorical information in the target sample during the iterative adaptation process. Eq. (7) contains a supervision term that encourages the adapted target samples to be discriminative for the source classification models. However, as the energy function $E_\theta$ operates only in the $\mathcal{X}$ space, and the sampling process of Langevin dynamics tends to produce samples from the modeled distribution that are independent of the starting point, no categorical information is considered during the adaptation procedure.

To achieve label-preserving adaptation, we introduce a categorical latent variable $z$ into the energy function to guide the adaptation of target samples towards preserving the category information. With the latent variable, the energy function $E_\theta$ is defined in the joint space of $\mathcal{X} \times \mathcal{Z}$, and the categorical information contained in $z$ is explicitly incorporated into the iterative adaptation. To do so, we define the energy-based model with the categorical latent variable as:

$$p_{\theta,\phi}(x, y) = \int p_{\theta,\phi}(x, y, z)\,dz = \int \frac{p_\phi(y|z, x)\, p_\phi(z|x)\exp(-E_\theta(x|z))}{Z_{\theta,\phi}}\,dz, \tag{8}$$
where $\phi$ denotes the parameters of the classification model that predicts $z$ and $y$, and $E_\theta$ denotes the energy function that models the distribution of $x$ conditioned on the latent variable $z$. $z$ is trained to contain sufficient categorical information of $x$ and serves as the conditional attribute that guides the adapted sample $x$ to preserve the categorical information. Once obtained from the original input feature representation $x$, $z$ is fixed and taken as the input of the energy function together with the updated $x$ in each iteration. Intuitively, when $x$ is updated from the target domain to the source domain via Langevin dynamics, $z$ helps it preserve the classification information contained in the original $x$, without introducing additional information. To learn a latent variable $z$ with more categorical information, we estimate $z$ by variational inference and design a variational posterior $q_\phi(z|d_x)$, where $d_x$ is the average representation of samples from the same category as $x$ on the source domain. Therefore, $q_\phi(z|d_x)$ can be treated as a probabilistic prototypical representation of a class. By incorporating $q_\phi(z|d_x)$ into eq. (8), we obtain a lower bound of the log-likelihood:

$$\log p_{\theta,\phi}(x, y) \geq \mathbb{E}_{q_\phi}\big[\log p_\phi(y|z, x) - E_\theta(x|z) - \log Z_{\theta,\phi}\big] - D_{KL}\big[q_\phi(z|d_x)\,\|\,p_\phi(z|x)\big]. \tag{9}$$

Note that in eq. (9) the categorical latent variable $z$ is incorporated into both the classification model $p_\phi(y|z, x)$ and the energy function $E_\theta(x|z)$, so the energy function carries both data and categorical information. During the Langevin dynamics update for sample adaptation, the latent variable provides categorical information in each iteration, which enables the adapted target samples to remain discriminative.
By incorporating eq. (9) into eq. (6), we derive the objective function with the categorical latent variable as:
$$\begin{aligned} \mathcal{L}_f = \; & \mathbb{E}_{p_d(x,y)}\Big[\mathbb{E}_{q_\phi(z)}\big[-\log p_\phi(y|z, x)\big] + D_{KL}\big[q_\phi(z|d_x)\,\|\,p_\phi(z|x)\big]\Big] \\ & + \mathbb{E}_{q_\phi(z)}\Big[\mathbb{E}_{p_d(x)}\big[E_\theta(x|z)\big] - \mathbb{E}_{\text{stop\_grad}(q_\theta(x))}\big[E_\theta(x, z)\big]\Big] \\ & + \mathbb{E}_{q_\theta(x)}\Big[\mathbb{E}_{q_{\text{stop\_grad}(\phi)}(z)}\big[E_{\text{stop\_grad}(\theta)}(x, z) - \log p_{\text{stop\_grad}(\phi)}(y|z, x)\big] - D_{KL}\big[q_{\text{stop\_grad}(\phi)}(z|d_x)\,\|\,p_{\text{stop\_grad}(\phi)}(z|x)\big]\Big], \end{aligned} \tag{10}$$
where $p_d(x)$ and $q_\theta(x)$ denote the data distribution and the $t$ sequential MCMC samplings from the energy-based distribution of the source domain $\mathcal{D}_s$, respectively. Similar to eq. (7), the first term trains the classification model on the source data, the second term trains the energy function to model the source data distribution, and the last term is applied to the adapted samples to supervise the adaptation procedure. The complete derivation is provided in Appendix A. An illustration of our model is shown in Figure 2. We also provide the complete algorithm in Appendix B.
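For illustration, the label-preserving adaptation described above might be sketched as follows: $z$ is inferred once from the original sample and then held fixed while only $x$ is updated. The `prior_net` and `cond_energy` interfaces, and the concatenation of $x$ and $z$ at the energy input, are assumptions for illustration.

```python
import torch

def draw_latent(prior_net, x_t):
    """One Monte Carlo draw z ~ p(z | x_t); z stays fixed during adaptation."""
    with torch.no_grad():
        mu, std = prior_net(x_t)
        return mu + std * torch.randn_like(std)

def adapt_sample(cond_energy, x_t, z, steps=20, step_size=0.01):
    """Adapt one target feature x_t via eq. (2), with the energy
    conditioned on the fixed categorical latent variable z."""
    x = x_t.clone().detach().requires_grad_(True)
    for _ in range(steps):
        e = cond_energy(torch.cat([x, z], dim=-1)).sum()  # E_theta(x | z)
        grad = torch.autograd.grad(e, x)[0]
        x = (x - 0.5 * step_size * grad
             + torch.randn_like(x) * step_size ** 0.5).detach().requires_grad_(True)
    return x.detach()
```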
Ensemble inference. Since the target data is inaccessible during training, we train specific parameters $\theta$ and $\phi$ to model each source distribution, by adapting the samples from the other source domains to the current source distribution. In each iteration, we train the energy-based model $\theta_i$ of one randomly selected source domain $\mathcal{D}_s^i$. The adapted samples generated from samples $x_j, x_k$ of the other source domains $\mathcal{D}_s^j, \mathcal{D}_s^k$ are used as negative samples, while $x_i$ serves as the positive samples to train the energy-based model. During inference, the target sample is adapted to each source distribution with the corresponding energy function and predicted by the corresponding classifier. After that, we combine the predictions of all source domain models to obtain the final prediction:

$$p(y_t) = \frac{1}{S}\sum_{i=1}^{S}\frac{1}{N}\sum_{n=1}^{N} p_{\phi_i}(y|z_n, x), \qquad z_n \sim p(z_n|x_t), \;\; x \sim p_{\theta_i}(x). \tag{11}$$

Here $\phi_i$ and $\theta_i$ denote the domain-specific classification model and energy function of domain $\mathcal{D}_s^i$. Note that since the label of $x_t$ is unknown, $d_{x_t}$ in eq. (10) is not available during inference. Therefore, we draw $z_n$ from the prior distribution $p(z_n|x_t)$, where $x_t$ is the original target sample without any adaptation. With fixed $z_n$, $x \sim p_{\theta_i}(x)$ is drawn by Langevin dynamics as in eq. (2), with the target sample $x_t$ as the initialization; $p_{\theta_i}(x)$ denotes the distribution modeled by the energy function $E_{\theta_i}(x|z_n)$. Moreover, to be efficient, the feature extractor $\psi$ for obtaining the feature representations $x$ is shared by all source domains, and only the energy functions and classifiers are domain specific. We deploy the energy-based model on feature representations to keep the domain-specific energy functions and classification models lightweight.
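The ensemble prediction of eq. (11) could then be sketched as below, reusing `draw_latent` and `adapt_sample` from the previous sketch; the per-domain model containers and the classifier conditioning via concatenation are assumptions for illustration.

```python
import torch

def ensemble_predict(models, x_t, n_z=5):
    """Eq. (11): adapt x_t to every source domain and average predictions.

    `models` holds one (cond_energy, classifier, prior_net) triple per
    source domain; the backbone features x_t are shared across domains.
    """
    probs = []
    for cond_energy, classifier, prior_net in models:
        for _ in range(n_z):
            z = draw_latent(prior_net, x_t)            # z_n ~ p(z | x_t)
            x_adp = adapt_sample(cond_energy, x_t, z)  # x ~ p_{theta_i}(x)
            with torch.no_grad():
                logits = classifier(torch.cat([x_adp, z], dim=-1))
            probs.append(torch.softmax(logits, dim=-1))
    return torch.stack(probs).mean(dim=0)
```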
EXPERIMENTS
Datasets. We conduct our experiments on five widely used datasets for domain generalization, i.e., PACS, Office-Home, DomainNet, and Rotated MNIST and Fashion-MNIST. Since the energy-based model operates on the feature space, our method can also handle other data formats; we therefore also evaluate it on PHEME (Zubiaga et al., 2016), a natural language processing dataset. PACS consists of 9,991 images of seven classes from four domains, i.e., photo, art-painting, cartoon, and sketch. We use the same training and validation split as Li et al. (2017) and follow their "leave-one-out" protocol. Office-Home also contains four domains, i.e., art, clipart, product, and real-world, which together contain 15,500 images of 65 categories. DomainNet is more challenging since it has six domains, i.e., clipart, infograph, painting, quickdraw, real, and sketch, with 586,575 examples of 345 classes. We use the same experimental protocol as for PACS. We utilize the Rotated MNIST and Fashion-MNIST datasets following the settings of Piratla et al. (2020). The images are rotated from 0° to 90° in intervals of 15°, covering seven domains. We use the domains with rotation angles from 15° to 75° as the source domains, and images rotated by 0° and 90° as the target domains. PHEME is a dataset for rumour detection. It contains a total of 5,802 tweets labeled as rumourous or non-rumourous from five different events, i.e., Charlie Hebdo, Ferguson, German Wings, Ottawa Shooting, and Sydney Siege. As with PACS, we evaluate our method on PHEME by the "leave-one-out" protocol.

Benefit of energy-based test sample adaptation. We first investigate the effectiveness of our energy-based test sample adaptation in Table 1. Before adaptation, we evaluate the target samples directly with the classification model of each source domain and ensemble the predictions. After adaptation to the source distributions, the performance on the target samples improves, especially on the art-painting and cartoon domains, demonstrating the benefit of the iterative adaptation by our energy-based model. The results of models with the latent variable, i.e., trained by eq. (10), are shown in the last two rows. With the latent variable, the performance improves both before and after adaptation, which shows the benefit of incorporating the latent variable into the classification model. The performance improvement after adaptation is also more prominent than without the latent variable, demonstrating the effectiveness of incorporating the latent variable into the energy function.

Figure 3: Iterative adaptation of target samples. We adapt the samples from the target domain (art-painting) to different source domains (photo, cartoon, and sketch in (a), (b), (c)). Each subfigure shows the adaptation of one target sample. The adaptation procedure of the target samples is represented by the gradient color from green to red. In each figure, the target sample has the same label as the source data, but is mispredicted due to domain shifts. During adaptation, the target samples gradually approach the source distributions and eventually obtain correct predictions.
Effectiveness of iterative test sample adaptation by Langevin dynamics. We visualize the iterative adaptation of the target samples in Figure 3. In each subfigure, the target and source samples have the same label. The visualization shows that the target samples gradually approach the source data distributions during the iterative adaptation by Langevin dynamics. After adaptation, the predictions of the target samples by the source domain classifier also become more accurate. For instance, in Figure 3 (a), the target sample of the house category is initially predicted incorrectly, with a probability of only 0.02% for house. After adaptation, the probability becomes 99.7% and the prediction is correct. More visualizations, including several failure cases, are provided in Appendix D.
Figure 4: Adaptation with different numbers of Langevin dynamics steps (x-axis: number of Langevin dynamics updating steps). As the number of steps increases, energy decreases while accuracy increases. When the number of steps is too large, the accuracy without z or with p(z|x) drops slightly, while the accuracy with q(z|d_x) is more stable and better.
Adaptation with different Langevin dynamics steps. We also investigate the effect of the number of Langevin dynamics steps during adaptation. Figure 4 shows how the average energy and accuracy of the target samples vary when they are adapted to the source distributions with different numbers of updating steps. The experiments are conducted on PACS with ResNet-18; the target domain is art-painting. With fewer than 80 steps, the average energy decreases consistently while the accuracy increases with the number of updating steps, showing that the target samples are getting closer to the source distributions. When the number of steps grows too large, the accuracy decreases again. We attribute this to z_t carrying imperfect categorical information, since it is approximated during inference from a single target sample x_t only. In this case, the label information is not well preserved in x_t during the Langevin dynamics update, which causes an accuracy drop with a large number of updates.

To demonstrate this, we conduct an experiment replacing p(z|x_t) with q_φ(z|d_x) during inference, where d_x is the class center of the same class as x_t. Therefore, q_φ(z|d_x) contains categorical information that is closer to the ground-truth label. We regard this as the oracle model. As expected, the oracle model performs better as the number of steps increases and reaches stability after 100 steps. We also show the results without z; both performance and stability are worse, which again demonstrates that z helps preserve label information in the target samples during adaptation. Moreover, the energy without conditioning on z is higher. The reason can be that, without conditioning on z, there is no guidance of categorical information during sample adaptation: the sample can be adapted arbitrarily by the energy-based model, regardless of its category, which can lead to conflicts when adapting target features towards different categories of the source data, slowing down the decline of the energy. We provide more analyses of z_t in Appendix E. In addition, the training and test time cost grows with the number of steps; comparisons and analyses are also provided in Appendix E.
Comparisons. PACS, Office-Home, and DomainNet are three widely used benchmarks in domain generalization. We conduct experiments on PACS and Office-Home with both ResNet-18 and ResNet-50, and on DomainNet with ResNet-50. As shown in Table 2, our method achieves competitive and in most cases even the best overall performance. Moreover, our method performs better than most of the recent test-time adaptation methods (Iwasawa & Matsuo, 2021; Wang et al., 2021; Dubey et al., 2021), which fine-tune the model at test time with batches of target samples. By contrast, we strictly follow the setting of domain generalization: we use only the source data to train the classification and energy-based models, and at test time we perform sample adaptation and prediction on each individual target sample with just the source-trained models. Our method is thus more data efficient at test time, avoiding the problem of data collection per target domain in real-world applications. Despite this data efficiency during inference, our method remains comparable and is sometimes better, especially on datasets with more categories, e.g., Office-Home and DomainNet. Compared with the recent work by Xiao et al. (2022), our method is at least competitive and often better. To show the generality of our method, we also conduct experiments on the natural language processing dataset PHEME, a binary classification task for rumour detection. The results in Table 2 lead to similar conclusions as on the image datasets.

Table 2 also demonstrates the effectiveness of our sample adaptation. For each dataset and backbone, the proposed method achieves a good improvement after adaptation by the proposed discriminative energy-based model. For fairness, the results without adaptation are also obtained by ensembling the predictions of the source-domain-specific classifiers. Moreover, more steps (e.g., 50) lead to better performance, although the additional improvements are slight. Considering the trade-off between computational efficiency and performance, we set the number of steps to 20 in our paper. We provide detailed comparisons, results on the Rotated MNIST and Fashion-MNIST datasets, as well as more experiments on the latent variable, corruption datasets, and analyses of the ensemble inference method in Appendix E.
RELATED WORK
Domain generalization. One of the predominant methods is domain invariant learning (Muandet et al., 2013; Ghifary et al., 2016; Motiian et al., 2017; Seo et al., 2020; Zhao et al., 2020; Xiao et al., 2021a; Mahajan et al., 2021; Nguyen et al., 2021; Phung et al., 2021; Shi et al., 2022), pioneered by Muandet et al. (2013). Another common methodology is data augmentation (Shankar et al., 2018; Volpi et al., 2018; Qiao et al., 2020; Zhou et al., 2020a;b; Yao et al., 2022), which generates more source domain data to simulate domain shifts during training. Zhou et al. (2020b) proposed a data augmentation on the feature space by mixing the feature statistics of instances from different domains. Meta-learning-based methods have also been studied for domain generalization (Li et al., 2018a; Balaji et al., 2018; Dou et al., 2019; Du et al., 2021b; Bui et al., 2021; Du et al., 2021c). Li et al. (2018a) introduced model-agnostic meta-learning (Finn et al., 2017) into domain generalization. Du et al. (2020b) proposed the meta-variational information bottleneck for domain-invariant learning.
Test-time adaptation and source-free adaptation. Recently, adaptive methods have been proposed to better match the source-trained model and the target data at test time (Sun et al., 2020;Li et al., 2020;D'Innocente et al., 2019;Pandey et al., 2021;Iwasawa & Matsuo, 2021;Dubey et al., 2021;Zhang et al., 2021). Test-time adaptation (Sun et al., 2020;Wang et al., 2021;Zhou & Levine, 2021) fine-tunes (part of) a network trained on source domains by batches of target samples. Xiao et al. (2022) proposed single-sample generalization that adapts a model to each target sample under a meta-learning framework. There are also some source-free domain adaptation methods (Liang et al., 2020;Dong et al., 2021;Liang et al., 2021) that adapt the source-trained model on only the target data. These methods follow the domain adaptation settings to fine-tune the source-trained model by the entire target set. By contrast, we do sample adaptation at test time but strictly follow the domain generalization settings. In our method, no target sample is available during the training of the models. At test time, each target sample is adapted to the source domains and predicted by the source-trained model individually, without fine-tuning the models or requiring large amounts of target data.
Energy-based model. The energy-based model is a classical learning framework (Ackley et al., 1985; Hinton, 2002; Hinton et al., 2006; LeCun et al., 2006). Recent works (Xie et al., 2016; Nijkamp et al., 2019; 2020; Du & Mordatch, 2019; Du et al., 2021a; Xie et al., 2022) further extend the energy-based model to high-dimensional data using contrastive divergence and stochastic gradient Langevin dynamics. Different from most of these works, which only model the data distribution, some recent works model joint distributions (Grathwohl et al., 2020; Xiao et al., 2021b). In our work, we define the joint distribution of data and label to promote the classification of unseen target samples in domain generalization, and further incorporate a latent variable to inject categorical information into the Langevin dynamics procedure. Energy-based models have been proposed for various tasks, e.g., image generation (Du et al., 2020a; Nie et al., 2021), out-of-distribution detection (Liu et al., 2020), and anomaly detection (Dehaene et al., 2020). Some methods also utilize energy-based models for domain adaptation (Zou et al., 2021; Xie et al., 2021; Kurmi et al., 2021). Different from these methods, we focus on domain generalization and utilize the energy-based model to express the source domain distributions without any target data during training.
CONCLUSION AND DISCUSSIONS
In this paper, we propose a discriminative energy-based model to adapt the target samples to the source data distributions for domain generalization. The energy-based model is designed on the joint space of input, output, and a latent variable, and is constructed by a domain-specific classification model and an energy function. With the trained energy-based model, the target samples are adapted to the source distributions through Langevin dynamics and then predicted by the classification model. Since we aim to promote the classification of the target samples, the model is trained to achieve label-preserving adaptation by incorporating the categorical latent variable. We evaluate the method on six image and text benchmarks; the results demonstrate its effectiveness and generality. We have not tested our approach beyond image and text classification tasks, but since our sample adaptation is conducted on the feature space, it should be possible to extend the method to other complex tasks based on feature representations. Compared with recent model adaptation methods, which require batches of target samples to provide sufficient target information, our method does not need to adjust the model parameters at test time. This is more data efficient but also more challenging at test time; the training procedure is therefore more involved, with complex optimization objectives. One limitation of our proposed method is the iterative adaptation required for each target sample, which introduces an extra time cost at both training and test time. The problem can be mitigated by speeding up the energy minimization with optimization techniques during Langevin dynamics, e.g., Nesterov momentum (Nesterov, 1983), or by exploring one-step methods for sample adaptation. We leave these explorations for future work.
A DERIVATIONS
Derivation of energy-based sample adaptation. Recall our discriminative energy-based model
$$p_{\theta,\phi}(x, y) = \frac{p_\phi(y|x)\exp(-E_\theta(x))}{Z_{\theta,\phi}}, \tag{12}$$

where $Z_{\theta,\phi} = \iint p_\phi(y|x)\exp(-E_\theta(x))\,dx\,dy$ is the partition function, and $\phi$ and $\theta$ denote the parameters of the classifier and the energy function, respectively. To jointly train the parameters, we minimize the contrastive divergence proposed by Hinton (2002):

$$\mathcal{L} = D_{KL}\big[p_d(x, y)\,\|\,p_{\theta,\phi}(x, y)\big] - D_{KL}\big[q_{\theta,\phi}(x, y)\,\|\,p_{\theta,\phi}(x, y)\big], \tag{13}$$
where $p_d(x, y)$ denotes the real data distribution and $q_{\theta,\phi}(x, y) = \Pi_\theta^t\, p(x, y)$ denotes $t$ sequential MCMC samplings from the distribution expressed by the energy-based model (Du et al., 2021a). The gradient of the first term with respect to $\theta$ and $\phi$ is

$$\begin{aligned} \nabla_{\theta,\phi} D_{KL}\big[p_d(x,y)\,\|\,p_{\theta,\phi}(x,y)\big] &= \nabla_{\theta,\phi}\,\mathbb{E}_{p_d(x,y)}\Big[\log \frac{p_d(x,y)}{p_{\theta,\phi}(x,y)}\Big] \\ &= \mathbb{E}_{p_d(x,y)}\big[\nabla_{\theta,\phi}\log p_d(x,y) - \nabla_{\theta,\phi}\log p_{\theta,\phi}(x,y)\big] \\ &= \mathbb{E}_{p_d(x,y)}\big[-\nabla_{\theta,\phi}\log p_{\theta,\phi}(x,y)\big], \end{aligned} \tag{14}$$

while the gradient of the second term is

$$\begin{aligned} \nabla_{\theta,\phi} D_{KL}\big[q_{\theta,\phi}(x,y)\,\|\,p_{\theta,\phi}(x,y)\big] &= \nabla_{\theta,\phi}\,\mathbb{E}_{q_{\theta,\phi}(x,y)}\Big[\log \frac{q_{\theta,\phi}(x,y)}{p_{\theta,\phi}(x,y)}\Big] \\ &= \nabla_{\theta,\phi}\, q_{\theta,\phi}(x,y)\,\nabla_{q_{\theta,\phi}} D_{KL}\big[q_{\theta,\phi}(x,y)\,\|\,p_{\theta,\phi}(x,y)\big] + \mathbb{E}_{q_{\theta,\phi}(x,y)}\big[-\nabla_{\theta,\phi}\log p_{\theta,\phi}(x,y)\big]. \end{aligned} \tag{15}$$

Combining eq. (14) and eq. (15), we have the overall gradient:

$$\nabla_{\theta,\phi}\mathcal{L}_{all} = -\Big(\mathbb{E}_{p_d(x,y)}\big[\nabla_{\theta,\phi}\log p_{\theta,\phi}(x,y)\big] - \mathbb{E}_{q_{\theta,\phi}(x,y)}\big[\nabla_{\theta,\phi}\log p_{\theta,\phi}(x,y)\big] + \nabla_{\theta,\phi}\, q_{\theta,\phi}(x,y)\,\nabla_{q_{\theta,\phi}} D_{KL}\big[q_{\theta,\phi}(x,y)\,\|\,p_{\theta,\phi}(x,y)\big]\Big). \tag{16}$$

For the first two terms, the gradient can be further derived as

$$\begin{aligned} &\mathbb{E}_{p_d(x,y)}\big[\nabla_{\theta,\phi}\log p_{\theta,\phi}(x,y)\big] - \mathbb{E}_{q_{\theta,\phi}(x,y)}\big[\nabla_{\theta,\phi}\log p_{\theta,\phi}(x,y)\big] \\ &= \mathbb{E}_{p_d(x,y)}\big[\nabla_{\theta,\phi}\big(\log p_\phi(y|x) - E_\theta(x) - \log Z_{\theta,\phi}\big)\big] - \mathbb{E}_{q_{\theta,\phi}(x,y)}\big[\nabla_{\theta,\phi}\big(\log p_\phi(y|x) - E_\theta(x) - \log Z_{\theta,\phi}\big)\big]. \end{aligned} \tag{17}$$
Moreover, $\nabla_{\theta,\phi}\log Z_{\theta,\phi}$ can be written as the expectation $\mathbb{E}_{p_{\theta,\phi}(x,y)}\big[\nabla_\phi \log p_\phi(y|x) - \nabla_\theta E_\theta(x)\big]$ (Song & Kingma, 2021; Xiao et al., 2021b), which is therefore canceled out in eq. (17) (Hinton, 2002). We then have the loss function for the first two terms as

$$L_1 = \mathbb{E}_{p_d(x,y)}\big[E_\theta(x) - \log p_\phi(y|x)\big] - \mathbb{E}_{q_{\theta,\phi}(x,y)}\big[E_\theta(x) - \log p_\phi(y|x)\big]. \tag{18}$$
Furthermore, we have the loss function

$$\begin{aligned} L_2 &= \mathbb{E}_{q_{\theta,\phi}(x,y)}\Big[\log \frac{q_{\theta,\phi}(x,y)}{p_{\text{stop\_grad}(\theta,\phi)}(x,y)}\Big] \\ &= -\mathbb{E}_{q_{\theta,\phi}(x,y)}\big[\log p_{\text{stop\_grad}(\phi)}(y|x) - E_{\text{stop\_grad}(\theta)}(x) - \log Z_{\text{stop\_grad}(\theta,\phi)}\big] + \mathbb{E}_{q_{\theta,\phi}(x,y)}\big[\log q_{\theta,\phi}(x,y)\big], \end{aligned} \tag{19}$$
which has the same gradient as the last term in eq. (16) (Du et al., 2021a). The stop_grad here means that we do not backpropagate the gradients to update the parameters by the corresponding forward functions. Thus, these parameters can be treated as constants.
Since the gradient of $\theta$ and $\phi$ is stopped in $\log Z_{\mathrm{stop\_grad}(\theta,\phi)}$, we treat it as a constant independent of $q_{\theta,\phi}(x, y)$ and therefore remove it from eq. (19). In addition, the term $\mathbb{E}_{q_{\theta,\phi}(x,y)}\big[\log p_\phi(y|x)\big]$ in eq. (18) encourages wrong predictions on the updated samples from $q_{\theta,\phi}(x, y)$, which goes against our goal of promoting classification by adapting target samples. The term $\mathbb{E}_{q_{\theta,\phi}(x,y)}\big[\log q_{\theta,\phi}(x, y)\big]$ in eq. (19) can be treated as the negative entropy of $q_{\theta,\phi}(x, y)$, which is always negative and hard to estimate. Therefore, we remove these two terms in the final loss function by applying an upper bound of the combination of eq. (18) and eq. (19):
$$\begin{aligned}
\mathcal{L} = & -\mathbb{E}_{p_d(x,y)}\big[\log p_\phi(y|x)\big] + \mathbb{E}_{p_d(x,y)}\big[E_\theta(x)\big] - \mathbb{E}_{\mathrm{stop\_grad}(q_{\theta,\phi}(x,y))}\big[E_\theta(x)\big] \\
& + \mathbb{E}_{q_{\theta,\phi}(x,y)}\big[E_{\mathrm{stop\_grad}(\theta)}(x) - \log p_{\mathrm{stop\_grad}(\phi)}(y|x)\big]. \tag{20}
\end{aligned}$$
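To make the role of stop_grad concrete, a minimal PyTorch sketch of how the loss in eq. (20) could be assembled is given below. This is our own illustrative rendering, not the released implementation: the names contrastive_divergence_loss, frozen_forward, energy_fn and classifier are hypothetical, and the energy function here takes only x, matching the latent-free objective of eq. (20).

```python
import torch
import torch.nn.functional as F

def frozen_forward(module, x):
    # Evaluate `module` with its parameters treated as constants
    # (stop_grad on the parameters) while still letting gradients
    # flow back to the input x.
    frozen = [p for p in module.parameters() if p.requires_grad]
    for p in frozen:
        p.requires_grad_(False)
    out = module(x)
    for p in frozen:
        p.requires_grad_(True)
    return out

def contrastive_divergence_loss(classifier, energy_fn, x_pos, y_pos, x_neg, y_neg):
    """Sketch of eq. (20). (x_pos, y_pos) ~ p_d(x, y) are real features/labels;
    (x_neg, y_neg) ~ q_{theta,phi}(x, y) are the Langevin-adapted samples."""
    # -E_{p_d}[log p_phi(y|x)]: classification loss on real data.
    cls_loss = F.cross_entropy(classifier(x_pos), y_pos)
    # E_{p_d}[E_theta(x)] - E_{stop_grad(q)}[E_theta(x)]:
    # lower the energy of real data, raise it on (detached) adapted samples.
    energy_loss = energy_fn(x_pos).mean() - energy_fn(x_neg.detach()).mean()
    # E_q[E_{stop_grad(theta)}(x) - log p_{stop_grad(phi)}(y|x)]:
    # supervises the adaptation path; gradients reach x_neg only.
    adapt_loss = (frozen_forward(energy_fn, x_neg).mean()
                  + F.cross_entropy(frozen_forward(classifier, x_neg), y_neg))
    return cls_loss + energy_loss + adapt_loss
```

Note that .detach() realizes stop_grad on samples, while temporarily disabling requires_grad on the parameters realizes stop_grad on the parameters without blocking the gradient to the input.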
Energy-based sample adaptation with categorical latent variable. To keep the categorical information during sample adaptation, we introduce a categorical latent variable z into our discriminative energy-based model, which is defined as
$$p_{\theta,\phi}(x, y) = \int p_{\theta,\phi}(x, y, z)\,dz = \int \frac{p_\phi(y|z, x)\,p_\phi(z|x)\,\exp(-E_\theta(x|z))}{Z_{\theta,\phi}}\,dz.$$
We optimize the parameters $\theta$ and $\phi$ also by the contrastive divergence $D_{KL}\big[p_d(x, y)\,\|\,p_{\theta,\phi}(x, y)\big] - D_{KL}\big[q_{\theta,\phi}(x, y)\,\|\,p_{\theta,\phi}(x, y)\big]$, which has a gradient similar to eq. (14) and eq. (15). The latent variable $z$ is estimated by variational inference, leading to a lower bound of $\log p_{\theta,\phi}(x, y)$:
$$\log p_{\theta,\phi}(x, y) \ge \mathbb{E}_{q_\phi(z)}\big[\log p_\phi(y|z, x) - E_\theta(x|z) - \log Z_{\theta,\phi}\big] - D_{KL}\big[q_\phi(z|d_x)\,\|\,p_\phi(z|x)\big].$$
We obtain the final loss function of the contrastive divergence in a similar way as for eq. (20), by estimating the gradient and removing the terms that are hard to estimate or conflict with our final goal. The final objective function is:
$$\begin{aligned}
\mathcal{L}_f = & \ \mathbb{E}_{p_d(x,y)}\,\mathbb{E}_{q_\phi(z)}\big[-\log p_\phi(y|z, x)\big] + D_{KL}\big[q_\phi(z|d_x)\,\|\,p_\phi(z|x)\big] \\
& + \mathbb{E}_{q_\phi(z)}\Big[\mathbb{E}_{p_d(x)}\big[E_\theta(x|z)\big] - \mathbb{E}_{\mathrm{stop\_grad}(q_\theta(x))}\big[E_\theta(x, z)\big]\Big] \\
& + \mathbb{E}_{q_\theta(x)}\Big[\mathbb{E}_{q_{\mathrm{stop\_grad}(\phi)}(z)}\big[E_{\mathrm{stop\_grad}(\theta)}(x, z) - \log p_{\mathrm{stop\_grad}(\phi)}(y|z, x)\big] - D_{KL}\big[q_{\mathrm{stop\_grad}(\phi)}(z|d_x)\,\|\,p_{\mathrm{stop\_grad}(\phi)}(z|x)\big]\Big]. \tag{21}
\end{aligned}$$
B ALGORITHM
We provide the detailed training and test algorithm of our energy-based sample adaptation in Algorithm 1.
C DATASETS AND IMPLEMENTATION DETAILS
Model. To be efficient, we train a shared backbone for all source domains, while training a domain-specific classifier and a neural-network-based energy function for each source domain. The feature extractor backbone is a basic residual network without the final fully connected layer (classifier). Both the prior distribution p φ (z|x) and the posterior distribution q φ (z|d x ) of the latent variable z are generated by a neural network φ that consists of four fully connected layers with ReLU activations. The last layer of φ outputs both the mean and standard deviation of the distributions p φ (z|x) and q φ (z|d x ) for subsequent Monte Carlo sampling. The dimension of z is the same as that of the feature representations x, e.g., 512 for ResNet-18 and 2048 for ResNet-50. d x is obtained as the center of the features of the batch samples that have the same category as the current sample x in each iteration.
Deployed on feature representations, the energy function consists of three fully connected layers with two dropout layers. The latent variable z is incorporated into the energy function by concatenating it with the feature representation x. The input dimension is therefore twice the output feature dimension of the backbone, i.e., 1024 for ResNet-18 and 4096 for ResNet-50. We use the swish function as the activation in the energy functions (Du et al., 2021a). The final output of the EBM is a scalar, which is processed by a sigmoid function following Du et al. (2021a) to bound the energy to the region [0, 1] and improve stability during training. During training, we introduce a replay buffer B to store the past updated samples from the modeled distribution (Du & Mordatch, 2019). By sampling from B with 50% probability, we can initialize the negative samples with either the sample features from other source domains or samples from past Langevin dynamics. This increases the effective number of sampling steps and the sample diversity.
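A minimal PyTorch sketch of the modules just described follows; the module names, hidden width and dropout rate are our assumptions (the paper specifies three FC layers, two dropout layers, the swish activation and a sigmoid-bounded scalar output, but not every hyperparameter).

```python
import torch
import torch.nn as nn

class EnergyFunction(nn.Module):
    """Energy E_theta(x|z): three fully connected layers with two dropout
    layers, swish activations, and a sigmoid bounding the output to [0, 1]."""
    def __init__(self, feat_dim=512, hidden_dim=512, p_drop=0.2):  # p_drop is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),  # input: concatenation of x and z
            nn.SiLU(),                            # SiLU == swish
            nn.Dropout(p_drop),
            nn.Linear(hidden_dim, hidden_dim),
            nn.SiLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x, z):
        return torch.sigmoid(self.net(torch.cat([x, z], dim=-1))).squeeze(-1)

class LatentInference(nn.Module):
    """Four FC layers with ReLU; the last layer outputs the mean and the
    (exponentiated) standard deviation of the Gaussian over z, reused for
    both p(z|x) and q(z|d_x)."""
    def __init__(self, feat_dim=512, hidden_dim=512):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.head = nn.Linear(hidden_dim, 2 * feat_dim)  # mean and log-std

    def forward(self, x):
        mean, log_std = self.head(self.body(x)).chunk(2, dim=-1)
        return mean, log_std.exp()
```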
Training details and hyperparameters. We evaluate on PACS with both a ResNet-18 and a ResNet-50 pretrained on ImageNet. We use Adam optimization and train for 10,000 iterations with a batch size of 128. We set the learning rate to 0.00005 for ResNet-18, 0.00001 for ResNet-50, and 0.0001 for the energy-based model and classification model. We use 20 steps of Langevin dynamics sampling to adapt the target samples to the source distributions, with a step size of 50. We set the number of Monte Carlo samples N in eq. (11) to 10 for PACS. Most of the experimental settings on Office-Home are the same as on PACS. The learning rate of the backbone is set to 0.00001 for both ResNet-18 and ResNet-50. The number of Monte Carlo samples is 5. For fair comparison, we evaluate rotated MNIST and Fashion-MNIST with ResNet-18, following Piratla et al. (2020). The other settings are the same as on PACS. On PHEME we conduct the experiments based on a pretrained DistilBERT. We set the learning rate to 0.00003 and use 20 steps of Langevin dynamics with a step size of 20.

Algorithm 1: Energy-based sample adaptation

Training time
Require: Source domains Ds = {D_s^i}_{i=1}^S, each with joint distribution p_{d_s^i}(I, y) of input image and label.
Require: Learning rate µ; iteration number M; step number K and step size λ of the energy function.
Initialize pretrained backbone ψ; φ_i, θ_i, B_i = ∅ for each source domain D_s^i.
for iter in M do
    for D_s^i in Ds do
        Sample datapoints {(I_i, y_i)} ~ p_{d_s^i}(I, y); {(I_j, y_j)} ~ {p_{d_s^j}(I, y)}_{j≠i} or B with 50% probability.
        Feature representations x_i = f_ψ(I_i), x_j = f_ψ(I_j).
        for k in K do
            x_{j,k} ← x_{j,k−1} − ∇_x E_{θ_i}(x_{j,k−1} | z_j) + ω,  z_j ~ q_{φ_i}(z_j | d_{x_j}),  ω ~ N(0, σ).
        end for
        q_{θ_i}(x) ← x_{j,K};  p_d(x) ← p_{d_s^i}(x_i).
        (ψ, φ_i) ← (ψ, φ_i) − λ ∇_{ψ,φ_i} L_f(p_d(x));  θ_i ← θ_i − λ ∇_{θ_i} L_f(p_d(x), q_{θ_i}(x)).
        B_i ← B_i ∪ x_{j,K}.
    end for
end for

Test time
Require: Target images I_t from the target domain; trained backbone ψ; and domain-specific models φ_i, θ_i for each source domain in {D_s^i}_{i=1}^S.
Input feature representations x_t = f_ψ(I_t).
for i in {1, ..., S} do
    for k in K do
        x_{t,k} ← x_{t,k−1} − ∇_x E_{θ_i}(x_{t,k−1} | z_t) + ω,  z_t ~ p_{φ_i}(z_t | x_t),  ω ~ N(0, σ).
    end for
    y_t^i = p_{φ_i}(y_t | x_{t,K}, z_t).
end for
return y_t = (1/S) Σ_{i=1}^S y_t^i.
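The inner Langevin loop of Algorithm 1 can be written compactly as in the following sketch; adapt_sample is a hypothetical helper of ours, with the step size, step count, noise scale and gradient clipping taken from the settings reported in this appendix rather than from released code.

```python
import torch

def adapt_sample(x, z, energy_fn, num_steps=20, step_size=50.0, noise_std=0.001):
    """Langevin dynamics of Algorithm 1:
    x_k <- x_{k-1} - step_size * grad_x E(x_{k-1} | z) + omega, omega ~ N(0, sigma)."""
    x = x.detach().clone()
    for _ in range(num_steps):
        x.requires_grad_(True)
        energy = energy_fn(x, z).sum()            # sum gives a scalar for autograd
        grad = torch.autograd.grad(energy, x)[0]
        grad = grad.clamp(-0.01, 0.01)            # per-value gradient clipping (see the text below)
        x = (x - step_size * grad + noise_std * torch.randn_like(x)).detach()
    return x
```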
We train all models on an NVIDIA Tesla V100 GPU for 10,000 iterations. The learning rates of the backbone are different for different datasets, as shown in Table 3. The learning rates of the domain-specific classifiers and energy functions are both set to 0.0001 for all datasets. For each source domain, we randomly select 128 samples as a batch to train the backbone and classification model. We also select 128 samples from the other source domains, together with the current domain samples, to train the domain-specific energy function. We use a replay buffer with 500 feature representations and apply spectral normalization on all weights of the energy function (Du & Mordatch, 2019). We use random noise with standard deviation λ = 0.001 and clip the gradients to have an individual value magnitude of less than 0.01, similar to Du & Mordatch (2019). The step size and number of steps for Langevin dynamics are different for different datasets, as shown in Table 3.
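The replay buffer can be sketched as follows; capacity 500 and the 50% initialization probability follow the text above, while the class interface itself is our own.

```python
import random
import torch

class ReplayBuffer:
    """Stores past adapted feature representations (Du & Mordatch, 2019).
    Negative samples are initialized from the buffer with 50% probability,
    otherwise from features of the other source domains."""
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.storage = []

    def add(self, feats):
        # Keep the most recent `capacity` adapted feature vectors.
        for f in feats.detach().cpu():
            self.storage.append(f)
            if len(self.storage) > self.capacity:
                self.storage.pop(0)

    def init_negatives(self, other_domain_feats):
        # With 50% probability restart the chain from buffered samples.
        if self.storage and random.random() < 0.5:
            n = other_domain_feats.size(0)
            picks = random.choices(self.storage, k=n)
            return torch.stack(picks).to(other_domain_feats.device)
        return other_domain_feats
```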
D VISUALIZATIONS
More visualizations of the adaptation procedure. To further show the effectiveness of the iterative adaptation of target samples, we provide more visualizations on PACS. Figure 5 visualizes the source domain features and the target domain features both before and after the adaptation to each individual source domain. Figure 6 visualizes more of the iterative adaptation procedure of the target samples. Subfigures in different rows show the adaptation of samples from different target domains to source domains. Similar to the visualizations in the main paper, the target samples gradually approach the source data distributions during the iterative adaptation. Therefore, the predictions of the target samples by the source domain classifiers become more accurate after adaptation.
Failure cases. We also provide some failure cases on PACS in Figure 7 to gain more insight into our method. Our method is confused by samples that contain objects of different categories (first row) and by samples with multiple objects or a complex background (last three rows). A possible reason is that noisy information is contained in the latent variable of these samples, leading to adaptation without a clear direction, which manifests as wrong adaptation directions, e.g., the visualization in row 1 column 4, or as unstable updates that fluctuate within small regions, e.g., the visualizations in row 2 column 3 and row 3 column 4. Obtaining a latent variable with more accurate and clearer categorical information is one possible solution for these failure cases. We could also address the problem by achieving more stable adaptation with optimization techniques like Nesterov momentum (Nesterov, 1983). Moreover, although it fails in these cases, the adaptation of the target sample to some source domains still improves the performance, e.g., the adaptation of the photo sample (row 1 column 2) and cartoon sample (row 3 column 3) to the art-painting domain, and the adaptation of the sketch sample (row 4 column 4) to the cartoon domain, which further demonstrates the effectiveness of our iterative sample adaptation through the energy-based model. These results motivate another solution for the failure cases: learning to select the best source domain, or the top-n source domains, for the adaptation and prediction of each target sample. We leave these explorations for future work.
E MORE EXPERIMENTAL RESULTS
Analyses and discussions of the categorical latent variables. In the proposed method, the categorical latent representation for the test sample will have high fidelity to the correct class. This is guaranteed by the training procedure of our method. As shown in the training objective function (eq. 10), we minimize the KL divergence to encourage the prior p φ (z|x) to be close to the variational posterior q φ (z|d x ). d x is essentially the class prototype containing the categorical information.
By doing so, we train the inference model p φ (z|x) to learn to extract categorical information from a single sample. Moreover, we also supervise the sample adaptation procedure by the predicted log-likelihood of the adapted samples (the last term in eq. (10)). The supervision is inherent in the objective function of our discriminative energy-based model, as in the derivation of eq. (7) and eq. (10).

Figure 6: More visualizations of the iterative adaptation on PACS. We visualize the adaptation of samples from different target domains to source domains on PACS. Each subfigure shows the adaptation of one target sample to one source domain. The adaptation procedure of the target samples is represented by the gradient color from green to red. In each figure, the target sample has the same label as the source data, but is mispredicted due to domain shifts. During adaptation, the target samples gradually approach the source distributions and eventually get correct predictions.
Due to this supervision, the model is trained to learn to adapt out-of-distribution samples to the source distribution while maintaining the correct categorical information conditioned on z. Although trained only on source domains, this ability generalizes to the target domain since the model mimics different domain shifts during training. To further show that z captures the categorical information in x, we visualize the features x_t and latent variables z_t of the target samples in Figure 8, which shows that z_t indeed captures the categorical information. Moreover, z_t is more discriminative than x_t, even though z_t is approximated from only x_t at inference time.

Figure 7: Failure case visualizations of our method on PACS. The visualization settings are the same as in Figure 6. Our method makes wrong predictions on samples with complex backgrounds or multiple objects. However, it still achieves good adaptation of these target samples to some source domains, which shows its effectiveness.
Moreover, the categorical latent variable benefits the correctness of the model when the target samples are adapted into previously unexplored regions after very large numbers of steps. Since our optimization objective is to minimize the energy of the adapted sample, it is possible that the energy of the target samples becomes lower than that of the source data after very many steps. In this case, the adapted samples can arrive in previously unexplored regions due to the limited source data. This is demonstrated in Figure 4, where the performance of the adapted samples drops after large numbers of steps, reaching a low energy value. Additionally, the classifier cannot be well trained in these unexplored regions, which may also contribute to the performance drop. This is one reason we set the number of steps to a small value, e.g., 20 or 50. The benefit of the categorical latent variable in such cases can also be seen in Figure 4: the oracle model shows almost no performance degradation even at small energy values after adaptation, and the model with the latent variable p(z|x) is more robust to the step number and energy value than the model without z. These results show the role of the latent variable in preserving the categorical information during adaptation and in somewhat correcting predictions after adaptation.
With the categorical latent variable z, it is natural to make the final prediction directly by p φ (y|z) without the sample adaptation procedure. However, here we would like to clarify that it is sub-optimal.
Figure 8: Visualizations of the target features x_t and categorical latent variables z_t. We use art-painting on PACS as the target domain. Different colors denote different categories. z_t is obtained by p_φ(z_t|x_t). Most data points of z_t are clustered according to their labels, demonstrating that z_t can capture the categorical information.

The latent variable is dedicated to preserving the categorical information in x during adaptation. It still contains the domain information of the target samples. Therefore, it is not optimal to make predictions directly on the latent variable z, due to the domain shift between z and the source-trained classifiers. By contrast, the proposed method moves the target features close to the source distributions to address domain shifts while preserving the categorical information during adaptation.
To show how direct prediction on z performs, we provide the experimental results of making predictions only from z in Table 4. As expected, it is worse than prediction on the adapted target features x, confirming the analysis provided above.
To show the advantages of our method, we also combine the prediction of the latent variable z with model adaptation methods. We use the online adaptation proposed by Wang et al. (2021), where all target samples are utilized to adapt the source-trained models in an online manner. The model keeps updating step by step. In each step, the model is adapted to one batch of target samples. As shown in Table 4b, with large numbers of target samples per step, e.g., 128, the adaptation with Tent is competitive. However, when the number of samples for online adaptation is small, e.g., 1 and 16, the performance of the adapted model even drops, especially for single sample adaptation. By contrast, our method adapts each target sample to the source distribution. All target samples are adapted and predicted equally and individually. The overall performance of our method is comparable to Tent with 128 samples per adaptation step.
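For completeness, a sketch of the per-sample test-time ensemble of eq. (11) is given below, reusing the hypothetical adapt_sample helper from the Langevin sketch above; how the classifier consumes z (here, by concatenation with x) is our assumption.

```python
import torch
import torch.nn.functional as F

def predict_target(x_t, domains):
    """Per-sample ensemble over the S source domains, as in eq. (11).
    `domains` holds (latent_net, energy_fn, classifier) triples, one per
    source domain; adapt_sample is the hypothetical Langevin helper above."""
    probs = []
    for latent_net, energy_fn, classifier in domains:
        z_mean, z_std = latent_net(x_t)
        z_t = z_mean + z_std * torch.randn_like(z_std)  # z_t ~ p_phi(z_t | x_t)
        x_adapted = adapt_sample(x_t, z_t, energy_fn)   # adapt to this source domain
        with torch.no_grad():
            # p(y|z, x): conditioning via concatenation is our assumption.
            logits = classifier(torch.cat([x_adapted, z_t], dim=-1))
            probs.append(F.softmax(logits, dim=-1))
    return torch.stack(probs).mean(dim=0)               # average the S predictions
```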
Time cost with different adaptation steps. As the number of steps increases, both the training and test time cost consistently increase for all target domains. Without adaptation, the test time cost for one test batch is about 0.05 seconds. The 20-step adaptation takes about 0.1 extra seconds. This number increases to 0.25 seconds with 50 steps. The test time increases by more than 0.5 seconds for 100 iterations, which is ten times the cost without adaptation and might limit the applicability of the proposal. The training time cost for 100 iterations is more than two times that of ERM, as shown in Table 5. Since the extra time cost is mainly caused by the iterative adaptation, potential solutions are speeding up the Langevin dynamics with optimization techniques like Nesterov momentum (Nesterov, 1983), or exploring one-step methods for the target sample adaptation. In the other experiments on PACS in this paper we use 20 steps for all target domains, considering both the overall performance and the time cost.
Detailed comparisons. We provide the detailed performance of each target domain on PACS (Table 6), Office-Home (Table 7), and PHEME (Table 8). On PACS, our method achieves competitive results on each target domain and the best overall performance with both ResNet-18 and ResNet-50 as the backbone. Moreover, our method performs better than most of the recent model adaptation methods (Wang et al., 2021;Dubey et al., 2021;Iwasawa & Matsuo, 2021;Xiao et al., 2022). The conclusion on Office-Home and PHEME is similar to that on PACS. We achieve competitive and even better performance on each target domain.
Results on single source domain generalization. To show the ability of our method to handle corruption distribution shifts and single source domain generalization, we conduct experiments on CIFAR-10-C and ImageNet-C. We train the model on original data and evaluate it on data with 15 types of corruption. Since our method needs to mimic distribution shifts to train the discriminative energy-based model, in the single-source setting we generate negative samples by adding random noise to the images and features of the clean data. We also use 4 other corruption types (not contained in the evaluated corruption types) as negative samples during training, which we refer to as "corrupted data as negative samples". Note that these corrupted data are only used as negative samples to train the energy-based model. As shown in Table 9, by mimicking better domain shifts during training, our method achieves competitive results with Sun et al. (2020). Comparing our method with data-augmentation-based methods (e.g., Mixup (Guo et al., 2019), CutMix (Yun et al., 2019) and AugMix (Hendrycks et al., 2020)), our sample adaptation is also competitive. The proposed method performs worse with a single source domain, even though we generate extra negative samples to mimic the domain shifts; the reason can be that the randomly generated domain shifts do not simulate the test-time domain shift well.

Table 8: Generalization beyond image data. Rumour detection on the PHEME microblog dataset. Our method achieves the best overall performance and is competitive in each domain.
Analyses for ensemble prediction. We conduct several experiments on PACS to analyze the ensemble inference in our method. We first provide the results of each source-domain-specific classifier before and after sample adaptation. As shown in Table 10, although the performance before and after adaptation to different source domains differs due to domain shifts, the proposed sample adaptation performs better for most of the source domains. Moreover, the ensemble inference further improves the overall performance both without and with sample adaptation, and the results with sample adaptation remain better, as expected.
We also try different aggregation methods to make the final predictions. The results are provided in Table 11. The best per-domain results in Table 10 are comparable, but it is difficult to identify the best source domain for adaptation before obtaining the results. We tried to find the closest source domain of each target sample by the cosine similarity of feature representations and by the predicted confidence. We also tried to aggregate the predictions by a weighted average according to the cosine similarity. With cosine similarity, the weighted-average results are comparable to the common ensemble method we use in the paper, but the results of adaptation to the closest source domain are not as good. The reason can be that the cosine measure is not able to estimate the domain relationships, showing that it is difficult to reliably estimate the relationship between source domains and a single target sample. The results obtained by using the most confident adaptation are also not as good as the ensemble method, although comparable. The reason can be that the ensemble method introduces uncertainty into the predictions, which makes it more robust.
Benefit for larger domain gaps. To show the benefit of our proposal for domain generalization scenarios with large gaps, we conduct experiments on rotated MNIST and Fashion-MNIST. The results are shown in Figure 9. The models are trained on source domains with rotation angles from 15° to 75°, 30° to 60°, and 30° to 45°, and always tested on target domains with angles of 0° and 90°. As the number of domains seen during training decreases, the domain gap between source and target increases, and the performance gap between our method and the others becomes more pronounced. Notably, comparing our method with the recent test-time adaptation of Xiao et al. (2022), which adapts a model to each target sample, shows that adapting target samples handles larger domain gaps better than adapting the model.
Figure 1: Illustration of the proposed energy-based model. It aims to adapt the target samples to the source distributions, which can be more accurately classified by the source domain classifier (left).
Figure 2: Overall process of the proposed sample adaptation by the discriminative energy-based model. In each iteration, we train the classification model φ_i and energy function E_{θ_i} of one source domain D_i. I_i and I_j denote the images from domains D_i and D_j, respectively. I_i denotes one batch of images, which generates the center features d_i. The energy function E_{θ_i} is trained by using x_i as positive samples and adapted samples q_{θ_i}(x_i) generated from x_j from other domains as negative samples. The adaptation is achieved by Langevin dynamics of E_{θ_i}. During inference, the target samples are adapted by Langevin dynamics of the energy function E_θ of each source domain and then predicted by eq. (11).
Datasets. We evaluate on PACS (Li et al., 2017), Office-Home (Venkateswara et al., 2017), DomainNet (Peng et al., 2019), and Rotated MNIST and Fashion-MNIST.
Implementation details. We evaluate on PACS and Office-Home with both a ResNet-18 and ResNet-50 (He et al., 2016) and on DomainNet with a ResNet-50. The backbones are pretrained on ImageNet (Deng et al., 2009). On PHEME we conduct the experiments based on a pretrained DistilBERT (Sanh et al., 2019), following (Wright & Augenstein, 2020). To increase the number of sampling steps and sample diversity of the energy functions, we introduce a replay buffer B that stores the past updated samples from the modeled distribution (Du & Mordatch, 2019). The details of the models and hyperparameters are provided in Appendix C.
Muandet et al. (2013) and Ghifary et al. (2016) learn domain-invariant representations by matching the moments of features across source domains. Li et al. (2018b) further improved the model by learning conditional-invariant features. Recently, Mahajan et al. (2021) introduced causal matching to model within-class variations for generalization. Shi et al. (2022) provided a gradient-matching method that encourages consistent gradient directions across domains. Arjovsky et al. (2019) and Ahuja et al. (2021) proposed invariant risk minimization to learn an invariant classifier. Another widely used methodology is domain augmentation
Figure 5: Benefit of energy-based test sample adaptation. Different shapes denote different classes. From left to right: adaptation to the source domains photo, cartoon, and sketch of samples from the target domain art-painting. After adaptation, the target samples are closer to the source data than before, demonstrating the effectiveness of our method. Best viewed in color.
Table 9: Experiments on single-source domain generalization. The model is trained on original data and evaluated on 15 different types of corruption. Our method is competitive with Sun et al. (2020), Rusak et al. (2020) and Hendrycks et al. (2020), and is outperformed by Wang et al. (2021). Mimicking good domain shifts during training is important for our method.
Table 1: Benefit of energy-based test sample adaptation. Experiments on PACS using a ResNet-18, averaged over five runs. Optimized by eq. (7), our model improves after adaptation. With the latent variable (eq. 10) performance improves further, both before and after adaptation.

Method                           Adaptation  Photo        Art-painting  Cartoon      Sketch       Mean
Without latent variable (eq. 7)  no          94.73 ±0.22  78.66 ±0.59   78.24 ±0.71  78.34 ±0.62  82.49 ±0.26
                                 yes         94.59 ±0.16  80.45 ±0.52   79.98 ±0.51  79.23 ±0.32  83.51 ±0.30
With latent variable (eq. 10)    no          95.12 ±0.41  79.79 ±0.64   79.15 ±0.37  79.28 ±0.82  83.33 ±0.43
                                 yes         96.05 ±0.37  82.28 ±0.31   81.55 ±0.65  79.81 ±0.41  84.92 ±0.59
Table 2: Comparisons on image and text datasets. Our method achieves the best mean accuracy for all datasets, independent of the backbone. Larger adaptation steps (i.e., 50) lead to better performance.

Method                               PACS                      Office-Home               DomainNet    PHEME
                                     ResNet-18    ResNet-50    ResNet-18    ResNet-50    ResNet-50    DistilBERT
Iwasawa & Matsuo (2021)              81.40        85.10        57.00        68.30        -            -
Zhou et al. (2020a)                  82.83        84.90        65.63        67.66        -            -
Gulrajani & Lopez-Paz (2020)         -            85.50        -            66.50        40.90        -
Wang et al. (2021)                   83.09        86.23        64.13        67.99        -            75.8 ±0.23
Dubey et al. (2021)                  -            -            -            68.90        43.90        -
Xiao et al. (2022)                   84.15        87.51        66.02        71.07        -            76.1 ±0.21
This paper w/o adaptation            83.33 ±0.43  86.05 ±0.37  65.01 ±0.47  70.44 ±0.25  42.90 ±0.34  75.4 ±0.13
This paper w/ adaptation (10 steps)  84.25 ±0.48  87.05 ±0.26  65.73 ±0.32  71.13 ±0.43  43.75 ±0.49  76.0 ±0.16
This paper w/ adaptation (20 steps)  84.92 ±0.59  87.70 ±0.28  66.31 ±0.21  72.07 ±0.38  44.66 ±0.51  76.5 ±0.18
This paper w/ adaptation (50 steps)  85.10 ±0.33  88.12 ±0.25  66.75 ±0.21  72.25 ±0.32  44.98 ±0.43  76.9 ±0.16
ACKNOWLEDGMENT
This work is financially supported by the Inception Institute of Artificial Intelligence, the University of Amsterdam and the allowance Top consortia for Knowledge and Innovation (TKIs) from the Netherlands Ministry of Economic Affairs and Climate Policy.

REFERENCES
Manh-Ha Bui, Toan Tran, Anh Tran, and Dinh Phung. Exploiting domain-specific features to enhance domain generalization. In Advances in Neural Information Processing Systems, volume 34, 2021.
Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. SWAD: Domain generalization by seeking flat minima. In Advances in Neural Information Processing Systems, volume 34, 2021.
Antonio D'Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi. Learning to generalize one sample at a time with self-supervision. arXiv preprint arXiv:1910.03915, 2019.
Yilun Du, Shuang Li, Joshua Tenenbaum, and Igor Mordatch. Improved contrastive divergence training of energy based models. In International Conference on Machine Learning. PMLR, 2021a.
Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In International Conference on Learning Representations, 2020.
Hongyu Guo, Yongyi Mao, and Richong Zhang. Mixup as locally linear out-of-manifold regularization. In AAAI Conference on Artificial Intelligence, volume 33, pp. 3714-3722, 2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Yusuke Iwasawa and Yutaka Matsuo. Test-time classifier adjustment module for model-agnostic domain generalization. In Advances in Neural Information Processing Systems, volume 34, 2021.
Yann LeCun, Sumit Chopra, Raia Hadsell, Marc'Aurelio Ranzato, and Fu Jie Huang. Energy-based models. In Predicting Structured Data. MIT Press, 2006.
Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In IEEE International Conference on Computer Vision, pp. 5542-5550, 2017.
Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy Hospedales. Learning to generalize: Meta-learning for domain generalization. In AAAI Conference on Artificial Intelligence, volume 32, 2018a.
Ya Li, Mingming Gong, Xinmei Tian, Tongliang Liu, and Dacheng Tao. Domain generalization via conditional invariant representations. In AAAI Conference on Artificial Intelligence, volume 32, 2018b.
Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In International Conference on Machine Learning, pp. 6028-6039. PMLR, 2020.
Krikamol Muandet, David Balduzzi, and Bernhard Schölkopf. Domain generalization via invariant feature representation. In International Conference on Machine Learning, pp. 10-18. PMLR, 2013.
Weili Nie, Arash Vahdat, and Anima Anandkumar. Controllable and compositional generation with latent-space energy-based models. In Advances in Neural Information Processing Systems, volume 34, 2021.
Erik Nijkamp, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Learning non-convergent non-persistent short-run MCMC toward energy-based model. In Advances in Neural Information Processing Systems, volume 32, 2019.
Trung Phung, Trung Le, Tung-Long Vuong, Toan Tran, Anh Tran, Hung Bui, and Dinh Phung. On learning domain-invariant representations for transfer learning with multiple sources. In Advances in Neural Information Processing Systems, volume 34, 2021.
David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann
machines. Cognitive Science, 9(1):147-169, 1985.
Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio,
Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-
distribution generalization. In Advances in Neural Information Processing Systems, volume 34,
2021.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization.
arXiv preprint arXiv:1907.02893, 2019.
Yogesh Balaji, Swami Sankaranarayanan, and Rama Chellappa. MetaReg: Towards domain gen-
eralization using meta-regularization. In Advances in Neural Information Processing Systems,
volume 31, pp. 998-1008, 2018.
David Dehaene, Oriel Frigo, Sébastien Combrexelle, and Pierre Eline. Iterative energy-based
projection on a normal data manifold for anomaly localization. In International Conference on
Learning Representations, 2020.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale
hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pp.
248-255, 2009.
Jiahua Dong, Zhen Fang, Anjin Liu, Gan Sun, and Tongliang Liu. Confident anchor-induced multi-
source free domain adaptation. In Advances in Neural Information Processing Systems, volume 34,
pp. 2848-2860, 2021.
Qi Dou, Daniel C Castro, Konstantinos Kamnitsas, and Ben Glocker. Domain generalization via
model-agnostic learning of semantic features. In Advances in Neural Information Processing
Systems, 2019.
Yilun Du and Igor Mordatch. Implicit generation and modeling with energy based models. In
Advances in Neural Information Processing Systems, volume 32, 2019.
Yilun Du, Shuang Li, and Igor Mordatch. Compositional visual generation with energy based models.
In Advances in Neural Information Processing Systems, volume 33, pp. 6637-6647, 2020a.
Yingjun Du, Jun Xu, Huan Xiong, Qiang Qiu, Xiantong Zhen, Cees G M Snoek, and Ling Shao.
Learning to learn with variational information bottleneck for domain generalization. In European
Conference on Computer Vision, pp. 200-216, 2020b.
Yingjun Du, Xiantong Zhen, Ling Shao, and Cees G M Snoek. MetaNorm: Learning to normalize
few-shot batches across domains. In International Conference on Learning Representations,
2021b.
Yingjun Du, Xiantong Zhen, Ling Shao, and Cees GM Snoek. Hierarchical variational memory for
few-shot learning across domains. arXiv preprint arXiv:2112.08181, 2021c.
Abhimanyu Dubey, Vignesh Ramanathan, Alex Pentland, and Dhruv Mahajan. Adaptive methods
for real-world domain generalization. In IEEE Conference on Computer Vision and Pattern
Recognition, pp. 14340-14349, 2021.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of
deep networks. In International Conference on Machine Learning, pp. 1126-1135. PMLR, 2017.
Muhammad Ghifary, David Balduzzi, W Bastiaan Kleijn, and Mengjie Zhang. Scatter compo-
nent analysis: A unified framework for domain adaptation and domain generalization. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 39(7):1414-1430, 2016.
Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi,
and Kevin Swersky. Your classifier is secretly an energy based model and you should treat it like
one. In International Conference on Learning Representations, 2020.
Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshmi-
narayanan. AugMix: A simple data processing method to improve robustness and uncertainty.
International Conference on Learning Representations, 2020.
Geoffrey Hinton, Simon Osindero, Max Welling, and Yee-Whye Teh. Unsupervised discovery of
nonlinear structure using contrastive backpropagation. Cognitive Science, 30(4):725-731, 2006.
Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural
Computation, 14(8):1771-1800, 2002.
David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai
Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapola-
tion. In International Conference on Machine Learning, pp. 5815-5826. PMLR, 2021.
Vinod K Kurmi, Venkatesh K Subramanian, and Vinay P Namboodiri. Domain impression: A source
data free domain adaptation method. In IEEE/CVF Winter Conference on Applications of Computer
Vision, pp. 615-625, 2021.
Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised
domain adaptation without source data. In IEEE/CVF Conference on Computer Vision and Pattern
Recognition, pp. 9641-9650, 2020.
Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, and Jiashi Feng. Source data-absent unsupervised
domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 2021.
Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection.
In Advances in Neural Information Processing Systems, volume 33, pp. 21464-21475, 2020.
Yuejiang Liu, Parth Kothari, Bastien van Delft, Baptiste Bellot-Gurlet, Taylor Mordan, and Alexandre
Alahi. Ttt++: When does self-supervised test-time training fail or thrive? In Advances in Neural
Information Processing Systems, volume 34, 2021.
Divyat Mahajan, Shruti Tople, and Amit Sharma. Domain generalization using causal matching. In
International Conference on Machine Learning, pp. 7313-7324. PMLR, 2021.
Saeid Motiian, Marco Piccirilli, Donald A Adjeroh, and Gianfranco Doretto. Unified deep supervised
domain adaptation and generalization. In IEEE International Conference on Computer Vision, pp.
5715-5725, 2017.
Yurii E Nesterov. A method for solving the convex programming problem with convergence rate O
(1/kˆ2). In Doklady Akademii Nauk SSSR, volume 269, pp. 543-547, 1983.
A Tuan Nguyen, Toan Tran, Yarin Gal, and Atilim Gunes Baydin. Domain invariant representation
learning with domain density transformations. In Advances in Neural Information Processing
Systems, volume 34, 2021.
Erik Nijkamp, Mitch Hill, Tian Han, Song-Chun Zhu, and Ying Nian Wu. On the anatomy of
mcmc-based maximum likelihood learning of energy-based models. In AAAI Conference on
Artificial Intelligence, volume 34, pp. 5272-5280, 2020.
Prashant Pandey, Mrigank Raman, Sumanth Varambally, and Prathosh AP. Generalization on unseen
domains via inference-time label-preserving target projections. In IEEE/CVF Conference on
Computer Vision and Pattern Recognition, pp. 12924-12933, 2021.
Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching
for multi-source domain adaptation. In IEEE/CVF international conference on computer vision,
pp. 1406-1415, 2019.
Vihari Piratla, Praneeth Netrapalli, and Sunita Sarawagi. Efficient domain generalization via common-
specific low-rank decomposition. In International Conference on Machine Learning, pp. 7728-
7738. PMLR, 2020.
Table 3: Implementation details of our method per dataset and backbone.

Dataset         Backbone    Backbone learning rate  Step size  Number of steps
PACS            ResNet-18   0.00005                 50         20
PACS            ResNet-50   0.00001                 50         20
Office-Home     ResNet-18   0.00001                 100        20
Office-Home     ResNet-50   0.00001                 100        20
Rotated MNIST   ResNet-18   0.00005                 50         20
Fashion-MNIST   ResNet-18   0.00005                 50         20
PHEME           DistilBERT  0.00003                 40         20
Table 4: Analyses on the categorical latent variable. As expected, prediction on the adapted target samples x performs better than prediction directly on the categorical latent variable z.

(a) Overall comparisons on PACS and Office-Home.

                                     PACS                      Office-Home
                                     ResNet-18    ResNet-50    ResNet-18    ResNet-50
Predict directly on z                82.46 ±0.34  85.95 ±0.33  64.49 ±0.25  70.60 ±0.53
Predict on adapted target samples x  84.92 ±0.59  87.70 ±0.28  66.31 ±0.21  72.07 ±0.38

(b) Detailed comparisons on PACS.

                                        Photo        Art-painting  Cartoon      Sketch       Mean
Predict directly on z
  No adaptation                         94.22 ±0.25  79.52 ±0.21   80.46 ±0.43  75.63 ±0.68  82.46 ±0.34
Predict on z with model adaptation (Tent)
  Adaptation with 1 sample per step     80.49 ±0.27  44.14 ±0.38   51.49 ±0.44  30.28 ±0.66  51.60 ±0.37
  Adaptation with 16 samples per step   93.65 ±0.33  80.20 ±0.24   76.90 ±0.52  68.49 ±0.72  79.81 ±0.31
  Adaptation with 64 samples per step   96.04 ±0.33  81.91 ±0.37   80.81 ±0.64  76.33 ±0.65  83.77 ±0.41
  Adaptation with 128 samples per step  97.25 ±0.24  84.91 ±0.31   81.12 ±0.47  76.80 ±0.83  85.02 ±0.49
Predict on adapted target samples x with our method
  Adaptation with 1 sample x            96.05 ±0.37  82.28 ±0.31   81.55 ±0.65  79.81 ±0.41  84.92 ±0.59
Table 5: Training cost of the proposed method. Compared with ERM, our method has about 20% more parameters, most of which come from the energy functions of the source domains. Similar to test time, the time cost of training increases along with the number of steps of the energy-based model.

Method      Parameters  Adaptation steps  10,000 iterations training time
ERM         11.18M      -                 6.2 h
This paper  13.73M      20                7.9 h
                        40                9.4 h
                        60                10.6 h
                        80                12.1 h
                        100               14.0 h
Table 6: Comparisons on PACS. Our method achieves the best mean accuracy with a ResNet-18 backbone and is competitive with ResNet-50.

Backbone   Method                        Photo        Art-painting  Cartoon      Sketch       Mean
ResNet-18  Dou et al. (2019)             94.99        80.29         77.17        71.69        81.04
           Iwasawa & Matsuo (2021)       -            -             -            -            81.40
           Zhao et al. (2020)            96.65        80.70         76.40        71.77        81.46
           Wang et al. (2021)            95.49        81.55         77.67        77.64        83.09
           Zhou et al. (2020b)           96.10        84.10         78.80        75.90        83.70
           Xiao et al. (2022)            95.87        82.02         79.73        78.96        84.15
           This paper                    96.05 ±0.37  82.28 ±0.31   81.55 ±0.65  79.81 ±0.41  84.92 ±0.59
ResNet-50  Dou et al. (2019)             95.01        82.89         80.49        72.29        82.67
           Dubey et al. (2021)           -            -             -            -            84.50
           Iwasawa & Matsuo (2021)       -            -             -            -            85.10
           Zhao et al. (2020)            98.25        87.51         79.31        76.30        85.34
           Gulrajani & Lopez-Paz (2020)  97.20        84.70         80.80        79.30        85.50
           Wang et al. (2021)            97.96        86.30         82.53        78.11        86.23
           Seo et al. (2020)             95.99        87.04         80.62        82.90        86.64
           Xiao et al. (2022)            97.88        88.09         83.83        80.21        87.51
           This paper                    97.67 ±0.14  88.00 ±0.29   84.75 ±0.39  80.40 ±0.38  87.70 ±0.28
Table 7: Comparisons on Office-Home. Our method achieves the best mean accuracy using both a ResNet-18 and ResNet-50 backbone.

Backbone   Method                        Art          Clipart      Product      Real World   Mean
ResNet-18  Iwasawa & Matsuo (2021)       47.00        46.80        68.00        66.10        57.00
           Wang et al. (2021)            56.45        52.06        73.19        74.82        64.13
           Xiao et al. (2022)            59.39        53.94        74.68        76.07        66.02
           This paper                    60.08 ±0.33  53.93 ±0.34  74.50 ±0.39  76.74 ±0.24  66.31 ±0.21
ResNet-50  Gulrajani & Lopez-Paz (2020)  61.30        52.40        75.80        76.60        66.50
           Wang et al. (2021)            62.12        56.65        75.61        77.58        67.99
           Dubey et al. (2021)           -            -            -            -            68.90
           Xiao et al. (2022)            67.21        57.97        78.61        80.47        71.07
           This paper                    69.33 ±0.14  58.37 ±0.30  79.29 ±0.32  81.26 ±0.26  72.07 ±0.38
Table 10: Sample adaptation to each source domain on PACS. The experiments are conducted with ResNet-18. Due to the domain shifts between the target domain and different source domains, the performance before and after adaptation differs. The results with sample adaptation to most of the source domains are better than those without adaptation. The ensemble inference further improves the overall performance, where the results with sample adaptation are still better than those without adaptation.

(a) Photo
                Art-painting  Cartoon      Sketch       Ensemble
w/o adaptation  95.79 ±0.23   95.03 ±0.27  95.05 ±0.42  95.12 ±0.41
w/ adaptation   95.81 ±0.27   94.69 ±0.21  95.99 ±0.45  96.05 ±0.37

(b) Art-painting
                Photo        Cartoon      Sketch       Ensemble
w/o adaptation  78.52 ±0.43  79.68 ±0.37  79.83 ±0.52  79.79 ±0.64
w/ adaptation   81.49 ±0.33  82.19 ±0.35  80.81 ±0.43  82.28 ±0.31

(c) Cartoon
                Photo        Art-painting  Sketch       Ensemble
w/o adaptation  79.05 ±0.33  78.93 ±0.41   78.80 ±0.55  79.15 ±0.37
w/ adaptation   81.09 ±0.38  80.44 ±0.31   80.32 ±0.71  81.55 ±0.65

(d) Sketch
                Photo        Art-painting  Cartoon      Ensemble
w/o adaptation  78.32 ±0.56  76.98 ±0.73   76.16 ±0.82  79.28 ±0.82
w/ adaptation   79.72 ±0.43  79.69 ±0.67   79.77 ±0.45  79.81 ±0.41
Table 11: Analyses of different aggregation methods for the predictions. The experiments are conducted on PACS using ResNet-18. The results with different aggregation methods are similar, while ensemble inference performs slightly better.

Aggregation methods                                         Photo        Art-painting  Cartoon      Sketch       Mean
Adaptation to the closest source domain                     95.41 ±0.28  79.86 ±0.41   79.67 ±0.44  78.97 ±0.72  83.48 ±0.43
Weighted average of adaptation to different source domains  95.93 ±0.33  82.18 ±0.37   81.24 ±0.52  79.54 ±0.77  84.76 ±0.55
Most confident prediction after adaptation                  95.77 ±0.25  81.93 ±0.31   80.67 ±0.65  79.25 ±0.62  84.41 ±0.32
Ensemble (This paper)                                       96.05 ±0.37  82.28 ±0.31   81.55 ±0.65  79.81 ±0.41  84.92 ±0.59
Code available: https://github.com/zzzx1224/EBTSA-ICLR2023.
Binhui Xie, Longhui Yuan, Shuang Li, Chi Harold Liu, Xinjing Cheng, and Guoren Wang. Active learning for domain adaptation: An energy-based approach. arXiv preprint arXiv:2112.01406, 2021.
Jianwen Xie, Yang Lu, Song-Chun Zhu, and Yingnian Wu. A theory of generative convnet. In International Conference on Machine Learning, pp. 2635-2644. PMLR, 2016.
Jianwen Xie, Yaxuan Zhu, Jun Li, and Ping Li. A tale of two flows: Cooperative learning of Langevin flow and normalizing flow toward energy-based model. In International Conference on Learning Representations, 2022.
Shiqi Yang, Joost van de Weijer, Luis Herranz, Shangling Jui, et al. Exploiting the intrinsic neighborhood structure for source-free domain adaptation. In Advances in Neural Information Processing Systems, volume 34, pp. 29393-29405, 2021.
Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, and Chelsea Finn. Improving out-of-distribution robustness via selective augmentation. arXiv preprint arXiv:2201.00299, 2022.
Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. CutMix: Regularization strategy to train strong classifiers with localizable features. In IEEE/CVF International Conference on Computer Vision, pp. 6023-6032, 2019.
Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, and Chelsea Finn. Adaptive risk minimization: Learning to adapt to domain shift. In Advances in Neural Information Processing Systems, volume 34, 2021.
Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, and Dacheng Tao. Domain generalization via entropy regularization. In Advances in Neural Information Processing Systems, volume 33, 2020.
Aurick Zhou and Sergey Levine. Training on test data with Bayesian adaptation for covariate shift. In Advances in Neural Information Processing Systems, 2021.
Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Learning to generate novel domains for domain generalization. In European Conference on Computer Vision, pp. 561-578, 2020a.
Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain generalization with MixStyle. In International Conference on Learning Representations, 2020b.
Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. arXiv preprint arXiv:2103.02503, 2021.
Han Zou, Jianfei Yang, and Xiaojian Wu. Unsupervised energy-based adversarial domain adaptation for cross-domain text classification. In Findings of the Association for Computational Linguistics, pp. 1208-1218, 2021.
Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. Analysing how people orient to and spread rumours in social media by looking at conversational threads. PLoS ONE, 11(3):e0150989, 2016.
Figure 9: Benefit for larger domain gaps. We train on different source set distributions and evaluate on target sets with rotation angles of 0° and 90°. As the domain gap between source and target sets increases, our method performs better than alternatives. |
20,472,740 | NATURAL LANGUAGE INFERENCE OVER INTERACTION SPACE | The Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce the Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and that a denser interaction tensor contains richer semantic information. One instance of such an architecture, the Densely Interactive Inference Network (DIIN), demonstrates state-of-the-art performance on large-scale NLI corpora and a large-scale NLI-alike corpus. It is noteworthy that DIIN achieves a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI; Williams et al. 2017) dataset with respect to the strongest published system. | [
1957433
] | NATURAL LANGUAGE INFERENCE OVER INTERACTION SPACE
Yichen Gong [email protected]
Heng Luo [email protected]
Jian Zhang [email protected]
New York University, New York, USA
‡ Horizon Robotics, Inc., Beijing, China
NATURAL LANGUAGE INFERENCE OVER INTERACTION SPACE
The Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce the Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and that a denser interaction tensor contains richer semantic information. One instance of such an architecture, the Densely Interactive Inference Network (DIIN), demonstrates state-of-the-art performance on large-scale NLI corpora and a large-scale NLI-alike corpus. It is noteworthy that DIIN achieves a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI; Williams et al. 2017) dataset with respect to the strongest published system.
INTRODUCTION
Natural Language Inference (NLI, also known as recognizing textual entailment, or RTE) requires one to determine whether the logical relationship between two sentences is entailment (if the premise is true, then the hypothesis must be true), contradiction (if the premise is true, then the hypothesis must be false) or neutral (neither entailment nor contradiction). NLI is known as a fundamental and yet challenging task for natural language understanding, not only because it requires one to identify the language pattern, but also to understand certain common sense knowledge. In Table 1, three samples from the MultiNLI corpus show that solving the task requires one to handle the full complexity of lexical and compositional semantics. Entailment is related to a broad range of tasks: in abstractive summarization, the generated summary should be entailed by the source text; in paraphrase identification, the paraphrased sentences entail each other; in information retrieval, the retrieved text is entailed by the source context (Bos & Markert, 2005). Previous work on NLI (or RTE) extensively explored conventional approaches (Fyodorov et al., 2000; Bos & Markert, 2005; MacCartney & Manning, 2009). Recent progress on NLI is enabled by the availability of a 570k-pair human-annotated dataset and the advancement of representation learning techniques.
Among the core representation learning techniques, the attention mechanism has been broadly applied to many NLU tasks since its introduction: machine translation (Bahdanau et al., 2014), abstractive summarization (Rush et al., 2015), reading comprehension, dialog systems (Mei et al., 2016), etc. As described by Vaswani et al. (2017), "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key". The attention mechanism is known for its alignment between representations, for focusing on one part of a representation over another, and for modeling dependencies regardless of sequence length. The attention weight that follows the softmax layer is the essential component of attention (Bahdanau et al., 2014). A single-channel attention weight can be viewed as a single-channel interaction tensor, which represents the word-by-word interaction between two sentences in one dimension. On the other hand, a multi-channel attention weight, applied in multi-head attention (Vaswani et al., 2017) to align sentences in different representation subspaces, can be viewed as a multi-channel interaction tensor. Observing attention's powerful capability, we hypothesize that the interaction tensor contains the semantic information necessary for understanding the text. In this work, we demonstrate the feasibility of tackling the natural language inference task directly by extracting semantic features in interaction space. By incorporating a powerful feature extractor such as a deep 2-D convolutional neural network architecture, we can extract n-gram pair semantic interaction features from the interaction tensor. Our Interactive Inference Network (IIN) architecture is fully compatible with convolutional feature extractors that work well on CIFAR-100 (Krizhevsky, 2009) or ImageNet (Russakovsky et al., 2015) with minor adaptation. It builds a bridge between NLU and computer vision. By hierarchically stacking the feature extractor, the model can understand the text from the word level and phrase level up to the sentence level.
The goal of reducing sequential computation lays the foundation of several recent works such as Extended Neural GPU (Kaiser & Bengio, 2016), ByteNet (Kalchbrenner et al., 2016), ConvS2S (Gehring et al., 2017) and Transformer (Vaswani et al., 2017). A recurrent structure generates a sequence of hidden states h_t, as a function of the previous hidden state h_{t−1} and the input for position t. This hard constraint precludes parallelization within training examples, resulting in higher computational time complexity and thus slower training (Vaswani et al., 2017; Gehring et al., 2017). To tackle the problem, we propose a simple encoder without any recurrent or recursive structure in this paper.
Our experiments show that one instance of the Interactive Inference Network, the Densely Interactive Inference Network, achieves new state-of-the-art performance on both the SNLI and MultiNLI corpora. To test the generality of our architecture, we interpret the paraphrase identification task as a natural language inference task, where matching corresponds to entailment and not matching to neutral. We test the model on the Quora Question Pair dataset, which contains over 400k real-world question pairs, and achieve new state-of-the-art performance.
We introduce the related work in Section 2, and discuss the general framework of IIN along with a specific instance that enjoys state-of-the-art performance on multiple datasets in Section 3. We describe experiments and analysis in Section 4. Finally, we conclude and discuss future work in Section 5.
RELATED WORK
The early exploration of NLI mainly relied on conventional methods and small-scale datasets (Marelli et al., 2014). The availability of the SNLI dataset with 570k human-annotated sentence pairs has enabled a good deal of progress on natural language understanding. The essential representation learning techniques for NLU, such as attention, memory and the use of parse structure, are studied on SNLI, which serves as an important benchmark for sentence understanding. The models trained on the NLI task can be divided into two categories: (i) sentence encoding-based models, which aim to find a vector representation for each sentence and classify the relation by using the concatenation of the two vector representations along with their absolute element-wise difference and element-wise product; (ii) joint feature models, which use cross-sentence features or attention from one sentence to another.
After the neural attention mechanism was successfully applied to the machine translation task (Bahdanau et al., 2014), the technique became widely used in both the natural language processing and computer vision domains. Many variants of attention such as hard attention (Xu et al., 2015), self-attention, multi-hop attention (Gong & Bowman, 2017), bidirectional attention (Seo et al., 2016) and multi-head attention (Vaswani et al., 2017) have also been introduced to tackle more complicated tasks. Before this work, the neural attention mechanism was mainly used for alignment, focusing on a specific part of the representation. In this work, we want to show that the attention weight contains rich semantic information required for understanding the logical relationship between a sentence pair.
Though RNNs and LSTMs are very good for variable-length sequence modeling, using convolutional neural networks for NLU tasks is desirable because of their parallelism in computation. Convolutional structures have been successfully applied in various domains such as machine translation (Gehring et al., 2017), sentence classification (Kim, 2014), text matching (Hu et al., 2014) and sentiment analysis (Kalchbrenner et al., 2014). The convolutional structure has also been applied at different levels of granularity such as byte (Zhang & LeCun, 2017), character, word (Gehring et al., 2017) and sentence levels.
MODEL
INTERACTIVE INFERENCE NETWORK
The Interactive Inference Network (IIN) is a hierarchical multi-stage process that consists of five components. Each of the components is compatible with different types of implementations. Potentially, all existing approaches in machine learning, such as decision trees, support vector machines and neural network approaches, can be transferred to replace a given component in this architecture. We focus on neural network approaches below; a schematic sketch of the full pipeline follows the component list. Figure 1 provides a visual illustration of the Interactive Inference Network.
1. Embedding Layer converts each word or phrase to a vector representation and constructs the representation matrix for each sentence. In the embedding layer, a model can map tokens to vectors with pre-trained word representations such as GloVe (Pennington et al., 2014), word2vec (Mikolov et al., 2013) and fastText (Joulin et al., 2016). It can also utilize preprocessing tools, e.g. a named entity recognizer, part-of-speech tagger, lexical parser or coreference identifier, to incorporate more lexical and syntactical information into the feature vector.

2. Encoding Layer encodes the representations by incorporating context information or enriching the representation with features desirable for later use. For instance, a model can adopt a bidirectional recurrent neural network to model the temporal interaction in both directions, a recursive neural network (Socher et al., 2011b) (also known as TreeRNN) to model the compositionality and recursive structure of language, or self-attention to model long-term dependencies within a sentence. Different components of the encoder can be combined to obtain a better sentence matrix representation.

3. Interaction Layer creates a word-by-word interaction tensor from the premise and hypothesis representation matrices. In a TreeRNN setting, the interaction layer models the interaction between each node pair (Socher et al., 2011a). The interaction can be modeled in different ways. A common approach is to compute the cosine similarity or dot product between each pair of feature vectors. Alternatively, a denser interaction tensor can be obtained by using linear layers to scale down the element-wise product between each pair of feature vectors. In the interaction tensor, one channel represents how the words interact in one dimension (perspective); having d channels therefore means the model understands the sentences with an implicit world representation of d dimensions (perspectives).

4. Feature Extraction Layer adopts a feature extractor to extract semantic features from the interaction tensor. Convolutional feature extractors such as AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2014), Inception (Szegedy et al., 2014), ResNet (He et al., 2016) and DenseNet (Huang et al., 2016), proven to work well on image recognition, are completely compatible with this architecture, so advances in computer vision become transferable to natural language understanding. Unlike work that employs a 1-D sliding window (Kim, 2014), our CNN architecture allows 2-D kernels to extract semantic interaction features from the word-by-word interaction between n-gram pairs. Sequential or tree-like feature extractors are also applicable in the feature extraction layer.
5. Output Layer decodes the acquired features to give a prediction. Under the setting of NLI, it generates the confidence for each class.
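The sketch below wires the five stages together with pluggable callables, showing only the data flow and shapes; the toy embedder, identity encoder, max-pooling extractor and linear classifier are stand-in assumptions, not the concrete model described in the next subsection.

```python
import numpy as np

def iin_forward(premise, hypothesis, embed, encode, extract, classify):
    """Schematic forward pass through the five IIN stages.

    embed/encode/extract/classify are pluggable stand-ins for any
    concrete choice of the components listed above.
    """
    P = embed(premise)                      # 1. embedding layer, (p, d)
    H = embed(hypothesis)                   # (h, d)
    P, H = encode(P), encode(H)             # 2. encoding layer
    I = P[:, None, :] * H[None, :, :]       # 3. interaction layer, (p, h, d)
    feats = extract(I)                      # 4. feature extraction layer
    return classify(feats)                  # 5. output layer: class scores

# Toy components: random embeddings, identity encoder, global max pooling
# and a linear classifier over the three NLI classes.
d = 8
rng = np.random.default_rng(0)
W_out = rng.standard_normal((d, 3))
scores = iin_forward(["a", "cat", "sleeps"], ["an", "animal", "rests", "."],
                     embed=lambda toks: rng.standard_normal((len(toks), d)),
                     encode=lambda X: X,
                     extract=lambda I: I.max(axis=(0, 1)),
                     classify=lambda f: f @ W_out)
print(scores.shape)  # (3,)
```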
DENSELY INTERACTIVE INFERENCE NETWORK
One example of an IIN is the Densely Interactive Inference Network (DIIN), a relatively simple structure that nevertheless produces state-of-the-art performance on multiple datasets.
Embedding Layer: For DIIN, we use the concatenation of word embedding, character features and syntactical features. The word embedding is obtained by mapping each token to a high-dimensional vector space with pre-trained word vectors (840B GloVe); word embeddings are updated during training. As in Kim et al. (2016) and Lee et al. (2016), the character feature is obtained by applying a convolutional neural network followed by max pooling over the learned character vectors. Syntactical features include a one-hot part-of-speech tagging feature and a binary exact match feature. The exact match value is activated if a token with the same stem or lemma appears in the other sentence. The exact match feature is simple but has been found extremely useful in reading comprehension (Chen et al., 2017a), and it also speeds up convergence on the NLI task. We now have the premise representation P ∈ R^{p×d} and the hypothesis representation H ∈ R^{h×d}, where p is the sequence length of the premise, h is the sequence length of the hypothesis and d is the dimension of both representations. The 1-D convolutional neural network and character feature weights are shared between premise and hypothesis.
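A minimal sketch of the per-token feature concatenation follows, using the 300 + 100 + 47 + 1 = 448 split reported later in the dimensionality study. The random vectors and the POS index are placeholders for real GloVe vectors, a char-CNN output and a tagger output.

```python
import numpy as np

def token_features(word_vec, char_feat, pos_id, exact_match, n_pos=47):
    """Concatenate the four per-token features into one 448-D vector.

    word_vec: (300,) GloVe vector; char_feat: (100,) char-CNN output;
    pos_id: POS tag index for the one-hot feature; exact_match: bool.
    """
    pos = np.zeros(n_pos)
    pos[pos_id] = 1.0
    em = np.array([float(exact_match)])
    return np.concatenate([word_vec, char_feat, pos, em])

rng = np.random.default_rng(0)
vec = token_features(rng.standard_normal(300), rng.standard_normal(100),
                     pos_id=12, exact_match=True)
print(vec.shape)  # (448,) = 300 + 100 + 47 + 1
```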
Encoding Layer: In the encoding layer, the premise representation P and the hypothesis representation H are passed through a two-layer highway network, yielding new representations P^{hw} ∈ R^{p×d} and H^{hw} ∈ R^{h×d}. These new representations are then passed to an intra-attention (self-attention) layer to take the word order and context information into account. Taking the premise as an example, we model intra-attention by
$$A_{ij} = \alpha(P_i^{hw}, P_j^{hw}, w^{itrAtt}) \in \mathbb{R} \quad (1)$$

$$P_i^{itrAtt} = \sum_{j=1}^{p} \frac{\exp(A_{ij})}{\sum_{k=1}^{p} \exp(A_{kj})} P_j^{hw}, \quad \forall i, j \in [1, \ldots, p] \quad (2)$$

where $P_i^{itrAtt}$ is a weighted summation of $P^{hw}$. We choose $\alpha(a, b, w^{itrAtt}) = w^{itrAtt\top} [a; b; a \circ b]$, where $w^{itrAtt} \in \mathbb{R}^{3d}$ is a trainable weight, $\circ$ is element-wise multiplication, $[;]$ is vector concatenation across rows, and the implicit multiplication is matrix multiplication. Then both $P^{hw}$ and $P^{itrAtt}$ are fed into a semantic composite fuse gate (fuse gate in short), which acts as a skip connection. The fuse gate is implemented as

$$z_i = \tanh(W^1 [P_i^{hw}; P_i^{itrAtt}] + b^1) \quad (3)$$
$$r_i = \sigma(W^2 [P_i^{hw}; P_i^{itrAtt}] + b^2) \quad (4)$$
$$f_i = \sigma(W^3 [P_i^{hw}; P_i^{itrAtt}] + b^3) \quad (5)$$
$$P_i^{enc} = r_i \circ P_i^{hw} + f_i \circ z_i \quad (6)$$

where $W^1, W^2, W^3 \in \mathbb{R}^{2d \times d}$ and $b^1, b^2, b^3 \in \mathbb{R}^d$ are trainable weights and $\sigma$ is the sigmoid nonlinearity.
We apply the same operations to the hypothesis representation, yielding H^{enc}. The weights of the intra-attention and fuse gate layers are not shared between premise and hypothesis, but the difference between the two sets of weights is penalized. The penalization aims to ensure that the parallel structures learn similar functionality while remaining aware of the subtle semantic differences between premise and hypothesis.
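The numpy sketch below implements equations (1)-(6) directly for the premise branch; the weights are taken as given, and the random initialization and sizes in the usage example are illustrative only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def intra_attention_fuse(P_hw, w_itr, W1, b1, W2, b2, W3, b3):
    """Equations (1)-(6) in numpy; all weights are assumed given.

    P_hw: (p, d); w_itr: (3d,); W1, W2, W3: (2d, d); b1, b2, b3: (d,).
    """
    p, d = P_hw.shape
    # Eq. (1): A_ij = w^T [a; b; a o b] for every word pair (i, j).
    A = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            A[i, j] = w_itr @ np.concatenate(
                [P_hw[i], P_hw[j], P_hw[i] * P_hw[j]])
    # Eq. (2): normalize scores over k for each column j, then sum over j.
    E = np.exp(A - A.max(axis=0, keepdims=True))   # stabilized softmax
    P_itr = (E / E.sum(axis=0, keepdims=True)) @ P_hw          # (p, d)
    # Eqs. (3)-(6): the semantic composite fuse gate.
    PH = np.concatenate([P_hw, P_itr], axis=1)                  # (p, 2d)
    z = np.tanh(PH @ W1 + b1)
    r = sigmoid(PH @ W2 + b2)
    f = sigmoid(PH @ W3 + b3)
    return r * P_hw + f * z                                     # (p, d)

p, d = 4, 6
rng = np.random.default_rng(0)
P_enc = intra_attention_fuse(rng.standard_normal((p, d)),
                             rng.standard_normal(3 * d),
                             rng.standard_normal((2 * d, d)), np.zeros(d),
                             rng.standard_normal((2 * d, d)), np.zeros(d),
                             rng.standard_normal((2 * d, d)), np.zeros(d))
print(P_enc.shape)  # (4, 6)
```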
Interaction Layer: The interaction layer models the interaction between the premise encoded representation $P^{enc}$ and the hypothesis encoded representation $H^{enc}$ as follows:

$$I_{ij} = \beta(P_i^{enc}, H_j^{enc}) \in \mathbb{R}^d, \quad \forall i \in [1, \ldots, p], \; \forall j \in [1, \ldots, h] \quad (7)$$

where $P_i^{enc}$ is the i-th row vector of $P^{enc}$ and $H_j^{enc}$ is the j-th row vector of $H^{enc}$. Though there are many possible implementations of interaction, we find $\beta(a, b) = a \circ b$ very useful.
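Equation (7) with the element-wise product is a one-line broadcast in numpy, sketched below with toy shapes. It also makes the relation to a dot-product similarity matrix explicit: the similarity matrix is exactly the channel-wise sum of the interaction tensor, i.e. a one-channel summary, which is what the ablation in Section 4 compares against.

```python
import numpy as np

rng = np.random.default_rng(0)
P_enc = rng.standard_normal((5, 8))    # (p, d)
H_enc = rng.standard_normal((6, 8))    # (h, d)

# Eq. (7) with beta(a, b) = a o b, via broadcasting: a (p, h, d) tensor.
I = P_enc[:, None, :] * H_enc[None, :, :]

# A dot-product similarity matrix keeps a single scalar per word pair;
# it equals the channel-wise sum of I.
S = P_enc @ H_enc.T
print(I.shape, S.shape, np.allclose(I.sum(axis=-1), S))  # (5, 6, 8) (5, 6) True
```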
Feature Extraction Layer: We adopt DenseNet (Huang et al., 2016) as the convolutional feature extractor in DIIN. Though our experiments show that ResNet (He et al., 2016) works well in this architecture, we choose DenseNet because it is effective in saving parameters. One interesting observation with ResNet is that if we remove the skip connection in the residual structure, the model does not converge at all. We found that batch normalization delays convergence without contributing to accuracy, so we do not use it. A ReLU activation is applied after all convolutions unless otherwise noted. Once we have the interaction tensor I, we use a convolution with a 1 × 1 kernel to scale down the tensor by a ratio FSDR, without a following ReLU; if the input has k channels, the output has floor(k × FSDR) channels. The resulting feature map is fed into three pairs of dense block (Huang et al., 2016) and transition block. Each dense block contains n layers of 3 × 3 convolutions with growth rate GR. Each transition layer has a convolution with a 1 × 1 kernel for scaling down, followed by a max pooling layer with stride 2; the scale-down ratio in the transition layer is TSDR.
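To see how the channel counts evolve, the small bookkeeping function below walks the description above; the default hyper-parameter values are the ones listed in the experiment settings (FSDR = 0.3, n = 8, GR = 20, TSDR = 0.5), while the function itself is an illustrative sketch, not the authors' code.

```python
import math

def feature_channels(k, fsdr=0.3, n=8, gr=20, tsdr=0.5, n_blocks=3):
    """Channel bookkeeping through the feature extraction layer.

    A 1x1 convolution first scales k channels down by FSDR; each dense
    block then adds n * GR channels (its layers' outputs are concatenated)
    and each transition layer scales channels down by TSDR (spatial size
    is also halved by the stride-2 max pooling, not tracked here).
    """
    c = math.floor(k * fsdr)
    for _ in range(n_blocks):
        c += n * gr                 # dense block
        c = math.floor(c * tsdr)    # transition layer
    return c

# With d = 448 interaction channels and the reported hyper-parameters:
print(feature_channels(448))  # 156
```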
Output Layer: DIIN uses a linear layer to classify the final flattened feature representation into three classes.
EXPERIMENTS
In this section, we present the evaluation of our model. We first perform a quantitative evaluation, comparing our model with other competitive models. We then conduct qualitative analyses to understand how DIIN achieves high-level understanding through interaction.
DATA
Here we introduce the three datasets on which we evaluate our model. The evaluation metric for all datasets is accuracy.
SNLI: The Stanford Natural Language Inference corpus (SNLI; Bowman et al. 2015) has 570k human-annotated sentence pairs. The premise data is drawn from the captions of the Flickr30k corpus, and the hypothesis data is manually composed. The labels provided are "entailment", "neutral", "contradiction" and "-". "-" indicates that the annotators could not reach a consensus, so these pairs are removed during training and testing as in other works. We use the same data split as in Bowman et al. (2015).
MultiNLI: The Multi-Genre NLI Corpus (MultiNLI; Williams et al. 2017) has 433k sentence pairs, whose collection process and task details closely follow SNLI. The premise data is collected from a maximally broad range of genres of American English such as written non-fiction genres (SLATE, OUP, GOVERNMENT, VERBATIM, TRAVEL), spoken genres (TELEPHONE, FACE-TO-FACE), less formal written genres (FICTION, LETTERS) and a specialized one for 9/11. Half of these selected genres appear in the training set while the rest do not, creating in-domain (matched) and cross-domain (mismatched) development/test sets. We use the same data split as provided by Williams et al. (2017). Since the test set labels are not provided, test performance is obtained through submission on Kaggle.com; each team is limited to two submissions per day.
Quora question pair: The Quora question pair dataset contains over 400k real-world question pairs selected from Quora.com. A binary annotation indicating match (duplicate) or not match (not duplicate) is provided for each question pair. In our case, a duplicate question pair can be interpreted as the entailment relation and a non-duplicate pair as neutral. We use the same split ratio as mentioned in Wang et al. (2017).
We also study human performance for both SNLI and MultiNLI. In the dev and test sets of SNLI and the matched and mismatched development sets of MultiNLI, each sentence pair is provided with a set of "annotator labels" containing five labels given by five different annotators. The final "gold label" is set to a label if it receives three or more votes; otherwise, the label is set to "-", since there is no agreement. Viewed from another perspective, the rate at which individual "annotator labels" match the "gold label" is the human performance. For example, if three annotators vote A and the other two annotators vote B, then following the crowd-sourcing guideline the "gold label" is A; in this case, the human performance on this particular sample is 60%. In light of this guideline, we calculate the human performance for the SNLI dev and test sets and the MultiNLI development sets. We do not take into account samples labeled with "-", since we discard them during model testing. The human performance score is 88.1% for the SNLI development set, 87.7% for the SNLI test set, 88.5% for the MultiNLI matched development set and 89.2% for the MultiNLI mismatched development set. Since we do not have access to the MultiNLI test set labels, we do not provide human performance for the MultiNLI test sets.
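The protocol above is small enough to state as code. The sketch below computes human performance from a list of per-pair annotator votes, reproducing the 3-vs-2 example from the text; the input format is an assumption for illustration.

```python
from collections import Counter

def human_performance(annotator_labels):
    """Fraction of individual annotator votes that agree with the gold label.

    Pairs without a 3-of-5 majority get gold label '-' and are skipped,
    mirroring the protocol described above.
    """
    matched = total = 0
    for votes in annotator_labels:
        gold, count = Counter(votes).most_common(1)[0]
        if count < 3:
            continue  # gold is '-'; such samples are discarded
        matched += count
        total += len(votes)
    return matched / total

# The example from the text: three votes for A, two for B -> 60%.
print(human_performance([["A", "A", "A", "B", "B"]]))  # 0.6
```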
EXPERIMENTS SETTING
We implement our algorithm with the TensorFlow (Abadi et al., 2016) framework. An Adadelta optimizer (Zeiler, 2012) with ρ as 0.95 and ε as 1e−8 is used to optimize all trainable weights. The initial learning rate is set to 0.5 and the batch size to 70. When the model does not improve the best in-domain performance for 30,000 steps, an SGD optimizer with learning rate 3e−4 is used to help the model find a better local optimum. Dropout layers are applied before all linear layers and after the word-embedding layer. We use an exponentially decayed keep rate during training, where the initial keep rate is 1.0 and the decay rate is 0.977 for every 10,000 steps. We initialize our word embeddings with pre-trained 300D GloVe 840B vectors (Pennington et al., 2014), while out-of-vocabulary words are randomly initialized with a uniform distribution. The character embeddings are randomly initialized. All weights are constrained by L2 regularization, and the L2 ratio at step t is calculated as follows:
$$L2Ratio_t = \sigma\!\left(\frac{(t - L2FullStep/2) \times 8}{L2FullStep/2}\right) \times L2FullRatio \quad (8)$$
where L2FullRatio determines the maximum L2 regularization ratio and L2FullStep determines at which step the maximum ratio is reached. We choose L2FullRatio as 0.9e−5 and L2FullStep as 100,000. The ratio of the L2 penalty between the difference of the two encoder weights is set to 1e−3. For a dense block in the feature extraction layer, the number of layers n is set to 8 and the growth rate is set to 20. The first scale-down ratio FSDR in the feature extraction layer is set to 0.3 and the transitional scale-down ratio TSDR is set to 0.5. The sequence length is used as a hard cutoff in all experiments: 48 for MultiNLI, 32 for SNLI and 24 for the Quora Question Pair dataset. During the experiments on MultiNLI, we use 15% of the data from SNLI as in Williams et al. (2017). We select the parameters by the best run of development accuracy. Our ensembling approach takes the majority vote of the predictions given by multiple runs of the same model under different random parameter initializations.
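The two schedules above, equation (8) and the decayed keep rate, are shown below as a minimal sketch with the stated hyper-parameter values; the three probe steps are arbitrary.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def l2_ratio(t, full_ratio=0.9e-5, full_step=100_000):
    """Equation (8): the L2 ratio ramps sigmoidally up to full_ratio."""
    return sigmoid((t - full_step / 2) * 8 / (full_step / 2)) * full_ratio

def keep_rate(t, init=1.0, decay=0.977, every=10_000):
    """The exponentially decayed dropout keep rate described above."""
    return init * decay ** (t / every)

for step in (0, 50_000, 100_000):
    print(step, f"{l2_ratio(step):.2e}", round(keep_rate(step), 3))
```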
EXPERIMENT ON MULTINLI
We compare our results with all other published systems in Table 2. Besides ESIM, the state-of-the-art model on SNLI, all other models appeared at the RepEval 2017 workshop. The RepEval 2017 workshop required all submitted models to be sentence encoding-based, so alignment between sentences and memory modules were not eligible for the competition. All models except ours share one common feature: they use an LSTM as an essential building block of the encoder. Our approach, without using any recurrent structure, achieves a new state-of-the-art performance of 80.0%, exceeding the previous state of the art by more than 5%. Unlike prior observations on MultiNLI, we find the out-of-domain test performance to be consistently lower than the in-domain test performance; selecting parameters from the best in-domain development accuracy partially contributes to this result.
EXPERIMENT ON SNLI
In Table 3, we compare our model to the performance of other models on SNLI. Experiments (2-7) are sentence encoding-based models. Bowman et al. (2016) provide a BiLSTM baseline. Vendrov et al. (2015) adopt a two-layer GRU encoder with pre-trained "skip-thoughts" vectors. To capture sentence-level semantics, Mou et al. (2015) use a tree-based CNN and Bowman et al. (2016) propose a stack-augmented parser-interpreter neural network (SPINN) which incorporates parsing information in a sequential manner. Liu et al. (2016) use intra-attention on top of a BiLSTM to generate sentence representations, and Munkhdalai & Yu (2016) propose a memory augmented neural network to encode the sentence. The next group of models, experiments (8-18), uses cross-sentence features. Rocktäschel et al. (2015) align each sentence word-by-word with attention on top of LSTMs. Wang & Jiang (2015) enforce cross-sentence attention word-by-word matching with the proposed mLSTM model. Cheng et al. (2016) propose a long short-term memory-network (LSTMN) with deep attention fusion that links the current word to previous words stored in memory. Parikh et al. (2016) decompose the task into sub-problems and conquer them respectively. Yu & Munkhdalai (2017) propose a neural tree indexer, a full n-ary tree whose subtrees can be overlapped. The re-read LSTM proposed by Sha et al. (2016) considers the attention vector of one sentence as the inner state of the LSTM for the other sentence. Chen et al. (2016) propose a sequential model that infers locally, and an ensemble with a tree-like inference module that further improves performance. We show that our model, DIIN, achieves state-of-the-art performance on this competitive leaderboard.
EXPERIMENT ON QUORA QUESTION PAIR DATASET
In this subsection, we evaluate the effectiveness of our model on paraphrase identification cast as a natural language inference task. Other than our baselines, we compare with Wang et al. (2017) and Tomar et al. (2017). BiMPM models different perspectives of matching between the sentence pair in both directions, then aggregates the matching vectors with an LSTM. DECATT word and DECATT char use automatically collected in-domain paraphrase data to noisily pretrain n-gram word embeddings and n-gram subword embeddings, respectively, on the decomposable attention model proposed by Parikh et al. (2016). In Table 4, our experiments show that DIIN has better performance than all other models, and the ensemble score exceeds the former best result by more than 1 percent.
ANALYSIS
Ablation Study: We conduct an ablation study on our base model to examine the effectiveness of each component. We study our model on the MultiNLI dataset and use the matched validation score as the standard for model selection. The results are shown in Table 5. In experiment 2, we remove the convolutional feature extractor, so the model is structured as a sentence encoding-based model: the sentence representation matrix is max-pooled over time to obtain a feature vector. Once we have the feature vector p for the premise and h for the hypothesis, we use [p; h; |p − h|; p ∘ h] as the final feature vector for classifying the relationship. We obtain 73.2 for the matched score and 73.6 on the mismatched data, which is competitive among other sentence encoding-based models. We further study how the encoding layer contributes to enriching the feature space of the interaction tensor. If we remove the encoding layer completely, we obtain a matched score of 73.5 and a mismatched score of 73.2. This result demonstrates that the feature extraction layer has a powerful capability to capture semantic features on its own.
In experiment 4, we remove both self-attention and the fuse gate, retaining only the highway network. The result improves to 77.7 and 77.3 on the matched and mismatched development sets, respectively. However, in experiment 5, when we remove only the fuse gate, the performance surprisingly degrades to 73.5 for the matched score and 73.8 for the mismatched score. On the other hand, if we use the addition of the representation after the highway network and the representation after self-attention as a skip connection, as in experiment 9, the performance increases to 77.3 and 76.3. This comparison indicates that the self-attention layer makes training harder to converge, while a skip connection eases the gradient flow for both the highway and self-attention layers. By comparing the base model with the model in experiment 6, we show that the fuse gate not only serves well as a skip connection, but also makes good decisions about which information to fuse from the two representations. To show that the dense interaction tensor contains more semantic information, we replace it with the dot-product similarity matrix between the encoded premise and hypothesis representations. The result shows that the dot-product similarity matrix has an inferior capacity for semantic information.
Dimensionality and parameter count study: To study the influence of the model dimension d, which is also the number of channels in the interaction tensor, we design experiments to find out whether the dimensionality influences performance; we also report the parameter counts of these models. The base dimensionality is 448, where 300 comes from the word embedding, 100 from the character feature, 47 from part-of-speech tagging and 1 from the binary exact match feature. Since a highway network sets its output dimensionality equal to its input by default, we design a variant of the highway network so that a different output size can be obtained. The variant of the highway layer is designed as follows:
$$t_i = \tanh(W^t x_i + b^t) \quad (9)$$
$$g_i = \sigma(W^g x_i + b^g) \quad (10)$$
$$\tilde{x}_i = \begin{cases} x_i & d_{in} = d_{out} \\ W^x x_i + b^x & d_{in} \neq d_{out} \end{cases} \quad (11)$$
$$o_i = g_i \circ t_i + (1 - g_i) \circ \tilde{x}_i \quad (12)$$
where $x_i$ is the i-th vector of the input matrix $x$, $o_i$ is the i-th vector of the output matrix $o$, $W^t, W^g, W^x \in \mathbb{R}^{d_{in} \times d_{out}}$ and $b^t, b^g, b^x \in \mathbb{R}^{d_{out}}$ are trainable weights.
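A direct numpy rendering of equations (9)-(12) follows; the batch size and the 448 → 300 projection in the usage example are illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_variant(x, Wt, bt, Wg, bg, Wx=None, bx=None):
    """Equations (9)-(12): a highway layer with configurable output size.

    x: (n, d_in); Wt, Wg (and Wx when d_in != d_out): (d_in, d_out). When
    the sizes match, the carried input is x itself; otherwise it is
    projected per eq. (11).
    """
    t = np.tanh(x @ Wt + bt)                    # eq. (9)
    g = sigmoid(x @ Wg + bg)                    # eq. (10)
    x_c = x if Wx is None else x @ Wx + bx      # eq. (11)
    return g * t + (1.0 - g) * x_c              # eq. (12)

rng = np.random.default_rng(0)
d_in, d_out = 448, 300
out = highway_variant(rng.standard_normal((4, d_in)),
                      rng.standard_normal((d_in, d_out)), np.zeros(d_out),
                      rng.standard_normal((d_in, d_out)), np.zeros(d_out),
                      rng.standard_normal((d_in, d_out)), np.zeros(d_out))
print(out.shape)  # (4, 300)
```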
The results show that higher dimensionality gives better performance when the dimensionality is below a certain threshold; beyond that threshold, a larger number of parameters and higher dimensionality do not contribute to performance. In the case of SNLI, due to its simplicity in language pattern, 250D suffices to obtain good performance. On the other hand, a competitive performance on MultiNLI requires 350D. We fail to reproduce our best performance with the new structure on MultiNLI, which suggests that the additional layer in the highway network does not help convergence.
Error analysis: To analyze the model predictions, we use the annotated subset of the development set provided by Williams et al. (2017), which consists of 1,000 examples, each tagged with zero or more of the following tags:
• CONDITIONAL: whether the sentence contains a conditional.
• WORD OVERLAP: whether both sentences share more than 70% of their tokens.
• NEGATION: whether a negation shows up in either sentence.
• ANTO: whether the two sentences contain an antonym pair.
• LONG SENTENCE: whether the premise or hypothesis is longer than 30 or 16 tokens, respectively.
• TENSE DIFFERENCE: whether any verb in the two sentences uses a different tense.
• ACTIVE/PASSIVE: whether there is an active-to-passive (or vice versa) transformation from the premise to the hypothesis.
• PARAPHRASE: whether the two sentences are close paraphrases.
• QUANTITY/TIME REASONING: whether understanding the pair requires quantity or time reasoning.
• COREF: whether the hypothesis contains a pronoun or referring expression that needs to be resolved using the premise.
• QUANTIFIER: whether either sentence contains one of the following quantifiers: much, enough, more, most, less, least, no, none, some, any, many, few, several, almost, nearly.
• MODAL: whether one of the following modal verbs appears in either sentence: can, could, may, might, must, will, would, should.
• BELIEF: whether one of the following belief verbs appears in either sentence: know, believe, understand, doubt, think, suppose, recognize, forget, remember, imagine, mean, agree, disagree, deny, promise.
For more detailed descriptions, please refer to Williams et al. (2017). The results are shown in Table 7. We find that DIIN is consistently better by a large margin on sentence pairs with the WORD OVERLAP, ANTO, LONG SENTENCE, PARAPHRASE and BELIEF tags. During investigation, we hypothesized that the exact match feature helps the model better understand paraphrases, so we studied the results from the ablation study in which the exact match feature is removed. Surprisingly, the model without the exact match feature does not work worse on PARAPHRASE; instead, its accuracy on ANTO drops by about 10%. DIIN also works well on LONG SENTENCE, partially because the receptive field is large enough to cover all tokens.
Visualization: We also visualize the hidden representation from the interaction tensor I and the feature maps from the first dense block in Figure 2. We pick a sentence pair whose premise is "South Carolina has no referendum right, so the Supreme Court canceled the vote and upheld the ban." and whose hypothesis is "South Carolina has a referendum right, so the Supreme Court was powerless over the state.". The upper row of figures is sampled from the hidden representation of the interaction tensor I. We observe that the values of neurons are highly correlated row-wise and column-wise in the interaction tensor I and that different channels of the hidden representation show different aspects of the interaction. Though in certain channels the same words, "referendum", or phrases, "supreme court", cause activation, different word or phrase pairs, such as "ban" and "powerless over", also cause activation in other channels. This shows the model's strong capacity for understanding text from different perspectives. The lower row of Figure 2 shows the feature maps from the first dense block. After being convolved from the interaction tensor and the previous feature map, the new feature maps show activations in different positions, demonstrating that different semantic features are found. The first figure in the lower row has a pattern similar to a normal attention weight, whereas the others have no obvious pattern. Different channels of the feature maps indicate different kinds of semantic features. Figure 2: A visualization of hidden representations. The premise is "South Carolina has no referendum right, so the Supreme Court canceled the vote and upheld the ban." and the hypothesis is "South Carolina has a referendum right, so the Supreme Court was powerless over the state.". The upper row is sampled from the interaction tensor I and the lower row is sampled from the feature maps of the first dense block. We use the viridis colormap, where yellow represents activation and purple shows that the neuron is not active.
CONCLUSION AND FUTURE WORK
We show that the interaction tensor (or attention weight) contains the semantic information required to understand natural language. We introduce the Interactive Inference Network, a novel class of architectures that allows the model to solve NLI and NLI-like tasks by extracting semantic features from the interaction tensor end-to-end. One instance of such architecture, the Densely Interactive Inference Network (DIIN), achieves state-of-the-art performance on multiple datasets. By ablating each component of DIIN and varying the dimensionality, we show the effectiveness of each component.
Though we have made an initial exploration of natural language inference in interaction space, its full potential is not yet clear. We will keep exploring the potential of interaction space. Incorporating common-sense knowledge from external resources such as knowledge bases to increase the capacity of the model is another research goal of ours.
Figure 1: A visual illustration of the Interactive Inference Network (IIN).
Table 1: Samples from the MultiNLI dataset.
Table 2: MultiNLI results (matched / mismatched accuracy, %).
1. BiLSTM (Williams et al., 2017): 67.0 / 67.6
2. InnerAtt (Balazs et al., 2017): 72.1 / 72.1
3. ESIM: 72.3 / 72.1
4. Gated-Att BiLSTM (Chen et al., 2017b): 73.2 / 73.6
5. Shortcut-Stacked encoder (Nie & Bansal, 2017): 74.6 / 73.6
6. DIIN: 78.8 / 77.8
7. DIIN (ensemble): 80.0 / 78.7
Table 3: SNLI results (test accuracy, %).
1. Handcrafted features (Bowman et al., 2015): 78.2
2. LSTM encoder (Bowman et al., 2016): 80.6
3. Pretrained GRU encoders (Vendrov et al., 2015): 81.4
4. Tree-based CNN encoders (Mou et al., 2015): 82.1
5. SPINN-PI encoders (Bowman et al., 2016): 83.2
6. BiLSTM intra-attention encoders (Liu et al., 2016): 84.2
7. NSE encoders (Munkhdalai & Yu, 2016): 84.6
8. LSTM with attention (Rocktäschel et al., 2015): 83.5
9. mLSTM (Wang & Jiang, 2015): 86.1
10. LSTMN with deep attention fusion (Cheng et al., 2016): 86.3
11. Decomposable attention model (Parikh et al., 2016): 86.3
12. Intra-sentence attention + (11) (Parikh et al., 2016): 86.8
13. BiMPM (Wang et al., 2017): 86.9
14. NTI-SLSTM-LSTM (Yu & Munkhdalai, 2017): 87.3
15. Re-read LSTM (Sha et al., 2016): 87.5
16. ESIM (Chen et al., 2016): 88.0
17. ESIM ensemble with syntactic tree-LSTM (Chen et al., 2016): 88.6
18. BiMPM (ensemble) (Wang et al., 2017): 88.8
19. DIIN: 88.0
20. DIIN (ensemble): 88.9
Table 4: Quora question dataset results (dev / test accuracy). The first six rows are copied from Wang et al. (2017) and the next two rows from Tomar et al. (2017).
Table 5: Ablation study results (dev accuracy).
Table 6: Dimensionality and parameter count study results.
Table 7: MultiNLI results broken down by annotation tag.
ACKNOWLEDGMENTS

We thank Yuchen Lu, Chang Huang and Kai Yu for their sincere and insightful advice.
REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv, March 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv, September 2014.
Jorge A. Balazs, Edison Marrese-Taylor, Pablo Loyola, and Yutaka Matsuo. Refining Raw Sentence Representations for Textual Entailment Recognition via Attention. arXiv, July 2017.
Johan Bos and Katja Markert. Recognising Textual Entailment with Logical Inference. HLT/EMNLP, 2005.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. arXiv:1508.05326, August 2015.
Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. A Fast Unified Model for Parsing and Sentence Understanding. arXiv, March 2016.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to Answer Open-Domain Questions. arXiv:1704.00051, March 2017a.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. Enhanced LSTM for Natural Language Inference. arXiv, September 2016.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference. arXiv:1708.01353, August 2017b.
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long Short-Term Memory-Networks for Machine Reading. arXiv, January 2016.
Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. A natural language inference system. Proceedings of the 2nd Workshop on Inference in Computational Semantics, pp. 1-17, November 2000.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional Sequence to Sequence Learning. arXiv, May 2017.
Yichen Gong and Samuel R. Bowman. Ruminating Reader: Reasoning with Gated Multi-Hop Attention. arXiv, April 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. CVPR, 2016.
Karl Moritz Hermann, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching Machines to Read and Comprehend. arXiv, June 2015.
Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional Neural Network Architectures for Matching Natural Language Sentences. NIPS, 2014.
Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely Connected Convolutional Networks. arXiv, August 2016.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of Tricks for Efficient Text Classification. arXiv, July 2016.
Lukasz Kaiser and Samy Bengio. Can Active Memory Replace Attention? NIPS, 2016.
Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A Convolutional Neural Network for Modelling Sentences. arXiv, April 2014.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural Machine Translation in Linear Time. CoRR, 2016.
Yoon Kim. Convolutional Neural Networks for Sentence Classification. arXiv:1408.5882, August 2014.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-Aware Neural Language Models. AAAI, 2016.
Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. NIPS, 2012.
Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully Character-Level Neural Machine Translation without Explicit Segmentation. arXiv, October 2016.
Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention. arXiv, May 2016.
Bill MacCartney and Christopher D. Manning. An extended model of natural logic. 2009.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. A SICK cure for the evaluation of compositional distributional semantic models. LREC, 2014.
Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. Coherent Dialogue with Attention-based Language Models. arXiv, November 2016.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. NIPS, 2013.
Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. Natural Language Inference by Tree-Based Convolution and Heuristic Matching. arXiv, December 2015.
Tsendsuren Munkhdalai and Hong Yu. Neural Semantic Encoders. arXiv, July 2016.
Nikita Nangia, Adina Williams, Angeliki Lazaridou, and Samuel R. Bowman. The RepEval 2017 Shared Task: Multi-Genre Natural Language Inference with Sentence Representations. arXiv, July 2017.
Yixin Nie and Mohit Bansal. Shortcut-Stacked Sentence Encoders for Multi-Domain Inference. arXiv:1708.02312, August 2017.
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A Decomposable Attention Model for Natural Language Inference. arXiv, June 2016.
Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, 2014.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, and Phil Blunsom. Reasoning about Entailment with Neural Attention. arXiv, September 2015.
Alexander M. Rush, Sumit Chopra, and Jason Weston. A Neural Attention Model for Abstractive Sentence Summarization. arXiv:1509.00685, September 2015.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 2015.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. Bidirectional Attention Flow for Machine Comprehension. arXiv, November 2016.
Lei Sha, Baobao Chang, Zhifang Sui, and Sujian Li. Reading and Thinking: Re-read LSTM Unit for Textual Entailment Recognition. COLING, 2016.
Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv, September 2014.
Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. NIPS, 2011a.
Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. Parsing Natural Scenes and Natural Language with Recursive Neural Networks. ICML, 2011b.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going Deeper with Convolutions. arXiv, September 2014.
Gaurav Singh Tomar, Thyago Duque, Oscar Täckström, Jakob Uszkoreit, and Dipanjan Das. Neural Paraphrase Identification of Questions with Noisy Pretraining. arXiv, April 2017.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. arXiv, June 2017.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-Embeddings of Images and Language. arXiv, November 2015.
Shuohang Wang and Jing Jiang. Learning Natural Language Inference with LSTM. arXiv, December 2015.
Zhiguo Wang, Wael Hamza, and Radu Florian. Bilateral Multi-Perspective Matching for Natural Language Sentences. arXiv, 2017.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. arXiv, April 2017.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. arXiv, February 2015.
Hong Yu and Tsendsuren Munkhdalai. Neural Tree Indexers for Text Understanding. EACL, 2017.
Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. arXiv:1212.5701, December 2012.
Xiang Zhang and Yann LeCun. Which Encoding is the Best for Text Classification in Chinese, English, Japanese and Korean? arXiv, August 2017.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. Character-level Convolutional Networks for Text Classification. arXiv:1509.01626, 2015. |
260,498,700 | EXPLORING SPARSITY IN RECURRENT NEURAL NETWORKS | Recurrent Neural Networks (RNN) are widely used to solve a variety of problems and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8× and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90% and speed-up is around 2× to 7×. | [] | EXPLORING SPARSITY IN RECURRENT NEURAL NETWORKS
Sharan Narang [email protected]
Greg Diamos [email protected]
Shubho Sengupta
Erich Elsen
Baidu Research
EXPLORING SPARSITY IN RECURRENT NEURAL NETWORKS
Published as a conference paper at ICLR 2017
Recurrent Neural Networks (RNN) are widely used to solve a variety of problems and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8× and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90% and speed-up is around 2× to 7×.
INTRODUCTION
Recent advances in multiple fields such as speech recognition (Graves & Jaitly, 2014; Amodei et al., 2015), language modeling (Józefowicz et al., 2016) and machine translation can be at least partially attributed to larger training datasets, larger models and more compute that allows larger models to be trained on larger datasets.
For example, the deep neural network used for acoustic modeling in Hannun et al. (2014) had 11 million parameters, which grew to approximately 67 million for bidirectional RNNs and further to 116 million for the latest forward-only GRU models in Amodei et al. (2015). And in language modeling, the size of the non-embedding parameters (mostly in the recurrent layers) has exploded even as various ways of hand-engineering sparsity into the embeddings have been explored in Józefowicz et al. (2016) and Chen et al. (2015a).
These large models face two significant challenges in deployment. Mobile phones and embedded devices have limited memory and storage and in some cases network bandwidth is also a concern. In addition, the evaluation of these models requires a significant amount of computation. Even in cases when the networks can be evaluated fast enough, it will still have a significant impact on battery life in mobile devices (Han et al., 2015).

The more powerful server class GPUs used in data centers can generally perform inference quickly enough to serve one user, but in the data center performance per dollar is very important. Techniques that allow models to be evaluated faster enable more users to be served per GPU, increasing the effective performance per dollar.
Inference performance of RNNs is dominated by the memory bandwidth of the hardware, since most of the work is simply reading in the parameters at every time step. Moving from a dense calculation to a sparse one comes with a penalty, but if the sparsity factor is large enough, then the smaller amount of data required by the sparse routines becomes a win. Furthermore, this suggests that if the parameter sizes can be reduced to fit in cache or other very fast memory, then large speedups could be realized, resulting in a super-linear increase in performance.
We propose a method to reduce the number of weights in recurrent neural networks. While the network is training we progressively set more and more weights to zero using a monotonically increasing threshold. By controlling the shape of the function that maps iteration count to threshold value, we can control how sparse the final weight matrices become. We prune all the weights of a recurrent layer; other layer types with significantly fewer parameters are not pruned. Separate threshold functions can be used for each layer, although in practice we use one threshold function per layer type. With this approach, we can achieve sparsity of 90% with a small loss in accuracy. We show this technique works with Gated Recurrent Units (GRU) (Cho et al., 2014) as well as vanilla RNNs.
In addition to the benefits of less storage and faster inference, this technique can also improve the accuracy over a dense baseline. By starting with a larger dense matrix than the baseline and then pruning it down, we can achieve equal or better accuracy compared to the baseline but with a much smaller number of parameters.
This approach can be implemented easily in current training frameworks and is agnostic to the optimization algorithm. Furthermore, training time does not increase unlike previous approaches such as in Han et al. (2015). State of the art results in speech recognition generally require days to weeks of training time, so a further 3-4× increase in training time is undesirable.
RELATED WORK
There have been several proposals to reduce the memory footprint of weights and activations in neural networks. One method is to use a fixed point representation to quantize weights to signed bytes and activations to unsigned bytes (Vanhoucke et al., 2011). Another technique that has been tried in the past is to learn a low rank factorization of the weight matrices. One method is to carefully construct one of the factors and learn the other (Denil et al., 2013). Inspired by this technique, a low rank approximation for the convolution layers achieves twice the speed while staying within 1% of the original model in terms of accuracy (Denton et al., 2014). The convolution layer can also be approximated by a smaller set of basis filters (Jaderberg et al., 2014). By doing this they achieve a 2.5x speedup with no loss in accuracy. Quantization techniques like k-means clustering of weights can also reduce the storage size of the models by focusing only on the fully connected layers (Gong et al., 2014). A hash function can also reduce memory footprint by tying together weights that fall in the same hash bucket (Chen et al., 2015b). This reduces the model size by a factor of 8.
Yet another approach to reduce compute and network size is through network pruning. One method is to use several bias techniques to decay weights (Hanson & Pratt, 1989). Another approach is to use the diagonal terms of a Hessian matrix to construct a saliency threshold and drop weights that fall below it (LeCun et al., 1989). In this technique, once a weight has been set to 0, the network is retrained with these weights frozen at 0. Optimal Brain Surgeon is another work in the same vein that prunes weights using the inverse of a Hessian matrix, with the additional advantage of requiring no re-training after pruning (Hassibi et al., 1993).
Both pruning and quantization techniques can be combined to get impressive gains on AlexNet trained on the ImageNet dataset (Han et al., 2015). In this case, pruning, quantization and subsequent Huffman encoding results in a 35x reduction in model size without affecting accuracy. There has also been some recent work to shrink model size for recurrent and LSTM networks used in automatic speech recognition (ASR) (Lu et al., 2016). By using a hybrid strategy of using Toeplitz matrices for the bottom layer and shared low-rank factors on the top layers, they were able to reduce the parameters of a LSTM by 75% while incurring a 0.3% increase in word error rate (WER).
Our method is a pruning technique that is computationally efficient for the large recurrent networks that have become the norm for automatic speech recognition. Unlike methods that need to approximate a Hessian (LeCun et al., 1989; Hassibi et al., 1993), our method uses a simple heuristic to choose the threshold used to drop weights. Another advantage, compared to methods that need re-training (Han et al., 2015), is that our pruning technique is part of training and needs no additional re-training. Even though our technique requires a judicious choice of pruning hyper-parameters, we feel that this is easier than choosing the structure of matrices to guide the sparsification of recurrent networks (Lu et al., 2016). Another approach for pruning feed-forward neural networks for speech recognition is to prune all weights below a simple threshold at a particular epoch (Yu et al., 2012). However, we find that gradual pruning produces better results than hard pruning.
IMPLEMENTATION
Our pruning approach involves maintaining a set of masks, a monotonically increasing threshold and a set of hyper-parameters that are used to determine the threshold. During model initialization, we create a set of binary masks, one for each weight in the network, all initially set to one. After every optimizer update step, each weight is multiplied with its corresponding mask. At regular intervals, the masks are updated by setting to zero all parameters whose magnitude falls below the threshold.
The threshold is computed using the hyper-parameters shown in Table 1. The hyper-parameters control the duration, rate and frequency of pruning the parameters for each layer. We use a different set of hyper-parameters for each layer type, resulting in a different threshold for each layer type. The threshold is updated at regular intervals using the hyper-parameters according to Algorithm 1. We do not modify the gradients in the back-propagation step. It is possible for the updates of a pruned weight to be larger than the threshold of that layer; in this case, the weight will be involved in the forward pass again.
We provide heuristics to help determine start_itr, ramp_itr and end_itr in Table 1. After picking these hyper-parameters and assuming that the ramp slope (φ) is 1.5x the start slope (θ), we calculate θ using Equation 1.
θ = (2 · q · freq) / (2 · (ramp_itr − start_itr) + 3 · (end_itr − ramp_itr))    (1)
In order to determine q in Equation 1, we use an existing weight array from a previously trained model. The weights are sorted by absolute value, and we pick the weight corresponding to the 90th percentile as q. This allows us to pick reasonable values for the hyper-parameters required for pruning. A validation set can be used to fine-tune these parameters.
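As a concrete illustration, this heuristic can be sketched in a few lines of Python with NumPy. The variable names mirror Table 1; the iteration counts in the example are hypothetical and not values from the paper.

import numpy as np

def start_slope(weights, start_itr, ramp_itr, end_itr, freq=100):
    # q is the 90th percentile of the absolute weights of a previously
    # trained model, as suggested in the text; theta follows Equation 1.
    q = np.percentile(np.abs(weights), 90)
    return (2 * q * freq) / (
        2 * (ramp_itr - start_itr) + 3 * (end_itr - ramp_itr))

# Hypothetical example: 20 epochs of 2750 iterations each.
weights = np.random.randn(1760, 1760).astype(np.float32)
theta = start_slope(weights, start_itr=2750, ramp_itr=13750, end_itr=27500)
phi = 1.5 * theta  # ramp slope, using the stated assumption phi = 1.5 * theta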
We only prune the weights of the recurrent and linear layers but not the biases or batch norm parameters since they are much fewer in number compared to the weights. For the recurrent layers, we prune both the input weight matrix and the recurrent weight matrix. Similarly, we prune all the weights in gated recurrent units including those of the reset and update gates.
Algorithm 1 Pruning Algorithm

current_itr = 0
while training do
    for all parameters do
        param = param ⊙ mask (element-wise)
        if current_itr > start_itr and current_itr < end_itr then
            if (current_itr mod freq) == 0 then
                if current_itr < ramp_itr then
                    ε = θ · (current_itr − start_itr + 1) / freq
                else
                    ε = (θ · (ramp_itr − start_itr + 1) + φ · (current_itr − ramp_itr + 1)) / freq
                end if
                mask = (abs(param) ≥ ε)   // keep only weights at or above the threshold
            end if
        end if
    end for
    current_itr += 1
end while
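The following is a minimal NumPy sketch of Algorithm 1; it tracks one mask per weight tensor and updates the threshold ε every freq iterations. The optimizer update is stubbed out, and all hyper-parameter values below are placeholders rather than the paper's settings.

import numpy as np

def pruning_step(params, masks, itr, theta, phi,
                 start_itr, ramp_itr, end_itr, freq=100):
    # One iteration of the gradual pruning schedule (Algorithm 1).
    for name in params:
        params[name] *= masks[name]            # apply the current mask
        if start_itr < itr < end_itr and itr % freq == 0:
            if itr < ramp_itr:                 # slow initial pruning phase
                eps = theta * (itr - start_itr + 1) / freq
            else:                              # faster ramp phase
                eps = (theta * (ramp_itr - start_itr + 1)
                       + phi * (itr - ramp_itr + 1)) / freq
            # keep only weights whose magnitude reaches the threshold
            masks[name] = (np.abs(params[name]) >= eps).astype(params[name].dtype)

params = {"recurrent": np.random.randn(64, 64)}
masks = {k: np.ones_like(v) for k, v in params.items()}
theta, phi = 0.01, 0.015                        # placeholders; see Equation 1
for itr in range(1200):
    # the gradient/optimizer step on params would go here
    pruning_step(params, masks, itr, theta, phi,
                 start_itr=100, ramp_itr=500, end_itr=1000)
print("sparsity:", 1.0 - masks["recurrent"].mean())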
EXPERIMENTS
We run all our experiments on a training set of 2100 hours of English speech data and a validation set of 3.5 hours of multi-speaker data. This is a small subset of the datasets that we use to train our state-of-the-art automatic speech recognition models. We train the models using Nesterov SGD for 20 epochs. Besides the hyper-parameters for determining the threshold, all other hyper-parameters remain unchanged between the dense and sparse training runs. We find that our pruning approach works well for vanilla bidirectional recurrent layers and forward only gated recurrent units.
BIDIRECTIONAL RNNS
We use the Deep Speech 2 model for these experiments. As shown in Table 2, this model has 2 convolution layers, followed by 7 bidirectional recurrent layers and a CTC cost layer. Each recurrent linear layer has 1760 hidden units, creating a network of approximately 67 million parameters. For these experiments, we prune the linear layers that feed into the recurrent layers, the forward and backward recurrent layers and fully connected layer before the CTC layer. These experiments use clipped rectified-linear units (ReLU) σ(x) = min(max(x, 0), 20) as the activation function.
In the sparse run, the pruning begins shortly after the first epoch and continues until the 10th epoch. We chose these hyper-parameters so that the model has an overall sparsity of 88% at the end of pruning, which is 8x smaller than the original dense model. The character error rate (CER) on the devset is about 20% worse relative to the dense model as shown in Table 3.
An argument against this sparsity result might be that we are taking advantage of a large model that overfits our relatively small dataset. In order to test this hypothesis, we train a dense model with 704 hidden units in each layer, which has approximately the same number of parameters as the final sparse model. Table 3 shows that this model performs worse than the sparse models. Thus, pruning a larger model is a better approach to reducing parameters than using a dense model with fewer hidden units.
In order to recover the loss in accuracy, we train sparse models with larger recurrent layers with 2560 and 3072 hidden units. Figure 1a shows the training and dev curves for these sparse models compared to the dense baseline model. These experiments use the same hyper-parameters (except for small changes in the pruning hyper-parameters) and the same dataset as the baseline model. As we see in Table 3, the model with 2560 hidden units achieves a 0.75% relative improvement compared to the dense baseline model, while the model with 3072 hidden units has a 3.95% improvement. The dense 2560 model also improves the CER by 11.85% relative to the dense baseline model. The sparse 2560 model is about 12% worse than the corresponding dense model. Both of these large models are pruned to achieve a final sparsity of around 92%, and they have significantly fewer parameters than the baseline dense model.

We also compare our gradual pruning approach to the hard pruning approach proposed by Yu et al. (2012), in which all parameters below a certain threshold are pruned at a particular epoch. Table 4 shows the results of pruning the RNN dense baseline model at different epochs to achieve final parameter counts ranging from 8 million to 11 million. The network is trained for the same number of epochs as in the gradual pruning experiments. These hard-threshold results are compared with the RNN Sparse 1760 model in Table 3. For approximately the same number of parameters, gradual pruning is 7% to 9% better than hard pruning.
We conclude that pruning models to achieve sparsity of around 90% reduces the relative accuracy of the model by 10% to 20%. However, for a given performance requirement, it is better to prune a larger model than to use a smaller dense model. Gradually pruning a model produces better results than hard pruning.

Figure 1: Training and dev curves for baseline (dense) and sparse training. Figure 1a includes training and dev curves for models with larger recurrent layers with 2560 and 3072 hidden units compared to the 1760-unit dense baseline. Figure 1b plots the training and dev curves for GRU models (sparse and dense) with 2560 hidden units.
GATED RECURRENT UNITS
We also experimented with the GRU model shown in Table 5, which has 2560 hidden units in the GRU layers and a total of 115 million parameters. For these experiments, we prune all layers except the convolution layers, since they have relatively few parameters. Figure 1b compares the training and dev curves of a sparse GRU model and a dense GRU model. The sparse GRU model has a 13.8% drop in accuracy relative to the dense model. As shown in Table 3, the sparse model has an overall sparsity of 88.6% with 13 million parameters. Similar to the RNN models, we train a sparse GRU model with 3568 hidden units. The dataset and the hyper-parameters are not changed from the previous GRU experiments. This model has an overall sparsity of 91.82% with 17.8 million parameters. As shown in Table 3, the model with 3568 hidden units is only 2.2% worse than the baseline dense GRU model. We expect to match the performance of the dense GRU network by slightly lowering the sparsity of this network or by increasing the hidden units of the layers.
In addition, we experimented with pruning only the GRU layers and keeping all the parameters in the fully connected layers. The accuracy for these experiments is around 7% worse than the baseline dense model. However, this model only achieves 50% compression due to the size of the fully connected layers.

PERFORMANCE

COMPUTE TIME

The success of deep learning in recent years has been driven by large models trained on large datasets. However, this also increases the inference time after the models have been deployed. We can mitigate this effect by using sparse layers.
A General Matrix-Matrix Multiply (GEMM) is the most compute-intensive operation in evaluating a neural network model. Table 6 compares GEMM times for recurrent layers with different numbers of hidden units that are 95% sparse. The performance benchmark was run using NVIDIA's CUDNN and cuSPARSE libraries on a TitanX Maxwell GPU and compiled using CUDA 7.5. All experiments are run with a minibatch of 1, in which case the operation is known as a sparse matrix-vector product (SpMV). We can achieve speed-ups ranging from 3x to 1.15x depending on the size of the recurrent layer. Similarly, for the GRU models, the speed-ups range from 7x to 3.5x. However, we notice that cuSPARSE performance is substantially lower than the approximately 20x speedup that we would expect by comparing the bandwidth requirements of the 95% sparse and dense networks. State-of-the-art SpMV routines can achieve close to device memory bandwidth for a wide array of matrix shapes and sparsity patterns (see Baxter (2016) and Liu et al. (2013)). This means that the performance should improve by the factor by which parameter counts are reduced. Additionally, we find that the cuSPARSE performance degrades with larger batch sizes. It should be possible for a better implementation to further exploit the significant reuse of the weight matrix provided by large batch sizes.
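The gap between achieved and ideal speed-ups can be probed with a small SciPy experiment. This sketch uses scipy.sparse on the CPU rather than cuSPARSE on a GPU, so the absolute numbers will differ from Table 6, but the comparison structure (dense matvec vs. SpMV on a 95%-pruned matrix) is the same.

import time
import numpy as np
import scipy.sparse as sp

n, sparsity = 1760, 0.95
dense = np.random.randn(n, n).astype(np.float32)
dense[np.random.rand(n, n) < sparsity] = 0.0   # zero out 95% of the weights
sparse = sp.csr_matrix(dense)
x = np.random.randn(n).astype(np.float32)

def bench(f, reps=100):
    f(x)                                        # warm-up
    t0 = time.perf_counter()
    for _ in range(reps):
        f(x)
    return (time.perf_counter() - t0) / reps

t_dense = bench(lambda v: dense @ v)
t_sparse = bench(lambda v: sparse @ v)          # SpMV on the pruned weights
print("dense %.1f us, sparse %.1f us, speedup %.2fx"
      % (t_dense * 1e6, t_sparse * 1e6, t_dense / t_sparse))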
COMPRESSION
Pruning allows us to reduce the memory footprint of a model, which allows models to be deployed on phones and other embedded devices. Figure 2a shows the sparsity of all the recurrent layers when the same hyper-parameters are used to prune each layer. The layers are ordered such that layer 1 is closest to the input and layer 14 is the final recurrent layer before the cost layer. We see that the initial layers are pruned more aggressively than the final layers. We also performed experiments where the hyper-parameters differ across the recurrent layers, resulting in equal sparsity for all layers. However, we get a higher CER for these experiments. We conclude that, to get good accuracy, it is important to prune the final layers slightly less than the initial ones.

Figure 2: Pruning characteristics. Figure 2a plots the sparsity of the recurrent layers in the network with the same hyper-parameters used for pruning. Figure 2b plots the pruning schedule of a single layer during a training run.
In Figure 2b, we plot the pruning schedule of a 95% sparse recurrent layer of the bidirectional model trained for 20 epochs (55000 iterations). We begin pruning the network at the start of the second epoch, at 2700 iterations. We stop pruning a layer after 10 epochs (half the total epochs) are complete, at 27000 iterations. We see that nearly 25000 weights are pruned before 5 epochs are complete, at around 15000 iterations. In our experiments, we have noticed that pruning schedules with a convex curve tend to outperform schedules with a linear slope.
PERSISTENT KERNELS
Persistent Recurrent Neural Networks (Diamos et al., 2016) is a technique that increases the computational intensity of evaluating an RNN by caching the weights in on-chip memory such as caches, block RAM, or register files across multiple timesteps. A high degree of sparsity allows significantly large Persistent RNNs to be stored in on-chip memory. When all the weights are stored in float16, a NVIDIA P100 GPU can support a vanilla RNN size of about 2600 hidden units. With the same datatype, at 90% sparsity, and 99% sparsity, a P100 can support RNNs with about 8000, and 24000 hidden units respectively. We expect these kernels to be bandwidth limited out of the memory that is used to store the parameters. This offers the potential of a 146x speedup compared to the TitanX GPU if the entire RNN layer can be stored in registers rather than the GPU DRAM of a TitanX.
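The quoted capacities can be sanity-checked with back-of-the-envelope arithmetic. The sketch below only counts weight storage and derives the on-chip byte budget from the paper's own dense figure of about 2600 hidden units in float16 (2 bytes per weight, roughly h² recurrent weights); it ignores sparse-index overhead, which plausibly explains the small gap to the quoted numbers.

budget = 2 * 2600 ** 2        # bytes implied by the dense h = 2600 figure
for sparsity in (0.90, 0.99):
    h = int((budget / (2 * (1 - sparsity))) ** 0.5)
    print("%.0f%% sparse -> ~%d hidden units" % (100 * sparsity, h))
# Prints ~8222 and ~26000, consistent with the quoted ~8000 and ~24000.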
Additionally, sparse matrix multiplication involves scheduling and load balancing phases to divide the work up evenly over thousands of threads and to route corresponding weights and activations to individual threads. Since the sparsity patterns for RNNs are fixed over many timesteps these scheduling and load balancing operations can be factored outside of the loop, performed once, and reused many times.
CONCLUSION AND FUTURE WORK
We have demonstrated that by pruning the weights of RNNs during training we can find sparse models that are more accurate than dense models while significantly reducing model size. These sparse models are especially suited for deployment on mobile devices and on back-end server farms due to their small size and increased computational efficiency. Even with existing sub-optimal sparse matrix-vector libraries we realize speed-ups with these models. This technique is orthogonal to quantization techniques which would allow for even further reductions in model size and corresponding increase in performance.
We wish to investigate whether these techniques can generalize to language modeling tasks and if they can effectively reduce the size of embedding layers. We also wish to compare the sparsity generated by our pruning technique to that obtained by L1 regularization.
We are investigating training techniques that do not require maintaining dense matrices for a significant portion of the calculation. Further work remains to implement an optimal small-batch sparse matrix-dense vector routine for GPUs and ARM processors, which would help in deployment.
Table 1: Hyper-parameters used for determining the threshold (ε)

HYPER-PARAM      | DESCRIPTION                                    | HEURISTIC VALUES
start_itr        | Iteration to start pruning                     | Start of second epoch
ramp_itr         | Iteration to increase the rate of pruning      | Start of 25% of total epochs
end_itr          | Iteration to stop pruning more parameters      | Start of 50% of total epochs
start_slope (θ)  | Initial slope to prune the weights             | See Equation 1
ramp_slope (φ)   | Ramp slope to change the rate of pruning       | 1.5θ to 2θ
freq             | Number of iterations after which ε is updated  | 100
Table 2: Deep Speech 2 architecture with 1760 hidden units

LAYER ID | TYPE                           | # PARAMS
layer 0  | 2D Convolution                 | 19616
layer 1  | 2D Convolution                 | 239168
layer 2  | Bidirectional Recurrent Linear | 8507840
layer 3  | Bidirectional Recurrent Linear | 9296320
layer 4  | Bidirectional Recurrent Linear | 9296320
layer 5  | Bidirectional Recurrent Linear | 9296320
layer 6  | Bidirectional Recurrent Linear | 9296320
layer 7  | Bidirectional Recurrent Linear | 9296320
layer 8  | Bidirectional Recurrent Linear | 9296320
layer 9  | FullyConnected                 | 3101120
layer 10 | CTCCost                        | 95054
Table 3: GRU & bidirectional RNN model results

MODEL              | # UNITS | CER   | # PARAMS     | RELATIVE PERF
RNN Dense Baseline | 1760    | 10.67 | 67 million   | 0.0%
RNN Dense Small    | 704     | 14.50 | 11.6 million | -35.89%
RNN Dense Medium   | 2560    | 9.43  | 141 million  | 11.85%
RNN Sparse 1760    | 1760    | 12.88 | 8.3 million  | -20.71%
RNN Sparse Medium  | 2560    | 10.59 | 11.1 million | 0.75%
RNN Sparse Big     | 3072    | 10.25 | 16.7 million | 3.95%
GRU Dense          | 2560    | 9.55  | 115 million  | 0.0%
GRU Sparse         | 2560    | 10.87 | 13 million   | -13.82%
GRU Sparse Medium  | 3568    | 9.76  | 17.8 million | -2.20%
Table 4: RNN dense baseline model with hard pruning

# UNITS | PRUNED EPOCH | CER   | # PARAMS    | RELATIVE PERF
1760    | 5            | 13.82 | 8 million   | -29.52%
1760    | 7            | 13.27 | 11 million  | -24.37%
1760    | 10           | 13.41 | 8.4 million | -25.68%
1760    | 12           | 13.63 | 8 million   | -27.74%
1760    | 15           | 26.33 | 9.2 million | -146.77%
Table 5: Gated recurrent units model

LAYER ID | TYPE                   | # PARAMS
layer 0  | 2D Convolution         | 19616
layer 1  | 2D Convolution         | 239168
layer 2  | Gated Recurrent Linear | 29752320
layer 3  | Gated Recurrent Linear | 39336960
layer 4  | Gated Recurrent Linear | 39336960
layer 5  | Row Convolution        | 107520
layer 6  | FullyConnected         | 6558720
layer 7  | CTCCost                | 74269
Table 6: GEMM times for recurrent layers with different sparsity

LAYER SIZE | SPARSITY | LAYER TYPE | TIME (µsec) | SPEEDUP
1760       | 0%       | RNN        | 56          | 1
1760       | 95%      | RNN        | 20          | 2.8
2560       | 95%      | RNN        | 29          | 1.93
3072       | 95%      | RNN        | 48          | 1.16
2560       | 0%       | GRU        | 313         | 1
2560       | 95%      | GRU        | 46          | 6.80
3568       | 95%      | GRU        | 89          | 3.5
ACKNOWLEDGMENTS

We would like to thank Bryan Catanzaro for helpful discussions related to this work.
REFERENCES

Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep Speech 2: End-to-end speech recognition in English and Mandarin. arXiv preprint arXiv:1512.02595, 2015.

Sean Baxter. ModernGPU, 2016. URL https://nvlabs.github.io/moderngpu/segreduce.html.

Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. CoRR, abs/1512.04906, 2015a.

Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015b.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. CoRR, abs/1306.0543, 2013.

Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. CoRR, abs/1404.0736, 2014.

Greg Diamos, Shubho Sengupta, Bryan Catanzaro, Mike Chrzanowski, Adam Coates, Erich Elsen, Jesse Engel, Awni Hannun, and Sanjeev Satheesh. Persistent RNNs: Stashing recurrent weights on-chip. In Proceedings of the 33rd International Conference on Machine Learning, pp. 2024-2033, 2016.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir D. Bourdev. Compressing deep convolutional networks using vector quantization. CoRR, abs/1412.6115, 2014.

Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In ICML, volume 14, pp. 1764-1772, 2014.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2015.

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.

Stephen José Hanson and Lorien Pratt. Comparing biases for minimal network construction with back-propagation. In Advances in Neural Information Processing Systems 1, pp. 177-185, 1989.

Babak Hassibi, David G. Stork, and Gregory J. Wolff. Optimal brain surgeon and general network pruning. In IEEE International Conference on Neural Networks, pp. 293-299. IEEE, 1993.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. CoRR, abs/1405.3866, 2014.

Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016.

Yann LeCun, John S. Denker, Sara A. Solla, Richard E. Howard, and Lawrence D. Jackel. Optimal brain damage. In NIPS, volume 2, pp. 598-605, 1989.

Xing Liu, Mikhail Smelyanskiy, Edmond Chow, and Pradeep Dubey. Efficient sparse matrix-vector multiplication on x86-based many-core processors. In Proceedings of the 27th International ACM Conference on Supercomputing (ICS '13), pp. 273-282. ACM, 2013.

Zhiyun Lu, Vikas Sindhwani, and Tara N. Sainath. Learning compact recurrent neural networks. CoRR, abs/1604.02594, 2016.

Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on CPUs. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016.

Dong Yu, Frank Seide, Gang Li, and Li Deng. Exploiting sparseness in deep neural networks for large vocabulary speech recognition. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4409-4412. IEEE, 2012.
247,594,724 | MONOTONIC DIFFERENTIABLE SORTING NETWORKS | Differentiable sorting algorithms allow training with sorting and ranking supervision, where only the ordering or ranking of samples is known. Various methods have been proposed to address this challenge, ranging from optimal transport-based differentiable Sinkhorn sorting algorithms to making classic sorting networks differentiable. One problem of current differentiable sorting methods is that they are non-monotonic. To address this issue, we propose a novel relaxation of conditional swap operations that guarantees monotonicity in differentiable sorting networks. We introduce a family of sigmoid functions and prove that they produce differentiable sorting networks that are monotonic. Monotonicity ensures that the gradients always have the correct sign, which is an advantage in gradient-based optimization. We demonstrate that monotonic differentiable sorting networks improve upon previous differentiable sorting methods. | [
3515469,
67915085
] | MONOTONIC DIFFERENTIABLE SORTING NETWORKS
Felix Petersen [email protected]
University of Konstanz
Christian Borgelt
University of Salzburg
Hilde Kuehne
University of Frankfurt
IBM-MIT Watson AI Lab
Oliver Deussen
University of Konstanz
MONOTONIC DIFFERENTIABLE SORTING NETWORKS
Published as a conference paper at ICLR 2022
Differentiable sorting algorithms allow training with sorting and ranking supervision, where only the ordering or ranking of samples is known. Various methods have been proposed to address this challenge, ranging from optimal transport-based differentiable Sinkhorn sorting algorithms to making classic sorting networks differentiable. One problem of current differentiable sorting methods is that they are non-monotonic. To address this issue, we propose a novel relaxation of conditional swap operations that guarantees monotonicity in differentiable sorting networks. We introduce a family of sigmoid functions and prove that they produce differentiable sorting networks that are monotonic. Monotonicity ensures that the gradients always have the correct sign, which is an advantage in gradient-based optimization. We demonstrate that monotonic differentiable sorting networks improve upon previous differentiable sorting methods.
INTRODUCTION
Recently, the idea of end-to-end training of neural networks with ordering supervision via continuous relaxation of the sorting function has been presented by Grover et al. [1]. The idea of ordering supervision is that the ground truth order of some samples is known while their absolute values remain unsupervised. This is done by integrating a sorting algorithm in the neural architecture. As the error needs to be propagated in a meaningful way back to the neural network when training with a sorting algorithm in the architecture, it is necessary to use a differentiable sorting function. Several such differentiable sorting functions have been introduced, e.g., by Grover et al. [1], Cuturi et al. [2], Blondel et al. [3], and Petersen et al. [4]. In this work, we focus on analyzing differentiable sorting functions [1]- [4] and demonstrate how monotonicity improves differentiable sorting networks [4].
Sorting networks are a family of sorting algorithms that consist of two basic components: so called "wires" (or "lanes") carrying values, and conditional swap operations that connect pairs of wires [5]. An example of such a sorting network is shown in the center of Figure 1. The conditional swap operations swap the values carried by these wires if they are not in the desired order. They allow for fast hardware-implementation, e.g., in ASICs, as well as on highly parallelized general-purpose hardware like GPUs. Differentiable sorting networks [4] continuously relax the conditional swap operations by relaxing their step function to a logistic sigmoid function.
One problem that arises in this context is that using a logistic sigmoid function does not preserve monotonicity of the relaxed sorting operation, which can cause gradients with the wrong sign. In this work, we present a family of sigmoid functions that preserve monotonicity of differentiable sorting networks. These include the cumulative density function (CDF) of the Cauchy distribution, as well as a function that minimizes the error-bound and thus induces the smallest possible approximation error. For all sigmoid functions, we prove and visualize the respective properties and validate their advantages empirically. In fact, by making the sorting function monotonic, it also becomes quasiconvex, which has been shown to produce favorable convergence rates [6]. In Figure 2, we demonstrate monotonicity for different choices of sigmoid functions. As can be seen in Figure 4, existing differentiable sorting operators are either non-monotonic or have an unbounded error.
Following recent work [1], [2], [4], we benchmark our continuous relaxations by predicting values displayed on four-digit MNIST images [7] supervised only by their ground truth order. The evaluation shows that our method outperforms existing relaxations of the sorting function on the four-digit MNIST ordering task as well as the SVHN ranking task.

Figure 1: The architecture for training with ordering supervision. Left: input values are fed separately into a Convolutional Neural Network (CNN) that has the same weights for all instances. The CNN maps these values to scalar values a_0, ..., a_5. Center: the odd-even sorting network sorts the scalars by parallel conditional swap operations such that all inputs can be propagated to their correct ordered position. Right: it produces a differentiable permutation matrix P. In this experiment, the training objective is the cross-entropy between P and the ground truth permutation matrix Q. By propagating the error backward through the sorting network, we can train the CNN.
Contributions. In this work, we show that sigmoid functions with specific characteristics produce monotonic and error-bounded differentiable sorting networks. We provide theoretical guarantees for these functions and also give the monotonic function that minimizes the approximation error. We empirically demonstrate that the proposed functions improve performance.
RELATED WORK
Recently, differentiable approximations of the sorting function for weak supervision were introduced by Grover et al. [1], Cuturi et al. [2], Blondel et al. [3], and Petersen et al. [4].
In 2019, Grover et al. [1] proposed NeuralSort, a continuous relaxation of the argsort operator. A (hard) permutation matrix is a square matrix with entries 0 and 1 such that every row and every column sums up to 1, which defines the permutation necessary to sort a sequence. Grover et al. relax hard permutation matrices by approximating them as unimodal row-stochastic matrices. This relaxation allows for gradient-based stochastic optimization. On various tasks, including sorting four-digit MNIST numbers, they benchmark their relaxation against the Sinkhorn and Gumbel-Sinkhorn approaches proposed by Mena et al. [8].
Cuturi et al. [2] follow this idea and approach differentiable sorting by smoothed ranking and sorting operators using optimal transport. As the optimal transport problem alone is costly, they regularize it and solve it using the Sinkhorn algorithm [9]. By relaxing the permutation matrix, which sorts a sequence of scalars, they also train a scalar predictor of values displayed by four-digit numbers while supervising their relative order only.
Blondel et al. [3] cast the problem of sorting and ranking as a linear program over a permutahedron. To smooth the resulting discontinuous function and provide useful derivatives, they introduce a strongly convex regularization. They evaluate the proposed approach in the context of top-k classification and label ranking accuracy via a soft Spearman's rank correlation coefficient.
Recently, Petersen et al. [4] proposed differentiable sorting networks, a differentiable sorting operator based on sorting networks with differentiably relaxed conditional swap operations. Differentiable sorting networks achieved a new state-of-the-art on both the four-digit MNIST sorting benchmark and the SVHN sorting benchmark. Petersen et al. [10] also proposed a general method for continuously relaxing algorithms via logistic distributions. They apply it, i.a., to the bubble sort algorithm and benchmark it on the MNIST sorting benchmark.
Applications and Broader Impact. In the domain of recommender systems, Lee et al. [11] propose differentiable ranking metrics, and Swezey et al. [12] propose PiRank, a learning-to-rank method using differentiable sorting. Other works explore differentiable sorting-based top-k for applications such as differentiable image patch selection [13], differentiable k-nearest-neighbor [1], [14], top-k attention for machine translation [14], and differentiable beam search methods [14], [15].
BACKGROUND: SORTING NETWORKS
Sorting networks have a long tradition in computer science since the 1950s [5]. They are highly parallel data-oblivious sorting algorithms. They are based on so-called conditional pairwise swap operators that map two inputs to two outputs and ensure that these outputs are in a specific order. This is achieved by simply passing through the inputs if they are already in the desired order and swapping them otherwise. The order of executing conditional swaps is independent of the input values, which makes them data-oblivious. Conditional swap operators can be implemented using only min and max. That is, if the inputs are a and b and the outputs are a′ and b′, a swap operator ensures a′ ≤ b′ and can easily be formalized as a′ = min(a, b) and b′ = max(a, b). Examples of sorting networks are the odd-even network [16], which alternatingly swaps odd and even wires with their successors, and the bitonic network [17], which repeatedly merges and sorts bitonic sequences. While the odd-even network requires n layers, the bitonic network uses the divide-and-conquer principle to sort within only (log₂ n)(1 + log₂ n)/2 layers.
Note that, while they are similar in name, sorting networks are not neural networks that sort.
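For intuition, the wiring of an odd-even network and the layer counts of both networks can be generated in a few lines of Python. The pair enumeration below is a straightforward rendering of the textbook constructions described above, not code from either paper.

import math

def odd_even_layers(n):
    # Swap pairs per layer of an odd-even transposition network (n layers).
    return [[(i, i + 1) for i in range(layer % 2, n - 1, 2)]
            for layer in range(n)]

def bitonic_num_layers(n):
    # (log2 n)(1 + log2 n)/2 layers, for n a power of two.
    k = int(math.log2(n))
    return k * (k + 1) // 2

print(odd_even_layers(4))      # [[(0, 1), (2, 3)], [(1, 2)], [(0, 1), (2, 3)], [(1, 2)]]
print(bitonic_num_layers(32))  # 15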
DIFFERENTIABLE SORTING NETWORKS
In the following, we recapitulate the core concepts of differentiable sorting networks [4]. An example of an odd-even sorting network is shown in the center of Figure 1. Here, odd and even neighbors are conditionally swapped until the entire sequence is sorted. Each conditional swap operation can be defined in terms of min and max as detailed above. These operators can be relaxed to differentiable min and max . Note that we denote the differentiable relaxations in italic font and their hard counterparts in roman font. Note that the differentiable relaxations min and max are different from the commonly used softmin and softmax, which are relaxations of argmin and argmax [18].
One example for such a relaxation of min and max is the logistic relaxation
min_σ(a, b) = a · σ(b − a) + b · σ(a − b)    and    max_σ(a, b) = a · σ(a − b) + b · σ(b − a)    (1)
where σ is the logistic sigmoid function with inverse temperature β > 0:
σ : x ↦ 1 / (1 + e^(−βx)).    (2)
Any layer of a sorting network can also be represented as a relaxed and doubly-stochastic permutation matrix. Multiplying these (layer-wise) permutation matrices yields a (relaxed) total permutation matrix P. Multiplying P with an input x yields the differentiably sorted vector x̂ = Px, which is also the output of the differentiable sorting network. Whether it is necessary to compute P, or whether x̂ suffices, depends on the specific application. For example, for a cross-entropy ranking / sorting loss as used in the experiments in Section 6, P can be used to compute the cross-entropy to a ground truth permutation matrix Q.
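To make the construction concrete, below is a minimal NumPy sketch (not the official implementation) of a differentiable odd-even sorting network that applies Equation 1 pairwise and accumulates the layer-wise relaxed permutation matrices into P; the inverse temperature β = 2 is an arbitrary illustrative choice.

import numpy as np

def logistic(x, beta=2.0):
    return 1.0 / (1.0 + np.exp(-beta * x))

def diffsort_odd_even(x, f=logistic):
    # Differentiably sort x; returns (softly sorted values, relaxed matrix P).
    x = np.asarray(x, dtype=np.float64).copy()
    n = len(x)
    P = np.eye(n)
    for layer in range(n):
        P_layer = np.eye(n)
        for i in range(layer % 2, n - 1, 2):
            a, b = x[i], x[i + 1]
            v = f(b - a)                       # prob. that the pair is in order
            x[i]     = a * v + b * (1 - v)     # relaxed min
            x[i + 1] = a * (1 - v) + b * v     # relaxed max
            P_layer[i, i] = P_layer[i + 1, i + 1] = v
            P_layer[i, i + 1] = P_layer[i + 1, i] = 1 - v
        P = P_layer @ P                        # accumulate layer permutations
    return x, P

x_hat, P = diffsort_odd_even([0.3, -1.2, 0.9])
print(x_hat)                  # approximately sorted ascending
print(P.sum(0), P.sum(1))     # rows and columns sum to 1 (doubly stochastic)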
In the next section, we build on these concepts to introduce monotonic differentiable sorting networks, i.e., all differentiably sorted outputs x̂ are non-decreasingly monotonic in all inputs x.
MONOTONIC DIFFERENTIABLE SORTING NETWORKS
In this section, we start by introducing definitions and building theorems upon them. In Section 4.2, we use these definitions and properties to discuss different relaxations of sorting networks.
THEORY
We start by defining sigmoid functions and will then use them to define continuous conditional swaps.
Definition 1 (Sigmoid Function). We define a (unipolar) sigmoid (i.e., s-shaped) function as a continuous monotonically non-decreasing odd-symmetric (around (0, 1/2)) function f : R → [0, 1] with lim_{x→−∞} f(x) = 0 and lim_{x→∞} f(x) = 1.

Definition 2 (Continuous Conditional Swap). Given a sigmoid function f, we define the continuous conditional swap of two inputs a, b as

min_f(a, b) = a · f(b − a) + b · f(a − b),    max_f(a, b) = a · f(a − b) + b · f(b − a),    (3)
argmin_f(a, b) = (f(b − a), f(a − b)),    argmax_f(a, b) = (f(a − b), f(b − a)).    (4)
We require a continuous odd-symmetric sigmoid function to preserve most of the properties of min and max while also making argmin and argmax continuous as shown in Supplementary Material B. In the following, we establish doubly-stochasticity and differentiability of P, which are important properties for differentiable sorting and ranking operators. Lemma 3 (Doubly-Stochasticity and Differentiability of P). (i) The relaxed permutation matrix P, produced by a differentiable sorting network, is doubly-stochastic. (ii) P has the same differentiability as f , e.g., if f is continuously differentiable in the input, P will be continuously differentiable in the input to the sorting network. If f is differentiable almost everywhere (a.e.), P will be diff. a.e.
Proof. (i) For each conditional swap between two elements i, j, the relaxed permutation matrix is 1 at the diagonal except for rows i and j: at points i, i and j, j the value is v ∈ [0, 1], at points i, j and j, i the value is 1 − v and all other entries are 0. This is doubly-stochastic as all rows and columns add up to 1 by construction. As the product of doubly-stochastic matrices is doubly-stochastic, the relaxed permutation matrix P, produced by a differentiable sorting network, is doubly-stochastic.
(ii) The composition of differentiable functions is differentiable and the addition and multiplication of differentiable functions is also differentiable. Thus, a sorting network is differentiable if the employed sigmoid function is differentiable. "Differentiable" may be replaced with any other form of differentiability, such as "differentiable a.e."
Now that we have established the ingredients of differentiable sorting networks, we can focus on the monotonicity of differentiable sorting networks. Definition 4 (Monotonic Continuous Conditional Swaps). We say f produces monotonic conditional swaps if min_f(x, 0) is non-decreasingly monotonic in x, i.e., ∂/∂x min_f(x, 0) ≥ 0 for all x.
It is sufficient to define it w.l.o.g. in terms of min_f(x, 0) due to the commutativity, stability, and odd-symmetry of the operators (cf. Supplementary Material B). Theorem 5 (Monotonicity of Continuous Conditional Swaps). A continuous conditional swap (in terms of a differentiable sigmoid function f) being non-decreasingly monotonic in all arguments and outputs requires that the derivative of f decays no faster than 1/x², i.e.,

f′(x) ∈ Ω(1/x²).    (5)
Proof. We show that Equation 5 is a necessary criterion for monotonicity of the conditional swap. Because f is a continuous sigmoid function with f : R → [0, 1], min_f(x, 0) = f(−x) · x > 0 for some x > 0. Thus, monotonicity of min_f(x, 0) implies lim sup_{x→∞} min_f(x, 0) > 0 (otherwise the value would decrease again from a value > 0). Thus,

lim_{x→∞} min_f(x, 0) = lim_{x→∞} f(−x) · x = lim_{x→∞} f(−x) / (1/x) = lim_{x→∞} −f′(−x) / (−1/x²)    (L'Hôpital's rule)    (6)
= lim_{x→∞} f′(−x) / (1/x²) = lim_{x→∞} f′(x) / (1/x²) = lim sup_{x→∞} f′(x) / (1/x²) > 0  ⟺  f′(x) ∈ Ω(1/x²),    (7)

assuming lim_{x→∞} f′(x) / (1/x²) exists. Otherwise, it can be proven analogously via a proof by contradiction.
Corollary 6 (Monotonic Sorting Networks). If the individual conditional swaps of a sorting network are monotonic, the sorting network is also monotonic.
Proof. If single layers g, h are non-decreasingly monotonic in all arguments and outputs, their composition h • g is also non-decreasingly monotonic in all arguments and outputs. Thus, a network of arbitrarily many layers is non-decreasingly monotonic.
Above, we formalized the property of monotonicity. Another important aspect is whether the error of the differentiable sorting network is bounded. It is very desirable to have a bounded error because without bounded errors the result of the differentiable sorting network diverges from the result of the hard sorting function. Minimizing this error is desirable.
Definition 7 (Error-Bounded Continuous Conditional Swaps). A continuous conditional swap has a bounded error if and only if sup_x min_f(x, 0) = c is finite. The continuous conditional swap is therefore said to have an error bounded by c.

It is sufficient to define it w.l.o.g. in terms of min_f(x, 0) due to the commutativity, stability, and odd-symmetry of the operators (cf. Supplementary Material B). In general, for better comparability between functions, we assume a Lipschitz continuous function f with Lipschitz constant 1.
Theorem 8 (Error-Bounds of Continuous Conditional Swaps). (i) A differentiable continuous conditional swap has a bounded error if

f′(x) ∈ O(1/x²).    (8)

(ii) If it is additionally monotonic, the error-bound can be found as lim_{x→∞} min_f(x, 0), and additionally the error is bounded only if Equation 8 holds.
Proof. (i) W.l.o.g. we consider x > 0. Let g(z) := f(−1/z), g(0) = 0. Thus, g′(z) = 1/z² · f′(−1/z) ≤ c according to Equation 8. Thus, g(z) = g(0) + ∫₀^z g′(t) dt ≤ c · z. Therefore, f(−1/z) ≤ c · z ⟹ 1/z · f(−1/z) ≤ c, and with x = 1/z ⟹ x · f(−x) = min_f(x, 0) ≤ c.

(ii) Let min_f(x, 0) be monotonic and bounded by min_f(x, 0) ≤ c. For x > 0 and h(x) := min_f(x, 0),

h′(x) = −x · f′(−x) + f(−x)  ⟹  x² f′(−x) = −x h′(x) + x · f(−x) ≤ x · f(−x) ≤ c,    (9)

where −x h′(x) ≤ 0 by monotonicity. And thus f′(x) ∈ O(1/x²).
Theorem 9 (Error-Bounds of Diff. Sorting Networks). If the error of individual conditional swaps of a sorting network is bounded by ε and the network has ℓ layers, the total error is bounded by ℓ · ε.
Proof. For the proof, cf. Supplementary Material D.
Discussion. Monotonicity is highly desirable as otherwise adverse effects such as an input requiring to be decreased to increase the output can occur. In gradient-based training, non-monotonicity is problematic as it produces gradients with the opposite sign. In addition, as monotonicity is also given in hard sorting networks, it is desirable to preserve this property in the relaxation. Further, monotonic differentiable sorting networks are quasiconvex and quasiconcave, as any monotonic function is both quasiconvex and quasiconcave, which leads to favorable convergence rates [6]. Bounding and reducing the deviation from the hard counterpart reduces the relaxation error, and is thus desirable.
SIGMOID FUNCTIONS
Above, we have specified the space of functions for the differentiable swap operation, as well as their desirable properties. In the following, we discuss four notable candidates as well as their properties. The properties of these functions are visualized in Figures 2 and 3 and an overview over their properties is given in Table 1.
Logistic distributions. The first candidate is the logistic sigmoid function (the CDF of a logistic distribution) as proposed in [4]:
σ(x) = CDF_L(βx) = 1 / (1 + e^(−βx))    (10)
This function is the de-facto default sigmoid function in machine learning. It provides a continuous, error-bounded, and Lipschitz continuous conditional swap. However, for the logistic function, monotonicity is not given, as displayed in Figure 2.

Table 1: For each function, we display the function, its derivative, and indicate whether the respective relaxed sorting network is monotonic and has a bounded error.
Function | Eq.  | Monotonic | Bounded error (relative to the Lipschitz constant α)
σ        | (10) | no        | yes (≈ 0.0696/α)
f_R      | (11) | yes       | yes (1/4/α)
f_C      | (12) | yes       | yes (1/π²/α)
f_O      | (13) | yes       | yes (1/16/α)
(The plotted CDF and PDF columns of the original table are omitted here.)
Reciprocal Sigmoid Function. To obtain a function that yields a monotonic as well as error-bounded differentiable sorting network, a necessary criterion is f′(x) ∈ Θ(1/x²) (the intersection of Equations 5 and 8). A natural choice is, therefore, f′_R(x) = β / (2β|x| + 1)², which produces

f_R(x) = ∫_{−∞}^{x} β / (2β|t| + 1)² dt = (1/2) · 2βx / (1 + 2β|x|) + 1/2.    (11)
f_R fulfills all criteria, i.e., it is an adequate sigmoid function and produces monotonic and error-bounded conditional swaps. It has an ε-bounded error of ε = 0.25. It is also an affine transformation of the elementary bipolar sigmoid function x ↦ x / (|x| + 1). Properties of this function are visualized in Table 1 and Figures 2 and 3. Proofs for monotonicity can be found in Supplementary Material D.
Cauchy distributions. By using the CDF of the Cauchy distribution, we maintain monotonicity while reducing the error-bound to ε = 1/π² ≈ 0.101. It is defined as

f_C(x) = CDF_C(βx) = (1/π) ∫_{−∞}^{x} β / (1 + (βt)²) dt = (1/π) arctan(βx) + 1/2.    (12)
In the experimental evaluation, we find that tightening the error bound improves performance.
Optimal Monotonic Sigmoid Function. At this point, we are interested in the monotonic swap operation that minimizes the error-bound. Here, we set 1-Lipschitz continuity again as a requirement to make different relaxations of conditional swaps comparable. We show that f_O is the best possible sigmoid function, achieving an error-bound of only ε = 1/16.

Theorem 10 (Optimal Sigmoid Function). The optimal sigmoid function minimizing the error-bound, while producing a monotonic and 1-Lipschitz continuous (with β = 1) conditional swap operation, is
f_O(x) = −1/(16βx)       if βx < −1/4,
         1 − 1/(16βx)    if βx > +1/4,
         βx + 1/2        otherwise.    (13)
Proof. Given the above conditions, the optimal sigmoid function is uniquely determined and can easily be derived as follows: Due to stability, it suffices to consider min_f(x, 0) = x · f(−x) or max_f(0, x) = x · f(x). Due to symmetry and inversion, it suffices to consider min_f(x, 0) = x · f(−x) for x > 0.

Since min(x, 0) = 0 for x > 0, we have to choose f in such a way as to make min_f(x, 0) = x · f(−x) as small as possible, but not negative. For this, f(−x) must be made as small as possible. Since we know that f(0) = 1/2 and we are limited to functions f that are Lipschitz continuous with α = 1, f(−x) cannot be made smaller than 1/2 − x, and hence min_f(x, 0) cannot be made smaller than x · (1/2 − x).

To make min_f(x, 0) as small as possible, we have to follow x · (1/2 − x) as far as possible (i.e., to values x as large as possible). Monotonicity requires that this function can be followed only up to x = 1/4, at which point we have min_f(1/4, 0) = 1/4 · (1/2 − 1/4) = 1/16. For larger x, that is, for x > 1/4, the value of x · (1/2 − x) decreases again, and hence the functional form of the sigmoid function f has to change at x = 1/4 to remain monotonic.

The best that can be achieved for x > 1/4 is to make it constant, as it must not decrease (due to monotonicity) and should not increase (to minimize the deviation from the crisp / hard version). That is, min_f(x, 0) = 1/16 for x > 1/4. It follows x · f(−x) = 1/16 and hence f(−x) = 1/(16x) for x > 1/4. Note that, if the transition from the linear part to the hyperbolic part were at |x| < 1/4, the function would not be Lipschitz continuous with α = 1.
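The four candidates are easy to implement and to spot-check numerically. The sketch below, a rough illustration with β = 1 rather than the authors' code, evaluates min_f(x, 0) = x · f(−x) on a grid and reports monotonicity and the observed error plateau; with β = 1 the plateaus are the unnormalized bounds (e.g., 1/π for f_C), whereas Table 1 reports them relative to the Lipschitz constant α.

import numpy as np

def sigmoid(x):        # logistic CDF, Eq. (10)
    return 1.0 / (1.0 + np.exp(-x))

def f_R(x):            # reciprocal sigmoid, Eq. (11)
    return 0.5 * (2 * x / (1 + 2 * np.abs(x))) + 0.5

def f_C(x):            # Cauchy CDF, Eq. (12)
    return np.arctan(x) / np.pi + 0.5

def f_O(x):            # optimal monotonic sigmoid, Eq. (13)
    x = np.asarray(x, dtype=np.float64)
    safe = np.where(np.abs(x) > 0.25, x, 1.0)   # avoids div-by-zero off-branch
    return np.where(x < -0.25, -1.0 / (16 * safe),
                    np.where(x > 0.25, 1.0 - 1.0 / (16 * safe), x + 0.5))

xs = np.linspace(0.0, 50.0, 200001)
for name, f in [("logistic", sigmoid), ("f_R", f_R), ("f_C", f_C), ("f_O", f_O)]:
    m = xs * f(-xs)                              # min_f(x, 0)
    print(name, "monotonic:", bool(np.all(np.diff(m) >= -1e-12)),
          " max deviation: %.4f" % m.max())
# Expected: logistic is non-monotonic (peak ~0.2785); f_R, f_C, f_O are
# monotonic with plateaus 1/4, 1/pi ~ 0.3183, and 1/16, respectively.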
An overview of the selection of sigmoid functions we consider is shown in Table 1. Note how f_R, f_C and f_O, in this order, get closer to x + 1/2 (the gray diagonal line) and hence steeper in their middle part. This is reflected by a widening region of values of the derivatives that are close to or even equal to 1. Table 1 also indicates whether a sigmoid function yields a monotonic swap operation or not, which is visualized in Figure 2: clearly, σ-based sorting networks are not monotonic, while all others are. It also states whether the error is bounded, which for a monotonic swap operation means lim_{x→∞} min_f(x, 0) < ∞, and gives their bound relative to the Lipschitz constant α.

Figure 2: min_f(x, 0) for different sigmoid functions f; color coding as in Table 1.

Figure 3 displays the loss for a sorting network with n = 3 inputs. We project the hexagon-shaped 3-value permutahedron onto the x-y-plane, while the z-axis indicates the loss. Note that, at the rightmost point (1, 2, 3), the loss is 0 because all elements are in the correct order, while at the left front (2, 3, 1) and rear (3, 1, 2) the loss is at its maximum because all elements are at the wrong positions. Along the red center line, the loss rises logarithmically for the optimal sigmoid function on the right. Note that the monotonic sigmoid functions produce a loss that is larger when more elements are in the wrong order. For the logistic function, (3, 2, 1) has the same loss as (2, 3, 1), even though one of the ranks is correct at (3, 2, 1), while for (2, 3, 1) all three ranks are incorrect. For the special case of n = 2, i.e., for sorting two elements, NeuralSort [1] and Relaxed Bubble sort [10] are equivalent to differentiable sorting networks with the logistic sigmoid function. Thus, it is non-monotonic, as displayed in Figure 4.

Figure 3: Loss for a 3-wire odd-even sorting network, drawn over a permutahedron projected onto the x-y-plane. For the logistic sigmoid (left) and the optimal sigmoid (right).
MONOTONICITY OF OTHER DIFFERENTIABLE SORTING OPERATORS
For the Sinkhorn sorting algorithm [2], we can simply construct an example of non-monotonicity by keeping one value fixed, e.g., at zero, and varying the second value (x) as in Figure 4 and displaying the minimum. Notably, for the case of n = 2, this function is numerically equal to NeuralSort and differentiable sorting networks with the logistic function.
For fast sort [3], we follow the same principle and find that it is indeed monotonic (in this example); however, the error is unbounded, which is undesirable.
For differentiable sorting networks, Petersen et al. [4] proposed to extend the sigmoid function by the activation replacement trick, which avoids extensive blurring as well as vanishing gradients. They apply the activation replacement trick ϕ before feeding the values into the logistic sigmoid function; thus, the sigmoid function is effectively σ ∘ ϕ. Here, ϕ : x ↦ x / (|x|^λ + ε), where λ ∈ [0, 1] and ε ≈ 10⁻¹⁰. Here, the asymptotic character of σ ∘ ϕ does not fulfill the requirement set by Theorem 5, and it is thereby non-monotonic, as also displayed in Figure 4 (purple).
We summarize monotonicity and error-boundness for all differentiable sorting functions in Table 2.
EMPIRICAL EVALUATION
To evaluate the properties of the proposed function as well as their practical impact in the context of sorting supervision, we evaluate them with respect to two standard benchmark datasets. The MNIST sorting dataset [1]- [4] consists of images of numbers from 0000 to 9999 composed of four MNIST digits [7]. Here, the task is training a network to produce a scalar output value for each image such that the ordering of the outputs follows the respective ordering of the images. Specifically, the metrics here are the proportion of full rankings correctly identified, and the proportion of individual element ranks correctly identified [1]. The same task can also be extended to the more realistic SVHN [19] dataset with the difference that the images are already multi-digit numbers as shown in [4].
Comparison to the State-of-the-Art. We first compare the proposed functions to other state-ofthe-art approaches using the same network architecture and training setup as used in previous works, as well as among themselves. The respective hyperparameters for each setting can be found in Supplementary Material A. We report the results in Table 3. The proposed monotonic differentiable sorting networks outperform current state-of-the-art methods by a considerable margin. Especially for those cases where more samples needed to be sorted, the gap between monotonic sorting nets and other techniques grows with larger n. The computational complexity of the proposed method depends on the employed sorting network architecture leading to a complexity of O(n 3 ) for odd-even networks and a complexity of O(n 2 log 2 n) for bitonic networks because all of the employed sigmoid functions can be computed in closed form. This leads to the same runtime as in [4].
Comparing the three proposed functions among themselves, we observe that for odd-even networks on MNIST, the error-optimal function f O performs best. This is because here the approximation error is small. However, for the more complex bitonic sorting networks, f C (Cauchy) performs better than f O . This is because f O does not provide a higher-order smoothness and is only C 1 smooth, while the Cauchy function f C is analytic and C ∞ smooth. Table 3: Results on the four-digit MNIST and SVHN tasks using the same architecture as previous works [1]- [4]. The metric is the proportion of rankings correctly identified, and the value in parentheses is the proportion of individual element ranks correctly identified. All results are averaged over 5 runs. SVHN w/ n = 32 is omitted to reduce the carbon impact of the evaluation. In all settings, the monotonic sorting networks clearly outperform the non-monotonic ones. Top: Odd-Even sorting networks with n = 3 (left) and n = 15 (right). Bottom: n = 32 with an Odd-Even (left) and a Bitonic network (right). For small n, such as 3, Cauchy performs best because it has a low error but is smooth at the same time. For larger n, such as 15 and 32, the optimal sigmoid function (wrt. error) f O performs better because it, while not being smooth, has the smallest possible approximation error which is more important for deeper networks. For the bitonic network with its more complex structure at n = 32 (bottom right), the reciprocal sigmoid f R performs best.
Evaluation of Inverse Temperature β. To further understand the behavior of the proposed monotonic functions compared to the logistic sigmoid function, we evaluate all sigmoid functions for different inverse temperatures β during training. We investigate four settings: odd-even networks for n ∈ {3, 15, 32} and a bitonic sorting network with n = 32 on the MNIST data set. Notably, there are 15 layers in the bitonic sorting networks with n = 32, while the odd-even networks for n = 15 also has 15 layers. We display the results of this evaluation in Figure 5. In Supplementary Material C, we show an analogous figure with additional settings. Note that here, we train for only 50% of the training steps compared to Table 3 to reduce the computational cost.
We observe that the optimal inverse temperature depends on the number of layers, rather than the overall number of samples n. This can be seen when comparing the peak accuracy of each function for the odd-even sorting network for different n and thus for different numbers of layers. The bitonic network for n = 32 (bottom right) has the same number of layers as n = 15 in the odd-even network (top right). Here, the peak performances for each sigmoid function fall within the same range, whereas the peak performances for the odd-even network for n = 32 (bottom left) are shifted almost an order of magnitude to the right. For all configurations, the proposed sigmoid functions for monotonic sorting networks improve over the standard logistic sigmoid function, as well as the ART.
The source code of this work is publicly available at github.com/Felix-Petersen/diffsort.
CONCLUSION
In this work, we addressed and analyzed monotonicity and error-boundness in differentiable sorting and ranking operators. Specifically, we focussed on differentiable sorting networks and presented a family of sigmoid functions that preserve monotonicity and bound approximation errors in differentiable sorting networks. This makes the sorting functions quasiconvex, and we empirically observe that the resulting method outperforms the state-of-the-art in differentiable sorting supervision.
[20] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," in International Conference on Learning Representations (ICLR), 2015.
[21] I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet, "Multi-digit number recognition from street view imagery using deep convolutional neural networks," arXiv preprint, 2013.
A IMPLEMENTATION DETAILS
For training, we use the same network architecture as in previous works [1], [2], [4] and also use the Adam optimizer [20] at a learning rate of 3 · 10 −4 . For Figure 5, we train for 100 000 steps. For Table 3, we train for 200 000 steps on MNIST and 1 000 000 steps on SVHN. We preprocess SVHN as done by Goodfellow et al. [21].
A.1 INVERSE TEMPERATURE β
For the inverse temperature β, we use the following values, which correspond to the optima in Figure 5 and were found via grid search:
B PROPERTIES OF min AND max
The core element of differentiable sorting networks is the relaxation of the conditional swap operation, allowing for a soft transition between passing through and swapping, such that the sorting operator becomes differentiable. It is natural to try to achieve this by using soft versions of the minimum (denoted by min~) and maximum (denoted by max~) operators. But before we consider concrete examples, let us collect some desirable properties that such relaxations should have. Naturally, min~ and max~ should satisfy many properties that their crisp / hard counterparts min and max satisfy, as well as a few others (for a, b, c ∈ R):

Symmetry / Commutativity. Since min and max are symmetric / commutative, so should be their soft counterparts: min~(a, b) = min~(b, a) and max~(a, b) = max~(b, a).

Ordering. Certainly a (soft) maximum of two numbers should be at least as large as a (soft) minimum of the same two numbers: min~(a, b) ≤ max~(b, a).

Continuity in Both Arguments. Both min~ and max~ should be continuous in both arguments.

Idempotency. If the two arguments are equal in value, this value should be the result of min~ and max~, that is, min~(a, a) = max~(a, a) = a.

Monotonicity in Both Arguments. For any c > 0, it should be min~(a + c, b) ≥ min~(a, b), min~(a, b + c) ≥ min~(a, b), max~(a + c, b) ≥ max~(a, b), and max~(a, b + c) ≥ max~(a, b). Note that the second expression for each operator follows from the first with the help of symmetry / commutativity.
Bounded Error / Minimum Deviation from Hard Versions. Soft versions of minimum and maximum should differ as little as possible from their crisp / hard counterparts. However, this condition needs to be made more precise to yield concrete properties (see below for details).
Note that min~ and max~ cannot satisfy associativity, as this would force them to be identical to their hard counterparts. Associativity means that max~(a, max~(b, c)) = max~(max~(a, b), c) and min~(a, min~(b, c)) = min~(min~(a, b), c). Now consider a, b ∈ R with a < b. Then with associativity and idempotency, max~(a, max~(a, b)) = max~(max~(a, a), b) = max~(a, b), and hence max~(a, b) = b = max(a, b) (by comparison of the second arguments). Analogously, one can show that if associativity held, we would have min~(a, b) = a = min(a, b). That is, one cannot have both associativity and idempotency. Note that without idempotency, the soft operators would not be bounded by their hard versions. As idempotency is thus necessary, associativity has to be given up.
If min~ and max~ are to be bounded by the crisp / hard versions, and symmetry, ordering, inversion, and stability (which imply sum preservation) hold, they must be convex combinations of the arguments a and b with weights that depend only on the difference of a and b. That is,

min~(a, b) = f(b − a) · a + (1 − f(b − a)) · b,
max~(a, b) = (1 − f(b − a)) · a + f(b − a) · b,

where f(x) yields a value in [0, 1] (due to boundedness of min~ and max~ by their crisp / hard counterparts). Due to inversion, f must satisfy f(x) = 1 − f(−x) and hence f(0) = 1/2. Monotonicity of min~ and max~ requires that f is a monotonically increasing function. Continuity requires that f is a continuous function. In summary, f must be a continuous sigmoid function (in the older meaning of this term, i.e., an s-shaped function, of which the logistic function is only a special case) satisfying f(x) = 1 − f(−x).
As mentioned, the condition that the soft versions of minimum and maximum should deviate as little as possible from the crisp / hard versions causes a slight problem: this deviation can always be made smaller by making the sigmoid function steeper (reaching the crisp / hard versions in the limit for infinite inverse temperature, when the sigmoid function turns into the Heaviside step function). Hence, in order to find the best shape of the sigmoid function, we have to limit its inverse temperature. Therefore, w.l.o.g., we require the sigmoid function to be Lipschitz-continuous with Lipschitz constant α = 1.
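The properties above are easy to check numerically for the convex-combination construction with f_R. The following is a small sanity-check sketch under our own naming; it is an illustration, not the paper's code.

```python
import random

def f_R(x):
    return 0.5 * (x / (1 + abs(x)) + 1)

def soft_min(a, b, f=f_R):
    w = f(b - a)                       # weight on the smaller-looking argument
    return w * a + (1 - w) * b

def soft_max(a, b, f=f_R):
    return a + b - soft_min(a, b, f)   # sum preservation by construction

eps = 1e-9
for _ in range(10_000):
    a, b, c = (random.uniform(-5, 5) for _ in range(3))
    assert abs(soft_min(a, b) - soft_min(b, a)) < eps                      # symmetry
    assert soft_min(a, b) <= soft_max(a, b) + eps                          # ordering
    assert abs(soft_min(a, a) - a) < eps                                   # idempotency
    assert min(a, b) - eps <= soft_min(a, b) <= soft_max(a, b) <= max(a, b) + eps  # bounded by hard versions
    assert abs(soft_min(a, b) + soft_max(a, b) - (a + b)) < eps            # sum preservation
    assert abs(soft_min(a, b) + soft_max(-a, -b)) < eps                    # inversion
    assert soft_min(a + abs(c), b) >= soft_min(a, b) - eps                 # monotonicity (Theorem 11)
```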
C ADDITIONAL EXPERIMENTS
In Figure 6, we display additional results for more settings, analogous to Figure 5.

Figure 6: Additional results analogous to Figure 5, evaluating different sigmoid functions on the sorting MNIST task for ranges of different inverse temperatures β. The metric is the proportion of individual element ranks correctly identified. In all settings, the monotonic sorting networks clearly outperform the non-monotonic ones. The first three rows use odd-even networks with n ∈ {3, 5, 7, 9, 15, 32}. The last row uses bitonic networks with n ∈ {16, 32}.
D ADDITIONAL PROOFS
Theorem 9 (Error-Bounds of Diff. Sorting Networks). If the error of each individual conditional swap of a sorting network is bounded by ε and the network has ℓ layers, the total error is bounded by ℓ · ε.
Proof. Induction over the number k of executed layers. Let x^(k) be the input x differentiably sorted for k layers, and let x̄^(k) be the input x hard-sorted for k layers, serving as an anchor. We require this anchor, as it is possible that x̄^(k)_i < x̄^(k)_j but x^(k)_i > x^(k)_j for some i, j, k.

Begin of induction: k = 0. The input vector x equals the vector x^(0) after 0 layers. Thus, the error is equal to 0 · ε = 0.
Step of induction: Given that after k − 1 layers the error is smaller than or equal to (k − 1) · ε, we need to show that the error after k layers is smaller than or equal to k · ε.
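Empirically, the ℓ · ε bound is easy to probe with the soft_sort sketch from above. For f_R at inverse temperature β, a short calculation bounds the error of a single soft swap by ε = 1 / (2β), so an ℓ-layer odd-even network should deviate from the hard sort by at most ℓ / (2β) per element, reading the theorem's error elementwise. This is an illustrative check of ours, not the paper's code.

```python
import torch

beta, n = 20.0, 8
x = torch.randn(1000, n) * 3
soft = soft_sort(x, f=f_reciprocal, beta=beta)   # odd-even network: l = n layers
hard = torch.sort(x, dim=-1).values
worst = (soft - hard).abs().max().item()
print(worst, "<=", n / (2 * beta), worst <= n / (2 * beta))
```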
Theorem 12. min~_{f_C} and max~_{f_C} are monotonic functions with the sigmoid function f_C.
E ADDITIONAL DISCUSSION
"How would the following baseline perform? Hard rank the predictions and compare it with the ground truth rank. Then, use their difference as the learning signal (i.e., instead of the gradient)."
This kind of supervision does not converge, even for small learning rates and in simplified settings. Specifically, we observed in our experiments that the range of values produced by the CNN gets compressed heavily by training in this fashion. Also counteracting it by explicitly adding a term to spread it out again did not help, and training was very unstable. Despite testing various hyperparameters (learning rate, adaptation factor, both absolute and relative to the range of values in a batch or in the whole data set, spread factor, etc.) it did not work, even on toy data like single-digit MNIST with n = 5.
"Could β be jointly trained as a parameter with the model?"
Yes, it could; however, in our experiments, we found that the entire training performs better if β is fixed. If β is also a parameter to be trained, its learning rate should be very small as it (i) should not change too fast and (ii) already accumulates many gradient signals as it is used many times.
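If one nevertheless wants to try training β, a minimal sketch (our illustration; the released code keeps β fixed) is to optimize log β in a separate parameter group with a much smaller learning rate:

```python
import torch

model = torch.nn.Linear(784, 1)                   # stand-in for the CNN
log_beta = torch.nn.Parameter(torch.tensor(2.0))  # optimize log(beta) so that beta stays positive
optimizer = torch.optim.Adam([
    {"params": model.parameters(), "lr": 3e-4},
    {"params": [log_beta], "lr": 1e-6},           # (i) slow changes; (ii) beta accumulates many gradient signals
])

def beta():
    return log_beta.exp()  # use beta() wherever the inverse temperature enters the loss
```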
2) A function f with f : R → [0, 1], lim x→−∞ f(x) = 0, and lim x→∞ f(x) = 1.
Figure 4: min~(x, 0) for Sinkhorn sort (red), NeuralSort (red), Relaxed Bubble sort (red), diffsort with the logistic sigmoid (red), diffsort with the activation replacement trick (purple), and Fast Sort (orange).
Figure 5: Evaluating different sigmoid functions on the sorting MNIST task for ranges of different inverse temperatures β. The metric is the proportion of individual element ranks correctly identified.

Table 3 (recovered rows):
f_R: Reciprocal Sigmoid   85.7 (89.8)   68.8 (84.2)   53.3 (80.0)   40.0 (76.3)   13.2 (66.0)   11.5 (64.9)
f_C: Cauchy CDF           85.5 (89.6)   68.5 (84.1)   52.9 (79.8)   39.9 (75.8)   13.7 (66.0)   12.2 (65.6)
f_O: Optimal Sigmoid      86.0 (90.0)   67.5 (83.5)   53.1 (80.0)   39.1 (76.0)   13.2 (66.3)   10.6 (66.8)
Inversion. As for min and max, the two operators min~ and max~ should be connected in such a way that the result of one operator equals the negated result of the other operator applied to negated arguments: min~(a, b) = −max~(−a, −b) and max~(a, b) = −min~(−a, −b).

Stability / Shift Invariance. Shifting both arguments by some value c ∈ R should shift each operator's result by the same value: min~(a + c, b + c) = min~(a, b) + c and max~(a + c, b + c) = max~(a, b) + c. Stability implies that the values of min~ and max~ depend effectively only on the difference of their arguments. Specifically, choosing c = −a yields min~(a, b) = min~(0, b − a) + a and max~(a, b) = max~(0, b − a) + a, and c = −b yields min~(a, b) = min~(a − b, 0) + b and max~(a, b) = max~(a − b, 0) + b.

Sum Preservation. The sum of min~ and max~ should equal the sum of min and max: min~(a, b) + max~(a, b) = min(a, b) + max(a, b) = a + b. Note that sum preservation follows from stability, inversion, and symmetry: min~(a, b) = min~(a − b, 0) + b = b − max~(0, b − a) = b − (max~(a, b) − a) = a + b − max~(a, b).

Bounded by Hard Versions. The soft operators should not yield values more extreme than their crisp / hard counterparts: min(a, b) ≤ min~(a, b) and max~(a, b) ≤ max(a, b). Note that together with ordering this property implies idempotency, viz.: a = min(a, a) ≤ min~(a, a) ≤ max~(a, a) ≤ max(a, a) = a. Otherwise, the operators cannot be defined via a convex combination of their inputs, making it impossible to define proper argmin~ and argmax~, and hence we could not compute differentiable permutation matrices.
Table 2: For each differentiable sorting operator, whether it is monotonic (M) and whether it has a bounded error (BE). The methods compared are NeuralSort, Sinkhorn Sort, Fast Sort, Relaxed Bubble Sort, and differentiable sorting networks with σ, with σ ∘ φ, and with the proposed f_R, f_C, and f_O; per the text, the proposed f_R, f_C, and f_O are both monotonic and error-bounded.
The layer consists of comparator pairs i, j. W.l.o.g. we assume x^(k−1)_i ≤ x^(k−1)_j (the induction step continues in the appendix text below).

The following derivative computation belongs to the proof of Theorem 11 below; w.l.o.g. a_i = x and a_j = 0, and max~_{f_R}(x, 0) = x · f_R(x):

∂/∂x max~_{f_R}(x, 0)
= 1/2 (x/(1+|x|) + 1) + x · 1/2 · d/dx (x/(1+|x|) + 1)                                 (16)
= 1/2 (x/(1+|x|) + 1) + x · 1/2 · d/dx (x/(1+|x|))                                     (17)
= 1/2 (x/(1+|x|) + 1) + x · 1/2 · [ (dx/dx) · (1+|x|) − x · d(|x|+1)/dx ] / (1+|x|)^2  (18)
= 1/2 (x/(1+|x|) + 1) + x · 1/2 · [ (1+|x|) − x · sgn(x) ] / (1+|x|)^2                 (19)
= 1/2 (x/(1+|x|) + 1) + x · 1/2 · (1+|x|−|x|) / (1+|x|)^2                              (20)
= 1/2 (x/(1+|x|) + 1) + x · 1/2 · 1 / (1+2|x|+|x|^2)                                   (21)
= 1/2 ( x/(1+|x|) + 1 + x/(1+2|x|+|x|^2) )                                             (22)
= 1/2 ( x(1+|x|)/(1+2|x|+|x|^2) + (1+2|x|+|x|^2)/(1+2|x|+|x|^2) + x/(1+2|x|+|x|^2) )   (23)
= 1/2 · (2x + 2|x| + x|x| + |x|^2 + 1) / (1+2|x|+|x|^2)                                (24)
= 1/2 · (2(x+|x|) + |x|(x+|x|) + 1) / (1+2|x|+|x|^2)                                   (25)
≥ 1/2 · 1 / (1+2|x|+|x|^2)        (because x + |x| ≥ 0)                                 (26)
> 0.                                                                                    (27)

min~_{f_R} is analogous.
ACKNOWLEDGMENTS & FUNDING DISCLOSURE

We warmly thank Robert Denk for helpful discussions. This work was supported by the Goethe Center for Scientific Computing (G-CSC) at Goethe University Frankfurt, the IBM-MIT Watson AI Lab, and the DFG in the Cluster of Excellence EXC 2117 "Centre for the Advanced Study of Collective Behaviour" (Project-ID …).

REPRODUCIBILITY STATEMENT

We made the source code and experiments of this work publicly available at github.com/Felix-Petersen/diffsort to foster future research in this direction. All data sets are publicly available. We specify all necessary hyperparameters for each experiment. We use the same model architectures as in previous works. We demonstrate how the choice of hyperparameter β affects the performance in Figure 5. Each experiment can be reproduced on a single GPU.

Proof of Theorem 9 (continued). W.l.o.g. we assume that wire i will be the min and that wire j will be the max, therefore x^(k)_i ≤ x^(k)_j. We distinguish two cases: … have to be so close that, within the margin of error, such a reversed order is possible. According to the assumption, …

Theorem 11. min~_{f_R} and max~_{f_R} are monotonic functions with the sigmoid function f_R.

Proof. W.l.o.g., we assume a_i = x and a_j = 0. To show monotonicity, we consider its derivative / slope; see Eqs. (16)-(27) above, which show that the derivative is strictly positive.

Proof (of Theorem 12). W.l.o.g., we assume a_i = x and a_j = 0. To show monotonicity, we consider its derivative, and to reason about the derivative, we also consider the second derivative. For z ∈ (−∞, 0]: the derivative of min~_{f_C}(0, x) converges to 1 for z → −∞ (Eq. 31). For z ∈ [0, ∞): the derivative of min~_{f_C}(0, x) converges to 0 for z → ∞ (Eq. 30). The second derivative (Eq. 32) of min~_{f_C}(0, x) is always negative. Therefore, the derivative is always in (0, 1), and therefore always positive. Thus, min~_{f_C}(0, x) is strictly monotonic. max~_{f_C} is analogous.
[1] A. Grover, E. Wang, A. Zweig, and S. Ermon, "Stochastic optimization of sorting networks via continuous relaxations," in International Conference on Learning Representations (ICLR), 2019.
[2] M. Cuturi, O. Teboul, and J.-P. Vert, "Differentiable ranking and sorting using optimal transport," in Proc. Neural Information Processing Systems (NeurIPS), 2019.
[3] M. Blondel, O. Teboul, Q. Berthet, and J. Djolonga, "Fast differentiable sorting and ranking," in Proc. Machine Learning Research (PMLR), International Conference on Machine Learning (ICML), 2020.
[4] F. Petersen, C. Borgelt, H. Kuehne, and O. Deussen, "Differentiable sorting networks for scalable sorting and ranking supervision," in Proc. Machine Learning Research (PMLR), International Conference on Machine Learning (ICML), 2021.
[5] D. E. Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching (2nd Ed.). Addison Wesley, 1998.
[6] K. C. Kiwiel, "Convergence and efficiency of subgradient methods for quasiconvex minimization," Mathematical Programming, vol. 90, 2001.
[7] Y. LeCun, C. Cortes, and C. Burges, "MNIST handwritten digit database," 2010. [Online]. Available: http://yann.lecun.com/exdb/mnist.
[8] G. Mena, D. Belanger, S. Linderman, and J. Snoek, "Learning latent permutations with Gumbel-Sinkhorn networks," in International Conference on Learning Representations (ICLR), 2018.
[9] M. Cuturi, "Sinkhorn distances: Lightspeed computation of optimal transport," in Proc. Neural Information Processing Systems (NeurIPS), 2013.
[10] F. Petersen, C. Borgelt, H. Kuehne, and O. Deussen, "Learning with algorithmic supervision via continuous relaxations," in Proc. Neural Information Processing Systems (NeurIPS), 2021.
[11] H. Lee, S. Cho, Y. Jang, J. Kim, and H. Woo, "Differentiable ranking metric using relaxed sorting for top-k recommendation," IEEE Access, 2021.
[12] R. Swezey, A. Grover, B. Charron, and S. Ermon, "PiRank: Learning to rank via differentiable sorting," in Proc. Neural Information Processing Systems (NeurIPS), 2021.
[13] J.-B. Cordonnier, A. Mahendran, A. Dosovitskiy, D. Weissenborn, J. Uszkoreit, and T. Unterthiner, "Differentiable patch selection for image recognition," in Proc. International Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[14] Y. Xie, H. Dai, M. Chen, B. Dai, T. Zhao, H. Zha, W. Wei, and T. Pfister, "Differentiable top-k with optimal transport," in Proc. Neural Information Processing Systems (NeurIPS), 2020.
[15] K. Goyal, G. Neubig, C. Dyer, and T. Berg-Kirkpatrick, "A continuous relaxation of beam search for end-to-end training of neural sequence models," in AAAI Conference on Artificial Intelligence, 2018.
[16] A. N. Habermann, "Parallel neighbor-sort (or the glory of the induction principle)," 1972.
[17] K. E. Batcher, "Sorting networks and their applications," in Proc. AFIPS Spring Joint Computing Conference (Atlantic City, NJ), 1968, pp. 307-314.
[18] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," in Proc. Neural Information Processing Systems (NeurIPS), 2014.
[19] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, "Reading digits in natural images with unsupervised feature learning," 2011.
204,734,348 | CONDITIONAL LEARNING OF FAIR REPRESENTATIONS | We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting. Two key components underpinning the design of our algorithm are balanced error rate and conditional alignment of representations. We show how these two components contribute to ensuring accuracy parity and equalized false-positive and false-negative rates across groups without impacting demographic parity. Furthermore, we also demonstrate both in theory and on two real-world experiments that the proposed algorithm leads to a better utility-fairness trade-off on balanced datasets compared with existing algorithms on learning fair representations for classification. | [] | CONDITIONAL LEARNING OF FAIR REPRESENTATIONS
Han Zhao [email protected]
Machine Learning Department
Department of Engineering
Montreal Machine Learning Department
Carnegie Mellon University
University of Cambridge
Microsoft Research
Carnegie Mellon University
Amanda Coston [email protected]
Machine Learning Department
Department of Engineering
Montreal Machine Learning Department
Carnegie Mellon University
University of Cambridge
Microsoft Research
Carnegie Mellon University
Tameem Adel
Machine Learning Department
Department of Engineering
Montreal Machine Learning Department
Carnegie Mellon University
University of Cambridge
Microsoft Research
Carnegie Mellon University
Geoffrey J Gordon [email protected]
Machine Learning Department
Department of Engineering
Montreal Machine Learning Department
Carnegie Mellon University
University of Cambridge
Microsoft Research
Carnegie Mellon University
CONDITIONAL LEARNING OF FAIR REPRESENTATIONS
Published as a conference paper at ICLR 2020
We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting. Two key components underpinning the design of our algorithm are balanced error rate and conditional alignment of representations. We show how these two components contribute to ensuring accuracy parity and equalized false-positive and false-negative rates across groups without impacting demographic parity. Furthermore, we also demonstrate both in theory and on two real-world experiments that the proposed algorithm leads to a better utility-fairness trade-off on balanced datasets compared with existing algorithms on learning fair representations for classification.
INTRODUCTION
High-stakes settings, such as loan approvals, criminal justice, and hiring processes, use machine learning tools to help make decisions. A key question in these settings is whether the algorithm makes fair decisions. In settings that have historically had discrimination, we are interested in defining fairness with respect to a protected group, the group which has historically been disadvantaged. The rapidly growing field of algorithmic fairness has a vast literature that proposes various fairness metrics, characterizes the relationship between fairness metrics, and describes methods to build classifiers that satisfy these metrics (Chouldechova & Roth, 2018;Corbett-Davies & Goel, 2018). Among many recent attempts to achieve algorithmic fairness (Dwork et al., 2012;Hardt et al., 2016;Zemel et al., 2013;Zafar et al., 2015), learning fair representations has attracted increasing attention due to its flexibility in learning rich representations based on advances in deep learning (Edwards & Storkey, 2015;Louizos et al., 2015;Beutel et al., 2017;Madras et al., 2018). The backbone idea underpinning this line of work is very intuitive: if the representations of data from different groups are similar to each other, then any classifier acting on such representations will also be agnostic to the group membership.
However, it has long been empirically observed and recently been proved (Zhao & Gordon, 2019) that fairness is often at odds with utility. For example, consider demographic parity, which requires the classifier to be independent of the group membership attribute. It is clear that demographic parity will cripple the utility if the demographic group membership and the target variable are indeed correlated. To escape such an inherent trade-off, other notions of fairness, such as equalized odds (Hardt et al., 2016), which asks for equal false positive and negative rates across groups, and accuracy parity (Zafar et al., 2017), which seeks equalized error rates across groups, have been proposed. It is a well-known result that equalized odds is incompatible with demographic parity (Barocas et al., 2017) except in degenerate cases where group membership is independent of the target variable. This raises a natural question: can we learn representations that simultaneously guarantee equalized odds and accuracy parity? In this work, we provide an affirmative answer to the above question by proposing an algorithm to align the conditional distributions (on the target variable) of representations across different demographic subgroups. The proposed formulation is a minimax problem that admits a simple reduction to cost-sensitive learning. The key component underpinning the design of our algorithm is the balanced error rate (BER, c.f. Section 2) (Feldman et al., 2015; Menon & Williamson, 2018), over the target variable and protected attributes. We demonstrate both in theory and on two real-world experiments that together with the conditional alignment, BER helps our algorithm to simultaneously ensure accuracy parity and equalized odds across groups. Our key contributions are summarized as follows:
• We prove that BER plays a fundamental role in ensuring accuracy parity and a small joint error across groups. Together with the conditional alignment of representations, this implies that we can simultaneously achieve equalized odds and accuracy parity. Furthermore, we also show that when equalized odds is satisfied, BER serves as an upper bound on the error of each subgroup. These results help to justify the design of our algorithm, which uses BER instead of the marginal error as its loss function.
• We provide theoretical results that our method achieves equalized odds without impacting demographic parity. This result shows that we can preserve the demographic parity gap for free while simultaneously achieving equalized odds.
• Empirically, among existing fair representation learning methods, we demonstrate that our algorithm is able to achieve better utility on balanced datasets. On an imbalanced dataset, our algorithm is the only method that achieves accuracy parity; however, it does so at the cost of decreased utility.
We believe our theoretical results contribute to the understanding on the relationship between equalized odds and accuracy parity, and the proposed algorithm provides an alternative in real-world scenarios where accuracy parity and equalized odds are desired.
PRELIMINARY
We first introduce the notations used throughout the paper and formally describe the problem setup and various definitions of fairness explored in the literature.
Notation
We use X ⊆ R d and Y = {0, 1} to denote the input and output space. Accordingly, we use X and Y to denote the random variables which take values in X and Y, respectively. Lower case letters x and y are used to denote the instantiation of X and Y . To simplify the presentation, we use A ∈ {0, 1} as the sensitive attribute, e.g., race, gender, etc. 1 Let H be the hypothesis class of classifiers. In other words, for h ∈ H, h : X → Y is the predictor that outputs a prediction. Note that even if the predictor does not explicitly take the sensitive attribute A as input, this fairness through blindness mechanism can still be biased due to the potential correlations between X and A. In this work we study the stochastic setting where there is a joint distribution D over X, Y and A from which the data are sampled. To keep the notation consistent, for a, y ∈ {0, 1}, we use D a to mean the conditional distribution of D given A = a and D y to mean the conditional distribution of D given Y = y. For an event E, D(E) denotes the probability of E under D. In particular, in the literature of fair machine learning, we call D(Y = 1) the base rate of distribution D and we use ∆ BR (D, D ) := |D(Y = 1) − D (Y = 1)| to denote the difference of the base rates between two distributions D and D over the same sample space.
Given a feature transformation function g : X → Z that maps instances from the input space X to feature space Z, we define g D := D • g −1 to be the induced (pushforward) distribution of D under g, i.e., for any event E ⊆ Z, g D(E ) := D(g −1 (E )) = D({x ∈ X | g(x) ∈ E }). To measure the discrepancy between distributions, we use d TV (D, D ) to denote the total variation between them: d TV (D, D ) := sup E |D(E) − D (E)|. In particular, for binary random variable Y , it can be readily verified that the total variation between the marginal distributions D(Y ) and D (Y ) reduces to the difference of their base rates, ∆ BR (D, D ). To see this, realize that
d_TV(D(Y), D′(Y)) = max{ |D(Y = 1) − D′(Y = 1)|, |D(Y = 0) − D′(Y = 0)| } = |D(Y = 1) − D′(Y = 1)| = ∆_BR(D, D′) by definition.
Given a joint distribution D, the error of a predictor h under D is defined as
Err D (h) := E D [|Y − h(X)|].
Note that for binary classification problems, when h(X) ∈ {0, 1}, Err_D(h) reduces to the true error rate of binary classification. To make the notation more compact, we may drop the subscript D when it is clear from the context. Furthermore, we use CE_D(Ŷ ‖ Y) to denote the cross-entropy loss function between the predicted variable Ŷ and the true label distribution Y over the joint distribution D. For binary random variables Y, we define BER_D(Ŷ ‖ Y) to be the balanced error rate of predicting Y using Ŷ, i.e.,

BER_D(Ŷ ‖ Y) := D(Ŷ = 0 | Y = 1) + D(Ŷ = 1 | Y = 0).

Realize that D(Ŷ = 0 | Y = 1) is the false negative rate (FNR) of Ŷ and D(Ŷ = 1 | Y = 0) corresponds to the false positive rate (FPR). So the balanced error rate can also be understood as the sum of FPR and FNR of the predictor Ŷ. We can similarly define BER_D(Â ‖ A) as well.
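Computed directly from label arrays, this definition reads as follows (a minimal numpy sketch with assumed inputs; both classes must be present):

```python
import numpy as np

def balanced_error_rate(y_hat, y):
    """BER_D(Yhat || Y) = D(Yhat = 0 | Y = 1) + D(Yhat = 1 | Y = 0)."""
    y_hat, y = np.asarray(y_hat), np.asarray(y)
    fnr = np.mean(y_hat[y == 1] == 0)  # false negative rate
    fpr = np.mean(y_hat[y == 0] == 1)  # false positive rate
    return fnr + fpr
```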
Problem Setup In this work we focus on group fairness where the group membership is given by the sensitive attribute A. We assume that the sensitive attribute A is available to the learner during training phase, but not inference phase. As a result, post-processing techniques to ensure fairness are not feasible under our setting. In the literature, there are many possible definitions of fairness (Narayanan, 2018), and in what follows we provide a brief review of the ones that are mostly relevant to this work. Definition 2.1 (Demographic Parity (DP)). Given a joint distribution D, a classifier Y satisfies demographic parity if Y is independent of A.
When Y is a deterministic binary classifier, demographic parity reduces to the requirement that D 0 ( Y = 1) = D 1 ( Y = 1), i.e., positive outcome is given to the two groups at the same rate. Demographic parity is also known as statistical parity, and it has been adopted as the default definition of fairness in a series of work (Calders & Verwer, 2010;Edwards & Storkey, 2015;Johndrow et al., 2019;Kamishima et al., 2011;Louizos et al., 2015;Zemel et al., 2013;Madras et al., 2018;Adel et al., 2019). It is not surprising that demographic parity may cripple the utility that we hope to achieve, especially in the common scenario where the base rates differ between two groups (Hardt et al., 2016). Formally, the following theorem characterizes the trade-off in terms of the joint error across different groups:
Theorem 2.1. (Zhao & Gordon, 2019) Let Y = h(g(X)) be the classifier. Then Err D0 (h • g) + Err D1 (h • g) ≥ ∆ BR (D 0 , D 1 ) − d TV (g D 0 , g D 1 ).
For representations that are independent of the sensitive attribute A, the second term d_TV(g♯D_0, g♯D_1) becomes 0, and this implies:
Err D0 (h • g) + Err D1 (h • g) ≥ ∆ BR (D 0 , D 1 ).
At a colloquial level, the above inequality could be understood as an uncertainty principle which says:
For fair representations, it is not possible to construct a predictor that simultaneously minimizes the errors on both demographic subgroups.
More precisely, by the pigeonhole principle, the following corollary holds:
Corollary 2.1. If d TV (g D 0 , g D 1 ) = 0, then for any hypothesis h, max{Err D0 (h • g), Err D1 (h • g)} ≥ ∆ BR (D 0 , D 1 )/2.
In words, this means that for fair representations in the demographic parity sense, at least one of the subgroups has to incur a prediction error of at least ∆ BR (D 0 , D 1 )/2 which could be large in settings like criminal justice where ∆ BR (D 0 , D 1 ) is large. In light of such inherent trade-off, an alternative definition is accuracy parity, which asks for equalized error rates across different groups:
Definition 2.2 (Accuracy Parity). Given a joint distribution D, a classifier Y satisfies accuracy parity
if D 0 ( Y = Y ) = D 1 ( Y = Y ).
A violation of accuracy parity is also known as disparate mistreatment (Zafar et al., 2017). Different from the definition of demographic parity, the definition of accuracy parity does not eliminate the perfect predictor even when the base rates differ across groups. Of course, other more refined definitions of fairness also exist in the literature, such as equalized odds (Hardt et al., 2016). Definition 2.3 (Equalized Odds, a.k.a. Positive Rate Parity). Given a joint distribution D, a classifier Y satisfies equalized odds if Y is independent of A conditioned on Y .
The definition of equalized odds essentially requires equal true positive and false positive rates between different groups, hence it is also known as positive rate parity. Analogous to accuracy parity, equalized odds does not eliminate the perfect classifier (Hardt et al., 2016), and we will also justify this observation by formal analysis shortly. Last but not least, we have the following definition for predictive rate parity: Definition 2.4 (Predictive Rate Parity). Given a joint distribution D, a classifier Y satisfies predictive
rate parity if D 0 (Y = 1 | Y = c) = D 1 (Y = 1 | Y = c), ∀c ∈ [0, 1].
Note that in the above definition we allow the classifier Y to be probabilistic, meaning that the output of Y could be any value between 0 and 1. For the case where Y is deterministic, Chouldechova (2017) showed that no deterministic classifier can simultaneously satisfy equalized odds and predictive rate parity when the base rates differ across subgroups and the classifier is not perfect.
ALGORITHM AND ANALYSIS
In this section we first give the proposed optimization formulation and then discuss through formal analysis the motivation of our algorithmic design. Specifically, we show in Section 3.1 why our formulation helps to escape the utility-fairness trade-off. We then in Section 3.2 formally prove that the BERs in the objective function could be used to guarantee a small joint error across different demographic subgroups. In Section 3.3 we establish the relationship between equalized odds and accuracy parity by providing an upper bound of the error gap in terms of both BER and the equalized odds gap. We conclude this section with a brief discussion on the practical implementation of the proposed optimization formulation in Section 3.4. Due to the space limit, we defer all the proofs to the appendix and focus on explaining the high-level intuition and implications of our results.
As briefly discussed in Section 5, a dominant approach to learning fair representations is via adversarial training. Specifically, the following objective function is optimized:

min_{h,g} max_{h′}  CE_D(h(g(X)) ‖ Y) − λ · CE_D(h′(g(X)) ‖ A)        (1)
In the above formulation, the first term corresponds to minimization of prediction loss of the target variable and the second term represents the loss incurred by the adversary h . Overall this minimax optimization problem expresses a trade-off (controlled by the hyperparameter λ > 0) between utility and fairness through the representation learning function g: on one hand g needs to preserve sufficient information related to Y in order to minimize the first term, but on the other hand g also needs to filter out information related to A in order to maximize the second term.
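In practice, this minimax objective is commonly optimized with a gradient reversal layer (also used for the experiments in Section 4). Below is a minimal PyTorch sketch; the class and helper names are of our own choosing.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# usage: z = g(x); loss = ce(h(z), y) + ce(h_adv(grad_reverse(z, lambd)), a)
# the adversary h_adv descends its own loss, while the encoder g receives
# reversed gradients from the adversary term, implementing the max over h'.
```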
However, as we introduced in Section 2, the above framework is still subjective to the inherent trade-off between utility and fairness. To escape such a trade-off, we advocate for the following optimization formulation instead:
min_{h,g} max_{h′,h″}  BER_D(h(g(X)) ‖ Y) − λ ( BER_{D^0}(h′(g(X)) ‖ A) + BER_{D^1}(h″(g(X)) ‖ A) )        (2)
Note that here we optimize over two distinct adversaries, one for each conditional distribution D y , y ∈ {0, 1}. Intuitively, the main difference between (2) and (1) is that we use BER as our objective function in both terms. By definition, since BER corresponds to the sum of Type-I and Type-II errors in classification, the proposed objective function essentially minimizes the conditional errors instead of the original marginal error:
D(Ŷ ≠ Y) = D(Y = 0) · D(Ŷ ≠ Y | Y = 0) + D(Y = 1) · D(Ŷ ≠ Y | Y = 1),
BER_D(Ŷ ‖ Y) ∝ 1/2 · D(Ŷ ≠ Y | Y = 0) + 1/2 · D(Ŷ ≠ Y | Y = 1),        (3)
which means that the loss function gives equal importance to the classification error from both groups. Note that the BERs in the second term of (2) are over D y , y ∈ {0, 1}. Roughly speaking, the second term encourages alignment of the the conditional distributions g D y 0 and g D y 1 for y ∈ {0, 1}. The following proposition shows that a perfect conditional alignment of the representations also implies that any classifier based on the representations naturally satisfies the equalized odds criterion:
Proposition 3.1. For g : X → Z, if d_TV(g♯D^y_0, g♯D^y_1) = 0, ∀y ∈ {0, 1}, then for any classifier h : Z → {0, 1}, h ∘ g satisfies equalized odds.
To understand why we aim for conditional alignment of distributions instead of aligning the marginal feature distributions, the following proposition characterizes why such alignment will help us to escape the previous trade-off:
Proposition 3.2. For g : X → Z, if d_TV(g♯D^y_0, g♯D^y_1) = 0, ∀y ∈ {0, 1}, then for any classifier h : Z → {0, 1}, d_TV((h ∘ g)♯D_0, (h ∘ g)♯D_1) ≤ ∆_BR(D_0, D_1).
As a corollary, this implies that the lower bound given in Theorem 2.1 now vanishes if we instead align the conditional distributions of representations:
Err_{D_0}(h ∘ g) + Err_{D_1}(h ∘ g) ≥ ∆_BR(D_0, D_1) − d_TV((h ∘ g)♯D_0, (h ∘ g)♯D_1) = 0,
where the first inequality is due to Lemma 3.1 (Zhao & Gordon, 2019) and the triangle inequality by the d TV (·, ·) distance. Of course, the above lower bound can only serve as a necessary condition but not sufficient to ensure a small joint error across groups. Later (c.f. Theorem 3.2) we will show that together with a small BER on the target variable, it becomes a sufficient condition as well.
THE PRESERVATION OF DEMOGRAPHIC PARITY GAP AND SMALL JOINT ERROR
In this section we show that learning representations by aligning the conditional distributions across groups cannot increase the DP gap as compared to the DP gap of Y . Before we proceed, we first introduce a metric to measure the deviation of a predictor from satisfying demographic parity:
Definition 3.1 (DP Gap). Given a joint distribution D, the demographic parity gap of a classifier Y is ∆ DP ( Y ) := |D 0 ( Y = 1) − D 1 ( Y = 1)|.
Clearly, if ∆ DP ( Y ) = 0, then Y satisfies demographic parity. To simplify the exposition, let γ a := D a (Y = 0), ∀a ∈ {0, 1}. We first prove the following lemma:
Lemma 3.1. Assume the conditions in Proposition 3.1 hold and let Ŷ = h(g(X)) be the classifier; then |D_0(Ŷ = y) − D_1(Ŷ = y)| ≤ |γ_0 − γ_1| · (D^0(Ŷ = y) + D^1(Ŷ = y)), ∀y ∈ {0, 1}.
Lemma 3.1 gives an upper bound on the difference of the prediction probabilities across different subgroups. Applying Lemma 3.1 twice for y = 0 and y = 1, we can prove the following theorem:
Theorem 3.1. Assume the conditions in Proposition 3.1 hold and let Y = h(g(X)) be the classifier,
then ∆ DP ( Y ) ≤ ∆ BR (D 0 , D 1 ) = ∆ DP (Y ).
Remark Theorem 3.1 shows that aligning the conditional distributions of representations between groups will not add more bias in terms of the demographic parity gap. In particular, the DP gap of any classifier that satisfies equalized odds will be at most the DP gap of the perfect classifier. This is particularly interesting as it is well-known in the literature (Barocas et al., 2017) that demographic parity is not compatible with equalized odds except in degenerate cases. Despite this result, Theorem 3.1 says that we can still achieve equalized odds and simultaneously preserve the DP gap.
In Section 3.1 we show that aligning the conditional distributions of representations between groups helps reduce the lower bound of the joint error, but nevertheless that is only a necessary condition.
In the next theorem we show that together with a small Type-I and Type-II error in inferring the target variable Y , these two properties are also sufficient to ensure a small joint error across different demographic subgroups.
Theorem 3.2. Assume the conditions in Proposition 3.1 hold and let Y = h(g(X)) be the classifier,
then Err D0 ( Y ) + Err D1 ( Y ) ≤ 2BER D ( Y || Y ).
The above bound means that in order to achieve small joint error across groups, it suffices for us to minimize the BER if a classifier satisfies equalized odds. Note that by definition, the BER in the bound equals to the sum of Type-I and Type-II classification errors using Y as a classifier. Theorem 3.2 gives an upper bound of the joint error across groups and it also serves as a motivation for us to design the optimization formulation (2) that simultaneously minimizes the BER and aligns the conditional distributions.
CONDITIONAL ALIGNMENT AND BALANCED ERROR RATES LEAD TO SMALL ERROR
In this section we will see that a small BER and equalized odds together not only serve as a guarantee of a small joint error, but they also lead to a small error gap between different demographic subgroups. Recall that we define γ a := D a (Y = 0), ∀a ∈ {0, 1}. Before we proceed, we first formally define the accuracy gap and equalized odds gap of a classifier Y :
Definition 3.2 (Error Gap). Given a joint distribution D, the error gap of a classifier Ŷ is ∆_Err(Ŷ) := |D_0(Ŷ ≠ Y) − D_1(Ŷ ≠ Y)|.

Definition 3.3 (Equalized Odds Gap). Given a joint distribution D, the equalized odds gap of a classifier Ŷ is ∆_EO(Ŷ) := max_{y ∈ {0,1}} |D^y_0(Ŷ = 1) − D^y_1(Ŷ = 1)|.
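Empirical estimates of these gaps from held-out predictions are straightforward; the following is a sketch with assumed binary numpy arrays y (labels), y_hat (predictions), and a (group attribute).

```python
import numpy as np

def dp_gap(y_hat, a):
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def err_gap(y_hat, y, a):
    err = (y_hat != y)
    return abs(err[a == 0].mean() - err[a == 1].mean())

def eo_gap(y_hat, y, a):
    return max(
        abs(y_hat[(a == 0) & (y == yy)].mean() - y_hat[(a == 1) & (y == yy)].mean())
        for yy in (0, 1)
    )
```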
By definition the error gap could also be understood as the accuracy parity gap between different subgroups. The following theorem characterizes the relationship between error gap, equalized odds gap and the difference of base rates across subgroups:
Theorem 3.3. For any classifier Y , ∆ Err ( Y ) ≤ ∆ BR (D 0 , D 1 ) · BER D ( Y || Y ) + 2∆ EO ( Y ).
As a direct corollary of Theorem 3.3, if the classifier Y satisfies equalized odds, then ∆ EO ( Y ) = 0. In this case since ∆ BR (D 0 , D 1 ) is a constant, minimizing the balanced error rate BER D ( Y || Y ) also leads to minimizing the error gap. Furthermore, if we combine Theorem 3.2 and Theorem 3.3 together, we can guarantee that each of the errors cannot be too large: Corollary 3.1. For any joint distribution D and classifier Y , if Y satisfies equalized odds, then
max{Err D0 ( Y ), Err D1 ( Y )} ≤ ∆ BR (D 0 , D 1 ) · BER D ( Y || Y )/2 + BER D ( Y || Y ).
Remark It is a well-known fact that out of the three fairness criteria, i.e., demographic parity, equalized odds, and predictive rate parity, any two of them cannot hold simultaneously (Barocas et al., 2017) except in degenerate cases. By contrast, Theorem 3.3 suggests it is possible to achieve both equalized odds and accuracy parity. In particular, among all classifiers that satisfy equalize odds, it suffices to minimize the sum of Type-I and Type-II error BER D ( Y || Y ) in order to achieve accuracy parity. It is also worth pointing out that Theorem 3.3 provides only an upper bound, but not necessarily the tightest one. In particular, the error gap could still be 0 while BER D ( Y || Y ) is greater than 0. To see this, we have
Err_{D_0}(Ŷ) = D_0(Y = 0) · D_0(Ŷ = 1 | Y = 0) + D_0(Y = 1) · D_0(Ŷ = 0 | Y = 1),
Err_{D_1}(Ŷ) = D_1(Y = 0) · D_1(Ŷ = 1 | Y = 0) + D_1(Y = 1) · D_1(Ŷ = 0 | Y = 1).

Now if the predictor Ŷ satisfies equalized odds, then

D_0(Ŷ = 1 | Y = 0) = D_1(Ŷ = 1 | Y = 0) = D(Ŷ = 1 | Y = 0),
D_0(Ŷ = 0 | Y = 1) = D_1(Ŷ = 0 | Y = 1) = D(Ŷ = 0 | Y = 1).
Hence the error gap ∆ Err ( Y ) admits the following identity:
∆_Err(Ŷ) = | D(Ŷ = 1 | Y = 0) · (D_0(Y = 0) − D_1(Y = 0)) + D(Ŷ = 0 | Y = 1) · (D_0(Y = 1) − D_1(Y = 1)) |
= ∆_BR(D_0, D_1) · | D(Ŷ = 1 | Y = 0) − D(Ŷ = 0 | Y = 1) |
= ∆_BR(D_0, D_1) · | FPR(Ŷ) − FNR(Ŷ) |.
In other words, if the predictor Y satisfies equalized odds, then in order to have equalized accuracy, Y only needs to have equalized FPR and FNR globally when the base rates differ across groups. This is a much weaker condition to ask for than the one asking BER D ( Y || Y ) = 0.
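The identity is easy to verify numerically; below is an illustrative check with synthetic rates of our choosing.

```python
fpr, fnr = 0.2, 0.05        # rates shared across groups, i.e., equalized odds holds
base0, base1 = 0.6, 0.3     # group base rates D_a(Y = 1)
err0 = (1 - base0) * fpr + base0 * fnr
err1 = (1 - base1) * fpr + base1 * fnr
assert abs(abs(err0 - err1) - abs(base0 - base1) * abs(fpr - fnr)) < 1e-12
```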
PRACTICAL IMPLEMENTATION
We cannot directly optimize the proposed optimization formulation (2) since the binary 0/1 loss is NPhard to optimize, or even approximately optimize over a wide range of hypothesis classes (Ben-David et al., 2003). However, observe that for any classifier Y and y ∈ {0, 1}, the log-loss (cross-entropy loss) CE D y ( Y || Y ) is a convex relaxation of the binary loss:
D(Ŷ ≠ y | Y = y) = D(Ŷ ≠ y, Y = y) / D(Y = y) ≤ CE_{D^y}(Ŷ ‖ Y) / D(Y = y).        (4)
Hence in practice we can relax the optimization problem (2) to a cost-sensitive cross-entropy loss minimization problem, where the weight for each class is given by the inverse marginal probability of the corresponding class. This allows us to equivalently optimize the objective function without explicitly computing the conditional distributions.
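Concretely, this relaxation amounts to class-weighted cross-entropy with inverse marginal class probabilities as weights. A PyTorch sketch follows; estimating the marginals from the current batch is our simplification.

```python
import torch
import torch.nn.functional as F

def ber_surrogate_loss(logits, y):
    """Cross-entropy with per-class weights 1 / D(Y = y), cf. Eq. (4)."""
    p = torch.bincount(y, minlength=logits.shape[1]).float() / len(y)
    weight = 1.0 / p.clamp_min(1e-8)  # inverse marginal class probabilities
    return F.cross_entropy(logits, y, weight=weight)
```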
EMPIRICAL STUDIES
In light of our theoretic findings, in this section we verify the effectiveness of the proposed algorithm in simultaneously ensuring equalized odds and accuracy parity using real-world datasets. We also analyze the impact of imposing such parity constraints on the utility of the target classifier, as well as its relationship to the intrinsic structure of the binary classification problem, e.g., the difference of base rates across groups, the global imbalance of the target variable, etc. We analyze how this imbalance affects the utility-fairness trade-off. As we shall see shortly, we will empirically demonstrate that, in many cases, especially the ones where the dataset is imbalanced in terms of the target variable, this will inevitably compromise the target utility. While for balanced datasets, this trend is less obvious: the proposed algorithm achieves a better fairness-utility trade-off when compared with existing fair representation learning methods and we can hope to achieve fairness without sacrificing too much on utility. To this end, we perform experiments on two popular real-world datasets in the literature of algorithmic fairness, including an income-prediction dataset, known as the Adult dataset, from the UCI Machine Learning Repository (Dua & Graff, 2017), and the Propublica COMPAS dataset (Dieterich et al., 2016). The basic statistics of these two datasets are listed in Table 1.
Adult Each instance in the Adult dataset describes an adult, e.g., gender, education level, age, etc, from the 1994 US Census. In this dataset we use gender as the sensitive attribute, and the processed data contains 114 attributes. The target variable (income) is also binary: 1 if ≥ 50K/year otherwise 0. For the sensitive attribute A, A = 0 means male otherwise female. From Table 1 we can see that the base rates are quite different (0.310 vs. 0.113) across groups in the Adult dataset. The dataset is also imbalanced in the sense that only around 24.6% of the instances have target label 1. Furthermore, the group ratio is also imbalanced: roughly 67.3% of the data are male.
COMPAS The goal of the COMPAS dataset is binary classification on whether a criminal defendant will recidivate within two years or not. Each instance contains 12 attributes, including age, race, gender, number of prior crimes, etc. For this dataset, we use the race (white A = 0 vs. black A = 1) as our sensitive attribute and target variable is 1 iff recidivism. The base rates are different across groups, but the COMPAS dataset is balanced in both the target variable and the sensitive attribute.
To validate the effect of ensuring equalized odds and accuracy parity, for each dataset, we perform controlled experiments by fixing the baseline network architecture so that it is shared among all the fair representation learning methods. We term the proposed method CFAIR (for conditional fair representations); it minimizes conditional errors in both the target-variable loss and the adversary loss. To demonstrate the importance of using BER in the loss function of the target variable, we compare with a variant of CFAIR that only uses BER in the loss of the adversaries, denoted CFAIR-EO. To see the relative effect of using the cross-entropy loss vs. the L1 loss, we also show the results of LAFTR (Madras et al., 2018), a state-of-the-art method for learning fair representations. Note that LAFTR is closely related to CFAIR-EO but slightly different: LAFTR uses a global cross-entropy loss for the target variable, but a conditional L1 loss for the adversary. Also, there is only one adversary in LAFTR, while there are two adversaries, one for D^0 and one for D^1, in both CFAIR and CFAIR-EO. Lastly, we also present baseline results of FAIR (Edwards & Storkey, 2015), which aims for demographic parity representations, and NODEBIAS, the baseline network without any fairness constraint. For all the fair representation learning methods, we use the gradient reversal layer (Ganin et al., 2016) to implement the gradient descent ascent (GDA) algorithm to optimize the minimax problem. All the experimental details, including network architectures, learning rates, batch sizes, etc., are provided in the appendix.
RESULTS AND ANALYSIS
In Figure 1 and Figure 2 we show the error gap ∆ Err , equalized odds gap ∆ EO , demographic parity gap ∆ DP and the joint error across groups Err 0 + Err 1 of the aforementioned fair representation learning algorithms on both the Adult and the COMPAS datasets. For each algorithm and dataset, we also gradually increase the value of the trade-off parameter λ and compute the corresponding metrics. Adult Due to the imbalance of A in the Adult dataset, in the first plot of Figure 1 we can see that all the algorithms except CFAIR have a large error gap of around 0.12. As a comparison, observe that the error gap of CFAIR when λ = 1e3 almost reduces to 0, confirming the effectiveness of our algorithm in ensuring accuracy parity. From the second plot, we can verify that all the three methods, including CFAIR, CFAIR-EO and LAFTR successfully ensure a small equalized odds gap, and they also decrease demographic parity gaps (the third plot). FAIR is the most effective one in mitigating ∆ DP since its objective function directly optimizes for that goal. Note that from the second plot we can also confirm that CFAIR-EO is more effective than LAFTR in reducing ∆ EO . The reason is two-fold. First, CFAIR-EO uses two distinct adversaries and hence it effectively competes with stronger adversaries than LAFTR. Second, CFAIR-EO uses the cross-entropy loss instead of the L 1 loss for the adversary, and it is well-known that the maximum-likelihood estimator (equivalent to using the cross-entropy loss) is asymptotically efficient and optimal. On the other hand, since the Adult dataset is imbalanced (in terms of Y ), using BER in the loss function of the target variable can thus to a large extent hurt the utility, and this is also confirmed from the last plot, where we show the joint error.
COMPAS The first three plots of Figure 2 once again verify that CFAIR successfully leads to reduced error gap, equalized odds gap and also demographic parity gap. These experimental results are consistent with our theoretical findings where we show that if the representations satisfy equalized odds, then its ∆ DP cannot exceed that of the optimal classifier, as shown by the horizontal dashed line in the third plot. In the fourth plot of Figure 2, we can see that as we increase λ, all the fair representation learning algorithms sacrifice utility. However, in contrast to Figure 1, here the proposed algorithm CFAIR has the smallest trade-off: this shows that CFAIR is particularly suited in the cases when the dataset is balanced and we would like to simultaneously ensure accuracy parity and equalized odds. As a comparison, while CFAIR-EO is still effective, it is slightly worse than CFAIR in terms of both ensuring parity and achieving small joint error.
RELATED WORK
Algorithmic Fairness In the literature of algorithmic fairness, two key notions of fairness have been extensively proposed and explored, i.e., group fairness, including various variants defined in Section 2, and individual fairness, which means that similar individuals should be treated similarly. Due to the complexity in defining a distance metric to measure the similarity between individuals (Dwork et al., 2012), most recent research focuses on designing efficient algorithms to achieve group fairness (Zemel et al., 2013;Zafar et al., 2015;Hardt et al., 2016;Zafar et al., 2017;Madras et al., 2018;. In particular, Hardt et al. (2016) proposed a post-processing technique to achieve equalized odds by taking as input the model's prediction and the sensitive attribute. However, the post-processing technique requires access to the sensitive attribute during the inference phase, which is often not available in many real-world scenarios. Another line of work uses causal inference to define notions of causal fairness and to formulate procedures for achieving these notions (Zhang et al., 2018;Wang et al., 2019;Kilbertus et al., 2017;Kusner et al., 2017;Nabi & Shpitser, 2018). These approaches require making untestable assumptions. Of particular note is the observation in Coston et al. (2019) that fairness-adjustment procedures based on Y in settings with treatment effects may lead to adverse outcomes. To apply our method in such settings, we would need to match conditional counterfactual distributions, which could be a direction of future research.
Theoretical Results on Fairness Theoretical work studying the relationship between different kinds of fairness notions are abundant. Motivated by the controversy of the potential discriminatory bias in recidivism prediction instruments, Chouldechova (2017) showed an intrinsic incompatibility between equalized odds and predictive rate parity. In the seminal work of Kleinberg et al. (2016), the authors demonstrated that when the base rates differ between different groups, then a non-perfect classifier cannot simultaneously be statistically calibrated and satisfy equalized odds. In the context of cost-sensitive learning, Menon & Williamson (2018) show that if the optimal decision function is dissimilar to a fair decision, then the fairness constraint will not significantly harm the target utility. The idea of reducing fair classification to cost-sensitive learning is not new. ? explored the connection between fair classification and a sequence of cost-sensitive learning problems where each stage corresponds to solving a linear minimax saddle point problem. In a recent work (Zhao & Gordon, 2019), the authors proved a lower bound on the joint error across different groups when a classifier satisfies demographic parity. They also showed that when the decision functions are close between groups, demographic parity also implies accuracy parity. The theoretical results in this work establish a relationship between accuracy parity and equalized odds: these two fairness notions are fundamentally related by the base rate gap and the balanced error rate. Furthermore, we also show that for any predictor that satisfies equalized odds, the balanced error rate also serves as an upper bound on the joint error across demographic subgroups.
Fair Representations Through the lens of representation learning, recent advances in building fair algorithmic decision making systems focus on using adversarial methods to learn fair representations that also preserve sufficient information for the prediction task (Edwards & Storkey, 2015;Beutel et al., 2017;Zhang et al., 2018;Madras et al., 2018;Adel et al., 2019). In a nutshell, the key idea is to frame the problem of learning fair representations as a two-player game, where the data owner is competing against an adversary. The goal of the adversary is to infer the group attribute as much as possible while the goal of the data owner is to remove information related to the group attribute and simultaneously to preserve utility-related information for accurate prediction. Apart from using adversarial classifiers to enforce group fairness, other distance metrics have also been used to learn fair representations, e.g., the maximum mean discrepancy (Louizos et al., 2015), and the Wasserstein-1 distance (Jiang et al., 2019). In contrast to these methods, in this paper we advocate for optimizing BER on both the target loss and adversary loss in order to simultaneously achieve accuracy parity and equalized odds. We also show that this leads to better utility-fairness trade-off for balanced datasets.
CONCLUSION AND FUTURE WORK
In this paper we propose a novel representation learning algorithm that aims to simultaneously ensure accuracy parity and equalized odds. The main idea underlying the design of our algorithm is to align the conditional distributions of representations (rather than the marginal distributions) and to use the balanced error rate (i.e., the conditional error) on both the target variable and the sensitive attribute. Theoretically, we prove how these two concepts together help to ensure accuracy parity and equalized odds without impacting demographic parity, and we also show how they can be used to give a guarantee on the joint error across different demographic subgroups. Empirically, we demonstrate in two real-world experiments that the proposed algorithm effectively leads to the desired notions of fairness, and that it also leads to a better utility-fairness trade-off on balanced datasets.
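For concreteness, the conditional-error idea can be sketched as a class-balanced surrogate loss. The snippet below is our illustration in PyTorch, not the implementation used in the experiments; it assumes a binary target and that both classes are present in each batch.

```python
# A minimal sketch of a balanced error rate (BER) surrogate: cross-entropy
# averaged per class, so each class contributes equally to the loss.
import torch
import torch.nn.functional as F

def balanced_bce(logits, labels):
    """labels: float tensor of 0/1; assumes both classes appear in the batch."""
    losses = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    loss_pos = losses[labels == 1].mean()  # error on the Y = 1 subgroup
    loss_neg = losses[labels == 0].mean()  # error on the Y = 0 subgroup
    return 0.5 * (loss_pos + loss_neg)     # each class weighted equally
```

The same construction, applied to each conditional adversary, yields a balanced adversarial loss.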
Calibration and Utility Our work takes a step towards better understanding the relationships between different notions of fairness and their corresponding trade-offs with utility. In some scenarios, e.g., the COMPAS tool, it is desired to have a decision making system that is also well calibrated. While it is well known that statistical calibration is not compatible with demographic parity or equalized odds, from a theoretical standpoint it is still not clear whether calibration will harm utility and, if so, what the fundamental limit on the utility of a calibrated tool is.
Fairness and Privacy Future work could also investigate how to make use of the close relationship between privacy and group fairness. At a colloquial level, fairness constraints require a predictor to be (to some extent) agnostic about the group membership attribute. The membership query attack in privacy asks the same question: is it possible to guarantee that even an optimal adversary cannot steal personal information through inference attacks? Prior work (Dwork et al., 2012) has described the connection between the notion of individual fairness and differential privacy. Hence it would be interesting to exploit techniques developed in the privacy literature to develop more efficient fairness-aware learning algorithms. On the other hand, results obtained in the algorithmic fairness literature could also potentially lead to better privacy-preserving machine learning algorithms (?).

A PROOFS

A.1 PROOF OF PROPOSITION 3.1

Proposition 3.1. For $g : X \to Z$, if $d_{TV}(g_\sharp D_0^y, g_\sharp D_1^y) = 0$, $\forall y \in \{0, 1\}$, then for any classifier $h : Z \to \{0, 1\}$, $h \circ g$ satisfies equalized odds.

Proof. To prove this proposition, we first show that for any pair of distributions $D, D'$ over $Z$ and any hypothesis $h : Z \to \{0, 1\}$, $d_{TV}(h_\sharp D, h_\sharp D') \le d_{TV}(D, D')$. Note that since $h$ is a hypothesis, there are only two events in the induced probability space, i.e., $h(\cdot) = 0$ or $h(\cdot) = 1$. Hence, by the definition of the induced (pushforward) distribution, we have:
$$d_{TV}(h_\sharp D, h_\sharp D') = \max_{E = h^{-1}(0) \text{ or } E = h^{-1}(1)} |D(E) - D'(E)| \le \sup_{E \text{ measurable subset of } Z} |D(E) - D'(E)| = d_{TV}(D, D').$$

Applying the above inequality twice, once for each $y \in \{0, 1\}$:
$$0 \le d_{TV}\big((h \circ g)_\sharp D_0^y, (h \circ g)_\sharp D_1^y\big) \le d_{TV}\big(g_\sharp D_0^y, g_\sharp D_1^y\big) = 0,$$
meaning $d_{TV}\big((h \circ g)_\sharp D_0^y, (h \circ g)_\sharp D_1^y\big) = 0$, which further implies that $h(g(X))$ is independent of $A$ given $Y = y$, since $h(g(X))$ is binary.

A.2 PROOF OF PROPOSITION 3.2

Proposition 3.2. For $g : X \to Z$, if $d_{TV}(g_\sharp D_0^y, g_\sharp D_1^y) = 0$, $\forall y \in \{0, 1\}$, then for any classifier $h : Z \to \{0, 1\}$, $d_{TV}\big((h \circ g)_\sharp D_0, (h \circ g)_\sharp D_1\big) \le \Delta_{BR}(D_0, D_1)$.
Proof. Let $\hat{Y} = (h \circ g)(X)$ and note that $\hat{Y}$ is binary, so
$$d_{TV}\big((h \circ g)_\sharp D_0, (h \circ g)_\sharp D_1\big) = \frac{1}{2}\Big(|D_0(\hat{Y} = 0) - D_1(\hat{Y} = 0)| + |D_0(\hat{Y} = 1) - D_1(\hat{Y} = 1)|\Big).$$
Now, by Proposition 3.1, if $d_{TV}(g_\sharp D_0^y, g_\sharp D_1^y) = 0$, $\forall y \in \{0, 1\}$, it follows that $d_{TV}\big((h \circ g)_\sharp D_0^y, (h \circ g)_\sharp D_1^y\big) = 0$, $\forall y \in \{0, 1\}$ as well. Applying Lemma 3.1, we know that for all $y \in \{0, 1\}$,
$$|D_0(\hat{Y} = y) - D_1(\hat{Y} = y)| \le |D_0(Y = 0) - D_1(Y = 0)| \cdot \Big(D^0(\hat{Y} = y) + D^1(\hat{Y} = y)\Big).$$
Hence,
$$\begin{aligned} d_{TV}\big((h \circ g)_\sharp D_0, (h \circ g)_\sharp D_1\big) &= \frac{1}{2}\Big(|D_0(\hat{Y} = 0) - D_1(\hat{Y} = 0)| + |D_0(\hat{Y} = 1) - D_1(\hat{Y} = 1)|\Big) \\ &\le \frac{|D_0(Y = 0) - D_1(Y = 0)|}{2}\Big(D^0(\hat{Y} = 0) + D^1(\hat{Y} = 0) + D^0(\hat{Y} = 1) + D^1(\hat{Y} = 1)\Big) \\ &= \frac{|D_0(Y = 0) - D_1(Y = 0)|}{2}\Big(D^0(\hat{Y} = 0) + D^0(\hat{Y} = 1) + D^1(\hat{Y} = 0) + D^1(\hat{Y} = 1)\Big) \\ &= \frac{|D_0(Y = 0) - D_1(Y = 0)|}{2} \cdot 2 = |D_0(Y = 0) - D_1(Y = 0)| = \Delta_{BR}(D_0, D_1). \end{aligned}$$
A.3 PROOF OF LEMMA 3.1
Recall that we define $\gamma_a := D_a(Y = 0)$, $\forall a \in \{0, 1\}$.

Lemma 3.1. Assume the conditions in Proposition 3.1 hold and let $\hat{Y} = h(g(X))$ be the classifier; then, for all $y \in \{0, 1\}$,
$$|D_0(\hat{Y} = y) - D_1(\hat{Y} = y)| \le |\gamma_0 - \gamma_1| \cdot \Big(D^0(\hat{Y} = y) + D^1(\hat{Y} = y)\Big).$$
Proof. To bound $|D_0(\hat{Y} = y) - D_1(\hat{Y} = y)|$ for $y \in \{0, 1\}$, by the law of total probability we have:
$$\begin{aligned} |D_0(\hat{Y} = y) - D_1(\hat{Y} = y)| &= \Big|\big(D_0^0(\hat{Y} = y)D_0(Y = 0) + D_0^1(\hat{Y} = y)D_0(Y = 1)\big) - \big(D_1^0(\hat{Y} = y)D_1(Y = 0) + D_1^1(\hat{Y} = y)D_1(Y = 1)\big)\Big| \\ &\le \big|\gamma_0 D_0^0(\hat{Y} = y) - \gamma_1 D_1^0(\hat{Y} = y)\big| + \big|(1 - \gamma_0)D_0^1(\hat{Y} = y) - (1 - \gamma_1)D_1^1(\hat{Y} = y)\big|, \end{aligned}$$
where the inequality is due to the triangle inequality. Now, by Proposition 3.1, $\hat{Y}$ satisfies equalized odds, so $D_0^0(\hat{Y} = y) = D_1^0(\hat{Y} = y) = D^0(\hat{Y} = y)$ and $D_0^1(\hat{Y} = y) = D_1^1(\hat{Y} = y) = D^1(\hat{Y} = y)$, leading to:
$$|\gamma_0 - \gamma_1| \cdot D^0(\hat{Y} = y) + |(1 - \gamma_0) - (1 - \gamma_1)| \cdot D^1(\hat{Y} = y) = |\gamma_0 - \gamma_1| \cdot \Big(D^0(\hat{Y} = y) + D^1(\hat{Y} = y)\Big),$$
which completes the proof.
A.4 PROOF OF THEOREM 3.1
Theorem 3.1. Assume the conditions in Proposition 3.1 hold and let $\hat{Y} = h(g(X))$ be the classifier; then $\Delta_{DP}(\hat{Y}) \le \Delta_{BR}(D_0, D_1) = \Delta_{DP}(Y)$.

Proof. To bound $\Delta_{DP}(\hat{Y})$, note that $|D_0(\hat{Y} = 0) - D_1(\hat{Y} = 0)| = |D_0(\hat{Y} = 1) - D_1(\hat{Y} = 1)|$, so we can rewrite the DP gap as follows:
$$\Delta_{DP}(\hat{Y}) = \frac{1}{2}\Big(|D_0(\hat{Y} = 0) - D_1(\hat{Y} = 0)| + |D_0(\hat{Y} = 1) - D_1(\hat{Y} = 1)|\Big).$$
Now, applying Lemma 3.1 twice, for $y = 0$ and $y = 1$, we have:
$$\begin{aligned} |D_0(\hat{Y} = 0) - D_1(\hat{Y} = 0)| &\le |\gamma_0 - \gamma_1| \cdot \Big(D^0(\hat{Y} = 0) + D^1(\hat{Y} = 0)\Big), \\ |D_0(\hat{Y} = 1) - D_1(\hat{Y} = 1)| &\le |\gamma_0 - \gamma_1| \cdot \Big(D^0(\hat{Y} = 1) + D^1(\hat{Y} = 1)\Big). \end{aligned}$$
Summing the two inequalities yields
$$\begin{aligned} |D_0(\hat{Y} = 0) - D_1(\hat{Y} = 0)| + |D_0(\hat{Y} = 1) - D_1(\hat{Y} = 1)| &\le |\gamma_0 - \gamma_1|\Big(D^0(\hat{Y} = 0) + D^1(\hat{Y} = 0) + D^0(\hat{Y} = 1) + D^1(\hat{Y} = 1)\Big) \\ &= |\gamma_0 - \gamma_1|\Big(D^0(\hat{Y} = 0) + D^0(\hat{Y} = 1) + D^1(\hat{Y} = 0) + D^1(\hat{Y} = 1)\Big) \\ &= 2|\gamma_0 - \gamma_1|. \end{aligned}$$
Combining all the inequalities above, we know that
$$\Delta_{DP}(\hat{Y}) = \frac{1}{2}\Big(|D_0(\hat{Y} = 0) - D_1(\hat{Y} = 0)| + |D_0(\hat{Y} = 1) - D_1(\hat{Y} = 1)|\Big) \le |\gamma_0 - \gamma_1| = |D_0(Y = 0) - D_1(Y = 0)| = |D_0(Y = 1) - D_1(Y = 1)| = \Delta_{BR}(D_0, D_1) = \Delta_{DP}(Y),$$
completing the proof.
A.5 PROOF OF THEOREM 3.2
Theorem 3.2. Assume the conditions in Proposition 3.1 hold and let $\hat{Y} = h(g(X))$ be the classifier; then $\mathrm{Err}_{D_0}(\hat{Y}) + \mathrm{Err}_{D_1}(\hat{Y}) \le 2\,\mathrm{BER}_D(\hat{Y} \,\|\, Y)$.
Proof. First, by the law of total probability, we have:
$$\begin{aligned} \mathrm{Err}_{D_0}(\hat{Y}) + \mathrm{Err}_{D_1}(\hat{Y}) &= D_0(Y \ne \hat{Y}) + D_1(Y \ne \hat{Y}) \\ &= D_0^1(\hat{Y} = 0)D_0(Y = 1) + D_0^0(\hat{Y} = 1)D_0(Y = 0) + D_1^1(\hat{Y} = 0)D_1(Y = 1) + D_1^0(\hat{Y} = 1)D_1(Y = 0). \end{aligned}$$
Again, by Proposition 3.1, the classifier $\hat{Y} = (h \circ g)(X)$ satisfies equalized odds, so $D_0^1(\hat{Y} = 0) = D_1^1(\hat{Y} = 0) = D^1(\hat{Y} = 0)$ and $D_0^0(\hat{Y} = 1) = D_1^0(\hat{Y} = 1) = D^0(\hat{Y} = 1)$, giving
$$\begin{aligned} &\phantom{\le}\ D^1(\hat{Y} = 0)\big(D_0(Y = 1) + D_1(Y = 1)\big) + D^0(\hat{Y} = 1)\big(D_0(Y = 0) + D_1(Y = 0)\big) \\ &\le 2 D^1(\hat{Y} = 0) + 2 D^0(\hat{Y} = 1) = 2\,\mathrm{BER}_D(\hat{Y} \,\|\, Y), \end{aligned}$$
which completes the proof.
A.6 PROOF OF THEOREM 3.3
Theorem 3.3. For any classifier $\hat{Y}$, $\Delta_{\mathrm{Err}}(\hat{Y}) \le \Delta_{BR}(D_0, D_1) \cdot \mathrm{BER}_D(\hat{Y} \,\|\, Y) + 2\Delta_{EO}(\hat{Y})$.

Before giving the proof of Theorem 3.3, we first prove two lemmas that will be used in it.
Lemma A.1. Define $\gamma_a := D_a(Y = 0)$, $\forall a \in \{0, 1\}$; then
$$|\gamma_0 D_0^0(\hat{Y} = 1) - \gamma_1 D_1^0(\hat{Y} = 1)| \le |\gamma_0 - \gamma_1| \cdot D^0(\hat{Y} = 1) + \gamma_0 D^0(A = 1)\Delta_{EO}(\hat{Y}) + \gamma_1 D^0(A = 0)\Delta_{EO}(\hat{Y}).$$
Proof. In order to prove the upper bound in the lemma, it suffices to upper bound the term $\big|\gamma_0\big(D_0^0(\hat{Y} = 1) - D^0(\hat{Y} = 1)\big) - \gamma_1\big(D_1^0(\hat{Y} = 1) - D^0(\hat{Y} = 1)\big)\big|$, since
$$|\gamma_0 D_0^0(\hat{Y} = 1) - \gamma_1 D_1^0(\hat{Y} = 1)| \le \big|(\gamma_0 - \gamma_1)D^0(\hat{Y} = 1)\big| + \big|\gamma_0\big(D_0^0(\hat{Y} = 1) - D^0(\hat{Y} = 1)\big) - \gamma_1\big(D_1^0(\hat{Y} = 1) - D^0(\hat{Y} = 1)\big)\big|,$$
and an application of Bayes' formula then finishes the proof. To this end, first simplify $D_0^0(\hat{Y} = 1) - D^0(\hat{Y} = 1)$. By Bayes' formula,
$$\begin{aligned} D_0^0(\hat{Y} = 1) - D^0(\hat{Y} = 1) &= D_0^0(\hat{Y} = 1) - \big(D_0^0(\hat{Y} = 1)D^0(A = 0) + D_1^0(\hat{Y} = 1)D^0(A = 1)\big) \\ &= D^0(A = 1)\big(D_0^0(\hat{Y} = 1) - D_1^0(\hat{Y} = 1)\big). \end{aligned}$$
Similarly, for the second term,
$$D_1^0(\hat{Y} = 1) - D^0(\hat{Y} = 1) = D^0(A = 0)\big(D_1^0(\hat{Y} = 1) - D_0^0(\hat{Y} = 1)\big).$$
Plugging these two identities into the above, we can continue the analysis with
$$\begin{aligned} &\big|\gamma_0\big(D_0^0(\hat{Y} = 1) - D^0(\hat{Y} = 1)\big) - \gamma_1\big(D_1^0(\hat{Y} = 1) - D^0(\hat{Y} = 1)\big)\big| \\ &= \big|\gamma_0 D^0(A = 1)\big(D_0^0(\hat{Y} = 1) - D_1^0(\hat{Y} = 1)\big) - \gamma_1 D^0(A = 0)\big(D_1^0(\hat{Y} = 1) - D_0^0(\hat{Y} = 1)\big)\big| \\ &\le \gamma_0 D^0(A = 1)\big|D_0^0(\hat{Y} = 1) - D_1^0(\hat{Y} = 1)\big| + \gamma_1 D^0(A = 0)\big|D_1^0(\hat{Y} = 1) - D_0^0(\hat{Y} = 1)\big| \\ &\le \gamma_0 D^0(A = 1)\Delta_{EO}(\hat{Y}) + \gamma_1 D^0(A = 0)\Delta_{EO}(\hat{Y}). \end{aligned}$$
The first inequality holds by the triangle inequality and the second by the definition of the equalized odds gap.
Lemma A.2. Define $\gamma_a := D_a(Y = 0)$, $\forall a \in \{0, 1\}$; then
$$|(1 - \gamma_0)D_0^1(\hat{Y} = 0) - (1 - \gamma_1)D_1^1(\hat{Y} = 0)| \le |\gamma_0 - \gamma_1| \cdot D^1(\hat{Y} = 0) + (1 - \gamma_0)D^1(A = 1)\Delta_{EO}(\hat{Y}) + (1 - \gamma_1)D^1(A = 0)\Delta_{EO}(\hat{Y}).$$
Proof. The proof of this lemma is symmetric to the previous one, so we omit it here. Now we are ready to prove Theorem 3.3:
Proof of Theorem 3.3. First, by the law of total probability, the following identity holds for $a \in \{0, 1\}$:
$$D_a(\hat{Y} \ne Y) = D_a(Y = 1, \hat{Y} = 0) + D_a(Y = 0, \hat{Y} = 1) = (1 - \gamma_a)D_a^1(\hat{Y} = 0) + \gamma_a D_a^0(\hat{Y} = 1).$$
Using this identity to bound the error gap, we have:
$$\begin{aligned} |D_0(Y \ne \hat{Y}) - D_1(Y \ne \hat{Y})| &= \Big|\big((1 - \gamma_0)D_0^1(\hat{Y} = 0) + \gamma_0 D_0^0(\hat{Y} = 1)\big) - \big((1 - \gamma_1)D_1^1(\hat{Y} = 0) + \gamma_1 D_1^0(\hat{Y} = 1)\big)\Big| \\ &\le \big|\gamma_0 D_0^0(\hat{Y} = 1) - \gamma_1 D_1^0(\hat{Y} = 1)\big| + \big|(1 - \gamma_0)D_0^1(\hat{Y} = 0) - (1 - \gamma_1)D_1^1(\hat{Y} = 0)\big|. \end{aligned}$$
Invoking Lemma A.1 and Lemma A.2 to bound the above two terms:
$$\begin{aligned} |D_0(Y \ne \hat{Y}) - D_1(Y \ne \hat{Y})| &\le \gamma_0 D^0(A = 1)\Delta_{EO}(\hat{Y}) + \gamma_1 D^0(A = 0)\Delta_{EO}(\hat{Y}) \\ &\quad + (1 - \gamma_0)D^1(A = 1)\Delta_{EO}(\hat{Y}) + (1 - \gamma_1)D^1(A = 0)\Delta_{EO}(\hat{Y}) \\ &\quad + |\gamma_0 - \gamma_1|\,D^0(\hat{Y} = 1) + |\gamma_0 - \gamma_1|\,D^1(\hat{Y} = 0). \end{aligned}$$
Since both $\gamma_0, \gamma_1 \in [0, 1]$, we have:
$$\begin{aligned} &\le D^0(A = 1)\Delta_{EO}(\hat{Y}) + D^0(A = 0)\Delta_{EO}(\hat{Y}) + D^1(A = 1)\Delta_{EO}(\hat{Y}) + D^1(A = 0)\Delta_{EO}(\hat{Y}) + |\gamma_0 - \gamma_1|\Big(D^0(\hat{Y} = 1) + D^1(\hat{Y} = 0)\Big) \\ &= 2\Delta_{EO}(\hat{Y}) + \Delta_{BR}(D_0, D_1) \cdot \mathrm{BER}_D(\hat{Y} \,\|\, Y), \end{aligned}$$
which completes the proof.
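As a sanity check on Theorem 3.3, the bound can be verified numerically. The following sketch is our illustration rather than part of the paper; it draws random joint distributions over $(A, Y, \hat{Y})$ and checks the inequality:

```python
# Monte-Carlo check of Theorem 3.3 on random joint distributions over
# (A, Y, Yh); all quantities below follow the definitions in the text.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    p = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)   # p[a, y, t] = P(A=a, Y=y, Yh=t)
    pa = p.sum(axis=(1, 2))                          # P(A = a)
    gamma = p.sum(axis=2)[:, 0] / pa                 # gamma_a = P(Y = 0 | A = a)
    err = np.array([(p[a, 0, 1] + p[a, 1, 0]) / pa[a] for a in (0, 1)])
    d_err = abs(err[0] - err[1])                     # accuracy parity gap
    d_br = abs(gamma[0] - gamma[1])                  # base rate gap
    py = p.sum(axis=(0, 2))                          # P(Y = y)
    ber = p[:, 0, 1].sum() / py[0] + p[:, 1, 0].sum() / py[1]
    pay = p.sum(axis=2)                              # P(A = a, Y = y)
    d_eo = max(abs(p[0, y, 1] / pay[0, y] - p[1, y, 1] / pay[1, y]) for y in (0, 1))
    assert d_err <= d_br * ber + 2 * d_eo + 1e-9     # Theorem 3.3
```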
We also provide the proof of Corollary 3.1:
Corollary 3.1. For any joint distribution $D$ and classifier $\hat{Y}$, if $\hat{Y}$ satisfies equalized odds, then $\max\{\mathrm{Err}_{D_0}(\hat{Y}), \mathrm{Err}_{D_1}(\hat{Y})\} \le \Delta_{BR}(D_0, D_1) \cdot \mathrm{BER}_D(\hat{Y} \,\|\, Y)/2 + \mathrm{BER}_D(\hat{Y} \,\|\, Y)$.
Proof. We first invoke Theorem 3.3: if $\hat{Y}$ satisfies equalized odds, then $\Delta_{EO}(\hat{Y}) = 0$, which implies
$$\Delta_{\mathrm{Err}}(\hat{Y}) = \big|\mathrm{Err}_{D_0}(\hat{Y}) - \mathrm{Err}_{D_1}(\hat{Y})\big| \le \Delta_{BR}(D_0, D_1) \cdot \mathrm{BER}_D(\hat{Y} \,\|\, Y).$$
On the other hand, by Theorem 3.2, we know that
$$\mathrm{Err}_{D_0}(\hat{Y}) + \mathrm{Err}_{D_1}(\hat{Y}) \le 2\,\mathrm{BER}_D(\hat{Y} \,\|\, Y).$$
Combining the above two inequalities and recalling that $\max\{a, b\} = (|a + b| + |a - b|)/2$, $\forall a, b \in \mathbb{R}$, yields
$$\max\{\mathrm{Err}_{D_0}(\hat{Y}), \mathrm{Err}_{D_1}(\hat{Y})\} = \frac{\big|\mathrm{Err}_{D_0}(\hat{Y}) + \mathrm{Err}_{D_1}(\hat{Y})\big| + \big|\mathrm{Err}_{D_0}(\hat{Y}) - \mathrm{Err}_{D_1}(\hat{Y})\big|}{2} \le \frac{\Delta_{BR}(D_0, D_1) \cdot \mathrm{BER}_D(\hat{Y} \,\|\, Y) + 2\,\mathrm{BER}_D(\hat{Y} \,\|\, Y)}{2} = \Delta_{BR}(D_0, D_1) \cdot \mathrm{BER}_D(\hat{Y} \,\|\, Y)/2 + \mathrm{BER}_D(\hat{Y} \,\|\, Y),$$
completing the proof.
B EXPERIMENTAL DETAILS

B.1 THE ADULT EXPERIMENT
For the baseline network NODEBIAS, we implement a three-layer neural network with ReLU as the hidden activation function and logistic regression as the target output function. The input layer contains 114 units, and the hidden layer contains 60 hidden units. The output layer contains a single unit, whose output is interpreted as the probability $D(\hat{Y} = 1 \mid X = x)$.
For the adversary in FAIR and LAFTR, again, we use a three-layer feed-forward network. Specifically, the adversary takes as input the 60-unit hidden representation of the baseline network. The hidden layer of the adversary network contains 50 units, with ReLU activation. Finally, the output of the adversary also contains one unit, representing the adversary's inference probability $D(\hat{A} = 1 \mid Z = z)$. The network structure of the adversaries in both CFAIR and CFAIR-EO is exactly the same as the one used in FAIR and LAFTR, except that there are two adversaries, one for $D_0(\hat{A} = 1 \mid Z = z)$ and one for $D_1(\hat{A} = 1 \mid Z = z)$.
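The architecture described above can be summarized with the following PyTorch-style sketch; layer sizes follow the text, while the class name and interface are ours.

```python
# A sketch of the Adult networks: encoder + classifier, plus one conditional
# adversary per target class as in CFAIR / CFAIR-EO. Names are illustrative.
import torch
import torch.nn as nn

class CFairNet(nn.Module):
    def __init__(self, in_dim=114, hid_dim=60, adv_dim=50):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.classifier = nn.Linear(hid_dim, 1)  # logit of D(Yh = 1 | x)
        # one adversary per target class y in {0, 1}
        self.adversaries = nn.ModuleList([
            nn.Sequential(nn.Linear(hid_dim, adv_dim), nn.ReLU(),
                          nn.Linear(adv_dim, 1))
            for _ in range(2)])

    def forward(self, x, y):
        z = self.encoder(x)
        y_logit = self.classifier(z).squeeze(-1)
        # each conditional adversary only sees representations with Y = c
        a_logits = [self.adversaries[c](z[y == c]).squeeze(-1) for c in (0, 1)]
        return y_logit, a_logits
```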
The hyperparameters used in the experiment are listed in Table 2.

B.2 THE COMPAS EXPERIMENT

Again, for the baseline network NODEBIAS, we implement a three-layer neural network with ReLU as the hidden activation function and logistic regression as the target output function. The input layer contains 11 units, and the hidden layer contains 10 hidden units. The output layer contains a single unit, whose output is interpreted as the probability $D(\hat{Y} = 1 \mid X = x)$.
For the adversary in FAIR and LAFTR, again, we use a three-layer feed-forward network. Specifically, the adversary takes as input the 10-unit hidden representation of the baseline network. The hidden layer of the adversary network contains 10 units, with ReLU activation. Finally, the output of the adversary also contains one unit, representing the adversary's inference probability $D(\hat{A} = 1 \mid Z = z)$. The network structure of the adversaries in both CFAIR and CFAIR-EO is exactly the same as the one used in FAIR and LAFTR, except that there are two adversaries, one for $D_0(\hat{A} = 1 \mid Z = z)$ and one for $D_1(\hat{A} = 1 \mid Z = z)$.
The hyperparameters used in the experiment are listed in Table 3.
Figure 1: The error gap $\Delta_{\mathrm{Err}}$, equalized odds gap $\Delta_{EO}$, demographic parity gap $\Delta_{DP}$ and joint error $\mathrm{Err}_0 + \mathrm{Err}_1$ on the Adult dataset with $\lambda \in \{0.1, 1.0, 10.0, 100.0, 1000.0\}$.

Figure 2: The error gap $\Delta_{\mathrm{Err}}$, equalized odds gap $\Delta_{EO}$, demographic parity gap $\Delta_{DP}$ and joint error $\mathrm{Err}_0 + \mathrm{Err}_1$ on the COMPAS dataset with $\lambda \in \{0.1, 1.0, 10.0\}$.
Table 1: Statistics about the Adult and COMPAS datasets.

Dataset | Train / Test | $D_0(Y = 1)$ | $D_1(Y = 1)$ | $\Delta_{BR}(D_0, D_1)$ | $D(Y = 1)$ | $D(A = 1)$
Adult | 30,162 / 15,060 | 0.310 | 0.113 | 0.196 | 0.246 | 0.673
COMPAS | 4,320 / 1,852 | 0.400 | 0.529 | 0.129 | 0.467 | 0.514
ACKNOWLEDGMENTS
HZ and GG would like to acknowledge support from the DARPA XAI project, contract #FA87501720152, and an Nvidia GPU grant. HZ would also like to thank Richard Zemel, Toniann Pitassi, David Madras and Elliot Creager for helpful discussions during HZ's visit to the Vector Institute.
Table 2: Hyperparameters used in the Adult experiment.

Optimization Algorithm | AdaDelta
Learning Rate | 1.0
Batch Size | 512
Training Epochs ($\lambda \in \{0.1, 1.0, 10.0, 100.0, 1000.0\}$) | 100
Table 3: Hyperparameters used in the COMPAS experiment.

Optimization Algorithm | AdaDelta
Learning Rate | 1.0
Batch Size | 512
Training Epochs ($\lambda \in \{0.1, 1.0\}$) | 20
Training Epochs ($\lambda = 10.0$) | 15
Our main results could also be straightforwardly extended to the setting where A is a categorical variable.
Tameem Adel, Isabel Valera, Zoubin Ghahramani, and Adrian Weller. One-network adversarial fairness. In 33rd AAAI Conference on Artificial Intelligence, 2019.
Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning. NIPS Tutorial, 2017.
Shai Ben-David, Nadav Eiron, and Philip M Long. On the difficulty of approximately maximizing agreements. Journal of Computer and System Sciences, 66(3):496-514, 2003.
Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H Chi. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075, 2017.
Toon Calders and Sicco Verwer. Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery, 21(2):277-292, 2010.
Toon Calders, Faisal Kamiran, and Mykola Pechenizkiy. Building classifiers with independency constraints. In 2009 IEEE International Conference on Data Mining Workshops, pp. 13-18. IEEE, 2009.
Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2):153-163, 2017.
Alexandra Chouldechova and Aaron Roth. The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810, 2018.
Sam Corbett-Davies and Sharad Goel. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023, 2018.
Amanda Coston, Alexandra Chouldechova, and Edward H Kennedy. Counterfactual risk assessments, evaluation, and fairness. arXiv preprint arXiv:1909.00066, 2019.
Elliot Creager, David Madras, Joern-Henrik Jacobsen, Marissa Weis, Kevin Swersky, Toniann Pitassi, and Richard Zemel. Flexibly fair representation learning by disentanglement. In International Conference on Machine Learning, pp. 1436-1445, 2019.
William Dieterich, Christina Mendoza, and Tim Brennan. COMPAS risk scales: Demonstrating accuracy equity and predictive parity. Northpoint Inc, 2016.
Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214-226. ACM, 2012.
Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 259-268. ACM, 2015.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030, 2016.
Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pp. 3315-3323, 2016.
Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, and Silvia Chiappa. Wasserstein fair classification. arXiv preprint arXiv:1907.12059, 2019.
James E Johndrow, Kristian Lum, et al. An algorithm for removing sensitive information: Application to race-independent recidivism prediction. The Annals of Applied Statistics, 13(1):189-220, 2019.
Faisal Kamiran and Toon Calders. Classifying without discriminating. In 2009 2nd International Conference on Computer, Control and Communication, pp. 1-6. IEEE, 2009.
Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization approach. In 2011 IEEE 11th International Conference on Data Mining Workshops, pp. 643-650. IEEE, 2011.
Niki Kilbertus, Mateo Rojas Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf. Avoiding discrimination through causal reasoning. In Advances in Neural Information Processing Systems, pp. 656-666, 2017.
Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807, 2016.
Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In Advances in Neural Information Processing Systems, pp. 4066-4076, 2017.
Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. arXiv preprint arXiv:1511.00830, 2015.
David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In International Conference on Machine Learning, pp. 3381-3390, 2018.
David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Fairness through causal awareness: Learning causal latent-variable models for biased data. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 349-358. ACM, 2019.
Aditya Krishna Menon and Robert C Williamson. The cost of fairness in binary classification. In Conference on Fairness, Accountability and Transparency, pp. 107-118, 2018.
Razieh Nabi and Ilya Shpitser. Fair inference on outcomes. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Arvind Narayanan. Translation tutorial: 21 fairness definitions and their politics. In Proc. Conf. Fairness Accountability Transp., New York, USA, 2018.
Yixin Wang, Dhanya Sridhar, and David M Blei. Equal opportunity and affirmative action via counterfactual predictions. arXiv preprint arXiv:1905.10870, 2019.
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness constraints: Mechanisms for fair classification. arXiv preprint arXiv:1507.05259, 2015.
Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, pp. 1171-1180. International World Wide Web Conferences Steering Committee, 2017.
Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In International Conference on Machine Learning, pp. 325-333, 2013.
Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 335-340. ACM, 2018.
Han Zhao and Geoffrey J Gordon. Inherent tradeoffs in learning fair representations. In Advances in Neural Information Processing Systems, 2019. |
1,859,294 | Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks | Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often face challenges like slow inference, vanishing gradients and difficulty in capturing long term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model which extends existing RNN models by learning to skip state updates and shortens the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/. | [
9672033,
51559,
1463401,
5590763,
14025106,
11212020,
1428702
] | Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks
Víctor Campos
Barcelona Supercomputing Center
‡ Google Inc
Brendan Jou [email protected]
Xavier Giró-I-Nieto [email protected]
Universitat Politècnica de Catalunya
Γ Columbia University
Jordi Torres [email protected]
Barcelona Supercomputing Center
‡ Google Inc
Shih-Fu Chang [email protected]
Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks
Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges like slow inference, vanishing gradients and difficulty in capturing long term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model which extends existing RNN models by learning to skip state updates and shortens the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/.
Introduction
Recurrent Neural Networks (RNNs) have become the standard approach for practitioners when addressing machine learning tasks involving sequential data. Such success has been enabled by the appearance of larger datasets, more powerful computing resources and improved architectures and training algorithms. Gated units, such as the Long Short-Term Memory [24] (LSTM) and the Gated Recurrent Unit [11] (GRU), were designed to deal with the vanishing gradients problem commonly found in RNNs [8]. These architectures have been popularized thanks to their impressive results in a variety of tasks such as machine translation [5], language modeling [53] or speech recognition [19].
Some of the main limitations of RNNs are their challenging training and deployment when dealing with long sequences, due to their inherently sequential behaviour. These challenges include throughput degradation, slower convergence during training and memory leakage, even for gated architectures [38]. Sequence shortening techniques, which can be seen as a sort of conditional computation [7,6,15] in time, can alleviate these issues. The most common approaches, such as cropping discrete signals or reducing the sampling rate in continuous signals, are heuristics and can be suboptimal. In contrast, we propose a model that is able to learn which samples (i.e. elements in the input sequence) need to be used in order to solve the target task. Consider a video understanding task as an example: scenes with large motion may benefit from high frame rates, whereas only a few frames are needed to capture the semantics of a mostly static scene.
The main contribution of this work is a novel modification for existing RNN architectures that allows them to skip state updates, decreasing the number of sequential operations to be performed, without requiring any additional supervision signal. This model, called Skip RNN, adaptively determines whether the state needs to be updated or copied to the next time step, thereby allowing a "skip" in the computation graph. We show how the network can be encouraged to perform fewer state updates by adding a penalization term during training, allowing us to train models with different target computation budgets. The proposed modification is implemented on top of well known RNN architectures, namely LSTM and GRU, and the resulting models show promising results in a series of sequence modeling tasks. In particular, the proposed Skip RNN architecture is evaluated on five sequence learning problems: an adding task, sine wave frequency discrimination, digit classification, sentiment analysis in movie reviews and action classification in video. This paper is structured as follows: Section 2 provides an overview of the related work, Section 3 describes the proposed model, experimental evaluation of Skip RNN in a series of sequence modeling tasks is presented in Section 4, and Section 5 summarizes the main results and some potential extensions of this work. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/.
Related work
Conditional computation has been shown to allow gradual increases in model capacity without proportional increases in computational cost by exploiting certain computation paths for each input [7,33,2,35,41]. This idea has been extended in the temporal domain, either by learning how many times an input needs to be pondered before moving to the next one [18] or by building RNNs whose number of layers depends on the input data [12]. Some works have addressed time-dependent computation in RNNs by updating only a fraction of the hidden states based on the current hidden state and input [26], or following periodic patterns [29,38]. However, due to the inherently sequential nature of RNNs and the parallel computation capabilities of modern hardware, reducing the size of the matrices involved in the computations performed at each time step does not accelerate inference. The proposed Skip RNN model can be seen as a form of conditional computation in time, where the computation associated with the RNN updates may or may not be executed at every time step. This is related to the UPDATE and COPY operations in hierarchical multiscale RNNs [12], but applied to the whole stack of RNN layers at the same time. This difference is key to allowing our approach to skip input samples, effectively reducing sequential computation and shielding the hidden state over longer time lags. Learning whether to update or copy the hidden state through time steps can be seen as a learnable Zoneout mask [30] which is shared between all the units in the hidden state. Similarly, it can be interpreted as an input-dependent recurrent version of stochastic depth [25].
Selecting parts of the input signal is similar in spirit to the hard attention mechanisms that have been applied to image regions [37], where only some patches of the input image are attended in order to generate captions [49] or detect objects [3]. Our model can be understood to generate a hard temporal attention mask on the fly given the previously seen samples, deciding which time steps should be attended and operating on a subset of input samples. Subsampling input sequences has been explored for visual storylines generation [43], although jointly optimizing the RNN weights and the subsampling mechanism is computationally unfeasible and the Expectation Maximization algorithm is used instead. Similar research has been conducted for video analysis tasks, discovering minimally needed evidence for event recognition [9] and training agents that decide which frames need to be observed in order to localize actions in time [50,46]. Motivated by the advantages of training recurrent models on shorter subsequences, efforts have been conducted towards learning differentiable subsampling mechanisms [40], although the computational complexity of the proposed method precludes its application to long input sequences. In contrast, our proposed method can be trained with backpropagation and does not degrade the complexity of the baseline RNNs.
Accelerating inference in RNNs is difficult due to their inherently sequential nature, leading to the design of Quasi-Recurrent Neural Networks [10], which relax the temporal dependency between consecutive steps. With the goal of speeding up RNN inference, LSTM-Jump [51] augments an LSTM cell with a classification layer that decides how many steps to jump between RNN updates. Despite its promising results on text tasks, the model needs to be trained with REINFORCE [48], which requires the definition of a reward signal. Determining such a reward signal is not trivial and does not necessarily generalize across tasks, e.g. regression and classification tasks may require different reward signals. Moreover, the number of tokens read between jumps, the maximum jump distance and the number of jumps allowed need to be chosen ahead of time. These hyperparameters define a reduced set of subsequences that the model can sample, instead of allowing the network to learn any arbitrary sampling scheme. Unlike LSTM-Jump, our proposed approach is differentiable, thus not requiring any modifications to the loss function and simplifying the optimization process, and is not limited to a predefined set of sample selection patterns.
Model Description
An RNN takes an input sequence $x = (x_1, \ldots, x_T)$ and generates a state sequence $s = (s_1, \ldots, s_T)$ by iteratively applying a parametric state transition model $S$ from $t = 1$ to $T$:
$$s_t = S(s_{t-1}, x_t) \qquad (1)$$
We augment the network with a binary state update gate, $u_t \in \{0, 1\}$, selecting whether the state of the RNN will be updated or copied from the previous time step. At every time step $t$, the probability $\tilde{u}_{t+1} \in [0, 1]$ of performing a state update at $t + 1$ is emitted. The resulting architecture is depicted in Figure 1 and can be characterized as follows:
$$\begin{aligned} u_t &= f_{\text{binarize}}(\tilde{u}_t) && (2) \\ s_t &= u_t \cdot S(s_{t-1}, x_t) + (1 - u_t) \cdot s_{t-1} && (3) \\ \Delta\tilde{u}_t &= \sigma(W_p s_t + b_p) && (4) \\ \tilde{u}_{t+1} &= u_t \cdot \Delta\tilde{u}_t + (1 - u_t) \cdot \big(\tilde{u}_t + \min(\Delta\tilde{u}_t, 1 - \tilde{u}_t)\big) && (5) \end{aligned}$$
where $\sigma$ is the sigmoid function and $f_{\text{binarize}} : [0, 1] \to \{0, 1\}$ binarizes the input value. Should the network be composed of several layers, some columns of $W_p$ can be fixed to 0 so that $\Delta\tilde{u}_t$ depends only on the states of a subset of layers (see Section 4.5 for an example with two layers). We implement $f_{\text{binarize}}$ as a deterministic step function $u_t = \mathrm{round}(\tilde{u}_t)$, although stochastic sampling from a Bernoulli distribution, $u_t \sim \mathrm{Bernoulli}(\tilde{u}_t)$, would be possible as well.

The model formulation implements the observation that the likelihood of requesting a new input increases with the number of consecutively skipped samples. Whenever a state update is omitted, the pre-activation of the state update gate for the following time step, $\tilde{u}_{t+1}$, is incremented by $\Delta\tilde{u}_t$. On the other hand, if a state update is performed, the accumulated value is flushed and $\tilde{u}_{t+1} = \Delta\tilde{u}_t$.
The number of skipped time steps can be computed ahead of time. For the particular formulation used in this work, where $f_{\text{binarize}}$ is implemented by means of a rounding function, the number of skipped samples after performing a state update at time step $t$ is given by
$$N_{\text{skip}}(t) = \min\{n \in \mathbb{Z}^+ : n \cdot \Delta\tilde{u}_t \ge 0.5\} - 1. \qquad (6)$$
This enables more efficient implementations where no computation at all is performed whenever $u_t = 0$. These computational savings are possible because $\Delta\tilde{u}_t = \sigma(W_p s_t + b_p) = \sigma(W_p s_{t-1} + b_p) = \Delta\tilde{u}_{t-1}$ when $u_t = 0$, so there is no need to evaluate it again, as depicted in Figure 1d.
There are several advantages in reducing the number of RNN updates. From the computational standpoint, fewer updates translate into fewer sequential operations required to process an input signal, leading to faster inference and reduced energy consumption. Unlike some other models that aim to reduce the average number of operations per step [38,26], ours enables skipping steps completely. Replacing RNN updates with copy operations increases the memory of the network and its ability to model long term dependencies even for gated units, since the exponential memory decay observed in LSTM and GRU [38] is alleviated. During training, gradients are propagated through fewer updating time steps, providing faster convergence in some tasks involving long sequences. Moreover, the proposed model is orthogonal to recent advances in RNNs and could be used in conjunction with such techniques, e.g. normalization [13,4], regularization [53,30], variable computation [26,38] or even external memory [20,47].
Error gradients
The whole model is differentiable except for $f_{\text{binarize}}$, which outputs binary values. A common method for optimizing functions involving discrete variables is REINFORCE [48], although several estimators have been proposed for the particular case of neurons with binary outputs [7]. We select the straight-through estimator [23], which consists in approximating the step function by the identity when computing gradients during the backward pass:
$$\frac{\partial f_{\text{binarize}}(x)}{\partial x} = 1 \qquad (7)$$
This yields a biased estimator that has proven more efficient than other unbiased but high-variance estimators such as REINFORCE [7] and has been successfully applied in different works [14,12]. By using the straight-through estimator as the backward pass for f binarize , all the model parameters can be trained to minimize the target loss function with standard backpropagation and without defining any additional supervision or reward signal.
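In TensorFlow, a common way to realize this estimator is the stop-gradient trick shown below; this is a generic idiom rather than a quote of the released implementation.

```python
# Straight-through estimator for Eq. (7): round in the forward pass,
# identity gradient in the backward pass.
import tensorflow as tf

def binarize_through(u_tilde):
    rounded = tf.round(u_tilde)
    # forward value equals rounded; gradients flow through u_tilde unchanged
    return u_tilde + tf.stop_gradient(rounded - u_tilde)
```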
Limiting computation
The Skip RNN is able to learn when to update or copy the state without explicit information about which samples are useful to solve the task at hand. However, a different operating point on the trade-off between performance and number of processed samples may be required depending on the application, e.g. one may be willing to sacrifice a few accuracy points in order to run faster on machines with low computational power, or to reduce energy impact on portable devices. The proposed model can be encouraged to perform fewer state updates through additional loss terms, a common practice in neural networks with dynamically allocated computation [33,35,18,26]. In particular, we consider a cost per sample:
$$L_{\text{budget}} = \lambda \cdot \sum_{t=1}^{T} u_t, \qquad (8)$$
where $L_{\text{budget}}$ is the cost associated with a single sequence, $\lambda$ is the cost per sample and $T$ is the sequence length. This formulation bears a similarity to weight decay regularization, where the network is encouraged to slowly converge towards a solution where the norm of the weights is smaller. Similarly, in this case the network is encouraged to slowly converge towards a solution where fewer state updates are required.
Although this formulation is the one studied extensively in our experiments, different budget loss terms can be used depending on the application. For instance, a specific number of updates may be encouraged by applying an $L_1$ or $L_2$ loss between the target value and the number of updates per sequence, $\sum_{t=1}^{T} u_t$, as sketched below.
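As an illustration, both the per-sample cost of Equation (8) and an L2 variant targeting a fixed number of updates can be written as follows; the tensor layout is our assumption.

```python
# Budget losses over the binary update gates of a batch of sequences.
import tensorflow as tf

def budget_loss(update_gates, cost_per_sample):
    # update_gates: [batch, time] tensor of u_t in {0, 1}
    return cost_per_sample * tf.reduce_sum(update_gates, axis=1)  # Eq. (8)

def target_updates_loss(update_gates, target_num_updates):
    # L2 variant: encourage a fixed number of updates per sequence
    n_updates = tf.reduce_sum(update_gates, axis=1)
    return tf.square(n_updates - target_num_updates)
```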
Experiments
In the following section, we investigate the advantages of adding this state skipping to LSTMs and GRUs for a variety of tasks. In addition to the evaluation metric for each task, we also report the number of RNN state updates (i.e. the number of elements in the input sequence that are used by the model) as a measure of the computational load for each model. Since skipping an RNN update results in ignoring its corresponding input, we will refer to the number of updates and the number of used samples (i.e. elements in a sequence) interchangeably.
Training is performed with Adam [28], with learning rate $10^{-4}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$, on batches of 256. Gradient clipping [39] with a threshold of 1 is applied to all trainable variables. Bias $b_p$ in Equation 4 is initialized to 1, so that all samples are used at the beginning of training². The initial hidden state $s_0$ is learned during training, whereas $\tilde{u}_0$ is set to a constant value of 1 in order to force the first update at $t = 1$.
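For reference, the stated optimization setup could be configured in current TensorFlow as follows; the paper predates this API, so this is an adaptation rather than the original code.

```python
# Adam with the stated hyperparameters and per-variable gradient clipping at 1.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-8, clipnorm=1.0)
```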
Experiments are implemented with TensorFlow 3 and run on a single NVIDIA K80 GPU.
Adding Task
We revisit one of the original LSTM tasks [24], where the network is given a sequence of (value, marker) tuples. The desired output is the addition of only the two values that are marked with a 1, whereas those marked with a 0 need to be ignored. We follow the experimental setup by Neil et al. [38], where the first marker is randomly placed among the first 10% of samples (drawn with uniform probability) and the second one is placed among the last half of samples (drawn with uniform probability). This marker distribution yields sequences where at least 40% of the samples are distractors and provide no useful information at all. However, it is worth noting that in this task the risk of missing a marker is very large as compared to the benefits of working on shorter subsequences.
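A data generator matching this setup can be sketched as follows; this is our reading of the marker placement, as the exact sampling code is not given in the paper.

```python
# Generator for the adding task: values in U(-0.5, 0.5); first marker in the
# first 10% of steps, second marker in the last half, both uniformly placed.
import numpy as np

def adding_batch(batch_size, seq_len, rng):
    values = rng.uniform(-0.5, 0.5, size=(batch_size, seq_len))
    markers = np.zeros((batch_size, seq_len))
    first = rng.integers(0, seq_len // 10, size=batch_size)
    second = rng.integers(seq_len // 2, seq_len, size=batch_size)
    rows = np.arange(batch_size)
    markers[rows, first] = 1.0
    markers[rows, second] = 1.0
    targets = values[rows, first] + values[rows, second]
    inputs = np.stack([values, markers], axis=-1)  # (value, marker) tuples
    return inputs, targets
```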
We train RNN models with 110 units each on sequences of length 50, where the values are uniformly drawn from U(−0.5, 0.5). The final RNN state is fed to a fully connected layer that regresses the scalar output. The model is trained to minimize the Mean Squared Error (MSE) between the output and the ground truth. We consider that a model is able to solve the task when its MSE on a held-out set of examples is at least two orders of magnitude below the variance of the output distribution. This criterion is a stricter version of the one followed in [24].
While all models learn to solve the task, results in Table 1 show that Skip RNN models are able to do so with roughly half the updates of their corresponding counterparts. Interestingly, the Skip LSTM tends to skip more updates than the Skip GRU when no cost per sample is set, a behavior that may be related to the lack of an output gate in the latter. We hypothesize that there are two possible reasons why the output gate makes the LSTM more prone to skipping updates: (a) it introduces an additional source of memory decay, and (b) it allows masking out some units in the cell state that may specialize in deciding when to update or copy, making the final regression layer agnostic to this process.
We observed that the models using fewer updates never miss any marker, since the penalization in terms of MSE would be very large (see Figure 2 for examples). These models learn to skip most of the samples in the 40% of the sequence where there are no markers. Moreover, most updates are skipped once the second marker is found, since all the relevant information in the sequence has already been seen. This last pattern provides evidence that the proposed models effectively learn to decide whether to update or copy the hidden state based on the input sequence, as opposed to learning biases in the dataset only. As a downside, Skip RNN models show some difficulties skipping a large number of updates at once, probably due to the cumulative nature of $\tilde{u}_t$.

Table 1: Results for the adding task, displayed as mean ± std over four different runs. The task is considered to be solved if the MSE is at least two orders of magnitude below the variance of the output distribution.
Frequency Discrimination Task
In this experiment, the network is trained to classify between sinusoids whose period is in the range $T \sim \mathcal{U}(5, 6)$ milliseconds and those whose period is in the range $T \sim \{(1, 5) \cup (6, 100)\}$ milliseconds [38]. Every sine wave with period $T$ has a random phase shift drawn from $\mathcal{U}(0, T)$. At every time step, the input to the network is a single scalar representing the amplitude of the signal. Since sinusoids are continuous signals, this task allows studying whether Skip RNNs converge to the same solutions when their parameters are fixed but the sampling period is changed. We study two different sampling periods, $T_s \in \{0.5, 1\}$ milliseconds, for each set of hyperparameters.
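The following sketch generates examples under this protocol; the rejection step used to sample the negative-class period is our implementation choice, not a detail from the paper.

```python
# Sine wave generator for the frequency discrimination task.
import numpy as np

def sine_example(positive, t_sample_ms, duration_ms, rng):
    if positive:
        period = rng.uniform(5.0, 6.0)             # positive class: T in (5, 6) ms
    else:
        period = rng.uniform(1.0, 100.0)           # negative: (1, 5) U (6, 100) ms
        while 5.0 <= period <= 6.0:
            period = rng.uniform(1.0, 100.0)
    phase = rng.uniform(0.0, period)               # random phase shift in U(0, T)
    t = np.arange(0.0, duration_ms, t_sample_ms)   # sampled amplitude values
    return np.sin(2 * np.pi * (t + phase) / period)
```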
We train RNNs with 110 units each on input signals of 100 milliseconds. Batches are stratified, containing the same number of samples for each class, yielding a 50% chance accuracy. The last state of the RNN is fed into a 2-way classifier and trained with cross-entropy loss. We consider that a model is able to solve the task when it achieves an accuracy over 99% on a held-out set of examples. Table 2 summarizes results for this task. When no cost per sample is set ($\lambda = 0$), the number of updates differs under different sampling conditions. We attribute this behavior to the potentially large number of local minima in the cost function, since there are numerous subsampling patterns for which the task can be successfully solved and we are not explicitly encouraging the network to converge to a particular solution. On the other hand, when $\lambda > 0$, Skip RNN models with the same cost per sample use roughly the same number of input samples even when the sampling frequency is doubled. This is a desirable property, since solutions are robust to oversampled input signals.
MNIST Classification from a Sequence of Pixels
The MNIST handwritten digits classification benchmark [32] is traditionally addressed with Convolutional Neural Networks (CNNs) that can efficiently exploit spatial dependencies through weight sharing. By flattening the 28 × 28 images into 784-d vectors, however, it can be reformulated as a challenging task for RNNs where long term dependencies need to be leveraged [31]. We follow the standard data split and set aside 5,000 training samples for validation purposes. After processing all pixels with an RNN with 110 units, the last hidden state is fed into a linear classifier predicting the digit class. All models are trained for 600 epochs to minimize cross-entropy loss.

Table 3: Accuracy and used samples on the test set of MNIST after 600 epochs of training. Results are displayed as mean ± std over four different runs.

Table 3 summarizes classification results on the test set after 600 epochs of training. Skip RNNs are not only able to solve the task using fewer updates than their counterparts, but also show lower variation among runs and train faster (see Figure 3). We hypothesize that skipping updates makes the Skip RNNs work on shorter subsequences, simplifying the optimization process and allowing the networks to capture long term dependencies more easily. A similar behavior was observed for Phased LSTM, where increasing the sparsity of cell updates accelerates training for very long sequences [38].
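For reference, reformulating MNIST as pixel sequences with the stated data split takes only a few lines; this sketch uses the Keras loader for convenience, which is our choice rather than the paper's.

```python
# Flatten 28x28 digits into 784-step scalar sequences and hold out 5,000
# training samples for validation.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784, 1).astype("float32") / 255.0
x_val, y_val = x_train[-5000:], y_train[-5000:]   # held-out validation split
x_train, y_train = x_train[:-5000], y_train[:-5000]
```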
Sequences of pixels can be reshaped back into 2D images, allowing the samples used by the RNNs to be visualized as a sort of hard visual attention model [49]. Examples such as the ones depicted in Figure 4 show how the model learns to skip pixels that are not discriminative, such as the padding regions at the top and bottom of images. Similarly to the qualitative results for the adding task (Section 4.1), attended samples vary depending on the particular input being given to the network.
Sentiment Analysis on IMDB
The IMDB dataset [34] contains 25,000 training and 25,000 testing movie reviews annotated into two classes, positive and negative sentiment, with an approximate average length of 240 words per review. We set aside 15% of training data for validation purposes. Words are embedded into 300-d vector representations before being fed to an RNN with 128 units. The embedding matrix is initialized using pre-trained word2vec 4 embeddings [36] when available, or random vectors drawn from U(−0.25, 0.25) otherwise [27]. Dropout with rate 0.2 is applied between the last RNN state and the classification layer in order to reduce overfitting. We evaluate the models on sequences of length 200 and 400 by cropping longer sequences and padding shorter ones [51].
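The embedding initialization can be sketched as follows, where `w2v` stands for a loaded word2vec model (e.g. a gensim KeyedVectors object) and is an assumption of this sketch.

```python
# Initialize a 300-d embedding matrix from pre-trained word2vec vectors when
# available, falling back to U(-0.25, 0.25) for out-of-vocabulary words.
import numpy as np

def build_embeddings(vocab, w2v, dim=300, seed=0):
    rng = np.random.default_rng(seed)
    emb = rng.uniform(-0.25, 0.25, size=(len(vocab), dim)).astype("float32")
    for i, word in enumerate(vocab):
        if word in w2v:          # pre-trained vector exists for this word
            emb[i] = w2v[word]
    return emb
```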
Results on the test set are reported in Table 4. In a task where it is hard to predict which input tokens will be discriminative, the Skip RNN models are able to achieve similar accuracy rates to the baseline models while reducing the number of required updates. These results highlight the trade-off between accuracy and the available computational budget, since a larger cost per sample results in lower accuracies. However, allowing the network to select which samples to use instead of cropping sequences at a given length boosts performance, as observed for the Skip LSTM (length 400, $\lambda = 10^{-4}$), which achieves a higher accuracy than the baseline LSTM (length 200) while seeing roughly the same number of words per review. A similar behavior can be seen for the Skip RNN models with $\lambda = 10^{-3}$, where allowing them to select words from longer reviews boosts classification accuracy while using a comparable number of tokens per sequence.
Action classification on UCF-101
One of the most accurate and scalable pipelines for video analysis consists in extracting frame level features with a CNN and modeling their temporal evolution with an RNN [17,52]. Videos are commonly recorded at high sampling rates, rapidly generating long sequences with strong temporal redundancy that are challenging for RNNs. Moreover, processing frames with a CNN is computationally expensive and may become prohibitive for high framerates. These issues have been alleviated in previous works by using short clips [17] or by downsampling the original data in order to cover long temporal spans without increasing the sequence length excessively [52]. Instead of addressing the long sequence problem at the input data level, we train RNN models using long frame sequences without downsampling and let the network learn which frames need to be used.

UCF-101 [44] is a dataset containing 13,320 trimmed videos belonging to 101 different action categories. We use 10 seconds of video sampled at 25fps, cropping longer ones and padding shorter examples with empty frames. Activations in the Global Average Pooling layer from a ResNet-50 [22] CNN pretrained on the ImageNet dataset [16] are used as frame level features, which are fed into two stacked RNN layers with 512 units each. The weights in the CNN are not tuned during training to reduce overfitting. The hidden state in the last RNN layer is used to compute the update probability for the Skip RNN models.
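A Keras sketch of this feature extraction step is shown below; the preprocessing details are our assumption, as the paper does not specify them.

```python
# Global average pooled activations from an ImageNet-pretrained ResNet-50,
# with the CNN weights frozen, yielding one 2048-d feature per frame.
import tensorflow as tf

cnn = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")
cnn.trainable = False  # CNN weights are not tuned during training

def frame_features(frames):
    # frames: [num_frames, 224, 224, 3], RGB values in [0, 255]
    x = tf.keras.applications.resnet50.preprocess_input(frames)
    return cnn(x, training=False)
```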
We evaluate the different models on the first split of UCF-101 and report results in Table 5. Skip RNN models not only improve the classification accuracy with respect to the baseline, but also require very few updates to do so, possibly due to the low motion between consecutive frames resulting in frame level features with high temporal redundancy [42]. Moreover, Figure 5 shows how models performing fewer updates converge faster thanks to the gradients being preserved over longer spans when training with backpropagation through time.
Conclusion
We presented Skip RNNs as an extension to existing recurrent architectures that enables them to skip state updates, thereby reducing the number of sequential operations in the computation graph. Unlike other approaches, all parameters in Skip RNN are trained with backpropagation without requiring the introduction of task-dependent hyperparameters like a dropout rate. Experiments conducted with LSTMs and GRUs showed that Skip RNNs can match or in some cases even outperform the baseline models while relaxing their computational requirements. Skip RNNs provide faster and more stable training for long sequences and complex models, likely due to gradients being backpropagated through fewer time steps, resulting in a simpler optimization task. Moreover, the introduced computational savings are better suited for modern hardware than those of methods that reduce the amount of computation required at each time step [29,38,12]. The presented results motivate several new research directions toward designing efficient RNN architectures. Introducing stochasticity in neural network training has proven beneficial for generalization [45,30], and the deterministic rounding operation proposed in this work could be replaced with stochastic sampling. We showed that the addition of a loss term penalizing the number of updates is important for the performance of Skip RNN and allows the flexibility to specialize to tasks of varying budget requirements, e.g. the cost can be increased at each time step to encourage the network to emit a decision earlier [1], or the number of updates can be strictly bounded and enforced. Finally, understanding and analyzing the patterns followed by the model when deciding whether to update or copy the RNN state may provide insight for developing better and more efficient architectures.
Figure 1: Model architecture of the proposed Skip RNN. (a) Complete Skip RNN architecture, where the computation graph at time step t is conditioned on u_t. (b) Architecture when the state is updated, i.e. u_t = 1. (c) Architecture when the update step is skipped and the previous state is copied, i.e. u_t = 0. (d) In practice, redundant computation is avoided by propagating Δũ_t between time steps when u_t = 0.
Figure 2: Sample usage examples for the Skip GRU with λ = 10⁻⁵ on the adding task. Red dots indicate used samples, whereas blue ones are skipped.
Figure 3: Accuracy evolution during training on the validation set of MNIST. The Skip GRU exhibits lower variance and faster convergence than the baseline GRU. A similar behavior is observed for LSTM and Skip LSTM, but omitted for clarity. Shading shows the maximum and minimum over 4 runs, while dark lines indicate the mean.
Figure 4: Sample usage examples for the Skip LSTM with λ = 10⁻⁴ on the test set of MNIST. Red pixels are used, whereas blue ones are skipped.
Figure 5: Accuracy evolution during the first 300 training epochs on the validation set of UCF-101 (split 1). Skip LSTM models converge much faster than the baseline LSTM.
Table 2: Results for the frequency discrimination task, displayed as mean ± std over four different runs. The task is considered to be solved if the classification accuracy is over 99%. Models with the same cost per sample (λ > 0) converge to a similar number of used samples under different sampling conditions.

Table 3: Accuracy and used samples on the test set of MNIST, displayed as mean ± std over four different runs.

| Model | Accuracy | State updates |
|---|---|---|
| LSTM | 0.910 ± 0.045 | 784.00 ± 0.00 |
| Skip LSTM, λ = 10⁻⁴ | 0.973 ± 0.002 | 379.38 ± 33.09 |
| GRU | 0.968 ± 0.013 | 784.00 ± 0.00 |
| Skip GRU, λ = 10⁻⁴ | 0.976 ± 0.003 | 392.62 ± 26.48 |
Table 4: Accuracy and used samples on the test set of IMDB for different sequence lengths. Results are displayed as mean ± std over four different runs.

| Model | Accuracy (len 200) | State updates (len 200) | Accuracy (len 400) | State updates (len 400) |
|---|---|---|---|---|
| LSTM | 0.843 ± 0.003 | 200.00 ± 0.00 | 0.868 ± 0.004 | 400.00 ± 0.00 |
| Skip LSTM, λ = 0 | 0.844 ± 0.004 | 196.75 ± 5.63 | 0.866 ± 0.004 | 369.70 ± 19.35 |
| Skip LSTM, λ = 10⁻⁵ | 0.846 ± 0.004 | 197.15 ± 3.16 | 0.865 ± 0.001 | 380.62 ± 18.20 |
| Skip LSTM, λ = 10⁻⁴ | 0.837 ± 0.006 | 164.65 ± 8.67 | 0.862 ± 0.003 | 186.30 ± 25.72 |
| Skip LSTM, λ = 10⁻³ | 0.811 ± 0.007 | 73.85 ± 1.90 | 0.836 ± 0.007 | 84.22 ± 1.98 |
| GRU | 0.845 ± 0.006 | 200.00 ± 0.00 | 0.862 ± 0.003 | 400.00 ± 0.00 |
| Skip GRU, λ = 0 | 0.848 ± 0.002 | 200.00 ± 0.00 | 0.866 ± 0.002 | 399.02 ± 1.69 |
| Skip GRU, λ = 10⁻⁵ | 0.842 ± 0.005 | 199.25 ± 1.30 | 0.862 ± 0.008 | 398.00 ± 2.06 |
| Skip GRU, λ = 10⁻⁴ | 0.834 ± 0.006 | 180.97 ± 8.90 | 0.853 ± 0.011 | 314.30 ± 2.82 |
| Skip GRU, λ = 10⁻³ | 0.800 ± 0.007 | 106.15 ± 37.92 | 0.814 ± 0.005 | 99.12 ± 2.69 |
Table 5: Accuracy and used samples on the validation set of UCF-101 (split 1).

| Model | Accuracy | State updates |
|---|---|---|
| LSTM | 0.671 | 250.0 |
| Skip LSTM, λ = 0 | 0.749 | 138.9 |
| Skip LSTM, λ = 10⁻⁵ | 0.757 | 24.2 |
| Skip LSTM, λ = 10⁻⁴ | 0.790 | 7.6 |
| GRU | 0.791 | 250.0 |
| Skip GRU, λ = 0 | 0.796 | 124.2 |
| Skip GRU, λ = 10⁻⁵ | 0.792 | 29.7 |
| Skip GRU, λ = 10⁻⁴ | 0.793 | 23.7 |
In practice, forcing the network to use all samples at the beginning of training improves its robustness against random initializations of its weights and increases the reproducibility of the presented experiments. A similar behavior was observed in other augmented RNN architectures such as Neural Stacks [21].

3 https://www.tensorflow.org
https://code.google.com/archive/p/word2vec/
Acknowledgments

This work was partially supported by the Spanish Ministry of Economy and Competitivity under contract TIN2012-34557, by the BSC-CNS Severo Ochoa program (SEV-2011-00067), and contracts TEC2013-43935-R and TEC2016-75976-R. It has also been supported by grants 2014-SGR-1051 and 2014-SGR-1421 by the Government of Catalonia, and the European Regional Development Fund (ERDF). We would also like to thank the technical support team at the Barcelona Supercomputing Center.
References

[1] M. S. Aliakbarian, F. Saleh, M. Salzmann, B. Fernando, L. Petersson, and L. Andersson. Encouraging LSTMs to anticipate actions very early. arXiv preprint arXiv:1703.07023, 2017.
[2] A. Almahairi, N. Ballas, T. Cooijmans, Y. Zheng, H. Larochelle, and A. Courville. Dynamic capacity networks. In ICML, 2016.
[3] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.
[4] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[5] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
[6] Y. Bengio. Deep learning of representations: Looking forward. In SLSP, 2013.
[7] Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[8] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 1994.
[9] S. Bhattacharya, F. X. Yu, and S.-F. Chang. Minimally needed evidence for complex event recognition in unconstrained videos. In ICMR, 2014.
[10] J. Bradbury, S. Merity, C. Xiong, and R. Socher. Quasi-recurrent neural networks. In ICLR, 2017.
[11] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
[12] J. Chung, S. Ahn, and Y. Bengio. Hierarchical multiscale recurrent neural networks. In ICLR, 2017.
[13] T. Cooijmans, N. Ballas, C. Laurent, Ç. Gülçehre, and A. Courville. Recurrent batch normalization. In ICLR, 2017.
[14] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
[15] A. Davis and I. Arel. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013.
[16] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[17] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
[18] A. Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
[19] A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, 2013.
[20] A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
[21] E. Grefenstette, K. M. Hermann, M. Suleyman, and P. Blunsom. Learning to transduce with unbounded memory. In NIPS, 2015.
[22] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[23] G. Hinton. Neural networks for machine learning. Coursera video lectures, 2012.
[24] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[25] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016.
[26] Y. Jernite, E. Grave, A. Joulin, and T. Mikolov. Variable computation in recurrent neural networks. In ICLR, 2017.
[27] Y. Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
[28] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[29] J. Koutnik, K. Greff, F. Gomez, and J. Schmidhuber. A clockwork RNN. In ICML, 2014.
[30] D. Krueger, T. Maharaj, J. Kramár, M. Pezeshki, N. Ballas, N. R. Ke, A. Goyal, Y. Bengio, H. Larochelle, A. Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. In ICLR, 2017.
[31] Q. V. Le, N. Jaitly, and G. E. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
[32] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[33] L. Liu and J. Deng. Dynamic deep neural networks: Optimizing accuracy-efficiency trade-offs by selective execution. arXiv preprint arXiv:1701.00299, 2017.
[34] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In ACL, 2011.
[35] M. McGill and P. Perona. Deciding how to decide: Dynamic routing in artificial neural networks. In ICML, 2017.
[36] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
[37] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In NIPS, 2014.
[38] D. Neil, M. Pfeiffer, and S. Liu. Phased LSTM: Accelerating recurrent network training for long or event-based sequences. In NIPS, 2016.
[39] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013.
[40] C. Raffel and D. Lawson. Training a subsampling mechanism in expectation. In ICLR Workshop Track, 2017.
[41] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In ICLR, 2017.
[42] E. Shelhamer, K. Rakelly, J. Hoffman, and T. Darrell. Clockwork convnets for video semantic segmentation. arXiv preprint arXiv:1608.03609, 2016.
[43] G. A. Sigurdsson, X. Chen, and A. Gupta. Learning visual storylines with skipping recurrent neural networks. In ECCV, 2016.
[44] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
[45] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 2014.
[46] Y.-C. Su and K. Grauman. Leaving some stones unturned: Dynamic feature prioritization for activity detection in streaming video. In ECCV, 2016.
[47] J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
[48] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.
[49] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
[50] S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. In CVPR, 2016.
[51] A. W. Yu, H. Lee, and Q. V. Le. Learning to skim text. In ACL, 2017.
[52] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, 2015.
[53] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. In ICLR, 2015.
249,461,627 | SCALEFORMER: ITERATIVE MULTI-SCALE REFINING TRANSFORMERS FOR TIME SERIES FORECASTING | The performance of time series forecasting has recently been greatly improved by the introduction of transformers. In this paper, we propose a general multi-scale framework that can be applied to the state-of-the-art transformer-based time series forecasting models (FEDformer, Autoformer, etc.). By iteratively refining a forecasted time series at multiple scales with shared weights, introducing architecture adaptations, and a specially-designed normalization scheme, we are able to achieve significant performance improvements, from 5.5% to 38.5% across datasets and transformer architectures, with minimal additional computational overhead. Via detailed ablation studies, we demonstrate the effectiveness of each of our contributions across the architecture and methodology. Furthermore, our experiments on various public datasets demonstrate that the proposed improvements outperform their corresponding baseline counterparts. Our code is publicly available in https://github.com/BorealisAI/scaleformer.Published as a conference paper at ICLR 2023 forecasts which can lead to runaway error propagation. To mitigate this issue, we introduce cross-scale normalization at each step.Our approach re-orders model capacity to shift the focus on scale awareness, but does not fundamentally alter the attention-driven paradigm of transformers. As a result, it can be readily adapted to work jointly with multiple recent time series transformer architectures, acting broadly orthogonally to their own contributions. Leveraging this, we chose to operate with various transformer-based backbones (e.g. Fedformer, Autoformer, Informer, Reformer, Performer) to further probe the effect of our multi-scale method on a variety of experimental setups.Our contributions are as follows: (1) we introduce a novel iterative scale-refinement paradigm that can be readily adapted to a variety of transformer-based time series forecasting architectures. (2) To minimize distribution shifts between scales and windows, we introduce cross-scale normalization on outputs of the Transformer. (3) Using Informer and AutoFormer, two state-of-the-art architectures, as backbones, we demonstrate empirically the effectiveness of our approach on a variety of datasets. Depending on the choice of transformer architecture, our mutli-scale framework results in mean squared error reductions ranging from 5.5% to 38.5%. (4) Via a detailed ablation study of our findings, we demonstrate the validity of our architectural and methodological choices. | [] | SCALEFORMER: ITERATIVE MULTI-SCALE REFINING TRANSFORMERS FOR TIME SERIES FORECASTING
SCALEFORMER: ITERATIVE MULTI-SCALE REFINING TRANSFORMERS FOR TIME SERIES FORECASTING

Mohammad Amin Shabani ([email protected]), Amir Abdi, Lili Meng, Tristan Sylvain
Simon Fraser University; Borealis AI, Canada
Published as a conference paper at ICLR 2023
The performance of time series forecasting has recently been greatly improved by the introduction of transformers. In this paper, we propose a general multi-scale framework that can be applied to state-of-the-art transformer-based time series forecasting models (FEDformer, Autoformer, etc.). By iteratively refining a forecasted time series at multiple scales with shared weights, introducing architecture adaptations, and a specially-designed normalization scheme, we are able to achieve significant performance improvements, from 5.5% to 38.5% across datasets and transformer architectures, with minimal additional computational overhead. Via detailed ablation studies, we demonstrate the effectiveness of each of our contributions across the architecture and methodology. Furthermore, our experiments on various public datasets demonstrate that the proposed improvements outperform their corresponding baseline counterparts. Our code is publicly available at https://github.com/BorealisAI/scaleformer.
INTRODUCTION
Integrating information at different time scales is essential to accurately model and forecast time series (Mozer, 1991; Ferreira et al., 2006). From weather patterns that fluctuate both locally and globally, as well as throughout the day and across seasons and years, to radio carrier waves which contain relevant signals at different frequencies, time series forecasting models need to encourage scale awareness in learnt representations. While transformer-based architectures have become the mainstream and state-of-the-art for time series forecasting in recent years, advances have focused mainly on mitigating the standard quadratic complexity in time and space, e.g., attention (Li et al., 2019; Zhou et al., 2021) or structural changes (Xu et al., 2021; Zhou et al., 2022b), rather than on explicit scale-awareness. The essential cross-scale feature relationships are often learnt implicitly, and are not encouraged by architectural priors of any kind beyond the stacked attention blocks that characterize the transformer models. Autoformer (Xu et al., 2021) and FEDformer (Zhou et al., 2022b) introduced some emphasis on scale-awareness by enforcing different computational paths for the trend and seasonal components of the input time series; however, this structural prior only focuses on two scales: the low- and high-frequency components.
Given their importance to forecasting, can we make transformers more scale-aware?
We enable this scale-awareness with Scaleformer. In our proposed approach, showcased in Figure 1, time series forecasts are iteratively refined at successive time-steps, allowing the model to better capture the inter-dependencies and specificities of each scale. However, scale itself is not sufficient: iterative refinement at different scales can cause significant distribution shifts between intermediate forecasts, which can lead to runaway error propagation. To mitigate this issue, we introduce cross-scale normalization at each step.

Our approach re-orders model capacity to shift the focus onto scale awareness, but does not fundamentally alter the attention-driven paradigm of transformers. As a result, it can be readily adapted to work jointly with multiple recent time series transformer architectures, acting broadly orthogonally to their own contributions. Leveraging this, we chose to operate with various transformer-based backbones (e.g. FEDformer, Autoformer, Informer, Reformer, Performer) to further probe the effect of our multi-scale method on a variety of experimental setups.

Our contributions are as follows: (1) We introduce a novel iterative scale-refinement paradigm that can be readily adapted to a variety of transformer-based time series forecasting architectures. (2) To minimize distribution shifts between scales and windows, we introduce cross-scale normalization on the outputs of the transformer. (3) Using Informer and Autoformer, two state-of-the-art architectures, as backbones, we demonstrate empirically the effectiveness of our approach on a variety of datasets; depending on the choice of transformer architecture, our multi-scale framework results in mean squared error reductions ranging from 5.5% to 38.5%. (4) Via a detailed ablation study of our findings, we demonstrate the validity of our architectural and methodological choices.
We propose our approach as a model-agnostic framework to utilize multi-scale time series in transformers while keeping the number of parameters and time complexity roughly the same.
METHOD
In this section, we first introduce the problem setting in Sec. 3.1, then describe the proposed framework in Sec. 3.2, and the normalization scheme in Sec. 3.3. We provide details on the input's representation in Sec. 3.4, and the loss function in Sec. 3.5.
PROBLEM SETTING
We denote by $X^{(L)}$ and $X^{(H)}$ the look-back and horizon windows for the forecast, of corresponding lengths $\ell_L$ and $\ell_H$. Given a starting time $t_0$, we can express these time series of dimension $d_x$ as follows:
$$X^{(L)} = \{x_t \mid x_t \in \mathbb{R}^{d_x},\; t \in [t_0, t_0 + \ell_L]\}, \qquad X^{(H)} = \{x_t \mid x_t \in \mathbb{R}^{d_x},\; t \in [t_0 + \ell_L + 1, t_0 + \ell_L + \ell_H]\}.$$
The goal of the forecasting task is to predict the horizon window $X^{(H)}$ given the look-back window $X^{(L)}$.
MULTI-SCALE FRAMEWORK
Our proposed framework applies successive transformer modules to iteratively refine a time-series forecast, at different temporal scales. The proposed framework is shown in Figure 2.
Given an input time series $X^{(L)}$, we iteratively apply the same neural module multiple times at different temporal scales. Concretely, we consider a set of scales $S = \{s^m, \ldots, s^2, s, 1\}$ (i.e. for the default scale factor $s = 2$, $S$ is a set of consecutive powers of 2), where $m = \lfloor \log_s \ell_L \rfloor - 1$ and $s$ is a downscaling factor. The input to the encoder at the $i$-th step ($0 \le i \le m$) is the original look-back window $X^{(L)}$, downsampled by a scale factor of $s_i \equiv s^{m-i}$ via an average pooling operation. The input to the decoder, on the other hand, is $X^{out}_{i-1}$ upsampled by a factor of $s$ via linear interpolation. Finally, $X^{dec}_0$ is initialized to an array of zeros. The look-back window is downsampled by average pooling:
$$x^{(L)}_{t,i} = \frac{1}{s_i} \sum_{j = t \cdot s_i}^{(t+1) \cdot s_i - 1} x^{(L)}_j$$
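To make the control flow concrete, the following is a minimal sketch of the iterative multi-scale loop. Here `model` stands for any shared encoder-decoder forecaster, window lengths are assumed divisible by $s^m$, and the cross-scale normalization of Sec. 3.3 is omitted for brevity:

```python
# Iterative multi-scale refinement: average-pool the look-back window for the
# encoder, linearly upsample the previous output for the decoder.
import torch
import torch.nn.functional as F

def scaleformer_forecast(model, x_lookback, horizon_len, s=2, m=4):
    # x_lookback: (batch, lookback_len, d_x); work as (batch, d_x, time) for pooling
    x = x_lookback.transpose(1, 2)
    x_out = None
    for i in range(m + 1):
        s_i = s ** (m - i)                        # current downsampling factor
        x_enc = F.avg_pool1d(x, kernel_size=s_i, stride=s_i) if s_i > 1 else x
        h_i = horizon_len // s_i
        if x_out is None:
            x_dec = x.new_zeros(x.size(0), x.size(1), h_i)      # zero initialization
        else:
            x_dec = F.interpolate(x_out, size=h_i, mode="linear",
                                  align_corners=False)          # upsample by s
        x_out = model(x_enc.transpose(1, 2), x_dec.transpose(1, 2)).transpose(1, 2)
    return x_out.transpose(1, 2)                  # (batch, horizon_len, d_x)
```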
Figure 3: Output results of two series using the same trained multi-scale model with and without shifting the data (left), which demonstrates the importance of normalization. On the right, we can see the distribution changes due to the downsampling of two series compared to the original scales from the Electricity dataset.

CROSS-SCALE NORMALIZATION
Distribution shift occurs when the distribution of the input to a model or its sub-components changes between training and deployment (Shimodaira, 2000; Ioffe & Szegedy, 2015). In our context, two distinct distribution shifts are common. First, there is a natural distribution shift between the look-back window and the forecast window (the covariate shift). Additionally, there is a distribution shift between the predicted forecast windows at two consecutive scales, which is a result of the upsampling operation alongside the error accumulation during the intermediate computations. As a result, normalizing the output at a given step by either the look-back window statistics or the previously predicted forecast window statistics results in an accumulation of errors across steps. We mitigate this by considering a moving average of forecast and look-back statistics as the basis for the output normalization. While this change might appear relatively minor, it has a significant impact on the resulting distribution of outputs. The improvement is more evident when compared to the alternative approaches, namely normalizing by either the look-back or the previous forecast window statistics.
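One plausible reading of this scheme, written as a short sketch (the exact statistics used in the paper may differ; we show the mean-only variant discussed in Appendix L):

```python
# Re-center each step's output using a statistic shared between the look-back
# window and the current forecast, rather than either window alone.
import torch

def cross_scale_normalize(x_lookback: torch.Tensor, x_out: torch.Tensor):
    # both tensors: (batch, time, d_x); statistics over the time axis, per series
    mu_look = x_lookback.mean(dim=1, keepdim=True)
    mu_out = x_out.mean(dim=1, keepdim=True)
    mu = 0.5 * (mu_look + mu_out)     # shared basis across the two windows
    return x_out - mu                  # mean-only normalization
```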
INPUT EMBEDDING
Following previous works, we embed our input to have the same number of features as the hidden dimension of the model. The embedding consists of three parts: (1) A value embedding, which uses a linear layer to map the input observations of each step $x_t$ to the same dimension as the model; we further concatenate an additional value of 0, 0.5, or 1, indicating whether each observation comes from the look-back window, the zero initialization, or the prediction of the previous step, respectively. (2) A temporal embedding, which again uses a linear layer to embed the time stamp associated with each observation to the hidden dimension of the model; here we concatenate an additional value $1/s_i - 0.5$ encoding the current scale before passing to the linear layer.
(3) We also use a fixed positional embedding, which is adapted to the different scales $s_i$ as follows:
$$PE(pos, 2k, s_i) = \sin\left(\frac{pos \times s_i}{10000^{2k/d_{model}}}\right), \qquad PE(pos, 2k+1, s_i) = \cos\left(\frac{pos \times s_i}{10000^{2k/d_{model}}}\right) \tag{9}$$
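Equation (9) translates directly into code; below is a small sketch ($d_{model}$ is assumed even):

```python
# Scale-aware sinusoidal positional encoding: the standard formulation with
# positions rescaled by the current scale factor s_i, as in Eq. (9).
import math
import torch

def scale_positional_encoding(length: int, d_model: int, s_i: int) -> torch.Tensor:
    pos = torch.arange(length, dtype=torch.float32).unsqueeze(1) * s_i
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))  # 10000^{-2k/d_model}
    pe = torch.zeros(length, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe  # (length, d_model), added to the value/temporal embeddings
```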
LOSS FUNCTION
Using the standard MSE objective to train time-series forecasting models leaves them sensitive to outliers. One possible solution is to use objectives more robust to outliers, such as the Huber loss (Huber, 1964). However, when there are no major outliers, such objectives tend to underperform. Given the heterogeneous nature of the data, we instead utilize the adaptive loss (Barron, 2019):
$$f(\xi, \alpha, c) = \frac{|\alpha - 2|}{\alpha} \left( \left( \frac{(\xi/c)^2}{|\alpha - 2|} + 1 \right)^{\alpha/2} - 1 \right) \tag{10}$$
with $\xi = X^{out}_i - X^{(H)}_i$ in step $i$.
The parameters α and c, which modulate the loss sensitivity to outliers, are learnt in an end-to-end fashion during training. To the best of our knowledge, this is the first time this objective has been adapted to the context of time-series forecasting.
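A sketch of how such a loss can be implemented with learnable α and c is shown below. The special cases α → 0 and α = 2, which Barron (2019) handles explicitly, are ignored here, and α is assumed to remain positive during training:

```python
# Adaptive robust loss of Eq. (10); alpha and c are trained end-to-end
# together with the forecasting model.
import torch
import torch.nn as nn

class AdaptiveLoss(nn.Module):
    def __init__(self, alpha_init: float = 1.0, c_init: float = 1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.log_c = nn.Parameter(torch.tensor(c_init).log())  # keeps c > 0

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        xi = pred - target
        c = self.log_c.exp()
        a = self.alpha
        b = (a - 2).abs().clamp_min(1e-6)   # avoid division by zero at alpha = 2
        loss = (b / a.clamp_min(1e-6)) * (((xi / c) ** 2 / b + 1) ** (a / 2) - 1)
        return loss.mean()
```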
EXPERIMENTS
In this section, we first showcase the main results of our proposed approach on a variety of forecasting datasets in Sec. 4.1. Then, we provide an ablation study of the different components of our model in Sec. 4.2, and also present qualitative results in Sec. 4.3. Moreover, we discuss a series of additional extensions of our method in Sec. 4.4, shedding light on promising future directions.
MAIN RESULTS
Baselines:
To measure the effectiveness of the proposed framework, we mainly use the state-of-the-art transformer-based models FEDformer (Zhou et al., 2022b), Reformer (Kitaev et al., 2020), Performer (Choromanski et al., 2020), Informer (Zhou et al., 2021) and Autoformer (Xu et al., 2021), which have been shown to outperform other transformer-based (e.g. LogTrans (Li et al., 2019)), RNN-based (e.g. LSTNet (Lai et al., 2018), LSTM) and TCN (Bai et al., 2018) models. For brevity, we only keep comparisons with the transformer models in the tables.
Datasets: We consider five public datasets with different characteristics to evaluate our proposed framework. Electricity Consuming Load (ECL) 1 corresponds to the electricity consumption (kWh) of 321 clients. Traffic 2 aggregates the hourly occupancy rate of 963 car lanes of San Francisco bay area freeways. Weather 3 contains 21 meteorological indicators, such as air temperature, humidity, etc., recorded every 10 minutes for the entirety of 2020. Exchange-Rate (Lai et al., 2018) collects the daily exchange rates of 8 countries (Australia, Britain, Canada, Switzerland, China, Japan, New Zealand and Singapore) from 1990 to 2016. National Illness (ILI) 4 corresponds to weekly recorded influenza-like illness patient data.

Implementation details: Following previous work (Xu et al., 2021; Zhou et al., 2021), we pass $X^{enc} = X^{(L)}$ as the input to the encoder. While an array of zero values would be the default to pass to the decoder, the decoder instead takes as input the second half of the look-back window padded with zeros, $X^{dec} = \{x_{t_0 + \ell_L/2}, \ldots, x_{\ell_L}, 0, 0, \ldots, 0\}$, with length $\ell_L/2 + \ell_H$. The hidden dimension of the models is 512 with a batch size of 32. We use the Adam optimizer with a learning rate of 1e-4. The look-back window size is fixed to 96, and the horizon is varied from 96 to 720. We repeat each experiment 5 times and report average values to reduce randomness. For additional implementation details on our model and baselines please refer to Appendix A.
Main results and comparison with baselines: Table 1 shows the results of the proposed framework compared to the baselines. Our proposed multi-scale framework with the adaptive loss outperforms the baselines in almost all of the experiments, with average MSE improvements of 5.6% over FEDformer, 13% over Autoformer and 38% over Informer, the three most recent transformer-based architectures. We also achieve significant error reductions on MAE. The improvement is statistically significant in all cases, and in certain cases quite substantial: in particular, for the Exchange-Rate dataset, with the Informer and Reformer base models, our approach improves upon the respective baselines by over 50% averaged over the different horizon lengths.
Time and memory complexity: The proposed framework uses the same number of parameters as the baselines (except for the two parameters α and c of the adaptive loss). Our framework sacrifices a small amount of computational efficiency for the sake of a significant performance improvement; we expand on this analysis in Appendix C. As shown in Table 5 in the Appendix, if we replace the operation at the final scale by an interpolation of the prior output, we can achieve improved performance over the baselines at no computational overhead.
ABLATION STUDY
We present the main ablation studies in this section; more ablation results are shown in Appendix G.
Impact of each component: Two important components of our approach are the multi-scale framework and the use of the adaptive loss. We conduct multiple experiments (1) removing the multi-scale framework and/or (2) replacing the Adaptive loss by the MSE for training, in order to demonstrate the benefits of these two components. Figure 4 shows the effect of multi-scale and the loss function with different base models. Considering the impact of ablating the adaptive loss, we can see that for both the multi-scale and base models, training with the adaptive loss improves performance. Similarly, adding the multi-scale framework improves performance, both with and without the adaptive loss.
Overall, combining the adaptive loss with the multi-scale framework results in the best performance.
Cross-scale normalization: As discussed in Section 3.3, the cross-scale normalization is crucial to avoid distribution shifts. To confirm this, we conduct two experiments. First, we use the multi-scale framework without the cross-scale normalization, to show that the error accumulation and covariate shift between the scales lead to higher error compared to only a single scale. As shown in Table 2, while the multi-scale framework can achieve better results in a few cases, it mostly has higher errors than the baselines.
On the other hand, adding the normalization with only a single scale can still help to achieve better performance by reducing the effect of covariate shift between the training and test series. As shown in Table 3, the normalization improves the results of Informer and FEDformer consistently. The decomposition layer of Autoformer solves a similar problem, and replacing it with our normalization harms the capacity of the model.
QUALITATIVE RESULTS
We show qualitative comparisons between the vanilla Informer and FEDformer and the results of our framework in Figure 5. Most notably, in both cases our approach appears significantly better at forecasting the statistical properties (such as local variance) of the signal. Our Scaleformer-based models capture trends (and other human-relevant information) better than their baselines. Despite these interesting findings, we would like to emphasize that these are randomly selected qualitative examples that may have their own limitations. For more qualitative results, please refer to Section I in the Appendix.
EXTENSIONS AND DISCUSSION
The Scaleformer structural prior has been shown to be beneficial when applied to transformer-based, deterministic time series forecasting. It is not, however, limited to those settings. In this section, we show it can be extended to probabilistic forecasting and non-transformer-based encoders, both of which are closely coupled with our primary application. We also aim to highlight potentially promising future directions. We show that our Scaleformer can improve performance in a probabilistic forecasting setting (please refer to Table 9 in the Appendix for more details). We adopt the probabilistic output of DeepAR (Salinas et al., 2020), which is the most common probabilistic forecasting treatment. In this setting, instead of a point estimate, we have two prediction heads, predicting the mean µ and standard deviation σ, trained with a negative log likelihood loss (NLL). NLL and the continuous ranked probability score (CRPS) are used as evaluation metrics. All other hyperparameters remain unchanged. Here, again, Scaleformer continues to outperform the probabilistic Informer.
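A minimal sketch of this two-head treatment (module names are ours):

```python
# DeepAR-style Gaussian output: predict mean and standard deviation, train
# with the negative log likelihood of the target under the predicted Normal.
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    def __init__(self, d_model: int, d_out: int):
        super().__init__()
        self.mu = nn.Linear(d_model, d_out)
        self.pre_sigma = nn.Linear(d_model, d_out)

    def forward(self, h: torch.Tensor):
        mu = self.mu(h)
        sigma = nn.functional.softplus(self.pre_sigma(h)) + 1e-6  # keep sigma > 0
        return mu, sigma

def gaussian_nll(mu, sigma, target):
    dist = torch.distributions.Normal(mu, sigma)
    return -dist.log_prob(target).mean()
```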
While we have mainly focused on improving transformer-based models, they are not the only possible encoders. Recent models such as NHits (Challu et al., 2022) and FiLM (Zhou et al., 2022a) attain competitive performance while assuming a fixed-length univariate input/output. They are less flexible than models that handle variable-length, multivariate inputs and outputs, but offer strong performance and faster inference than transformers, making them interesting to consider. The Scaleformer prior demonstrates a statistically significant improvement, on average, when adapted by NHits and FiLM to iteratively refine predictions. For more details please refer to Appendix K.
The results mentioned above demonstrate that Scaleformer can adapt to settings distinct from point-wise time series forecasts with transformers (the primary scope of our paper), such as probabilistic forecasts and non-transformer models. We therefore consider such directions to be promising for future work.
CONCLUSION
Noting that structural priors accounting for multi-scale information are essential for accurate time series forecasting, this paper proposes a novel multi-scale framework on top of recent state-of-the-art methods for time series forecasting using transformers. Our framework iteratively refines a forecasted time series at increasingly fine-grained scales, and introduces a normalization scheme that minimizes distribution shifts between scales. These contributions result in vastly improved performance over baseline transformer architectures across a variety of settings, and qualitatively result in forecasts that better capture the trend and local variations of the target signal. Additionally, our detailed ablation study shows that the different components work together synergetically to deliver this outcome.
For future work, it is promising to extend the preliminary work that has been done applying Scaleformer architectures to both probabilistic forecasting and non-transformer models.

A IMPLEMENTATION DETAILS

We build our implementation on the publicly available code of Autoformer (Xu et al., 2021) 5. The hidden dimension of the models is fixed to 512 with a batch size of 32, and we train each model for 10 epochs with early stopping enabled. To optimize the models, an Adam optimizer is used with a learning rate of 1e-4 for the forecasting model and 1e-3 for optimizing the adaptive loss. The forecasting module is fixed to 2 encoder layers and 1 decoder layer. The look-back window size is fixed to 96, and the horizon is varied from 96 to 720. We repeat each experiment 5 times to reduce the effect of randomness in the reported values. In all experiments, the temporal scale factor s is fixed to 2.
For Informer, we train the model without any changes as the core of our framework. However, Autoformer uses a decomposition layer at the input of the decoder and does not pass the trend series to the network, which makes the model unaware of the previous predictions. To resolve this, we pass zeros as the trend and the series without decomposition as the input to the decoder. For Reformer, we used the available implementation 6 of the model from Xu et al. (2021). In addition, we used the PyTorch library 7 of Performer (Choromanski et al., 2020) for our Performer baseline, using the same parameters as our Reformer model. Finally, we use the official implementation 8 of FEDformer (Zhou et al., 2022b) for the FEDformer model. We fixed the number of modes to 64 following the original paper, with the Wavelet Enhanced Structure for the core modules. To make it consistent with our other experiments, we use a moving average with kernel size 25 as in Xu et al. (2021); note that FEDformer (Zhou et al., 2022b) uses a moving average with kernel size 24. Our random seed is fixed to 2022 in all experiments.
B MORE MOTIVATIONS
This section aims to provide more motivation for the use of a multi-scale architecture. Let us first consider the following classical example, highlighted in Section 2 of Ferreira et al. (2006), corresponding to the monthly flows of the Fraser River from January 1913 to December 1990.

As shown in their corresponding plot, the annual averages are strongly inter-related, pointing to the fact that seasonality alone will not suffice to model the variations. In the context of that paper, this showcases a failure mode of an ARMA model, but the failing is more general: models that do not explicitly account for inter-scale dependencies will perform poorly on similar datasets.
Different approaches have attempted to introduce multi-scale processing (Ferreira et al., 2006; Mozer, 1991) in ways that differ from our own. A multi-scale temporal structure for music composition was introduced by Mozer (1991). Ferreira et al. (2006) proposed a time series model with rich autocorrelation structure by coupling processes evolving at different levels of resolution through time. However, their base models are constrained to simple statistical models, e.g. autoregressive models.
To conclude, we note the following: (1) the approaches mentioned above have applied multi-scale modeling with success, and (2) we are the first work to explicitly consider a multi-scale prior by construction for transformers.
C REDUCING COMPUTATIONAL COST
To estimate the total running time of Scaleformer, the running times of the individual scales can be summed as the terms of a geometric progression with ratio $1/s$, giving a total of $\frac{1 - s^{-(m+1)}}{1 - s^{-1}}$ times the running time of the baseline method. In this regard, Table 4 shows the running times of different experiments where $s = 2$, with a batch size of 32, on a GeForce GTX 1080 Ti GPU with 64 cores. While our current code is not optimised, the Scaleformer variant of each method takes roughly twice the time of its baseline, which is consistent with the formula above. Note that $s = 2$ is the smallest scale factor, which means that the overhead of our method is bounded by twice the baseline, and considering a larger scale factor can reduce the time overhead. Table 5 shows the impact of replacing the Scaleformer operation at the real scale by an interpolation of the values of the previous scale, i.e., at scale $s^0$ we do not apply the transformer but rather compute the results by linear interpolation of $X^{out}_{m-1}$, thereby reducing the compute cost. This lower-cost alternative results in better performance than the baselines, yet worse results than the full Scaleformer. This shows that adding scales is indeed effective, and that there is the possibility of a trade-off between further improved performance (full Scaleformer) and improved computational efficiency (interpolated Scaleformer).

5 https://github.com/thuml/Autoformer
6 https://github.com/thuml/Autoformer/blob/main/models/Reformer.py
7 https://github.com/lucidrains/performer-pytorch
8 https://github.com/MAZiqing/FEDformer
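As a quick numerical check of the geometric-series bound above (assuming the per-scale cost is proportional to the sequence length):

```python
# Relative cost of running all m+1 scales versus the single-scale baseline.
def scaleformer_overhead(s: int, m: int) -> float:
    return sum(s ** -i for i in range(m + 1))   # = (1 - s**-(m+1)) / (1 - 1/s)

print(scaleformer_overhead(2, 4))  # ~1.94, i.e. bounded by 2x the baseline
```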
D JUSTIFICATION FOR THE ADAPTIVE LOSS
Mathematically, the justification for the adaptive loss is as follows. Considering the ξ term in Equation 10, the function is asymptotically close (but not equivalent, due to the denominator) to $\xi^{\alpha}$. As a result, for outliers (for which ξ is large), the loss term will function as an $L_{\alpha}$ penalty on ξ, which penalizes outliers more for large α. The converse of this is that we would expect a model trained with such a loss to learn lower values of α for settings with more outliers.
E INTERPLAY BETWEEN ADAPTIVE LOSS AND MULTI-SCALE ARCHITECTURE
The main reason for combining the adaptive loss and multi-scale architecture under a unified framework is: they are synergetic. How do we explain this synergy? Compared to other transformer architectures (notably the baselines used), ScaleFormer has more iterative steps: the sequential multi-scale operations. Iterative computation tends to accumulate more errors, which will behave like outliers for the purpose of this loss. As a result, the process that leads to the need for the two components can be expressed as: (1) The multi-scale architecture is beneficial for performance as a useful structural prior for time series data. (2) The multi-scale architecture however relies on sequential computation that increases the likelihood of explosive error accumulation. (3) The adaptive loss serves to mitigate this issue, leading to more stable learning and better performance.
F SINGLE-SCALE MODEL WITH MEAN-NORMALIZATION
A single-scale model with mean normalization performs better than the multi-scale version without normalization. The reason for this is that normalization in our proposed scheme targets all forms of internal distribution shift, not only those induced by the multi-scale architecture. In our submission, we make the case for the multi-scale prior as a natural prior to add to transformers. From empirical observations, we found that such a prior requires adapting normalization. When investigating means of normalizing, we observed additional benefits for non-multi-scale architectures as well. This means that they also suffer from other forms of distribution shift; we attribute this in the paper to, e.g., shifts between look-back and forecast distributions.
G MORE ABLATION STUDIES AND PARAMETER ANALYSIS
G.1 ITERATIVE REFINEMENT USING THE SAME SCALE
To further confirm the effect of the multi-scale framework, we also compare our proposed framework against another baseline obtained by keeping the original scale in each iteration. As Table 6 shows, using the multi-scale framework outperforms keeping the original scale while having significantly lower memory and time complexity overhead. Following the main experiments, we use a scale factor of 2, which results in S = {16, 8, 4, 2, 1} for Scaleformer (denoted -MSA), and a scale factor of 1 with similarly 5 iterations for iterative refinement (denoted -IA).

G.2 MORE RESULTS ON DIFFERENT COMPONENTS

Table 7 extends the results of Figure 4 of the paper to all four datasets, in the multivariate setting and for the two backbones, Autoformer and Informer.

Table 7: Comparison of training with either only an adaptive loss "-A", only the multi-scale framework with MSE loss "-MS", or the whole multi-scale framework and adaptive loss "-MSA". The results confirm that the combination of all our proposed contributions is essential, and results in significantly improved performance for both Informer and Autoformer. Experiments are done in the multivariate setting.

Table 8 showcases the results of our model using different values of the scale parameter s. It shows that a scale of 2 results in better performance.

Figure 6 shows the impact of α on the shape of the loss function on the left, and example ground-truth time series corresponding to the horizon window on the right. As noted in the main paper, lower values of α tend to result in better robustness to outliers. This is indeed confirmed empirically in the case of the Weather dataset, which corresponds to the lowest learnt α value and has the outliers with the largest relative scales. For simplicity, we have excluded c from the analysis, as it does not impact robustness with regard to outliers, as shown in Barron (2019).
H DISCUSSION ON NORMALIZATION FOR AUTOFORMER
There is limited performance gain for Autoformer from the normalization alone. The reason is that Autoformer benefits from an inner series decomposition module which acts as a normalization by nature: Autoformer already has an internal component that reduces internal distribution shift. Indeed, one benefit of our proposed framework is that it offers a simple solution for bringing the benefits of such specialized designs to other baselines, which significantly reduces the gap between, for example, Informer and Autoformer.

J PROBABILISTIC FORECASTING

Table 9 shows the comparison of probabilistic methods for Informer, following the probabilistic output of DeepAR (Salinas et al., 2020), which is the most common probabilistic forecasting treatment. Regarding implementation details, instead of a point estimate, we have two prediction heads (one predicting µ and one predicting σ) trained with a negative log likelihood loss. The other hyperparameters are the same as before. We use NLL and the continuous ranked probability score (CRPS), which are commonly used in probabilistic forecasting, as evaluation metrics.

K NON-TRANSFORMER MODELS

Table 10 shows the comparison results for NHits (Challu et al., 2022) and FiLM (Zhou et al., 2022a) as two baselines. For each method, we copy the original model to obtain a model for each scale, and we concatenate the input with the output of the previous scale for the new scale. The training hyperparameters, such as the optimizer and learning rate, are the same as for the previous baselines.
I ADDITIONAL QUALITATIVE RESULTS
L CROSS-SCALE NORMALIZATION
The choice of normalization approach is essential to our method, due to the aforementioned distribution shifts. In this section, we showcase multiple experiments studying the impact of different ways of normalizing inputs. In Table 11, we compare two forms of normalization, with Informer-MSA as the baseline: mean-only versus normalization using both mean and standard deviation. We consider that adding normalization to our method is a trade-off between two issues faced during training. On one hand, (internal) distribution shifts hinder performance, as demonstrated in the experiments (see Table 2). On the other hand, normalizing the internal representations results in a form of information loss (during the processing of the input): we lose information about the mean and standard deviation of the input data. We believe that the reason mean normalization outperforms mean-and-standard-deviation normalization is that it results in a better trade-off, losing less information while still managing to address the internal distribution shift. Fig. 10 shows the different dataset distributions. As we can see, mean normalization works better for datasets such as Electricity and Traffic, with almost unimodal distributions. For datasets such as Exchange-Rate, and to some extent Weather, which have multi-modal distributions, we find that mean-and-standard-deviation normalization works better. Our hypothesis is that mean normalization is too simple an approach for more complex input distributions. We note that, in all cases, either form of normalization results in strong improvements compared to the absence of normalization.
M SYNTHETIC DATASET
To further evaluate our proposed framework and adaptive loss function, we generate a new dataset based on the Mackey-Glass equations (Mackey & Glass, 1977). Concretely, we use the following equation:
$$\frac{dx}{dt} = \frac{0.2\, x(t - \tau)}{1 + x(t - \tau)^{10}} - 0.1\, x(t) \tag{11}$$
where we follow López-Caraballo et al. (2016); Farmer et al. (1983), and we also consider $x(0) = 1.2$.
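A minimal Euler-integration sketch of Eq. (11) follows; the step size and burn-in below are our own choices, not the paper's:

```python
# Generate a Mackey-Glass series by forward-Euler integration of Eq. (11),
# starting from a constant history x(0) = 1.2.
import numpy as np

def mackey_glass(n: int, tau: int = 18, dt: float = 1.0, x0: float = 1.2,
                 burn_in: int = 500) -> np.ndarray:
    total = n + burn_in
    x = np.zeros(total + tau)
    x[:tau + 1] = x0                       # constant initial history
    for t in range(tau, total + tau - 1):
        x_tau = x[t - tau]
        dx = 0.2 * x_tau / (1.0 + x_tau ** 10) - 0.1 * x[t]
        x[t + 1] = x[t] + dt * dx
    return x[tau + burn_in:]               # drop the history and burn-in

series = mackey_glass(10_000, tau=18)      # chaotic regime (tau >= 17)
```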
We create three series of length 10k. First, we create a series using only the above equation with τ = 18 (the series has a chaotic behaviour when τ ≥ 17). We also generate two additional series, with τ = 12 and τ = 9, and with added seasonal and trend components. Our synthetic dataset is a combination of these 3 series (see Figure 11 for an example). We compare our method on the new dataset with both Informer and Autoformer in Table 12, showing that our proposed framework significantly improves performance over the baselines. In addition, to further analyze the effect of the adaptive loss on noisy datasets with outliers, we randomly replace a percentage of the training data with outliers, defining an outlier as a sample with a distance of more than 50 times the standard deviation of the series from the median. Figure 11 shows the improvement from using the adaptive loss on both the Informer baseline and our multi-scale version of Informer. The adaptive loss increases performance by up to roughly 70% on noisy input with extreme outliers (5% of the data), while obtaining comparable results on a clean dataset, demonstrating it as a strong candidate to replace the MSE loss in time series forecasting tasks.
N STATISTICAL TESTS
To further validate the strength of our empirical results, we have conducted two statistical tests. Each test is a Student's t-test (Kendall, 1960) to determine whether the two sets of results can be distinguished. The first test, in Table 13, shows that for most settings, the adaptive loss provides gains that are statistically significant (p < 0.05). The second test, in Table 14, shows that for most settings, the multi-scale architecture provides gains that are statistically significant (p < 0.05).
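For illustration, such a test can be run as follows; the metric values below are placeholders, not results from the paper:

```python
# Unpaired two-sample t-test over per-seed metric values with SciPy.
from scipy.stats import ttest_ind

baseline_mse = [0.966, 0.971, 0.958, 0.962, 0.969]       # hypothetical 5-seed runs
scaleformer_mse = [0.168, 0.172, 0.165, 0.170, 0.169]    # hypothetical 5-seed runs

stat, p = ttest_ind(baseline_mse, scaleformer_mse)
print(f"t = {stat:.2f}, p = {p:.2g}")  # p < 0.05 -> the two sets are distinguishable
```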
Figure 1: Intermediate forecasts by our model at different time scales. Iterative refinement of a time series forecast is a strong structural prior that benefits time series forecasting.
Figure 2: Overview of the proposed Scaleformer framework. (Left) Representation of a single scale block. In each step, we pass the normalized upsampled output from the previous step along with the normalized downsampled encoder input. (Right) Representation of the full architecture. We process the input in a multi-scale manner iteratively, from the smallest scale to the original scale.
Figure 4: Comparison of training with the adaptive loss "-A", the multi-scale framework with MSE loss "-MS", and the multi-scale framework with the adaptive loss "-MSA". It shows that the combination of all our proposed contributions is essential, and results in significantly improved performance.
Figure 5: Qualitative comparison of base models with our framework. Our multi-scale models can better capture the global trend and local variations of the signal than their baseline equivalents. To keep the figure concise and readable, we only show Informer and FEDformer; please refer to Figure 8 in the Appendix for more results.
Table 6: Comparison of the MSE and MAE results for our proposed multi-scale framework version of Informer and Autoformer (-MSA) with the iterative refinement baseline of keeping the original scale in each iteration (-IA). Each cell shows MSE / MAE as mean ± std.

| Dataset | Horizon | Auto-MSA | Auto-IA | Info-MSA | Info-IA |
|---|---|---|---|---|---|
| Exchange | 96 | 0.126±0.01 / 0.259±0.01 | 0.410±0.05 / 0.485±0.03 | 0.168±0.05 / 0.298±0.03 | 0.649±0.08 / 0.632±0.04 |
| Exchange | 192 | 0.253±0.03 / 0.373±0.02 | 0.809±0.14 / 0.698±0.06 | 0.427±0.12 / 0.484±0.06 | 0.938±0.03 / 0.761±0.01 |
| Weather | 96 | 0.163±0.01 / 0.226±0.01 | 0.233±0.01 / 0.293±0.00 | 0.210±0.02 / 0.279±0.02 | 0.222±0.01 / 0.286±0.02 |
| Weather | 192 | 0.221±0.01 / 0.290±0.02 | 0.401±0.02 / 0.429±0.01 | 0.289±0.01 / 0.333±0.01 | 0.357±0.02 / 0.393±0.02 |
| Electricity | 96 | 0.188±0.00 / 0.303±0.01 | 0.248±0.01 / 0.360±0.01 | 0.203±0.01 / 0.315±0.01 | 0.237±0.00 / 0.344±0.00 |
| Electricity | 192 | 0.197±0.00 / 0.310±0.00 | 0.265±0.01 / 0.373±0.01 | 0.219±0.00 / 0.331±0.00 | 0.271±0.01 / 0.374±0.01 |
| Traffic | 96 | 0.567±0.00 / 0.350±0.00 | 0.681±0.03 / 0.432±0.01 | 0.597±0.01 / 0.369±0.00 | 0.641±0.01 / 0.367±0.01 |
| Traffic | 192 | 0.589±0.01 / 0.360±0.01 | 0.699±0.02 / 0.446±0.01 | 0.655±0.01 / 0.399±0.01 | 0.695±0.01 / 0.388±0.01 |
Figure 6: (a) The loss value as a function of ξ (the absolute difference between prediction and target) and the learned α for each dataset. Different colors correspond to different datasets (each with a specific value of α). (b) Samples taken from the ground-truth horizon window of each dataset. The Weather dataset sample has the outliers with the largest scales compared to the input series; as a consequence, the learnt value of α is the lowest. Different colors correspond to different variables in the multivariate time series we are considering.

Figure 7: Qualitative results of each scale. The model can correct its previous mistakes at higher scales, showcasing increased robustness to forecasting artifacts that occur individually at each scale.
Figure 7, Figure 8, and Figure 9 provide additional qualitative results, respectively showing the intermediate results of each scale and the comparison between the baselines and our approach with horizon lengths of 720 and 192. As we can see, Scaleformer is able to better learn local and global time series variations.

Figure 8: Additional qualitative comparison of the baselines against our framework.

Figure 9: Additional qualitative comparison of the baselines against our framework.
Figure 10: Dataset distribution with histogram and kernel density estimation. Note that different datasets have either mostly uni-modal or multi-modal distributions.
Figure 11: Left: an example input of the synthetic dataset. Right: the relative improvement of the adaptive loss increases with an increasing percentage of outliers in the dataset.
Table 1: Comparison of the MSE and MAE results for our proposed multi-scale framework version of different methods (-MSA) with the respective baselines. Results are given in the multi-variate setting, for different lengths of the horizon window. The average improvement (error reduction) of each -MSA variant over its base model is shown in the last row. Each cell: MSE / MAE.

| Dataset | FEDformer | FED-MSA | Autoformer | Auto-MSA | Informer | Info-MSA | Reformer | Ref-MSA | Performer | Per-MSA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Exchange 96 | 0.155 / 0.285 | 0.109 / 0.240 | 0.154 / 0.285 | 0.126 / 0.259 | 0.966 / 0.792 | 0.168 / 0.298 | 1.063 / 0.826 | 0.182 / 0.311 | 0.667 / 0.669 | 0.179 / 0.305 |
| Exchange 192 | 0.274 / 0.384 | 0.241 / 0.353 | 0.356 / 0.428 | 0.253 / 0.373 | 1.088 / 0.842 | 0.427 / 0.484 | 1.597 / 1.029 | 0.375 / 0.446 | 1.339 / 0.904 | 0.439 / 0.486 |
| Exchange 336 | 0.452 / 0.498 | 0.471 / 0.508 | 0.441 / 0.495 | 0.519 / 0.538 | 1.598 / 1.016 | 0.500 / 0.535 | 1.712 / 1.070 | 0.605 / 0.591 | 1.081 / 0.844 | 0.563 / 0.577 |
| Exchange 720 | 1.172 / 0.839 | 1.259 / 0.865 | 1.118 / 0.819 | 0.928 / 0.751 | 2.679 / 1.340 | 1.017 / 0.790 | 1.918 / 1.160 | 1.089 / 0.857 | 0.867 / 0.766 | 1.219 / 0.882 |
| Weather 96 | 0.288 / 0.365 | 0.220 / 0.289 | 0.267 / 0.334 | 0.163 / 0.226 | 0.388 / 0.435 | 0.210 / 0.279 | 0.347 / 0.388 | 0.199 / 0.263 | 0.441 / 0.479 | 0.228 / 0.291 |
| Weather 192 | 0.368 / 0.425 | 0.341 / 0.385 | 0.323 / 0.376 | 0.221 / 0.290 | 0.433 / 0.453 | 0.289 / 0.333 | 0.463 / 0.469 | 0.294 / 0.355 | 0.475 / 0.501 | 0.302 / 0.357 |
| Weather 336 | 0.447 / 0.469 | 0.463 / 0.455 | 0.364 / 0.397 | 0.282 / 0.340 | 0.610 / 0.551 | 0.418 / 0.427 | 0.734 / 0.622 | 0.463 / 0.464 | 0.478 / 0.482 | 0.441 / 0.456 |
| Weather 720 | 0.640 / 0.574 | 0.682 / 0.565 | 0.425 / 0.434 | 0.369 / 0.396 | 0.978 / 0.723 | 0.595 / 0.532 | 0.815 / 0.674 | 0.493 / 0.471 | 0.563 / 0.552 | 0.817 / 0.655 |
| Electricity 96 | 0.201 / 0.317 | 0.182 / 0.297 | 0.197 / 0.312 | 0.188 / 0.303 | 0.344 / 0.421 | 0.203 / 0.315 | 0.294 / 0.382 | 0.183 / 0.291 | 0.294 / 0.387 | 0.190 / 0.300 |
| Electricity 192 | 0.200 / 0.314 | 0.188 / 0.300 | 0.219 / 0.329 | 0.197 / 0.310 | 0.344 / 0.426 | 0.219 / 0.331 | 0.331 / 0.409 | 0.194 / 0.304 | 0.305 / 0.400 | 0.200 / 0.310 |
| Electricity 336 | 0.214 / 0.330 | 0.210 / 0.324 | 0.263 / 0.359 | 0.224 / 0.333 | 0.358 / 0.440 | 0.253 / 0.360 | 0.361 / 0.428 | 0.209 / 0.321 | 0.331 / 0.416 | 0.209 / 0.322 |
| Electricity 720 | 0.239 / 0.350 | 0.232 / 0.339 | 0.290 / 0.380 | 0.249 / 0.358 | 0.386 / 0.452 | 0.293 / 0.390 | 0.316 / 0.393 | 0.234 / 0.340 | 0.304 / 0.386 | 0.228 / 0.335 |
| Traffic 96 | 0.601 / 0.376 | 0.564 / 0.351 | 0.628 / 0.393 | 0.567 / 0.350 | 0.748 / 0.426 | 0.597 / 0.369 | 0.698 / 0.386 | 0.615 / 0.377 | 0.730 / 0.405 | 0.612 / 0.371 |
| Traffic 192 | 0.603 / 0.379 | 0.570 / 0.349 | 0.634 / 0.401 | 0.589 / 0.360 | 0.772 / 0.436 | 0.655 / 0.399 | 0.694 / 0.378 | 0.613 / 0.367 | 0.698 / 0.387 | 0.608 / 0.368 |
| Traffic 336 | 0.602 / 0.375 | 0.576 / 0.349 | 0.619 / 0.385 | 0.609 / 0.383 | 0.868 / 0.493 | 0.761 / 0.455 | 0.695 / 0.377 | 0.617 / 0.360 | 0.678 / 0.370 | 0.604 / 0.356 |
| Traffic 720 | 0.615 / 0.378 | 0.602 / 0.360 | 0.656 / 0.403 | 0.642 / 0.397 | 1.074 / 0.606 | 0.924 / 0.521 | 0.692 / 0.376 | 0.638 / 0.360 | 0.672 / 0.364 | 0.634 / 0.360 |
| ILI 24 | 3.025 / 1.189 | 2.745 / 1.075 | 3.862 / 1.370 | 3.370 / 1.213 | 5.402 / 1.581 | 3.742 / 1.252 | 3.961 / 1.289 | 3.534 / 1.212 | 4.806 / 1.471 | 3.437 / 1.148 |
| ILI 32 | 3.034 / 1.201 | 2.748 / 1.072 | 3.871 / 1.379 | 3.088 / 1.164 | 5.296 / 1.587 | 3.807 / 1.272 | 4.022 / 1.311 | 3.652 / 1.235 | 4.669 / 1.455 | 4.055 / 1.248 |
| ILI 48 | 2.444 / 1.041 | 2.793 / 1.059 | 2.891 / 1.138 | 3.207 / 1.153 | 5.226 / 1.569 | 3.940 / 1.272 | 4.269 / 1.340 | 3.506 / 1.168 | 4.488 / 1.371 | 4.055 / 1.248 |
| ILI 64 | 2.686 / 1.112 | 2.678 / 1.071 | 3.164 / 1.223 | 2.954 / 1.112 | 5.304 / 1.578 | 3.670 / 1.234 | 4.370 / 1.385 | 3.487 / 1.177 | 4.607 / 1.404 | 3.828 / 1.224 |
| Improvement vs. base | | 5.6% / 5.9% | | 13.5% / 9.1% | | 38.5% / 26.7% | | 38.3% / 25.2% | | 23.3% / 16.9% |
Table 2: Multi-scale framework without cross-scale normalization. Correctly normalizing across different scales (as per our cross-mean normalization) is essential to obtain good performance when using the multi-scale framework. Each cell: MSE / MAE.

| Dataset | FEDformer | FED-MS (w/o N) | Autoformer | Auto-MS (w/o N) | Informer | Info-MS (w/o N) |
| --- | --- | --- | --- | --- | --- | --- |
| Weather 96 | 0.288 / 0.365 | 0.300 / 0.342 | 0.267 / 0.334 | 0.191 / 0.277 | 0.388 / 0.435 | 0.402 / 0.438 |
| Weather 192 | 0.368 / 0.425 | 0.424 / 0.422 | 0.323 / 0.376 | 0.281 / 0.360 | 0.433 / 0.453 | 0.393 / 0.434 |
| Weather 336 | 0.447 / 0.469 | 0.531 / 0.493 | 0.364 / 0.397 | 0.376 / 0.420 | 0.610 / 0.551 | 0.566 / 0.528 |
| Weather 720 | 0.640 / 0.574 | 0.714 / 0.576 | 0.425 / 0.434 | 0.439 / 0.465 | 0.978 / 0.723 | 1.293 / 0.845 |
| Electricity 96 | 0.201 / 0.317 | 0.258 / 0.356 | 0.197 / 0.312 | 0.221 / 0.337 | 0.344 / 0.421 | 0.407 / 0.465 |
| Electricity 192 | 0.200 / 0.314 | 0.259 / 0.357 | 0.219 / 0.329 | 0.251 / 0.357 | 0.344 / 0.426 | 0.407 / 0.469 |
| Electricity 336 | 0.214 / 0.330 | 0.268 / 0.364 | 0.263 / 0.359 | 0.288 / 0.380 | 0.358 / 0.440 | 0.392 / 0.461 |
| Electricity 720 | 0.239 / 0.350 | 0.285 / 0.368 | 0.290 / 0.380 | 0.309 / 0.397 | 0.386 / 0.452 | 0.391 / 0.453 |
ILI: recorded influenza-like illness patients from the US Center for Disease Control and Prevention. We consider horizon lengths of 24, 32, 48, and 64 with an input length of 32.
Table 3: Single-scale framework with cross-scale normalization ("-N"). The cross-scale normalization (which in the single-scale case corresponds to mean-normalization of the output) does not improve the performance of the Autoformer, as it already has an internal trend-cycle normalization component. However, it does improve the results of the Informer and FEDformer. Each cell: MSE / MAE.

| Dataset | FEDformer | FEDformer-N | Autoformer | Autoformer-N | Informer | Informer-N |
| --- | --- | --- | --- | --- | --- | --- |
| Weather 96 | 0.288 / 0.365 | 0.234 / 0.292 | 0.267 / 0.334 | 0.323 / 0.401 | 0.388 / 0.435 | 0.253 / 0.333 |
| Weather 192 | 0.368 / 0.425 | 0.287 / 0.337 | 0.323 / 0.376 | 0.531 / 0.543 | 0.433 / 0.453 | 0.357 / 0.408 |
| Weather 336 | 0.447 / 0.469 | 0.436 / 0.443 | 0.364 / 0.397 | 0.859 / 0.708 | 0.610 / 0.551 | 0.459 / 0.461 |
| Weather 720 | 0.640 / 0.574 | 0.545 / 0.504 | 0.425 / 0.434 | 1.682 / 1.028 | 0.978 / 0.723 | 0.870 / 0.676 |
| Electricity 96 | 0.201 / 0.317 | 0.194 / 0.307 | 0.197 / 0.312 | 0.251 / 0.364 | 0.344 / 0.421 | 0.247 / 0.356 |
| Electricity 192 | 0.200 / 0.314 | 0.195 / 0.304 | 0.219 / 0.329 | 0.263 / 0.372 | 0.344 / 0.426 | 0.291 / 0.394 |
| Electricity 336 | 0.214 / 0.330 | 0.200 / 0.310 | 0.263 / 0.359 | 0.276 / 0.388 | 0.358 / 0.440 | 0.321 / 0.416 |
| Electricity 720 | 0.239 / 0.350 | 0.225 / 0.332 | 0.290 / 0.380 | 0.280 / 0.385 | 0.386 / 0.452 | 0.362 / 0.434 |
Rob Hyndman, Anne B. Koehler, J. Keith Ord, and Ralph D. Snyder. Forecasting with exponential smoothing: the state space approach. Springer Science & Business Media, 2008.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448-456. PMLR, 2015.
M. G. Kendall. Studies in the history of probability and statistics. Where shall the history of statistics begin? Biometrika, 47(3/4):447-449, 1960. URL http://www.jstor.org/stable/2333315.
Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkgNKkHtvB.
Bjoern Krollner, Bruce J. Vanstone, Gavin R. Finnie, et al. Financial time series forecasting with machine learning techniques: a survey. In ESANN, 2010.
Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long- and short-term temporal patterns with deep neural networks. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 95-104, 2018.
Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Advances in Neural Information Processing Systems, 32:5243-5253, 2019.
Pengju Liu, Hongzhi Zhang, Kai Zhang, Liang Lin, and Wangmeng Zuo. Multi-level wavelet-CNN for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 773-782, 2018.
C. H. López-Caraballo, I. Salfate, J. A. Lazzús, P. Rojas, M. Rivera, and L. Palma-Chilla. Mackey-Glass noisy chaotic time series prediction by a swarm-optimized neural network. In Journal of Physics: Conference Series, volume 720, pp. 012002. IOP Publishing, 2016.
Michael C. Mackey and Leon Glass. Oscillation and chaos in physiological control systems. Science, 197(4300):287-289, 1977.
Kiran Madhusudhanan, Johannes Burchert, Nghia Duong-Trung, Stefan Born, and Lars Schmidt-Thieme. Yformer: U-net inspired transformer architecture for far horizon time series forecasting. arXiv preprint arXiv:2110.08255, 2021.
Michael C. Mozer. Induction of multiscale temporal structure. Advances in Neural Information Processing Systems, 4, 1991.
Allan H. Murphy. What is a good forecast? An essay on the nature of goodness in weather forecasting. Weather and Forecasting, 8(2):281-293, 1993.
Piotr Nawrot, Szymon Tworkowski, Michał Tyrolski, Łukasz Kaiser, Yuhuai Wu, Christian Szegedy, and Henryk Michalewski. Hierarchical transformers are more efficient language models. arXiv preprint arXiv:2110.13711, 2021.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
Syama Sundar Rangapuram, Matthias W. Seeger, Jan Gasthaus, Lorenzo Stella, Yuyang Wang, and Tim Januschowski. Deep state space models for time series forecasting. Advances in Neural Information Processing Systems, 31, 2018.
A IMPLEMENTATION DETAILS
Our implementation is based on the PyTorch (Paszke et al., 2019) implementation 5 of the Autoformer and Informer.
Table 4: Quantitative comparison of the training and test times of different experiments. With a scale factor of 2, the Scaleformer version of each baseline (denoted -MSA) is expected to take roughly twice the time of the original baseline, which is consistent with the measurements below.

| Method | Train 96 (s) | Train 192 (s) | Train 336 (s) | Train 720 (s) | Test 96 (s) | Test 192 (s) | Test 336 (s) | Test 720 (s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Informer | 11.38 | 13.83 | 17.38 | 25.31 | 0.79 | 1.12 | 1.21 | 1.23 |
| Informer-MSA | 28.73 | 31.88 | 36.37 | 51.15 | 2.42 | 2.63 | 2.87 | 2.96 |
| Autoformer | 17.34 | 23.34 | 31.35 | 52.34 | 1.94 | 2.29 | 2.88 | 3.38 |
| Autoformer-MSA | 40.57 | 51.17 | 63.53 | 111.84 | 5.22 | 6.06 | 7.04 | 8.10 |
Table 5: Comparison of the MSE and MAE results for our proposed multi-scale framework version of Informer and Autoformer, with the last step removed and replaced by an interpolation (-MSA_r), against the corresponding original models as the baseline. Results are given in the multi-variate setting, for different lengths of the horizon window. The look-back window size is fixed to 96 for all experiments. The best results are shown in bold. Our method outperforms vanilla versions of both Informer and Autoformer over almost all datasets and settings. Each cell: MSE / MAE.

| Dataset | Autoformer | Autoformer-MSA_r | Informer | Informer-MSA_r |
| --- | --- | --- | --- | --- |
| Exchange 96 | 0.154±0.01 / 0.285±0.00 | 0.132±0.02 / 0.265±0.02 | 0.966±0.10 / 0.792±0.04 | 0.248±0.07 / 0.366±0.06 |
| Exchange 192 | 0.356±0.09 / 0.428±0.05 | 0.418±0.22 / 0.466±0.12 | 1.088±0.04 / 0.842±0.01 | 0.727±0.19 / 0.637±0.09 |
| Exchange 336 | 0.441±0.02 / 0.495±0.01 | 0.736±0.25 / 0.629±0.10 | 1.598±0.08 / 1.016±0.02 | 0.643±0.03 / 0.620±0.02 |
| Exchange 720 | 1.118±0.04 / 0.819±0.02 | 0.773±0.26 / 0.709±0.13 | 2.679±0.22 / 1.340±0.06 | 1.036±0.08 / 0.803±0.03 |
| Weather 96 | 0.267±0.03 / 0.334±0.02 | 0.168±0.01 / 0.239±0.02 | 0.388±0.04 / 0.435±0.03 | 0.211±0.01 / 0.278±0.01 |
| Weather 192 | 0.323±0.01 / 0.376±0.00 | 0.226±0.01 / 0.296±0.01 | 0.433±0.05 / 0.453±0.03 | 0.288±0.03 / 0.336±0.02 |
| Weather 336 | 0.364±0.02 / 0.397±0.01 | 0.298±0.02 / 0.351±0.02 | 0.610±0.04 / 0.551±0.02 | 0.459±0.02 / 0.445±0.02 |
| Weather 720 | 0.425±0.01 / 0.434±0.01 | 0.412±0.04 / 0.434±0.03 | 0.978±0.05 / 0.723±0.02 | 0.593±0.07 / 0.528±0.04 |
| Electricity 96 | 0.197±0.01 / 0.312±0.01 | 0.189±0.00 / 0.305±0.00 | 0.344±0.00 / 0.421±0.00 | 0.194±0.00 / 0.308±0.00 |
| Electricity 192 | 0.219±0.01 / 0.329±0.01 | 0.207±0.00 / 0.323±0.00 | 0.344±0.01 / 0.426±0.01 | 0.212±0.00 / 0.327±0.00 |
| Electricity 336 | 0.263±0.04 / 0.359±0.03 | 0.225±0.01 / 0.339±0.00 | 0.358±0.01 / 0.440±0.01 | 0.246±0.00 / 0.357±0.00 |
| Electricity 720 | 0.290±0.05 / 0.380±0.02 | 0.250±0.01 / 0.361±0.01 | 0.386±0.00 / 0.452±0.00 | 0.283±0.01 / 0.386±0.01 |
| Traffic 96 | 0.628±0.02 / 0.393±0.02 | 0.585±0.01 / 0.365±0.01 | 0.748±0.01 / 0.426±0.01 | 0.595±0.00 / 0.362±0.00 |
| Traffic 192 | 0.634±0.01 / 0.401±0.01 | 0.606±0.01 / 0.375±0.01 | 0.772±0.02 / 0.436±0.01 | 0.629±0.00 / 0.381±0.00 |
| Traffic 336 | 0.619±0.01 / 0.385±0.01 | 0.631±0.02 / 0.400±0.01 | 0.868±0.04 / 0.493±0.03 | 0.692±0.01 / 0.410±0.01 |
| Traffic 720 | 0.656±0.01 / 0.403±0.01 | 0.660±0.00 / 0.418±0.01 | 1.074±0.02 / 0.606±0.01 | 0.803±0.01 / 0.461±0.00 |
Table 8: Comparison of the baselines with our method using different scale factors. Reducing the scale factor from s = 16 to s = 2 increases the number of steps but achieves lower error on average. Each cell: MSE / MAE.

| Dataset | Informer | Informer-MSA (s=16) | Informer-MSA (s=4) | Informer-MSA (s=2) |
| --- | --- | --- | --- | --- |
| Exchange 96 | 0.966±0.10 / 0.792±0.04 | 0.376±0.08 / 0.480±0.04 | 0.246±0.06 / 0.374±0.04 | 0.168±0.05 / 0.298±0.03 |
| Exchange 192 | 1.088±0.04 / 0.842±0.01 | 0.878±0.08 / 0.721±0.03 | 0.661±0.20 / 0.617±0.09 | 0.427±0.12 / 0.484±0.06 |
| Exchange 336 | 1.598±0.08 / 1.016±0.02 | 0.899±0.07 / 0.733±0.03 | 0.697±0.10 / 0.648±0.05 | 0.500±0.05 / 0.535±0.02 |
| Exchange 720 | 2.679±0.22 / 1.340±0.06 | 1.633±0.14 / 1.031±0.05 | 1.457±0.27 / 0.958±0.09 | 1.017±0.05 / 0.790±0.02 |
| Weather 96 | 0.388±0.04 / 0.435±0.03 | 0.208±0.02 / 0.275±0.02 | 0.189±0.01 / 0.259±0.01 | 0.210±0.02 / 0.279±0.02 |
| Weather 192 | 0.433±0.05 / 0.453±0.03 | 0.302±0.02 / 0.354±0.02 | 0.287±0.02 / 0.333±0.01 | 0.289±0.01 / 0.333±0.01 |
| Weather 336 | 0.610±0.04 / 0.551±0.02 | 0.470±0.06 / 0.456±0.04 | 0.420±0.03 / 0.433±0.02 | 0.418±0.04 / 0.427±0.03 |
| Weather 720 | 0.978±0.05 / 0.723±0.02 | 0.639±0.07 / 0.544±0.04 | 0.627±0.07 / 0.543±0.04 | 0.595±0.04 / 0.532±0.02 |
| Electricity 96 | 0.344±0.00 / 0.421±0.00 | 0.215±0.00 / 0.329±0.00 | 0.203±0.00 / 0.315±0.00 | 0.203±0.01 / 0.315±0.01 |
| Electricity 192 | 0.344±0.01 / 0.426±0.01 | 0.257±0.01 / 0.370±0.01 | 0.235±0.01 / 0.347±0.01 | 0.219±0.00 / 0.331±0.00 |
| Electricity 336 | 0.358±0.01 / 0.440±0.01 | 0.300±0.04 / 0.400±0.03 | 0.264±0.01 / 0.373±0.01 | 0.253±0.01 / 0.360±0.01 |
| Electricity 720 | 0.386±0.00 / 0.452±0.00 | 0.334±0.03 / 0.418±0.02 | 0.306±0.01 / 0.401±0.01 | 0.293±0.01 / 0.390±0.01 |
| Traffic 96 | 0.748±0.01 / 0.426±0.01 | 0.648±0.02 / 0.386±0.01 | 0.616±0.00 / 0.374±0.01 | 0.597±0.01 / 0.369±0.00 |
| Traffic 192 | 0.772±0.02 / 0.436±0.01 | 0.679±0.01 / 0.391±0.01 | 0.686±0.01 / 0.404±0.01 | 0.655±0.01 / 0.399±0.01 |
| Traffic 336 | 0.868±0.04 / 0.493±0.03 | 0.811±0.02 / 0.455±0.01 | 0.782±0.03 / 0.451±0.02 | 0.761±0.03 / 0.455±0.03 |
| Traffic 720 | 1.074±0.02 / 0.606±0.01 | 1.020±0.07 / 0.542±0.03 | 0.965±0.03 / 0.521±0.01 | 0.924±0.02 / 0.521±0.01 |

| Dataset | Autoformer | Autoformer-MSA (s=16) | Autoformer-MSA (s=4) | Autoformer-MSA (s=2) |
| --- | --- | --- | --- | --- |
| Exchange 96 | 0.154±0.01 / 0.285±0.00 | 0.182±0.03 / 0.316±0.03 | 0.170±0.03 / 0.304±0.02 | 0.126±0.01 / 0.259±0.01 |
| Exchange 192 | 0.356±0.09 / 0.428±0.05 | 0.514±0.18 / 0.537±0.10 | 0.359±0.14 / 0.443±0.08 | 0.253±0.03 / 0.373±0.02 |
| Exchange 336 | 0.441±0.02 / 0.495±0.01 | 0.527±0.07 / 0.570±0.04 | 0.606±0.18 / 0.585±0.10 | 0.519±0.16 / 0.538±0.09 |
| Exchange 720 | 1.118±0.04 / 0.819±0.02 | 1.019±0.18 / 0.819±0.08 | 0.973±0.22 / 0.809±0.09 | 0.928±0.23 / 0.751±0.09 |
| Weather 96 | 0.267±0.03 / 0.334±0.02 | 0.169±0.00 / 0.239±0.01 | 0.164±0.00 / 0.234±0.01 | 0.163±0.01 / 0.226±0.01 |
| Weather 192 | 0.323±0.01 / 0.376±0.00 | 0.240±0.03 / 0.310±0.03 | 0.229±0.01 / 0.299±0.02 | 0.221±0.01 / 0.290±0.02 |
| Weather 336 | 0.364±0.02 / 0.397±0.01 | 0.304±0.03 / 0.354±0.03 | 0.303±0.01 / 0.350±0.01 | 0.282±0.02 / 0.340±0.03 |
| Weather 720 | 0.425±0.01 / 0.434±0.01 | 0.375±0.01 / 0.403±0.01 | 0.382±0.02 / 0.414±0.01 | 0.369±0.04 / 0.396±0.03 |
| Electricity 96 | 0.197±0.01 / 0.312±0.01 | 0.190±0.00 / 0.306±0.00 | 0.188±0.00 / 0.303±0.00 | 0.188±0.00 / 0.303±0.01 |
| Electricity 192 | 0.219±0.01 / 0.329±0.01 | 0.206±0.01 / 0.320±0.01 | 0.207±0.00 / 0.320±0.00 | 0.197±0.00 / 0.310±0.00 |
| Electricity 336 | 0.263±0.04 / 0.359±0.03 | 0.236±0.02 / 0.344±0.01 | 0.237±0.03 / 0.344±0.02 | 0.224±0.02 / 0.333±0.01 |
| Electricity 720 | 0.290±0.05 / 0.380±0.02 | 0.260±0.01 / 0.368±0.01 | 0.261±0.01 / 0.369±0.01 | 0.249±0.01 / 0.358±0.01 |
| Traffic 96 | 0.628±0.02 / 0.393±0.02 | 0.605±0.01 / 0.380±0.01 | 0.594±0.02 / 0.367±0.01 | 0.567±0.00 / 0.350±0.00 |
| Traffic 192 | 0.634±0.01 / 0.401±0.01 | 0.626±0.01 / 0.393±0.01 | 0.600±0.01 / 0.369±0.00 | 0.589±0.01 / 0.360±0.01 |
| Traffic 336 | 0.619±0.01 / 0.385±0.01 | 0.635±0.01 / 0.400±0.00 | 0.625±0.01 / 0.386±0.01 | 0.619±0.01 / 0.383±0.01 |
| Traffic 720 | 0.656±0.01 / 0.403±0.01 | 0.697±0.03 / 0.439±0.01 | 0.678±0.02 / 0.422±0.02 | 0.642±0.01 / 0.397±0.01 |
Table 9: Scaleformer improves probabilistic metrics when forecasting with Informer. By adapting Informer to make probabilistic predictions, we are able to show that the Scaleformer prior again brings benefits. While such an analysis excludes dedicated probabilistic approaches for conciseness, it nevertheless shows the generality of our proposed approach. Each cell: CRPS / NLL.

| Dataset / Method | 96 | 192 | 336 | 720 |
| --- | --- | --- | --- | --- |
| Exchange, Informer | 0.548±0.02 / 2.360±0.20 | 0.702±0.05 / 4.350±1.45 | 0.826±0.02 / 4.302±0.49 | 1.268±0.06 / 13.140±1.84 |
| Exchange, Informer-MSA | 0.202±0.01 / 0.452±0.09 | 0.284±0.02 / 0.818±0.11 | 0.414±0.06 / 1.724±0.43 | 0.570±0.03 / 2.862±0.21 |
| Weather, Informer | 0.376±0.03 / 1.180±0.21 | 0.502±0.03 / 1.752±0.23 | 0.564±0.02 / 1.928±0.27 | 0.684±0.09 / 2.210±0.46 |
| Weather, Informer-MSA | 0.250±0.02 / 0.392±0.14 | 0.294±0.01 / 0.610±0.04 | 0.308±0.02 / 0.728±0.10 | 0.438±0.04 / 1.270±0.14 |
| Electricity, Informer | 0.330±0.01 / 1.106±0.05 | 0.338±0.05 / 1.254±0.04 | 0.348±0.01 / 1.244±0.07 | 0.528±0.00 / 1.856±0.06 |
| Electricity, Informer-MSA | 0.238±0.01 / 0.578±0.01 | 0.290±0.00 / 0.776±0.01 | 0.324±0.03 / 0.904±0.10 | 0.358±0.01 / 1.022±0.04 |
| Traffic, Informer | 0.372±0.04 / 1.376±0.05 | 0.340±0.01 / 1.404±0.04 | 0.372±0.01 / 1.516±0.06 | 0.568±0.01 / 1.658±0.01 |
| Traffic, Informer-MSA | 0.288±0.01 / 1.094±0.03 | 0.312±0.01 / 1.102±0.04 | 0.368±0.02 / 1.194±0.05 | 0.442±0.02 / 1.378±0.06 |

K EXTENSION TO OTHER METHODS
Table 10: The effect of applying our proposed framework to NHiTS and FiLM as two non-transformer-based models. Best results are shown in bold. Each cell: MSE / MAE.

| Dataset | NHiTS | NHiTS-MSA | FiLM | FiLM-MSA |
| --- | --- | --- | --- | --- |
| Exchange 96 | 0.091±0.00 / 0.218±0.01 | 0.087±0.00 / 0.206±0.00 | 0.083±0.00 / 0.201±0.00 | 0.081±0.00 / 0.197±0.00 |
| Exchange 192 | 0.200±0.02 / 0.332±0.01 | 0.186±0.01 / 0.306±0.00 | 0.179±0.00 / 0.301±0.00 | 0.156±0.00 / 0.284±0.00 |
| Exchange 336 | 0.347±0.03 / 0.442±0.02 | 0.381±0.01 / 0.445±0.01 | 0.341±0.00 / 0.421±0.00 | 0.253±0.01 / 0.378±0.01 |
| Exchange 720 | 0.761±0.20 / 0.662±0.08 | 1.124±0.07 / 0.808±0.03 | 0.896±0.01 / 0.714±0.00 | 0.728±0.01 / 0.659±0.00 |
| Weather 96 | 0.169±0.00 / 0.228±0.00 | 0.167±0.00 / 0.211±0.00 | 0.194±0.00 / 0.235±0.00 | 0.195±0.00 / 0.232±0.00 |
| Weather 192 | 0.210±0.00 / 0.268±0.00 | 0.208±0.00 / 0.253±0.00 | 0.238±0.00 / 0.270±0.00 | 0.235±0.00 / 0.269±0.00 |
| Weather 336 | 0.261±0.00 / 0.313±0.00 | 0.261±0.00 / 0.294±0.00 | 0.288±0.00 / 0.305±0.00 | 0.275±0.00 / 0.303±0.00 |
| Weather 720 | 0.333±0.01 / 0.372±0.01 | 0.331±0.00 / 0.348±0.00 | 0.359±0.00 / 0.350±0.00 | 0.337±0.00 / 0.356±0.00 |
Table 11: Comparison of using standard normalization (Mean+STDEV) and zero-mean shifting (Mean) in our cross-scale normalization. Zero-mean shifting gets better results in almost all of the Traffic, Electricity, and Illness settings, while standard normalization gets better results on the Weather and Exchange datasets. We use zero-mean shifting in our experiments as it removes less information from the input time series. Prediction lengths are 96 (24), 192 (32), 336 (48), and 720 (64); each cell: MSE / MAE.

| Dataset / Normalization | 96 (24) | 192 (32) | 336 (48) | 720 (64) |
| --- | --- | --- | --- | --- |
| Electricity, Mean | 0.199±0.00 / 0.312±0.00 | 0.215±0.00 / 0.327±0.00 | 0.250±0.01 / 0.358±0.01 | 0.297±0.01 / 0.391±0.01 |
| Electricity, Mean+STDEV | 0.248±0.01 / 0.343±0.01 | 0.271±0.02 / 0.363±0.02 | 0.265±0.01 / 0.359±0.00 | 0.365±0.01 / 0.425±0.01 |
| Exchange, Mean | 0.191±0.02 / 0.326±0.02 | 0.374±0.05 / 0.453±0.02 | 0.555±0.03 / 0.567±0.01 | 1.011±0.06 / 0.782±0.02 |
| Exchange, Mean+STDEV | 0.165±0.01 / 0.300±0.01 | 0.280±0.02 / 0.387±0.02 | 0.487±0.13 / 0.518±0.06 | 1.406±0.20 / 0.931±0.07 |
| ILI, Mean | 3.695±0.07 / 1.242±0.01 | 3.798±0.07 / 1.266±0.01 | 3.759±0.09 / 1.238±0.01 | 3.606±0.02 / 1.222±0.00 |
| ILI, Mean+STDEV | 4.582±0.29 / 1.435±0.07 | 4.681±0.15 / 1.470±0.03 | 3.611±0.39 / 1.243±0.07 | 4.156±0.… / … |
Table 12: Comparison of our proposed framework and the baselines using synthetic data with 3 series generated from the Mackey-Glass formulation with τ = {18, 12, 9}. Each cell: MSE / MAE.

| Method | 96 | 192 | 336 | 720 |
| --- | --- | --- | --- | --- |
| Autoformer (base) | 0.435±0.05 / 0.464±0.03 | 0.399±0.06 / 0.456±0.03 | 0.361±0.03 / 0.449±0.02 | 0.361±0.02 / 0.452±0.01 |
| Autoformer (MS) | 0.075±0.01 / 0.192±0.02 | 0.113±0.02 / 0.237±0.02 | 0.164±0.03 / 0.298±0.03 | 0.468±0.22 / 0.501±0.14 |
| Informer (base) | 0.265±0.06 / 0.367±0.03 | 0.287±0.01 / 0.396±0.00 | 0.243±0.02 / 0.372±0.01 | 0.246±0.01 / 0.375±0.01 |
| Informer (MS) | 0.078±0.00 / 0.192±0.00 | 0.109±0.01 / 0.233±0.02 | 0.164±0.01 / 0.309±0.01 | 0.227±0.03 / 0.356±0.02 |
Table 13: Student's t-test p-values for the test of whether the adaptive loss provides an improvement. The base model is FEDformer. While we cannot conclude that the adaptive loss provides an improvement for exchange rate and certain window sizes for traffic, for all other datasets the improvement is notable. Each cell: p-value for MSE / MAE.

| Dataset | 96 | 192 | 336 | 720 |
| --- | --- | --- | --- | --- |
| Exchange rate | 0.7157 / 0.77286 | 0.46741 / 0.57392 | 0.61317 / 0.68405 | 0.97102 / 0.56755 |
| Electricity | 0.05844 / 0.01267 | 0.00124 / 0.00187 | 0.01294 / 0.00576 | 0.00374 / 0.00597 |
| Weather | 0.25509 / 0.11246 | 0.00014 / 8e-05 | 0.0648 / 0.00826 | 0.05881 / 0.02213 |
| Traffic | 0.21029 / 0.02572 | 0.78931 / 0.19641 | 0.40819 / 0.248 | 0.3491 / 0.17579 |
Table 14: Student's t-test p-values for the test of whether the multi-scale prior provides an improvement. The base model is FEDformer. For certain window sizes of weather and exchange rate the improvement is not statistically provable; for most other settings it is almost always significant. Each cell: p-value for MSE / MAE.

| Dataset | 96 | 192 | 336 | 720 |
| --- | --- | --- | --- | --- |
| Exchange rate | 0.01109 / 0.01657 | 0.02263 / 0.01674 | 0.14988 / 0.26543 | 0.11071 / 0.13141 |
| Electricity | 0.00029 / 0.00088 | 2e-05 / 0.00017 | 0.28221 / 0.3316 | 0.00593 / 0.07219 |
| Weather | 0.02525 / 0.01987 | 0.2201 / 0.05433 | 0.14593 / 0.12861 | 0.00362 / 0.02396 |
| Traffic | 0.01024 / 0.00241 | 0.01714 / 0.00749 | 0.00058 / 0.00033 | 0.00019 / 0.00012 |
1 https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
2 https://pems.dot.ca.gov
3 https://www.bgc-jena.mpg.de/wetter/
4 https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html
Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv:1803.01271, 2018.
Jonathan T. Barron. A general and adaptive robust loss function. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4331-4339, 2019.
George E. P. Box and Gwilym M. Jenkins. Some recent advances in forecasting and control. Journal of the Royal Statistical Society, Series C (Applied Statistics), 17(2):91-109, 1968.
Peter J. Brockwell and Richard A. Davis. Time series: theory and methods. Springer Science & Business Media, 2009.
Cristian Challu, Kin G. Olivares, Boris N. Oreshkin, Federico Garza, Max Mergenthaler, and Artur Dubrawski. N-HiTS: Neural hierarchical interpolation for time series forecasting. arXiv preprint arXiv:2201.12886, 2022.
Zhengping Che, Sanjay Purushotham, Guangyu Li, Bo Jiang, and Yan Liu. Hierarchical deep generative models for multi-rate multivariate time series. In International Conference on Machine Learning, pp. 784-793. PMLR, 2018.
Ling Chen, Donghui Chen, Zongjiang Shang, Youdong Zhang, Bo Wen, and Chenghu Yang. Multi-scale adaptive graph neural network for multivariate time series forecasting. arXiv preprint arXiv:2201.04828, 2022.
Zipeng Chen, Qianli Ma, and Zhenxi Lin. Time-aware multi-scale RNNs for time series modeling. In IJCAI, 2021.
Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. Rethinking attention with performers, 2020.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
Zhicheng Cui, Wenlin Chen, and Yixin Chen. Multi-scale convolutional neural networks for time series classification. arXiv preprint arXiv:1603.06995, 2016.
Qianggang Ding, Sifan Wu, Hao Sun, Jiadong Guo, and Jian Guo. Hierarchical multi-scale Gaussian transformer for stock movement prediction. In IJCAI, pp. 4640-4646, 2020.
Dazhao Du, Bing Su, and Zhewei Wei. Preformer: Predictive transformer with multi-scale segment-wise correlations for long-term time series forecasting. arXiv preprint arXiv:2202.11356, 2022.
Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6824-6835, 2021.
J. Doyne Farmer, Edward Ott, and James A. Yorke. The dimension of chaotic attractors. Physica D: Nonlinear Phenomena, 7(1):153-180, 1983. URL https://www.sciencedirect.com/science/article/pii/0167278983901252.
Marco A. R. Ferreira, David M. Higdon, Herbert K. H. Lee, and Mike West. Multi-scale and hidden resolution time series models. Bayesian Analysis, 1(4):947-967, 2006.
Peter J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73-101, 1964. URL https://doi.org/10.1214/aoms/1177703732.
David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181-1191, 2020.
Jeffery D. Scargle. Studies in astronomical time series analysis. I. Modeling random processes in the time domain. The Astrophysical Journal Supplement Series, 45:1-71, 1981.
Lifeng Shen, Zhuocong Li, and James Kwok. Timeseries anomaly detection using temporal hierarchical one-class network. Advances in Neural Information Processing Systems, 33:13016-13026, 2020.
Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244, 2000.
Sandeep Subramanian, Ronan Collobert, Marc'Aurelio Ranzato, and Y-Lan Boureau. Multi-scale transformer language models. arXiv preprint arXiv:2005.00581, 2020.
Aris A. Syntetos, John E. Boylan, and Stephen M. Disney. Forecasting for inventory planning: a 50-year review. Journal of the Operational Research Society, 60(1):S149-S160, 2009.
Binh Tang and David S. Matteson. Probabilistic transformer for time series analysis. Advances in Neural Information Processing Systems, 34:23592-23608, 2021.
Neo Wu, Bradley Green, Xue Ben, and Shawn O'Banion. Deep transformer models for time series forecasting: The influenza prevalence case. arXiv preprint arXiv:2001.08317, 2020.
Jiehui Xu, Jianmin Wang, Mingsheng Long, et al. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems, 34, 2021.
George Zerveas, Srideepika Jayaraman, Dhaval Patel, Anuradha Bhamidipaty, and Carsten Eickhoff. A transformer-based framework for multivariate time series representation learning. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 2114-2124, 2021.
Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, and Jianfeng Gao. Multi-scale vision longformer: A new vision transformer for high-resolution image encoding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2998-3008, 2021.
Yucheng Zhao, Chong Luo, Zheng-Jun Zha, and Wenjun Zeng. Multi-scale group transformer for long sequence modeling in speech separation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pp. 3251-3257, 2021.
Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of AAAI, 2021.
Tian Zhou, Ziqing Ma, Qingsong Wen, Liang Sun, Tao Yao, Rong Jin, et al. FiLM: Frequency improved Legendre memory model for long-term time series forecasting. arXiv preprint arXiv:2205.08897, 2022a.
Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proc. 39th International Conference on Machine Learning (ICML 2022), 2022b. |
204,837,843 | NEURAL EXECUTION OF GRAPH ALGORITHMS | Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs. In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving. Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel (breadth-first search, Bellman-Ford) as well as sequential (Prim's algorithm). As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are bestsuited for such objectives, and validate this claim empirically. We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks-showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm. * Work performed while the author was at DeepMind. 1 We note that there exist good reasons for choosing this approach, e.g. ease of optimisation.Under review as a conference paper at ICLR 2020Given that the majority of popular algorithms requires making discrete decisions over neighbourhoods (e.g. "which edge should be taken?"), we suggest that a highly suitable architecture for this task is a message-passing neural network (Gilmer et al., 2017) with a maximisation aggregator-a claim we verify, demonstrating clear performance benefits for simultaneously learning breadth-first search for reachability with the Bellman-Ford algorithm for shortest paths. We also verify its applicability to sequential reasoning, through learning Prim's algorithm(Prim, 1957)for minimum spanning trees.Note that our approach complements Reed & De Freitas(2015): we show that a relatively simple graph neural network architecture is able to learn and algorithmically transfer among different tasks, do not require explicitly denoting subroutines, and tackle tasks with superlinear time complexity. | [] | NEURAL EXECUTION OF GRAPH ALGORITHMS
Petar Veličković (DeepMind) [email protected]
Rex Ying (Stanford University)
Matilde Padovano (University of Cambridge)
Raia Hadsell (DeepMind)
Charles Blundell (DeepMind)
NEURAL EXECUTION OF GRAPH ALGORITHMS
Under review as a conference paper at ICLR 2020
Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs. In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving. Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel (breadth-first search, Bellman-Ford) as well as sequential (Prim's algorithm). As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are best-suited for such objectives, and validate this claim empirically. We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks, showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm.
* Work performed while the author was at DeepMind.
1 We note that there exist good reasons for choosing this approach, e.g. ease of optimisation.
INTRODUCTION
A multitude of important real-world tasks can be formulated as tasks over graph-structured inputs, such as navigation, web search, protein folding, and game-playing. Theoretical computer science has successfully discovered effective and highly influential algorithms for many of these tasks. But many problems are still considered intractable from this perspective.
Machine learning approaches have been applied to many of these classic tasks, from tasks with known polynomial time algorithms such as shortest paths (Graves et al., 2016; Xu et al., 2019) and sorting (Reed & De Freitas, 2015), to intractable tasks such as travelling salesman (Vinyals et al., 2015; Bello et al., 2016; Kool et al., 2018) and boolean satisfiability (Selsam et al., 2018; Selsam & Bjørner, 2019). Recently, this work often relies on advancements in graph representation learning (Bronstein et al., 2017; Hamilton et al., 2017) with graph neural networks (GNNs) (Li et al., 2015; Kipf & Welling, 2016). In almost all cases so far, ground-truth solutions are used to drive learning, giving the model complete freedom to find a mapping from raw inputs to such a solution.¹
Many classical algorithms share related subroutines: for example, shortest path computation (via the Bellman-Ford (Bellman, 1958) algorithm) and breadth-first search both must enumerate sets of edges adjacent to a particular node. Inspired by previous work on the more general tasks of program synthesis and learning to execute (Zaremba & Sutskever, 2014; Kaiser & Sutskever, 2015; Kurach et al., 2015; Reed & De Freitas, 2015), we show that by learning several algorithms simultaneously and providing a supervision signal, our neural network is able to demonstrate positive knowledge transfer between learning different algorithms. The supervision signal is driven by how a known classical algorithm would process such inputs (including any relevant intermediate outputs), providing explicit (and reusable) guidance on how to tackle graph-structured problems. We call this approach neural graph algorithm execution.

Given that the majority of popular algorithms require making discrete decisions over neighbourhoods (e.g. "which edge should be taken?"), we suggest that a highly suitable architecture for this task is a message-passing neural network (Gilmer et al., 2017) with a maximisation aggregator, a claim we verify by demonstrating clear performance benefits for simultaneously learning breadth-first search for reachability with the Bellman-Ford algorithm for shortest paths. We also verify its applicability to sequential reasoning, through learning Prim's algorithm (Prim, 1957) for minimum spanning trees.

Note that our approach complements Reed & De Freitas (2015): we show that a relatively simple graph neural network architecture is able to learn and algorithmically transfer among different tasks, does not require explicitly denoting subroutines, and tackles tasks with superlinear time complexity.
PROBLEM SETUP
GRAPH ALGORITHMS
We consider graphs $G = (V, E)$ where $V$ is the set of nodes (or vertices) and $E$ is the set of edges (pairs of nodes). We will consider graphs for two purposes: 1) as part of the task to be solved (e.g., the graph provided as input to breadth-first search), 2) as the input to a graph neural network.
A graph neural network receives a sequence of $T \in \mathbb{N}$ graph-structured inputs. For each element of the sequence, we will use a fixed $G$ and vary meta-data associated with the nodes and edges of the graph in the input. In particular, to provide a graph $G = (V, E)$ as input to a graph neural network, each node $i \in V$ has associated node features $x_i^{(t)} \in \mathbb{R}^{N_x}$, where $t \in \{1, \dots, T\}$ denotes the index in the input sequence and $N_x$ is the dimensionality of the node features. Similarly, each edge $(i, j) \in E$ has associated edge features $e_{ij}^{(t)} \in \mathbb{R}^{N_e}$, where $N_e$ is the dimensionality of the edge features. At each step, the algorithm produces node-level outputs $y_i^{(t)} \in \mathbb{R}^{N_y}$. Some of these outputs may then be reused as inputs on the next step; i.e., $x_i^{(t+1)}$ may contain some elements of $y_i^{(t)}$.
LEARNING TO EXECUTE GRAPH ALGORITHMS
We are interested in learning a graph neural network that can execute one or more of several potential algorithms. The specific algorithm to be executed, denoted $A$, is provided as an input to the network. The structure of the graph neural network follows the encode-process-decode paradigm. First we will describe the encode-process-decode architecture, and then describe the specific parameterisations that are used for each sub-network. The relation between the variables used by the graph algorithm and our neural algorithm executor setup is further illustrated in Figure 1.
For each algorithm $A$ we define an encoder network $f_A$. It is applied to the current input features and previous latent features $h_i^{(t-1)}$ (with $h_i^{(0)} = 0$) to produce encoded inputs $z_i^{(t)}$, as such:
$$z_i^{(t)} = f_A\left(x_i^{(t)}, h_i^{(t-1)}\right) \tag{1}$$
The encoded inputs are then processed using the processor network $P$. The processor network shares its parameters among all algorithms being learnt. The processor network takes as input the encoded inputs $Z^{(t)} = \{z_i^{(t)}\}_{i \in V}$ and edge features $E^{(t)} = \{e_{ij}^{(t)}\}_{e \in E}$, and produces as output latent node features $H^{(t)} = \{h_i^{(t)} \in \mathbb{R}^K\}_{i \in V}$:
$$H^{(t)} = P\left(Z^{(t)}, E^{(t)}\right) \tag{2}$$
The node- and algorithm-specific outputs are then calculated by the decoder network, $g_A$:
$$y_i^{(t)} = g_A\left(z_i^{(t)}, h_i^{(t)}\right) \tag{3}$$
Note that the processor network also needs to make a decision on whether to terminate the algorithm. This is performed by an algorithm-specific termination network, $T_A$, which provides the probability of termination $\tau^{(t)}$, after applying the logistic sigmoid activation $\sigma$, as follows:
$$\tau^{(t)} = \sigma\left(T_A\left(H^{(t)}, \bar{H}^{(t)}\right)\right) \tag{4}$$
where $\bar{H}^{(t)} = \frac{1}{|V|}\sum_{i \in V} h_i^{(t)}$ is the average node embedding. If the algorithm has not terminated (i.e. $\tau^{(t)} \leq 0.5$), the computation of Eqns. 1-4 is repeated, with parts of $y_i^{(t)}$ potentially reused in $x_i^{(t+1)}$.
Figure 1: Node values $x_i^{(t)}$ (e.g. reachability, shortest-path distance) are updated at every step of execution; analogously, the node values are predicted by the neural executor from the hidden representation $h_i^{(t)}$.
In our experiments, all algorithm-dependent networks, $f_A$, $g_A$ and $T_A$, are all linear projections, placing the majority of the representational power of our method in the processor network $P$. As we would like the processor network to be mindful of the structural properties of the input, we employ a graph neural network (GNN) layer capable of exploiting edge features as $P$. Specifically, we compare graph attention networks (GATs) (Equation 5, left) against message-passing neural networks (MPNNs) (Equation 5, right):
$$h_i^{(t)} = \mathrm{ReLU}\left(\sum_{(j,i)\in E} a\left(z_i^{(t)}, z_j^{(t)}, e_{ij}^{(t)}\right) \mathbf{W} z_j^{(t)}\right) \qquad h_i^{(t)} = U\left(z_i^{(t)},\ \bigoplus_{(j,i)\in E} M\left(z_i^{(t)}, z_j^{(t)}, e_{ij}^{(t)}\right)\right) \tag{5}$$
where $\mathbf{W}$ is a learnable projection matrix, $a$ is an attention mechanism producing scalar coefficients, while $M$ and $U$ are neural networks producing vector messages. $\bigoplus$ is an elementwise aggregation operator, such as maximisation, summation or averaging. We use linear projections for $M$ and $U$.
Note that the processor network P is algorithm-agnostic, and can hence be used to execute several algorithms simultaneously. Lastly, we note that the setup also easily allows for including edge-level outputs, and graph-level inputs and outputs-however, these were not required in our experiments.
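A minimal sketch of the MPNN-max variant of Equation 5, with linear $M$ and $U$; here `edge_index` holds (sender, receiver) pairs, `e` holds the scalar edge weights with shape [|E|, 1], and the `scatter_reduce` call assumes PyTorch >= 1.12 (all names are illustrative):

```python
import torch
import torch.nn as nn

class MPNNMax(nn.Module):
    def __init__(self, k):
        super().__init__()
        self.M = nn.Linear(2 * k + 1, k)   # message function over [z_i ; z_j ; e_ij]
        self.U = nn.Linear(2 * k, k)       # update function over [z_i ; aggregated]

    def forward(self, z, edge_index, e):
        src, dst = edge_index                                  # edges (j, i)
        msg = self.M(torch.cat([z[dst], z[src], e], dim=-1))   # one message per edge
        # Elementwise max over each node's incoming messages; with self-edges
        # present in the graph, every node receives at least one message.
        agg = torch.full_like(z, float('-inf'))
        agg = agg.scatter_reduce(0, dst.unsqueeze(-1).expand_as(msg),
                                 msg, reduce='amax')
        return self.U(torch.cat([z, agg], dim=-1))
```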
EXPERIMENTAL SETUP
Graph generation To provide our learner with a wide variety of input graph structure types, we follow prior work and generate undirected graphs from seven categories:
• Ladder graphs;
• 2D grid graphs;
• Trees, uniformly randomly generated from the Prüfer sequence;
• Erdős-Rényi (Erdős & Rényi, 1960) graphs, with edge probability $\min\left(\frac{\log_2 |V|}{|V|}, 0.5\right)$;
• Barabási-Albert (Albert & Barabási, 2002) graphs, attaching either four or five edges to every incoming node;
• 4-Community graphs-first generating four disjoint Erdős-Rényi graphs with edge probability 0.7, followed by interconnecting their nodes with edge probability 0.01;
• 4-Caveman (Watts, 1999) graphs, having each of their intra-clique edges removed with probability 0.7, followed by inserting 0.025|V | additional shortcut edges between cliques.
We additionally insert a self-edge to every node in the graphs, in order to support easier retention of self-information through message passing. Finally, we attach a real-valued weight to every edge, drawn uniformly from the range [0.2, 1]. These weight values serve as the sole edge features, $e_{ij}^{(t)}$, for all steps t. Note that sampling edge weights in this manner essentially guarantees the uniqueness of the recovered solution, simplifying downstream evaluation. We also ignore corner-case inputs (such as negative weight cycles), leaving their handling to future work.
Figure 2: Alignment between one step of the Bellman-Ford update (left), computing $\min\left(x_u^{(t)}, \min_{(v,u)\in E}\left(x_v^{(t)} + e_{vu}\right)\right)$, and the message-passing computation of the neural executor (right), computing $U\left(z_u^{(t)}, \bigoplus_{(v,u)\in E} M\left(z_u^{(t)}, z_v^{(t)}, e_{vu}^{(t)}\right)\right)$.
We aim to study the algorithm execution task from a "programmer" perspective: human experts may manually inspect only relatively small graphs, and any algorithms derived from this should apply to arbitrarily large graphs. For each category, we generate 100 training and 5 validation graphs of only 20 nodes. For testing, 5 additional graphs of 20, 50 and 100 nodes are generated per category.
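A sketch of the sampling procedure for a few of the categories listed above, using NetworkX (the helper, category names, and RNG handling are illustrative; the remaining categories follow the same pattern):

```python
import networkx as nx
import numpy as np

def sample_graph(n, category, seed=0):
    rng = np.random.default_rng(seed)
    if category == 'erdos_renyi':
        p = min(np.log2(n) / n, 0.5)
        g = nx.erdos_renyi_graph(n, p, seed=seed)
    elif category == 'barabasi_albert':
        g = nx.barabasi_albert_graph(n, int(rng.choice([4, 5])), seed=seed)
    elif category == 'tree':
        g = nx.random_tree(n, seed=seed)
    else:
        raise ValueError(f'unhandled category: {category}')
    g.add_edges_from((i, i) for i in g.nodes)       # self-edges for self-information
    for u, v in g.edges:
        g[u][v]['weight'] = rng.uniform(0.2, 1.0)   # edge weights in [0.2, 1]
    return g
```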
Parallel algorithms We consider two classical algorithms: breadth-first search for reachability, and the Bellman-Ford algorithm (Bellman, 1958) for shortest paths. The former maintains a single-bit value in each node, determining whether said node is reachable from a source node, and the latter maintains a scalar value in each node, representing its distance from the source node.
In both cases, the algorithm is initialised by randomly selecting the source node, $s$. As the initial input to the algorithms, $x_i^{(1)}$, we have:
$$x_i^{(1)} = \begin{cases} 1 & i = s \\ 0 & i \neq s \end{cases} \ \text{(BFS)} \qquad x_i^{(1)} = \begin{cases} 0 & i = s \\ +\infty & i \neq s \end{cases} \ \text{(Bellman-Ford)} \tag{6}$$
This information is then propagated according to the chosen algorithm: a node becomes reachable from s if any of its neighbours are reachable from s, and we may update the distance to a given node as the minimal way to reach any of its neighbours, then taking the connecting edge:
$$x_i^{(t+1)} = \begin{cases} 1 & x_i^{(t)} = 1 \\ 1 & \exists j.\,(j,i) \in E \wedge x_j^{(t)} = 1 \\ 0 & \text{otherwise} \end{cases} \ \text{(BFS)} \qquad x_i^{(t+1)} = \min\left(x_i^{(t)},\ \min_{(j,i)\in E}\left(x_j^{(t)} + e_{ji}^{(t)}\right)\right) \ \text{(B-F)} \tag{7}$$
For breadth-first search, no additional information is being computed, hence $y_i^{(t)} = x_i^{(t+1)}$. Additionally, at each step the Bellman-Ford algorithm may compute, for each node, the "predecessor" node, $p_i^{(t)}$, in the shortest path (indicating which edge should be taken to reach this node). This information is ultimately used to reconstruct shortest paths, and hence represents the crucial output:
$$p_i^{(t)} = \begin{cases} i & i = s \\ \operatorname{argmin}_{j:(j,i)\in E}\left(x_j^{(t)} + e_{ji}^{(t)}\right) & i \neq s \end{cases} \ \text{(Bellman-Ford)} \tag{8}$$
Hence, for Bellman-Ford, $y_i^{(t)} = p_i^{(t)} \,\Vert\, x_i^{(t+1)}$, where $\Vert$ is concatenation. To provide a numerically stable value for $+\infty$, we set all such entries to the length of the longest shortest path in the graph plus one.
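The supervision signal for Bellman-Ford can be generated by directly unrolling Eqns. 6-8; a plain-Python sketch (data-structure choices and names are illustrative):

```python
def bellman_ford_trace(neighbours, w, s, n):
    """neighbours[i]: nodes j with (j, i) in E (excluding the self-edge);
    w[(j, i)]: weight of edge (j, i); s: source node; n: number of nodes."""
    INF = float('inf')
    x = [0.0 if i == s else INF for i in range(n)]            # Eq. 6
    trace = []
    while True:
        p, nx_ = [], []
        for i in range(n):
            if i == s:
                p.append(i)
                nx_.append(x[i])
                continue
            j_best = min(neighbours[i], key=lambda j: x[j] + w[(j, i)])
            p.append(j_best)                                  # Eq. 8
            nx_.append(min(x[i], x[j_best] + w[(j_best, i)]))  # Eq. 7
        trace.append((nx_, p))
        if nx_ == x:          # fixed point reached: the algorithm terminates
            return trace
        x = nx_
```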
We learn to execute these two algorithms simultaneously: at each step, concatenating the relevant $x_i^{(t)}$ and $y_i^{(t)}$ values for them. As both of the algorithms considered here (and most others) rely on discrete decisions over neighbourhoods, learning to execute them should be naturally suited for the MPNN with the max-aggregator, a claim which we directly verify in the remainder of this section.
Sequential algorithms Unlike the previous two algorithms, single iterations of many classical graph algorithms will specifically focus on one node at a time-very often the case with constructive tasks. We seek to demonstrate that our neural graph algorithm execution paradigm aligns well with this setting too, and in this context we study Prim's algorithm (Prim, 1957) for minimum spanning trees.
Prim's algorithm maintains a partially constructed minimum spanning tree (MST)-initially, it is a singleton tree consisting of only a source node, s. At each step, Prim's algorithm searches for a new node to connect to the MST-chosen so that the edge attaching it to the tree is the lightest possible:
$$x_i^{(1)} = \begin{cases} 1 & i = s \\ 0 & i \neq s \end{cases} \qquad x_i^{(t+1)} = \begin{cases} 1 & x_i^{(t)} = 1 \\ \boxed{1} & i = \operatorname{argmin}_{j:\,x_j^{(t)}=0}\ \min_{k:\,x_k^{(t)}=1} e_{jk}^{(t)} \\ 0 & \text{otherwise} \end{cases} \tag{9}$$
Once the new node is selected, the algorithm attaches it to the MST via this edge; similarly to Bellman-Ford, we can keep track of predecessor nodes, $p_i^{(t)}$:
$$p_i^{(t)} = \begin{cases} i & i = s \\ p_i^{(t-1)} & i \neq s \wedge x_i^{(t)} = 1 \\ \boxed{\operatorname{argmin}_{j:\,x_j^{(t)}=1} e_{ij}^{(t)}} & x_i^{(t)} = 0 \wedge x_i^{(t+1)} = 1 \\ \bot \ \text{(undefined)} & \text{otherwise} \end{cases} \tag{10}$$
In Equations 9-10, the boxed updates are the only modifications to the state of the algorithm at step $t$, centering only on the selected node to attach to the MST. Once again, the algorithm requires discrete decisions based on neighbourhood edge weights, hence we expect outperformance of MPNN-max.
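For concreteness, a plain-Python sketch that unrolls Eqns. 9-10 to produce the ground-truth node order and predecessors used as supervision (names are illustrative; `w` is assumed to contain both orientations of every undirected edge):

```python
def prim_trace(n, w, s):
    in_mst = {s}
    pred = {s: s}
    order = [s]
    while len(in_mst) < n:
        # Boxed update of Eq. 9: lightest edge crossing the current cut.
        j, k = min(((j, k) for j in range(n) if j not in in_mst
                           for k in in_mst if (j, k) in w),
                   key=lambda jk: w[jk])
        in_mst.add(j)
        pred[j] = k      # Eq. 10: predecessor of the newly attached node
        order.append(j)
    return order, pred
```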
For a visualisation of the expected alignment between a graph algorithm and our neural graph executors, refer to Figure 2.
Neural network architectures To assess the comparative benefits of different architectures for the neural algorithm execution task, we consider many candidate networks executing the computation of Equations 1-5, especially the processor network $P$. For the MPNN update rule, we consider maximisation, mean and summation aggregators. For the GAT update rule, we consider the originally proposed attention mechanism, as well as Transformer attention. Additionally for GAT, we also consider attending over the full graph: adding a second attention head, only acting on the non-edges of the graph (and hence not accepting any edge features); the two heads' features are then concatenated and passed through another linear layer.
Analogously to our expectation that the best-performing MPNN rule will perform maximisation, we attempt to force the attentional coefficients of GAT to be as sharp as possible, applying either an entropy penalty to them or the Gumbel softmax trick (Jang et al., 2016).
We perform an additional sanity check to ensure that a GNN-like architecture is necessary in this case. Prior work (Xu et al., 2019) has already demonstrated the unsuitability of MLPs for reasoning tasks like these, and they will not support variable amounts of nodes. Here, instead, we consider an LSTM (Hochreiter & Schmidhuber, 1997) architecture into which serialised graphs are fed (we use an edge list, in a setup similar to (Graves et al., 2016)).
In all cases, the neural networks compute a latent dimension of K = 32 features, and are optimised using the Adam SGD optimiser (Kingma & Ba, 2014) on the binary cross-entropy for the reachability predictions, mean squared error for the distance predictions, categorical cross-entropy for the predecessor node predictions, and binary cross-entropy for predicting termination (all applied simultaneously). We use an initial learning rate of 0.0005, and perform early stopping on the validation accuracy for the predecessor node (with a patience of 10 epochs). If the termination network $T_A$ does not terminate the neural network computation within $|V|$ steps, it is assumed terminated at that point.

Table 1 (fragment): reachability accuracy, mean-step / last-step, on 20/50/100-node test graphs.
| Model | 20 nodes | 50 nodes | 100 nodes |
| --- | --- | --- | --- |
| … | 99.66% / 100.0% | 94.25% / 100.0% | 94.72% / 98.63% |
| MPNN-max | 100.0% / 100.0% | 100.0% / 100.0% | 99.92% / 99.80% |

For Prim's algorithm, as only one node at a time is updated, we optimise the categorical cross-entropy of predicting the next node, masked across all the nodes not added to the MST yet.
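The combined training objective for the parallel algorithms described above, as a sketch (the output/target field names are illustrative, not the authors' interface):

```python
import torch.nn.functional as F

def step_loss(out, tgt):
    # Sum of the four per-output losses, applied simultaneously at each step.
    return (F.binary_cross_entropy_with_logits(out['reach'], tgt['reach'])
            + F.mse_loss(out['dist'], tgt['dist'])
            + F.cross_entropy(out['pred'], tgt['pred'])       # predecessor
            + F.binary_cross_entropy_with_logits(out['tau'], tgt['terminated']))
```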
It should be noted that, when learning to execute Bellman-Ford and Prim's algorithms, the prediction of $p_i^{(t)}$ is performed by scoring each node-pair using an edge-wise scoring network (a neural network predicting a scalar score from $h_i^{(t)} \,\Vert\, h_j^{(t)} \,\Vert\, e_{ij}^{(t)}$), followed by a softmax over all neighbours of $i$.
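A sketch of this edge-wise predecessor scoring (the softmax over each node's neighbours is applied at loss time; all names are illustrative):

```python
import torch

def predecessor_logits(h, e, edge_index, score_net):
    src, dst = edge_index
    # One scalar score per candidate edge (j, i), from [h_i ; h_j ; e_ij].
    return score_net(torch.cat([h[dst], h[src], e], dim=-1)).squeeze(-1)
```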
RESULTS AND DISCUSSION
Parallel algorithm execution In order to evaluate how faithfully the neural algorithm executor replicates the two parallel algorithms, we propose reporting the accuracy of predicting the reachability (for breadth-first search; Table 1), as well as predicting the predecessor node (for Bellman-Ford; Table 2). We report this metric averaged across all steps t (to give a sense of how well the algorithm is imitated across time), as well as the last-step performance (which corresponds to the final solution). While it is not necessary for recovering the final answer, we also provide the mean squared error of the models on the Bellman-Ford distance information, as well as the termination accuracy (computed at each step separately)-averaged across all timesteps-in Table 3.
The results confirm our hypotheses: the MPNN-max model exhibits superior generalisation performance on both reachability and shortest-path predecessor node prediction. Even when allowing for hardening the attention of GAT-like models (using entropy or Gumbel softmax), the more flexible computational model of MPNN is capable of outperforming them. The performance gap on predicting the predecessor also widens significantly as the test graph size increases.
Our findings are compounded by observing the mean squared error metric on the intermediate result:
with the MPNN-max being the only model providing a reasonable level of regression error at the 100node generalisation level. It further accentuates that, even though models like the MPNN-sum model may also learn various thresholding functions-as demonstrated by (Xu et al., 2018)-aggregating messages in this way can lead to outputs of exploding magnitude, rendering the network hard to numerically control for larger graphs.
Lastly, we perform two additional studies, executing the shortest-path prediction on MPNN-max without predicting reachability, and without supervising on any intermediate algorithm computations; that is, learning to predict predecessors (and termination behaviour) directly from the inputs, $x_i^{(1)}$. Note that this is the primary way such tasks have been tackled by graph neural networks in prior work. We report these results as no-reach and no-algo in Table 2, respectively.
Looking at the no-reach ablation, we observe clear signs of positive knowledge transfer occurring between the reachability and shortest-path tasks: when the shortest path algorithm is learned in isolation, the predictive power of MPNN-max drops significantly (while still outperforming many other approaches). In Appendix A, we provide a brief theoretical insight to justify this. Similarly, considering the no-algo experiment, we conclude that there is a clear benefit to supervising on the distance information-giving an additional performance improvement compared to the standard approach of only supervising on the final downstream outputs. Taken in conjunction, these two results provide encouragement for studying this particular learning setup.
We note that our observations still hold when training/testing on larger graphs (Appendix B). We also find that there is no significant overfitting to a particular input graph category-however we do provide an in-depth analysis of per-category performance in Appendix C.
Additional metrics The graphs we generate may be roughly partitioned into two types based on their local regularity-specifically, the ladder, grid and tree graphs all exhibit regular local structure, while the remaining four categories are more variable. As such, we hypothesise that learning from a graph of one such type only will exhibit better generalisation for graphs of the same type. We verify this claim in Table 4, where we train on either only Erdős-Rényi graphs or trees of 20 nodes, and report the generalisation performance on 100-node graphs across the seven categories. The results directly validate our claim, implying that the MPNN-max model is capable of biasing itself to the structural regularities found in the input graphs. Despite this bias, the model still achieves generalisation performances that outperform any other model, even when trained on the full dataset.
Further, we highlight that our choices of aggregation metrics may not be the most ideal way to assess performance of the algorithm executors: the last-step performance provides no indication of faithfulness to the original algorithm, while the mean-step performance may be artificially improved by terminating the algorithm at a later point. While here we leave the problem of determining a better single-number metric to future work, we also decide to compound the results in Tables 1-2 by plotting the test reachability/predecessor accuracies for each timestep of the algorithm individually (for 100-node graphs): refer to Figure 3.
Such visualisations can help identify cases where neural executors are "cheating", by e.g. immediately predicting every node is reachable: in these cases, we can see a characteristic (initially weak but steadily improving) performance curve. It also further solidifies the outperformance of MPNN-max.

Figure 3: The per-step algorithm execution performances in terms of reachability accuracy (left), distance mean-squared error (middle) and predecessor accuracy (right), tested on 100-node graphs after training on 20-node graphs. Please mind the scale of the MSE plot.
Lastly, in Appendix D we apply the recently proposed GNNExplainer model to detecting which graph substructures contributed the most to certain predictions.
Sequential algorithm execution We demonstrate results for all considered architectures on executing Prim's algorithm within Table 5. We provide the accuracy of predicting the next MST node (computed against the algorithm's "ground-truth" ordering), as well as the accuracy of reconstructing the final MST (via the predecessors).
As anticipated, our results once again show strong generalisation outperformance of MPNN-max. We additionally compared against a non-sequential version (no-algo), where the MPNN-max model was trained to directly predict predecessors (without requiring sequentially choosing nodes). This resulted in poor generalisation to larger graphs, weaker than even the LSTM sequential baseline. The insights from our setup verify that our neural graph execution paradigm is applicable to sequential algorithm execution as well, substantially expanding its range of possible applications.
CONCLUSIONS
In this manuscript, we have presented the neural graph algorithm execution task, where-unlike prior approaches-we optimise neural networks to imitate individual steps and all intermediate outputs of classical graph algorithms, parallel as well as sequential. Through extensive evaluation-especially on the tasks of reachability, shortest paths and minimum spanning trees-we have determined a highly suitable architecture in maximisation-based message passing neural networks, and identified clear benefits for multi-task learning and positive transfer, as many classical algorithms share related subroutines. We believe that the results presented here should serve as strong motivation for further work in the area, attempting to learn more algorithms simultaneously and exploiting the similarities between their respective subroutines whenever appropriate.
A THEORETICAL INSIGHTS
We provide a brief theoretical insight into why learning to imitate multiple algorithms simultaneously may provide benefits to downstream predictive power.
Our insight comes from an information-theoretic perspective. Consider two algorithms, A and B, that both operate on the same input, x, and produce outputs $y_A$ and $y_B$, respectively. We consider the task of learning to execute A (that is, predicting $y_A$ from x), with and without $y_B$ provided². We operate on the further assumption that A and B share subroutines, implying that, knowing x, there is information content preserved between $y_A$ and $y_B$. Formally, we say that the conditional mutual information of the corresponding random variables, $I(Y_A; Y_B \mid X)$, is positive.
Expanding out the expression for $I(Y_A; Y_B \mid X)$, denoting Shannon entropy by $H$, we obtain:

$$\begin{aligned} I(Y_A; Y_B \mid X) &= H(Y_A \mid X) + H(Y_B \mid X) - H(Y_A, Y_B \mid X) \\ &= H(Y_A \mid X) + H(Y_B \mid X) - \left(H(Y_A \mid Y_B, X) + H(Y_B \mid X)\right) \\ &= H(Y_A \mid X) - H(Y_A \mid Y_B, X) \end{aligned} \tag{11}$$

As $I(Y_A; Y_B \mid X) > 0$, we conclude $H(Y_A \mid X) > H(Y_A \mid Y_B, X)$; therefore, providing $y_B$ upfront strictly reduces the information-theoretic uncertainty in $y_A$, thus making it potentially more suitable for being learned by optimisation techniques.
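The identity in Equation (11) can be checked numerically on any toy joint distribution; the sketch below uses made-up weights for p(x, y_A, y_B) over binary variables:

```python
import itertools
import math

vals = (0, 1)
raw = [3, 1, 1, 3, 1, 3, 3, 1]               # made-up weights for p(x, y_a, y_b)
p = {k: w / sum(raw) for w, k in zip(raw, itertools.product(vals, repeat=3))}

def marginal(idx):
    """Marginal of p over the kept coordinates of (x, y_a, y_b)."""
    out = {}
    for key, pr in p.items():
        sub = tuple(key[i] for i in idx)
        out[sub] = out.get(sub, 0.0) + pr
    return out

px, pxa, pxb = marginal((0,)), marginal((0, 1)), marginal((0, 2))
H_A_given_X = -sum(pr * math.log2(pxa[(x, ya)] / px[(x,)])
                   for (x, ya, yb), pr in p.items())
H_A_given_BX = -sum(pr * math.log2(pr / pxb[(x, yb)])
                    for (x, ya, yb), pr in p.items())
# By Equation (11), the gap is exactly I(Y_A; Y_B | X) >= 0:
print(H_A_given_X, H_A_given_BX, H_A_given_X - H_A_given_BX)
```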
B LARGER-SCALE STUDIES
To investigate the behaviour of the models when generalising to larger graphs, we further conduct experiments for executing the Bellman-Ford algorithm when training on graphs with 100 nodes, and test on graphs with 1000 nodes, as reported in Table 6. These results further solidify the outperformance of MPNN-based models, even outside of the studied "programmer" regime.
C PERFORMANCE PER GRAPH TYPE
As our training and testing graphs come from a specific set of seven categories, it is natural to study the predictive power of the model conditioned on the testing category. Firstly, in Table 7, we provide the per-category results of predicting reachability and predecessor for MPNN-max. We report that the performance is roughly evenly distributed across the categories-with trees being the easiest to learn on and grids/community graphs the hardest. These results align well with our expectations:
• In trees, computing the shortest path tree is equivalent to a much simpler task (rooting the input tree in the source vertex) that a model may readily pick up on.
• Making proper choices on grids requires propagating decisions over long trajectories³; as such, a poor decision early on may more strongly compromise the overall performance of retrieving the shortest path tree.
• As the community graphs are composed of four interconnected dense graphs (Erdős-Rényi with p = 0.7), the node degree distribution the model is required to handle may change drastically as graphs increase in size. This may require the model to aggregate messages over substantially larger neighbourhoods than it is used to during training.
D EXPLAINING GNN PREDICTIONS
We provide a further qualitative analysis of what the MPNN-max architecture has learnt when performing algorithm execution. In particular, we apply a model similar to GNNExplainer for explaining the decisions made during the neural execution. For the reachability task, the explainer answers the question: "for a given node u, which node in the neighbourhood of u influences the reachability prediction made by the neural execution model?".
We use the best performing model, MPNN-max, to demonstrate the explanation. Given an already trained model on graphs with 20 nodes, starting from any node u of the neural execution sequence, we optimise for an adjacency mask M, which is initialised to 1 for all edges that connect to u, and 0 everywhere else. Instead of the original adjacency matrix A, we use the element-wise product A ⊙ σ(M) as the input adjacency to the model. We fix the model parameters and only train the adjacency mask, using the same reachability loss with an additional term: the sum of values in the adjacency mask. This encourages the explanation to remove as many edges as possible from the immediate neighbourhood of u, while still being able to perform the correct reachability updates.
When the mask is trained until convergence, we pick the edge that has the maximum weight in the mask to be the predecessor that explains the reachability of the node.
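A hedged PyTorch sketch of this mask-optimisation loop is shown below; the executor's `reachability_loss(x, adj, node)` interface, the logit initialisation values and the sparsity weight `lam` are hypothetical stand-ins for the described procedure:

```python
import torch

def explain_predecessor(model, A, x, u, steps=200, lam=0.005):
    """GNNExplainer-style edge-mask optimisation around node u (a sketch;
    model.reachability_loss(x, adj, node) is a hypothetical interface)."""
    n = A.shape[0]
    idx = torch.arange(n)
    incident = (A > 0) & ((idx[:, None] == u) | (idx[None, :] == u))
    # logits of +6 give sigmoid ~ 1 on u's incident edges; -6 gives ~ 0 elsewhere
    mask = torch.nn.Parameter(incident.float() * 12.0 - 6.0)
    opt = torch.optim.Adam([mask], lr=0.05)
    for _ in range(steps):
        soft = torch.sigmoid(mask)
        # reachability loss keeps the prediction correct; the sum term prunes edges
        loss = model.reachability_loss(x, A * soft, node=u) + lam * soft.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # the strongest surviving edge at u is read off as the explained predecessor
    return int(mask.detach()[u].argmax())
```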
We then perform the same explanation procedure on the ground-truth predecessor of u in the BFS algorithm, and continue the process until we reach the source node, s, of the BFS algorithm. If all explanations are correct, we will observe a path that connects the node u to the starting node of the BFS algorithm. Any disconnection indicates an incorrect explanation that deviates from the ground-truth, which could either be due to an incorrect prediction of the model, or an incorrect explanation.
Using the standard training dataset in our experiments, we observe that 82.16% of the instances have a path explanation that matches the ground-truth predecessors with no error. Two of these examples are visualised in Figure 4. Additionally, 93.85% of the predecessor explanations correspond to the ground-truth, providing further qualitative insight into the algorithm execution capacity of MPNN-max.

Figure 4: Identified reachability paths for a noisy Caveman graph and tree graph, using GNNExplainer. The purple edges indicate the predecessor relationships identified by the explainer, while the yellow edges are the remainder of the graph's edges.
Figure 1: A visualisation of the relation between local computations of graph algorithms (left) and the neural graph algorithm executor (right).

Figure 2: Illustrating the alignment of one step of the Bellman-Ford algorithm (left) with one step of a message passing neural network (right), and the supervision signal used for the algorithm learner.
Table 1: Accuracy of predicting reachability at different test-set sizes, trained on graphs of 20 nodes. GAT* corresponds to the best GAT setup as per Section 3 (GAT-full using the full graph).
Reachability (mean step accuracy / last-step accuracy)
Table 2: Accuracy of predicting the shortest-path predecessor node at different test-set sizes. (no-reach) corresponds to training without the reachability task. (no-algo) corresponds to the classical setup of directly training on the predecessor, without predicting any intermediate outputs or distances.
Predecessor (mean step accuracy / last-step accuracy)
Table 3: Mean squared error for predicting the intermediate distance information from Bellman-Ford, and accuracy of the termination network compared to the ground-truth algorithm, averaged across all timesteps. (no-reach) corresponds to training without the reachability task.

B-F mean squared error / mean termination accuracy
Model                                     20 nodes         50 nodes         100 nodes
LSTM (Hochreiter & Schmidhuber, 1997)     3.857 / 83.43%   11.92 / 86.74%   74.36 / 83.55%
GAT* (Veličković et al., 2018)            43.49 / 85.33%   123.1 / 84.88%   183.6 / 82.16%
GAT-full* (Vaswani et al., 2017)          7.189 / 77.14%   28.89 / 75.51%   58.08 / 77.30%
MPNN-mean (Gilmer et al., 2017)           0.021 / 98.57%   23.73 / 89.29%   91.58 / 86.81%
MPNN-sum (Gilmer et al., 2017)            0.156 / 98.09%   4.745 / 88.11%   +∞ / 87.71%
MPNN-max (Gilmer et al., 2017)            0.005 / 98.89%   0.013 / 98.58%   0.238 / 97.82%
MPNN-max (no-reach)                       0.452 / 80.18%   2.512 / 91.77%   2.628 / 85.22%
Table 4: The predictive performance of MPNN-max on 100-node graphs, after training on 20-node graphs of a particular type (Erdős-Rényi, or trees).

                   Reachability                            Predecessor
Graph type         From Erdős-Rényi    From trees          From Erdős-Rényi    From trees
Ladder             93.16% / 93.98%     99.93% / 99.67%     76.63% / 65.94%     94.99% / 92.55%
2-D Grid           92.86% / 87.05%     99.85% / 99.32%     79.50% / 70.75%     94.06% / 91.39%
Tree               82.72% / 82.07%     99.92% / 99.62%     70.16% / 63.26%     98.44% / 97.33%
Erdős-Rényi        100.0% / 100.0%     100.0% / 100.0%     96.17% / 93.94%     91.11% / 85.94%
Barabási-Albert    100.0% / 100.0%     100.0% / 100.0%     94.91% / 92.90%     83.90% / 75.79%
4-Community        100.0% / 100.0%     100.0% / 100.0%     90.01% / 86.38%     75.88% / 64.04%
4-Caveman          100.0% / 100.0%     100.0% / 100.0%     91.55% / 90.04%     80.02% / 72.06%
Table 5: Accuracy of selecting the next node to add to the minimum spanning tree, and predicting the minimum spanning tree predecessor node, at different test-set sizes. (no-algo) corresponds to the classical setup of directly training on the predecessor, without adding nodes sequentially.

Accuracy (next MST node / MST predecessor)
Model                                     20 nodes          50 nodes          100 nodes
LSTM (Hochreiter & Schmidhuber, 1997)     11.29% / 52.81%   3.54% / 47.74%    2.66% / 40.89%
GAT* (Veličković et al., 2018)
GAT-full* (Vaswani et al., 2017)          29.94% / 64.27%   18.91% / 53.34%   14.83% / 51.49%
MPNN-mean (Gilmer et al., 2017)
MPNN-sum (Gilmer et al., 2017)            48.05% / 77.41%   24.40% / 61.83%   31.60% / 43.98%
MPNN-max (Gilmer et al., 2017)            87.85% / 93.23%   63.89% / 91.14%   41.37% / 90.02%
MPNN-max (no-algo)                        - / 71.02%        - / 49.83%        - / 23.61%
Table 6: Results of scaling to large graphs with 1000 nodes, while training on graphs with 100 nodes.

Model       Reachability        Predecessor
LSTM        66.63% / 72.62%     33.73% / 32.36%
GAT*        83.43% / 89.15%     37.53% / 36.16%
MPNN-max    100.0% / 99.99%     96.45% / 96.25%
Table 7: The predictive performance of MPNN-max on 100-node graphs, after training on 20-node graphs, partitioned by graph type.

Graph type         Reachability        Predecessor
Ladder             95.57% / 98.63%     94.13% / 91.47%
2-D grid           95.93% / 93.28%     87.90% / 83.77%
Tree               99.55% / 98.32%     98.60% / 97.83%
Erdős-Rényi        100.0% / 100.0%     94.00% / 89.65%
Barabási-Albert    100.0% / 100.0%     92.71% / 88.60%
4-Community        100.0% / 100.0%     86.25% / 79.65%
4-Caveman          100.0% / 100.0%     91.55% / 86.96%
² Note that here we're implicitly assuming that y_B is trivial enough to be fully learnt on its own, and thus can be provided to the model. This is a more strict way of assuming that B is a "simpler" algorithm than A.
³ Note that this is also the case with ladder graphs, but these trajectories cannot get very complicated in this case, as the ladder graph is a product of two path graphs.
Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 74(1):47, 2002.
Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
Richard Bellman. On a routing problem. Quarterly of Applied Mathematics, 16(1):87-90, 1958.
Irwan Bello, Hieu Pham, Quoc V Le, Mohammad Norouzi, and Samy Bengio. Neural combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940, 2016.
Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.
Paul Erdős and Alfréd Rényi. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1):17-60, 1960.
Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471, 2016.
William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017.
Jessica B Hamrick, Kelsey R Allen, Victor Bapst, Tina Zhu, Kevin R McKee, Joshua B Tenenbaum, and Peter W Battaglia. Relational inductive bias for physical construction in humans and machines. arXiv preprint arXiv:1806.01203, 2018.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! arXiv preprint arXiv:1803.08475, 2018.
Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015.
Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
Robert Clay Prim. Shortest connection networks and some generalizations. The Bell System Technical Journal, 36(6):1389-1401, 1957.
Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.
Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. Relational recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 7299-7310, 2018.
Daniel Selsam and Nikolaj Bjørner. Guiding high-performance SAT solvers with unsat-core predictions. In International Conference on Theory and Applications of Satisfiability Testing, pp. 336-353. Springer, 2019.
Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, and David L Dill. Learning a SAT solver from single-bit supervision. arXiv preprint arXiv:1802.03685, 2018.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692-2700, 2015.
Duncan J Watts. Networks, dynamics, and the small-world phenomenon. American Journal of Sociology, 105(2):493-527, 1999.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? arXiv preprint arXiv:1905.13211, 2019.
Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. arXiv preprint arXiv:1806.08804, 2018.
Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. GNNExplainer: A tool for post-hoc explanation of graph neural networks. arXiv preprint arXiv:1903.03894, 2019.
Jiaxuan You, Rex Ying, Xiang Ren, William L Hamilton, and Jure Leskovec. GraphRNN: A deep generative model for graphs. arXiv preprint arXiv:1802.08773, 2018.
Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. arXiv preprint arXiv:1906.04817, 2019.
Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
9,655,643 | LATENT SEQUENCE DECOMPOSITIONS | Sequence-to-sequence models rely on a fixed decomposition of the target sequences into a sequence of tokens that may be words, word-pieces or characters. The choice of these tokens and the decomposition of the target sequences into a sequence of tokens is often static, and independent of the input, output data domains. This can potentially lead to a sub-optimal choice of token dictionaries, as the decomposition is not informed by the particular problem being solved. In this paper we present Latent Sequence Decompositions (LSD), a framework in which the decomposition of sequences into constituent tokens is learnt during the training of the model. The decomposition depends both on the input sequence and on the output sequence. In LSD, during training, the model samples decompositions incrementally, from left to right by locally sampling between valid extensions. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9% WER compared to a character baseline of 14.8% WER. When combined with a convolutional network on the encoder, we achieve a WER of 9.6%. * Work done at Google Brain. | [
2863491,
14434979,
6628106,
5590763,
7147309,
11212020,
1245593,
13972671
] | LATENT SEQUENCE DECOMPOSITIONS
7 Feb 2017
William Chan [email protected]
Carnegie Mellon University
Massachusetts Institute of Technology
Yu Zhang [email protected]
Carnegie Mellon University
Massachusetts Institute of Technology
Quoc V Le
Carnegie Mellon University
Massachusetts Institute of Technology
Navdeep Jaitly [email protected]
Carnegie Mellon University
Massachusetts Institute of Technology
Google Brain
Carnegie Mellon University
Massachusetts Institute of Technology
LATENT SEQUENCE DECOMPOSITIONS
7 Feb 2017. Published as a conference paper at ICLR 2017
Sequence-to-sequence models rely on a fixed decomposition of the target sequences into a sequence of tokens that may be words, word-pieces or characters. The choice of these tokens and the decomposition of the target sequences into a sequence of tokens is often static, and independent of the input, output data domains. This can potentially lead to a sub-optimal choice of token dictionaries, as the decomposition is not informed by the particular problem being solved. In this paper we present Latent Sequence Decompositions (LSD), a framework in which the decomposition of sequences into constituent tokens is learnt during the training of the model. The decomposition depends both on the input sequence and on the output sequence. In LSD, during training, the model samples decompositions incrementally, from left to right by locally sampling between valid extensions. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9% WER compared to a character baseline of 14.8% WER. When combined with a convolutional network on the encoder, we achieve a WER of 9.6%. * Work done at Google Brain.
INTRODUCTION
Sequence-to-sequence (seq2seq) models (Sutskever et al., 2014;Cho et al., 2014) with attention have been successfully applied to many applications including machine translation (Luong et al., 2015;Jean et al., 2015), parsing (Vinyals et al., 2015a), image captioning (Vinyals et al., 2015b;Xu et al., 2015) and Automatic Speech Recognition (ASR) (Chan et al., 2016;Bahdanau et al., 2016a).
Previous work has assumed a fixed deterministic decomposition for each output sequence. The output representation is usually a fixed sequence of words (Sutskever et al., 2014;Cho et al., 2014), phonemes (Chorowski et al., 2015), characters (Chan et al., 2016;Bahdanau et al., 2016a) or even a mixture of characters and words (Luong & Manning, 2016). However, in all these cases, the models are trained towards one fixed decomposition for each output sequence.
We argue against using fixed deterministic decompositions of a sequence that has been defined a priori. Word segmented models (Luong et al., 2015;Jean et al., 2015) often have to deal with large softmax sizes, rare words and Out-of-Vocabulary (OOV) words. Character models (Chan et al., 2016;Bahdanau et al., 2016a) overcome the OOV problem by modelling the smallest output unit, however this typically results in long decoder lengths and computationally expensive inference. And even with mixed (but fixed) character-word models (Luong & Manning, 2016), it is unclear whether such a predefined segmentation is optimal. In all these examples, the output decomposition is only a function of the output sequence. This may be acceptable for problems such as translations, but inappropriate for tasks such as speech recognition, where segmentation should also be informed by the characteristics of the inputs, such as audio. We want our model to have the capacity and flexibility to learn a distribution of sequence decompositions. Additionally, the decomposition should be a sequence of variable length tokens as deemed most probable. For example, language may be more naturally represented as word pieces (Schuster & Nakajima, 2012) rather than individual characters. In many speech and language tasks, it is probably more efficient to model "qu" as one output unit rather than "q" + "u" as separate output units (since in English, "q" is almost always followed by "u"). Word piece models also naturally solve rare word and OOV problems similar to character models.
The output sequence decomposition should be a function of both the input sequence and the output sequence (rather than output sequence alone). For example, in speech, the choice of emitting "ing" as one word piece or as separate tokens of "i" + "n" + "g" should be a function of the current output word as well as the audio signal (i.e., speaking style).
We present the Latent Sequence Decompositions (LSD) framework. LSD does not assume a fixed decomposition for an output sequence, but rather learns to decompose sequences as function of both the input and the output sequence. Each output sequence can be decomposed to a set of latent sequence decompositions using a dictionary of variable length output tokens. The LSD framework produces a distribution over the latent sequence decompositions and marginalizes over them during training. During test inference, we find the best decomposition and output sequence, by using beam search to find the most likely output sequence from the model.
LATENT SEQUENCE DECOMPOSITIONS
In this section, we describe LSD more formally. Let x be our input sequence, y be our output sequence and z be a latent sequence decomposition of y. The latent sequence decomposition z consists of a sequence of z_i ∈ Z where Z is the constructed token space. Each token z_i need not be the same length, but rather in our framework, we expect the tokens to have different lengths. Specifically, $Z \subseteq \bigcup_{i=1}^{n} C^i$ where C is the set of singleton tokens and n is the length of the largest output token. In ASR, C would typically be the set of English characters, while Z would be word pieces (i.e., n-grams of characters).
To give a concrete example, consider a set of tokens {"a", "b", "c", "at", "ca", "cat"}. With this set of tokens, the word "cat" may be represented as the sequence "c", "a", "t", or the sequence "ca", "t", or alternatively as the single token "cat". Since the appropriate decomposition of the word "cat" is not known a priori, the decomposition itself is latent.
Note that the length |z a | of a decomposition z a need not be the same as the length of the output sequence, |y| (for example "ca", "t" has a length of 2, whereas the sequence is 3 characters long). Similarly, a different decomposition z b (for example the 3-gram token "cat") of the same sequence may be of a different length (in this case 1).
Each decomposition, collapses to the target output sequence using a trivial collapsing function y = collapse(z). Clearly, the set of decompositions, {z : collapse(z) = y}, of a sequence, y, using a non-trivial token set, Z, can be combinatorially large.
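For concreteness, a small sketch of the collapsing function and an exhaustive enumeration of decompositions is given below, using the example token set from the text (we additionally include the singleton "t", since the singleton set C is assumed to be contained in Z):

```python
TOKENS = {"a", "b", "c", "t", "at", "ca", "cat"}   # example set, plus singleton "t"

def collapse(z):
    """The trivial collapsing function: concatenate the tokens of z."""
    return "".join(z)

def decompositions(y, tokens=TOKENS):
    """All z such that collapse(z) == y; combinatorial in |y| in general."""
    if y == "":
        return [[]]
    out = []
    for n in range(1, len(y) + 1):
        if y[:n] in tokens:
            out.extend([y[:n]] + rest for rest in decompositions(y[n:], tokens))
    return out

print(decompositions("cat"))
# [['c', 'a', 't'], ['c', 'at'], ['ca', 't'], ['cat']]
```

Note the enumeration also finds "c", "at", which the running example above omits; collapsing any of these sequences yields the same y.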
If there was a known, unique, correct segmentation z* for a given pair (x, y), one could simply train the model to output the fixed deterministic decomposition z*. However, in most problems, we do not know the best possible decomposition z*; indeed it may be possible that the output can be correctly decomposed into multiple alternative but valid segmentations. For example, in end-to-end ASR we typically use characters as the output unit of choice (Chan et al., 2016; Bahdanau et al., 2016a) but word pieces may be better units as they more closely align with acoustic entities such as syllables. However, the most appropriate decomposition z* for a given (x, y) pair is often unknown. Given a particular y, the best z* could even change depending on the input sequence x (i.e., speaking style).
In LSD, we want to learn a probabilistic segmentation mapping from x → z → y. The model produces a distribution of decompositions, z, given an input sequence x, and the objective is to maximize the log-likelihood of the ground truth sequence y. We can accomplish this by factorizing and marginalizing over all possible z latent sequence decompositions under our model p(z|x; θ) with parameters θ:
$$\log p(y \mid x; \theta) = \log \sum_z p(y, z \mid x; \theta) \tag{1}$$
$$= \log \sum_z p(y \mid z, x)\, p(z \mid x; \theta) \tag{2}$$
$$= \log \sum_z p(y \mid z)\, p(z \mid x; \theta) \tag{3}$$
where p(y|z) = 1[collapse(z) = y] (an indicator function) captures path decompositions z that collapse to y. Due to the exponential number of decompositions of y, exact inference and search is intractable for any non-trivial token set Z and sequence length |y|. We describe a beam search algorithm to do approximate inference decoding in Section 4.
Similarly, computing the exact gradient is intractable. However, we can derive a gradient estimator by differentiating w.r.t. θ and taking its expectation:

$$\frac{\partial}{\partial \theta} \log p(y \mid x; \theta) = \frac{1}{p(y \mid x; \theta)} \frac{\partial}{\partial \theta} \sum_z p(y \mid x, z)\, p(z \mid x; \theta) \tag{4}$$
$$= \frac{1}{p(y \mid x; \theta)} \sum_z p(y \mid x, z)\, \nabla_\theta\, p(z \mid x; \theta) \tag{5}$$
$$= \frac{1}{p(y \mid x; \theta)} \sum_z p(y \mid x, z)\, p(z \mid x; \theta)\, \nabla_\theta \log p(z \mid x; \theta) \tag{6}$$
$$= \mathbb{E}_{z \sim p(z \mid x, y; \theta)}\left[\nabla_\theta \log p(z \mid x; \theta)\right] \tag{7}$$
Equation 6 uses the identity $\nabla_\theta f_\theta(x) = f_\theta(x) \nabla_\theta \log f_\theta(x)$, assuming $f_\theta(x) \neq 0$ for all $x$.
Equation 7 gives us an unbiased estimator of our gradient. It tells us to sample some latent sequence decomposition z ∼ p(z|y, x; θ) under our model's posterior, where z is constrained to be a valid sequence that collapses to y, i.e. z ∈ {z′ : collapse(z′) = y}. To train the model, we sample z ∼ p(z|y, x; θ) and compute the gradient of ∇_θ log p(z|x; θ) using backpropagation. However, sampling z ∼ p(z|y, x; θ) is difficult. Doing this exactly is computationally expensive, because it would require sampling correctly from the posterior; it would be possible to do this using a particle-filtering-like algorithm, but that would require a full forward pass through the output sequence.
Instead, in our implementation we use a heuristic to sample z ∼ p(z|y, x; θ). At each output time step t, having produced tokens z_1, z_2, ..., z_{t−1}, we sample z_t ∼ p(z_t|x, y, z_{<t}, θ) in a left-to-right fashion. In other words, we sample valid extensions at each time step t. At the start of the training, this left-to-right sampling procedure is not a good approximation to the posterior, since the next step probabilities at a time step include probabilities of all future paths from that point.
For example, consider the case when the target word is "cat", and the vocabulary includes all possible characters and the tokens "ca", and "cat". At time step 1, when the valid next step options are "c", "ca", "cat", their relative probabilities reflect all possible sequences "c*", "ca*", "cat*" respectively, that start from the first time step of the model. These sets of sequences include sequences other than the target sequence "cat". Thus sampling from the distribution at step 1 is a biased procedure.
However, as training proceeds the model places more and more mass only on the correct hypotheses, and the relative probabilities that the model produces between valid extensions get closer to the posterior. In practice, we find that when the model is trained with this method, it quickly collapses to using single character targets, and never escapes from this local minimum¹. Thus, we follow an ε-greedy exploration strategy commonly found in the reinforcement learning literature (Sutton & Barto, 1998): we sample z_t from a mixture of a uniform distribution over valid next tokens and p(z_t|x, y, z_{<t}, θ). The relative probability of using a uniform distribution vs. p(·|x, y, z_{<t}, θ) is varied over training. With this modification the model learns to use the longer n-grams of characters appropriately, as shown in later sections.
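A minimal sketch of this ε-greedy, valid-extension sampler follows; `next_token_probs` is a hypothetical stand-in for the model's (unnormalised) next-token probabilities, and we assume all singleton characters are in the token set so a valid extension always exists:

```python
import numpy as np

def sample_decomposition(y, tokens, next_token_probs, eps, rng):
    """Left-to-right sampling of a valid decomposition z of y, mixing the
    model's next-token distribution with a uniform one (probability eps).
    next_token_probs(prefix, token) is a stand-in for the trained model."""
    z, pos = [], 0
    while pos < len(y):
        valid = [t for t in tokens if y.startswith(t, pos)]  # valid extensions
        if rng.random() < eps:
            p = np.full(len(valid), 1.0 / len(valid))        # uniform exploration
        else:
            p = np.array([next_token_probs(z, t) for t in valid], dtype=float)
            p /= p.sum()               # renormalise over valid extensions only
        tok = valid[rng.choice(len(valid), p=p)]
        z.append(tok)
        pos += len(tok)
    return z
```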
MODEL
In this work, we model the latent sequence decompositions p(z|x) with an attention-based seq2seq model. Each output token z_i is modelled as a conditional distribution over all previously emitted tokens z_{<i} and the input sequence x using the chain rule:
$$p(z \mid x; \theta) = \prod_i p(z_i \mid x, z_{<i}) \tag{8}$$
The input sequence x is processed through an EncodeRNN network. The EncodeRNN function transforms the features x into some higher level representation h. In our experimental implementation EncodeRNN is a stacked Bidirectional LSTM (BLSTM) (Schuster & Paliwal, 1997;Graves et al., 2013) with hierarchical subsampling (Hihi & Bengio, 1996;Koutnik et al., 2014):
$$h = \mathrm{EncodeRNN}(x) \tag{9}$$
The output sequence z is generated with an attention-based transducer one z i token at a time:
$$s_i = \mathrm{DecodeRNN}([z_{i-1}, c_{i-1}], s_{i-1}) \tag{10}$$
$$c_i = \mathrm{AttentionContext}(s_i, h) \tag{11}$$
$$p(z_i \mid x, z_{<i}) = \mathrm{TokenDistribution}(s_i, c_i) \tag{12}$$
The DecodeRNN produces a transducer state s_i as a function of the previously emitted token z_{i−1}, the previous attention context c_{i−1} and the previous transducer state s_{i−1}. In our implementation, DecodeRNN is an LSTM (Hochreiter & Schmidhuber, 1997) function without peephole connections.
The AttentionContext function generates c_i with a content-based MLP attention network. Energies e_i are computed as a function of the encoder features h and the current transducer state s_i. The energies are normalized into an attention distribution α_i. The attention context c_i is created as an α_i-weighted linear sum over h:
$$e_{i,j} = \langle v, \tanh(\phi(s_i, h_j)) \rangle \tag{13}$$
$$\alpha_{i,j} = \frac{\exp(e_{i,j})}{\sum_{j'} \exp(e_{i,j'})} \tag{14}$$
$$c_i = \sum_j \alpha_{i,j} h_j \tag{15}$$
where φ is a linear transform function. TokenDistribution is an MLP function with softmax outputs modelling the distribution p(z_i|x, z_{<i}).
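A NumPy sketch of Equations (13)-(15) is given below; the shapes and the `phi` callable are our own assumptions:

```python
import numpy as np

def attention_context(s_i, h, v, phi):
    """Content-based MLP attention, Equations (13)-(15); shapes are ours.
    s_i: transducer state, h: (T, d) encoder features, v: vector, phi: MLP."""
    e = np.array([v @ np.tanh(phi(s_i, h_j)) for h_j in h])  # energies  (13)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                                      # attention (14)
    return alpha @ h                                          # context   (15)
```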
DECODING
During inference we want to find the most likely word sequence given the input acoustics:
$$\hat{y} = \operatorname*{arg\,max}_y \log \sum_z p(y \mid z)\, p(z \mid x) \tag{16}$$
however this is obviously intractable for any non-trivial token space and sequence lengths. We simply approximate this by decoding for the best word piece sequence ẑ and then collapsing it to its corresponding word sequence ŷ:
$$\hat{z} = \operatorname*{arg\,max}_z \log p(z \mid x) \tag{17}$$
$$\hat{y} = \mathrm{collapse}(\hat{z}) \tag{18}$$
We approximate the best ẑ sequence by doing a left-to-right beam search (Chan et al., 2016).
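A sketch of such an n-best left-to-right beam search over word pieces follows; `step_logprobs` is a hypothetical stand-in returning log p(token | x, prefix) for each candidate token:

```python
def beam_search(step_logprobs, beam_width=8, max_len=200, eos="</s>"):
    """n-best left-to-right beam search for the best z (Equation 17);
    step_logprobs: prefix -> {token: log p(token | x, prefix)} (a stand-in)."""
    beams, finished = [([], 0.0)], []
    for _ in range(max_len):
        candidates = [(prefix + [tok], score + lp)
                      for prefix, score in beams
                      for tok, lp in step_logprobs(prefix).items()]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_width]:
            (finished if prefix[-1] == eos else beams).append((prefix, score))
        if not beams:
            break
    best_z, _ = max(finished + beams, key=lambda c: c[1])
    return best_z  # apply collapse(best_z) to obtain the word sequence (Eq. 18)
```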
EXPERIMENTS
We experimented with the Wall Street Journal (WSJ) ASR task. We used the standard configuration of the train si284 dataset for training, dev93 for validation and eval92 for test evaluation. Our input features were 80-dimensional filterbanks computed every 10ms with delta and delta-delta acceleration, normalized with per-speaker mean and variance as generated by Kaldi (Povey et al., 2011). The EncodeRNN function is a 3-layer BLSTM with 256 LSTM units per direction (or 512 total) and a time reduction factor of 4 = 2². The DecodeRNN is a 1-layer LSTM with 256 LSTM units. All the weight matrices were initialized with a uniform distribution U(−0.075, 0.075) and bias vectors to 0. Gradient norm clipping of 1 was used, with Gaussian weight noise N(0, 0.075) and L2 weight decay 1e−5 (Graves, 2011). We used ADAM with the default hyperparameters described in Kingma & Ba (2015); however, we decayed the learning rate from 1e−3 to 1e−4. We used 8 GPU workers for asynchronous SGD under the TensorFlow framework (Abadi et al., 2015). We monitor the dev93 Word Error Rate (WER) until convergence and report the corresponding eval92 WER. The models took around 5 days to converge.
We created our token vocabulary Z by looking at the n-gram character counts of the training dataset. We explored n ∈ {2, 3, 4, 5} and took the top {256, 512, 1024} tokens based on their count frequencies (since taking the full n-Cartesian exponent of the unigrams would result in an intractable number of tokens for n > 2). We found very minor differences in WER based on the vocabulary size; for our n = {2, 3} word piece experiments we used a vocabulary size of 256, while our n = {4, 5} word piece experiments used a vocabulary size of 512. Additionally, we restrict space to be a unigram token, not included in any other word pieces; this forces the decompositions to break on word boundaries. Table 1 compares the effect of varying the n sized word piece vocabulary. The Latent Sequence Decompositions (LSD) models were trained with the framework described in Section 2, and the Maximum Extension (MaxExt) decomposition is a fixed decomposition. MaxExt is generated in a left-to-right fashion, where at each step the longest word piece extension is selected from the vocabulary (a sketch is given below). The MaxExt decomposition is not the shortest possible |z| sequence; however, it is a deterministic decomposition that can be easily generated in linear time on-the-fly. We decoded these models with simple n-best list beam search without any external dictionary or Language Model (LM).
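A sketch of the MaxExt greedy longest-match decomposition (our own implementation of the described rule) follows:

```python
def max_ext(y, tokens):
    """Maximum Extension: at each position greedily take the longest word
    piece in the vocabulary (deterministic, linear-time decomposition)."""
    z, pos, longest = [], 0, max(map(len, tokens))
    while pos < len(y):
        for n in range(min(longest, len(y) - pos), 0, -1):
            if y[pos:pos + n] in tokens:
                z.append(y[pos:pos + n])
                pos += n
                break
        else:   # only possible if some singleton character is missing from Z
            raise ValueError("no valid token at position %d" % pos)
    return z

print(max_ext("cat", {"a", "b", "c", "t", "at", "ca", "cat"}))  # ['cat']
```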
The baseline model is simply the unigram or character model and achieves 14.76% WER. We find the LSD n = 4 word piece vocabulary model to perform the best at 12.88% WER, a 12.7% relative improvement over the baseline character model. None of our MaxExt models beat our character model baseline, suggesting the maximum extension decomposition to be a poor decomposition choice. However, all our LSD models perform better than the baseline, suggesting the LSD framework is able to learn a decomposition better than the baseline character decomposition.
We also look at the distribution of the characters covered based on the word piece lengths during inference, across the different n sized word piece vocabularies used in training. We define the distribution of the characters covered as the percentage of characters covered by the set of word pieces with the same length across the test set, and we exclude space in this statistic.

Figure 1: Distribution of the characters covered by the n-grams of the word piece models. We train Latent Sequence Decompositions (LSD) and Maximum Extension (MaxExt) models with n ∈ {2, 3, 4, 5} sized word piece vocabulary and measure the distribution of the characters covered by the word pieces. The bars with the solid fill represent the LSD models, and the bars with the star hatch fill represent the MaxExt models. Both the LSD and MaxExt models prefer to use n ≥ 2 sized word pieces to cover the majority of the characters. The MaxExt models prefer longer word pieces to cover characters compared to the LSD models.

Figure 1 shows the distribution of the {1, 2, 3, 4, 5}-gram word pieces the model decides to use to decompose the sequences. When the model is trained to use the bigram word piece vocabulary, we found the model to prefer bigrams (55% of the characters emitted) over characters (45% of the characters emitted) in the LSD decomposition. This suggests that a character-only vocabulary may not be the best vocabulary to learn from. Our best model, LSD with the n = 4 word piece vocabulary, covered the word characters 42.16%, 39.35%, 14.83% and 3.66% of the time using 1, 2, 3, 4 sized word pieces respectively. In the n = 5 word piece vocabulary model, the LSD model uses the n = 5 sized word pieces to cover approximately 2% of the characters. We suspect that if we used a larger dataset, we could extend the vocabulary to cover even larger n ≥ 5.
The MaxExt models were trained to greedily emit the longest possible word piece; consequently, this prior meant the model will prefer to emit long word pieces over characters. While this decomposition results in a shorter |z| length, the WER is slightly worse than the character baseline. This suggests the much shorter decompositions generated by the MaxExt prior may not be the best decomposition. This follows from the principle that the best decomposition z* is not only a function of y* but a function of (x, y*). In the case of ASR, the segmentation is a function of the acoustics as well as the text.

The previously best reported basic seq2seq model on WSJ achieved 18.0% WER (Bahdanau et al., 2016b) with Task Loss Estimation (TLE). Our baseline, also a seq2seq model, achieved 14.8% WER. The main differences from our model are that we did not use convolutional location-based priors and we used weight noise during training. The deep CNN model with residual connections, batch normalization and convolutions achieved a WER of 11.8% (Zhang et al., 2017)².
Our LSD model using an n = 4 word piece vocabulary achieves a WER of 12.9%, a 12.7% relative improvement over the baseline seq2seq model. Connectionist Temporal Classification (CTC) (Graves et al., 2006; Graves & Jaitly, 2014) based models assume conditional independence, and can rely on dynamic programming for exact inference. Similarly, Ling et al. (2016) use latent codes to generate text, and also assume conditional independence and leverage dynamic programming for exact maximum likelihood gradients. Such models cannot learn the output language if the language distribution is multimodal. Our seq2seq models make no such Markovian assumptions and can learn multimodal output distributions. Collobert et al. (2016) and Zweig et al. (2016) developed extensions of CTC where they used some word pieces. However, the word pieces are only used in repeated characters and the decompositions are fixed.
Word piece models with seq2seq have also been recently used in machine translation. Sennrich et al. (2016) used word pieces for rare words, while Wu et al. (2016) used word pieces for all the words; however, the decomposition is fixed and defined by heuristics or another model. The decompositions in these models are also only a function of the output sequence, while in LSD the decomposition is a function of both the input and output sequence. The LSD framework allows us to learn a distribution of decompositions rather than learning just one decomposition defined a priori. Other work used seq2seq to output sets, where the output sequence is unordered and fixed-length output units are used; our decompositions maintain ordering and use variable-length output units. Reinforcement learning (i.e., REINFORCE and other task loss estimators) (Sutton & Barto, 1998; Graves & Jaitly, 2014; Ranzato et al., 2016) learns that different output sequences can yield different task losses. However, these methods don't directly learn different decompositions of the same sequence. Future work should incorporate LSD with task loss optimization methods.
CONCLUSION
We presented the Latent Sequence Decompositions (LSD) framework. LSD allows us to learn decompositions of sequences that are a function of both the input and output sequence. We presented a biased training algorithm based on sampling valid extensions with an ε-greedy strategy, and an approximate decoding algorithm. On the Wall Street Journal speech recognition task, the sequence-to-sequence character model baseline achieves 14.8% WER while the LSD model achieves 12.9%. Using a deep convolutional neural network on the encoder with LSD, we achieve 9.6% WER.
A LEARNING THE DECOMPOSITIONS
We give the top 8 hypotheses generated by a baseline seq2seq character model, a Latent Sequence Decompositions (LSD) word piece model and a Maximum Extension (MaxExt) word piece model. We note that "shamrock's" is an out-of-vocabulary word while "shamrock" is in-vocabulary. The ground truth is "shamrock's pretax profit from the sale was one hundred twenty five million dollars a spokeswoman said". Note how the LSD model generates multiple decompositions for the same word sequence; this does not happen with the MaxExt model.

Reference: shamrock's pretax profit from the sale was one hundred twenty five million dollars a spokeswoman said

Character seq2seq
1 c|h|a|m|r|o|c|k|'|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d -1.373868
2 c|h|a|m|r|o|x| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d -2.253581
3 c|h|a|m|r|o|c|k|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d -3.482713
4 c|h|a|m|r|o|c|k|'|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |o|f| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d -3.493957
5 c|h|a|m|r|o|d|'|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d -3.885185
6 c|h|a|m|r|o|x| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |o|f| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d -4.373687
7 c|h|a|m|r|o|c|'|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |a| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d -5.148484
8 c|h|a|m|r|o|c|k|s| |p|r|e|t|a|x| |p|r|o|f|i|t| |f|r|o|m| |t|h|e| |s|a|l|e| |w|a|s| |o|n|e| |h|u|n|d|r|e|d| |t|w|e|n|t|y| |f|i|v|e| |m|i|l|l|i|o|n| |d|o|l|l|a|r|s| |o|f| |s|p|o|k|e|s|w|o|m|a|n| |s|a|i|d -5.602793
Word Piece Model: Maximum Extension
1 sh|am|ro|ck|'s| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said -1.155203
2 sh|am|ro|x| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said -3.031330
3 sh|ar|ro|x| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said -3.074762
4 sh|e| |m| |ro|x| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said -3.815662
5 sh|e| |mar|x| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said -3.880760
6 sh|ar|ro|ck|s| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said -4.083274
7 sh|e| |m| |ro|ck|ed| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said -4.878025
8 sh|e| |m| |ro|ck|s| |pre|ta|x| |pro|fi|t| |from| |the| |sa|le| |was| |one| |hu|nd|red| |tw|ent|y| |five| |mil|lion| |doll|ars| |a| |sp|ok|es|wo|man| |said -5.121490
Word Piece Model: Latent Sequence Decompositions
1 sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|lio|n| |doll|a|r|s| |a| |sp|ok|e|s|wo|ma|n| |said -28.111485
2 sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|li|o|n| |doll|ar|s| |a| |sp|ok|e|s|wo|ma|n| |said -28.172878
3 sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|lio|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |said -28.453381
4 sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|li|o|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |said -29.103184
5 sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|lio|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |sa|id -29.159660
6 sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|lio|n| |doll|a|r|s| |a| |sp|o|k|e|s|w|o|ma|n| |said -29.164141
7 sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|li|o|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |sai|d -29.169310
8 sh|a|m|ro|c|k|'s| |pre|ta|x| |pro|fi|t| |fro|m| |t|h|e| |sa|l|e| |was| |on|e| |hu|n|dr|e|d| |t|we|nt|y| |fiv|e| |mil|li|o|n| |doll|a|r|s| |a| |sp|ok|e|s|w|om|a|n| |sa|id -29.809937
If we combine our LSD model with the CNN model (Zhang et al., 2017), we achieve a combined WER of 9.6%, or 35.1% relatively better than the baseline seq2seq model. These numbers are all reported without the use of any language model. Please see Appendix A for the decompositions generated by our model. The LSD model learns multiple word piece decompositions for the same word sequence.

RELATED WORK

Singh et al. (2002); McGraw et al. (2013); Lu et al. (2013) built probabilistic pronunciation models for Hidden Markov Model (HMM) based systems. However, such models are still constrained to the conditional independence and Markovian assumptions of HMM-based systems.
Table 1: Wall Street Journal test eval92 Word Error Rate (WER) varying the n sized word piece vocabulary without any dictionary or language model. We compare Latent Sequence Decompositions (LSD) versus the Maximum Extension (MaxExt) decomposition. The LSD models all learn better decompositions compared to the baseline character model, while the MaxExt decomposition appears to be sub-optimal.

n          LSD WER   MaxExt WER
Baseline        14.76
2          13.15     15.56
3          13.08     15.61
4          12.88     14.96
5          13.52     15.03
Table 2 compares our WSJ results with other published end-to-end models. The best CTC model achieved 27.3% WER with REINFORCE optimization on WER (Graves & Jaitly, 2014).
Table 2: Wall Street Journal test eval92 Word Error Rate (WER) results across Connectionist Temporal Classification (CTC) and Sequence-to-sequence (seq2seq) models. The Latent Sequence Decomposition (LSD) models use an n = 4 word piece vocabulary (LSD4). The Convolutional Neural Network (CNN) model is with deep residual connections, batch normalization and convolutions. The best end-to-end model is seq2seq + LSD + CNN at 9.6% WER.

Model                                            WER
Graves & Jaitly (2014)    CTC                    30.1
                          CTC + WER              27.3
Hannun et al. (2014)      CTC                    35.8
Bahdanau et al. (2016a)   seq2seq                18.6
Bahdanau et al. (2016b)   seq2seq + TLE          18.0
Zhang et al. (2017)       seq2seq + CNN²         11.8
Our Work                  seq2seq                14.8
                          seq2seq + LSD4         12.9
                          seq2seq + LSD4 + CNN    9.6
Table 3: Top hypothesis comparison between the seq2seq character model, the LSD word piece model and the MaxExt word piece model.
n   Hypothesis
¹ One notable exception was the word piece "qu" ("q" is almost always followed by "u" in English). The model does learn to consistently emit "qu" as one token and never produce "q" + "u" as separate tokens.
² For our CNN architectures, we use and compare to the "(C (3 × 3) / 2) × 2 + NiN" architecture from Table 2 line 4.
ACKNOWLEDGMENTS
We thank Ashish Agarwal, Philip Bachman, Dzmitry Bahdanau, Eugene Brevdo, Jan Chorowski, Jeff Dean, Chris Dyer, Gilbert Leung, Mohammad Norouzi, Noam Shazeer, Xin Pan, Luke Vilnis, Oriol Vinyals and the Google Brain team for many insightful discussions and technical assistance.
REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations, 2015.

Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end Attention-based Large Vocabulary Speech Recognition. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2016a.

Dzmitry Bahdanau, Dmitriy Serdyuk, Philemon Brakel, Nan Rosemary Ke, Jan Chorowski, Aaron Courville, and Yoshua Bengio. Task Loss Estimation for Sequence Prediction. In International Conference on Learning Representations Workshop, 2016b.

William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. Listen, Attend and Spell: A Neural Network for Large Vocabulary Conversational Speech Recognition. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2016.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Conference on Empirical Methods in Natural Language Processing, 2014.

Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-Based Models for Speech Recognition. In Neural Information Processing Systems, 2015.

Ronan Collobert, Christian Puhrsch, and Gabriel Synnaeve. Wav2Letter: an End-to-End ConvNet-based Speech Recognition System. arXiv:1609.03193, 2016.

Alex Graves. Practical Variational Inference for Neural Networks. In Neural Information Processing Systems, 2011.

Alex Graves and Navdeep Jaitly. Towards End-to-End Speech Recognition with Recurrent Neural Networks. In International Conference on Machine Learning, 2014.

Alex Graves, Santiago Fernandez, Faustino Gomez, and Jurgen Schmidhuber. Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks. In International Conference on Machine Learning, 2006.

Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. Hybrid Speech Recognition with Bidirectional LSTM. In Automatic Speech Recognition and Understanding Workshop, 2013.

Awni Hannun, Andrew Maas, Daniel Jurafsky, and Andrew Ng. First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs. arXiv:1408.2873, 2014.

Salah Hihi and Yoshua Bengio. Hierarchical Recurrent Neural Networks for Long-Term Dependencies. In Neural Information Processing Systems, 1996.

Sepp Hochreiter and Jurgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, November 1997.

Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On Using Very Large Target Vocabulary for Neural Machine Translation. In Association for Computational Linguistics, 2015.

Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015.

Jan Koutnik, Klaus Greff, Faustino Gomez, and Jurgen Schmidhuber. A Clockwork RNN. In International Conference on Machine Learning, 2014.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent Predictor Networks for Code Generation. In Association for Computational Linguistics, 2016.

Liang Lu, Arnab Ghoshal, and Steve Renals. Acoustic data-driven pronunciation lexicon for large vocabulary speech recognition. In Automatic Speech Recognition and Understanding Workshop, 2013.

Minh-Thang Luong and Christopher Manning. Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models. In Association for Computational Linguistics, 2016.

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the Rare Word Problem in Neural Machine Translation. In Association for Computational Linguistics, 2015.

Ian McGraw, Ibrahim Badr, and James Glass. Learning Lexicons From Speech Using a Pronunciation Mixture Model. IEEE Transactions on Audio, Speech, and Language Processing, 21(2), 2013.

Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannenmann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. The Kaldi Speech Recognition Toolkit. In Automatic Speech Recognition and Understanding Workshop, 2011.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence Level Training with Recurrent Neural Networks. In International Conference on Learning Representations, 2016.

Mike Schuster and Kaisuke Nakajima. Japanese and Korean Voice Search. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2012.

Mike Schuster and Kuldip Paliwal. Bidirectional Recurrent Neural Networks. IEEE Transactions on Signal Processing, 45(11), 1997.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural Machine Translation of Rare Words with Subword Units. In Association for Computational Linguistics, 2016.

Rita Singh, Bhiksha Raj, and Richard Stern. Automatic generation of subword units for speech recognition systems. IEEE Transactions on Speech and Audio Processing, 10(2), 2002.

Ilya Sutskever, Oriol Vinyals, and Quoc Le. Sequence to Sequence Learning with Neural Networks. In Neural Information Processing Systems, 2014.

Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. Grammar as a foreign language. In Neural Information Processing Systems, 2015a.

Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and Tell: A Neural Image Caption Generator. In IEEE Conference on Computer Vision and Pattern Recognition, 2015b.

Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order Matters: Sequence to sequence for sets. In International Conference on Learning Representations, 2016.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv:1609.08144, 2016.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In International Conference on Machine Learning, 2015.

Yu Zhang, William Chan, and Navdeep Jaitly. Very deep convolutional networks for end-to-end speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2017.

Geoffrey Zweig, Chengzhu Yu, Jasha Droppo, and Andreas Stolcke. Advances in All-Neural Speech Recognition. arXiv:1609.05935, 2016.
|
14,992,224 | Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data | We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions by means of variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction. * Justin Bayer is also affiliated with sensed.io UG (haftungsbeschränkt), | [] | Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data
Maximilian Karl
Chair of Robotics and Embedded Systems
Department of Informatics
Technische Universität München
Germany
Maximilian Soelch
Chair of Robotics and Embedded Systems
Department of Informatics
Technische Universität München
Germany
Justin Bayer
Chair of Robotics and Embedded Systems
Department of Informatics
Technische Universität München
Germany
Patrick Van Der Smagt
Chair of Robotics and Embedded Systems
Department of Informatics
Technische Universität München
Germany
Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data
We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions by means of variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction. * Justin Bayer is also affiliated with sensed.io UG (haftungsbeschränkt),
Introduction
Estimating probabilistic models for sequential data is central to many domains, such as audio, natural language or physical plants [7,19,3,4,13]. The goal is to obtain a model p(x 1:T ) that best reflects a data set of observed sequences x 1:T . Recent advances in deep learning have paved the way to powerful models capable of representing high-dimensional sequences with temporal dependencies, e.g. [7,19,3,1].
A typical model assumption in systems theory is that the observed sequence x 1:T is generated by a corresponding latent sequence z 1:T . More specifically, state space models assume the latent sequence to be Markovian, i.e., z t contains all information on the distribution of z t+1 . Moreover, the emission distribution of x t is assumed to be determined by the corresponding z t . In short, we assume a latent state z t that holds all information available at time step t. Efficient inference of such latent states is only partially solved with state-space models. Under strong assumptions on the system, one can derive optimal Bayesian filters, such as the classical Kalman filter [11] for linear Gaussian models (LGMs). Yet, for less restrictive models, posterior distributions p(z 1:T | x 1:T ) are often intractable.
Leveraging a recently proposed estimator based on variational inference, stochastic gradient variational Bayes (SGVB, [12]), approximate inference of latent variables becomes tractable. Extensions to time series [1,3] resulted in considerable improvements of modeling quality in terms of compression, i.e., marginal likelihood of the data. Yet, in a wide range of applications, compression is of less importance than the recovery of interpretable, full-information latent states-a lacking feature in current approaches. This is crucial if the latent spaces are used in follow-up applications, of which model-based control is the most prominent and focus of this work.
The contribution of this work is, to our knowledge, the first model that (i) enforces the state-space model assumptions in latent space allowing for reliable and plausible long-term prediction of the observable system, (ii) inherits the merit of neural architectures to be trainable on raw data such as images, audio or other sensory inputs and (iii) scales to large data due to optimization of parameters based on stochastic gradient descent [2].
Background and Related Work
Probabilistic Modeling and Filtering of Dynamical Systems
We consider modeling a time-discrete, non-linear dynamical system with observations in some space X ⊂ R^{n_x}, depending on control inputs (or actions) from the space U ⊂ R^{n_u}. Elements of X can be high-dimensional sensory data such as raw images, or any other state observation. With x_t ∈ X, let x_{1:T} = (x_1, x_2, ..., x_T) be a sequence of length T of observations. Similarly, with u_t ∈ U, let u_{1:T} = (u_1, u_2, ..., u_T) be a corresponding sequence of equal length T of control inputs, which we consider as given. We are interested in deriving a probabilistic model p(x_{1:T} | u_{1:T}).
We assume a state-space model with an underlying latent dynamical system over the space Z ⊂ R nz . Let z 1:T = (z 1 , z 2 , . . . , z T ), z t ∈ Z, be the corresponding latent sequence. Contrary to Z, the output space X is observable and may exhibit complex non-Markovian transitions. Formally, we assume the following graphical model:
p(x_{1:T} | u_{1:T}) = ∫ p(x_{1:T} | z_{1:T}, u_{1:T}) p(z_{1:T} | u_{1:T}) dz_{1:T}    (1)
Obtaining a hypothesis for the system dynamics is done via the estimation of the so-called emission model p(x 1:T | z 1:T , u 1:T ) and transition model p(z 1:T | u 1:T ). The latter is imperative for achieving good long-term results: a bad transition model can lead to divergence of the latent state.
Accordingly, we put special emphasis on it through a Bayesian treatment. Assuming that the transitions are non-stationary (and hence differ for each time step), we impose a prior distribution on a set of transition parameters β 1:T and marginalize it out subsequently:
(1) = ∫∫ p(x_{1:T} | z_{1:T}, u_{1:T}) p(z_{1:T} | β_{1:T}, u_{1:T}) p(β_{1:T}) dβ_{1:T} dz_{1:T}    (2)
To obtain state-space models, we impose assumptions on emission and state transition model:
p(x_{1:T} | z_{1:T}, u_{1:T}) = ∏_{t=1}^{T} p(x_t | z_t)    (3)

p(z_{1:T} | β_{1:T}, u_{1:T}) = ∏_{t=0}^{T−1} p(z_{t+1} | z_t, u_t, β_t)    (4)
Eq. (3) assumes that the current state z_t contains all necessary information about x_t. Likewise, Eq. (4) tells us that z_t contains all necessary information for the transition to z_{t+1} (given the current control input u_t and transition parameters β_t). A typical instance of these assumptions are Linear Gaussian Models (LGMs), i.e., both state transition and emission model are affine transformations with Gaussian offset noise:
z_{t+1} = F_t z_t + B_t u_t + w_t,    w_t ∼ N(0, Q_t)    (5)
x_t = H_t z_t + y_t,    y_t ∼ N(0, R_t)    (6)
Here, F t is referred to as the state transition matrix, B t as the control-input matrix. In our notation, one would typically set β t = w t , though other variants like β t = (F t , B t , w t ) are possible.
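To make Eqs. (5) and (6) concrete, the following is a minimal NumPy sketch that simulates such an LGM; the dimensions and the particular matrices are illustrative assumptions, not values from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_z, n_u, n_x, T = 2, 1, 3, 50             # illustrative dimensions

F = np.array([[1.0, 0.1], [0.0, 1.0]])     # state transition matrix F_t (held fixed here)
B = np.array([[0.0], [0.1]])               # control-input matrix B_t
H = rng.standard_normal((n_x, n_z))        # emission matrix H_t
Q = 0.01 * np.eye(n_z)                     # process noise covariance Q_t
R = 0.10 * np.eye(n_x)                     # observation noise covariance R_t

z, xs = np.zeros(n_z), []
for t in range(T):
    u = rng.standard_normal(n_u)                      # random control input
    w = rng.multivariate_normal(np.zeros(n_z), Q)     # w_t ~ N(0, Q_t)
    z = F @ z + B @ u + w                             # Eq. (5)
    y = rng.multivariate_normal(np.zeros(n_x), R)     # y_t ~ N(0, R_t)
    xs.append(H @ z + y)                              # Eq. (6)
xs = np.stack(xs)                                     # observed sequence x_{1:T}
```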
Finding efficient inference distributions p(z_{1:T} | x_{1:T}) is in general an unsolved problem. Three examples are prediction, filtering, and smoothing: inference of z_t from x_{1:t−1}, x_{1:t}, or x_{1:T}, respectively. In classical systems theory, we consider the parameters of state transition and emission model as given. Under the strong assumptions (5) and (6) of LGMs, we can derive the provably optimal, well-known Kalman filters. While extensions of Kalman filters to nonlinear dynamical systems exist [10] and are successfully applied in many areas, they suffer from two major drawbacks: firstly, their assumptions are restrictive and are violated in practical applications, leading to suboptimal results. Secondly, the parameters, particularly the state-transition matrix F_t and control-input matrix B_t, have to be known before filtering can be done. There have been efforts to learn such system dynamics, cf. [5], based on the expectation maximization (EM) algorithm. However, these algorithms are not applicable in cases where the true posterior distribution is intractable. This is the case if, e.g., image sequences are used, since the posterior is then highly nonlinear. Our new approach will tackle both issues.
Stochastic Gradient Variational Bayes (SGVB) for Time Series Distributions
Replacing the bottleneck layer of a deterministic auto-encoder with stochastic units z, the variational auto-encoder (VAE, [12, 16]) learns complex marginal data distributions on x in an unsupervised fashion from simpler distributions via the graphical model p(x) = ∫ p(x, z) dz = ∫ p(x | z) p(z) dz. In VAEs, p(x | z) ≡ p_θ(x | z) is typically parametrized by a neural network with parameters θ. Within this framework, models are trained by maximizing a lower bound to the marginal data log-likelihood via stochastic gradients:
ln p(x) ≥ E_{q_φ(z|x)}[ln p_θ(x | z)] − KL(q_φ(z | x) || p(z)) =: L_SGVB(x, φ, θ)    (7)
This is provably equivalent to minimizing the KL-divergence between the approximate posterior or recognition model q φ (z | x) and the true, but usually intractable posterior distribution p(z | x). q φ is parametrized by a neural network with parameters φ.
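For illustration, below is a minimal NumPy sketch of a single-sample Monte Carlo estimate of L_SGVB from Eq. (7), assuming a Gaussian recognition model q_φ(z | x) = N(μ, diag(σ²)), a standard normal prior, and an abstract decoder log-likelihood; all names are placeholders:

```python
import numpy as np

def elbo_estimate(mu, log_var, log_px_given_z, rng):
    """Single-sample estimate of Eq. (7) for a diagonal-Gaussian q_phi(z|x)."""
    # Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    # Analytic KL(N(mu, diag(sigma^2)) || N(0, I)).
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return log_px_given_z(z) - kl       # lower bound on ln p(x)

# Toy usage: a "decoder" that scores z under a standard normal (up to a constant).
rng = np.random.default_rng(0)
print(elbo_estimate(np.zeros(4), np.zeros(4), lambda z: -0.5 * np.sum(z**2), rng))
```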
The principle of VAEs has been transferred to time series [1,3,9]. These models employ nonlinear state transitions in latent space, but violate Eq. (4): Observations are directly included in the transition process. Consequently, the state space Z does not reflect all information available. The recognition model q φ (z | x) becomes a powerful compression algorithm, prohibiting plausible long-term generative prediction. Such phenomena with generative models have been explained in [18].
Others have been specifically interested in applying variational inference for controlled dynamical systems. In [19], a VAE is used to learn the mapping to and from latent space. The state transition is modeled by a neural network that outputs a local transition matrix. The regularization of this network is clearly motivated by Eq. (7). However, it fails to be a mathematically correct lower bound to the marginal data likelihood. Moreover, their recognition model needs all observations that contain information w.r.t. the current state, i.e., a temporal i.i.d. assumption on data. This requires significant domain knowledge for data pre-processing.
In [14], the state-space assumptions (3) and (4) are only softly encoded in the KL-divergence term of their loss function (a variant of Eq. (7)). No gradient information about reconstruction error (the expectation term in (7)) is ever back-propagated through the transition. Hence, firstly a latent representation that enables good reconstruction is learned. This does not require information about time derivatives, i.e., information that can only be extracted from multiple observations. Secondly, the transition is learned to match this latent space. Indeed, experiments, cf. Section 4, show that their model fails to extract information such as velocity (and in general time derivatives). The latent space Z violates Eq. (4).
A key contribution of this paper is the reversal of this effect, i.e., forcing the latent space to fit the transition, thus achieving the state-space model assumptions and full information in the latent states.
Deep Variational Bayes Filters
Reparametrizing the Transition
The central problem for learning latent-state system dynamics is efficient inference of a latent space that obeys state-space model assumptions. If the latter are fulfilled, the latent space must contain all information. Previous approaches emphasized good reconstruction, so that the space only contains information necessary for the reconstruction of one time step. To overcome this, we establish gradient paths through transitions over time so that the transition becomes the driving factor for shaping the latent space, rather than adjusting the transition to the recognition model's latent space. The key is to prevent the recognition model q_φ(z_{1:T} | x_{1:T}) from directly drawing the latent state z_t.
Similar to the reparametrization trick from [12] for making the Monte Carlo estimate differentiable w.r.t. the parameters, we make the transition differentiable w.r.t. the last state and its parameters:
z_{t+1} = f(z_t, u_t, β_t)    (8)
Given the stochastic parameters β t , the state transition is deterministic (which in turn means that by marginalizing β t , we still have a stochastic transition). The immediate and crucial consequence is that errors in reconstruction of x t from z t are backpropagated directly through time.
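The following is a minimal sketch of such a reparametrized rollout (Eq. (8)). The point is structural: given the sampled β_t, every new state is a deterministic, differentiable function of the previous one, so in an autodiff framework a reconstruction loss at any time step would backpropagate through all earlier transitions. The function names and the toy transition are illustrative assumptions:

```python
import numpy as np

def rollout(z1, controls, betas, f):
    """Deterministic rollout z_{t+1} = f(z_t, u_t, beta_t) given sampled betas.

    When f is a differentiable network, gradients of any per-step
    reconstruction error flow through this entire chain back to z_1.
    """
    states = [z1]
    for u_t, beta_t in zip(controls, betas):
        states.append(f(states[-1], u_t, beta_t))
    return states

# Toy transition with beta_t = (A, w_t); any differentiable f works here.
toy_f = lambda z, u, beta: beta[0] @ z + 0.1 * u + beta[1]
```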
This reparametrization has a couple of other important implications: the recognition model no longer infers latent states z_t, but transition parameters β_t. In particular, the gradient ∂z_{t+1}/∂z_t is well-defined from (8), so gradient information can be backpropagated through the transition.
This is different to the method used in [14], where the transition is optimized by minimizing a KL divergence. No gradient from the generative model is backpropagated through the transitions.
Much like in Eq. (5), the stochastic parameters include a corrective offset term w_t, which emphasizes the notion of the recognition model as a filter. In theory, the learning algorithm could still learn the transition as z_{t+1} = w_t. However, the introduction of β_t also enables us to regularize the transition with meaningful priors, which not only prevents overfitting of the recognition model, but also enforces meaningful manifolds in the latent space via transition priors. Ignoring the potential of the transition over time yields large penalties from these priors. Thus, the problems outlined in Section 2 are overcome by construction.
To install such transition priors, we split β t = (w t , v t ). The interpretation of w t is a sample-specific process noise which can be inferred from incoming data, like in Eq. (5). On the other hand, v t are universal transition parameters, which are sample-independent (and are only inferred from data during training). This corresponds to the idea of weight uncertainty in [8]. This interpretation implies the factorization of the recognition model:
q_φ(β_{1:T} | x_{1:T}) = q_φ(w_{1:T} | x_{1:T}) q_φ(v_{1:T})    (9)
When using the fully trained model for generative sampling, i.e., sampling without input, the universal state transition parameters can still be drawn from q φ (v 1:T ), whereas w 1:T is drawn from the prior in the absence of input data. Fig. (1) shows the underlying graphical model and the inference procedure. Fig. (2a) shows a generic view on our new computational architecture. An example of a locally linear transition parametrization will be given in Section 3.3.
Figure 2: Left: General architecture for DVBF. Stochastic transition parameters β_t are inferred via the recognition model, e.g., a neural network (the input/conditional of q_φ(w_t | ·) is task-dependent). Based on a sampled β_t ∼ q_φ(β_t) = q_φ(w_t | ·) q_φ(v_t), the state transition z_{t+1} = f(z_t, u_t, β_t) in latent state space is computed deterministically, and the updated latent state z_{t+1} is used for predicting x_{t+1} via p_θ(x_{t+1} | z_{t+1}). For details, see Section 3.1. Right: Zoom into the latent space transition (red box in the left figure). One exemplary transition is shown, the locally linear transition from Section 3.3: α_t = f_ψ(z_t, u_t) (e.g., a neural network), (A, B, C)_t = Σ_{i=1}^{M} α_t^{(i)} (A, B, C)^{(i)}, z_{t+1} = A_t z_t + B_t u_t + C_t w_t. (a) General scheme for arbitrary transitions. (b) One particular example of a latent transition: local linearity.
The Lower Bound Objective Function
In analogy to Eq. (7), we now derive a lower bound to the marginal likelihood p(x_{1:T} | u_{1:T}). After reflecting the Markov assumptions (3) and (4) in the factorized likelihood (2), we have:

p(x_{1:T} | u_{1:T}) = ∫∫ p(β_{1:T}) ∏_{t=1}^{T} p_θ(x_t | z_t) ∏_{t=0}^{T−1} p(z_{t+1} | z_t, u_t, β_t) dβ_{1:T} dz_{1:T}

Our experiments show that an annealed version of the resulting lower bound (10) is beneficial to the overall performance:

L_annealed = E_{q_φ}[c_i ln p_θ(x_{1:T} | z_{1:T}) − ln q_φ(β_{1:T} | x_{1:T}, u_{1:T}) + c_i ln p(w_{1:T}) + ln p(v_{1:T})]
Here, c_i = min(1, 0.01 + i/T_A) is an inverse temperature that increases linearly in the number of gradient updates i until reaching 1 after T_A annealing iterations. Similar annealing schedules have been applied in, e.g., [6, 15, 17]. Additionally, the transition prior p(β_{1:T}) was estimated during optimization, i.e., through an empirical Bayes approach. In all experiments, we used a diagonal Gaussian parameterized by its means and variances.
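A minimal sketch of this inverse-temperature schedule (with the cap written out explicitly):

```python
def inverse_temperature(i, T_A):
    """c_i rises linearly from 0.01 and is capped at 1 after ~T_A updates."""
    return min(1.0, 0.01 + i / T_A)

# e.g., with T_A = 4000 as in the pendulum experiment (Appendix B.2):
assert inverse_temperature(0, 4000) == 0.01
assert inverse_temperature(4000, 4000) == 1.0
```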
Example: Locally Linear Transitions
We have derived a learning algorithm for time series with particular focus on transitions in latent space. Inspired by [19], this section will show how to learn locally linear state transitions in latent space. To parametrize the transition, we have to specify Eq. (8). In this case we set
z_{t+1} = A_t z_t + B_t u_t + C_t w_t,    t = 1, ..., T,    (12)
where w_t is a stochastic sample from the recognition model and A_t, B_t, and C_t are matrices of matching dimensions. They are stochastic functions of z_t and u_t (thus local linearity). We set v_t = {A_t^{(i)}, B_t^{(i)}, C_t^{(i)} | i = 1, ..., M}, i.e., by drawing from q_φ(v_t), which is independent of observations, we draw a set of 3M basis matrices, and finally yield A_t, B_t, and C_t as state-dependent and control-dependent linear combinations:

A_t = Σ_{i=1}^{M} α_t^{(i)} A_t^{(i)},    B_t = Σ_{i=1}^{M} α_t^{(i)} B_t^{(i)},    C_t = Σ_{i=1}^{M} α_t^{(i)} C_t^{(i)},    α_t = f_ψ(z_t, u_t) ∈ R^M

The computation is depicted in Fig. (2b). The function f_ψ can be, e.g., a (deterministic) neural network with weights ψ. As a subset of the generative parameters θ, ψ is part of the trainable parameters of our model. The weight vector α_t is shared between the three matrices. There is a correspondence to Eq. (5): A_t and F_t, B_t and B_t, as well as C_t C_t^T and Q_t are related.
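A minimal NumPy sketch of this locally linear transition; the weight network f_ψ is replaced by a placeholder softmax over a linear map, since its exact architecture is a modeling choice (Appendix B uses a 16-way softmax output):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_z, n_u = 16, 3, 1                                   # illustrative sizes

# A sample of the universal parameters v_t: 3M base matrices.
A = 0.1 * rng.standard_normal((M, n_z, n_z))
B = 0.1 * rng.standard_normal((M, n_z, n_u))
C = 0.1 * rng.standard_normal((M, n_z, n_z))
W = rng.standard_normal((M, n_z + n_u))                  # placeholder weights for f_psi

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def transition(z_t, u_t, w_t):
    """z_{t+1} = A_t z_t + B_t u_t + C_t w_t, Eq. (12), with mixture weights alpha_t."""
    alpha = softmax(W @ np.concatenate([z_t, u_t]))      # stand-in for f_psi(z_t, u_t)
    A_t = np.tensordot(alpha, A, axes=1)                 # A_t = sum_i alpha_t^(i) A^(i)
    B_t = np.tensordot(alpha, B, axes=1)
    C_t = np.tensordot(alpha, C, axes=1)
    return A_t @ z_t + B_t @ u_t + C_t @ w_t

z_next = transition(np.zeros(n_z), np.zeros(n_u), rng.standard_normal(n_z))
```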
Experiments and Results
In this section we validate that DVBF with locally linear transitions (DVBF-LL) (Section 3.3) outperforms Deep Kalman Filters (DKF, [14]) in recovering latent spaces with full information. We do not include E2C [19] since it necessitates specific data pre-processing and does not provide a correct lower bound. The experimental setup is described in the Supplementary Material.
Dynamic Pendulum
In order to test our algorithm on truly non-Markovian observations of a dynamical system, we simulated a dynamic torque-controlled pendulum governed by the differential equation

m l² ϕ̈(t) = −μ ϕ̇(t) + m g l sin ϕ(t) + u(t),    m = l = 1, μ = 0.5, g = 9.81,

via numerical integration, and then converted the ground-truth angle ϕ into an image observation in X. The one-dimensional control corresponds to angle acceleration (which is proportional to joint torque). Angle and angular velocity fully describe the system. As we can see in Fig. (3a), DVBF-LL learned a two-dimensional manifold embedding, i.e., it encoded the angle in polar coordinates (thus circumventing the discontinuity of angles modulo 2π). The bottom row shows regressions underlining the performance: there is a high correlation between latent states and ground-truth angle and angular velocity for DVBF-LL. On the contrary, Fig. (3b) verifies our prediction that DKF is equally capable of learning the angle, but extracts little to no information on angular velocity.
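As a rough illustration of the simulated data-generating process above, below is a minimal sketch using simple Euler integration and a crude 16×16 rendering; the step size, initial condition, and rendering are illustrative assumptions:

```python
import numpy as np

def render(phi, size=16):
    """Crude rendering: a single bright pixel at the pendulum tip."""
    img = np.zeros((size, size))
    r = int((1 + np.cos(phi)) * (size - 1) / 2)
    c = int((1 + np.sin(phi)) * (size - 1) / 2)
    img[r, c] = 1.0
    return img

def simulate_pendulum(T, dt=0.05, m=1.0, l=1.0, mu=0.5, g=9.81, seed=0):
    """Euler-integrate m l^2 phi'' = -mu phi' + m g l sin(phi) + u, render images."""
    rng = np.random.default_rng(seed)
    phi, phi_dot, frames = np.pi / 2, 0.0, []
    for _ in range(T):
        u = rng.uniform(-1.0, 1.0)                       # random torque ("motor babbling")
        phi_ddot = (-mu * phi_dot + m * g * l * np.sin(phi) + u) / (m * l**2)
        phi_dot += dt * phi_ddot
        phi += dt * phi_dot
        frames.append(render(phi))
    return np.stack(frames)                              # image sequence, shape (T, 16, 16)
```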
The OLS regression results shown in Table 1 validate this observation. Predicting sin(ϕ) and cos(ϕ), i.e., polar coordinates of the ground-truth angle ϕ, works almost equally well for DVBF-LL and DKF, with DVBF-LL slightly outperforming DKF. For predicting the ground-truth velocity ϕ̇, DVBF-LL shows remarkable performance. DKF, instead, contains hardly any information, resulting in a very low goodness-of-fit score of R² = 0.035. Fig. (4) shows that the strong relation between ground truth and latent state is beneficial for generative sampling. All plots show 100 time steps of a pendulum starting from the exact same latent state and not being actuated. The top row plots show a purely generative walk in the latent space on the left, and a walk in latent space that is corrected by filtering observations on the right. We can see that both follow a similar trajectory to an attractor. The generative model is more prone to noise when approaching the attractor.
The bottom plot shows the first 45 steps of the corresponding observations (top row), reconstructions (middle row), and generative samples (without correcting from observations). Interestingly, DVBF works very well even though the sequence is much longer than all training sequences (indicated by the red line).
Table (2) shows values of the lower bound to the marginal data likelihood (for DVBF-LL, this corresponds to Eq. (11)). We see that DVBF-LL outperforms DKF in terms of compression, but only with a slight margin, which does not reflect the better generative sampling, as [18] argues.
Bouncing Ball
The bouncing ball experiment features a ball rolling within a bounding box in a plane. The system has a two-dimensional control input, added to the directed velocity of the ball. If the ball hits the wall, it bounces off, so that the true dynamics are highly dependent on the current position and velocity of the ball. The system's state is four-dimensional, two dimensions each for position and velocity.
Consequently, we use a DVBF-LL with four latent dimensions. Fig. (5) shows that DVBF again captures the entire system dynamics in the latent space.
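A minimal sketch of the bouncing-ball dynamics just described (controls added to the velocity, reflection at the walls); the box size and step size are illustrative assumptions:

```python
import numpy as np

def step(pos, vel, u, dt=1.0, box=1.0):
    """One transition of the bouncing ball; walls reflect position and velocity."""
    vel = vel + u                            # control is added to the directed velocity
    pos = pos + dt * vel
    for d in range(2):                       # reflect independently per dimension
        if pos[d] < 0.0:
            pos[d], vel[d] = -pos[d], -vel[d]
        elif pos[d] > box:
            pos[d], vel[d] = 2 * box - pos[d], -vel[d]
    return pos, vel
```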
Conclusion
We have proposed Deep Variational Bayes Filters (DVBF), a new method to learn state space models from raw non-Markovian sequence data. DVBFs make use of stochastic gradient variational Bayes to overcome intractable inference and thus naturally scale to large data sets. In a series of vision-based experiments we demonstrated that latent states can be recovered which identify the underlying physical quantities. The generative model showed stable long-term predictions far beyond the sequence length used during training.
Figure 1: Left: Graphical model for one transition under state-space model assumptions. The updated latent state z_{t+1} depends on the previous state z_t, control input u_t, and transition parameters β_t. z_{t+1} contains all information for generating observation x_{t+1}. Diamond nodes indicate a deterministic dependency on parent nodes. Right: Inference performed during training (or while filtering). Since z_t contains all information about previous observations, they are mediately used for inference.
Due to the deterministic transition given β_{t+1}, the last term is a product of Dirac distributions and the overall distribution simplifies greatly:

p(x_{1:T} | u_{1:T}) = ∫ p(β_{1:T}) [∏_{t=1}^{T} p_θ(x_t | z_t)]_{z_t = f(z_{t−1}, u_{t−1}, β_{t−1})} dβ_{1:T} = ∫ p(β_{1:T}) p_θ(x_{1:T} | z_{1:T}) dβ_{1:T}

The last formulation is for notational brevity: the term p_θ(x_{1:T} | z_{1:T}) is not independent of β_{1:T} and u_{1:T}. We now derive the objective function, a lower bound to the data likelihood:

ln p(x_{1:T} | u_{1:T}) = ln ∫ p(β_{1:T}) p_θ(x_{1:T} | z_{1:T}) · [q_φ(β_{1:T} | x_{1:T}, u_{1:T}) / q_φ(β_{1:T} | x_{1:T}, u_{1:T})] dβ_{1:T}
  ≥ ∫ q_φ(β_{1:T} | x_{1:T}, u_{1:T}) ln [p_θ(x_{1:T} | z_{1:T}) p(β_{1:T}) / q_φ(β_{1:T} | x_{1:T}, u_{1:T})] dβ_{1:T}
  = E_{q_φ}[ln p_θ(x_{1:T} | z_{1:T}) − ln q_φ(β_{1:T} | x_{1:T}, u_{1:T}) + ln p(β_{1:T})]    (10)
  = E_{q_φ}[ln p_θ(x_{1:T} | z_{1:T})] − KL(q_φ(β_{1:T} | x_{1:T}, u_{1:T}) || p(β_{1:T}))    (11)
  =: L_DVBF(x_{1:T}, θ, φ | u_{1:T})
Fig. (3) shows the latent spaces for identical input data learned by DVBF-LL and DKF, respectively, colored with the ground truth in the top row. It should be noted that latent samples are shown, not means of posterior distributions. The state-space model was allowed to use three latent dimensions.
Figure 3: (a) Our DVBF-LL model trained on pendulum image sequences. The upper plots show the latent space with coloring according to the ground truth, with angles on the left and angular velocities on the right. The lower plots show regression results for predicting ground truth from the latent representation. The latent space plots show clearly that all information for representing the full state of a pendulum is encoded in each latent state. (b) DKF from [14] trained on the same pendulum dataset. The latent space plot shows that DKF fails to learn velocities of the pendulum. It is therefore not able to capture all information for representing the full pendulum state.
Figure 4: (a) Latent space walk in generative mode. (b) Latent space walk in filtering mode. (c) Ground truth (top), reconstructions (middle), and generative samples (bottom) from an identical initial latent state. The reconstruction sampling has access to the observation sequence and performs filtering. The generative samples only get access to the observations once, for creating the initial state; all subsequent samples are predicted from this single initial state. The red bar indicates the length of training sequences. Samples beyond it show the generalization capabilities for sequences longer than those seen during training.
Figure 5: (a) Two dimensions of the 4D bouncing ball latent space, colored with ground-truth ball position. Ground-truth x and y coordinates are combined into a regular 3×3 checkerboard coloring. (b) The remaining two latent dimensions, colored with ball velocities in x and y direction.
Table 1: Results for pendulum OLS regressions of all latent states on the respective ground-truth dependent variable.

Dependent        DVBF-LL                       DKF
variable         Log-Likelihood    R^2         Log-Likelihood    R^2
sin(ϕ)           3990.8            0.961       1737.6            0.929
cos(ϕ)           7231.1            0.982       6614.2            0.979
ϕ̇                −11139            0.916       −20289            0.035
Table 2: Average test set objective function values for the pendulum experiment (Lower Bound = Reconstruction Error − KL divergence).

Model      Lower Bound    Reconstruction Error    KL divergence
DVBF-LL    798.56         802.06                  3.50
DKF        784.70         788.58                  3.88
The case without control inputs can be recovered by setting U = ∅, i.e., not conditioning on control inputs.
Linear regression is a natural choice: After transforming the ground truth to polar coordinates, an affine transformation should be a good fit for predicting ground truth from latent states. We also tried nonlinear regression with vanilla neural networks. While not being shown here, the results underlined the same conclusion.
Acknowledgements

This work has been supported in part by the TACMAN project, EC Grant agreement no. 610967, within the FP7 framework programme. We would like to thank Jost Tobias Springenberg, Adam Kosiorek, and Moritz Münst for valuable input.

A Supplementary to Lower Bound

A.1 Annealed KL-Divergence

We used the analytical solution of the annealed KL-divergence in (10) for optimization:
References

[1] Justin Bayer and Christian Osendorfer. Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610, 2014.
[2] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pages 177-186. Springer, 2010.
[3] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. CoRR, abs/1506.02216, 2015.
[4] Marc Deisenroth and Carl E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 465-472, 2011.
[5] Zoubin Ghahramani and Geoffrey E. Hinton. Parameter estimation for linear dynamical systems. Technical Report CRG-TR-96-2, University of Toronto, Dept. of Computer Science, 1996.
[6] Zoubin Ghahramani and Geoffrey E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):831-864, 2000.
[7] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[8] Geoffrey E. Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 5-13. ACM, 1993.
[9] Matthew J. Johnson, David Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams. Structured VAEs: Composing probabilistic graphical models and variational autoencoders. arXiv preprint arXiv:1603.06277, 2016.
[10] Simon J. Julier and Jeffrey K. Uhlmann. New extension of the Kalman filter to nonlinear systems. In AeroSense'97, pages 182-193. International Society for Optics and Photonics, 1997.
[11] Rudolph E. Kalman and Richard S. Bucy. New results in linear filtering and prediction theory. Journal of Basic Engineering, 83(1):95-108, 1961.
[12] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] Jonathan Ko and Dieter Fox. Learning GP-BayesFilters via Gaussian process latent variable models. Autonomous Robots, 30(1):3-23, 2011.
[14] Rahul G. Krishnan, Uri Shalit, and David Sontag. Deep Kalman filters. arXiv preprint arXiv:1511.05121, 2015.
[15] Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, and David Blei. Multicanonical stochastic variational inference. arXiv preprint arXiv:1411.1810, 2014.
[16] Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1278-1286, 2014.
[17] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
[18] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
[19] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pages 2728-2736, 2015.
In all our experiments, we use sequences of 15 raw images of the respective system with 16×16 pixels each, i.e., observation space X ⊂ R^256, as well as control inputs of varying dimension and interpretation depending on the experiment. We used training, validation and test sets with 500 sequences each. Control input sequences were drawn randomly ("motor babbling").
B.2 Implementation details for DVBF in Pendulum Experiment

• Input: 15 timesteps of 16² observation dimensions and 1 action dimension
• Latent Space: 3 dimensions
• Observation Network p(x_t | z_t) = N(x_t; μ(z_t), σ): 128 ReLU + 16² identity output
• Recognition Model q(w_t | z_t, x_{t+1}, u_t) = N(w_t; μ, σ), (μ, √σ) = f(z_t, x_{t+1}, u_t): 128 ReLU + 6 identity output
• Transition Network α_t(z_t): 16 softmax output
• Initial Network w_1 ∼ p(x_{1:T}): Fast Dropout BiRNN with 128 ReLU + 3 identity output
• Initial Transition z_1(w_1): 128 ReLU + 3 identity output
• Optimizer: adadelta, 0.1 step rate
• Inverse temperature: c_0 = 0.01, updated every 10th gradient update, T_A = 4000 epochs
• Batch-size: 20
B.3 Implementation details for DVBF in Bouncing Ball Experiment

• Input: 15 timesteps of 16² observation dimensions and 2 action dimensions
• Latent Space: 4 dimensions
• Observation Network p(x_t | z_t) = N(x_t; μ(z_t), σ): 128 ReLU + 16² identity output
• Recognition Model q(w_t | z_t, x_{t+1}, u_t) = N(w_t; μ, σ), (μ, √σ) = f(z_t, x_{t+1}, u_t): 128 ReLU + 6 identity output
• Transition Network α_t(z_t): 16 softmax output
• Initial Network w_1 ∼ p(x_{1:T}): Fast Dropout BiRNN with 128 ReLU + 3 identity output
• Initial Transition z_1(w_1): 128 ReLU + 3 identity output
• Optimizer: adadelta, 0.1 step rate
• Inverse temperature: c_0 = 0.01, updated every 10th gradient update, T_A = 4000 epochs
• Batch-size: 20
B.4 Implementation details for DKF in Pendulum Experiment

• Input: 15 timesteps of 16² observation dimensions and 1 action dimension
• Latent Space: 3 dimensions
• Observation Network p(x_t | z_t) = N(x_t; μ(z_t), σ(z_t)): 128 Sigmoid + 128 Sigmoid + 2·16² identity output
• Recognition Model: Fast Dropout BiRNN, 128 Sigmoid + 128 Sigmoid + 3 identity output
• Transition Network p(z_t | z_{t−1}, u_{t−1}): 128 Sigmoid + 128 Sigmoid + 6 output
• Optimizer: adam, 0.001 step rate
• Inverse temperature: c_0 = 0.01, updated every 10th gradient update, T_A = 2000 iterations
• Batch-size: 200
|
8,728,609 | MULTI-VIEW RECURRENT NEURAL ACOUSTIC WORD EMBEDDINGS | Recent work has begun exploring neural acoustic word embeddings-fixeddimensional vector representations of arbitrary-length speech segments corresponding to words. Such embeddings are applicable to speech retrieval and recognition tasks, where reasoning about whole words may make it possible to avoid ambiguous sub-word representations. The main idea is to map acoustic sequences to fixed-dimensional vectors such that examples of the same word are mapped to similar vectors, while different-word examples are mapped to very different vectors. In this work we take a multi-view approach to learning acoustic word embeddings, in which we jointly learn to embed acoustic sequences and their corresponding character sequences. We use deep bidirectional LSTM embedding models and multi-view contrastive losses. We study the effect of different loss variants, including fixed-margin and cost-sensitive losses. Our acoustic word embeddings improve over previous approaches for the task of word discrimination. We also present results on other tasks that are enabled by the multi-view approach, including cross-view word discrimination and word similarity.One of the useful properties of this multi-view approach is that, unlike earlier work on acoustic word embeddings, it produces both acoustic and orthographic embeddings that can be directly compared. This makes it possible to use the same learned embeddings for multiple single-view and cross-view tasks. Our multi-view embeddings produce improved results over earlier work on acoustic word discrimination, as well as encouraging results on cross-view discrimination and word similarity. 1 | [
5882977,
216848261,
3226120,
13805769,
6628106,
11440692,
1957433
] | MULTI-VIEW RECURRENT NEURAL ACOUSTIC WORD EMBEDDINGS
Wanjia He [email protected]
Department of Computer Science
University of Chicago
Chicago, IL 60637, USA
Weiran Wang [email protected]
Toyota Technological Institute at Chicago
Chicago, IL 60637, USA
Karen Livescu [email protected]
Toyota Technological Institute at Chicago
Chicago, IL 60637, USA
MULTI-VIEW RECURRENT NEURAL ACOUSTIC WORD EMBEDDINGS
Published as a conference paper at ICLR 2017
Recent work has begun exploring neural acoustic word embeddings-fixeddimensional vector representations of arbitrary-length speech segments corresponding to words. Such embeddings are applicable to speech retrieval and recognition tasks, where reasoning about whole words may make it possible to avoid ambiguous sub-word representations. The main idea is to map acoustic sequences to fixed-dimensional vectors such that examples of the same word are mapped to similar vectors, while different-word examples are mapped to very different vectors. In this work we take a multi-view approach to learning acoustic word embeddings, in which we jointly learn to embed acoustic sequences and their corresponding character sequences. We use deep bidirectional LSTM embedding models and multi-view contrastive losses. We study the effect of different loss variants, including fixed-margin and cost-sensitive losses. Our acoustic word embeddings improve over previous approaches for the task of word discrimination. We also present results on other tasks that are enabled by the multi-view approach, including cross-view word discrimination and word similarity.One of the useful properties of this multi-view approach is that, unlike earlier work on acoustic word embeddings, it produces both acoustic and orthographic embeddings that can be directly compared. This makes it possible to use the same learned embeddings for multiple single-view and cross-view tasks. Our multi-view embeddings produce improved results over earlier work on acoustic word discrimination, as well as encouraging results on cross-view discrimination and word similarity. 1
INTRODUCTION
Word embeddings-continuous-valued vector representations of words-are an almost ubiquitous component of recent natural language processing (NLP) research. Word embeddings can be learned using spectral methods (Deerwester et al., 1990) or, more commonly in recent work, via neural networks (Bengio et al., 2003; Mnih & Hinton, 2007; Mikolov et al., 2013; Pennington et al., 2014). Word embeddings can also be composed to form embeddings of phrases, sentences, or documents (Kiros et al., 2015; Wieting et al., 2016; Iyyer et al., 2015).
In typical NLP applications, such embeddings are intended to represent the semantics of the corresponding words/sequences. In contrast, embeddings that represent the way a word or sequence sounds are rarely considered. In this work we address this problem, starting with embeddings of individual words. Such embeddings could be useful for tasks like spoken term detection (Fiscus et al., 2007), spoken query-by-example search (Anguera et al., 2014), or even speech recognition using a whole-word approach (Gemmeke et al., 2011; Bengio & Heigold, 2014). In tasks that involve comparing speech segments to each other, vector embeddings can allow more efficient and more accurate distance computation than sequence-based approaches such as dynamic time warping (Levin et al., 2013, 2015; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016).
We consider the problem of learning vector representations of acoustic sequences and orthographic (character) sequences corresponding to single words, such that the learned embeddings represent the way the word sounds. We take a multi-view approach, where we jointly learn the embeddings for character and acoustic sequences. We consider several contrastive losses, based on learning from pairs of matched acoustic-orthographic examples and randomly drawn mismatched pairs. The losses correspond to different goals for learning such embeddings; for example, we might want the embeddings of two waveforms to be close when they correspond to the same word and far when they correspond to different ones, or we might want the distances between embeddings to correspond to some ground-truth orthographic edit distance.
OUR APPROACH
In this section, we first introduce our approach for learning acoustic word embeddings in a multi-view setting, after briefly reviewing related approaches to put ours in context. We then discuss the particular neural network architecture we use, based on bidirectional long short-term memory (LSTM) networks (Hochreiter & Schmidhuber, 1997).
MULTI-VIEW LEARNING OF ACOUSTIC WORD EMBEDDINGS
Previous approaches have focused on learning acoustic word embeddings in a "single-view" setting. In the simplest approach, one uses supervision of the form "acoustic segment $x$ is an instance of the word $y$", and trains the embedding to be discriminative of the word identity. Formally, given a dataset of paired acoustic segments and word labels $\{(x_i, y_i)\}_{i=1}^{N}$, this approach solves the following optimization:
$$\min_{f,h}\ \mathrm{obj}_{\mathrm{classify}} := \frac{1}{N} \sum_{i=1}^{N} \ell\big(h(f(x_i)),\, y_i\big), \tag{1}$$
where network $f$ maps an acoustic segment into a fixed-dimensional feature vector/embedding, $h$ is a classifier that predicts the corresponding word label from the label set of the training data, and the loss $\ell$ measures the discrepancy between the prediction and ground-truth word label (one can use any multi-class classification loss here, and a typical choice is the cross-entropy loss where $h$ has a softmax top layer). The two networks $f$ and $h$ are trained jointly. Equivalently, one could consider the composition $h(f(x))$ as a classifier network, and use any intermediate layer's activations as the features. We refer to the objective in (1) as the "classifier network" objective, which has been used in several prior studies on acoustic word embeddings (Bengio & Heigold, 2014; Kamper et al., 2016; Settle & Livescu, 2016).
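To make this concrete, the following is a minimal PyTorch sketch of the classifier objective in (1); the single-layer unidirectional LSTM and the output size (the 1687 unique training words of Sec. 4.1) are simplifying assumptions rather than the paper's exact setup, whose released implementation is in TensorFlow (footnote 1).

```python
import torch
import torch.nn as nn

f = nn.LSTM(input_size=39, hidden_size=512, batch_first=True)  # embedding network f
h = nn.Linear(512, 1687)  # classifier h over the training word labels

def obj_classify(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # x: (batch, frames, 39) acoustic features; y: (batch,) word-label ids
    _, (state, _) = f(x)          # final hidden state is the embedding
    logits = h(state.squeeze(0))  # predict the word label
    return nn.functional.cross_entropy(logits, y)  # the loss l in Eq. (1)
```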
This objective, however, is not ideal for learning acoustic word embeddings. This is because the set of possible word labels is huge, and we may not have enough instances of each label to train a good classifier. In downstream tasks, we may encounter acoustic segments of words that did not appear in the embedding training set, and it is not clear that the classifier-based embeddings will have reasonable behavior on previously unseen words.
An alternative approach, based on Siamese networks (Bromley et al., 1993), uses supervision of the form "segment $x^1$ is similar to segment $x^2$, and is not similar to segment $x^3$", where two segments are considered similar if they have the same word label and dissimilar otherwise. Models based on Siamese networks have been used for a variety of representation learning problems in NLP (Hu et al., 2014; Wieting et al., 2016), vision (Hadsell et al., 2006), and speech (Synnaeve et al., 2014; Kamper et al., 2015), including acoustic word embeddings (Kamper et al., 2016; Settle & Livescu, 2016). A typical objective in this category enforces that the distance between $(x^1, x^3)$ is larger than the distance between $(x^1, x^2)$ by some margin:
$$\min_f\ \mathrm{obj}_{\mathrm{siamese}} := \frac{1}{N} \sum_{i=1}^{N} \max\Big(0,\ m + \mathrm{dis}\big(f(x_i^1), f(x_i^2)\big) - \mathrm{dis}\big(f(x_i^1), f(x_i^3)\big)\Big), \tag{2}$$
where the network $f$ extracts the fixed-dimensional embedding, the distance function $\mathrm{dis}(\cdot, \cdot)$ measures the distance between the two embedding vectors, and $m > 0$ is the margin parameter. The term "Siamese" (Bromley et al., 1993; Chopra et al., 2005) refers to the fact that the triplet $(x^1, x^2, x^3)$ share the same embedding network $f$.
Unlike the classification-based loss, the Siamese network loss does not enforce hard decisions on the label of each segment. Instead it tries to learn embeddings that respect distances between word pairs, which can be helpful for dealing with unseen words. The Siamese network approach also uses more examples in training, as one can easily generate many more triplets than (segment, label) pairs, and it is not limited to those labels that occur a sufficient number of times in the training set.
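A sketch of the Siamese margin loss (2) under the same caveats; `emb` stands for the shared embedding network $f$ applied to a batch, which we leave abstract here, and the margin value is illustrative.

```python
import torch
import torch.nn.functional as F

def obj_siamese(emb, x1, x2, x3, m: float = 0.5) -> torch.Tensor:
    # x1/x2 share a word label; x1/x3 have different labels
    dis = lambda a, b: 1 - F.cosine_similarity(a, b, dim=-1)
    same = dis(emb(x1), emb(x2))   # should become small
    diff = dis(emb(x1), emb(x3))   # should exceed `same` by the margin m
    return torch.clamp(m + same - diff, min=0).mean()
```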
The above approaches treat the word labels as discrete classes, which ignores the similarity between different words, and does not take advantage of the more complex information contained in the character sequences corresponding to word labels. The orthography naturally reflects some aspects of similarity between the words' pronunciations, which should also be reflected in the acoustic embeddings. One way to learn features from multiple sources of complementary information is using a multi-view representation learning setting. We take this approach, and consider the acoustic segment and the character sequence to be two different views of the pronunciation of the word.
While many deep multi-view learning objectives are applicable (Ngiam et al., 2011; Srivastava & Salakhutdinov, 2014; Sohn et al., 2014; Wang et al., 2015), we consider the multi-view contrastive loss objective of Hermann & Blunsom (2014), which is simple to optimize and implement and performs well in practice. In this algorithm, we embed acoustic segments $x$ by a network $f$ and character label sequences $c$ by another network $g$ into a common space, and use weak supervision of the form "for a paired segment $x^+$ and its character label sequence $c^+$, the distance between their embeddings is much smaller than the distance between the embeddings of $x^+$ and an unmatched character label sequence $c^-$". Formally, we optimize the following objective with such supervision:
$$\min_{f,g}\ \mathrm{obj}_0 := \frac{1}{N} \sum_{i=1}^{N} \max\Big(0,\ m + \mathrm{dis}\big(f(x_i^+), g(c_i^+)\big) - \mathrm{dis}\big(f(x_i^+), g(c_i^-)\big)\Big), \tag{3}$$
where $c_i^-$ is a negative character label sequence of $x_i^+$ to be contrasted with the positive/correct character sequence $c_i^+$, and $m$ is the margin parameter. In this paper we use the cosine distance $\mathrm{dis}(a, b) = 1 - \big\langle \frac{a}{\|a\|}, \frac{b}{\|b\|} \big\rangle$.²
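The multi-view loss $\mathrm{obj}_0$ and this cosine distance translate directly into a few tensor operations; the sketch below assumes the batched embeddings $f(x^+)$, $g(c^+)$, and $g(c^-)$ have already been computed.

```python
import torch
import torch.nn.functional as F

def cos_dis(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # dis(a, b) = 1 - <a/||a||, b/||b||>
    return 1 - F.cosine_similarity(a, b, dim=-1)

def obj_0(fx_pos, gc_pos, gc_neg, m: float = 0.5) -> torch.Tensor:
    # Eq. (3): pull matched (acoustic, character) pairs together and push
    # the acoustic embedding away from a mismatched character embedding.
    return torch.clamp(m + cos_dis(fx_pos, gc_pos) - cos_dis(fx_pos, gc_neg), min=0).mean()
```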
Note that in the multi-view setting, we have multiple ways of generating triplets that contain one positive pair and one negative pair each. Below are the other three objectives we explore in this paper:
$$\min_{f,g}\ \mathrm{obj}_1 := \frac{1}{N} \sum_{i=1}^{N} \max\Big(0,\ m + \mathrm{dis}\big(f(x_i^+), g(c_i^+)\big) - \mathrm{dis}\big(g(c_i^+), g(c_i^-)\big)\Big), \tag{4}$$
$$\min_{f,g}\ \mathrm{obj}_2 := \frac{1}{N} \sum_{i=1}^{N} \max\Big(0,\ m + \mathrm{dis}\big(f(x_i^+), g(c_i^+)\big) - \mathrm{dis}\big(f(x_i^-), g(c_i^+)\big)\Big), \tag{5}$$
$$\min_{f,g}\ \mathrm{obj}_3 := \frac{1}{N} \sum_{i=1}^{N} \max\Big(0,\ m + \mathrm{dis}\big(f(x_i^+), g(c_i^+)\big) - \mathrm{dis}\big(f(x_i^+), f(x_i^-)\big)\Big). \tag{6}$$
Here $x_i^-$ in (5) and (6) refers to a negative acoustic feature sequence, that is, one with a different label from $x_i^+$. We note that $\mathrm{obj}_1$ and $\mathrm{obj}_3$ contain distances between same-view embeddings, and are less thoroughly explored in the literature. We will also consider combinations of $\mathrm{obj}_0$ through $\mathrm{obj}_3$.
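For completeness, here are sketches of the variants (4)-(6), reusing `cos_dis` from the previous sketch; each swaps in a different negative pair, and combinations such as the symmetrized $\mathrm{obj}_0 + \mathrm{obj}_2$ are plain sums.

```python
import torch

def obj_1(fx_p, gc_p, gc_n, m=0.5):   # Eq. (4): text-text negative distance
    return torch.clamp(m + cos_dis(fx_p, gc_p) - cos_dis(gc_p, gc_n), min=0).mean()

def obj_2(fx_p, gc_p, fx_n, m=0.5):   # Eq. (5): negative acoustic vs. positive text
    return torch.clamp(m + cos_dis(fx_p, gc_p) - cos_dis(fx_n, gc_p), min=0).mean()

def obj_3(fx_p, gc_p, fx_n, m=0.5):   # Eq. (6): acoustic-acoustic negative distance
    return torch.clamp(m + cos_dis(fx_p, gc_p) - cos_dis(fx_p, fx_n), min=0).mean()

# e.g. the combination that performs best in the experiments:
# loss = obj_0(fx_p, gc_p, gc_n) + obj_2(fx_p, gc_p, fx_n)
```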
Finally, thus far we have considered losses that do not explicitly take into account the degree of difference between the positive and negative pairs (although the learned embeddings may implicitly learn this through the relationship between sequences in the two views). We also consider a cost-sensitive objective designed to explicitly arrange the embedding space such that word similarity is respected. In (3), instead of a fixed margin $m$, we use:
$$m(c^+, c^-) := m_{\max} \cdot \frac{\min\big(t_{\max},\ \mathrm{editdis}(c^+, c^-)\big)}{t_{\max}}, \tag{7}$$
where $t_{\max} > 0$ is a threshold for edit distances (all edit distances above $t_{\max}$ are considered equally bad), and $m_{\max}$ is the maximum margin we impose. The margin is set to $m_{\max}$ if the edit distance between the two character sequences is above $t_{\max}$; otherwise it scales linearly with the edit distance $\mathrm{editdis}(c^+, c^-)$. We use the Levenshtein distance as the edit distance. Here we explore the cost-sensitive margin with $\mathrm{obj}_0$, but it could in principle be used with other objectives as well.
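The cost-sensitive margin (7) only needs an edit-distance routine; below is a sketch with a standard dynamic-programming Levenshtein distance, using default values from the tuned ranges reported in Sec. 4.2.

```python
def levenshtein(a: str, b: str) -> int:
    # standard dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def cost_sensitive_margin(c_pos: str, c_neg: str, m_max=0.7, t_max=11) -> float:
    # Eq. (7): margin scales linearly with edit distance, capped at t_max
    return m_max * min(t_max, levenshtein(c_pos, c_neg)) / t_max
```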
² In experiments, we use the unit-length vector $\frac{f(x)}{\|f(x)\|}$ as the embedding. It tends to perform better than $f(x)$ and more directly reflects the cosine similarity. This is equivalent to adding a nonlinear normalization layer on top of $f$.
RECURRENT NEURAL NETWORK ARCHITECTURE
Since the inputs of both views have a sequential structure, we implement both $f$ and $g$ with recurrent neural networks, in particular long short-term memory (LSTM) networks. Recurrent neural networks are the state-of-the-art models for a number of speech tasks including speech recognition (Graves et al., 2013), and LSTM-based acoustic word embeddings have produced the best results on one of the tasks in our experiments (Settle & Livescu, 2016).
As shown in Figure 1, our f and g are produced by multi-layer (stacked) bidirectional LSTMs. The inputs can be any frame-level acoustic feature representation and vector representation of the characters in the orthographic input. At each layer, two LSTM cells process the input sequence from left to right and from right to left respectively. At intermediate layers, the outputs of the two LSTMs at each time step are concatenated to form the input sequence to the next layer. At the top layer, the last time step outputs of the two LSTMs are concatenated to form a fixed-dimensional embedding of the view, and the embeddings are then used to calculate the cosine distances in our objectives.
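A sketch of one such embedding tower in PyTorch (the paper's released implementation is in TensorFlow, cf. footnote 1); the dropout placement here is a simplification of the scheme described in Sec. 4.2, and the final normalization follows footnote 2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMEmbedder(nn.Module):
    def __init__(self, input_dim: int, hidden: int = 512, layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True, dropout=0.4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, input_dim); concatenate the final states of the
        # top-layer forward and backward LSTMs into one embedding vector.
        _, (h_n, _) = self.lstm(x)
        emb = torch.cat([h_n[-2], h_n[-1]], dim=-1)  # (batch, 2 * hidden)
        return F.normalize(emb, dim=-1)              # unit length (footnote 2)

f = BiLSTMEmbedder(input_dim=39)  # acoustic view: MFCCs + deltas
g = BiLSTMEmbedder(input_dim=26)  # character view: one-hot letters
```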
RELATED WORK
We are aware of no prior work on multi-view learning of acoustic and character-based word embeddings. However, acoustic word embeddings learned in other ways have recently begun to be studied. Levin et al. (2013) proposed an approach for embedding an arbitrary-length segment of speech as a fixed-dimensional vector, based on representing each word as a vector of dynamic time warping (DTW) distances to a set of template words. This approach produced improved performance on a word discrimination task compared to using raw DTW distances, and was later also applied successfully for a query-by-example task (Levin et al., 2015). One disadvantage of this approach is that, while DTW handles the issue of variable sequence lengths, it is computationally costly and involves a number of DTW parameters that are not learned. Kamper et al. (2016) and Settle & Livescu (2016) later improved on Levin et al.'s word discrimination results using convolutional neural networks (CNNs) and recurrent neural networks (RNNs) trained with either a classification or contrastive loss. Bengio & Heigold (2014) trained convolutional neural network (CNN)-based acoustic word embeddings for rescoring the outputs of a speech recognizer, using a loss combining classification and ranking criteria. Maas et al. (2012) trained a CNN to predict a semantic word embedding from an acoustic segment, and used the resulting embeddings as features in a segmental word-level speech recognizer. Harwath and Glass (Harwath & Glass, 2015; Harwath et al., 2016; Harwath & Glass, 2017) jointly trained CNN embeddings of images and spoken captions, and showed that word-like unit embeddings can be extracted from the speech model. CNNs require normalizing the duration of the input sequences, which has typically been done via padding. RNNs, on the other hand, are more flexible in dealing with very different-length sequences. Chen et al. (2015) used long short-term memory (LSTM) networks with a classification loss to embed acoustic words for a simple (single-query) query-by-example search task. Chung et al. (2016) learned acoustic word embeddings based on recurrent neural network (RNN) autoencoders, and found that they improve over DTW for a word discrimination task similar to that of Levin et al. (2013). Audhkhasi et al. (2017) learned autoencoders for acoustic and written words, as well as a model for comparing the two, and applied these to a keyword search task.
Evaluation of acoustic word embeddings in downstream tasks such as speech recognition and search can be costly, and can obscure details of embedding models and training approaches. Most evaluations have been based on word discrimination (the task of determining whether two speech segments correspond to the same word or not), which can be seen as a proxy for query-by-example search (Levin et al., 2013; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016). One difference between word discrimination and search/recognition tasks is that in word discrimination the word boundaries are given. However, prior work has been able to apply results from word discrimination (Levin et al., 2013) to improve a query-by-example system without known word boundaries (Levin et al., 2015), by simply applying their embeddings to non-word segments as well.
The only prior work focused on vector embeddings of character sequences explicitly aimed at representing their acoustic similarity is that of Ghannay et al. (2016), who proposed evaluations based on nearest-neighbor retrieval, phonetic/orthographic similarity measures, and homophone disambiguation. We use related tasks here, as well as acoustic word discrimination for comparison with prior work on acoustic embeddings.
EXPERIMENTS AND RESULTS
The ultimate goal is to gain improvements in speech systems where word-level discrimination is needed, such as speech recognition and query-by-example search. However, in order to focus on the content of the embeddings themselves and to more quickly compare a variety of models, it is desirable to have surrogate tasks that serve as intrinsic measures of performance. Here we consider three forms of evaluation, all based on measuring whether cosine distances between learned embeddings correspond well to desired properties.
In the first task, acoustic word discrimination, we are given a pair of acoustic sequences and must decide whether they correspond to the same word or to different words. This task has been used in several prior papers on acoustic word embeddings (Kamper et al., 2015, 2016; Chung et al., 2016; Settle & Livescu, 2016) and is a proxy for query-by-example search. For each given spoken word pair, we calculate the cosine distance between their embeddings. If the cosine distance is below a threshold, we output "yes" (same word), otherwise we output "no" (different words). The performance measure is the average precision (AP), which is the area under the precision-recall curve generated by varying the threshold and has a maximum value of 1.
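As an illustration, the discrimination evaluation can be sketched as follows, scoring every distinct pair by cosine similarity and computing AP with scikit-learn; the helper name is our own.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def word_discrimination_ap(embs: np.ndarray, labels: np.ndarray) -> float:
    # embs: (n, d) unit-norm embeddings; labels: (n,) word ids
    sim = embs @ embs.T                     # cosine similarity (unit norm)
    iu = np.triu_indices(len(labels), k=1)  # all distinct pairs
    same = (labels[:, None] == labels[None, :])[iu]
    return average_precision_score(same, sim[iu])
```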
In our multi-view setup, we embed not only the acoustic words but also the character sequences. This allows us to use our embeddings also for tasks involving comparisons between written and spoken words. For example, the standard task of spoken term detection (Fiscus et al., 2007) involves searching for examples of a given text query in spoken documents. This task is identical to query-by-example except that the query is given as text. In order to explore the potential of multi-view embeddings for such tasks, we design another proxy task, cross-view word discrimination. Here we are given a pair of inputs, one a written word and one an acoustic word segment, and our task is to determine if the acoustic signal is an example of the written word. The evaluation proceeds analogously to the acoustic word discrimination task: we output "yes" if the cosine distance between the embeddings of the written and spoken sequences is below some threshold, and measure performance as the average precision (AP) over all thresholds.
Finally, we also would like to obtain a more fine-grained measure of whether the learned embeddings capture our intuitive sense of similarity between words. Being able to capture word similarity may also be useful in building query or recognition systems that fail gracefully and produce human-like errors. For this purpose we measure the rank correlation between embedding distances and character edit distances. This is analogous to the evaluation of semantic word embeddings via the rank correlation between embedding distances and human similarity judgments (Finkelstein et al., 2001; Hill et al., 2015). In our case, however, we do not use human judgments since the ground-truth edit distances themselves provide a good measure. We refer to this as the word similarity task, and we apply this measure to both pairs of acoustic embeddings and pairs of character sequence embeddings. Similar measures have been proposed by Ghannay et al. (2016) to evaluate acoustic word embeddings, although they considered only near neighbors of each word whereas we consider the correlation across the full range of word pairs.
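A corresponding sketch of the word similarity measure, reusing the `levenshtein` helper from the earlier sketch; Spearman's ρ comes from SciPy.

```python
import numpy as np
from scipy.stats import spearmanr

def word_similarity_rho(embs: np.ndarray, words: list) -> float:
    iu = np.triu_indices(len(words), k=1)
    emb_dist = (1 - embs @ embs.T)[iu]  # cosine distances (unit-norm embs)
    edit = np.array([[levenshtein(a, b) for b in words] for a in words])[iu]
    return spearmanr(emb_dist, edit).correlation
```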
In the experiments described below, we first focus on the acoustic word discrimination task for purposes of initial exploration and hyperparameter search, and then largely fix the models for evaluation using the cross-view word discrimination and word similarity measures.
DATA
We use the same experimental setup and data as in Kamper et al. (2015, 2016) and Settle & Livescu (2016). The task and setup were first developed by Carlin et al. (2011). The data is drawn from the Switchboard English conversational speech corpus (Godfrey et al., 1992). The spoken word segments range in duration from 50 to 200 frames (0.5-2 seconds). The train/dev/test splits contain 9971/10966/11024 pairs of acoustic segments and character sequences, corresponding to 1687/3918/3390 unique words. In computing the AP for the dev or test set, all pairs in the set are used, yielding approximately 60 million word pairs.
The input to the embedding model in the acoustic view is a sequence of 39-dimensional vectors (one per frame) of standard mel frequency cepstral coefficients (MFCCs) and their first and second derivatives. The input to the character sequence embedding model is a sequence of 26-dimensional one-hot vectors indicating each character of the word's orthography.
MODEL DETAILS AND HYPERPARAMETER TUNING
We experiment with different neural network architectures for each view, varying the number of stacked LSTM layers, the number of hidden units for each layer, and the use of single- or bidirectional LSTM cells. A coarse grid search shows that 2-layer bidirectional LSTMs with 512 hidden units per direction per layer perform well on the acoustic word discrimination task, and we keep this structure fixed for subsequent experiments (see Appendix A for more details). We use the outputs of the top-layer LSTMs as the learned embedding for each view, which is 1024-dimensional if bidirectional LSTMs are used.
In training, we use dropout on the inputs of the acoustic view and between stacked layers for both views. The architecture is illustrated in Figure 1. For each training example, our contrastive losses require a corresponding negative example. We generate a negative character label sequence by uniformly sampling a word label from the training set that is different from the positive label. We perform a new negative label sampling at the beginning of each epoch. Similarly, negative acoustic feature sequences are uniformly sampled from all of the differently labeled acoustic feature sequences in the training set.
The network weights are initialized with values sampled uniformly from the range [−0.05, 0.05].
We use the Adam optimizer (Kingma & Ba, 2015) for updating the weights using mini-batches of 20 acoustic segments, with an initial learning rate tuned over {0.0001, 0.001}. Dropout is used at each layer, with the rate tuned over {0, 0.2, 0.4, 0.5}, in which 0.4 usually outperformed others. The margin in our basic contrastive objectives $\mathrm{obj}_0$-$\mathrm{obj}_3$ is tuned over {0.3, 0.4, 0.5, 0.6, 0.7}, out of which 0.4 and 0.5 typically yield best results. For $\mathrm{obj}_0$ with the cost-sensitive margin, we tune the maximum margin $m_{\max}$ over {0.5, 0.6, 0.7} and the threshold $t_{\max}$ over {9, 11, 13}. We train each model for up to 1000 epochs. The model that gives the best AP on the development set is used for evaluation on the test set.
EFFECTS OF DIFFERENT OBJECTIVES
We presented four contrastive losses (3)-(6) and potential combinations in Section 2.1. We now explore the effects of these different objectives on the word discrimination tasks. Table 1 shows the development set AP for acoustic and cross-view word discrimination achieved using the various objectives. We tuned the objectives for the acoustic discrimination task, and then used the corresponding converged models for the cross-view task. Of the simple contrastive objectives, $\mathrm{obj}_0$ and $\mathrm{obj}_2$ (which involve only cross-view distances) slightly outperform the other two on the acoustic word discrimination task. The best-performing objective is the "symmetrized" objective $\mathrm{obj}_0 + \mathrm{obj}_2$, which significantly outperforms all individual objectives (and the combination of the four). Finally, the cost-sensitive objective is very competitive as well, while falling slightly short of the best performance. We note that a similar objective to our $\mathrm{obj}_0 + \mathrm{obj}_2$ was used by Vendrov et al. (2016) for the task of caption-image retrieval, where the authors essentially use all non-paired examples from the other view in the minibatch as negative examples (instead of randomly sampling one negative example as we do) to be contrasted with one paired example.

| Method | Test AP (acoustic) | Test AP (cross-view) |
|---|---|---|
| MFCCs + DTW (Kamper et al., 2016) | 0.214 | - |
| Correspondence autoencoder + DTW (Kamper et al., 2015) | 0.469 | - |
| Phone posteriors + DTW (Carlin et al., 2011) | 0.497 | - |
| Siamese CNN (Kamper et al., 2016) | 0.549 | - |
| Siamese LSTM (Settle & Livescu, 2016) | 0.671 | - |
| Our multi-view LSTM, $\mathrm{obj}_0 + \mathrm{obj}_2$ | 0.806 | 0.892 |

Table 2: Final test set AP for different word discrimination approaches. The first line is a baseline using no word embeddings, but rather applying dynamic time warping (DTW) to the input MFCC features. The second and third lines are prior results using no word embeddings (but rather using DTW with learned correspondence autoencoder-based or phone posterior features, trained on larger external (in-domain) data). The remaining prior work corresponds to using cosine similarity between acoustic word embeddings.

Figure 2 shows the progression of the development set AP for acoustic word discrimination over 1000 training epochs, using several of the objectives, where AP is evaluated every 5 epochs. We observe that even after 1000 epochs, the development set AP has not quite saturated, indicating that it may be possible to further improve performance.
Overall, our best-performing objective is the combined $\mathrm{obj}_0 + \mathrm{obj}_2$, and we use it for reporting final test-set results. Table 2 shows the test set AP for both the acoustic and cross-view tasks using our final model ("multi-view LSTM"). For comparison, we also include acoustic word discrimination results reported previously by Kamper et al. (2016) and Settle & Livescu (2016). Previous approaches have not addressed the problem of learning embeddings jointly with the text view, so they cannot be evaluated on the cross-view task.
WORD SIMILARITY TASKS

Table 3 gives our results on the word similarity tasks, that is, the rank correlation (Spearman's ρ) between embedding distances and orthographic edit distance (Levenshtein distance between character sequences). We measure this correlation for both our acoustic word embeddings and for our text embeddings. In the case of the text embeddings, we could of course directly measure the Levenshtein distance between the inputs; here we are simply measuring how much of this information the text embeddings are able to retain.
| Objective | ρ (acoustic embedding) | ρ (text embedding) |
|---|---|---|
| fixed-margin ($\mathrm{obj}_0$) | 0.179 | 0.207 |
| cost-sensitive margin ($\mathrm{obj}_0$) | 0.240 | 0.270 |

Table 3: Word similarity results using fixed-margin and cost-sensitive objectives, given as rank correlation (Spearman's ρ) between embedding distances and orthographic edit distances.
Interestingly, while the cost-sensitive objective did not produce substantial gains on the word discrimination tasks above, it does greatly improve the performance on this word similarity measure. This is a satisfying observation, since the cost-sensitive loss is trying to improve precisely this relationship between distances in the embedding space and the orthographic edit distance.
Although we have trained our embeddings using orthographic labels, it is also interesting to consider how closely aligned the embeddings are with the corresponding phonetic pronunciations. For comparison, the rank correlation between our acoustic embeddings and phonetic edit distances is 0.226, and for our text embeddings it is 0.241, which are relatively close to the rank correlations with orthographic edit distance. A future direction is to directly train embeddings with phonetic sequence supervision rather than orthography; this setting involves somewhat stronger supervision, but it is easy to obtain in many cases.
Another interesting point is that the performance is not a great deal better for the text embeddings than for the acoustic embeddings, even though the text embeddings have at their disposal the text input itself. We believe this has to do with the distribution of words in our data: while the data includes a large variety of words, it does not include many very similar pairs. In fact, of all possible pairs of unique training set words, fewer than 2% have an edit distance below 5 characters. Therefore, there may not be sufficient information to learn to distinguish detailed differences among character sequences, and the cost-sensitive loss ultimately does not learn much more than to separate different words. In future work it would be interesting to experiment with data sets that have a larger variety of similar words.
VISUALIZATION OF LEARNED EMBEDDINGS

Figure 3 gives a 2-dimensional t-SNE (van der Maaten & Hinton, 2008) visualization of selected acoustic and character sequences from the development set, including some that were seen in the training set and some previously unseen words. The previously seen words in this figure were selected uniformly at random among those that appear at least 15 times in the development set (the unseen words are the only six that appear at least 15 times in the development set). This visualization demonstrates that the acoustic embeddings cluster very tightly and are very close to the text embeddings, and that unseen words cluster nearly as well as previously seen ones.
While Figure 3 shows the relationship among the multiple acoustic embeddings and the text embeddings, the words are all very different so we cannot draw conclusions about the relationships between words. Figure 4 provides another visualization, this time exploring the relationship among the text embeddings of a number of closely related words, namely all development set words ending in "-ly", "-ing", and "-tion". This visualization confirms that related words are embedded close together, with the words sharing a suffix forming fairly well-defined clusters.
CONCLUSION
We have presented an approach for jointly learning acoustic word embeddings and their orthographic counterparts. This multi-view approach produces improved acoustic word embedding performance over previous approaches, and also has the benefit that the same embeddings can be applied for both spoken and written query tasks. We have explored a variety of contrastive objectives: ones with a fixed margin that aim to separate same and different word pairs, as well as a cost-sensitive loss that aims to capture orthographic edit distances. While the losses generally perform similarly for word discrimination tasks, the cost-sensitive loss improves the correlation between embedding distances and orthographic distances. One interesting direction for future work is to directly use knowledge about phonetic pronunciations, in both evaluation and training. Another direction is to extend our approach to directly train on both word and non-word segments.
A ADDITIONAL ANALYSIS
We first explore the effect of network architectures for our embedding models. We learn embeddings using objective $\mathrm{obj}_0$ and evaluate them on the acoustic and cross-view word discrimination tasks. The resulting average precisions on the development set are given in Table 4. All of the models were trained for 1000 epochs, except for the 1-layer unidirectional models, which converged after 500 epochs. It is clear that bidirectional LSTMs are more successful than unidirectional LSTMs for these tasks, and two layers of LSTMs are much better than a single layer of LSTMs. We did not observe significant further improvement by using more than two layers of LSTMs. For all other experiments, we fix the architecture to 2-layer bidirectional LSTMs for each view.
| Architecture | Dev AP (acoustic word discrimination) | Dev AP (cross-view word discrimination) |
|---|---|---|
| 1-layer unidirectional | 0.379 | 0.616 |
| 1-layer bidirectional | 0.466 | 0.690 |
| 2-layer bidirectional | 0.659 | 0.791 |

Table 4: Average precision (AP) for acoustic and cross-view word discrimination tasks on the development set, using embeddings learned with objective $\mathrm{obj}_0$ and different LSTM architectures.

Figure 5: Precision-recall curve (left: two-layer bidirectional LSTM trained with $\mathrm{obj}_0 + \mathrm{obj}_2$ for the word discrimination task) and scatter plot of embedding distances vs. orthographic distances (right: cost-sensitive margin model for the word similarity task), for our best embedding models.
In Figure 5 we also give the precision-recall curve for our best models, as well as the scatter plot of cosine distances between acoustic embeddings vs. orthographic edit distances.
Figure 1: Illustration of our embedding architecture and contrastive multi-view approach.
Figure 2: Development set AP for several objectives on acoustic word discrimination.
Figure 3: Visualization via t-SNE of acoustic word embeddings (colored markers) and corresponding character sequence embeddings (text), for a set of development set words with at least 15 acoustic tokens. Words seen in training are in lower-case; unseen words are in upper-case.
Figure 4: Visualization via t-SNE of character sequence embeddings for words with the suffixes "-ly" (blue), "-ing" (red), and "-tion" (green).
Table 1: Word discrimination performance with different objectives.
¹ Our TensorFlow implementation is available at https://github.com/opheadacheh/Multi-view-neural-acoustic-words-embeddings
ACKNOWLEDGMENTS

This research was supported by a Google Faculty Award and by NSF grant IIS-1321015. The opinions expressed in this work are those of the authors and do not necessarily reflect the views of the funding agency. This research used GPUs donated by NVIDIA Corporation. We thank Herman Kamper and Shane Settle for their assistance with the data and experimental setup.
REFERENCES

Xavier Anguera, Luis Javier Rodriguez-Fuentes, Igor Szöke, Andi Buzo, and Florian Metze. Query by example search on speech at mediaeval 2014. In MediaEval, 2014.
Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana Ramabhadran, and Brian Kingsbury. End-to-end ASR-free keyword search from speech. arXiv preprint arXiv:1701.04313, 2017.
Samy Bengio and Georg Heigold. Word embeddings for speech recognition. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc., 2014.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137-1155, 2003.
Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a siamese time delay neural network. In Advances in Neural Information Processing Systems (NIPS), pp. 737-744, 1993.
Michael A Carlin, Samuel Thomas, Aren Jansen, and Hynek Hermansky. Rapid evaluation of speech representations for spoken term discovery. In Proc. Interspeech, 2011.
Guoguo Chen, Carolina Parada, and Tara N Sainath. Query-by-example keyword spotting using long short-term memory networks. In Proc. ICASSP, 2015.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In IEEE Computer Society Conf. Computer Vision and Pattern Recognition, pp. 539-546, 2005.
Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, and Hung-Yi Lee. Unsupervised learning of audio segment representations using sequence-to-sequence recurrent neural networks. In Proc. Interspeech, 2016.
Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391, 1990.
Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. In Proceedings of the 10th International Conference on World Wide Web, 2001.
Jonathan G Fiscus, Jerome Ajot, John S Garofolo, and George Doddingtion. Results of the 2006 spoken term detection evaluation. In Proc. SIGIR, volume 7, pp. 51-57. Citeseer, 2007.
Jort F Gemmeke, Tuomas Virtanen, and Antti Hurmalainen. Exemplar-based sparse representations for noise robust automatic speech recognition. IEEE Transactions on Acoustics, Speech, and Language Processing, 19(7):2067-2080, 2011.
Sahar Ghannay, Yannick Esteve, Nathalie Camelin, and Paul Deleglise. Evaluation of acoustic word embeddings. In Proc. ACL Workshop on Evaluating Vector-Space Representations for NLP, 2016.
John J Godfrey, Edward C Holliman, and Jane McDaniel. SWITCHBOARD: Telephone speech corpus for research and development. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc., 1992.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc., 2013.
Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In IEEE Computer Society Conf. Computer Vision and Pattern Recognition, 2006.
David Harwath and James Glass. Deep multimodal semantic embeddings for speech and images. In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2015.
David Harwath and James R Glass. Learning word-like units from joint audio-visual analysis. arXiv preprint arXiv:1701.07481, 2017.
David Harwath, Antonio Torralba, and James Glass. Unsupervised learning of spoken language with visual context. In Advances in Neural Information Processing Systems (NIPS), 2016.
Karl Moritz Hermann and Phil Blunsom. Multilingual distributed representations without word alignment. In Int. Conf. Learning Representations, 2014. arXiv:1312.6173 [cs.CL].
Felix Hill, Roi Reichart, and Anna Korhonen. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4), 2015.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems (NIPS), 2014.
Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. Deep unordered composition rivals syntactic methods for text classification. In Proc. Association for Computational Linguistics, 2015.
Herman Kamper, Micah Elsner, Aren Jansen, and Sharon J. Goldwater. Unsupervised neural network based feature extraction using weak top-down constraints. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc., 2015.
Herman Kamper, Weiran Wang, and Karen Livescu. Deep convolutional acoustic word embeddings using word-pair side information. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc., 2016.
Diederik Kingma and Jimmy Ba. ADAM: A method for stochastic optimization. In Int. Conf. Learning Representations, 2015.
Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems (NIPS), 2015.
Keith Levin, Katharine Henry, Aren Jansen, and Karen Livescu. Fixed-dimensional acoustic embeddings of variable-length segments in low-resource settings. In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2013.
Keith Levin, Aren Jansen, and Benjamin Van Durme. Segmental acoustic indexing for zero resource keyword search. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc., 2015.
Andrew L Maas, Stephen D Miller, Tyler M O'Neil, Andrew Y Ng, and Patrick Nguyen. Word-level acoustic modeling with convolutional vector regression. In Proc. ICML Workshop on Representation Learning, 2012.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS), 2013.
Andriy Mnih and Geoffrey Hinton. Three new graphical models for statistical language modelling. In ICML, 2007.
Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Ng. Multimodal deep learning. In ICML, pp. 689-696, 2011.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In Proc. Conference on Empirical Methods in Natural Language Processing, 2014.
Shane Settle and Karen Livescu. Discriminative acoustic word embeddings: Recurrent neural network-based approaches. In Proc. IEEE Workshop on Spoken Language Technology (SLT), 2016.
Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207-218, 2014.
Kihyuk Sohn, Wenling Shang, and Honglak Lee. Improved multimodal deep learning with variation of information. In Advances in Neural Information Processing Systems (NIPS), pp. 2141-2149, 2014.
Nitish Srivastava and Ruslan Salakhutdinov. Multimodal learning with deep boltzmann machines. Journal of Machine Learning Research, pp. 2949-2980, 2014.
Gabriel Synnaeve, Thomas Schatz, and Emmanuel Dupoux. Phonetics embedding learning with side information. In Proc. IEEE Workshop on Spoken Language Technology (SLT), 2014.
Laurens J. P. van der Maaten and Geoffrey E. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, November 2008.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. In Int. Conf. Learning Representations, 2016.
Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representation learning. In ICML, pp. 1083-1092, 2015.
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. Towards universal paraphrastic sentence embeddings. In Int. Conf. Learning Representations, 2016. |
264,490,854 | FAIRRET: A FRAMEWORK FOR DIFFERENTIABLE FAIRNESS REGULARIZATION TERMS | Current tools for machine learning fairness only admit a limited range of fairness definitions and have seen little integration with automatic differentiation libraries, despite the central role these libraries play in modern machine learning pipelines.We introduce a framework of fairness regularization terms (FAIRRETs) which quantify bias as modular objectives that are easily integrated in automatic differentiation pipelines.By employing a general definition of fairness in terms of linear-fractional statistics, a wide class of FAIRRETs can be computed efficiently.Experiments show the behavior of their gradients and their utility in enforcing fairness with minimal loss of predictive power compared to baselines.Our contribution includes a PyTorch implementation of the FAIRRET framework.Related Work Fairness tools are classified as preprocessing, inprocessing or postprocessing(Mehrabi et al., 2021).FAIRRETs perform inprocessing, as they are minimized during training. | [] | FAIRRET: A FRAMEWORK FOR DIFFERENTIABLE FAIRNESS REGULARIZATION TERMS
A Preprint. 26 Oct 2023. arXiv:2310.17256v1 [cs.LG]

Maarten Buyl, Marybeth Defrance, and Tijl De Bie

Ghent University
Current tools for machine learning fairness only admit a limited range of fairness definitions and have seen little integration with automatic differentiation libraries, despite the central role these libraries play in modern machine learning pipelines. We introduce a framework of fairness regularization terms (FAIRRETs) which quantify bias as modular objectives that are easily integrated in automatic differentiation pipelines. By employing a general definition of fairness in terms of linear-fractional statistics, a wide class of FAIRRETs can be computed efficiently. Experiments show the behavior of their gradients and their utility in enforcing fairness with minimal loss of predictive power compared to baselines. Our contribution includes a PyTorch implementation of the FAIRRET framework.

Related Work. Fairness tools are classified as preprocessing, inprocessing or postprocessing (Mehrabi et al., 2021). FAIRRETs perform inprocessing, as they are minimized during training.
Introduction
Many machine learning fairness methods aim to enforce mathematical formalizations of non-discrimination principles (Mehrabi et al., 2021), often by requiring statistics to be equal between groups (Agarwal et al., 2018). For example, we may require that men and women receive positive decisions at equal rates in binary classification (Dwork et al., 2012). The main interest in fairness tools is to meet such constraints without destroying the accuracy of the ML model.
A large class of these fairness tools utilizes regularization terms, i.e. quantifications of unfairness that can be added to the existing error term of an unfair ML model (Kamishima et al., 2012; Berk et al., 2017; Zafar et al., 2019; Padala and Gujar, 2021; Padh et al., 2021; Buyl and De Bie, 2022). The modularity of such loss terms appears to align well with the paradigm of automatic differentiation libraries like PyTorch (Paszke et al., 2019), which have become the bedrock of modern machine learning pipelines. However, the practical use of this modularity has seen little interest thus far.
Contributions. Hence, we formalize a modular framework of fairness regularization terms (FAIRRETs) and unify recent advances in fairness tools. A FAIRRET quantifies a model's unfairness as a single value that is minimized like any other objective through automatic differentiation.
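To illustrate this workflow, below is a minimal, self-contained PyTorch sketch in which a toy violation-style term for demographic parity is added to an ordinary classification loss; all names and the specific term are our own illustrative choices, not necessarily the released package's API.

```python
import torch
import torch.nn.functional as F

d_x, lam = 16, 1.0
model = torch.nn.Linear(d_x, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def fairret_dp(logit: torch.Tensor, sens: torch.Tensor) -> torch.Tensor:
    # toy violation-style term: squared gaps between each group's (soft)
    # positive rate and the overall positive rate; fully differentiable
    p = torch.sigmoid(logit)
    per_group = (sens.T @ p) / sens.sum(dim=0)
    return ((per_group - p.mean()) ** 2).sum()

x = torch.randn(128, d_x)
s = F.one_hot(torch.randint(0, 2, (128,)), num_classes=2).float()
y = torch.randint(0, 2, (128,)).float()

logit = model(x).squeeze(-1)
loss = F.binary_cross_entropy_with_logits(logit, y) + lam * fairret_dp(logit, s)
loss.backward()
opt.step()
opt.zero_grad()
```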
In this paper, we implement two types of FAIRRETs: FAIRRETs that directly penalize the violation of fairness constraints and FAIRRETs that minimize the distance between a model and its projection onto the set of fair models. These FAIRRETs make use of linear-fractional statistics (Celis et al., 2019), which support a wider range of fairness notions than the exclusively linear statistics typically considered in literature (Zafar et al., 2019; Agarwal et al., 2018; Alghamdi et al., 2020). Moreover, our framework generalizes to the simultaneous handling of multiple sensitive traits and (a weaker form of) fairness with respect to continuous sensitive variables.
We visualize the gradients of the proposed FAIRRETs and evaluate their empirical performance in enforcing fairness notions compared to baselines. We infer that fairness notions with linear-fractional statistics are far harder to achieve than those with linear statistics, though the latter are far more popularly studied in literature.
The most straightforward and popular approach to fairness regularization terms is to directly penalize the violation of the fairness constraint (Zemel et al., 2013; Padala and Gujar, 2021; Wick et al., 2019), which we formalize as a FAIRRET. We also take inspiration from postprocessing methods that project classifiers onto the set of fair classifiers (Alghamdi et al., 2020; Wei et al., 2020) by penalizing the cost of this projection (Buyl and De Bie, 2021).
Our framework makes extensive use of the observation by Celis et al. (2019) that many fairness definitions can be expressed as a parity between linear-fractional statistics. They propose a meta-algorithm to generically find optimal classifiers that satisfy this constraint. Instead, we employ a simpler (yet sufficiently expressive) linear-fractional form and propose a novel algorithm to use them in the construction of linear constraints such that a meta-algorithm is not necessary.
Popular fairness toolkits such as Fairlearn (Bird et al., 2020) and AIF360 (Bellamy et al., 2018) expect the underlying model in the form of scikit-learn Estimators that can be retrained at will in fairness meta-algorithms. Instead, our proposed FAIRRETs act as a loss term that can simply be added within a training step. The aforementioned toolkits have some integration with automatic differentiation libraries in adversarial fairness approaches (Zhang et al., 2018), yet these still require full control over the training process and lack generality in the fairness notions they can enforce.
Two PyTorch-specific projects with similar goals as our paper are FairTorch (Masashi, 2020) and the Fair Fairness Benchmark (FFB) (Han et al., 2023). However, neither presents a formal framework, and both only support a limited range of fairness definitions.
Fairness in Binary Classification
In fair binary classification, we are provided with random variables $(X, S, Y)$ with $X \in \mathbb{R}^{d_x}$ the $d_x$-dimensional feature vector of an individual, $S \in \mathbb{R}^{d_s}$ their $d_s$-dimensional sensitive feature vector, and $Y \in \{0, 1\}$ the binary output label. Any expectations in the rest of the paper are taken over the joint distribution of these random variables $(X, S, Y)$.
The goal is to learn a classifier $f$ such that its predictions $f(X)$ match $Y$ while avoiding discrimination with respect to $S$. In this section, we will assume $f$ directly provides binary decisions, i.e. $f : \mathbb{R}^{d_x} \to \{0, 1\}$, as this is expected in traditional formalizations of fairness. However, since such 'hard' classifiers are not differentiable, we will instead be learning probabilistic classifiers in Sec. 3. Further note that our definition of sensitive features $S$ as real-valued and $d_s$-dimensional vectors is a generalization of typical fairness definitions, which assume a categorical (or binary) domain for sensitive features (Verma and Rubin, 2018). We will one-hot encode such categorical traits, e.g. by encoding 'white' or 'non-white' as the vectors $S = (1, 0)^\top$ and $S = (0, 1)^\top$ respectively. Our generalization allows us to take multiple non-exclusive sensitive traits into account by mapping them to different values $S_k$ in the same vector $S$ for $k \in [d_s] = \{0, \dots, d_s - 1\}$. Additionally, by letting $S_k \in \mathbb{R}$, we allow soft specifications of identity rather than requiring hard discretization.
Partition Fairness
Though we will allow any feature vector $S \in \mathbb{R}^{d_s}$ in our framework, popular fairness definitions require every person to belong to exactly one demographic group. We call this partition fairness.
Definition 1. In partition fairness, $S$ is a one-hot encoding, i.e. $S_k \in \{0, 1\}$ and $\sum_{k \in [d_s]} S_k = 1$.
Example 1. A straightforward, popular definition in partition fairness is Demographic Parity (DP), also known as statistical parity (Dwork et al., 2012; Verma and Rubin, 2018). It enforces
$$\forall k \in [d_s] : P(f(X) = 1 \mid S_k = 1) = P(f(X) = 1),$$
which states that all groups ought to get positive predictions at the same rate (i.e. the overall rate).
Let $\gamma(k; f) \triangleq \frac{\mathbb{E}[S_k f(X)]}{\mathbb{E}[S_k]}$. It is easily shown that $\gamma(k; f) = P(f(X) = 1 \mid S_k = 1)$. Thus also $P(f(X) = 1 \mid S_k = 1) = P(f(X) = 1) \iff \gamma(k; f) = \mathbb{E}[f(X)]$.
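As an illustration, this statistic can be estimated on a batch with a few tensor operations; the sketch below uses soft scores $f(X) \in [0, 1]$ so the estimate stays differentiable, and its names are our own.

```python
import torch

def dp_statistic(scores: torch.Tensor, sens: torch.Tensor):
    # scores: (n,) predictions f(X); sens: (n, d_s) sensitive feature vectors
    per_group = (sens.T @ scores) / sens.sum(dim=0)  # gamma(k; f) = E[S_k f(X)] / E[S_k]
    overall = scores.mean()                          # E[f(X)]
    return per_group, overall  # DP holds when every entry equals `overall`
```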
In Example 1, fairness is formalized by requiring a statistic to be equal across groups. This principle can be generalized to a wide class of parity-based fairness notions. In particular, we consider those expressed through linear-fractional statistics (Celis et al., 2019).

Table 1: The $\alpha$ and $\beta$ functions of several fairness notions with linear-fractional statistics, where $c(X)$ denotes a function of the features used for conditioning.

| Fairness notion | $\alpha_0$ | $\beta_0$ | $\alpha_1$ | $\beta_1$ |
|---|---|---|---|---|
| Demographic Parity (Dwork et al., 2012) | 0 | 1 | 1 | 0 |
| Conditional Demographic Parity (Wachter et al., 2020) | 0 | $c(X)$ | $c(X)$ | 0 |
| Equal Opportunity (Hardt et al., 2016) | 0 | $Y$ | $Y$ | 0 |
| False Positive Parity (Hardt et al., 2016) | 0 | $1-Y$ | $1-Y$ | 0 |
| Predictive Parity (Chouldechova, 2017) | 0 | $Y$ | 0 | 1 |
| False Omission Parity | $Y$ | $-Y$ | 1 | $-1$ |
| Accuracy Equality (Berk et al., 2021) | $1-Y$ | $2Y-1$ | 1 | 0 |
| Treatment Equality (Berk et al., 2021) | $Y$ | $-Y$ | 0 | $1-Y$ |
Definition 2. A statistic $\gamma$ is linear-fractional if it takes the form

$$\gamma(k; f) = \frac{\mathbb{E}[S_k(\alpha_0(X, Y) + f(X)\,\beta_0(X, Y))]}{\mathbb{E}[S_k(\alpha_1(X, Y) + f(X)\,\beta_1(X, Y))]}$$

with $\alpha_0$, $\beta_0$, $\alpha_1$, and $\beta_1$ all functions that do not depend on $S$ or $f$. Let $\Gamma$ denote the set of all such statistics. Also, let

$$\gamma(f) \triangleq \frac{\mathbb{E}[\alpha_0(X, Y) + f(X)\,\beta_0(X, Y)]}{\mathbb{E}[\alpha_1(X, Y) + f(X)\,\beta_1(X, Y)]}$$
denote the overall statistic value without conditioning on $S$.

Definition 3. A fairness notion is expressed through a statistic $\gamma \in \Gamma$. The set $\mathcal{F}_\gamma$ of classifiers that adhere to the fairness notion is defined as
$$\mathcal{F}_\gamma \triangleq \big\{ f : \mathbb{R}^{d_x} \to \{0, 1\} \mid \forall k \in [d_s] : \gamma(k; f) = \gamma(f) \big\},$$
i.e. the statistic $\gamma(k; f)$ for each $k$ equals the overall statistic $\gamma(f)$.
Indeed, the DP fairness notion in Example 1 is expressed as a fairness notion as defined in Def. 3 with a linear-fractional statistic as defined in Def. 2. The same holds for the following notions.

Example 2. Equalized Opportunity (EO) (Hardt et al., 2016) only computes DP for the actual positives $Y = 1$. Its statistic $\gamma$ is thus the recall $P(f(X) = 1 \mid Y = 1, S_k = 1)$, i.e. $\gamma(k; f) = \frac{\mathbb{E}[S_k f(X) Y]}{\mathbb{E}[S_k Y]}$.

Example 3. Predictive Parity (PP) (Chouldechova, 2017) compares the precision $P(Y = 1 \mid f(X) = 1, S_k = 1)$, i.e. $\gamma(k; f) = \frac{\mathbb{E}[S_k f(X) Y]}{\mathbb{E}[S_k f(X)]}$.

Example 4. Treatment Equality (TE) (Berk et al., 2021) balances the ratios of false negatives over false positives, i.e. $\gamma(k; f) = \frac{\mathbb{E}[S_k (1 - f(X)) Y]}{\mathbb{E}[S_k f(X)(1 - Y)]}$. Unlike the other notions, its $\gamma$ is not a probability.

Table 1 summarizes the $\alpha$ and $\beta$ functions of several fairness notions (Verma and Rubin, 2018) with linear-fractional statistics. Their derivations are found in Appendix A.1.

Definition 4. A linear-fractional statistic $\gamma \in \Gamma$ is linear when $\beta_1(X, Y) \equiv 0$.
Let Γ L ⊂ Γ denote the set of all linear statistics.
Fairness notions with linear statistics 𝛾 ∈ Γ L are thus identified in Table 1 by checking the column for 𝛽 1 . Such notions are especially useful because the fairness constraint in Def. 3 is easily written as a linear constraint over classifier 𝑓 . In turn, this makes the set of fair classifiers F 𝛾 a convex set, which leads to convex optimization problems (Boyd and Vandenberghe, 2004). Thus, a constrained optimization of 𝑓 can be efficiently performed if the loss that is minimized is itself convex (Zafar et al., 2019).
However, fairness notions with linear-fractional statistics 𝛾 ∈ Γ \ Γ L do not directly lead to linear constraints in Def. 3. To facilitate optimization, we therefore propose to narrow the set of fair classifiers F 𝛾 to the subset where the statistics are all equal in a particular value 𝑐.
Definition 5. Fix a 𝑐 ∈ R. A 𝑐-fixed fairness notion is expressed through a linear-fractional statistic 𝛾 ∈ Γ such that the set F 𝛾 (𝑐) of classifiers that adhere to the fairness notion is defined as
F 𝛾 (𝑐) ≜ { 𝑓 : R 𝑑 𝑥 → {0, 1} | ∀𝑘 ∈ [𝑑 𝑠 ] : 𝛾(𝑘; 𝑓 ) = 𝑐 }.
Proposition 1. With 𝛾 ∈ Γ, the 𝑐-fixed fairness notion F 𝛾 (𝑐) enforces linear constraints: 𝛾(𝑘; 𝑓 ) = 𝑐 ⇐⇒ E[𝑆 𝑘 (𝛼(X, 𝑌 , 𝑐) + 𝑓 (X) 𝛽(X, 𝑌 , 𝑐))] = 0 where 𝛼(X, 𝑌 , 𝑐) = 𝛼 0 (X, 𝑌 ) − 𝑐𝛼 1 (X, 𝑌 ) and 𝛽(X, 𝑌 , 𝑐) = 𝛽 0 (X, 𝑌 ) − 𝑐𝛽 1 (X, 𝑌 ).
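As a small numeric check of Prop. 1 (with hypothetical data of our own), take Predictive Parity with 𝛼 0 = 0, 𝛽 0 = 𝑌 , 𝛼 1 = 0, 𝛽 1 = 1 from Table 1, so that 𝛽(X, 𝑌 , 𝑐) = 𝑌 − 𝑐 and the 𝑐-fixed constraint reads E[𝑆 𝑘 𝑓 (X)(𝑌 − 𝑐)] = 0:

import numpy as np

f_x = np.array([1, 0, 1, 1, 0, 1])   # hard decisions f(X)
y   = np.array([1, 0, 0, 1, 1, 1])   # labels Y
s_k = np.array([1, 1, 1, 0, 0, 0])   # one sensitive feature S_k

c = np.mean(s_k * f_x * y) / np.mean(s_k * f_x)   # gamma(k; f), the group precision
lhs = np.mean(s_k * f_x * (y - c))                # linear constraint of Prop. 1
print(np.isclose(lhs, 0.0))                       # True: the constraint holds at c = gamma(k; f)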
Using Prop. 1, we can still obtain linear constraints for fairness notions F 𝛾 with linear-fractional statistics 𝛾 ∈ Γ \ Γ L by considering their 𝑐-fixed variant F 𝛾 (𝑐) instead. This sacrifices a degree of freedom because statistics 𝛾(𝑘; 𝑓 ) are no longer allowed to be equal for any overall statistic 𝛾( 𝑓 ); they must now do so for the specific case where 𝛾( 𝑓 ) = 𝑐. However, there are 𝑐 values that still lead to interesting sets F 𝛾 (𝑐). In the FAIRRETs we propose, we take an unfair classifier ℎ and fix 𝑐 = 𝛾(ℎ) to construct the set of all fair classifiers F 𝛾 (𝛾(ℎ)) that would result from a fair redistribution of scores in ℎ over the sensitive groups.
Though Prop. 1 is inspired by Celis et al. (2019), our use of this result vastly differs. Instead of fixing the statistics to a single value 𝑐, they set many pairs of upper and lower bounds for each group's statistics, giving rise to as many optimization programs. They then propose a meta-algorithm that searches the best classifier over each of these programs. A meta-algorithm is not necessary in our framework, as we will allow 𝑐 to evolve during training. While we have no formal convergence guarantees for this approach, empirical results show it works well in practice.
Beyond Partition Fairness
Having firmly rooted our definitions in partition fairness (Def. 1), we now abandon its assumptions. First, we allow 𝑆 𝑘 ∈ R. Second, we extend to multiple sensitive features with S ∈ R 𝑑 𝑠 .
Continuous Sensitive Values
Admitting continuous values for someone's sensitive trait, i.e. 𝑆 𝑘 ∈ R, allows us to take naturally continuous features, such as age, into account. Also, it provides an opportunity for an imprecise specification of demographic group membership.
For instance, instead of exactly knowing the gender of an individual, we may only have a probability available, e.g. because it is noisily predicted by a third-party classifier, or to protect the individual's privacy. By allowing 𝑆 𝑘 ∈ (0, 1), the attribute could then express 'woman-ness' instead of a binary 'woman' or 'not woman'. Thus, we also allow individuals to themselves quantify how strongly they identify with a group, rather than requiring a binary membership.
Our notation already generalizes to non-binary values; they can simply be filled in for linear-fractional statistics 𝛾 ∈ Γ as defined in Def. 2. Fairness as formalized in Def. 3 can then still be enforced through 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 ).
Remark 1. Partition fairness constraints are derived from the ideal that a set of distinct groups are treated equally, as measured through a statistic. This does not directly apply for a non-binary S. For example, if there is only one, continuous sensitive variable (𝑆 0 ) = S such as the age of an individual, then we cannot compare 𝛾(0; 𝑓 ) to another group's statistics. Instead, 𝛾(0; 𝑓 ) must be compared to a value independent of S.
Enforcing 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 ) is then a sensible choice, as it satisfies key properties one can expect from a fairness measure. First, the constraint is met when 𝑆 𝑘 ≡ 𝑠, i.e. when 𝑆 𝑘 is a deterministic constant. Second, it holds if 𝑆 𝑘 has no linear influence on the nominator and denominator of 𝛾, i.e.
cov(𝑆 𝑘 , 𝛼 0 (X, 𝑌 ) + 𝑓 (X) 𝛽 0 (X, 𝑌 )) = cov(𝑆 𝑘 , 𝛼 1 (X, 𝑌 ) + 𝑓 (X) 𝛽 1 (X, 𝑌 )) = 0 =⇒ 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 )
For a full derivation of this result, we refer to Appendix B.1.
Multiple Axes of Discrimination
By allowing S ∈ R 𝑑 𝑠 , we support that S contains information about people from several sensitive traits, e.g. gender, ethnicity, and religion. Because these each form a possible axis of discrimination, we can 'sum' these sources of discrimination by combining the constraints.
For example, if pairs of sensitive features (𝑆 0 , 𝑆 1 ) and (𝑆 2 , 𝑆 3 ) each partition the dataset, then fairness requires both 𝛾(0; 𝑓 ) = 𝛾(1; 𝑓 ) = 𝛾( 𝑓 ) and 𝛾(2; 𝑓 ) = 𝛾(3; 𝑓 ) = 𝛾( 𝑓 ). Combined, these constraints make up the fairness definition in Def. 3. The use of one-hot notations for sensitive values thus already allows us to combine axes of discrimination for categorical sensitive traits.
Remark 2. An important limitation is that we only view fairness separately per axis of discrimination. Outside the partition fairness setting, this means that some intersections of sensitive groups, e.g. 'black woman', will not be represented in the constraints that enforce fairness with respect to 'black' and 'woman' separately (Kearns et al., 2018). A toy example is given in Appendix B.2.
Fairness Regularization Terms
The popular approach to modern machine learning is to construct pipelines consisting of modular, parameterized components that are differentiable from the objective to the input. We therefore use probabilistic classifier models
ℎ : R 𝑑 𝑥 → (0, 1) from now on, where decisions 𝑓 (X) are sampled from a Bernoulli distribution with parameter ℎ(X). Let H denote the hypothesis class of these models.
Figure 1: The model ℎ was trained on the ACSIncome dataset without FAIRRET (i.e. 𝜆 = 0) and ends up with disparate positive rates 𝛾(0; ℎ) > 𝛾(ℎ) > 𝛾(1; ℎ) for the one-hot encoded sensitive variables (𝑆 0 , 𝑆 1 ). These should be brought closer to the overall positive rate 𝛾(ℎ). We show probability scores ℎ and the gradients³ of several FAIRRETs with respect to ℎ. The gradients are normalized by dividing them by their maximum absolute value per FAIRRET and per group. They are positive for samples with 𝑆 0 = 1, implying their scores should decrease, and vice versa for 𝑆 1 = 1.
Remark 3. Fairness statistics 𝛾(𝑘; ℎ) over the output of a probabilistic classifier ℎ only approximately verify their respective fairness notions, as these were only defined for hard classifiers with a binary output (Lohaus et al., 2020). In Appendix B.3, we discuss the impact of this approximation and how its fidelity can be traded off with the quality of the gradient of 𝛾(𝑘; ℎ) with respect to ℎ.
In binary classification, we minimize a loss L 𝑌 (ℎ) over the probabilistic classifier ℎ given output labels 𝑌 , e.g. the cross-entropy. In fair binary classification we additionally pursue ℎ ∈ F 𝛾 :
min ℎ ∈ F 𝛾 L 𝑌 (ℎ)    (1)
For linear-fractional statistics, the constraint is linear when considering the 𝑐-fixed variant of F 𝛾 (using Prop. 1). However, for non-convex models ℎ, the constrained optimization of ℎ will remain non-convex as well. In the general case, we thus relax ℎ ∈ F 𝛾 and instead incur a cost to ℎ ∉ F 𝛾 .
Definition 6. A fairness regularization term (FAIRRET) 𝑅 𝛾 (ℎ) : H → R ≥0 quantifies the unfairness of the model ℎ ∈ H with respect to the fairness notion defined through statistic 𝛾.
A FAIRRET is strict if it holds that ℎ ∈ F 𝛾 ⇐⇒ 𝑅 𝛾 (ℎ) = 0.
The objective in Eq. (1) is then relaxed as
min ℎ L 𝑌 (ℎ) + 𝜆𝑅 𝛾 (ℎ)    (2)
with 𝜆 a hyperparameter. The objective in Eq. (2) is equivalent to Eq. (1) for 𝜆 → ∞ if 𝑅 𝛾 is strict.
Remark 4. We call 𝑅 𝛾 a regularization term, yet its purpose is not to reduce model complexity or improve generalization performance, in contrast to traditional regularization in machine learning (Kukačka et al., 2017). Instead, we aim to limit the hypothesis class of ℎ to the set of fair classifiers.
In what follows, we introduce two archetypes of FAIRRETs: violation and projection. We visualize ∇ ℎ 𝑅 𝛾 for each FAIRRET in Fig. 1 with the positive rate statistic (thereby enforcing DP).
Violation FAIRRETs
To quantify ℎ ∉ F 𝛾 , we can start from the violation v(ℎ) of the constraint that defines F 𝛾 :
v 𝑘 (ℎ) = |𝛾(𝑘; ℎ) / 𝛾(ℎ) − 1|    (3)
with v : H → R 𝑑 𝑠 a vector-valued function with components v 𝑘 . Clearly, v(ℎ) = 0 ⇐⇒ ℎ ∈ F 𝛾 .
Note that v(ℎ) is normalized² by 𝛾(ℎ) such that a classifier cannot minimize v(ℎ) by uniformly downscaling its statistics without reducing relative differences between groups (Celis et al., 2019).
Definition 7. We define the Norm FAIRRET as 𝑅 𝛾 (ℎ) ≜ ∥v(ℎ)∥, with ∥•∥ a norm over R 𝑑 𝑠 .
Many variants of the Norm FAIRRET have been proposed, e.g. by Zemel et al. (2013), Padala and Gujar (2021), Wick et al. (2019) and Chuang and Mroueh (2020). However, fairness evaluation metrics often only consider the maximal violation. Hence, we propose the SmoothMax variant.
Definition 8. We define the SmoothMax FAIRRET as
𝑅 𝛾 (ℎ) ≜ log ∑ 𝑘 ∈ [𝑑 𝑠 ] exp(v 𝑘 (ℎ)) − log 𝑑 𝑠
Because the SmoothMax performs the log-sum-exp operation over the violation, it can be considered a smooth approximation of the maximum. We subtract log 𝑑 𝑠 to ensure the FAIRRET is strict.
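As a concrete illustration, the following minimal PyTorch sketch computes the violation of Eq. (3) and the SmoothMax of Def. 8 for the positive rate (DP) statistic. The helper names are our own and do not reflect the API of any existing implementation:

import math
import torch

def dp_violation(h, S):
    # h: (n,) tensor of probability scores; S: (n, d_s) float tensor of sensitive features.
    gamma_k = (S * h.unsqueeze(1)).mean(dim=0) / S.mean(dim=0)  # gamma(k; h) per group
    gamma_all = h.mean()                                        # overall gamma(h)
    return (gamma_k / gamma_all - 1).abs()                      # violation v(h) of Eq. (3)

def smoothmax_fairret(h, S):
    # SmoothMax (Def. 8): a smooth approximation of the maximal violation.
    v = dp_violation(h, S)
    return torch.logsumexp(v, dim=0) - math.log(S.shape[1])

Both functions are differentiable with respect to h, so the result can be used directly as a loss term.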
Generally, violation FAIRRETs can be characterized as functions of the violation v(ℎ). This lends them interpretability, but it also means that the gradient³ ∇ ℎ 𝑅 𝛾 decomposes as
∇ ℎ 𝑅 𝛾 = (𝜕v/𝜕ℎ) ⊤ ∇ v 𝑅 𝛾    (4)
with 𝜕v/𝜕ℎ the Jacobian³ of v(ℎ). The gradients of violation FAIRRETs thus only differ in the ∇ v 𝑅 𝛾 gradient. Hence, the Norm FAIRRET is excluded from Fig. 1 because its gradients equal those of SmoothMax after normalization. Figure 1 also suggests that violation FAIRRETs convey little information on how each individual ℎ(X) score should be modified. Instead, they merely direct scores to uniformly increase or decrease within each group.
Projection FAIRRETs
Recent postprocessing approaches to fairness redistribute all individual probability scores of a model ℎ(X) to a fair scores vector 𝑓 (X) with a minimal loss in predictive power. For example, Alghamdi et al. (2020) project the scores onto the fair set F 𝛾 as a postprocessing step. Yet, the cost of this projection can be seen as a quantification of unfairness that may be minimized as a FAIRRET during training.
Given a statistical divergence or distance 𝐷, we can generally define such a projection FAIRRET as
𝑅 𝛾 (ℎ) ≜ min 𝑓 ∈ F 𝛾 (𝛾 (ℎ)) E[𝐷 ( 𝑓 (X) ∥ ℎ(X))]    (5)
Importantly, we do not project ℎ onto the general fair set F 𝛾 , but on the 𝑐-fixed subset F 𝛾 (𝑐) with 𝑐 = 𝛾(ℎ). The 𝑐-fixing is done such that the projection only requires linear constraints for linear-fractional statistics (see Prop. 1). Equation (5) is then a convex optimization problem if we limit ourselves to a 𝐷 that is convex with respect to 𝑓 , which is the case for all projections discussed here. In particular, we 𝑐-fix to the overall statistic 𝛾(ℎ) of ℎ because this ensures ℎ can always be projected onto itself if it is already fair, as then ℎ ∈ F 𝛾 (𝛾(ℎ)).
Definition 9. The 𝐷 KL -projection uses the binary Kullback-Leibler divergence
𝐷 KL ( 𝑓 (X) ∥ ℎ(X)) ≜ 𝑓 (X) log( 𝑓 (X)/ℎ(X)) + (1 − 𝑓 (X)) log((1 − 𝑓 (X))/(1 − ℎ(X))).
The 𝐷 KL -divergence is both a Csiszar divergence and a Bregman divergence (Amari, 2009). Also, the cross-entropy error minimized in L 𝑌 (ℎ) equals 𝐷 KL (𝑌 ∥ ℎ(X)) up to a constant. The minimization of Eq. (2) thus comes down to simultaneously minimizing 𝐷 KL between ℎ(X) and the data 𝑌 , and between ℎ(X) and the closest 𝑓 ∈ F 𝛾 (𝛾(ℎ)) (Buyl and De Bie, 2021).
Definition 10. The 𝐷 JS -projection uses the binary Jensen-Shannon divergence
𝐷 JS ( 𝑓 (X) ∥ ℎ(X)) ≜ 1 2 𝐷 KL ( 𝑓 (X) ∥ 𝑚(X)) + 1 2 𝐷 KL (ℎ(X) ∥ 𝑚(X))
with 𝑚(X) = ( 𝑓 (X) + ℎ(X))/2.
Just like 𝐷 KL , the 𝐷 JS -divergence is a Csiszar divergence. However, the 𝐷 JS -divergence is symmetric with respect to its arguments 𝑓 and ℎ, which is not the case for the 𝐷 KL -divergence.
Definition 11. The 𝐷 SED -projection uses the squared Euclidean distance between the two points (1 − 𝑓 (X), 𝑓 (X)) and (1 − ℎ(X), ℎ(X)):
𝐷 SED ( 𝑓 (X) ∥ ℎ(X)) ≜ 2( 𝑓 (X) − ℎ(X))².
𝐷 SED is a Bregman divergence between the Bernoulli distributions with parameters 𝑓 (X) and ℎ(X).
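The three divergences are straightforward to express in code; a minimal PyTorch sketch (our own, with scores assumed to lie strictly in (0, 1)):

import torch

def d_kl(f, h):
    # Binary Kullback-Leibler divergence (Def. 9).
    return f * torch.log(f / h) + (1 - f) * torch.log((1 - f) / (1 - h))

def d_js(f, h):
    # Binary Jensen-Shannon divergence (Def. 10).
    m = 0.5 * (f + h)
    return 0.5 * d_kl(f, m) + 0.5 * d_kl(h, m)

def d_sed(f, h):
    # Squared Euclidean distance (Def. 11).
    return 2 * (f - h) ** 2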
In practice, we evaluate projection FAIRRETs 𝑅 𝛾 (ℎ) in two steps.
(i) 𝑓 * = arg min 𝑓 ∈ F 𝛾 (𝛾 (ℎ)) E[𝐷 ( 𝑓 (X) ∥ ℎ(X))]
(ii) 𝑅 𝛾 (ℎ) = E[𝐷 ( 𝑓 * (X) ∥ ℎ(X))]
While keeping ℎ fixed, step (i) computes the overall statistic 𝛾(ℎ) and then finds the projection 𝑓 * through constrained optimization. Subsequently, step (ii) keeps 𝑓 * fixed and computes E[𝐷 ( 𝑓 * (X) ∥ ℎ(X))] as a function of ℎ, which we use to compute the gradient with respect to ℎ. This gradient differs from the actual gradient of the optimization as a function of ℎ in Eq. (5), because the latter would require us to treat 𝑓 * as a function of ℎ. However, by treating 𝑓 * as fixed instead (without backpropagating through it), we significantly simplify the FAIRRET's implementation. The optimization in step (i) can then be solved generically using specialized libraries such as cvxpy (Agrawal et al., 2018; Diamond and Boyd, 2016). In our experiments, we find that only 10 optimization steps are enough to get a reasonable approximation of the solution. We refer to Appendix C.2 for a discussion of this approximation and Appendix C.1 for a visualization of each projection 𝑓 *.
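The following sketch illustrates these two steps for the positive rate (DP) statistic, using cvxpy for the constrained optimization in step (i). The helper name kl_projection_fairret, the data handling, and the choice of solver are our own illustration and not the actual implementation:

import cvxpy as cp
import torch

def kl_projection_fairret(h, S):
    # Step (i): with h fixed, c-fix at c = gamma(h) and solve the convex projection.
    h_np = h.detach().numpy()                  # CPU tensors assumed
    S_np = S.detach().numpy()
    c = h_np.mean()                            # overall positive rate gamma(h)
    f = cp.Variable(len(h_np))
    # The sum of these two cp.kl_div terms equals the binary KL divergence of Def. 9,
    # because their affine parts (-f + h) and (f - h) cancel.
    objective = cp.Minimize(cp.sum(cp.kl_div(f, h_np) + cp.kl_div(1 - f, 1 - h_np)))
    # Linear c-fixed DP constraints from Prop. 1: E[S_k (f - c)] = 0 for every k.
    cp.Problem(objective, [S_np.T @ (f - c) == 0]).solve(solver=cp.SCS)

    # Step (ii): with f* fixed (no backpropagation through the solver),
    # evaluate E[D_KL(f* || h)] as a differentiable function of h.
    f_star = torch.as_tensor(f.value, dtype=h.dtype).clamp(1e-6, 1 - 1e-6)
    div = f_star * torch.log(f_star / h) + (1 - f_star) * torch.log((1 - f_star) / (1 - h))
    return div.mean()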
Figure 1 shows that the gradients of the projection FAIRRETs increase with higher values of ℎ. We hypothesize this occurs when 𝛾(𝑘; ℎ) > 𝛾(ℎ) because 𝛾(𝑘; ℎ) is more easily decreased by reducing higher ℎ values than lower ones. Conversely, when 𝛾(𝑘; ℎ) < 𝛾(ℎ), there is more to gain from increasing lower ℎ values than higher ones. The sharp bend of the gradients of the 𝐷 SED -projection is explained in Appendix C.1 through an analysis of the projected distributions.
Analysis
Proposition 2. All FAIRRETs presented in this paper (i.e. Def. 7, 8, 9, 10 and 11) are strict.
Hence, all proposed FAIRRETs can indeed be properly regarded as quantifications of unfairness. Proofs are provided in Appendix A.3.
Moreover, they are differentiable with respect to ℎ. Violation FAIRRETs owe this to the differentiability of 𝛾 and projection FAIRRETs to the differentiability of 𝐷. Hence, FAIRRETs are easily implemented with an automatic differentiation library like PyTorch. Their computational overhead is unaffected by the complexity of the parameters 𝜽 of the model ℎ, as the gradients
∇ 𝜽 L 𝑌 = (𝜕ℎ/𝜕𝜽) ⊤ ∇ ℎ L 𝑌 and ∇ 𝜽 𝑅 𝛾 = (𝜕ℎ/𝜕𝜽) ⊤ ∇ ℎ 𝑅 𝛾
of both loss functions in Eq. (2) share the computation of the Jacobian 𝜕ℎ/𝜕𝜽. It is common to minimize L 𝑌 using mini-batches; the same batches can be used to minimize 𝑅 𝛾 . Indeed, this is done in our experiments. Yet, though this makes FAIRRETs more scalable, insufficient batch sizes will lead to poor approximations of the statistics 𝛾. Clearly, the mean violation v(ℎ) in Eq. (3) computed over mini-batches is not an unbiased estimate of the actual violation computed over all data. We thus report the mean SmoothMax loss for increasing batch sizes in Appendix C.3.
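For instance, a training loop minimizing the relaxed objective of Eq. (2) on mini-batches could look like the sketch below, assuming the hypothetical smoothmax_fairret helper sketched in Sec. 3.1 and a data loader that also yields the sensitive features:

import torch

def train_epoch(model, loader, optimizer, lam):
    for X, y, S in loader:                 # batches must represent all groups in S
        h = model(X).squeeze(-1)           # probability scores h(X) in (0, 1)
        bce = torch.nn.functional.binary_cross_entropy(h, y)
        loss = bce + lam * smoothmax_fairret(h, S)   # Eq. (2)
        optimizer.zero_grad()
        loss.backward()                    # both terms share the Jacobian of h
        optimizer.step()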
Experiments
Setup
Experiments were conducted on the Bank (Moro et al., 2014), CreditCard (Yeh and Lien, 2009), LawSchool⁴, and ACSIncome (Ding et al., 2021) datasets. Each has multiple sensitive features, including some continuous. The classifier ℎ was a fully connected neural net with hidden layers of sizes [256, 128, 32] followed by a sigmoid and did not take sensitive features S as input. We trained with all FAIRRETs discussed in Sec. 3 but only report results of Norm, 𝐷 JS -projection and 𝐷 SED -projection in Appendix C.4 to avoid clutter here. The remaining FAIRRETs, SmoothMax and 𝐷 KL -projection, were representative for their archetype. These are compared against three baselines implemented in the Fair Fairness Benchmark (FFB) by Han et al. (2023), as their implementation provides these baselines as loss terms in idiomatic PyTorch. They are PRemover (Kamishima et al., 2012), HSIC (Pérez-Suay et al., 2017), and AdvDebias (Adel et al., 2019) (where the reverse of the adversary's loss is the fairness loss term). In contrast to the FAIRRET implementations, they only accept a single, categorical sensitive attribute. Each FAIRRET and FFB fairness loss was added to the cross-entropy loss according to Eq. (2) in a separate training run for a range of strengths 𝜆 > 0.
We measured fairness over the four statistics in Table 1 that relate to Demographic Parity (DP), Equal Opportunity (EO), Predictive Parity (PP), and Treatment Equality (TE) respectively. Violation of each fairness notion is computed as max 𝑘 v 𝑘 (ℎ) (see Eq. (3)). Each FAIRRET was minimized with respect to each 𝛾 in a separate training run (and only the optimized violation is reported). The three FFB baselines only consider one fairness notion, which is to maximize independence between the model's output and the sensitive attributes. Their violation is reported for each statistic 𝛾.
In summary, there was an experiment run for each dataset, fairness method, fairness strength 𝜆, and statistic 𝛾 (except for the FFB baselines). Finally, we also use the Unfair baseline with 𝜆 = 0. Each of these combinations was repeated across 10 random seeds, each with a different train/test split.
Appendix D provides further details on the experiment setup, i.e. the datasets, hyperparameters, the baselines' implementation, the computation of the confidence ellipses, and runtimes.
Results
Test set results are visualized in Fig. 2; train set results are found in Appendix C.5 (and display the same trends). We separately discuss the notions with linear and with linear-fractional statistics.
For DP and EO, which have linear statistics, both the SmoothMax and 𝐷 KL -projection FAIRRETs are effectively used to minimize the fairness violation with respect to multiple sensitive attributes while minimally suffering a loss in AUROC scores, though the projection FAIRRET clearly performs better than the violation-based SmoothMax FAIRRET. As expected, the FFB baselines perform worse than the methods implemented in our FAIRRET framework, since they cannot be configured to optimize the same general range of fairness definitions. Also, their implementation only minimizes bias with respect to a single sensitive attribute, and so they are oblivious to some of the components in S that the violation in Fig. 2 measures. We report their violations on this single attribute in Appendix C.6, though the FAIRRETs still outperform them there as well.
For PP and TE, which have linear-fractional statistics, all methods appear to struggle far more. SmoothMax is most consistent and never makes the fairness violation worse, yet the 𝐷 KL -projection in most cases makes both the fairness violation and the AUROC worse. The same occurs for the FFB baselines. To some extent, this can be attributed to overfitting, as SmoothMax leads to a significantly more consistent reduction of the train set fairness violation than the test set (see Appendix C.5). Still, non-linear fairness notions are clearly harder to optimize, which aligns with the results of Celis et al. (2019). Though Barocas et al. (2019) conclude that sufficiency (a notion related to PP) 'often comes for free', further work is needed to better understand how such notions can be consistently achieved.
Conclusion
The FAIRRET framework allows for a wide range of fairness definitions and tools by comparing linear-fractional statistics for each sensitive feature.We implement several FAIRRETs and show how they are easily integrated in existing machine learning pipelines utilizing automatic differentiation.
Empirically, violation FAIRRETs like SmoothMax consistently lead to trade-offs between fairness and AUROC, though the more involved projection FAIRRETs like the 𝐷 KL -projection clearly outperform them on fairness definitions with linear statistics. However, all methods struggle with fairness notions that have linear-fractional statistics like PP and TE, which have mostly been ignored in prior work. This signals a lucrative direction for future research.
A Proofs
A.1 Table 1
Here, we show how the 𝛼 0 , 𝛽 0 , 𝛼 1 , and 𝛽 1 functions are derived for each of the fairness notions in Table 1. These fairness notions are typically defined in the partition fairness setting, and so we will make the same assumptions here (see Def. 1). Before discussing each fairness notion separately, we make some general observations.
First, there may be a concern that our fairness constraint, i.e. ∀𝑘 ∈ [𝑑 𝑠 ] : 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 ), requires each group's statistic 𝛾(𝑘; 𝑓 ) to equal the overall statistic 𝛾( 𝑓 ), whereas popular surveys of fairness definitions such as by Verma and Rubin (2018) instead only require that each group's statistic is the same, i.e.
∀𝑘, 𝑙 ∈ [𝑑 𝑠 ] : 𝛾(𝑘; 𝑓 ) = 𝛾(𝑙; 𝑓 )
However, these constraints are equivalent.
Proposition 3. In partition fairness, with 𝛾 ∈ Γ: ∀𝑘 ∈ [𝑑 𝑠 ] : 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 ) ⇐⇒ ∀𝑘, 𝑙 ∈ [𝑑 𝑠 ] : 𝛾(𝑘; 𝑓 ) = 𝛾(𝑙; 𝑓 )
Proof. The forward ( =⇒ ) relation is trivial: if all group statistics 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 ), then necessarily they are also all equal. The reverse ( ⇐= ) is less straightforward. Clearly,
∀𝑘, 𝑙 ∈ [𝑑 𝑠 ] : 𝛾(𝑘; 𝑓 ) = 𝛾(𝑙; 𝑓 ) ⇐⇒ ∃𝑐 ∈ R : ∀𝑘 ∈ [𝑑 𝑠 ] : 𝛾(𝑘; 𝑓 ) = 𝑐    (6)
It thus suffices to show 𝑐 ≡ 𝛾( 𝑓 ) in such cases. Let 𝑔 0 (X, 𝑌 ) = 𝛼 0 (X, 𝑌 ) + 𝑓 (X) 𝛽 0 (X, 𝑌 ) and 𝑔 1 (X, 𝑌 ) = 𝛼 1 (X, 𝑌 ) + 𝑓 (X) 𝛽 1 (X, 𝑌 ). Then
𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 𝑔 0 (X, 𝑌 )] / E[𝑆 𝑘 𝑔 1 (X, 𝑌 )]
= ∑ 𝑋,𝑌 ,𝑆 𝑆 𝑘 𝑔 0 (X, 𝑌 )𝑃(𝑋, 𝑌 , 𝑆) / ∑ 𝑋,𝑌 ,𝑆 𝑆 𝑘 𝑔 1 (X, 𝑌 )𝑃(𝑋, 𝑌 , 𝑆)
= ∑ 𝑋,𝑌 𝑔 0 (X, 𝑌 )𝑃(𝑋, 𝑌 , 𝑆 𝑘 = 1) / ∑ 𝑋,𝑌 𝑔 1 (X, 𝑌 )𝑃(𝑋, 𝑌 , 𝑆 𝑘 = 1)
= ∑ 𝑋,𝑌 𝑔 0 (X, 𝑌 )𝑃(𝑋, 𝑌 | 𝑆 𝑘 = 1) / ∑ 𝑋,𝑌 𝑔 1 (X, 𝑌 )𝑃(𝑋, 𝑌 | 𝑆 𝑘 = 1) = E[𝑔 0 (X, 𝑌 ) | 𝑆 𝑘 = 1] / E[𝑔 1 (X, 𝑌 ) | 𝑆 𝑘 = 1]
where the third line used the partition fairness assumptions.
Thus, Eq. (6) also entails that E[𝑔 0 (X, 𝑌 ) | 𝑆 𝑘 = 1] = 𝑐 E[𝑔 1 (X, 𝑌 ) | 𝑆 𝑘 = 1].
By the law of total expectation, we now have
𝛾( 𝑓 ) = E[𝑔 0 (X, 𝑌 )] / E[𝑔 1 (X, 𝑌 )] = E[E[𝑔 0 (X, 𝑌 ) | 𝑆]] / E[E[𝑔 1 (X, 𝑌 ) | 𝑆]] = ∑ 𝑘 E[𝑔 0 (X, 𝑌 ) | 𝑆 𝑘 = 1]𝑃(𝑆 𝑘 = 1) / ∑ 𝑘 E[𝑔 1 (X, 𝑌 ) | 𝑆 𝑘 = 1]𝑃(𝑆 𝑘 = 1) = ∑ 𝑘 𝑐 E[𝑔 1 (X, 𝑌 ) | 𝑆 𝑘 = 1]𝑃(𝑆 𝑘 = 1) / ∑ 𝑘 E[𝑔 1 (X, 𝑌 ) | 𝑆 𝑘 = 1]𝑃(𝑆 𝑘 = 1) = 𝑐
where the third line again used partition fairness. We thus indeed always have 𝑐 = 𝛾( 𝑓 ). Applying Eq. (6) then completes the proof. □
A second observation used to derive Table 1 is that fairness notions tend to be defined over probabilities involving binary variables. To formulate fairness notions in terms of expectations, we can thus extensively use these binary variables as indicator functions. For example, the frequency of true positives 𝑃( 𝑓 (X) = 1, 𝑌 = 1) is easily counted as E[ 𝑓 (X)𝑌 ].
These observations can be applied to all fairness definitions presented by Verma and Rubin (2018) in their Sections 3.1 and 3.2 and we refer to their work for a detailed interpretation of each definition.
We now show how Table 1 can be constructed by presenting the statistic of each fairness definition and (a simplified form of) its 𝛾 function under our notation.
• demographic parity (a.k.a. statistical parity) (Dwork et al., 2012) was already discussed in Example 1. It requires equal positive rates 𝑃( 𝑓 (X) = 1 | 𝑆 𝑘 = 1), i.e. 𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 𝑓 (X)] / E[𝑆 𝑘 ].
• conditional demographic parity (a.k.a. conditional statistical parity) (Corbett-Davies et al., 2017; Wachter et al., 2020) conditions demographic parity on a part of the input. This is typically an input feature with respect to which we are allowed to discriminate. Let 𝜁 : R 𝑑 𝑥 → {0, 1} be an arbitrary function with a binary output. Then the fairness definition requires equal 𝑃( 𝑓 (X) = 1 | 𝜁 (X), 𝑆 𝑘 = 1), i.e. 𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 𝑓 (X)𝜁 (X)] / E[𝑆 𝑘 𝜁 (X)]. Note that this statistic is easily extended to also allow non-binary functions 𝜁.
• equal opportunity (Hardt et al., 2016) was already discussed in Example 2. It considers positive rates over only the positive samples (𝑌 = 1). It thus compares false negative rates, or equivalently true positive rates or recall 𝑃( 𝑓 (X) = 1 | 𝑌 = 1, 𝑆 𝑘 = 1), i.e. 𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 𝑓 (X)𝑌 ] / E[𝑆 𝑘 𝑌 ].
• false positive parity (a.k.a. predictive equality) (Hardt et al., 2016; Corbett-Davies et al., 2017) is similar to equal opportunity, but for the negative class. Hence, it compares true negative rates, or equivalently false positive rates 𝑃( 𝑓 (X) = 1 | 𝑌 = 0, 𝑆 𝑘 = 1), i.e. 𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 𝑓 (X)(1 − 𝑌 )] / E[𝑆 𝑘 (1 − 𝑌 )]. If combined with equal opportunity, it enforces equalized odds (Hardt et al., 2016).
• predictive parity (Chouldechova, 2017) was already discussed in Example 3. It compares the positive predictive value, a.k.a. the precision 𝑃(𝑌 = 1 | 𝑓 (X) = 1, 𝑆 𝑘 = 1), i.e. 𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 𝑓 (X)𝑌 ] / E[𝑆 𝑘 𝑓 (X)].
• false omission parity is similar to predictive parity, but for the negative class. It is not explicitly discussed in (Verma and Rubin, 2018), yet it clearly compares false omission rates 𝑃(𝑌 = 1 | 𝑓 (X) = 0, 𝑆 𝑘 = 1), i.e. 𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 (1 − 𝑓 (X))𝑌 ] / E[𝑆 𝑘 (1 − 𝑓 (X))].
• accuracy equality (Berk et al., 2021) compares accuracies 𝑃(𝑌 = 𝑓 (X) | 𝑆 𝑘 = 1) across groups.
Hence, it computes the relative amount of true positives and true negatives of each group, i.e. 𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 ( 𝑓 (X)𝑌 + (1 − 𝑓 (X))(1 − 𝑌 ))] / E[𝑆 𝑘 ].
• treatment equality (Berk et al., 2021) was already discussed in Example 4. It requires the fraction of false negatives over false positives to be equal (or vice versa), and therefore does not represent a probability. Its statistic is thus 𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 (1 − 𝑓 (X))𝑌 ] / E[𝑆 𝑘 𝑓 (X)(1 − 𝑌 )].
A.2 Proposition 1
Proof. Considering the definition of linear-fractional statistics 𝛾 ∈ Γ in Def. 2, we define the shorthand notations 𝛼(X, 𝑌 , 𝑐) = 𝛼 0 (X, 𝑌 ) − 𝑐𝛼 1 (X, 𝑌 ) and 𝛽(X, 𝑌 , 𝑐) = 𝛽 0 (X, 𝑌 ) − 𝑐𝛽 1 (X, 𝑌 ). It is then straightforward to see that
𝛾(𝑘; 𝑓 ) = 𝑐 ⇐⇒ E[𝑆 𝑘 (𝛼 0 (X, 𝑌 ) + 𝑓 (X) 𝛽 0 (X, 𝑌 ))] / E[𝑆 𝑘 (𝛼 1 (X, 𝑌 ) + 𝑓 (X) 𝛽 1 (X, 𝑌 ))] = 𝑐 ⇐⇒ E[𝑆 𝑘 (𝛼(X, 𝑌 , 𝑐) + 𝑓 (X) 𝛽(X, 𝑌 , 𝑐))] = 0
where the last step uses the linearity of the expectation operator E. The resulting constraint is linear with respect to 𝑓 since 𝛼 and 𝛽 are both functions of functions that do not depend on 𝑓 . □
A.3 Proposition 2
To show the strictness of the proposed FAIRRETs, we use a separate strategy for violation and projection FAIRRETs.
A.3.1 Violation FAIRRETs
Proof. Per Def. 3, it is easily seen that v(ℎ) = 0 ⇐⇒ ℎ ∈ F 𝛾 . This also holds in the case where 𝛾(ℎ) = 0, as we can define v 𝑘 (ℎ) = |𝛾(𝑘; ℎ)| there instead. Hence, we only need to show that 𝑅 𝛾 (ℎ) = 0 ⇐⇒ v(ℎ) = 0 to show the strictness of violation FAIRRETs 𝑅 𝛾 .
For the Norm FAIRRET, the strictness is obvious: a norm defined over a vector space is always assumed to equal zero only for the vector of zeros, i.e. 𝑅 𝛾 (ℎ) ≜ ∥v(ℎ)∥ = 0 ⇐⇒ v(ℎ) = 0.
For the SmoothMax FAIRRET, the strictness follows from the log 𝑑 𝑠 adjustment term:
𝑅 𝛾 (ℎ) ≜ log ∑ 𝑘 ∈ [𝑑 𝑠 ] exp(v 𝑘 (ℎ)) − log 𝑑 𝑠 = 0 ⇐⇒ ∑ 𝑘 ∈ [𝑑 𝑠 ] exp(v 𝑘 (ℎ)) = 𝑑 𝑠 ⇐⇒ v(ℎ) = 0
where the last step follows from the non-negativity of v. □
A.3.2 Projection FAIRRETs
Proof. A projection FAIRRET is easily shown to be strict if its divergence measure 𝐷 satisfies
𝐷 ( 𝑓 (X) ∥ ℎ(X)) ≥ 0 ∧ (𝐷 ( 𝑓 (X) ∥ ℎ(X)) = 0 ⇐⇒ 𝑓 (X) = ℎ(X))    (7)
Indeed, if ℎ ∈ F 𝛾 , then also ℎ ∈ F 𝛾 (𝛾(ℎ)) by construction. We can then choose 𝑓 = ℎ and use Eq. (7) to get 𝐷 ( 𝑓 (X) ∥ ℎ(X)) = 0. This shows ℎ ∈ F 𝛾 =⇒ 𝑅 𝛾 (ℎ) = 0.
Conversely, the assumed properties of 𝐷 in Eq. (7) entail
𝑅 𝛾 (ℎ) ≜ min 𝑓 ∈ F 𝛾 (𝛾 (ℎ)) E[𝐷 ( 𝑓 (X) ∥ ℎ(X))] = 0 ⇐⇒ ∃ 𝑓 ∈ F 𝛾 (𝛾(ℎ)) : ∀X ∈ R 𝑑 𝑥 : 𝑓 (X) = ℎ(X) ⇐⇒ ℎ ∈ F 𝛾 (𝛾(ℎ))
where the second line assumes⁵ the support of X equals R 𝑑 𝑥 . Since F 𝛾 (𝛾(ℎ)) ⊂ F 𝛾 , we have 𝑅 𝛾 (ℎ) = 0 =⇒ ℎ ∈ F 𝛾 .
To show the strictness of the projection FAIRRETs, it thus suffices to show that the divergences in Def. 9, 10 and 11 all satisfy the requirements in Eq. (7).
It is a well-known property of the Kullback-Leibler divergence 𝐷 KL that Eq. (7) holds (Csiszar, 1975).
The properties of the 𝐷 KL -divergence also trivially imply non-negativity for the Jensen-Shannon divergence. Moreover, it entails
𝐷 JS ( 𝑓 (X) ∥ ℎ(X)) ≜ (𝐷 KL ( 𝑓 (X) ∥ 𝑚(X)) + 𝐷 KL (ℎ(X) ∥ 𝑚(X))) / 2 = 0 ⇐⇒ 𝐷 KL ( 𝑓 (X) ∥ 𝑚(X)) = 𝐷 KL (ℎ(X) ∥ 𝑚(X)) = 0 ⇐⇒ 𝑓 (X) = 𝑚(X) = ℎ(X)
which means Eq. (7) indeed also holds for 𝐷 JS .
Finally, it is clear that 𝐷 SED satisfies Eq. (7), as the Euclidean distance is non-negative and only zero for overlapping points. □
B Additional Discussion
B.1 Addendum to Remark 1
Observe that, for 𝛾 ∈ Γ:
𝛾(𝑘; 𝑓 ) ≜ E[𝑆 𝑘 (𝛼 0 (X, 𝑌 ) + 𝑓 (X) 𝛽 0 (X, 𝑌 ))] / E[𝑆 𝑘 (𝛼 1 (X, 𝑌 ) + 𝑓 (X) 𝛽 1 (X, 𝑌 ))]
and
𝛾( 𝑓 ) ≜ E[𝛼 0 (X, 𝑌 ) + 𝑓 (X) 𝛽 0 (X, 𝑌 )] / E[𝛼 1 (X, 𝑌 ) + 𝑓 (X) 𝛽 1 (X, 𝑌 )]
In Remark 1, we (non-exhaustively) mention two cases for 𝑆 𝑘 ∈ R where 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 ) holds:
1. 𝑆 𝑘 is deterministic, i.e. 𝑆 𝑘 ≡ 𝑠 for a constant 𝑠 ∈ R. Due to the linearity of the expectation operator, we trivially have 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 ). Though this case is degenerate, we should indeed expect a fairness criterion to hold if the random variable 𝑆 𝑘 expresses no information about an individual (and as such cannot be grounds for discrimination).
2. 𝑆 𝑘 has no linear influence on the nominator or denominator of 𝛾, i.e.
cov(𝑆 𝑘 , 𝛼 0 (X, 𝑌 ) + 𝑓 (X) 𝛽 0 (X, 𝑌 )) = cov(𝑆 𝑘 , 𝛼 1 (X, 𝑌 ) + 𝑓 (X) 𝛽 1 (X, 𝑌 )) = 0.    (8)
Indeed, using the definition of the covariance operator, we have
𝛾(𝑘; 𝑓 ) = (cov(𝑆 𝑘 , 𝛼 0 (X, 𝑌 ) + 𝑓 (X) 𝛽 0 (X, 𝑌 )) + E[𝑆 𝑘 ] E[𝛼 0 (X, 𝑌 ) + 𝑓 (X) 𝛽 0 (X, 𝑌 )]) / (cov(𝑆 𝑘 , 𝛼 1 (X, 𝑌 ) + 𝑓 (X) 𝛽 1 (X, 𝑌 )) + E[𝑆 𝑘 ] E[𝛼 1 (X, 𝑌 ) + 𝑓 (X) 𝛽 1 (X, 𝑌 )]).
Unfortunately, Eq. (8) is only a sufficient condition for 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 ) in general. Still, it becomes a necessary condition for the DP fairness notion (which always has cov(𝑆 𝑘 , 𝛼 1 (X, 𝑌 ) + 𝑓 (X) 𝛽 1 (X, 𝑌 )) = cov(𝑆 𝑘 , 1) = 0).
Yet, though we argue 𝛾(𝑘; 𝑓 ) = 𝛾( 𝑓 ) is sensible to enforce for continuous variables 𝑆 𝑘 ∈ R, it must be stressed that linear-fractional statistics check linear effects of 𝑆 𝑘 on 𝑓 (X). Higher-order moments will thus not be measured.
For example, if 𝑆 0 denotes 'man-ness' and 𝑆 1 denotes 'woman-ness', then individuals who identify with a non-binary gender may quantify their sensitive features as 𝑆 0 = 50% and 𝑆 1 = 50%. However, linear covariance statistics (like ours) will not consider specific discrimination directed at those with, e.g., 𝑆 1 = 50%. Instead, this will be taken into account as 'half as influential' compared to individuals who identify as 𝑆 1 = 100%.
B.2 Addendum to Remark 2
In Remark 2, we mention that intersections of demographic groups will not be considered in fairness constraints if partition fairness does not hold.
For example, assume we want to check DP fairness with statistic 𝛾(𝑘; 𝑓 ) = E[𝑆 𝑘 𝑓 (X)] / E[𝑆 𝑘 ] for four uniformly selected samples with scores (0.7, 0.3, 0.3, 0.3) given by 𝑓 , values (1, 1, 0, 0) for sensitive variable 𝑆 0 and values (1, 0, 0, 1) for sensitive variable 𝑆 1 . Then 𝛾(0; 𝑓 ) = 𝛾(1; 𝑓 ) = 0.5. However, E[𝑆 0 𝑆 1 𝑓 (X)] / E[𝑆 0 𝑆 1 ] = 0.7 and E[(1 − 𝑆 0 )(1 − 𝑆 1 ) 𝑓 (X)] / E[(1 − 𝑆 0 )(1 − 𝑆 1 )] = 0.3.
Here, the individual with sensitive feature vector S = (0, 0) ⊤ , i.e. at the intersection of 𝑆 0 = 0 and 𝑆 1 = 0, thus receives worse scores than the others.
B.3 Addendum to Remark 3
In our discussion of fairness definitions, we assume that we are enforcing constraints on classifiers 𝑓 : R 𝑑 𝑥 → {0, 1}, i.e. classifiers with a hard decision boundary. In practice, however, it is common to base decisions off of a parameterized regressor 𝑟 : R 𝑑 𝑥 → R, e.g. with a neural network. For example, we can then deterministically collect decisions from 𝑟 as 𝑓 (X) = 1 𝑟 (X)>0 with 1 the indicator function. However, such a thresholding function has a gradient of zero with respect to 𝑟 and a discontinuity in 𝑟 (X) = 0.
Several works have investigated directly using 𝑟 (X) in fairness constraints (Zafar et al., 2019). For example in DP fairness, directly computing 𝛾(𝑘; ℎ) will already enforce that the mean scores ℎ(X) should be equal across groups. Though this leads to interesting convex optimization problems for a fair ℎ, it has been noted that this only observes a relaxed form of actual fairness constraints (Lohaus et al., 2020).
A middle ground is to construct a probabilistic classifier 𝑓 as a Bernoulli distribution with parameter ℎ(X) = 𝜎(𝑟 (X)) where 𝜎 denotes the logistic function 𝜎(𝑥) = (1 + exp(−𝑥)) −1 . Hence, 𝑓 (X) is fully specified through 𝑃( 𝑓 (X) = 1 | X) = ℎ(X). In the definition of 𝛾, we can then simply replace the classification function 𝑓 in the expectation by the scoring model ℎ, because
E[E[ 𝑓 (X) | X]] = E[ℎ(X)].
Still, the fact that a probabilistic classifier is now not only dependent on X but also on the randomness involved in sampling 𝑓 (X) introduces some noise into the decision process that we may wish to avoid. There is also the danger that in practice, real-world decisions would still be made according to a hard threshold on the scores ℎ(X), e.g. on 𝑓 (X) = 1 ℎ(X)>0.5 .
If this becomes an issue, then we could make use of recent work (Padh et al., 2021; Bendekgey and Sudderth) that has investigated surrogate functions that are better approximations of the thresholding function 1 𝑥>0 than the logistic function 𝜎. For example, a suitable surrogate is the scaled logistic function 𝜎 𝑠 (𝑥) ≜ 𝜎(𝑠𝑥), since lim 𝑠→∞ 𝜎 𝑠 (𝑥) = 1 𝑥>0 for 𝑥 ≠ 0. However, note also that lim 𝑠→∞ ∇𝜎 𝑠 (𝑥) = 0 for 𝑥 ≠ 0. Surrogate functions like 𝜎 𝑠 thus allow us to trade off the hardness of the classification with the quality of its gradient.
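A quick numeric illustration of this trade-off (a toy example of our own):

import torch

x = torch.tensor([-0.5, -0.1, 0.1, 0.5], requires_grad=True)
for s in (1.0, 10.0, 100.0):
    y = torch.sigmoid(s * x)         # scaled logistic sigma_s(x)
    g, = torch.autograd.grad(y.sum(), x)
    # As s grows, y approaches the step function 1[x > 0],
    # while the gradients away from x = 0 vanish.
    print(s, y.detach().numpy().round(4), g.numpy().round(4))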
C Additional Results
C.1 Projection Visualization
In Fig. 3, we visualize the projected distributions for the example in Fig. 1. The 𝐷 KL - and 𝐷 JS -projections appear to transform the shape of the probability distribution, whereas the 𝐷 SED -projection appears to linearly shift the probabilities within a group.
For the latter, this appears to form a problem, because its behavior leads to a large gap in densities between ℎ and the projection 𝑓 * at the edges of the [0, 1] interval. For example for 𝑆 0 = 1, the scores are linearly shifted to the left (lower probability scores), meaning that no scores are left in the high probability range, and too many are allocated to the low probability range. The 'blocking' of this shift on the low end for 𝑆 0 = 1 and on the high end for 𝑆 1 = 1 is in fact the reason for the 'crack' in the gradients of the 𝐷 SED -projection in Fig. 1, since the probabilities of ℎ cannot be projected beyond these edges.
C.2 Approximate Projection
In Sec. 3.2, we suggest that the convex optimization of the projections 𝑓 * can already converge quite well after only 10 iterations. By placing such a limit, we can significantly reduce the computational cost of 𝑅 𝛾 (ℎ), though 𝑅 𝛾 (ℎ) will be an overestimate (as projections 𝑓 * with a smaller divergence to ℎ may not have been found yet).
In Fig. 4, we observe that training with at most 10 iterations per projection is indeed enough to minimize the DP violation while also being much faster to compute. Unexpectedly, we even observe the DP violation to be slightly lower after training with at most 10 iterations than with more than 10, which could indicate that fewer iterations can have a positive effect on the training process.
C.3 Mini-batching the FAIRRET
In Sec. 3.3, we propose that FAIRRETs can be minimized using the same mini-batches that we use to minimize the cross-entropy loss L 𝑌 . However, batches clearly need to be large enough to adequately represent the imbalances for all sensitive features S. Hence, we report an experiment in Fig. 5 where we take an unfair classifier ℎ and compute the mean SmoothMax loss over batch sizes with increasing granularity.
From a batch size of approximately 1024, the mean loss over all batches already closely matches the SmoothMax loss computed over the full test set (39 133 samples). Note that for very small batch sizes, the loss becomes mostly meaningless, as batches may not even contain members of each protected group. To be on the safe side, we use a batch size of 4096 in all our experiments.
C.4 Other FAIRRET Results
The test set results on the Norm, 𝐷 JS -projection and 𝐷 SED -projection FAIRRETs are shown in Fig. 6. The Norm FAIRRET follows a very similar gradient as the SmoothMax FAIRRET, whereas the 𝐷 JS -projection and 𝐷 SED -projection give similar results as the 𝐷 KL -projection.
C.5 Train Set Results
In Fig. 7, we show the train set results for our main experiments. These follow the same trends as the test set results, though with higher AUROC scores and lower fairness violations. The most important difference is the performance of the SmoothMax FAIRRET, e.g. on the CreditCard dataset, which obtains a significantly lower fairness violation for the difficult PP and TE fairness notions than it did on the test set.
C.6 Fairness Violations for a Single Sensitive Attribute
The FFB implementation (and indeed most fairness tools) only mitigates bias with respect to a single, categorical sensitive feature; for each of our datasets (see Sec. D.1), one such feature was selected. The fairness violations for these single features are shown in Fig. 8. Importantly, the FAIRRET experiments were redone by training with only this single feature in mind when computing the FAIRRET loss. Even for the DP fairness notion, the FAIRRETs remain competitive and the 𝐷 KL -projection obtains the best trade-off.
D Additional Experiment Details
D.1 Datasets
We used four different datasets for the evaluation of our framework, namely the Bank Marketing dataset (Moro et al., 2014), the Credit Card clients dataset (Yeh and Lien, 2009), the Law School Admissions dataset⁶ and the ACSIncome dataset from Folktables (Ding et al., 2021). Their main advantages are their range of sensitive attributes, their recency and their curation quality.
We deviate from the normal practice of using the German Credit and Adult data sets as advised by Fabris et al. (2022) due to the "contrived prediction tasks, noisy data, severe coding mistakes, limitations in encoding sensitive attributes, and age" of these datasets.
D.1.1 Bank
The Bank marketing dataset was collected by a Portuguese bank between May 2008 and June 2013. The dataset includes all information the bank has on a client, information about the previous attempt to get the client to subscribe to a long-term deposit, some economic information, and whether the client decided to go for a long-term deposit following the telephone call.
The dataset itself contains the information of 41 108 telephone contacts. Important to note for this dataset is that the outcomes are severely unbalanced, with only 4 640 outcomes belonging to the positive class.
The sensitive attributes used as such in this dataset were the age and marital status of a person. A person's education was not included as a sensitive attribute, as discriminating on that value could arguably be justifiable in this situation.
During preprocessing, we dropped five features: three relating to the outcome of the previous marketing campaign, as often there was no record of a previous contact, and two features relating to the date when the current marketing call took place. If the value of certain features was unknown, then it was mapped onto the 'False' value, except when the marital status of the person was unknown. In that instance the row was simply dropped, since this only occurred for 80 samples in the entire dataset.
D.1.2 CreditCard
The Credit Card clients dataset contains data from a bank in Taiwan in October 2005. The goal is to predict whether a client would default on their credit in the next month. The features include the allocated credit of the client, personal information, the status of previous payments, amounts of previous payments and the amounts on previous bill statements.
In total the dataset contains 30 000 records. In 5 529 of these instances the client defaulted in the next month.
This dataset contains a wide range of sensitive attributes, namely sex, education, marital status and age.
A total of 522 samples were removed from the dataset because they contained values unspecified in the documentation of the data or because their education status fell under the category 'others'. The latter was mainly done because only 123 samples were of this category, making it too small to maintain adequate statistical power for the fairness measure of the group.
D.1.3 LawSchool
The Law School admissions dataset contains information about whether a student was accepted to the law school, which we tried to predict in our experiments. The dataset also contains the student's race, gender, whether they live where they applied to law school, what college they applied to, the year they did this, and their LSAT and GPA scores.
Although this dataset is hand-curated by the SEAPHE project, it is still fairly sizeable with 65 535 samples. Only 24.84% of these samples had a positive outcome.
The dataset only contains two sensitive attributes: race and sex. This is the only dataset where age features are not available. Interestingly, the range of values for the race attribute is fairly balanced across all races, except for 'white', which is overrepresented with 39 742 samples.
In the preprocessing step, only the year of the application was not used as a feature.
D.1.4 ACSIncome
The ACSIncome dataset has the familiar goal of predicting whether an individual's annual income is above $50 000. The source of the data is the US census study. This only includes individuals above the age of 16, who indicate having worked at least one hour per week and reported an income above $100. A myriad of personal information is available in the dataset.
The dataset is significantly larger than the others used, with 195 665 samples. It is also more balanced, as it has 80 335 positive samples.
Four sensitive attributes are used for the calculations: in this case age, marital status, sex and race. Information about employment and education is not included as sensitive attributes.
Due to the large number of race groups in the survey, it is necessary to simplify this trait in order to have each group make up at least 1% of the dataset, to guarantee statistical significance for each group.
D.2 Hyperparameters
In addition to the hyperparameters discussed in Sec. 4.1, we also mention that our model was optimized with the Adam optimizer implementation of PyTorch, with a learning rate of 0.001 and a batch size of 4096. The loss was minimized over 100 epochs, with 𝜆 = 0 for the first 20 to avoid constraining ℎ before it learns anything.
To find these hyperparameters, we took the 80%/20% train/test split already generated for each seed, and further divided the train set into a smaller train set and a validation set with relative sizes 80% and 20% respectively. Keeping the FAIRRET strength 𝜆 = 0, we performed a grid search on the neural architecture in combination with the learning rate in range [0.01, 0.005, 0.001, 0.0005, 0.0001]. We then selected the combination with the best AUROC on the validation set of the Bank dataset. The number of epochs and the batch size were tuned manually based on the convergence properties of the validation AUROC.
D.3 FFB Baselines
For all FFB (Han et al., 2023) baselines, we used the publicly available implementation⁷ with only minor adjustments to make them fit in our experiment pipeline. From their implementation, we used the following baselines.
• HSIC minimizes the Hilbert-Schmidt Independence Criterion between the model's prediction probabilities and the sensitive attributes (Pérez-Suay et al., 2017).
• PRemover minimizes the mutual information between the model's prediction probabilities and the sensitive attributes (Kamishima et al., 2012).
• AdvDebias maximizes an adversary's cross entropy between its predictions for the sensitive attribute and the actual sensitive attribute, given the last hidden layer of ℎ (Adel et al., 2019). After trying several configurations, we achieved the most stable results with an adversarial net of one hidden layer of size 32.
We again stress these implementations only optimize for the specific fairness notion listed above and only for a single categorical sensitive attribute. At the time of writing, the FFB also contains implementations for a regularization term that minimizes the gap to obtain DP and EO, but this is highly similar to our violation FAIRRETs and would not be an informative comparison.
D.4 Confidence Ellipses
The confidence ellipses we use in Fig. 2, Fig. 6 and Fig. 7 are uncommon in machine learning literature. Yet, they work well for our purpose of comparing trade-offs between metrics that may be noisy depending on randomness during training and dataset split selection. Recall that 1-dimensional confidence intervals typically assume a mean estimator to be normally distributed. The confidence interval then denotes the uncertainty of the sample mean using the standard error. Similarly, confidence ellipses assume a 2-dimensional point, i.e. the 2-dimensional mean estimator, to have a multivariate normal distribution that can be characterized through the sample mean and standard error statistics.
Our implementation of the confidence ellipses follows a featured implementation on matplotlib⁸. However, a crucial difference is that this implementation computes a confidence interval for a 2-dimensional random variable based on the covariance matrix for the standard deviation of samples of that variable. Following observations by Schubert and Kirchner (2014), we instead want to show the uncertainty of the mean estimator, which should use the standard deviation of that estimator, i.e. the covariance for the standard error. This is accomplished by dividing the covariance matrix in the matplotlib implementation by the number of seeds (5) we use in our experiments.
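A minimal sketch of this adjustment (our own helper, not the matplotlib code itself):

import numpy as np

def standard_error_cov(points):
    # points: (n_seeds, 2) array of (fairness violation, AUROC) pairs, one per seed.
    cov = np.cov(points, rowvar=False)   # sample covariance of the results (2 x 2)
    return cov / len(points)             # covariance of the 2-dimensional mean estimator

The returned matrix can then be passed to an ellipse-drawing routine in place of the sample covariance.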
D.5 Computation Cost and Runtimes
We report some of the runtimes in Fig. 9 that illustrate the difference in computational cost between FAIRRETs and baselines during the main experiments of Sec. 4. Note that both our FAIRRET implementation and the FFB implementation were designed for intuitive use in research and not yet optimized for runtime speed. All experiments in Sec. 4 were conducted on an internal server equipped with a 12 Core Intel(R) Xeon(R) Gold processor and 256 GB of RAM. All experiments, including preliminary and failed experiments, cost approximately 100 hours per CPU.
Figure 2: Mean test set results with confidence ellipse for the standard error. Each marker is a separate combination of dataset, FAIRRET, FAIRRET strength, and statistic. Results in the lower right are optimal. Failed runs (with an AUROC far worse than the rest) are omitted.
Figure 3: Starting from the same setup as in Fig. 1, we show the probability scores of both ℎ (full line) and the projected distributions 𝑓 * of each projection FAIRRET (dotted lines). The 𝑦-axis shows the KDE densities of these scores, all on the same scale.
Figure 4: (left) Test set DP violation, with a similar experiment setup as Fig. 2 on the ACSIncome dataset. Each result bar results from a separate training run with the 𝐷 KL -projection FAIRRET that was minimizing the DP violation. The configurations only differ in the maximum number of iterations used in the convex optimizations that compute the actual 𝐷 KL -projections 𝑓 * (see Sec. 3.2). (right) The total training time of these runs with standard error.
Figure 5: Test set SmoothMax loss 𝑅 𝛾 (ℎ) with the positive rate statistic (enforcing the DP notion), computed for an unfair model ℎ trained with the same setup as in Fig. 1. Each loss is computed over the entire test dataset, but chunked using different batch sizes. For smaller batch sizes, the mean SmoothMax loss is an overestimate of the actual SmoothMax loss computed over all 39 133 samples.
Figure 6: Test set results for the experiments in Fig. 2, but with different FAIRRETs.
Figure 8: Similar to Fig. 2, but with fairness violations computed for a single sensitive feature. For these results, FAIRRET experiments were redone and optimized for only this feature specifically.
Figure 9: Runtimes for the ACSIncome dataset experiments discussed in Sec. 4 with strength 𝜆 = 1 (except for the Unfair baseline). The FAIRRETs were optimizing the DP fairness notion. Note the log scale.
Table 1: Fairness definitions and their 𝛼 and 𝛽 functions. Conditional Demographic Parity encompasses many notions with an arbitrary function 𝜁 conditioned on the input X.

Fairness Definition | 𝛼 0 | 𝛽 0 | 𝛼 1 | 𝛽 1
Demographic Parity (Dwork et al., 2012) | 0 | 1 | 1 | 0
Conditional Demographic Parity (Wachter et al., 2020) | 0 | 𝜁 (X) | 𝜁 (X) | 0
Equal Opportunity (Hardt et al., 2016) | 0 | 𝑌 | 𝑌 | 0
False Positive Parity (Hardt et al., 2016) | 0 | 1 − 𝑌 | 1 − 𝑌 | 0
Predictive Parity (Chouldechova, 2017) | 0 | 𝑌 | 0 | 1
False Omission Parity | 𝑌 | −𝑌 | 1 | −1
Accuracy Equality (Berk et al., 2021) | 1 − 𝑌 | 2𝑌 − 1 | 1 | 0
Treatment Equality (Berk et al., 2021) | 𝑌 | −𝑌 | 0 | 1 − 𝑌
¹ https://scikit-learn.org/1.3/developers/develop.html describes these Estimators
² In cases where 𝛾(ℎ) = 0, we can simply use v 𝑘 (ℎ) = |𝛾(𝑘; ℎ)| instead. We assume ℎ(X) ∈ (0, 1), so this only occurs in degenerate cases for the notions in Table 1 (like when all 𝑌 = 0 for Equal Opportunity).
³ There is some abuse of notation here. When taking the gradient or Jacobian with respect to ℎ, we take it with respect to the vector of 𝑛 outputs of ℎ for a set of 𝑛 input features sampled from the distribution over X.
⁴ Curated and published by the SEAPHE project
⁵ If the support of X does not equal R 𝑑 𝑥 , then we can simply reformulate our framework to only consider function outputs 𝑓 (X) and ℎ(X) for points X that do lie in its support.
⁶ Curated and published by the SEAPHE project
⁷ https://github.com/ahxt/fair_fairness_benchmark/commit/abec4de80455831ce8d2e158629dfb738a572201
⁸ https://matplotlib.org/3.7.0/gallery/statistics/confidence_ellipse.html
Acknowledgments
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) (ERC Grant Agreement no. 615517), and under the European Union's Horizon 2020 research and innovation programme (ERC Grant Agreement no. 963924), from the Special Research Fund (BOF) of Ghent University (BOF20/IBF/117), from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme, and from the FWO (project no. G0F9816N, 3G042220). MB is supported by a doctoral scholarship from the Special Research Fund (BOF) of Ghent University (reference number: BOF20/DOC/144).
Ethics Statement
The FAIRRET framework was made as a technical tool to help unveil and address a mathematical formalization of fairness in machine learning systems. However, such tools should never be considered a sufficient solution to truly achieve fairness in real-world decision processes (Buyl and De Bie, 2023), e.g. because the social, human component of fairness is completely outside the control of this framework (Selbst et al., 2019). There is a significant risk that technologies such as ours may anyway be abused to suggest discriminatory bias has been 'removed' from a decision process without actually addressing underlying injustices (Hoffmann, 2019).
References
Tameem Adel, Isabel Valera, Zoubin Ghahramani, and Adrian Weller. One-network adversarial fairness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, July 2019. doi: 10.1609/aaai.v33i01.33012412.
Alekh Agarwal, Alina Beygelzimer, Miroslav Dudik, John Langford, and Hanna Wallach. A reductions approach to fair classification. In Proceedings of the 35th International Conference on Machine Learning. PMLR, July 2018.
Akshay Agrawal, Robin Verschueren, Steven Diamond, and Stephen Boyd. A rewriting system for convex optimization problems. Journal of Control and Decision, 5(1), 2018.
Wael Alghamdi, Shahab Asoodeh, Hao Wang, Flavio P. Calmon, Dennis Wei, and Karthikeyan Natesan Ramamurthy. Model projection: Theory and applications to fair machine learning. In 2020 IEEE International Symposium on Information Theory (ISIT). IEEE, June 2020. doi: 10.1109/ISIT44484.2020.9173988.
Shun-Ichi Amari. 𝛼-divergence is unique, belonging to both f-divergence and Bregman divergence classes. IEEE Transactions on Information Theory, 55, November 2009. doi: 10.1109/TIT.2009.2030485.
Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org, 2019.
Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, and Yunfeng Zhang. AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. October 2018.
Harry Bendekgey and Erik B. Sudderth. Scalable and stable surrogates for flexible classifiers with fairness constraints.
Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. A convex framework for fair regression. June 2017.
Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 50(1), 2021. doi: 10.1177/0049124118782533.
Sarah Bird, Miro Dudík, Richard Edgar, Brandon Horn, Roman Lutz, Vanessa Milan, Mehrnoosh Sameki, Hanna Wallach, and Kathleen Walker. Fairlearn: A toolkit for assessing and improving fairness in AI. Technical Report MSR-TR-2020-32, Microsoft, May 2020.
Stephen P. Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK; New York, 2004.
Maarten Buyl and Tijl De Bie. The KL-divergence between a graph model and its fair I-projection as a fairness regularizer. In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2021.
Maarten Buyl and Tijl De Bie. Optimal transport of classifiers to fairness. In Advances in Neural Information Processing Systems, volume 35, December 2022.
Maarten Buyl and Tijl De Bie. Inherent limitations of AI fairness. June 2023.
L. Elisa Celis, Lingxiao Huang, Vijay Keswani, and Nisheeth K. Vishnoi. Classification with fairness constraints: A meta-algorithm with provable guarantees. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA. ACM, January 2019. doi: 10.1145/3287560.3287586.
Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), June 2017. doi: 10.1089/big.2016.0047.
Ching-Yao Chuang and Youssef Mroueh. Fair Mixup: Fairness via interpolation. In International Conference on Learning Representations, 2020.
Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '17, New York, NY, USA. Association for Computing Machinery, August 2017. doi: 10.1145/3097983.3098095.
I. Csiszar. I-divergence geometry of probability distributions and minimization problems. The Annals of Probability, 3(1), 1975.
Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83), 2016.
Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring Adult: New datasets for fair machine learning. In Advances in Neural Information Processing Systems, volume 34, 2021.
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS '12, New York, NY, USA. Association for Computing Machinery, January 2012. doi: 10.1145/2090236.2090255.
Alessandro Fabris, Stefano Messina, Gianmaria Silvello, and Gian Antonio Susto. Algorithmic fairness datasets: The story so far. Data Mining and Knowledge Discovery, 36(6), November 2022. doi: 10.1007/s10618-022-00854-z.
Xiaotian Han, Jianfeng Chi, Yu Chen, Qifan Wang, Han Zhao, Na Zou, and Xia Hu. FFB: A fair fairness benchmark for in-processing group fairness methods. June 2023.
Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.
Anna Lauren Hoffmann. Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), June 2019. doi: 10.1080/1369118X.2019.1573912.
Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. Fairness-aware classifier with prejudice remover regularizer. In Peter A. Flach, Tijl De Bie, and Nello Cristianini, editors, Machine Learning and Knowledge Discovery in Databases, Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, 2012. doi: 10.1007/978-3-642-33486-3_3.
Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proceedings of the 35th International Conference on Machine Learning. PMLR, July 2018.
Jan Kukačka, Vladimir Golkov, and Daniel Cremers. Regularization for deep learning: A taxonomy. October 2017.
Michael Lohaus, Michael Perrot, and Ulrike von Luxburg. Too relaxed to be fair. In Proceedings of the 37th International Conference on Machine Learning. PMLR, November 2020.
Sode Masashi. FairTorch, December 2020. Version 0.1.2.
Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), July 2021. doi: 10.1145/3457607.
Sérgio Moro, Paulo Cortez, and Paulo Rita. A data-driven approach to predict the success of bank telemarketing. Decision Support Systems, 62, 2014. doi: 10.1016/j.dss.2014.03.001.
Manisha Padala and Sujit Gujar. FNNC: Achieving fairness through neural networks. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI'20, Yokohama, Japan, January 2021.
Kirtan Padh, Diego Antognini, Emma Lejal-Glaude, Boi Faltings, and Claudiu Musat. Addressing fairness in classification with a model-agnostic multi-objective algorithm. In Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence. PMLR, December 2021.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Adrián Pérez-Suay, Valero Laparra, Gonzalo Mateo-García, Jordi Muñoz-Marí, Luis Gómez-Chova, and Gustau Camps-Valls. Fair kernel learning. In Michelangelo Ceci, Jaakko Hollmén, Ljupčo Todorovski, Celine Vens, and Sašo Džeroski, editors, Machine Learning and Knowledge Discovery in Databases, Lecture Notes in Computer Science. Springer International Publishing, Cham, 2017. doi: 10.1007/978-3-319-71249-9_21.
Patric Schubert and Marietta Kirchner. Ellipse area calculations and their applicability in posturography. Gait & Posture, 39(1), January 2014. doi: 10.1016/j.gaitpost.2013.09.001.
Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA. ACM, January 2019. doi: 10.1145/3287560.3287598.
Sahil Verma and Julia Rubin. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness, Gothenburg, Sweden. ACM, May 2018. doi: 10.1145/3194770.3194776.
Sandra Wachter, Brent Mittelstadt, and Chris Russell. Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Computer Law and Security Review, 41, March 2020. doi: 10.2139/ssrn.3547922.
Dennis Wei, Karthikeyan Natesan Ramamurthy, and Flavio Calmon. Optimized score transformation for fair classification. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics. PMLR, June 2020.
Unlocking Fairness: A Trade-off Revisited. Michael Wick, Jean-Baptiste Tristan, Advances in Neural Information Processing Systems. Curran Associates, Inc201932
The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. I-Cheng Yeh, Che Hui, Lien , 10.1016/j.eswa.2007.12.020.URLhttps://www.sciencedirect.com/science/article/pii/S0957417407006719Expert Systems with Applications. 0957-41743622009
Fairness Constraints: A Flexible Approach for Fair Classification. Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, Krishna P Gummadi, Journal of Machine Learning Research. 1533-792820752019
Learning Fair Representations. Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, Cynthia Dwork, Proceedings of the 30th International Conference on Machine Learning. the 30th International Conference on Machine LearningPMLRMay 2013
Mitigating Unwanted Biases with Adversarial Learning. Brian Hu Zhang, Blake Lemoine, Margaret Mitchell, 10.1145/3278721.3278779Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES '18. the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES '18New York, NY, USAAssociation for Computing MachineryDecember 2018 |
231,847,016 | DISCOVERING A SET OF POLICIES FOR THE WORST CASE REWARD | We study the problem of how to construct a set of policies that can be composed together to solve a collection of reinforcement learning tasks. Each task is a different reward function defined as a linear combination of known features. We consider a specific class of policy compositions which we call set improving policies (SIPs): given a set of policies and a set of tasks, a SIP is any composition of the former whose performance is at least as good as that of its constituents across all the tasks. We focus on the most conservative instantiation of SIPs, set-max policies (SMPs), so our analysis extends to any SIP. This includes known policy-composition operators like generalized policy improvement. Our main contribution is a policy iteration algorithm that builds a set of policies in order to maximize the worst-case performance of the resulting SMP on the set of tasks. The algorithm works by successively adding new policies to the set. We show that the worst-case performance of the resulting SMP strictly improves at each iteration, and the algorithm only stops when there does not exist a policy that leads to improved performance. We empirically evaluate our algorithm on a grid world and also on a set of domains from the DeepMind control suite. We confirm our theoretical results regarding the monotonically improving performance of our algorithm. Interestingly, we also show empirically that the sets of policies computed by the algorithm are diverse, leading to different trajectories in the grid world and very distinct locomotion skills in the control suite. | [] |
Tom Zahavy
Andre Barreto
Daniel J Mankowitz
Shaobo Hou
Brendan O'Donoghue
Iurii Kemaev
Satinder Singh
DeepMind
DISCOVERING A SET OF POLICIES FOR THE WORST CASE REWARD
Published as a conference paper at ICLR 2021
We study the problem of how to construct a set of policies that can be composed together to solve a collection of reinforcement learning tasks. Each task is a different reward function defined as a linear combination of known features. We consider a specific class of policy compositions which we call set improving policies (SIPs): given a set of policies and a set of tasks, a SIP is any composition of the former whose performance is at least as good as that of its constituents across all the tasks. We focus on the most conservative instantiation of SIPs, set-max policies (SMPs), so our analysis extends to any SIP. This includes known policy-composition operators like generalized policy improvement. Our main contribution is a policy iteration algorithm that builds a set of policies in order to maximize the worst-case performance of the resulting SMP on the set of tasks. The algorithm works by successively adding new policies to the set. We show that the worst-case performance of the resulting SMP strictly improves at each iteration, and the algorithm only stops when there does not exist a policy that leads to improved performance. We empirically evaluate our algorithm on a grid world and also on a set of domains from the DeepMind control suite. We confirm our theoretical results regarding the monotonically improving performance of our algorithm. Interestingly, we also show empirically that the sets of policies computed by the algorithm are diverse, leading to different trajectories in the grid world and very distinct locomotion skills in the control suite.
INTRODUCTION
Reinforcement learning (RL) is concerned with building agents that can learn to act so as to maximize reward through trial-and-error interaction with the environment. There are several reasons why it can be useful for an agent to learn about multiple ways of behaving, i.e., learn about multiple policies. The agent may want to achieve multiple tasks (or subgoals) in a lifelong learning setting and may learn a separate policy for each task, reusing them as needed when tasks reoccur. The agent may have a hierarchical architecture in which many policies are learned at a lower level while an upper level policy learns to combine them in useful ways, such as to accelerate learning on a single task or to transfer efficiently to a new task. Learning about multiple policies in the form of options (Sutton et al., 1999a) can be a good way to achieve temporal abstraction; again this can be used to quickly plan good policies for new tasks. In this paper we abstract away from these specific scenarios and ask the following question: what set of policies should the agent pre-learn in order to guarantee good performance under the worst-case reward? A satisfactory answer to this question could be useful in all the scenarios discussed above and potentially many others.
There are two components to the question above: (i) what policies should be in the set, and (ii) how to compose a policy to be used on a new task from the policies in the set. To answer (ii), we propose the concept of a set improving policy (SIP). Given any set of n policies, a SIP is any composition of these policies whose performance is at least as good as, and generally better than, that of all of the constituent policies in the set. We present two policy composition (or improvement) operators that lead to a SIP. The first is called set-max policy (SMP). Given a distribution over states, a SMP chooses from the n policies the one that leads to the highest expected value. The second SIP operator is generalized policy improvement (Barreto et al., 2017, GPI). Given a set of n policies and their associated action-value functions, GPI is a natural extension of regular policy improvement in which the agent acts greedily in each state with respect to the maximum over the set of action-value functions. Although SMP provides weaker guarantees than GPI (we will show this below), it is more amenable to analysis and thus we will use it exclusively for our theoretical results. However, since SMP's performance serves as a lower bound on GPI's, the results we derive for the former also apply to the latter. In our illustrative experiments we will show this result empirically. Now that we have fixed the answer to (ii), i.e., how to compose pre-learned policies for a new reward function, we can leverage it to address (i): what criterion to use to pre-learn the policies. Here, one can appeal to heuristics such as the ones advocating that the set of pre-learned policies should be as diverse as possible (Eysenbach et al., 2018; Gregor et al., 2016; Grimm et al., 2019; Hansen et al., 2019). In this paper we will use the formal criterion of robustness, i.e., we will seek a set of policies that do as well as possible in the worst-case scenario. Thus, the problem of interest to this paper is as follows: how to define and discover a set of n policies that maximize the worst possible performance of the resulting SMP across all possible tasks? Interestingly, as we will discuss, the solution to this robustness problem naturally leads to a diverse set of policies.
To solve the problem posed above we make two assumptions: (A1) that tasks differ only in their reward functions, and (A2) that reward functions are linear combinations of known features. These two assumptions allow us to leverage the concept of successor features (SFs) and prior work in apprenticeship learning. As our main contribution in this paper, we present an algorithm that iteratively builds a set of policies such that SMP's performance with respect to the worst case reward provably improves in each iteration, stopping when no such greedy improvement is possible. We also provide a closed-form expression to compute the worst-case performance of our algorithm at each iteration. This means that, given tasks satisfying Assumptions A1 and A2, we are able to provably construct a SIP that can quickly adapt to any task with guaranteed worst-case performance.
Related Work. The proposed approach has interesting connections with hierarchical RL (HRL) (Sutton et al., 1999b;Dietterich, 2000). We can think of SMP (and GPI) as a higher-level policy-selection mechanism that is fixed a priori. Under this interpretation, the problem we are solving can be seen as the definition and discovery of lower-level policies that will lead to a robust hierarchical agent.
There are interesting parallels between robustness and diversity. For example, diverse stock portfolios have less risk. In robust least squares (El Ghaoui & Lebret, 1997;Xu et al., 2009), the goal is to find a solution that will perform well with respect to (w.r.t) data perturbations. This leads to a min-max formulation, and there are known equivalences between solving a robust (min-max) problem and the diversity of the solution (via regularization) (Xu & Mannor, 2012). Our work is also related to robust Markov decision processes (MDPs) (Nilim & El Ghaoui, 2005), but our focus is on a different aspect of the problem. While in robust MDPs the uncertainty is w.r.t the dynamics of the environment, here we focus on uncertainty w.r.t the reward and assume that the dynamics are fixed. More importantly, we are interested in the hierarchical aspect of the problem -how to discover and compose a set of policies. In contrast, solutions to robust MDPs are typically composed of a single policy.
In Apprenticeship Learning (AL; Abbeel & Ng, 2004) the goal is also to solve a min-max problem in which the agent is expected to perform as well as an expert w.r.t any reward. If we ignore the expert, AL algorithms can be used to find a single policy that performs well w.r.t any reward. The solution to this problem (when there is no expert) is the policy whose SFs have the smallest possible norm. When the SFs are in the simplex (as in tabular MDPs) the vector with the smallest $\ell_2$ norm puts equal probabilities on its coordinates, and is therefore "diverse" (making an equivalence between the robust min-max formulation and the diversity perspective). In that sense, our problem can be seen as a modified AL setup where: (a) no expert demonstrations are available, (b) the agent is allowed to observe the reward at test time, and (c) the goal is to learn a set of constituent policies.
PRELIMINARIES
We will model our problem of interest using a family of Markov Decision Processes (MDPs). An MDP is a tuple $M \triangleq (S, A, P, r, \gamma, D)$, where $S$ is the set of states, $A$ is the set of actions, $P = \{P_a \mid a \in A\}$ is the set of transition kernels, $\gamma \in [0, 1]$ is the discount factor and $D$ is the initial state distribution. The function $r : S \times A \times S \to \mathbb{R}$ defines the rewards, and thus the agent's objective; here we are interested in multiple reward functions, as we explain next.
Let $\phi(s, a, s') \in [0, 1]^d$ be an observable vector of features (our analysis only requires the features to be bounded; we use $[0, 1]$ for ease of exposition). We are interested in the set of tasks induced by all possible linear combinations of the features $\phi$. Specifically, for any $w \in \mathbb{R}^d$, we can define a reward function $r_w(s, a, s') = w \cdot \phi(s, a, s')$. Given $w$, the reward $r_w$ is well defined and we will use the terms $w$ and $r_w$ interchangeably to refer to the RL task induced by it. Formally, we are interested in the following set of MDPs:
$$\mathcal{M}^\phi \triangleq \{(S, A, P, r_w, \gamma, D) \mid w \in \mathcal{W}\}. \tag{1}$$
In general, $\mathcal{W}$ is any convex set, but we will focus on the $d$-dimensional $\ell_2$ unit ball, denoted by $\mathcal{W} = B_2$. This choice is not restricting, since the optimal policy in an MDP is invariant with respect to the scale of the rewards and the $\ell_2$ ball contains all the directions.
A policy in an MDP $M \in \mathcal{M}^\phi$, denoted by $\pi \in \Pi$, is a mapping $\pi : S \to \mathcal{P}(A)$, where $\mathcal{P}(A)$ is the space of probability distributions over $A$. For a policy $\pi$ we define the successor features (SFs) as
$$\psi^\pi(s, a) \triangleq (1 - \gamma) \cdot \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t, s_{t+1}) \,\middle|\, P, \pi, s_t = s, a_t = a\right]. \tag{2}$$
The multiplication by $1 - \gamma$, together with the fact that the features $\phi$ are in $[0, 1]$, assures that $\psi^\pi(s, a) \in [0, 1]^d$ for all $(s, a) \in S \times A$.¹ We also define SFs that are conditioned on the initial state distribution $D$ and the policy $\pi$ as $\psi^\pi \triangleq \mathbb{E}[\psi^\pi(s, a) \mid D, \pi] = \mathbb{E}_{s \sim D, a \sim \pi(s)}\, \psi^\pi(s, a)$. It should be clear that the SFs are conditioned on $D$ and $\pi$ whenever they are not written as a function of states and actions like in Eq. (2). Note that, given a policy $\pi$, $\psi^\pi$ is simply a vector in $[0, 1]^d$. Since we will be dealing with multiple policies, we will use superscripts to refer to them; that is, we use $\pi^i$ to refer to the $i$-th policy. To keep the notation simple, we will refer to the SFs of policy $\pi^i$ as $\psi^i$. We define the action-value function (or Q-function) of policy $\pi$ under reward $r_w$ as
$$Q^\pi_w(s, a) \triangleq (1 - \gamma)\,\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t, s_{t+1}) \cdot w \,\middle|\, P, \pi, s_t = s, a_t = a\right] = \psi^\pi(s, a) \cdot w.$$
We define the value of a policy $\pi$ as $v^\pi_w \triangleq (1 - \gamma)\,\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t w \cdot \phi(s_t) \mid \pi, P, D\right] = \psi^\pi \cdot w$. Note that $v^\pi_w$ is a scalar, corresponding to the expected value of policy $\pi$ under the initial state distribution $D$, given by
$$v^\pi_w = \mathbb{E}[Q^\pi_w(s, a) \mid D, \pi] = \mathbb{E}_{s \sim D, a \sim \pi(s)}\, Q^\pi_w(s, a). \tag{3}$$
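To make the SF machinery concrete, the following is a minimal Python sketch (our illustration, not the paper's code) of a Monte-Carlo estimate of $\psi^\pi$ under $D$, per Eqs. (2)-(3). Here `env`, `policy`, and `phi` are assumed interfaces, with `env.step` assumed to return the next state and a done flag.

```python
import numpy as np

def estimate_sfs(env, policy, phi, gamma, n_episodes=100, horizon=200):
    """Monte-Carlo estimate of psi^pi under the initial state distribution D,
    per Eqs. (2)-(3): the (1 - gamma)-scaled discounted sum of features."""
    total = 0.0
    for _ in range(n_episodes):
        s = env.reset()                      # s ~ D (assumed interface)
        acc = 0.0
        for t in range(horizon):
            a = policy(s)
            s_next, done = env.step(a)       # assumed to return (next state, done)
            acc = acc + gamma ** t * phi(s, a, s_next)
            s = s_next
            if done:
                break
        total = total + acc
    return (1.0 - gamma) * total / n_episodes

# The value of pi on any task w is then a single dot product: v = psi @ w.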
COMPOSING POLICIES TO SOLVE A SET OF MDPS

As described, we are interested in solving all the tasks $w \in \mathcal{W}$ in the set of MDPs $\mathcal{M}^\phi$ defined in (1). We will approach this problem by learning policies associated with specific rewards $w$ and then composing them to build a higher-level policy that performs well across all the tasks. We call this higher-level policy a generalized policy, defined as (Barreto et al., 2020):

Definition 1 (Generalized policy). Given a set of MDPs $\mathcal{M}^\phi$, a generalized policy is a function $\pi : S \times \mathcal{W} \to \mathcal{P}(A)$ that maps a state $s$ and a task $w$ onto a distribution over actions.

We can think of a generalized policy as a regular policy parameterized by a task, since for a fixed $w$ we have $\pi(\cdot\,; w) : S \to \mathcal{P}(A)$. We now focus our attention on a specific class of generalized policies that are composed of other policies:

Definition 2 (SIP). Given a set of MDPs $\mathcal{M}^\phi$ and a set of $n$ policies $\Pi^n = \{\pi^i\}_{i=1}^n$, a set improving policy (SIP) $\pi^{\mathrm{SIP}}$ is any generalized policy such that:
$$v^{\mathrm{SIP}}_{\Pi^n, w} \ge v^i_w \quad \text{for all } \pi^i \in \Pi^n \text{ and all } w \in \mathcal{W}, \tag{4}$$
where $v^{\mathrm{SIP}}_{\Pi^n, w}$ and $v^i_w$ are the value functions of $\pi^{\mathrm{SIP}}_{\Pi^n}(\cdot\,; w)$ and the policies $\pi^i \in \Pi^n$ under reward $r_w$.

We have been deliberately vague about the specific way the policies $\pi^i \in \Pi^n$ are combined to form a SIP, to have as inclusive a concept as possible. We now describe two concrete ways to construct a SIP.

Definition 3 (SMP). Let $\Pi^n = \{\pi^i\}_{i=1}^n$ be a set of $n$ policies and let $v^i$ be the corresponding value functions defined analogously to (3) for an arbitrary reward. A set-max policy (SMP) is defined as $\pi^{\mathrm{SMP}}_{\Pi^n}(s; w) = \pi^k(s)$, with $k = \arg\max_{i \in [1, \ldots, n]} v^i_w$.

Combining the concepts of SMP and SFs we can build a SIP for $\mathcal{M}^\phi$. Given the SFs of the policies $\pi^i \in \Pi^n$, $\{\psi^i\}_{i=1}^n$, we can quickly compute a generalized SMP as
$$\pi^{\mathrm{SMP}}_{\Pi^n}(s; w) = \pi^k(s), \quad \text{with } k = \arg\max_{i \in [1, \ldots, n]} \{w \cdot \psi^i\}. \tag{5}$$
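In code, selecting the SMP's constituent for a task is a single matrix-vector product over the precomputed SFs; a minimal sketch (our illustration):

```python
import numpy as np

def smp_select(psis, w):
    """Set-max policy, Eq. (5): given SFs psis with shape (n, d) and a task
    vector w, return the index of the constituent policy with the highest
    value w . psi_i."""
    return int(np.argmax(psis @ w))
```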
Since the value of a SMP under reward $w$ is given by $v^{\mathrm{SMP}}_{\Pi^n, w} = \max_{i \in [1, \ldots, n]} v^i_w$, it trivially qualifies as a SIP as per Definition 2. In fact, the generalized policy $\pi^{\mathrm{SMP}}_{\Pi^n}$ defined in (5) is in some sense the most conservative SIP possible, as it will always satisfy (4) with equality. This means that any other SIP will perform at least as well as the SIP induced by SMP. We formalize this notion below:

Lemma 1. Let $\pi^{\mathrm{SMP}}_{\Pi^n}$ be a SMP defined as in (5) and let $\pi : S \times \mathcal{W} \to \mathcal{P}(A)$ be any generalized policy. Then, given a set of $n$ policies $\Pi^n$, $\pi$ is a SIP if and only if $v^\pi_{\Pi^n, w} \ge v^{\mathrm{SMP}}_{\Pi^n, w}$ for all $w \in \mathcal{W}$.
Due to space constraints, all the proofs can be found in the supplementary material. Lemma 1 allows us to use SMP to derive results that apply to all SIPs. For example, a lower bound for $v^{\mathrm{SMP}}_{\Pi^n, w}$ automatically applies to all possible $v^{\mathrm{SIP}}_{\Pi^n, w}$. Lemma 1 also allows us to treat SMP as a criterion to determine whether a given generalized policy qualifies as a SIP. We illustrate this by introducing a second candidate to construct a SIP, called generalized policy improvement (Barreto et al., 2017; 2018; 2020, GPI):

Definition 4 (GPI policy). Given a set of $n$ policies $\Pi^n = \{\pi^i\}_{i=1}^n$ and corresponding Q-functions $Q^i_w$ computed under an arbitrary reward $w$, the GPI policy is defined as
$$\pi^{\mathrm{GPI}}_{\Pi^n}(s; w) = \arg\max_a \max_i Q^i_w(s, a).$$

Again, we can combine GPI and SFs to build a generalized policy. Given the SFs of the policies $\pi^i \in \Pi^n$, $\{\psi^i\}_{i=1}^n$, we can quickly compute the generalized GPI policy as $\pi^{\mathrm{GPI}}_{\Pi^n}(s; w) = \arg\max_a \max_i \psi^i(s, a) \cdot w$. Note that the maximization in GPI is performed in each state and uses the Q-functions of the constituent policies. In contrast, SMP maximizes over value functions (not Q-functions), with an expectation over states taken with respect to the initial state distribution $D$. For this reason, GPI is a stronger composition than SMP. We now formalize this intuition:
Lemma 2. For any reward $w \in \mathcal{W}$ and any set of policies $\Pi^n$, we have that $v^{\mathrm{GPI}}_{\Pi^n, w} \ge v^{\mathrm{SMP}}_{\Pi^n, w}$.
Lemma 2 implies that for any set of policies it is always better to use a GPI policy rather than an SMP (as we will confirm in the experiments). As a consequence, it also certifies that the generalized GPI policy $\pi^{\mathrm{GPI}}_{\Pi^n}(s; w)$ qualifies as a SIP (Lemma 1). We have described two ways of constructing a SIP by combining SMP and GPI with SFs. Other similar strategies might be possible, for example by using local SARSA (Russell & Zimdars, 2003; Sprague & Ballard, 2003) as the basic mechanism to compose a set of value functions. We also note that in some cases it is possible to define a generalized policy (Definition 1) that is not necessarily a SIP (Eq. (5)), but is guaranteed to perform better than any SIP in expectation. For example, a combination of maximization, randomization and local search has been shown to be optimal in expectation among generalized policies in tabular MDPs with collectible rewards (Zahavy et al., 2020c). That said, we note that some compositions of policies that may at first seem like a SIP do not qualify as such. For example, a mixed policy is a linear (convex) combination of policies that assigns probabilities to the policies in the set and samples from them. When the mixed policy mixes the best policy in the set with a less performant policy, the result is not as good as the best single policy in the set (Zahavy et al., 2020c).
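For contrast with the SMP selection above, here is a minimal sketch of GPI action selection in the tabular case; the array layout `psi_sa[i, s, a]` for storing $\psi^i(s, a)$ is our assumption:

```python
import numpy as np

def gpi_act(psi_sa, s, w):
    """GPI action selection: psi_sa is an (n, |S|, |A|, d) array holding the
    per-state-action SFs psi^i(s, a); act greedily w.r.t. the maximum over
    the n Q-functions Q^i_w(s, a) = psi^i(s, a) . w."""
    q = psi_sa[:, s] @ w          # shape (n, |A|): Q^i_w(s, a) for each i
    return int(np.argmax(q.max(axis=0)))
```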
Problem formulation. We are now ready to formalize the problem we are interested in. Given a set of MDPs $\mathcal{M}^\phi$, as defined in (1), we want to construct a set of $n$ policies $\Pi^n = \{\pi^i\}_{i=1}^n$, such that the SMP defined on that set, $\pi^{\mathrm{SMP}}_{\Pi^n}$, will have the optimal worst-case performance over all rewards $w \in \mathcal{W}$. That is, we want to solve the following problem:
$$\arg\max_{\Pi^n \subseteq \Pi} \min_w v^{\mathrm{SMP}}_{\Pi^n, w}. \tag{6}$$
Note that, since $v^{\mathrm{SMP}}_{\Pi^n, w} \le v^{\mathrm{SIP}}_{\Pi^n, w}$ for any SIP, $\Pi^n$ and $w$, as shown in Lemma 1, by finding a good set for (6) we are also improving the performance of all SIPs (including GPI).
AN ITERATIVE METHOD TO CONSTRUCT A SET-MAX POLICY
We now present and analyze an iterative algorithm to solve problem (6). We begin by defining the worst case or adversarial reward associated with the generalized SMP policy:

Definition 5 (Adversarial reward for an SMP). Given a set of policies $\Pi^n$, we denote by $\bar{w}^{\mathrm{SMP}}_{\Pi^n} = \arg\min_{w \in B_2} v^{\mathrm{SMP}}_{\Pi^n, w}$ the worst case reward w.r.t the SMP $\pi^{\mathrm{SMP}}_{\Pi^n}$ defined in (5). In addition, the value of the SMP w.r.t $\bar{w}^{\mathrm{SMP}}_{\Pi^n}$ is defined by $\bar{v}^{\mathrm{SMP}}_{\Pi^n} = \min_{w \in B_2} v^{\mathrm{SMP}}_{\Pi^n, w}$.

We are interested in finding a set of policies $\Pi^n$ such that the performance of the resulting SMP will be optimal w.r.t its adversarial reward $\bar{w}^{\mathrm{SMP}}_{\Pi^n}$. This leads to a reformulation of (6) as a max-min-max optimization for discovering robust policies:
$$\arg\max_{\Pi^n \subseteq \Pi} \bar{v}^{\mathrm{SMP}}_{\Pi^n} = \arg\max_{\Pi^n \subseteq \Pi} \min_{w \in B_2} v^{\mathrm{SMP}}_{\Pi^n, w} = \arg\max_{\Pi^n \subseteq \Pi} \min_{w \in B_2} \max_{i \in [1, \ldots, n]} \psi^i \cdot w. \tag{7}$$
Algorithm 1 SMP worst case policy iteration

Initialize: Sample $w \sim N(0, 1)$; $\Pi^0 \leftarrow \{\,\}$; $\pi^1 \leftarrow \arg\max_{\pi \in \Pi} w \cdot \psi^\pi$; $t \leftarrow 1$; $\bar{v}^{\mathrm{SMP}}_{\Pi^1} \leftarrow -\|\psi^1\|$
repeat
    $\Pi^t \leftarrow \Pi^{t-1} + \{\pi^t\}$
    $\bar{w}^{\mathrm{SMP}}_{\Pi^t} \leftarrow$ solution to (8)
    $\pi^{t+1} \leftarrow$ solution of the RL task $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$
    $t \leftarrow t + 1$
until $v^{t}_{\bar{w}^{\mathrm{SMP}}_{\Pi^{t-1}}} \le \bar{v}^{\mathrm{SMP}}_{\Pi^{t-1}}$
return $\Pi^{t-1}$
The order in which the maximizations and the minimization are performed in (7) is important. (i) The inner maximization over policies (or SFs), by the SMP, is performed last. This means that, for a fixed set of policies $\Pi^n$ and a fixed reward $w$, SMP selects the best policy in the set. (ii) The minimization over rewards $w$ happens second: for a fixed set of policies $\Pi^n$, we compute the value of the generalized SMP $\pi^{\mathrm{SMP}}_{\Pi^n}(\cdot\,; w)$ for any reward $w$, and then minimize the maximum of these values. (iii) Finally, for any set of policies, there is an associated worst case reward for the SMP, and we are looking for policies that maximize this value.
The inner maximization (i) is simple: it comes down to computing $n$ dot-products $\psi^i \cdot w$, $i = 1, 2, \ldots, n$, and comparing the resulting values. The minimization problem (ii) is slightly more complicated, but fortunately easy to solve. To see this, note that this problem can be rewritten as:
$$\bar{w}^{\mathrm{SMP}}_{\Pi^n} = \arg\min_{w} \max_{i \in [1, \ldots, n]} \{w \cdot \psi^1, \ldots, w \cdot \psi^n\} \quad \text{s.t. } \|w\|_2 - 1 \le 0. \tag{8}$$
Eq. (8) is a convex optimization problem that can be easily solved using standard techniques, like gradient descent, and off-the-shelf solvers (Diamond & Boyd, 2016; Boyd et al., 2004). We note that the minimizer of Eq. (8) is a function of the policy set. As a result, the set forces the worst case reward to make a trade-off: it has to "choose" the coordinates it "wants" to be more adversarial for. This trade-off is what encourages the worst case reward to be diverse across iterations (w.r.t different sets). We note that this property holds since we are optimizing over $B_2$, but it will not necessarily be the case for other convex sets. For example, in the case of $B_\infty$ the internal minimization problem above has a single solution: a vector with $-1$ in all of its coordinates.
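Since the paper mentions solving Eq. (8) with off-the-shelf convex solvers such as CVXPY, here is a minimal sketch of how that might look; the array `psis` holding the SFs of the current policy set is our assumed input:

```python
import cvxpy as cp
import numpy as np

def worst_case_reward(psis):
    """Solve Eq. (8): minimize the SMP value max_i w . psi_i over the
    unit l2 ball. psis is an (n, d) numpy array of SFs."""
    d = psis.shape[1]
    w = cp.Variable(d)
    objective = cp.Minimize(cp.max(psis @ w))    # pointwise max over the n policies
    problem = cp.Problem(objective, [cp.norm(w, 2) <= 1])
    problem.solve()
    return w.value, problem.value                # (w_bar, v_bar)
```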
The outer maximization problem (iii) can be difficult to solve if we are searching over all possible sets of policies $\Pi^n \subseteq \Pi$. Instead, we propose an incremental approach in which policies $\pi^i$ are successively added to an initially empty set $\Pi^0$. This is possible because the solution $\bar{w}^{\mathrm{SMP}}_{\Pi^n}$ of (8) gives rise to a well-defined RL problem in which the rewards are given by $r_w(s, a, s') = \bar{w}^{\mathrm{SMP}}_{\Pi^n} \cdot \phi(s, a, s')$. This problem can be solved using any standard RL algorithm. So, once we have a solution $\bar{w}^{\mathrm{SMP}}_{\Pi^n}$ for (8), we solve the induced RL problem using any algorithm and add the resulting policy $\pi^{n+1}$ to $\Pi^n$ (or, rather, the associated SFs $\psi^{n+1}$).

Algorithm 1 gives a step by step description of the proposed method. The algorithm is initialized by adding a policy $\pi^1$ that maximizes a random reward vector $w$ to the set $\Pi^0$, such that $\Pi^1 = \{\pi^1\}$. At each subsequent iteration $t$ the algorithm computes the worst case reward $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$ w.r.t the current set $\Pi^t$ by solving (8). The algorithm then finds a policy $\pi^{t+1}$ that solves the task induced by $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$. If the value of $\pi^{t+1}$ w.r.t $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$ is strictly larger than $\bar{v}^{\mathrm{SMP}}_{\Pi^t}$, the algorithm continues for another iteration, with $\pi^{t+1}$ added to the set. Otherwise, the algorithm stops. As mentioned before, the set of policies $\Pi^t$ computed by Algorithm 1 can also be used with GPI. The resulting GPI policy will do at least as well as the SMP counterpart on any task $w$ (Lemma 2); in particular, GPI's worst-case performance will be lower bounded by $\bar{v}^{\mathrm{SMP}}_{\Pi^n}$.
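Putting the pieces together, here is a hedged Python sketch of the Algorithm 1 loop. The function `solve_rl_task` is a hypothetical stand-in for any RL algorithm that trains a policy on reward $r_w$ and returns its SFs; `worst_case_reward` is the Eq. (8) solver sketched above.

```python
import numpy as np

def algorithm1_sketch(solve_rl_task, d, max_iters=50):
    """Incremental construction of the policy set, following Algorithm 1.
    solve_rl_task(w) is assumed to train a policy on reward r_w = w . phi
    and return its SFs psi (a length-d vector)."""
    w0 = np.random.randn(d)                   # random initial task
    psis = [solve_rl_task(w0)]                # pi^1 maximizes the random reward
    for _ in range(max_iters):
        w_bar, v_bar = worst_case_reward(np.array(psis))   # adversarial reward, Eq. (8)
        psi_new = solve_rl_task(w_bar)        # best response to w_bar
        if psi_new @ w_bar <= v_bar:          # no strict improvement: stop
            break
        psis.append(psi_new)                  # strict improvement (Theorem 1)
    return np.array(psis)
```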
THEORETICAL ANALYSIS
Algorithm 1 produces a sequence of policy sets $\Pi^1, \Pi^2, \ldots$ The definition of SMP guarantees that enlarging a set of policies always leads to a soft improvement in performance, so $\bar{v}^{\mathrm{SMP}}_{\Pi^{t+1}} \ge \bar{v}^{\mathrm{SMP}}_{\Pi^t} \ge \ldots \ge \bar{v}^{\mathrm{SMP}}_{\{\pi^1\}}$. We now show that the improvement in each iteration of our algorithm is in fact strict.

Theorem 1 (Strict improvement). Let $\Pi^1, \ldots, \Pi^t$ be the sets of policies constructed by Algorithm 1. We have that the worst-case performance of the SMP induced by these sets is strictly improving in each iteration, that is: $\bar{v}^{\mathrm{SMP}}_{\Pi^{t+1}} > \bar{v}^{\mathrm{SMP}}_{\Pi^t}$. Furthermore, when the algorithm stops, there does not exist a single policy $\pi^{t+1}$ such that adding it to $\Pi^t$ will result in improvement:
$$\nexists\, \pi^{t+1} \in \Pi \ \text{ s.t. } \ \bar{v}^{\mathrm{SMP}}_{\Pi^t + \{\pi^{t+1}\}} > \bar{v}^{\mathrm{SMP}}_{\Pi^t}.$$
In general we cannot say anything about the value of the SMP returned by Algorithm 1. However, in some special cases we can upper bound it. One such case is when the SFs lie in the simplex.

Lemma 3 (Impossibility result). For the special case where the SFs associated with any policy are in the simplex, the value of the SMP w.r.t the worst case reward for any set of policies is less than or equal to $-1/\sqrt{d}$. In addition, there exists an MDP where this upper bound is attainable.
One example where the SFs are in the simplex is when the features φ are "one-hot vectors", that is, they only have one nonzero element. This happens for example in a tabular representation, in which case the SFs correspond to stationary state distributions. Another example are the features induced by state aggregation, since these are simple indicator functions associating states to clusters (Singh et al., 1995). We will show in our experiments that when state aggregation is used our algorithm achieves the upper bound of Lemma 3 in practice.
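As a quick sanity check of Lemma 3 (our illustration), in the tabular case with $d$ orthogonal one-hot SFs the Eq. (8) solver sketched above should return a worst-case value of $-1/\sqrt{d}$:

```python
import numpy as np

# With the identity as the SF matrix (d one-hot stationary distributions),
# the adversarial reward is w = -(1, ..., 1)/sqrt(d) and the SMP value
# attains the Lemma 3 upper bound of -1/sqrt(d).
d = 4
psis = np.eye(d)                        # one-hot SFs, the simplex vertices
w_bar, v_bar = worst_case_reward(psis)  # Eq. (8) solver from above
print(np.isclose(v_bar, -1 / np.sqrt(d)))   # expected: True
```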
Finally, we observe that not all the policies in the set $\Pi^t$ are needed at each point in time, and we can guarantee strict improvement even if we remove the "inactive" policies from $\Pi^t$, as we show below.

Definition 6 (Active policies). Given a set of $n$ policies $\Pi^n$, and an associated worst case reward $\bar{w}^{\mathrm{SMP}}_{\Pi^n}$, the subset of active policies $\Pi_a(\Pi^n)$ are the policies in $\Pi^n$ that achieve $\bar{v}^{\mathrm{SMP}}_{\Pi^n}$ w.r.t $\bar{w}^{\mathrm{SMP}}_{\Pi^n}$:
$$\Pi_a(\Pi^n) = \left\{\pi \in \Pi^n : \psi^\pi \cdot \bar{w}^{\mathrm{SMP}}_{\Pi^n} = \bar{v}^{\mathrm{SMP}}_{\Pi^n}\right\}.$$

Theorem 2 (Sufficiency of active policies). For any set of policies $\Pi^n$, $\pi^{\mathrm{SMP}}_{\Pi_a(\Pi^n)}$ achieves the same value w.r.t the worst case reward as $\pi^{\mathrm{SMP}}_{\Pi^n}$, that is, $\bar{v}^{\mathrm{SMP}}_{\Pi^n} = \bar{v}^{\mathrm{SMP}}_{\Pi_a(\Pi^n)}$.

Theorem 2 implies that once we have found $\bar{w}^{\mathrm{SMP}}_{\Pi^n}$ we can remove the inactive policies from the set and still guarantee the same worst case performance. Furthermore, we can continue with Algorithm 1 to find the next policy by maximizing $\bar{w}^{\mathrm{SMP}}_{\Pi^n}$ and guarantee strict improvement via Theorem 1. This is important in applications that have memory constraints, since it allows us to store fewer policies.
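Given the solution $(\bar{w}, \bar{v})$ of Eq. (8), extracting the active set of Definition 6 is a one-liner up to numerical tolerance; a minimal sketch (our illustration):

```python
import numpy as np

def active_policies(psis, w_bar, v_bar, atol=1e-8):
    """Definition 6: indices of policies whose SFs achieve the worst-case
    value v_bar under w_bar; by Theorem 2 the inactive ones can be dropped
    without hurting the worst-case guarantee."""
    return [i for i in range(len(psis))
            if np.isclose(psis[i] @ w_bar, v_bar, atol=atol)]
```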
EXPERIMENTS
We begin with a 10 × 10 grid-world environment (Fig. 1(d)), where the agent starts in a random place in the grid (marked in black) and gains/loses reward from collecting items (marked in white). Each item belongs to one of $d - 1$ classes (here with $d = 5$) and is associated with a marker: 8, O, X, Y. In addition, there is one "no item" feature (marked in gray). The features are one-hot vectors, i.e., for $i \in [1, d - 1]$, $\phi_i(s)$ equals one when item $i$ is in state $s$ and zero otherwise (similarly, $\phi_d(s)$ equals one when there is no item in state $s$). The objective of the agent is to pick up the "good" objects and avoid the "bad" objects, depending on the weights of the vector $w$.
In Fig. 1(a) we report the performance of the SMP $\pi^{\mathrm{SMP}}_{\Pi^t}$ w.r.t $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$ for $d = 5$. At each iteration (x-axis) of Algorithm 1 we train a policy for $5 \cdot 10^5$ steps to maximize $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$. We then compute the SFs of that policy using an additional $5 \cdot 10^5$ steps and evaluate it w.r.t $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$. As we can see, the performance of SMP strictly improves as we add more policies to the set (as we stated in Theorem 1). In addition, we compare the performance of SMP with that of GPI, defined on the same sets of policies ($\Pi^t$) that were discovered by Algorithm 1. Since we do not know how to compute $\bar{w}^{\mathrm{GPI}}_{\Pi^t}$ (the worst case reward for GPI), we evaluate GPI w.r.t $\bar{w}^{\mathrm{SMP}}_{\Pi^n}$ (the blue line in Fig. 1(a)). Inspecting Fig. 1(a), we can see that the GPI policy indeed performs better than the SMP, as Lemma 2 indicates. We note that the blue line (in Fig. 1(a)) does not correspond to the worst case performance of the GPI policy. Instead, we can get a good approximation for it because we have that:
$$\bar{w}^{\mathrm{SMP}}_{\Pi^n} \cdot \psi(\pi^{\mathrm{SMP}}_{\Pi^n}) \le \bar{w}^{\mathrm{GPI}}_{\Pi^n} \cdot \psi(\pi^{\mathrm{GPI}}_{\Pi^n}) \le \bar{w}^{\mathrm{SMP}}_{\Pi^n} \cdot \psi(\pi^{\mathrm{GPI}}_{\Pi^n});$$
i.e., the worst case performance of GPI (in the middle) is guaranteed to be between the green and blue lines in Fig. 1(a). This also implies that the upper bound in Lemma 3 does not apply to the blue line.

Figure 1: Experimental results in a 2D grid world. Fig. 1(a) presents the performance of the SMP and GPI w.r.t the worst case reward. Fig. 1(b) compares Algorithm 1 with two baselines, where we show the worst case performance, relative to the upper bound, in a logarithmic scale. Fig. 1(c) visualizes the SFs of the policies in the set and Fig. 1(d) presents trajectories that were taken by different policies.
We also compare our algorithm to two baselines in Fig. 1(b) (for $d = 10$): (i) Orthogonal: at iteration $t$ we train policy $\pi^t$ to maximize the reward $w = e_t$ (a vector of zeroes with a one in the $t$-th coordinate), such that a matrix with the vectors $w$ in its columns forms the identity matrix; (ii) Random: at iteration $t$ we train policy $\pi^t$ to maximize a reward $w$ obtained by sampling a $d$-dimensional vector from a standard Gaussian distribution and normalizing it to have a norm of 1. While all the methods improve as we add policies to the set, Algorithm 1 clearly outperforms the baselines.
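For reference, a minimal sketch of the reward vectors the two baselines train against at iteration $t$ (our illustration):

```python
import numpy as np

def baseline_reward(t, d, kind="orthogonal", rng=np.random):
    """Reward vector used by the baselines at iteration t."""
    if kind == "orthogonal":
        w = np.zeros(d)
        w[t % d] = 1.0                       # e_t, a column of the identity
    else:                                    # "random" baseline
        w = rng.randn(d)
        w /= np.linalg.norm(w)               # normalize to unit norm
    return w
```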
In Fig. 1(c) and Fig. 1(d) we visualize the policies that were discovered by Algorithm 1. Fig. 1(c) presents the SFs of the discovered policies, where each row (color) corresponds to a different policy and the columns correspond to the different features. We do not enumerate the features from 1 to d, but instead we label them with markers that correspond to specific items (the x-axis labels). In Fig. 1(d) we present a trajectory from each policy. We note that both the colors and the markers match between the two figures: the red color corresponds to the same policy in both figures, and the item markers in Fig. 1(d) correspond to the coordinates in the x-axis of Fig. 1(c).
Inspecting the figures we can see that the discovered policies are qualitatively diverse: in Fig. 1(c) we can see that the SFs of the different policies have different weights for different items, and in Fig. 1(d) we can see that the policies visit different states. For example, we can see that the teal policy has a larger weight for the no-item feature (Fig. 1(c)) and visits only no-item states (Fig. 1(d)), and that the green policy has higher weights for the 'Y' and 'X' items (Fig. 1(c)) and indeed visits them (Fig. 1(d)). Finally, in Fig. 2, we compare the performance of our algorithm with that of the baseline methods over a test set of rewards. The only difference is in how we evaluate the algorithms. Specifically, we sampled 500 reward signals from the uniform distribution over the unit ball. Recall that at iteration $t$ each algorithm has a set of policies $\Pi^t$, so we evaluate the SMP defined on this set, $\pi^{\mathrm{SMP}}_{\Pi^t}$, w.r.t each one of the test rewards. Then, for each method, we report the mean value obtained over the test rewards and repeat this procedure for 10 different seeds. Finally, we report the mean and the confidence interval over the seeds. Note that the performance in this experiment will necessarily be better than that in Fig. 1(a), because here we evaluate average performance rather than worst-case performance. Also note that our algorithm was not designed to optimize the performance on this "test set", but to optimize the performance w.r.t the worst case. Therefore it is not necessarily expected to outperform the baselines when measured on this metric. Inspecting Fig. 2(a) we can see that our algorithm (denoted by SMP) performs better than the two baselines. This is a bit surprising for the reasons mentioned above, and suggests that optimizing for the worst case also improves the performance w.r.t the entire distribution (a transfer learning result). At first glance, the relative gain in performance might seem small. Therefore, the baselines might seem preferable to some users due to their simplicity. However, recall that the computational cost of computing the worst case reward is small compared to finding the policy that maximizes it, and therefore the relative cost of the added complexity is low.
The last observation suggests that we should care about how many policies are needed by each method to achieve the same value. We present these results in Fig. 2(b). Note that we use exactly the same data as in Fig. 2(a) but present it in a different manner. Inspecting the figure, we can see that the baselines require more policies to achieve the same value. For example, to achieve a value of 0.07, the SMP required 2 policies, while the baselines needed 4; and for a value of 0.1 the SMP required 4 policies while the baselines needed 7 and 9 respectively.
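A minimal sketch of this test-set evaluation protocol (our illustration; for simplicity we sample uniform directions on the unit sphere rather than the ball):

```python
import numpy as np

def mean_test_value(psis, n_test=500, rng=np.random):
    """Evaluate the SMP on a set of test rewards: for each test w the SMP
    value is max_i psi_i . w, and we report the mean over test rewards."""
    d = psis.shape[1]
    W = rng.randn(n_test, d)
    W /= np.linalg.norm(W, axis=1, keepdims=True)    # unit-norm directions
    return np.mean(np.max(psis @ W.T, axis=0))
```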
DeepMind Control Suite. Next, we conducted a set of experiments in the DM Control Suite (Tassa et al., 2018). We focused on the setup where the agent learns from feature observations corresponding to the positions and velocities of the "body" in the task (pixels were only used for visualization). We considered the following six domains: 'Acrobot', 'Cheetah', 'Fish', 'Hopper', 'Pendulum', and 'Walker'. In each of these tasks we do not use the extrinsic reward that is defined by the task, but instead consider rewards that are linear in the observations (of dimensions 6, 17, 21, 15, 3, and 24, respectively). At each iteration of Algorithm 1 we train a policy for $2 \cdot 10^6$ steps using an actor-critic (specifically, STACX (Zahavy et al., 2020d)) to maximize $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$, add it to the set, and compute a new $\bar{w}^{\mathrm{SMP}}_{\Pi^{t+1}}$. Fig. 3(a) presents the performance of SMP in each iteration w.r.t $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$. As we can see, our algorithm is indeed improving in each iteration. In addition, we present the average number of active policies (Definition 6) in each iteration with bars. All the results are averaged over 10 seeds and presented with 95% Gaussian confidence intervals. Fig. 3(b) presents the SFs of the active policies at the end of training (the seed with the maximum number of active policies was selected). We perform PCA dimensionality reduction such that each point in the scatter plot corresponds to the SFs of one of the active policies. We also report the variance explained by PCA: values close to 1 indicate that the dimensionality reduction has preserved the original variance. Examining the figures we can see that our algorithm is strictly improving (as Theorem 1 predicts) and that the active policies in the set are indeed diverse; we can also see that adding more policies is correlated with improving performance.
Finally, in Fig. 4(a), Fig. 4(b) and Fig. 4(c) we visualize the trajectories of the discovered policies in the Cheetah, Hopper and Walker environments. Although the algorithm was oblivious to the extrinsic reward of the tasks, it was still able to discover different locomotion skills, postures, and even some "yoga poses" (as noted by the label we gave each policy on the left). The other domains (Acrobot, Pendulum and Fish) have simpler bodies and exhibited simpler movement in various directions and velocities; e.g., the Pendulum learned to balance itself up and down. The supplementary material contains videos from all the bodies.
CONCLUSION
We have presented an algorithm that incrementally builds a set of policies to solve a collection of tasks defined as linear combinations of known features. The policies returned by our algorithm can be composed in multiple ways. We have shown that when the composition is a SMP, its worst-case performance on the set of tasks strictly improves at each iteration of our algorithm. More generally, the performance guarantees we have derived also serve as a lower bound for any composition of policies that qualifies as a SIP. The composition of policies has many applications in RL, for example to build hierarchical agents or to tackle a sequence of tasks in a continual learning scenario. Our algorithm provides a simple and principled way to build a diverse set of policies that can be used in these and potentially many other scenarios.
ACKNOWLEDGEMENTS
We would like to thank Remi Munos and Will Dabney for their comments and feedback on this paper.

A PROOFS

Lemma 1. Let $\pi^{\mathrm{SMP}}_{\Pi^n}$ be a SMP defined as in (5) and let $\pi : S \times \mathcal{W} \to \mathcal{P}(A)$ be any generalized policy. Then, given a set of $n$ policies $\Pi^n$, $\pi$ is a SIP if and only if $v^\pi_{\Pi^n, w} \ge v^{\mathrm{SMP}}_{\Pi^n, w}$ for all $w \in \mathcal{W}$.
Proof. We first show that the fact that $\pi$ is a SIP implies that $v^\pi_{\Pi^n, w} \ge v^{\mathrm{SMP}}_{\Pi^n, w}$ for all $w$. For any $w \in \mathcal{W}$, we have
$$v^\pi_{\Pi^n, w} \ge v^i_w \ \text{for all } \pi^i \in \Pi^n \ (\text{SIP as in Definition 2}) \ \ \Longrightarrow \ \ v^\pi_{\Pi^n, w} \ge \max_{i \in [1, \ldots, n]} v^i_w = v^{\mathrm{SMP}}_{\Pi^n, w}.$$
We now show the converse:
$$v^\pi_{\Pi^n, w} \ge v^{\mathrm{SMP}}_{\Pi^n, w} = \max_{i \in [1, \ldots, n]} v^i_w \ (\text{SMP as in Definition 3}) \ \ge \ v^i_w \ \text{for all } \pi^i \in \Pi^n. \qquad \square$$
Lemma 2. For any reward $w \in \mathcal{W}$ and any set of policies $\Pi^n$, we have that $v^{\mathrm{GPI}}_{\Pi^n, w} \ge v^{\mathrm{SMP}}_{\Pi^n, w}$.

Proof. We know from previous results in the literature (Barreto et al., 2017) that
$$Q^{\mathrm{GPI}}_{\Pi^n}(s, a) \ge Q^\pi(s, a) \ \text{for all } (s, a) \in S \times A \text{ and any } \pi \in \Pi^n.$$
Thus, we have that for all $s \in S$:
$$v^{\mathrm{GPI}}_{\Pi^n, w}(s) = Q^{\mathrm{GPI}}_{\Pi^n, w}(s, \pi^{\mathrm{GPI}}(s)) \ge \max_{\pi \in \Pi^n, a \in A} Q^\pi_w(s, a) \ge \max_{\pi \in \Pi^n} \mathbb{E}_{a \sim \pi}[Q^\pi_w(s, a)] = \max_{\pi \in \Pi^n} v^\pi_w(s) = v^{\mathrm{SMP}}_{\Pi^n, w}(s),$$
where the second inequality is due to Jensen's inequality. Therefore:
$$v^{\mathrm{GPI}}_{\Pi^n, w}(s) \ge v^{\mathrm{SMP}}_{\Pi^n, w}(s) \ \Longrightarrow \ \mathbb{E}_D[v^{\mathrm{GPI}}_{\Pi^n, w}(s)] \ge \mathbb{E}_D[v^{\mathrm{SMP}}_{\Pi^n, w}(s)] \ \Longrightarrow \ v^{\mathrm{GPI}}_{\Pi^n, w} \ge v^{\mathrm{SMP}}_{\Pi^n, w}. \qquad \square$$
Lemma 3 (Impossibility result). For the special case where the SFs associated with any policy are in the simplex, the value of the SMP w.r.t the worst case reward for any set of policies is less than or equal to $-1/\sqrt{d}$. In addition, there exists an MDP where this upper bound is attainable.

Proof. For the impossibility result, we have that
$$\max_{\Pi^n \subseteq \Pi} \min_{w \in B_2} v^{\mathrm{SMP}}_{\Pi^n, w} = \min_{w \in B_2} \max_{\pi \in \Pi} v^\pi_w \tag{9}$$
$$= \min_{w \in B_2} \max_{\pi \in \Pi} \psi(\pi) \cdot w \le \min_{w \in B_2} \max_{\psi \in \Delta^{d-1}} \psi \cdot w \tag{10}$$
$$= \min_{w \in B_2} \max_i w_i \tag{11}$$
$$= -\frac{1}{\sqrt{d}}. \tag{12}$$
The equality in Eq. (9) follows from the fact that $\Pi$ is the set of all possible policies and therefore the largest possible subset (the maximizer of the first maximization). In that case the second maximization (by the SMP) is equivalent to selecting the optimal policy in the MDP. Notice that the order of maximization-minimization here is reversed when compared to AL, i.e., for each reward the SMP chooses the best policy in the MDP, while in AL the reward is chosen to be the worst possible w.r.t any policy. The inequality in Eq. (10) follows from the fact that we increase the size of the optimization set in the inner loop, and the equality in Eq. (11) follows from the fact that a maximizer in the inner loop puts maximal distribution on the largest component of $w$.

Feasibility. To show the feasibility of the upper bound in the previous impossibility result, we give an example of an MDP in which a set of $d$ policies achieves the upper bound. The $d$ policies are chosen such that their stationary distributions form an orthogonal basis:
$$\min_{w \in B_2} v^{\mathrm{SMP}}_{\Pi^n, w} = \min_{w \in B_2} \max_{\psi \in \{\psi^1, \ldots, \psi^d\}} w \cdot \psi = \min_{w \in B_2} \max_{\psi \in \Delta^{d-1}} w \cdot \psi = -\frac{1}{\sqrt{d}}, \tag{13}$$
which follows from the fact that the maximization over the simplex is equivalent to a maximization over pure strategies. $\square$
Lemma 4 (Reformulation of the worst-case reward for an SMP). Let $\{\psi^i\}_{i=1}^n$ be $n$ successor feature vectors. Let $w^\star$ be the adversarial reward w.r.t the SMP defined given these successor features. That is, $w^\star$ is the solution to
$$\arg\min_w \max_{i \in [1, \ldots, n]} \{w \cdot \psi^1, \ldots, w \cdot \psi^n\} \quad \text{s.t. } \|w\|_2 - 1 \le 0. \tag{14}$$
Let $w^\star_i$ be the solution to the following problem for $i \in [1, \ldots, n]$:
$$\arg\min_w \ w \cdot \psi^i \quad \text{s.t. } \|w\|_2 - 1 \le 0, \quad w \cdot (\psi^j - \psi^i) \le 0. \tag{15}$$
Then, $w^\star = \arg\min_i w^\star_i$.

Proof. For any solution $w^\star$ to Eq. (8) there is some policy $i$ in the set that is one of its maximizers. Since it is the maximizer w.r.t $w^\star$, its value w.r.t $w^\star$ is larger than or equal to that of any other policy in the set. Since we are checking the solution among all $i \in [1, \ldots, n]$, one of them must be the solution. $\square$
Theorem 2 (Sufficiency of active policies). For any set of policies $\Pi^n$, $\pi^{\mathrm{SMP}}_{\Pi_a(\Pi^n)}$ achieves the same value w.r.t the worst case reward as $\pi^{\mathrm{SMP}}_{\Pi^n}$, that is, $\bar{v}^{\mathrm{SMP}}_{\Pi^n} = \bar{v}^{\mathrm{SMP}}_{\Pi_a(\Pi^n)}$.

Proof. Let $\Pi^n = \{\pi^i\}_{i=1}^n$. Denote by $J$ a subset of the indices $[1, \ldots, n]$ that corresponds to the indices of the active policies, such that $\Pi_a(\Pi^n) = \{\pi^j\}_{j \in J}$. We can rewrite problem Eq. (14) as follows:
$$\text{minimize } \gamma \quad \text{s.t. } \gamma \ge w \cdot \psi^i, \ i = 1, \ldots, n, \quad \|w\|_2 \le 1. \tag{16}$$
Let $(\gamma', w')$ be any optimal points. The inactive policies $i \notin J$ satisfy $\gamma' > w' \cdot \psi^i$. Since these constraints are not binding, we can drop them from the formulation and maintain the same optimal objective value, i.e.,
$$\text{minimize } \gamma \quad \text{s.t. } \gamma \ge w \cdot \psi^j, \ j \in J, \quad \|w\|_2 \le 1 \tag{17}$$
has the same optimal objective value, $\bar{v}^{\mathrm{SMP}}_{\Pi^n}$, as the full problem. This in turn can be rewritten as
$$\text{minimize } \max_{j \in J} w \cdot \psi^j \quad \text{s.t. } \|w\|_2 \le 1, \tag{18}$$
with optimal value $\bar{v}^{\mathrm{SMP}}_{\Pi_a(\Pi^n)}$, which is therefore equal to $\bar{v}^{\mathrm{SMP}}_{\Pi^n}$. $\square$
Lemma 5 ($\kappa$ is binding). At any solution of Eq. (17), the norm constraint is binding: $\|w\|_2 = 1$ and $\kappa > 0$.

Proof. Denote by $\dot{w}$ a possible solution where the constraint $\|\dot{w}\|_2 \le 1$ is not binding, i.e., $\|\dot{w}\|_2 < 1$ and $\dot{\kappa} = 0$. In addition, denote the primal objective for $\dot{w}$ by $\dot{v} = \max_{i \in [1, \ldots, n]} \{\dot{w} \cdot \psi^i\}$. To prove the lemma, we inspect two cases: (i) $\dot{v} \ge 0$ and (ii) $\dot{v} < 0$. For each of these two cases we will show that there exists another feasible solution $\tilde{w}$ that achieves a lower value $\tilde{v}$ for the primal objective ($\tilde{v} < \dot{v}$), and therefore $\dot{w}$ is not the minimizer.

For the first case, $\dot{v} \ge 0$, consider the vector $\tilde{w} = (-1, -1, \ldots, -1)/\sqrt{d}$. $\tilde{w}$ is a feasible solution to the problem, since $\|\tilde{w}\|_2 = 1$. Since all the SFs have positive coordinates, we have that if they are not all exactly 0, then the primal objective evaluated at $\tilde{w}$ is strictly negative: $\max_{i \in [1, \ldots, n]} \{\tilde{w} \cdot \psi^1, \ldots, \tilde{w} \cdot \psi^n\} < 0$.

We now consider the second case, $\dot{v} < 0$. Notice that multiplying $\dot{w}$ by a positive constant $c$ would not change the maximizer, i.e., $\arg\max_{i \in [1, \ldots, n]} \{c\dot{w} \cdot \psi^i\} = \arg\max_{i \in [1, \ldots, n]} \{\dot{w} \cdot \psi^i\}$. Since $\dot{v} < 0$, it means that $\dot{w}/\|\dot{w}\|$ (i.e., $c = 1/\|\dot{w}\|$) is a feasible solution and a better minimizer than $\dot{w}$. Therefore $\dot{w}$ is not the minimizer.

We conclude that the constraint $\kappa$ is always binding, i.e., $\|w\|_2 = 1$ and $\kappa > 0$. $\square$
Theorem 1 (Strict improvement). Let $\Pi^1, \ldots, \Pi^t$ be the sets of policies constructed by Algorithm 1. We have that the worst-case performance of the SMP induced by these sets is strictly improving in each iteration, that is: $\bar{v}^{\mathrm{SMP}}_{\Pi^{t+1}} > \bar{v}^{\mathrm{SMP}}_{\Pi^t}$. Furthermore, when the algorithm stops, there does not exist a single policy $\pi^{t+1}$ such that adding it to $\Pi^t$ will result in improvement: $\nexists\, \pi^{t+1} \in \Pi$ s.t. $\bar{v}^{\mathrm{SMP}}_{\Pi^t + \{\pi\}} > \bar{v}^{\mathrm{SMP}}_{\Pi^t}$.

Proof. We have that
$$\bar{v}^{\mathrm{SMP}}_{\Pi^t} = \min_{w \in B_2} \max_{\psi \in \Psi^t} \psi \cdot w \le \max_{\psi \in \Psi^t} \psi \cdot \bar{w}^{\mathrm{SMP}}_{\Pi^{t+1}} \le \max_{\psi \in \Psi^{t+1}} \psi \cdot \bar{w}^{\mathrm{SMP}}_{\Pi^{t+1}} = \bar{v}^{\mathrm{SMP}}_{\Pi^{t+1}}. \tag{19}$$
The first inequality is true because we replace the minimization over $w$ with $\bar{w}^{\mathrm{SMP}}_{\Pi^{t+1}}$, and the second inequality is true because we add a new policy to the set. Thus, we will focus on showing that the first inequality is strict. We do it in two steps. In the first step, we will show that the problem $\min_{w \in B_2} \max_{\psi \in \Psi^t} \psi \cdot w$ has a unique solution $w^t$. Thus, for the first inequality to hold with equality it must be that $\bar{w}^{\mathrm{SMP}}_{\Pi^{t+1}} = \bar{w}^{\mathrm{SMP}}_{\Pi^t}$. However, we know that, since the algorithm did not stop, $\psi^{t+1} \cdot \bar{w}^{\mathrm{SMP}}_{\Pi^t} > \bar{v}^{\mathrm{SMP}}_{\Pi^t}$, hence a contradiction.

We will now show that $\min_{w \in B_2} \max_{\psi \in \Psi^t} \psi \cdot w$ has a unique solution. Before we begin, we refer the reader to Lemma 4 and Theorem 2, where we reformulate the problem to a form that is simpler to analyze. We begin by looking at the partial Lagrangian of Eq. (17):
$$L(w, \gamma, \kappa, \lambda) = \gamma + \sum_{j \in J} \lambda_j (\psi^j \cdot w - \gamma) + \kappa(\|w\|_2 - 1).$$
The variable $\kappa \ge 0$ is associated with the constraint $\|w\|_2 \le 1$. Denote by $(\lambda^\star, \kappa^\star)$ any optimal dual variables and note that by complementary slackness we know that either $\kappa^\star > 0$ and $\|w\|_2 = 1$, or $\kappa^\star = 0$ and $\|w\|_2 < 1$. Lemma 5 above guarantees that the constraint is in fact binding: only solutions with $\kappa^\star > 0$ and $\|w\|_2 = 1$ are possible. Notice that this is correct due to the fact that the SFs have positive coordinates and not all of them are 0 (as in our problem formulation).

Consequently we focus on the case where $\kappa^\star > 0$, under which the Lagrangian is strongly convex in $w$, and therefore the problem $\min_{w, \gamma} L(w, \gamma, \lambda^\star, \kappa^\star)$ has a unique solution. Every optimizer of the original problem must also minimize the Lagrangian evaluated at an optimal dual value, and since this minimizer is unique, it implies that the minimizer of the original problem is unique (Boyd et al., 2004, Sect. 5.5.5).

For the second part of the proof, notice that if the new policy $\pi^{t+1}$ does not achieve better reward w.r.t $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$ than the policies in $\Pi^t$, then we have that:
$$\bar{v}^{\mathrm{SMP}}_{\Pi^{t+1}} = \min_{w \in B_2} \max_{\pi \in \Pi^{t+1}} \psi(\pi) \cdot w \le \max_{\pi \in \Pi^{t+1}} \psi(\pi) \cdot \bar{w}^{\mathrm{SMP}}_{\Pi^t} = \max_{\pi \in \Pi^t} \psi(\pi) \cdot \bar{w}^{\mathrm{SMP}}_{\Pi^t} = \bar{v}^{\mathrm{SMP}}_{\Pi^t};$$
thus, it is necessary that the policy $\pi^{t+1}$ achieves better reward w.r.t $\bar{w}^{\mathrm{SMP}}_{\Pi^t}$ to guarantee strict improvement. $\square$
B AL

In AL there is no reward signal, and the goal is to observe and mimic an expert. The literature on AL is quite vast and dates back to the work of Abbeel & Ng (2004), who proposed a novel framework for AL. In this setting, an expert demonstrates a set of trajectories that are used to estimate the SFs of its policy $\pi_E$, denoted by $\psi_E$. The goal is to find a policy $\pi$ whose SFs are close to this estimate, and hence will have a similar return with respect to any weight vector $w$, given by
$$\arg\max_\pi \min_{w \in B_2} w \cdot (\psi^\pi - \psi_E) = \arg\max_\pi -\|\psi^\pi - \psi_E\| = \arg\min_\pi \|\psi^\pi - \psi_E\|. \tag{20}$$
The projection algorithm (Abbeel & Ng, 2004) solves this problem in the following manner. The algorithm starts with an arbitrary policy $\pi^0$ and computes its feature expectation $\psi^0$. At step $t$, the reward function is defined using the weight vector $w_t = \psi_E - \bar{\psi}_{t-1}$ and the algorithm finds a policy $\pi^t$ that maximizes it, where $\bar{\psi}_t$ is a convex combination of the SFs of previous (deterministic) policies, $\bar{\psi}_t = \sum_{j=1}^t \alpha_j \psi^j$. In order to get $\|\bar{\psi}_T - \psi_E\| \le \epsilon$, the authors show that it suffices to run the algorithm for $T = O\left(\frac{k}{(1-\gamma)^2 \epsilon^2} \log \frac{k}{(1-\gamma)\epsilon}\right)$ iterations. Recently, it was shown that this algorithm can be viewed as a Frank-Wolfe method, also known as the Conditional Gradient (CG) algorithm (Zahavy et al., 2020a). The idea is that solving Eq. (20) can be seen as a constrained convex optimization problem, where the optimization variable is the SFs, the objective is convex, and the SFs are constrained to be in the SFs polytope $\mathcal{K}$, given as the following convex set:
Definition 7 (The SFs polytope). $\mathcal{K} = \left\{x : x = \sum_{i=1}^{k+1} a_i \psi^i, \ a_i \ge 0, \ \sum_{i=1}^{k+1} a_i = 1, \ \pi^i \in \Pi\right\}$.
In general, convex optimization problems can be solved via the more familiar projected gradient descent algorithm. This algorithm takes a step in the reverse gradient direction, $z_{t+1} = x_t - \alpha_t \nabla h(x_t)$, and then projects $z_{t+1}$ back into $\mathcal{K}$ to obtain $x_{t+1}$. However, in some cases, computing this projection may be computationally hard. In our case, projecting into $\mathcal{K}$ is challenging since it has $|A|^{|S|}$ vertices (the feature expectations of deterministic policies). Thus, computing the projection explicitly, and then finding a $\pi$ whose feature expectations are close to this projection, is computationally prohibitive.

The CG algorithm (Frank & Wolfe, 1956) (Algorithm 2) avoids this projection by finding a point $y_t \in \mathcal{K}$ that has the largest correlation with the negative gradient. In AL, this step is equivalent to finding a policy whose SFs have the maximal inner product with the current gradient, i.e., solving an MDP whose reward vector $w$ is the negative gradient. This is a standard RL (planning) problem and can be solved efficiently, for example, with policy iteration. We also know that there exists at least one optimal deterministic policy for it and that PI will return a solution that is a deterministic policy (Puterman, 1984).
Algorithm 2 The CG method (Frank & Wolfe, 1956)
1: Input: a convex set $\mathcal{K}$, a convex function $h$, learning rate schedule $\alpha_t$.
2: Initialization: let $x_0 \in \mathcal{K}$.
3: for $t = 1, \ldots, T$ do
4:     $y_t = \arg\max_{y \in \mathcal{K}} -\nabla h(x_{t-1}) \cdot y$
5:     $x_t = (1 - \alpha_t) x_{t-1} + \alpha_t y_t$
6: end for

For smooth functions, CG requires $O(1/\epsilon^2)$ iterations to find an $\epsilon$-optimal solution to Eq. (20). This gives a logarithmic improvement on the result of Abbeel & Ng (2004). In addition, it was shown in Zahavy et al. (2020a) that since the optimization objective is strongly convex, and the constraint set is a polytope, it is possible to use a variant of the CG algorithm known as Away Steps Conditional Gradient (ASCG) (Wolfe, 1970). ASCG attains a linear rate of convergence when the set is a polytope (Guélat & Marcotte, 1986; Garber & Hazan, 2013; Jaggi, 2013), i.e., it converges after $O(\log(1/\epsilon))$ iterations. See Zahavy et al. (2020a) for the exact constants and analysis.
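To make the CG view of AL concrete, here is a minimal Python sketch (our illustration) of Algorithm 2 specialized to the objective $h(x) = \|x - \psi_E\|^2$, where the linear oracle in line 4 is an RL solver; `solve_rl_task(w)` is a hypothetical routine returning the SFs of a policy trained on reward $r_w$:

```python
import numpy as np

def cg_apprenticeship(solve_rl_task, psi_E, d, T=100):
    """Conditional-gradient sketch for min_{x in K} ||x - psi_E||^2.
    The linear oracle at step t maximizes -grad . y over the SFs polytope,
    which amounts to solving an RL task with reward w = -(grad)."""
    x = solve_rl_task(np.random.randn(d))     # any vertex of K to start
    for t in range(1, T + 1):
        grad = 2.0 * (x - psi_E)              # gradient of the objective
        y = solve_rl_task(-grad)              # arg max_{y in K} -grad . y
        alpha = 2.0 / (t + 2)                 # standard CG step-size schedule
        x = (1 - alpha) * x + alpha * y       # convex combination stays in K
    return x
```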
There are some interesting relations between our problem and AL with "no expert", that is, solving
$$\arg\min_\pi \|\psi^\pi\|. \tag{21}$$
In terms of optimization, this problem is equivalent to Eq. (20), and the same algorithms can be used to solve them.

Both AL with "no expert" and our algorithm can be used to pursue the same goal: achieving good performance w.r.t the worst case reward. However, AL is concerned with finding a single policy, while our algorithm is explicitly designed to find a set of policies. There is no direct connection between the policies discovered by following these two processes. This is because the intrinsic rewards that are maximized by each algorithm are essentially different. Another way to think about this is that, since the policy returned by AL is a mixed policy, its goal is to return a set of policies that are similar to the expert, but not diverse from one another. From a geometric perspective, the policies returned by AL are the nodes of the face in the polytope that is closest to the demonstrated SFs. Even more concretely, if the SFs of the expert are given exactly (instead of being approximated from trajectories), then the AL algorithm would return a single vertex (policy). Finally, while a mixed policy can be viewed as a composition of policies, it is not a SIP. Therefore, it does not encourage diversity in the set.
C REGULARIZING W

In this section we experimented with constraining the set of rewards to include only vectors $w$ whose mean is zero. Since we are using CVXPY (Diamond & Boyd, 2016) to optimize for $w$ (Eq. (8)), this requires adding a simple constraint $\sum_{i=1}^d w_i = 0$ to the minimization problem. Note that constraining the mean to be zero does not change the overall problem qualitatively, but it does potentially increase the difference in the relative magnitude of the elements in $w$. Since it makes the resulting $w$'s have more zero elements, i.e., it makes the $w$'s more sparse, it can also be viewed as a method to regularize the worst case reward. Adding this constraint increased the number of $w$'s (and corresponding policies) that made a difference to the optimal value (Definition 5). To see this, note that the green curve in Fig. 5(a) converges to the optimal value in 2 iterations, while the green curve in Fig. 1(a) does so in 3 iterations. As a result, the policies discovered by the algorithm are more diverse. To see this, observe that the SFs in Fig. 5(b) are more focused on specific items than the SFs in Fig. 1(c). In Fig. 5(c) and Fig. 5(d) we verified that this increased diversity continues to hold when we increase the feature dimension $d$.
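In code, the regularized variant only changes the Eq. (8) solver by one constraint; a minimal CVXPY sketch, with `psis` and `d` assumed from the earlier sketch:

```python
import cvxpy as cp

# Assumes psis (an (n, d) numpy array of SFs) and d are defined as before.
w = cp.Variable(d)
constraints = [cp.norm(w, 2) <= 1,
               cp.sum(w) == 0]               # the added zero-mean constraint
problem = cp.Problem(cp.Minimize(cp.max(psis @ w)), constraints)
problem.solve()
```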
Figure 2: Experimental results with regularized w.

Figure 3: Experimental results in Deepmind Control Suite.

Figure 4: Experimental results in Deepmind Control Suite.

Figure 5: Experimental results with regularized w.
Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999.
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018.
Philip Wolfe. Convergence theory in nonlinear programming. Integer and nonlinear programming, pp. 1-36, 1970.
Huan Xu and Shie Mannor. Robustness and generalization. Machine Learning, 86(3):391-423, 2012.
Huan Xu, Constantine Caramanis, and Shie Mannor. Robust regression and lasso. In Advances in Neural Information Processing Systems, pp. 1801-1808, 2009.
Tom Zahavy, Alon Cohen, Haim Kaplan, and Yishay Mansour. Apprenticeship learning via Frank-Wolfe. In AAAI, 2020a.
Tom Zahavy, Alon Cohen, Haim Kaplan, and Yishay Mansour. Average reward reinforcement learning with unknown mixing times. In The Conference on Uncertainty in Artificial Intelligence (UAI), 2020b.
Tom Zahavy, Avinatan Hasidim, Haim Kaplan, and Yishay Mansour. Planning in hierarchical reinforcement learning: Guarantees for using local policies. In Algorithmic Learning Theory, pp. 906-934, 2020c.
Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado van Hasselt, David Silver, and Satinder Singh. A self-tuning actor-critic algorithm. In Advances in Neural Information Processing Systems, 2020d.
While we focus on the most common, discounted RL criterion, all of our results hold in the finite-horizon and average-reward criteria as well (see, for example, Puterman (1984)). Concretely, in these scenarios there exist normalizations for the SFs whose effect is equivalent to that of the multiplication by 1 − γ. In the finite-horizon case we can simply multiply the SFs by 1/H. In the average-reward case, there is no multiplication (Zahavy et al., 2020b) and the value function is measured under the stationary distribution (instead of D).
Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 1. ACM, 2004.
Figure panels: (a) SMP vs. GPI, d=5; (b) SFs, d=5; (c) SMP vs. GPI, d=9; (d) SFs, d=9. |
49,907,212 | Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors | We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available. We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and we demonstrate that the current state-of-the-art methods are optimal in a natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors. We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples. The resulting methods use two to four times fewer queries and fail two to five times less often than the current state-of-the-art. (* Equal contribution. The code for reproducing our work is available at https://git.io/blackbox-bandits.) We propose a new approach for generating black-box adversarial examples, using bandit optimization in order to exploit prior information about the gradient, which we show is necessary to break through the optimality of current methods. We evaluate our approach on the task of generating black-box adversarial examples, where the methods obtained from integrating two example priors significantly outperform state-of-the-art approaches. Concretely, in this work: 1. We formalize the gradient estimation problem as the central problem in the context of query-efficient black-box attacks. We then show how the resulting framework unifies the previous attack methodology. We prove that the least squares method, a classic primitive in signal processing, not only constitutes an optimal solution to the general gradient estimation problem but also is essentially equivalent to the current-best black-box attack methods. 2. We demonstrate that, despite this seeming optimality of these methods, we can still improve upon them by exploiting an aspect of the problem that has not been considered previously: the priors we have on the distribution of the gradient. We identify two example classes of such priors, and show that they indeed lead to better predictors of the gradient. 3. Finally, we develop a bandit optimization framework for generating black-box adversarial examples which allows for the seamless integration of priors. To demonstrate its effectiveness, we show that leveraging the two aforementioned priors yields black-box attacks that are 2-5 times more query efficient and less failure-prone than the state of the art. | [
604334,
3488815,
6706414,
17707860,
9059612
] | Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
Andrew Ilyas [email protected]
MIT

Logan Engstrom [email protected]
MIT

Aleksander Mądry [email protected]
MIT
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available. We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and we demonstrate that the current state-of-the-art methods are optimal in a natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors. We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples. The resulting methods use two to four times fewer queries and fail two to five times less often than the current state-of-the-art. (* Equal contribution. The code for reproducing our work is available at https://git.io/blackbox-bandits.) We propose a new approach for generating black-box adversarial examples, using bandit optimization in order to exploit prior information about the gradient, which we show is necessary to break through the optimality of current methods. We evaluate our approach on the task of generating black-box adversarial examples, where the methods obtained from integrating two example priors significantly outperform state-of-the-art approaches. Concretely, in this work: 1. We formalize the gradient estimation problem as the central problem in the context of query-efficient black-box attacks. We then show how the resulting framework unifies the previous attack methodology. We prove that the least squares method, a classic primitive in signal processing, not only constitutes an optimal solution to the general gradient estimation problem but also is essentially equivalent to the current-best black-box attack methods. 2. We demonstrate that, despite this seeming optimality of these methods, we can still improve upon them by exploiting an aspect of the problem that has not been considered previously: the priors we have on the distribution of the gradient. We identify two example classes of such priors, and show that they indeed lead to better predictors of the gradient. 3. Finally, we develop a bandit optimization framework for generating black-box adversarial examples which allows for the seamless integration of priors. To demonstrate its effectiveness, we show that leveraging the two aforementioned priors yields black-box attacks that are 2-5 times more query efficient and less failure-prone than the state of the art.
Introduction
Recent research has shown that neural networks exhibit significant vulnerability to adversarial examples, or slightly perturbed inputs designed to fool the network prediction. This vulnerability is present in a wide range of settings, from situations in which inputs are fed directly to classifiers [SZS+14, CMV+16] to highly variable real-world environments [KGB16, AEIK18]. Researchers have developed a host of methods to construct such attacks [GSS15, MFF, CW17, MMS+18], most of which correspond to first-order (i.e., gradient based) methods. These attacks turn out to be highly effective: in many cases, only a few gradient steps suffice to construct an adversarial perturbation.
A significant shortcoming of many of these attacks, however, is that they fundamentally rely on the white-box threat model. That is, they crucially require direct access to the gradient of the classification loss of the attacked network. In many real-world situations, expecting this kind of complete access is not realistic. In such settings, an attacker can only issue classification queries to the targeted network, which corresponds to a more restrictive black-box threat model.

Recent work [CZS+17, BHLS17, IEA+18] provides a number of attacks for this threat model. Chen et al. [CZS+17] show how to use a basic primitive of zeroth-order optimization, the finite difference method, to estimate the gradient from classification queries and then use it (in addition to a number of optimizations) to mount a gradient-based attack. The method indeed successfully constructs adversarial perturbations. It comes, however, at the cost of introducing a significant overhead in terms of the number of queries needed. For instance, attacking an ImageNet [RDS+15] classifier requires hundreds of thousands of queries. Subsequent work [IEA+18] improves this dependence significantly, but still falls short of fully mitigating this issue (see Section 4.1 for a more detailed analysis).
Our contributions
We revisit zeroth-order optimization in the context of adversarial example generation, both from an empirical and a theoretical perspective. We propose a new approach for generating black-box adversarial examples, using bandit optimization in order to exploit prior information about the gradient, which we show is necessary to break through the optimality of current methods.

Table 1: Summary of effectiveness of ℓ_2 and ℓ_∞ ImageNet attacks on Inception v3 using NES, bandits with time prior (Bandits_T), and bandits with time and data-dependent priors (Bandits_TD). Note that in the first column, the average number of queries is calculated only over successful attacks, and we enforce a query limit of 10,000 queries. For purposes of direct comparison, the last column calculates the average number of queries used for only the images that NES (previous SOTA) was successful on. Our most powerful attack uses 2-4 times fewer queries, and fails 2-5 times less often.
Adversarial examples are natural inputs to a machine learning system that have been carefully perturbed in order to induce misbehaviour of the system, under a constraint on the magnitude of the perturbation (under some metric). For image classifiers, this misbehaviour can be either classification as a specific class other than the original one (a targeted attack) or any misclassification (an untargeted attack). For simplicity, and to keep the presentation of the overarching framework focused, in this paper we restrict our attention to the untargeted case. Both our algorithms and the whole framework can, however, be easily adapted to the targeted setting. Also, we consider the most standard threat model, in which adversarial perturbations must have ℓ_p-norm, for some fixed p, less than some ε_p.
First-order adversarial attacks
Suppose that we have some classifier C(x) with a corresponding classification loss function L(x, y), where x is some input and y its corresponding label. In order to generate a misclassified input from some input-label pair (x, y), we want to find an adversarial example x′ which maximizes L(x′, y) but still remains ε_p-close to the original input. We can thus formulate our adversarial attack problem as the following constrained optimization task:

x′ = arg max_{x′ : ||x′ − x||_p ≤ ε_p} L(x′, y)
First order methods tend to be very successful at solving the problem despite its non-convexity [GSS15, CW17, MMS+18]. A first order method used as the backbone of some of the most powerful white-box adversarial attacks for ℓ_p-bounded adversaries is projected gradient descent (PGD). This iterative method, given some input x and its correct label y, computes a perturbed input x_k by applying k steps of the following update (with x_0 = x):

x_l = Π_{B_p(x, ε)}( x_{l−1} + η s_l ),  with  s_l = Π_{∂B_p(0,1)}( ∇_x L(x_{l−1}, y) )   (1)

Here, Π_S is the projection onto the set S, B_p(x′, ε′) is the ℓ_p ball of radius ε′ around x′, η is the step size, and ∂U is the boundary of a set U. Also, as is standard in continuous optimization, we make s_l be the projection of the gradient ∇_x L(x_{l−1}, y) at x_{l−1} onto the unit ℓ_p ball. This way we ensure that s_l corresponds to the unit ℓ_p-norm vector that has the largest inner product with ∇_x L(x_{l−1}, y). (Note that, in the case of the ℓ_2-norm, s_l is simply the normalized gradient, but in the case of, e.g., the ℓ_∞-norm, s_l corresponds to the sign vector sgn(∇_x L(x_{l−1}, y)) of the gradient.) So, intuitively, the PGD update perturbs the input in the direction that (locally) increases the loss the most. Observe that due to the projection in (1), x_k is always a valid perturbation of x, as desired.
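As a concrete illustration, the following is a minimal NumPy sketch of the ℓ_∞ instantiation of update (1); grad_fn is an assumed callable returning ∇_x L(x, y), and the final clip to [0, 1] reflects valid image inputs.

```python
import numpy as np

def pgd_linf(x, y, grad_fn, eps=0.05, eta=0.01, k=10):
    x_adv = x.copy()
    for _ in range(k):
        s = np.sign(grad_fn(x_adv, y))            # s_l: projection of the gradient onto the unit l_inf ball
        x_adv = x_adv + eta * s                   # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back onto B_inf(x, eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image
    return x_adv
```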
Black-box adversarial attacks
The projected gradient descent (PGD) method described above is designed to be used in the context of so-called white-box attacks, that is, in the setting where the adversary has full access to the gradient ∇_x L(x, y) of the loss function of the attacked model. In many practical scenarios, however, this kind of access is not available; in the corresponding, more realistic black-box setting, the adversary has access only to an oracle that returns, for a given input (x, y), the value of the loss L(x, y).
One might expect that PGD is thus not useful in such a black-box setting. It turns out, however, that this intuition is incorrect. Specifically, one can still estimate the gradient using only such value queries. (In fact, this kind of estimator is the backbone of so-called zeroth-order optimization frameworks [Spa05].) The most canonical primitive in this context is the finite difference method. This method estimates the directional derivative D_v f(x) = ⟨∇_x f(x), v⟩ of some function f at a point x in the direction of a vector v as

D_v f(x) = ⟨∇_x f(x), v⟩ ≈ ( f(x + δv) − f(x) ) / δ.   (2)
Here, the step size δ > 0 governs the quality of the gradient estimate. Smaller δ gives more accurate estimates but also decreases reliability, due to precision and noise issues. Consequently, in practice, δ is a tunable parameter. Now, we can just use finite differences to construct an estimate of the gradient. To this end, one can find the d components of the gradient by estimating the inner products of the gradient with all the standard basis vectors e_1, …, e_d:

∇̂_x L(x, y) = Σ_{k=1}^{d} e_k ( L(x + δe_k, y) − L(x, y) ) / δ ≈ Σ_{k=1}^{d} e_k ⟨∇_x L(x, y), e_k⟩   (3)
We can then easily implement the PGD attack (cf. (1)) using this estimator:

x_l = Π_{B_p(x, ε)}( x_{l−1} + η ŝ_l ),  with  ŝ_l = Π_{∂B_p(0,1)}( ∇̂_x L(x_{l−1}, y) )   (4)
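The following minimal NumPy sketch implements the coordinate-wise estimator (3); loss_fn is an assumed black-box value oracle for L(x, y), and the d + 1 queries it issues make explicit why this basic approach is so expensive.

```python
import numpy as np

def fd_gradient(x, y, loss_fn, delta=1e-3):
    d = x.size
    base = loss_fn(x, y)                     # one shared query for L(x, y)
    grad = np.zeros(d)
    for k in range(d):                       # one additional query per coordinate
        e_k = np.zeros(d)
        e_k[k] = 1.0
        grad[k] = (loss_fn(x + delta * e_k, y) - base) / delta
    return grad
```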
Indeed, [CZS+17] were the first to use finite difference methods in this basic form to power a PGD-based adversarial attack in the black-box setting. This basic attack was shown to be successful but, since its query complexity is proportional to the dimension, its resulting query complexity was prohibitively large.

Figure 1: The fraction of correctly estimated coordinates of sgn(∇_x L(x, y)) required to successfully execute the single-step PGD (also known as FGSM) attack, with ε = 0.05. In the experiment, for each k, the top k percent of the signs of the coordinates, chosen either by magnitude (top-k) or randomly (random-k), are set correctly, and the rest are set to +1 or −1 at random. The adversariality rate is the portion of 1,000 random ImageNet images misclassified after one FGSM step. Observe that, for example, estimating only 20% of the coordinates correctly leads to misclassification in the case of more than 60% of images.
Black-box attacks with imperfect gradient estimators
In light of the above discussion, one might wonder if the algorithm in (4) can be made more query-efficient. A natural idea here would be to avoid fully estimating the gradient and to rely instead only on imperfect estimators of it. This gives rise to the following question: how accurate of a gradient estimate is necessary to execute a successful PGD attack?
We examine this question first in the simplest possible setting: one in which we only take a single PGD step (i.e., the case of k = 1). Previous work [GSS15] indicates that such an attack can already be quite powerful. So, we study how the effectiveness of this attack varies with gradient estimator accuracy. Our experiments, shown in Figure 1, suggest that it is feasible to generate adversarial examples without estimating correctly even most of the coordinates of the gradient. For example, in the context of ∞ attacks, setting a randomly selected 20% of the coordinates in the gradient to match the true gradient (and making the remaining coordinates have random sign) is sufficient to fool the classifier on more than 60% images with single-step PGD. Our experiments thus demonstrate that an adversary is likely to be able to cause a misclassification by performing the iterated PGD attack, even when driven by a gradient estimate that is largely imperfect.
The gradient estimation problem
The above discussion makes it clear that successful attacks do not require a perfect gradient estimation, provided this estimate is suitably constructed. It is still unclear, however, how to efficiently find this kind of imperfect but helpful estimator. Continuous optimization methodology suggests that the key characteristic needed from our estimator is for it to have a sufficiently large inner product with the actual gradient. We thus capture this challenge as the following gradient estimation problem:
Definition 1 (Gradient estimation problem). For an input/label pair (x, y) and a loss function L, let g* = ∇_x L(x, y) be the gradient of L at (x, y). Then the goal of the gradient estimation problem is to find a unit vector ĝ maximizing the inner product

E[ ĝ^T g* ],   (5)

from a limited number of (possibly adaptive) function value queries L(x′, y′). (The expectation here is taken over the randomness of the estimation algorithm.)
One useful perspective on the above gradient estimation problem stems from casting the recovery of g* in (5) as an underdetermined vector estimation task. That is, one can view each execution of the finite difference method (see (2)) as computing an inner product query in which we obtain the value of the inner product of g* and some chosen direction vector A_i. Now, if we execute k such queries, and k < d (which is the regime we are interested in), the information acquired in this process can be expressed as the following (underdetermined) linear regression problem Ag* = y, where the rows of the matrix A correspond to the queries A_1, …, A_k and the entries of the vector y give us the corresponding inner product values.
Relation to compressive sensing. The view of the gradient estimation problem we developed bears striking similarity to the compressive sensing setting [FR13]. Thus, one might wonder if the toolkit of that area could be applied here. Compressive sensing crucially requires, however, certain sparsity structure in the estimated signal (here, in the gradient g*), and, to our knowledge, loss gradients do not exhibit such a structure. (We discuss this further in Appendix B.)
The least squares method. In light of this, we turn our attention to another classical signal-processing method: norm-minimizing ℓ_2 least squares estimation. This method approaches the estimation problem posed in (5) by casting it as an underdetermined linear regression problem of the form Ag* = y, where we can choose the matrix A (the rows of A correspond to inner product queries with g*). Then, it obtains the solution ĝ to the regression problem by solving:

min_ĝ ||ĝ||_2  s.t.  Aĝ = y.   (6)
A reasonable choice for A (via [JL84] and related results) is the distance-preserving random Gaussian projection matrix, i.e. A_ij normally distributed.
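A minimal NumPy sketch of the estimator (6) with such a Gaussian A is below; the true gradient is simulated here purely to show the quality of the recovered direction, whereas in an attack the inner products y would come from finite-difference queries.

```python
import numpy as np

d, k = 1000, 50
g_star = np.random.randn(d)                   # unknown gradient (simulated for this example only)
A = np.random.randn(k, d) / np.sqrt(d)        # rows A_i ~ N(0, I/d)
y = A @ g_star                                # k inner-product queries

g_hat = A.T @ np.linalg.solve(A @ A.T, y)     # minimum-norm solution A^T (A A^T)^{-1} y
cosine = g_hat @ g_star / (np.linalg.norm(g_hat) * np.linalg.norm(g_star))
print(cosine)                                 # alignment of the estimate with g*
```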
The resulting algorithm turns out to yield solutions that are approximately those given by Natural Evolution Strategies (NES), which [IEA+18] previously applied to black-box attacks. In particular, in Appendix A, we prove the following theorem.
Theorem 1 (NES and Least Squares equivalence). Let x̂_NES be the Gaussian k-query NES estimator of a d-dimensional gradient g and let x̂_LSQ be the minimal-norm k-query least-squares estimator of g. For any p > 0, with probability at least 1 − p we have that

⟨x̂_LSQ, g⟩ − ⟨x̂_NES, g⟩ ≤ O( √( (k/d) · log³(k/p) ) ) ||g||_2.
Note that when we work in the underdetermined setting, i.e., when k ≪ d (which is the setting we are interested in), the right-hand-side bound becomes vanishingly small. Thus, the equivalence indeed holds. In fact, using the precise statement (given and proved in Appendix A), we can show that Theorem 1 provides us with a non-vacuous equivalence bound. Further, it turns out that one can exploit this equivalence to prove that the algorithm proposed in [IEA+18] is not only natural but optimal, as the least-squares estimate is an information-theoretically optimal gradient estimate in the regime where k = d, and an error-minimizing estimator in the regime where k ≪ d.
Theorem 2 (Least-squares optimality (Proof in Appendix A)). For a linear regression problem y = Ag with known A and y, unknown g, and isotropic Gaussian errors, the least-squares estimator is finite-sample efficient, i.e. the minimum-variance unbiased (MVU) estimator of the latent vector g.
Theorem 3 (Least-squares optimality (Proof in [Mei94])). In the underdetermined setting, i.e. when k ≪ d, the minimum-norm least squares estimate (x̂_LSQ in Theorem 1) is the minimum-variance (and thus minimum-error, since bias is fixed) estimator with no empirical loss.
Black-box adversarial attacks with priors
The optimality of least squares strongly suggests that we have reached the limit of query-efficiency of black-box adversarial attacks. But is this really the case? Surprisingly, we show that an improvement is still possible.
The key observation is that the optimality we established of least-squares (and by Theorem 1, the NES approach in [IEA+18]) holds only for the most basic setting of the gradient estimation problem, a setting where we assume that the target gradient is a truly arbitrary and completely unknown vector.
However, in the context we care about this assumption does not hold -there is actually plenty of prior knowledge about the gradient available. Firstly, the input with respect to which we compute the gradient is not arbitrary and exhibits locally predictable structure which is consequently reflected in the gradient. Secondly, when performing iterative gradient attacks (e.g. PGD), the gradients used in successive iterations are likely to be heavily correlated.
The above observations motivate our focus on prior information as an integral element of the gradient estimation problem. Specifically, we enhance Definition 1 by making its objective

E[ ĝ^T g* | I ],   (7)

where I is the prior information available to us.
This change in perspective gives rise to two important questions: does there exist prior information that can be useful to us?, and does there exist an algorithmic way to exploit this information? We show that the answer to both of these questions is affirmative.
Gradient priors
Consider a gradient ∇_x L(x, y) of the loss function corresponding to some input (x, y). Does there exist some kind of prior that can be extracted from the dataset {x_i}, in general, and the input (x, y) in particular, that can be used as a predictor of the gradient? We demonstrate that it is indeed the case, and give two example classes of such priors.
Time-dependent priors. The first class of priors we consider are time-dependent priors, a standard example of which is what we refer to as the "multi-step prior." We find that along the trajectory taken by estimated gradients, successive gradients are in fact heavily correlated. We show this empirically by taking steps along the optimization path generated by running the NES estimator at each point, and plotting the normalized inner product (cosine similarity) between successive gradients, given by

⟨∇_x L(x_t, y), ∇_x L(x_{t+1}, y)⟩ / ( ||∇_x L(x_t, y)||_2 ||∇_x L(x_{t+1}, y)||_2 ),  t ∈ {1, …, T − 1}.   (8)

Figure 2 demonstrates that there indeed is a non-trivial correlation between successive gradients: typically, the gradients of successive steps (using the step size from [IEA+18]) have a cosine similarity of about 0.9. Successive gradients continue to correlate at higher step sizes: Appendix B shows that the trend continues even at step size 4.0 (a typical value for the total perturbation bound ε). This indicates that there indeed is a potential gain from incorporating this correlation into our iterative optimization. To utilize this gain, we intend to use the gradients at time t − 1 as a prior for the gradient at time t, where both the prior and the gradient estimate itself evolve over iterations.
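A minimal sketch of how the quantity in (8) can be computed, assuming grads is a list of gradient (or gradient-estimate) vectors collected along an optimization trajectory:

```python
import numpy as np

def successive_cosines(grads):
    sims = []
    for g1, g2 in zip(grads[:-1], grads[1:]):   # consecutive steps t and t + 1
        sims.append(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))
    return np.array(sims)
```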
Data-dependent priors. We find that the time-dependent prior discussed above is not the only type of prior one can exploit here. Namely, we can also use the structure of the inputs themselves to reduce query complexity (in fact, the existence of such data-dependent priors is what makes machine learning successful in the first place).
In the case of image classification, a simple and heavily exploited example of such a prior stems from the fact that images tend to exhibit spatially local similarity (i.e. pixels that are close together tend to be similar). We find that this similarity also extends to the gradients: specifically, whenever two coordinates (i, j) and (k, l) of ∇_x L(x, y) are close, we expect ∇_x L(x, y)_{ij} ≈ ∇_x L(x, y)_{kl} too. To corroborate and quantify this phenomenon, we compare ∇_x L(x, y) with an average-pooled, or "tiled," version (with "tile length" k) of the same signal. An example of such an average-blurred gradient can be seen in Appendix B. More concretely, we apply to the gradient the mean pooling operation with kernel size (k, k, 1) and stride (k, k, 1), then upscale the spatial dimensions by k. We then measure the cosine similarity between the average-blurred gradient and the gradient itself. Our results, shown in Figure 3, demonstrate that the gradients of images are locally similar enough for average-blurred gradients to maintain relatively high cosine similarity with the actual gradients, even when the tiles are large. Our results suggest that we can reduce the dimensionality of our problem by a factor of k² (for reasonably large k) and still estimate a vector pointing in close to the same direction as the original gradient. This factor, as we show later, leads to significantly improved black-box adversarial attack performance.
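A minimal NumPy sketch of this tiling operation (mean pooling with a (k, k, 1) kernel and stride, followed by spatial upscaling by k); the assumption that the spatial dimensions are divisible by k is ours, made to keep the example short.

```python
import numpy as np

def tile(grad, k):
    h, w, c = grad.shape                                              # (H, W, C) gradient
    pooled = grad.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))  # mean pool over k x k tiles
    return np.repeat(np.repeat(pooled, k, axis=0), k, axis=1)         # upscale back to (H, W, C)

g = np.random.randn(32, 32, 3)
g_tiled = tile(g, 4)
print((g * g_tiled).sum() / (np.linalg.norm(g) * np.linalg.norm(g_tiled)))  # cosine similarity
```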
A framework for gradient estimation with priors
Given the availability of these informative gradient priors, we now need a framework that enables us to easily incorporate them into our construction of black-box adversarial attacks. Our proposed method builds on the framework of bandit optimization, a fundamental tool in online convex optimization [Haz]. In the bandit optimization framework, an agent plays a game that consists of a sequence of rounds. In round t, the agent must choose a valid action, and by playing the action incurs a loss given by a loss function ℓ_t(·) that is unknown to the agent. After playing the action, the agent learns only the loss that the chosen action incurs; the loss function is specific to the round t and may change arbitrarily between rounds. The goal of the agent is to minimize the average loss incurred over all rounds, and the success of the agent is usually quantified by comparing the total loss incurred to that of the best expert in hindsight (the best single-action policy). By the nature of this formulation, the rounds of this game cannot be treated as independent: to perform well, the agent needs to keep track of some latent record that aggregates information learned over a sequence of rounds. This latent record usually takes the form of a vector v_t that is constrained to a specified (convex) set K. As we will see, this aspect of the bandit optimization framework will provide us with a convenient way to incorporate prior information into our gradient prediction.
An overview of gradient estimation with bandits. We can cast the gradient estimation problem as a bandit optimization problem in a fairly direct manner. Specifically, we let the action at each round t be a gradient estimate g_t (based on our latent vector v_t), and the loss ℓ_t correspond to the (negative) inner product between this prediction and the actual gradient. Note that we never have direct access to this loss function ℓ_t, but we are able to evaluate its value on a particular prediction vector g_t via the finite differences method (2) (which is all that the bandit optimization framework requires us to be able to do).

Just as this choice of the loss function ℓ_t allows us to quantify performance on the gradient estimation problem, the latent vector v_t will allow us to algorithmically incorporate prior information into our predictions. Looking at the two example priors we consider, the time-dependent prior will be reflected by carrying over the latent vector between the gradient estimations at different points. Data-dependent priors will be captured by enforcing that our latent vector has a particular structure. For the specific prior we quantify in the preceding section (the data-dependent prior for images), we will simply reduce the dimensionality of the latent vector via average-pooling ("tiling"), removing the need for extra queries to discern components of the gradient that are spatially close.
Implementing gradient estimation in the bandit framework
We now describe our bandit framework for adversarial example generation in more detail. Note that the algorithm is general and can be used to construct black-box adversarial examples where the perturbation is constrained to any convex set (ℓ_p-norm constraints being a special case). We discuss the algorithm in its general form, and then provide versions explicitly applied to the ℓ_2 and ℓ_∞ cases.

As previously mentioned, the latent vector v_t ∈ K serves as a prior on the gradient for the corresponding round t; in fact, we make our prediction g_t be exactly v_t projected onto the appropriate space, and thus we set K to be an extension of the space of valid adversarial perturbations (e.g. R^n for ℓ_2 examples, [−1, 1]^n for ℓ_∞ examples). Our loss function ℓ_t is defined as
ℓ_t(g) = − ⟨∇_x L(x, y), g⟩ / ||g||,   (9)
for a given gradient estimate g, where we access this inner product via finite differences. Here, L(x, y) is the classification loss on an image x with true class y.
The crucial element of our algorithm will thus be the method of updating the latent vector v_t. We adapt here the canonical "reduction from bandit information" [Haz]. Specifically, our update procedure is parametrized by an estimator ∆_t of the gradient ∇_v ℓ_t(v), and a first-order update step A : K × R^{dim(K)} → K, which maps the latent vector v_t and the estimated gradient of ℓ_t with respect to v_t (which we denote ∆_t) to a new latent vector v_{t+1}. The resulting general algorithm is presented as Algorithm 1.
Algorithm 1 Gradient Estimation with Bandit Optimization
1: procedure Bandit-Opt-Loss-Grad-Est(x, y_init)
2:   v_0 ← A(φ)
3:   for each round t = 1, …, T do
4:     // Our loss in round t is ℓ_t(g_t) = −⟨∇_x L(x, y_init), g_t⟩
5:     g_t ← v_{t−1}
6:     ∆_t ← Grad-Est(x, y_init, v_{t−1})   // Estimated gradient of ℓ_t
7:     v_t ← A(v_{t−1}, ∆_t)
8:   ĝ ← v_T
9:   return Π_{∂K}[ĝ]
In our setting, we make the estimator ∆ of the gradient ∇_v ⟨−∇_x L(x, y), v⟩ of the loss be the standard spherical gradient estimator (see [Haz]). We take a two-query estimate of the expectation, and employ antithetic sampling, which results in the estimate being computed as

∆ = ( ℓ_t(v + δu) − ℓ_t(v − δu) ) / δ · u,   (10)

where u is a Gaussian vector sampled from N(0, (1/d) I). The resulting algorithm for calculating the gradient estimate, given the current latent vector v, input x, and initial label y, is Algorithm 2.
Algorithm 2 Single-query spherical estimate of ∇_v ⟨∇_x L(x, y), v⟩
1: procedure Grad-Est(x, y, v)
2:   u ← N(0, (1/d) I)   // Exploration direction
3:   {q_1, q_2} ← {v + δu, v − δu}   // Antithetic samples
4:   ℓ_t(q_1) = −⟨∇_x L(x, y), q_1⟩ ≈ ( L(x, y) − L(x + ε·q_1, y) ) / ε   // Gradient estimation loss at q_1
5:   ℓ_t(q_2) = −⟨∇_x L(x, y), q_2⟩ ≈ ( L(x, y) − L(x + ε·q_2, y) ) / ε   // Gradient estimation loss at q_2
6:   ∆ ← ( ℓ_t(q_1) − ℓ_t(q_2) ) / δ · u
7:   // Note that due to cancellations we can actually evaluate ∆ with only two queries to L
8:   return ∆
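A minimal NumPy sketch of this two-query estimate is below; loss_fn is an assumed black-box oracle for L(x, y), and the names fd_eps and delta are ours for the finite-difference probe and the exploration radius.

```python
import numpy as np

def grad_est(x, y, v, loss_fn, delta=0.01, fd_eps=0.01):
    u = np.random.standard_normal(v.shape) / np.sqrt(v.size)   # u ~ N(0, I/d)
    q1, q2 = v + delta * u, v - delta * u                      # antithetic samples
    # l_t(q1) - l_t(q2): the two L(x, y) terms cancel, so the whole
    # estimate needs only two queries to the loss oracle.
    diff = -(loss_fn(x + fd_eps * q1, y) - loss_fn(x + fd_eps * q2, y)) / fd_eps
    return diff / delta * u
```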
A crucial point here is that the above gradient estimator ∆_t parameterizing the bandit reduction has no direct relation to the "gradient estimation problem" as defined in Section 2.4. It is simply a general mechanism by which we can update the latent vector v_t in bandit optimization. It is the actions g_t (equal to v_t) which provide proposed solutions to the gradient estimation problem from Section 2.4.
The choice of the update rule A tends to be natural once the convex set K is known. For K = R^n, we can simply use gradient ascent:

v_t = A(v_{t−1}, ∆_t) := v_{t−1} + η · ∆_t   (11)

and the exponentiated gradients (EG) update when the constraint is an ℓ_∞ bound (i.e. K = [−1, 1]^n):

p_{t−1} = (v_{t−1} + 1) / 2
p_t = A(v_{t−1}, ∆_t) := (1/Z) p_{t−1} exp(η · ∆_t),  where Z = p_{t−1} exp(η · ∆_t) + (1 − p_{t−1}) exp(−η · ∆_t)
v_t = 2 p_t − 1

(with all operations applied coordinate-wise).
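A minimal NumPy sketch of this per-coordinate EG step, keeping the latent vector inside K = [−1, 1]^n:

```python
import numpy as np

def eg_step(v, delta_t, eta):
    p = (v + 1.0) / 2.0                                  # map [-1, 1] -> [0, 1]
    pos = p * np.exp(eta * delta_t)
    neg = (1.0 - p) * np.exp(-eta * delta_t)
    p = pos / (pos + neg)                                # per-coordinate normalization Z
    return 2.0 * p - 1.0                                 # map back to [-1, 1]
```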
Finally, in order to translate our gradient estimation algorithm into an efficient method for constructing black-box adversarial examples, we interleave our iterative gradient estimation algorithm with an iterative update of the image itself, using the boundary projection of g_t in place of the gradient (cf. (1)). This results in a general, efficient, prior-exploiting algorithm for constructing black-box adversarial examples. The resulting algorithm in the ℓ_2-constrained case is shown in Algorithm 3.

Algorithm 3 Adversarial Example Generation with Bandit Optimization for ℓ_2-norm perturbations
1: procedure Adversarial-Bandit-L2(x_init, y_init)
2:   t ← 1
3:   v_0 ← 0_{1×d}   // If data prior, d < dim(x); v_t (∆_t) up-(down-)sampled before (after) line 8
4:   x_0 ← x_init   // Adversarial image to be constructed
5:   while C(x_t) = y_init do
6:     g_t ← v_{t−1}
7:     x_t ← x_{t−1} + h · g_t / ||g_t||_2   // Boundary projection of g_t, standard PGD: cf. [Rig15]
8:     ∆_t ← Grad-Est(x_{t−1}, y_init, v_{t−1})   // Estimated gradient of ℓ_t
9:     v_t ← v_{t−1} + η · ∆_t
10:    t ← t + 1
11:  return x_{t−1}
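A minimal sketch tying the pieces together in the ℓ_2 case, reusing the grad_est sketch above; predict and loss_fn stand for the assumed black-box classifier and loss oracle and are not real APIs.

```python
import numpy as np

def bandit_attack_l2(x_init, y_init, predict, loss_fn,
                     h=0.5, eta=0.1, max_queries=10_000):
    v = np.zeros_like(x_init)                    # latent vector: the gradient prior
    x = x_init.copy()
    for _ in range(max_queries // 2):            # two loss queries per round
        if predict(x) != y_init:                 # success: the image is misclassified
            return x
        g = v                                    # g_t <- v_{t-1}
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x + h * g / norm                 # PGD step with the boundary projection of g_t
        v = v + eta * grad_est(x, y_init, v, loss_fn)   # bandit update of the latent vector
    return x
```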
Experiments and evaluation
We evaluate our bandit approach described in Section 3 and the natural evolution strategies (NES) approach of [IEA+18] on their effectiveness in generating untargeted adversarial examples. We consider both the ℓ_2 and ℓ_∞ threat models on the ImageNet [RDS+15] dataset, in terms of success rate and query complexity. We give results for attacks on the Inception-v3, ResNet-50, and VGG16 classifiers. We further investigate loss and gradient-estimate quality over the optimization trajectory in each method.

In evaluating our approach, we test both the bandit approach with the time prior (Bandits_T) and the bandit approach with the given examples of both the data and time priors (Bandits_TD). We use 10,000 randomly selected images (scaled to [0, 1]) to evaluate all approaches. For NES, Bandits_T, and Bandits_TD we found hyperparameters (given in Appendix C, along with the experimental parameters) via grid search.
Results
For ImageNet, we record the effectiveness of the different approaches in both threat models in Table 1 (ℓ_2 and ℓ_∞ perturbation constraints), where we show the attack success rate and the mean number of queries (of the successful attacks) needed to generate an adversarial example for the Inception-v3 classifier (results for other classifiers are in Appendix E). For all attacks, we limit the attacker to at most 10,000 oracle queries. As shown in Table 1, our bandits framework with both the data-dependent and time priors (Bandits_TD) is six and three times less failure-prone than the previous state of the art (NES [IEA+18]) in the ℓ_∞ and ℓ_2 settings, respectively. Despite the higher success rate, our method actually uses around half as many queries as NES.

In particular, when restricted to the inputs on which NES is successful in generating adversarial examples, our attacks are 2.5 and 5 times as query-efficient for the ℓ_∞ and ℓ_2 settings, respectively.

We also further quantify the performance of our methods in terms of black-box attacks and gradient estimation. Specifically, we first measure the average queries per success after reaching a certain success rate (Figure 4a), which indicates the dependence of the query count on the desired success rate. The data shows that for any fixed success rate, our methods are more query-efficient than NES, and (due to the exponential trend) suggests that the difference may be amplified for higher success rates. We then plot the loss of the classifier over time (averaged over all images), and performance on the gradient estimation problem for both the ℓ_∞ and ℓ_2 cases (which, crucially, corresponds directly to the expectation we maximize in (7)). We show these three plots for ℓ_∞ in Figure 4, and show the results for ℓ_2 (which are extremely similar) in Appendix D, along with CDFs showing the success of each method as a function of the query limit. We find that on every metric in both threat models, our methods strictly dominate NES in terms of performance.

Figure 4: (left) Average number of queries per successful image as a function of the number of total successful images; at any desired success rate, our methods use significantly fewer queries per successful image than NES, and the trend suggests that this gap increases with the desired success rate. (center) The loss over time, averaged over all images. (right) The correlation of the latent vector with the true gradient g, which is precisely the gradient estimation objective we define.
Related work
All known techniques for generating adversarial examples in the black-box setting so far rely on either iterative optimization schemes (our focus) or so-called substitute networks and transferability.
In the first line of work, algorithms use queries to gradually perturb a given input to maximize a corresponding loss, causing misclassification. Nelson et al. [NRH+12] presented the first such iterative attack on a special class of binary classifiers. Later, Xu et al. [XQE16] gave an algorithm for fooling a real-world system with black-box attacks; specifically, they fool a PDF document malware classifier using a genetic-algorithm-based attack. Soon after, Narodytska et al. [NK17] described the first black-box attack on deep neural networks, a greedy local search that selectively changes individual pixel values. Chen et al. [CZS+17] were the first to design a black-box attack based on finite differences and gradient-based optimization. Their method uses coordinate descent to attack black-box neural networks, and introduces various optimizations to decrease sample complexity. Building on the work of [CZS+17], Ilyas et al. [IEA+18] designed a black-box attack strategy that also uses finite differences, but via natural evolution strategies (NES) to estimate the gradients. They then used their algorithm as a primitive in attacks on more restricted threat models.

In a concurrent line of work, Papernot et al. [PMG+17] introduced a method for attacking models with so-called substitute networks. Here, the attacker first trains a model, called a substitute network, to mimic the target network's decision boundaries. The attacker then generates adversarial examples on the substitute network, and uses them to attack the original target model. Increasing the rate at which adversarial examples generated from substitute networks fool the target model is a key aim of work on substitute networks. In [PMG+17], the attacker generates a synthetic dataset of examples labeled by the target classifier using black-box queries, and then trains a substitute network on this dataset. Adversarial examples generated with recent methods [PMG+17, LCLS17] tend to transfer to a target MNIST classifier. We note, however, that the overall query efficiency of this type of method tends to be worse than that of the gradient-estimation-based ones. (Their performance becomes more favorable as one becomes interested in attacking more and more inputs, as the substitute network has to be trained only once.)
Conclusion
We develop a new, unifying perspective on black-box adversarial attacks. This perspective casts the construction of such attacks as a gradient estimation problem. We prove that a standard least-squares estimator both captures the existing state-of-the-art approaches to black-box adversarial attacks, and actually is, in a certain natural sense, an optimal solution to the problem.
We then break the barrier posed by this optimality by considering a previously unexplored aspect of the problem: the fact that there exists plenty of extra prior information about the gradient that one can exploit to mount a successful adversarial attack. We identify two examples of such priors: a "time-dependent" prior that corresponds to similarity of the gradients evaluated at similar inputs, and a "data-dependent" prior derived from the latent structure present in the input space.
Finally, we develop a bandit optimization approach to black-box adversarial attacks that allows for a seamless integration of such priors. The resulting framework significantly outperforms the state-of-the-art methods, achieving a factor of two to six improvement in terms of success rate and query efficiency. Our results thus open a new avenue towards finding priors for construction of even more efficient black-box adversarial attacks.
A Proofs
Theorem 1 (NES and Least Squares equivalence). Let x̂_NES be the Gaussian k-query NES estimator of a d-dimensional gradient g and let x̂_LSQ be the minimal-norm k-query least-squares estimator of g. For any p > 0, with probability at least 1 − p we have that

⟨x̂_LSQ, g⟩ − ⟨x̂_NES, g⟩ ≤ O( √( (k/d) · log³(k/p) ) ) ||g||_2,

and in particular,

⟨x̂_LSQ, g⟩ − ⟨x̂_NES, g⟩ ≤ 8 √( (2k/d) · log³( (2k + 2)/p ) ) · ( 1 + κ/√d ) ||g||_2

with probability at least 1 − p, where κ ≤ √( 2 log( 2k(k + 1)/p ) ).
Proof. Let us first recall our estimation setup. We have k query vectors δ_i ∈ R^d drawn from an i.i.d. Gaussian distribution whose expected squared norm is one, i.e. δ_i ∼ N(0, (1/d) I), for each 1 ≤ i ≤ k. Let the vector y ∈ R^k denote the inner products of the δ_i s with the gradient, i.e.

y_i := ⟨δ_i, g⟩,

for each 1 ≤ i ≤ k. We define the matrix A to be a k × d matrix with the δ_i s being its rows. That is, we have Ag = y. Now, recall that the closed forms of the two estimators we are interested in are given by

x̂_NES = A^T y = A^T A g,
x̂_LSQ = A^T (AA^T)^{-1} y = A^T (AA^T)^{-1} A g,

which implies that

⟨x̂_NES, g⟩ = g^T A^T A g,
⟨x̂_LSQ, g⟩ = g^T A^T (AA^T)^{-1} A g.
We can bound the difference between these two inner products as

⟨x̂_LSQ, g⟩ − ⟨x̂_NES, g⟩ = g^T A^T ( (AA^T)^{-1} − I ) A g ≤ ||Ag|| · || ( (AA^T)^{-1} − I ) Ag || ≤ || (AA^T)^{-1} − I || · ||Ag||².   (12)
Now, to bound the first term in (12), observe that

(AA^T)^{-1} = ( I − (I − AA^T) )^{-1} = Σ_{l=0}^{∞} (I − AA^T)^l, and thus I − (AA^T)^{-1} = Σ_{l=1}^{∞} (AA^T − I)^l.

(Note that the first term in the above sum has been canceled out.) This gives us that

|| I − (AA^T)^{-1} || ≤ Σ_{l=1}^{∞} ||AA^T − I||^l ≤ ||AA^T − I|| / ( 1 − ||AA^T − I|| ) ≤ 2 ||AA^T − I||,
as long as ||AA^T − I|| ≤ 1/2 (which, as we will see, is indeed the case with high probability). Our goal thus becomes bounding ||AA^T − I|| = λ_max(AA^T − I), where λ_max(·) denotes the largest (in absolute value) eigenvalue. Observe that AA^T and −I commute and are simultaneously diagonalizable. As a result, for any 1 ≤ i ≤ k, the i-th largest eigenvalue λ_i(AA^T − I) of AA^T − I can be written as

λ_i(AA^T − I) = λ_i(AA^T) + λ_i(−I) = λ_i(AA^T) − 1.

So, we need to bound

λ_max(AA^T − I) = max{ λ_1(AA^T) − 1, 1 − λ_k(AA^T) }.
To this end, recall that E[AA^T] = I (since the rows of A are sampled from the distribution N(0, (1/d) I)), and thus, by the covariance estimation theorem of Gittens and Tropp [GT11] (see Corollary 7.2) (and union bounding over the two relevant events), we have that

Pr( λ_max(AA^T − I) ≥ ε ) = Pr( λ_1(AA^T) ≥ 1 + ε or λ_k(AA^T) ≤ 1 − ε )
                          = Pr( λ_1(AA^T) ≥ λ_1(I) + ε or λ_k(AA^T) ≤ λ_k(I) − ε )
                          ≤ 2k · exp( −dε² / (32k) ).

Setting ε = √( 32k log( 2(k + 1)/p ) / d ), ensuring that ε ≤ 1/2, gives us

Pr( λ_max(AA^T) − 1 ≥ √( 32k log( 2(k + 1)/p ) / d ) ) ≤ ( k/(k + 1) ) p,

and thus

|| (AA^T)^{-1} − I || ≤ √( 32k log( 2(k + 1)/p ) / d ),   (13)
with probability at least 1 − ( k/(k + 1) ) p. To bound the second term in (12), we note that all the vectors δ_i are chosen independently of the vector g and each other. So, if we consider the set {ĝ, δ̂_1, …, δ̂_k} of k + 1 corresponding normalized directions, we have (see, e.g., [GTPS16]) that the probability that any two of them have the (absolute value of) their inner product be larger than some ε′ = √( 2 log( 2(k + 1)/p ) / d ) is at most

(k + 1)² e^{−d(ε′)²/2} ≤ p / ( 2(k + 1) ).

On the other hand, we note that each δ_i is a random vector sampled from the distribution N(0, (1/d) I_d), so we have that (see, e.g., Lemma 1 in [LM00]), for any 1 ≤ i ≤ k and any ε″ > 0,

Pr( ||δ_i||² ≥ 1 + ε″ ) ≤ exp( −(ε″)² d / 4 ).
Setting ε″ = √( 2 log( 2k(k + 1)/p ) / d ) yields

Pr( ||δ_i||² ≥ 1 + √( 2 log( 2(k + 1)k/p ) / d ) ) ≤ p / ( 2k(k + 1) ).

Applying these two bounds (and, again, union bounding over all the relevant events), we get that ||Ag||² ≤ ( 1 + κ/√d ) ||g||_2², except with probability at most p/(k + 1). Finally, by plugging the above bound and the bound (13) into the bound (12), we obtain the statement of the theorem with probability 1 − p, where κ = √( 2 log( 2k(k + 1)/p ) ). This completes the proof.

Theorem 2 (Least-Squares Optimality). For a fixed projection matrix A and under the following observation model of isotropic Gaussian noise: y = Ag + ε, where ε ∼ N(0, εI_d), the least-squares estimator as in Theorem 1, x̂_LSQ = A^T (AA^T)^{-1} y, is a finite-sample efficient (minimum-variance unbiased) estimator of the parameter g.
Proof. Proving the theorem requires an application of the Cramer-Rao Lower Bound theorem:

Theorem 3 (Cramer-Rao Lower Bound). Given a parameter θ, an observation distribution p(x; θ), and an unbiased estimator θ̂ that uses only samples from p(x; θ), then (subject to Fisher regularity conditions trivially satisfied by Gaussian distributions),

Cov( θ̂ − θ ) = E[ (θ̂ − θ)(θ̂ − θ)^T ] ≥ [I(θ)]^{-1},

where I(θ) is the Fisher matrix:

[I(θ)]_{ij} = −E[ ∂² log p(x; θ) / ( ∂θ_i ∂θ_j ) ].
Now, note that the Cramer-Rao bound implies that if the variance of the estimator θ̂ is the inverse of the Fisher matrix, θ̂ must be the minimum-variance unbiased estimator. Recall the following form of the Fisher matrix:

I(θ) = E[ ( ∂ log p(x; θ)/∂θ ) ( ∂ log p(x; θ)/∂θ )^T ]   (14)
Now, suppose we had the following equality, which we can then simplify using the preceding equation:

I(θ)( θ̂ − θ ) = ∂ log p(x; θ)/∂θ   (15)
[ I(θ)( θ̂ − θ ) ] [ I(θ)( θ̂ − θ ) ]^T = ( ∂ log p(x; θ)/∂θ ) ( ∂ log p(x; θ)/∂θ )^T   (16)
E[ I(θ)( θ̂ − θ ) ( I(θ)( θ̂ − θ ) )^T ] = E[ ( ∂ log p(x; θ)/∂θ ) ( ∂ log p(x; θ)/∂θ )^T ]   (17)
I(θ) E[ (θ̂ − θ)(θ̂ − θ)^T ] I(θ) = I(θ)   (18)
Multiplying the preceding by [I(θ)]^{-1} on both the left and right sides yields:

E[ (θ̂ − θ)(θ̂ − θ)^T ] = [I(θ)]^{-1},   (19)
which tells us that (15) is a sufficient condition for finite-sample efficiency (minimal variance). We show that this condition is satisfied in our case, where we have y ∼ Ag + ε, θ̂ = x̂_LSQ, and θ = g. We begin by computing the Fisher matrix directly, starting from the distribution of the samples y:

p(y; g) = ( 1/√( (2πε)^d ) ) exp( −( 1/(2ε) ) (y − Ag)^T (y − Ag) )   (20)
log p(y; g) = −( d/2 ) log(2πε) − ( 1/(2ε) ) (y − Ag)^T (y − Ag)   (21)
∂ log p(y; g)/∂g = ( 1/(2ε) ) · 2A^T (y − Ag)   (22)
                = ( 1/ε ) A^T (y − Ag)   (23)
Using (14),

I(g) = E[ ( (1/ε) A^T (y − Ag) ) ( (1/ε) A^T (y − Ag) )^T ]   (25)
     = ( 1/ε² ) A^T E[ (y − Ag)(y − Ag)^T ] A   (26)
     = ( 1/ε² ) A^T ( εI_d ) A   (27)
     = ( 1/ε ) A^T A   (28)
Finally, note that we can write:

I(g)( x̂_LSQ − g ) = ( 1/ε ) A^T A ( A^T (AA^T)^{-1} y − g )   (29)
                 = ( 1/ε ) ( A^T y − A^T A g )   (30)
                 = ∂ log p(y; g)/∂g,   (31)
which concludes the proof, as we have shown that x̂_LSQ satisfies the condition (15), which in turn implies finite-sample efficiency.
Claim 1. Applying the precise bound that we can derive from Theorem 1 on an ImageNet-sized dataset (d = 300000) and using k = 100 queries (what we use in our ℓ_∞ threat model, and ten times that used for our ℓ_2 threat model),

⟨x̂_LSQ, g⟩ − ⟨x̂_NES, g⟩ ≤ (5/4) ||g||_2.

For 10 queries,

⟨x̂_LSQ, g⟩ − ⟨x̂_NES, g⟩ ≤ (1/2) ||g||_2.
B Omitted Figures
B.1 Compressive Sensing
Compressed sensing approaches can, in some cases, solve the optimization problem presented in Section 2.4. However, these approaches require sparsity to improve over the least squares method. Here we show the lack of sparsity in gradients through a classifier on a set of canonical bases for images. In Figure 5, we plot the fraction of ℓ_2 weight accounted for by the largest k components in randomly chosen image gradients when using two canonical bases: standard and wavelet (db4). While lack of sparsity in these bases does not strictly preclude the existence of a basis on which gradients are sparse, it suggests the lack of a fundamental structural sparsity in gradients through a convolutional neural network.
B.2 Tiling
An example of the tiling procedure applied to a gradient can be seen in Figure 6.

Figure 6: Average-blurred gradient with kernel size or "tile length" 5. The original gradient can be seen in 6a, and the "tiled" or average-blurred gradient can be seen in 6b.
B.3 Time-dependent Priors at Higher Step Sizes
We show in Figure 7 that successive gradients on the NES trajectory are significantly correlated, even at much higher step sizes (up to an ℓ_2 norm of 4.0, which is a typical value for ε, the total adversarial perturbation bound, and thus an absolute bound on the step size). This serves as further motivation for the time-dependent prior.
C Hyperparameters
D Full Results
Figure 8: Average loss and cosine distance versus number of queries used over the approaches' optimization trajectories in the two threat models (averaged over 100 images).

Figure 9: Cumulative distribution functions for the number of queries required to create an adversarial example in the ℓ_2 and ℓ_∞ settings for the NES, bandits with time prior (Bandits_T), and bandits with time and data-dependent priors (Bandits_TD) approaches. Note that the CDFs do not converge to one, as the approaches sometimes cannot find an adversarial example in less than 10,000 queries.

Figure 10: The average number of queries used per successful image for each method when reaching a specified success rate: we compare NES [IEA+18], Bandits_T (our method with time prior only), and Bandits_TD (our method with both data and time priors) and find that our methods strictly dominate NES; that is, for any desired success rate, our methods take strictly fewer queries per successful image than NES.
E Results for other Classifiers
Here, we give results for the ImageNet dataset, comparing our best method (Bandits_TD) and NES for the Inception-v3 (also shown in Table 1), VGG16, and ResNet50 classifiers. Note that we do not fine-tune the hyperparameters to the new classifiers, but simply use the hyperparameters found for Inception-v3. Nevertheless, our best method consistently outperforms NES on black-box attacks.

Table 4: Summary of effectiveness of ℓ_∞ and ℓ_2 ImageNet attacks on Inception v3, ResNet-50, and VGG16 (I, R, V) using NES and bandits with time and data-dependent priors (Bandits_TD). Note that in the first column, the average number of queries is calculated only over successful attacks, and we enforce a query limit of 10,000 queries. For purposes of direct comparison, the last column calculates the average number of queries used for only the images that NES (previous SOTA) was successful on. Our most powerful attack uses 2-4 times fewer queries, and fails 2-5 times less often.
Figure 2: Cosine similarity between the gradients at the current and previous steps along the optimization trajectory of NES PGD attacks, averaged over 1000 random ImageNet images.

Figure 3: Cosine similarity of the "tiled" image gradient with the original image gradient versus the length of the square tiles, averaged over 5,000 randomly selected ImageNet images.
Figure 5: Sparsity in standard, wavelet (db4 wavelets), and PCA-constructed bases for the gradients of 5,000 randomly chosen example images in the ImageNet validation set. The y-axis shows the mean fraction of ℓ_2 weight held by the largest k vectors over the set of 5,000 chosen images. The x-axis varies k. The gradients are taken through a standardly trained Inception v3 network. None of the bases explored induce significant sparsity.
Figure 7: Figure 2 repeated for several step sizes, showing that the successive correlation between gradients continues even at higher step sizes.
Table 2: Hyperparameters for the NES approach.

Hyperparameter     | ImageNet ℓ_∞ | ImageNet ℓ_2
Samples per step   | 100          | 10
Learning rate      | 0.01         | 0.3
Table 3: Hyperparameters for the bandits approach (variable names as used in pseudocode).

Hyperparameter                          | ImageNet ℓ_∞ | ImageNet ℓ_2
η (OCO learning rate)                   | 100          | 0.1
h (Image ℓ_p learning rate)             | 0.01         | 0.5
δ (Bandit exploration)                  | 1.0          | 0.01
ε (Finite difference probe)             | 0.1          | 0.01
Tile size (data-dependent prior only)   | (6px)²       | (6px)²
Acknowledgments

We thank Ludwig Schmidt for suggesting the connection between the least squares method and natural evolution strategies. AI was supported by an Analog Devices Graduate Fellowship. LE was supported in part by an MIT-IBM Watson AI Lab research grant, the Siebel Scholars Foundation, and NSF Frontier grant CNS-10413920. AM was supported in part by a Google Research Award, and the NSF grants CCF-1553428 and CNS-1815221.
221,655,222 | Information Laundering for Model Privacy | In this work, we propose information laundering, a novel framework for enhancing model privacy. Unlike data privacy that concerns the protection of raw data information, model privacy aims to protect an already-learned model that is to be deployed for public use. The private model can be obtained from general learning methods, and its deployment means that it will return a deterministic or random response for a given input query. An information-laundered model consists of probabilistic components that deliberately maneuver the intended input and output for queries to the model, so the model's adversarial acquisition is less likely. Under the proposed framework, we develop an information-theoretic principle to quantify the fundamental tradeoffs between model utility and privacy leakage and derive the optimal design. | [] | Information Laundering for Model Privacy
Xinran Wang (School of Statistics, University of Minnesota), Yu Xiang (Electrical and Computer Engineering, University of Utah), Jun Gao (Department of Mathematics, Stanford University), Jie Ding (School of Statistics, University of Minnesota)
Information Laundering for Model Privacy
In this work, we propose information laundering, a novel framework for enhancing model privacy. Unlike data privacy that concerns the protection of raw data information, model privacy aims to protect an already-learned model that is to be deployed for public use. The private model can be obtained from general learning methods, and its deployment means that it will return a deterministic or random response for a given input query. An information-laundered model consists of probabilistic components that deliberately maneuver the intended input and output for queries to the model, so the model's adversarial acquisition is less likely. Under the proposed framework, we develop an information-theoretic principle to quantify the fundamental tradeoffs between model utility and privacy leakage and derive the optimal design.
I. INTRODUCTION
A growing number of applications involve the following user scenario. Alice developed a model that takes a specific query as input and calculates a response as output. The model is a stochastic black-box that may represent a novel type of ensemble model, a known deep neural network architecture with sophisticated parameter tuning, or a physical law described by stochastic differential equations. Bob is a user who sends a query to Alice and obtains the corresponding response for his specific purposes, whether benign or adversarial. Examples of the above scenario include many recent Machine-Learning-as-a-Service (MLaaS) platforms [1]-[3] and artificial intelligence chips, where Alice represents a learning service provider, and Bob represents users.
Fig. 1. Illustration of (a) Alice's effective system for public use, and (b) Alice's idealistic system not for public use. In the figure, K* denotes the already-learned model/API, K1 denotes the kernel that perturbs the input data query by potential adversaries, and K2 denotes the kernel that perturbs the output response from K* to publish the final response Y.
If Bob obtains sufficient paired input-output data generated from Alice's black-box model, it is conceivable that Bob could treat it as supervised data and reconstruct Alice's model to some extent.
From Alice's view, her model may be treated as valuable and private. Since the Bob who queries the model may be benign or adversarial, Alice may intend to offer limited utility in return for enhanced privacy.
The above concern naturally motivates the following problem.
(Q1) How to enhance the privacy of an already-learned model? Note that the above problem is not about data privacy, where the typical goal is to prevent adversarial inference of the data information during data transmission or model training. In contrast, model privacy concerns an already-established model.
We propose to study a general approach to jointly maneuver the original query's input and output so that Bob finds it challenging to guess Alice's core model. As illustrated in Figure 1a, Alice's model is treated as a transition kernel (or communication channel) that produces Ỹ conditional on any given X̃.
Compared with the honest service Alice would have provided (Figure 1b), the input X̃ is a maneuvered version of Bob's original input X; moreover, Alice may choose to return a perturbed outcome Y instead of Ỹ to Bob. Consequently, the apparent kernel from Bob's input query X to the output response Y is a cascade of three kernels, denoted by K in Figure 1a. The above perspective provides a natural and general framework to study model privacy. Admittedly, if Alice produces a (nearly) random response, adversaries will find it difficult to steal the model, while benign users will find it useless. Consequently, we raise another problem.
(Q2) How to formulate the model privacy-utility tradeoff, and what is the optimal way of imposing privacy? To address this question, we formulate a model privacy framework from an information-theoretic perspective, named information laundering. We briefly describe the idea below. The general goal is to jointly design the input and output kernels (K1 and K2 in Figure 1a) that deliberately maneuver the intended input and output for queries to the model so that 1) the effective kernel (K in Figure 1a) for Bob is not too far away from the original kernel (K* in Figure 1a), and 2) adversarial acquisition of the model becomes difficult. Alternatively, Alice 'launders' the input-output information maximally given a fixed utility loss. To find the optimal way of information laundering, we propose an objective function that involves two components: the first being the information shared between X and X̃ and between Ỹ and Y, and the second being the average Kullback-Leibler (KL) divergence between the conditional distributions describing K and K*. Intuitively, the first component controls the difficulty of guessing K* sandwiched between two artificial kernels K1 and K2, while the second component ensures that overall utility is maximized under the same privacy constraints. By optimizing the objective for varying weights between the components, we can quantify the fundamental tradeoffs between model utility and privacy.
A. Related Work
We introduce some closely related literature below. Section III-C will incorporate more technical discussions on some related but different frameworks, including information bottleneck, local data privacy, information privacy, and adversarial model attack.
A closely related subject of study is data privacy, which has received extensive attention in recent years due to societal concerns [4]- [8]. Data privacy concerns the protection of (usually personal) data information from different perspectives, including lossless cryptography [9], [10], randomized data collection [11], [12], statistical database query [13], [14]. A common goal in data privacy is to obfuscate individual-level data values while still enabling population-wide learning. In contrast, the subject of model privacy focuses on protecting a single learned model ready to deploy. For example, we want to privatize a classifier to deploy on the cloud for public use, whether the model is previously trained from raw image data or a data-private procedure.
Another closely related subject is model extraction proposed in [15], where Bob's goal is to reconstruct Alice's model from several queries' inputs and outputs, knowing what specific model Alice uses. For example, suppose that Alice's model is a generalized linear regression with p features. In that case, it is likely to be reconstructed using p queries of the expected mean (a known function of Xβ) by solving equations [15]. In the supervised learning scenario, when only labels are returned to any given input, model extraction could be cast as an active learning problem where the goal is to query most efficiently [16]. Despite existing work from model reconstruction perspective, principled methods and theories to enhance model privacy remain an open problem.
B. Contributions and Outline
The main contributions of this work are threefold. First, we develop a novel concept, theory, and method, generally referred to as information laundering, to study model privacy. Unlike data privacy, which concerns the protection of raw data information, model privacy aims to privatize an already-learned model for public use. To the best of the authors' knowledge, we present the first framework to study model privacy in a principled manner. Second, under the developed information-theoretic framework, we cast the tradeoffs between model privacy and utility as a general optimization problem. We derive the optimal solution using the calculus of variations and provide extensive discussions on the solution's insights from different angles. Third, we develop a concrete algorithm, prove its convergence, and elaborate on some specific cases.
The paper is organized as follows. In Section II, we describe the problem formulation and a general approach to protect the model. In Section III, we propose the information laundering method that casts the model privacy-utility tradeoff as an optimization problem and derives a general solution. In Section III-C, we provide some additional discussions of the related frameworks, including information bottleneck, local data privacy, information privacy, and adversarial model attack. In Section V, we conclude the paper with some potential future work. In the Appendix, we provide the proofs of the main results and experimental studies.
II. FORMULATION
A. Background
The private model can be obtained from general learning methods, and its deployment means that it will return a response for a given input query. Suppose that X and Y are the input and output alphabets (data space), respectively. A model in the above definition is also referred to as a communication channel in information theory. A model can be regarded as the input-output (or Alice's application programming interface, API) offered to
Bob. Examples include a regression/classification model that outputs predicted labels, a clustering model that outputs the probabilities of belonging to specific groups, and a stochastic differential equation system that outputs the likely paths for various input variables. It does not matter where the model comes from, since we are only concerned about the privacy of a fixed given model. The (authentic) model of Alice is denoted by $p_{K^*}$.
What is model privacy? Our perspective is that privacy is not an intrinsic quantity associated with a model; instead, it is a measure of information that arises from interactions between the model and its queries. In our context, the interactions are through X (offered by Bob) and Y (offered by Alice). The key idea of enhancing Alice's model privacy is to let Alice output noisy predictions Ỹ for any input X so that Bob cannot easily infer Alice's original model. Similarly, Alice may choose to manipulate X as well before passing it through K*. Alternatively, Alice intends to 1) impose some ambiguity between X and X̃, and between Y and Ỹ, which conceivably will produce responses deviating from the original ones, and 2) seek the K closest to K* under the same amount of ambiguity imposed. Motivated by the above concepts, we introduce the following notion. The information-laundered model of Alice is denoted by $p_K$.
Definition 2 (Information-laundered model): An information-laundered model with respect to a given model K* is a model K that consists of three internal kernels K = K1 • K* • K2 (illustrated in Figure 1).
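For finite alphabets, each kernel in Definition 2 is a column-stochastic matrix, and the information-laundered model is simply their product. Below is a minimal sketch; the matrix shapes and names are our own illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kernel(n_out, n_in):
    # Column-stochastic matrix: entry [j, i] is p(output = j | input = i).
    M = rng.random((n_out, n_in))
    return M / M.sum(axis=0, keepdims=True)

K1 = random_kernel(4, 4)      # input kernel:  X -> X~
Kstar = random_kernel(3, 4)   # private model: X~ -> Y~
K2 = random_kernel(3, 3)      # output kernel: Y~ -> Y
K = K2 @ Kstar @ K1           # effective kernel X -> Y that Bob interacts with
assert np.allclose(K.sum(axis=0), 1.0)   # columns remain distributions
```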
B. Notation
We let $p_{K^*}(\cdot\mid\cdot)$, $p_{K_1}(\cdot\mid\cdot)$, $p_{K_2}(\cdot\mid\cdot)$, $p_K(\cdot\mid\cdot)$ denote the kernels that represent the authentic model, input kernel, output kernel, and the information-laundered model, respectively. We let $p_X(\cdot)$ denote the marginal distribution of X. Similar notation is used for $p_{\tilde X}(\cdot)$, $p_{\tilde Y}(\cdot)$, and $p_Y(\cdot)$. Note that $p_Y$ implicitly depends on the above conditional distributions. We use $p_{K_1\circ K^*}(\cdot\mid\cdot)$ and $p_{K^*\circ K_2}(\cdot\mid\cdot)$ to denote the cascade conditional distributions of $\tilde Y\mid X$ and $Y\mid\tilde X$, respectively.
Throughout the paper, random variables are denoted by capital letters. Suppose that $X\in\mathcal{X}$, $\tilde X\in\tilde{\mathcal{X}}$, $\tilde Y\in\tilde{\mathcal{Y}}$, and $Y\in\mathcal{Y}$.
For technical convenience, we will assume that $\mathcal{X}, \tilde{\mathcal{X}}, \tilde{\mathcal{Y}}, \mathcal{Y}$ are finite alphabets unless otherwise stated. We will discuss some special cases when some of them are the same. Our theoretical results apply to continuous alphabets as well under suitable conditions. For notational convenience, we write the sum $\sum_{x\in\mathcal{X}} u(x)$ as $\sum_x u(x)$ for any function u.
With a slight abuse of notation, we will use p to denote a distribution, density function, or transition kernel, depending on the context.
III. INFORMATION LAUNDERING
A. The Information Laundering Principle
The information laundering method is an optimization problem formulated from the concept of KL-divergence between the (designed) effective kernel and the original kernel, with constraints on the privacy leakage during the model-data interaction. In particular, we propose to minimize the following objective function over $(p_{K_1}, p_{K_2})$:
$$L(p_{K_1}, p_{K_2}) \triangleq \mathbb{E}_{X\sim p_X} D_{\mathrm{KL}}\big(p_{K^*}(\cdot\mid X),\, p_K(\cdot\mid X)\big) + \beta_1 I(X;\tilde X) + \beta_2 I(Y;\tilde Y). \tag{1}$$
In the above, K1 and K2 are implicitly involved in each additive term of L, and $\beta_1\ge 0$, $\beta_2\ge 0$ are constants that determine the utility-privacy tradeoffs. Small values of β1 and β2 (e.g., zeros) push K to be the same as K*, while large values of β1 push X̃ to be nearly independent of X (similarly for β2). It is worth mentioning that the principle presumes a given alphabet (or representation) for X̃ and Y. The variables to optimize over are the transition laws X → X̃ and Ỹ → Y.
The objective in (1) may be interpreted in the following way. On the one hand, Alice aims to develop an effective system K that resembles the authentic one K* for the utility of benign users. This goal is realized through the first term in (1), which is the average divergence between the two system dynamics. On the other hand, Alice's model privacy leakage is through interactions with Bob, which in turn are through the input X (publicly offered by Bob) and output Y (publicly offered by Alice). Thus, we control the information propagated through both the input and output interfaces, leading to the second and third terms in (1).
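For finite alphabets, the objective (1) can be evaluated directly from the kernel matrices. Below is a minimal NumPy sketch, under the simplifying assumption that X and X̃ (respectively, Ỹ and Y) share an alphabet, so that $p_{K^*}(\cdot\mid x)$ is simply column x of the K* matrix; all function and variable names are ours:

```python
import numpy as np

def mutual_info(J, eps=1e-12):
    # I(A;B) computed from a joint pmf matrix J[a, b].
    pa, pb = J.sum(axis=1), J.sum(axis=0)
    return float(np.sum(J * np.log((J + eps) / (np.outer(pa, pb) + eps))))

def laundering_objective(pX, K1, Kstar, K2, beta1, beta2, eps=1e-12):
    # Kernels are column-stochastic: K1[xt, x] = p(xt | x), etc.
    K = K2 @ Kstar @ K1                              # effective kernel X -> Y
    # Average KL between the authentic and effective conditional laws.
    kl = np.sum(pX * np.sum(Kstar * np.log((Kstar + eps) / (K + eps)), axis=0))
    pXt = K1 @ pX                                    # marginal of X~
    pYt = Kstar @ pXt                                # marginal of Y~
    I_x = mutual_info(K1 * pX[None, :])              # joint pmf of (X~, X)
    I_y = mutual_info(K2 * pYt[None, :])             # joint pmf of (Y, Y~)
    return kl + beta1 * I_x + beta2 * I_y
```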
We note that the above objective function may also be formulated in alternative ways from different perspectives. For example, we may change the third term to $\beta_2 I(Y;\tilde Y\mid X,\tilde X)$, interpreted in the way that Alice will design K1 first, and then design K2 conditional on K1. Likewise, we may change the second term to $\beta_1 I(X;\tilde X\mid\tilde Y, Y)$, meaning that K2 is designed first. From Bob's perspective, we may also change the third term to $\beta_2 I(Y;\tilde Y\mid X)$, interpreted for the scenario where Bob conditions on the input information X during model extraction. Additionally, from the perspective of adaptive interactions between Alice and Bob, we may consider $p_X$ as part of the optimization and solve the max-min problem
$$\max_{p_X}\ \min_{p_{K_1},\, p_{K_2}} L(p_{K_1}, p_{K_2}).$$
We leave these alternative views to future work.
B. The optimal solution
We derive the solution that corresponds to the optimal tradeoffs and point out some nice interpretations of the results. The derivation is nontrivial, as the functional involves several nonlinear terms of the variables to optimize over. Note that, in the notation defined in Subsection II-B, only $p_X$ and $p_{K^*}$ are known, and the others are (implicitly) determined by $p_{K_1}$ and $p_{K_2}$.
Theorem 1: The optimal solution of (1) satisfies the following equations.
$$p_{K_1}(\tilde x\mid x) = \kappa_x\, p_{\tilde X}(\tilde x)\, \exp\left\{ \frac{1}{\beta_1}\, \mathbb{E}_{Y\mid \tilde X=\tilde x\,\sim\, p_{K^*}}\!\left[\frac{p_{K^*\circ K_2}(Y\mid\tilde x)}{p_K(Y\mid x)}\right] - \frac{\beta_2}{\beta_1}\, \mathbb{E}_{\tilde Y, Y\mid \tilde X=\tilde x} \log\frac{p_{K_2}(Y\mid\tilde Y)}{p_Y(Y)} \right\}, \tag{2}$$
$$p_{K_2}(y\mid\tilde y) = \tau_{\tilde y}\, p_Y(y)\, \exp\left\{ \frac{1}{\beta_2\, p_{\tilde Y}(\tilde y)}\, \mathbb{E}_{X\sim p_X}\!\left[\frac{p_{K^*}(y\mid X)\cdot p_{K_1\circ K^*}(\tilde y\mid X)}{p_K(y\mid X)}\right] \right\}, \tag{3}$$
where $\kappa_x$ and $\tau_{\tilde y}$ are normalizing constants implicitly defined so that the conditional density functions integrate to one.
Note that the distributions of $\tilde X, \tilde Y, Y$, and of $\tilde Y, Y\mid\tilde X$, implicitly depend on $p_{K_1}$ and $p_{K_2}$. The above theorem naturally leads to an iterative algorithm to estimate the unknown conditional distributions $p_{K_1}$ and $p_{K_2}$. In particular, we may alternate Equations (2) and (3) to obtain $p^{(\ell)}_{K_1}(\tilde x\mid x)$ and $p^{(\ell)}_{K_2}(y\mid\tilde y)$ from $p^{(\ell-1)}_{K_1}(\tilde x\mid x)$ and $p^{(\ell-1)}_{K_2}(y\mid\tilde y)$ at steps $\ell = 1, 2, \ldots$, with random initial values at $\ell = 0$. The pseudocode is summarized in Algorithm 1.
Algorithm 1 Optimized Information Laundering (OIL)
Input: Input distribution $p_X$, private model $p_{K^*}$, alphabets $\mathcal{X}, \tilde{\mathcal{X}}, \tilde{\mathcal{Y}}, \mathcal{Y}$ for $X, \tilde X, \tilde Y, Y$, respectively.
Output: Transition kernels $p_{K_1}$ and $p_{K_2}$.
1: Let $p^{(0)}_{\tilde X}$ and $p^{(0)}_Y$ denote the uniform distributions on $\tilde{\mathcal{X}}$ and $\mathcal{Y}$, respectively.
2: for $t = 0 \to T-1$ do
3: Calculate
$$p^{(t+1)}_{K_1}(\tilde x\mid x) = \kappa_x\, p^{(t)}_{\tilde X}(\tilde x)\, \exp\left\{ \frac{1}{\beta_1}\, \mathbb{E}_{Y\mid\tilde x\,\sim\, p_{K^*}}\!\left[\frac{p^{(t)}_{K^*\circ K_2}(Y\mid\tilde x)}{p^{(t)}_K(Y\mid x)}\right] - \frac{\beta_2}{\beta_1}\, \mathbb{E}_{\tilde Y, Y\mid\tilde x\,\sim\, p^{(t)}_{K^*\circ K_2}} \log\frac{p^{(t)}_{K_2}(Y\mid\tilde Y)}{p^{(t)}_Y(Y)} \right\},$$
$$p^{(t+1)}_{K_2}(y\mid\tilde y) = \tau_{\tilde y}\, p^{(t)}_Y(y)\, \exp\left\{ \frac{1}{\beta_2\, p^{(t)}_{\tilde Y}(\tilde y)}\, \mathbb{E}_{X\sim p_X}\!\left[\frac{p_{K^*}(y\mid X)\cdot p^{(t+1)}_{K_1\circ K^*}(\tilde y\mid X)}{p^{(t+1,t)}_K(y\mid X)}\right] \right\},$$
$$p^{(t+1)}_{\tilde X}(\tilde x) = \sum_x p^{(t+1)}_{K_1}(\tilde x\mid x)\, p_X(x), \qquad p^{(t+1)}_Y(y) = \sum_{\tilde y} p^{(t+1)}_{K_2}(y\mid\tilde y)\, p^{(t+1)}_{\tilde Y}(\tilde y),$$
where $p^{(t+1)}_{K_1\circ K^*}$, $p^{(t)}_{K^*\circ K_2}$, and $p^{(t+1,t)}_K$ denote the kernels cascaded from $(p^{(t+1)}_{K_1}, p_{K^*})$, $(p_{K^*}, p^{(t)}_{K_2})$, and $(p^{(t+1)}_{K_1}, p_{K^*}, p^{(t)}_{K_2})$, respectively, and $p^{(t+1)}_{\tilde Y}$ is the marginal from $(p^{(t+1)}_{\tilde X}, p_{K^*})$.
4: end for
5: Return $p_{K_1} = p^{(T)}_{K_1}$, $p_{K_2} = p^{(T)}_{K_2}$.

In the next theorem, we show the convergence of the algorithm. The sketch of the proof is as follows. First, we treat the original objective L as another functional J of four independent variables, $p_{K_1}, p_{K_2}, h_1, h_2$, evaluated at $h_1 = p_{\tilde X}$ and $h_2 = p_Y$. Using a technique historically used to prove the convergence of the Blahut-Arimoto algorithm for calculating rate-distortion functions in information theory, we show that $J \ge L$. We also show that J is convex in each variable, so that the objective function is non-increasing in each alternation between the four equations. Since $L \ge 0$, the convergence is implied by the monotone convergence theorem.
Theorem 2: Algorithm 1 converges to a minimum that satisfies equations (2) and (3).
Note that the minimum is possibly a local minimum. We will later show the convergence to a global minimum in a particular case. Next, we provide interpretations of the parameters and how they affect the final solution.
A large β1 in the optimization of (1) indicates a higher weight on the term I(X; X̃). In the extreme case when β1 = ∞, minimizing I(X; X̃) is attained when X̃ is independent of X. Consequently, the effective model of Alice produces a fixed distribution of responses for whatever Bob queries. The above observation is in line with the derived equation (2), which becomes $p_{K_1}(\tilde x\mid x) \approx \kappa_x\, p_{\tilde X}(\tilde x)$ (and thus $\kappa_x \approx 1$) for a large β1 > 0.
Similar to the effect of β1, a larger β2 imposes more independence between Ỹ and Y. In the case β2 = ∞, Alice may pass the input to her internal model K* but output random results. This can be seen from either the Formulation (1) or Equation (3).
Regarding the first expectation in Equation (2), the term may be interpreted as the average likelihood ratio of y conditional on x̃ against x. From Equation (2), it is more likely to transit from x to x̃ in the presence of a larger likelihood ratio. This result is intuitively appealing, because a large likelihood ratio indicates that x may be replaced with x̃ without harming the overall likelihood of observing Y. Intuitive explanations of the other terms can be made similarly.
Information Bottleneck: extracting instead of privatizing information. The information bottleneck method [17] is an information-theoretic approach that aims to find a parsimonious representation of raw data X, denoted by X̃, that contains the maximal information about a variable Y of interest. The method has been applied to various learning problems such as clustering, dimension reduction, and theoretical interpretations of deep neural networks [18]. Formally, the information bottleneck method assumes the Markov chain
$$\tilde X \to X \to Y, \tag{4}$$
and seeks the optimal transition law from X to X̃ by minimizing the functional $L(p_{\tilde X\mid X}) = I(X;\tilde X) - \beta I(\tilde X; Y)$, with β being a tuning parameter that controls the tradeoffs between the compression rate (the first term) and the amount of meaningful information (the second term). The alphabet of the above X̃ needs to be pre-selected, and it is often much smaller in size than the alphabet of X to meet the purpose of compression. In other words, the information that X provides about Y is passed through a 'bottleneck' formed by the parsimonious alphabet of X̃.
A similarity between the information bottleneck method and the particular case of information laundering in Subsection B is that they both optimize a functional of the transition law X → X̃. Nevertheless, their objectives and formulations are fundamentally different. First, the objective of the information bottleneck is to compress the representation while preserving meaningful information, under the assumption of (4); our goal is to distort X while minimizing the gap between the (random) functionality of X → Y, under a different Markov chain X → X̃ → Y.
Data Privacy and Information Privacy: protecting data instead of a model. The tradeoffs between individual-level data privacy and population-level learning utility have motivated active research on what is generally referred to as 'local data privacy' across multiple fields such as data mining [11], security [12], statistics [19], and information theory [20], [21]. For example, a popular framework is local differential privacy [11], [12], [22], where the raw data X is suitably randomized (often by adding Laplace noise) into Y so that the ratio of conditional densities satisfies
$$e^{-\alpha} \le \frac{p_{Y\mid X}(y\mid x_1)}{p_{Y\mid X}(y\mid x_2)} \le e^{\alpha} \tag{5}$$
for any $y$ and $x_1, x_2 \in \mathcal{X}$, where α > 0 is a pre-determined value that quantifies the level of privacy. In the above, X and Y represent the private data and the processed data to be collected or publicly distributed. The requirement (5) guarantees that the KL-divergence between $p_{Y\mid x_1}$ and $p_{Y\mid x_2}$ is universally upper-bounded by a known function of α (see, e.g., [19]), meaning that $x_1$ and $x_2$ are barely distinguishable from the observed y. Note that the above comparison is made between two conditional distributions, while the comparison in information laundering (recall the first term in (1)) is made between two transition kernels.
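For concreteness, a sketch of a standard k-ary randomized-response mechanism satisfying (5) is given below; this construction is our own illustration rather than one taken from the cited works:

```python
import numpy as np

def randomized_response(k, alpha):
    # p(y | x) proportional to exp(alpha) if y == x and to 1 otherwise;
    # every column shares the normalizer exp(alpha) + (k - 1).
    P = np.ones((k, k))
    np.fill_diagonal(P, np.exp(alpha))
    return P / P.sum(axis=0, keepdims=True)

alpha = 1.0
P = randomized_response(5, alpha)
# For every output y, the density ratio across inputs x1, x2 is at most e^alpha.
worst_ratio = (P.max(axis=1) / P.min(axis=1)).max()
assert worst_ratio <= np.exp(alpha) + 1e-9
```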
The local differential privacy framework does not need to specify a probability space for X, since the notion of data privacy is built only on conditional distributions. Another related framework is information privacy [20], which assumes a probabilistic structure on X and a Markov chain X → Ỹ → Y.
In the above chain, X is the private raw data, Ỹ is a set of measurement points to transmit or publicize, and Y is a distortion of Ỹ that is eventually collected or publicized. We deliberately chose the above notation of X, Ỹ, Y so that the Markov chain appears similar to the special case of information laundering in Section IV. Nevertheless, the objective of information privacy is to minimize I(X; Y) over $p_{Y\mid\tilde Y}$ subject to utility constraints, assuming that the joint distribution of X and Ỹ is known. In other words, the goal is to maximally hide the information of X. In the context of information laundering, the system input X is provided by users and is known.
Adversarial Model Attack: rendering harm instead of utility to a model. The adversarial model attack literature concerns the adversarial use of specially crafted input data to cause a machine learning model, often a deep neural network, to malfunction [23]-[25]. For example, an adversary may inject noise into an image so that a well-trained classifier produces an unexpected output, even if the noisy image is perceptually close to the original one. A standard attack is the so-called (Adaptive) Black-Box Attack against classifiers hosted by a model owner, e.g., Amazon and Google [26], [27]. For a target model K*, a black-box adversary has no information about the training process of K* but can access the target model through query-response interfaces. The adversary issues (adaptive) queries and records the returned labels to train a local surrogate model. The surrogate model is then used to craft adversarial samples that maximize the target model's prediction error.
If we let X, X̃, Y denote the model input, the adversarially perturbed input, and the output, respectively, then we may draw a similarity between the adversarial model attack and the particular case of information laundering in Subsection B, since they both look for the law X → X̃. The main difference is in the objective. While the model attack aims to find an input domain that maximally distorts the model, information laundering aims to maintain a small model discrepancy. Under our notation, a possible formulation for the model attack is to seek $\max_{p_{\tilde X\mid X}} \mathbb{E}_{X\sim p_X} D_{\mathrm{KL}}\big(p_{K^*}(\cdot\mid X),\, p_{K^*}(\cdot\mid\tilde X)\big)$ under a constraint on $p_{\tilde X\mid X}$.
IV. SPECIAL CASE: INFORMATION LAUNDERING OF THE OUTPUT (Y) ONLY
Two special cases of an information-laundered system are illustrated in Figure 2. Here, we elaborate on one case and include the other special case in the Appendix. Suppose that K1 is an identity map and let β1 = 0. In other words, we alter the output data only (Figure 2b). Then the optimization problem (1) reduces to minimizing
$$L(p_{K_2}) \triangleq \mathbb{E}_{X\sim p_X} D_{\mathrm{KL}}\big(p_{K^*}(\cdot\mid X),\, p_K(\cdot\mid X)\big) + \beta_2 I(Y;\tilde Y). \tag{6}$$
Corollary 1: The solution to the optimization problem (6) satisfies
$$p_{K_2}(y\mid\tilde y) = \tau_{\tilde y}\, p_Y(y)\, \exp\left\{ \frac{1}{\beta_2\, p_{\tilde Y}(\tilde y)}\, \mathbb{E}_{X\sim p_X}\!\left[\frac{p_{K^*}(y\mid X)\cdot p_{K^*}(\tilde y\mid X)}{p_K(y\mid X)}\right] \right\}, \tag{7}$$
where $\tau_{\tilde y}$ is a normalizing constant. In particular, if K* is deterministic, Equation (7) becomes
$$p_{K_2}(y\mid\tilde y) = \tau_{\tilde y}\, p_Y(y)\, \exp\left\{ \frac{1}{\beta_2\, p_{\tilde Y}(\tilde y)} \sum_{x:\, f(x)=y} p_X(x)\, \frac{\mathbb{1}_{y=\tilde y}}{p_K(y\mid x)} \right\} = \tau_{\tilde y}\, p_Y(y)\, \exp\left\{ \frac{\mathbb{1}_{y=\tilde y}}{\beta_2\, p_{K_2}(y\mid y)} \right\}. \tag{8}$$
To exemplify the proposed methodology, we study a specific case with the following conditions: 1) $\mathcal{X}$ may be large or continuously-valued, and $\mathcal{Y}$ is a moderately large alphabet; 2) $\tilde{\mathcal{Y}} = \mathcal{Y}$, so that Ỹ and Y are in the same space; 3) K* is deterministic.
Under the above scenario, we can apply Algorithm 1 and Corollary 1 to obtain the simplified procedure below (denoted by OIL-Y). At each time step t = 1, 2, …, for each $\tilde y, y \in \mathcal{Y}$, we calculate
$$p^{(t+1)}_{K_2}(y\mid\tilde y) = \tau_{\tilde y}\, p^{(t)}_Y(y)\, \exp\left\{\frac{\mathbb{1}_{y=\tilde y}}{\beta_2\, p^{(t)}_{K_2}(y\mid y)}\right\}, \quad \text{where } \tau_{\tilde y}^{-1} = \sum_y p^{(t)}_Y(y)\, \exp\left\{\frac{\mathbb{1}_{y=\tilde y}}{\beta_2\, p^{(t)}_{K_2}(y\mid y)}\right\},$$
$$p^{(t+1)}_Y(y) = \sum_{\tilde y} r_{\tilde y}\, p^{(t+1)}_{K_2}(y\mid\tilde y), \quad \text{where } r_{\tilde y} = \sum_{x:\, f(x)=\tilde y} p_X(x). \tag{9}$$
Note that the above $r_{\tilde y}$ is the probability that Alice observes ỹ as an output of K* when Bob's input follows $X \sim p_X$. Therefore, $r_{\tilde y}$ can be easily estimated as the empirical frequency of observing ỹ on Alice's end.
Note that since $\mathcal{Y}$ is a finite alphabet, we can use a matrix representation for easy implementation. In particular, we represent the elements of $\mathcal{Y}$ by $1, \ldots, a$, where $a = \mathrm{card}(\mathcal{Y})$. We then represent $p_{K_2}$ by $P \in \mathbb{R}^{a\times a}$ and $p_Y$ by $q \in \mathbb{R}^a$, where $P_{y,\tilde y} = p_{K_2}(y\mid\tilde y)$. Such a representation leads to the matrix form of the above procedure, summarized in Algorithm 2.
Algorithm 2 OIL-Y (a special case of Algorithm 1, in the matrix form)
Input: Input distribution $p_X$, private model $p_{K^*}$.
Output: Transition kernel $p_{K_2}: \mathcal{Y}\times\mathcal{Y} \to [0,1]$, represented by $P \in \mathbb{R}^{a\times a}$, where $a = \mathrm{card}(\mathcal{Y})$.
1: Estimate $r = [r_1, \ldots, r_a]$ from $p_X$ and $p_{K^*}$ as in Equation (9).
2: Initialize the entries of $P^{(0)}$ and $q^{(0)}$ (respectively representing $p_{K_2}$ and $p_Y$) to be $1/a$.
3: for $t = 0 \to T-1$ do
4: Calculate $P^{(t+1)} = q^{(t)} \times \mathbf{1}^\top$, where $\mathbf{1} = [1, \ldots, 1]^\top$ denotes the $a\times 1$ all-ones vector.
5: Update $\mathrm{diag}(P^{(t+1)}) \leftarrow \mathrm{diag}(P^{(t+1)}) \cdot \exp\{1/(\beta_2\, \mathrm{diag}(P^{(t)}))\}$, where the operations are element-wise.
6: Scale each column (conditional distribution) of $P^{(t+1)}$ so that it sums to one.
7: Calculate $q^{(t+1)} = P^{(t+1)} \times r$.
8: end for

Theorem 3: Suppose that K* is deterministic. The alternating equation (9), or its matrix form in Algorithm 2, converges to a global minimum of the problem (6).
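The matrix-form updates of Algorithm 2 take only a few lines of NumPy. A minimal sketch following the listed steps (the function name and defaults are ours):

```python
import numpy as np

def oil_y(r, beta2, T=100):
    """OIL-Y sketch: r[yt] is the empirical frequency of y-tilde = yt under
    the private model; returns P with P[y, yt] approximating p_{K2}(y | yt)."""
    a = len(r)
    P = np.full((a, a), 1.0 / a)          # p_{K2}, uniform initialization
    q = np.full(a, 1.0 / a)               # p_Y, uniform initialization
    for _ in range(T):
        P_new = np.outer(q, np.ones(a))                    # step 4: q x 1^T
        np.fill_diagonal(P_new, np.diag(P_new) * np.exp(1.0 / (beta2 * np.diag(P))))
        P_new /= P_new.sum(axis=0, keepdims=True)          # step 6: normalize columns
        q = P_new @ r                                      # step 7: q = P r
        P = P_new
    return P, q
```

Consistent with the earlier interpretation of β2, a small β2 makes the diagonal boost dominate so that P approaches the identity, while a large β2 leaves each column close to q.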
V. CONCLUSION AND FURTHER REMARKS
Despite extensive studies on data privacy, little has been studied on enhancing model privacy. Motivated by the emerging concern of model privacy from the perspective of machine learning service providers, we developed a novel methodology to enhance the privacy of any given model of interest. We believe that the developed principles, theories, and insights can lead to new resilient machine learning algorithms and services. Interesting future work includes application studies on a case-by-case basis built upon the developed principle. Theoretically, three open problems are left from this work that deserve further research. First, how does the imposed constraint of mutual information affect the rate of convergence from the adversary's perspective for specific models (e.g., generalized linear models, decision trees, neural networks)? Second, we assumed finite alphabets for technical convenience. How can our current technical machinery be emulated to analyze continuously-valued alphabets? Third, what would be the relative importance of laundering X versus Y, and will this depend on specific learning problems?
Appendix. In Appendices A and B, we first include two particular cases of information laundering that were not included in the main part of the paper. We then include the proofs of the theorems in Appendix C. Experimental results are included in Appendices D, E, and F to demonstrate the algorithm convergence, model privacy-utility tradeoffs, and how tradeoff parameters and unbalanced samples may influence the optimized information laundering.
APPENDIX
A. Special cases: deterministic model K*
Suppose that for each given x̃, the conditional distribution $p_{K^*}(\cdot\mid\tilde x)$ assigns all the mass at some ỹ. In other words, K* reduces to a deterministic function mapping each $\tilde x \in \tilde{\mathcal{X}}$ to a unique $\tilde y \in \mathcal{Y}$, which is denoted by $\tilde y = f(\tilde x)$. For example, Alice's model is a classifier that takes input features and returns hard-thresholded classification labels. In this case, Theorem 1 implies the following corollary. We will use this result in later sections.
Corollary 2:
The optimal solution of (1) satisfies the following equations.
$$p_{K_1}(\tilde x\mid x) = \kappa_x\, p_{\tilde X}(\tilde x)\, \exp\left\{ \frac{1}{\beta_1}\, \frac{p_{K_2}(f(\tilde x)\mid f(\tilde x))}{\sum_{\tilde x'} p_{K_2}(f(\tilde x)\mid f(\tilde x'))\, p_{K_1}(\tilde x'\mid x)} - \frac{\beta_2}{\beta_1}\, \mathbb{E}_{Y\mid\tilde Y=f(\tilde x)} \log\frac{p_{K_2}(Y\mid f(\tilde x))}{p_Y(Y)} \right\},$$
$$p_{K_2}(y\mid\tilde y) = \tau_{\tilde y}\, p_Y(y)\, \exp\left\{ \frac{1}{\beta_2\, p_{\tilde Y}(\tilde y)} \sum_{x:\, f(x)=y} p_X(x)\, \frac{p_{K_1\circ K^*}(\tilde y\mid x)}{p_K(y\mid x)} \right\},$$
where $\kappa_x$ and $\tau_{\tilde y}$ are normalizing constants implicitly defined so that the conditional density functions integrate to one.
B. Information laundering of the input (X) only
Suppose that K2 is an identity map and let β2 = 0, so that we only maneuver the input data (Figure 2a).
Then the optimization problem (1) reduces to minimizing
$$L(p_{K_1}) \triangleq \mathbb{E}_{X\sim p_X} D_{\mathrm{KL}}\big(p_{K^*}(\cdot\mid X),\, p_K(\cdot\mid X)\big) + \beta_1 I(X;\tilde X). \tag{10}$$
Corollary 3: The optimal solution of (10) satisfies the following equations.
$$p_{K_1}(\tilde x\mid x) = \kappa_x\, p_{\tilde X}(\tilde x)\, \exp\left\{ \frac{1}{\beta_1}\, \mathbb{E}_{Y\mid\tilde X=\tilde x\,\sim\, p_{K^*}}\!\left[\frac{p_{K^*}(Y\mid\tilde X=\tilde x)}{p_K(Y\mid X=x)}\right] \right\}, \tag{11}$$
where $\kappa_x$ is an implicitly defined normalizing constant. In particular, if K* is deterministic, Equation (11) becomes
$$p_{K_1}(\tilde x\mid x) = \kappa_x\, p_{\tilde X}(\tilde x)\, \exp\left\{ \frac{\mathbb{1}_{f(\tilde x)=f(x)}}{\beta_1 \sum_{\tilde x':\, f(\tilde x)=f(\tilde x')} p_{K_1}(\tilde x'\mid x)} \right\}. \tag{12}$$
As we can see from Corollaries 1 and 3, for a deterministic K* (represented by f), the simplified equation (8) is similar to (12). The subtle difference, that one has a sum while the other does not, arises because f may not be a one-to-one mapping.
Alice trains a classifier using the Naive Bayes method and records the frequency of observing each category [0.220.270.210.30] (r in Algorithm 2). Then, Alice runs the OIL-Y Algorithm (under a given β 2 ) to obtain the transition probability matrix P ∈ [0, 1] 4×4 . In other words, the effective system provided by Alice is the cascade of the learned classifier, and P determines the Markov transition. Alice's resulting out-sample performance from the testing data is recorded in Figure 4a, where we considered different β's summarized in Table I. As we expected, a larger value of β 2 cuts off more information propagated from Y to Y , resulting in a degraded out-sample performance of Alice's effective system.
We also visualize the model privacy-utility tradeoff by the following procedure. First, we approximate the utility that quantifies the useful information conveyed by Alice. With Alice's trained model and the optimally laundered Y (from training data), we retrain another Naive Bayes classifier and generate predictions on the testing data, denoted by y pred K . Meanwhile, we apply Alice's authentic model to generate predictions on the testing data, denoted by y pred K * . We approximate the model utility as the accuracy measure between y pred K and y pred K * . The model utility can be approximated by other measures. We also considered retraining methods such as tree-based classifiers and average F1-score in computing the model utility, and the results are consistent in the data experiments. Second, we approximate the privacy leakage as Alice's prediction accuracy on the testing data. Intuitively speaking, for a given utility, larger out-sample prediction accuracy indicates less information laundered, indicating a higher privacy leakage of Alice's internal model. We plot the model leakage against utility obtained from our proposed solution in Figure 4b.
For comparison, we considered a benchmark method described below. The conditional probability mass function $p_{K_2}(\cdot\mid\tilde y)$ for each ỹ is independently drawn from a Dirichlet distribution with parameters $[b, \ldots, b, a, b, \ldots, b]$, where a is the ỹ-th entry. An interpretation of the parameters is that a larger a/b favors a larger probability mass at $y = \tilde y$ (and thus less noise). We consider different pairs of (a, b) so that the tradeoff curve matches the counterpart curve from our proposed method. The curve is averaged over 50 independent replications. As shown in Figure 4b, the results indicate that our proposed solution produces less leakage (and thus better privacy) for a given utility.
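A sketch of this Dirichlet benchmark (the function name is ours):

```python
import numpy as np

def dirichlet_benchmark(num_classes, a, b, seed=0):
    # Draw each column p_{K2}(. | yt) from Dirichlet([b, ..., b, a, b, ..., b]),
    # with the larger parameter a placed at the y = yt entry; a larger a/b
    # concentrates mass on y = yt and hence injects less noise.
    rng = np.random.default_rng(seed)
    P = np.empty((num_classes, num_classes))
    for yt in range(num_classes):
        alpha = np.full(num_classes, float(b))
        alpha[yt] = float(a)
        P[:, yt] = rng.dirichlet(alpha)
    return P
```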
We also plot heatmaps illustrating the transition laws $p_{K_2}(y\mid\tilde y)$ obtained from the proposed information laundering in Figure 5. We considered two cases: one where there are 20% class-0 labels, and one where there are 1% class-0 labels (by removing related samples from the original dataset). Intuitively, once we reduce the size of the class-0 data in (b), the transition probabilities $p_{K_2}(0\mid\tilde y)$ for each ỹ should be smaller compared with those in (a), as class-0 is no longer 'important'. Our expectation is aligned with Figure 5, where the first row in (b) is indicated by darker colors compared with that in (a), meaning that class-0 is less likely to be observed.
F. Data study: Life Expectancy regression
In this experimental study, we use the 'life expectancy' dataset provided by the Kaggle open-source data platform [31], originally collected from the World Health Organization (WHO). The data was collected from 193 countries from 2000 to 2015, and Alice's model is a linear regression that predicts life expectancy using potential factors such as demographic variables, immunization factors, and mortality rates. This experiment is intended to illustrate the utility-privacy tradeoff and our proposed solution in regression contexts. In the regression model, we quantize the output alphabet $\mathcal{Y}$ by 30 points equally spaced between $\mu \pm 3\sigma$, where $\mu, \sigma$ represent the mean and the standard deviation of Y in the training data. We then applied a similar procedure as in Appendix E, except that we use the empirical R² score as the underlying measure of utility and leakage. The empirical R² score has been commonly used for evaluating regression performance, and it can be negative, meaning that the predictive performance is worse than sample mean-based prediction [32]. In particular, we obtain the tradeoff curves in Figure 6, where we compare the information laundering results based on the proposed technique and the Dirichlet-based technique (similar to that in Appendix E). The different β's and Dirichlet parameters are summarized in Table II. The detailed performance values are also summarized in Table II.
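A sketch of the quantization and R² scoring steps (names ours; r2_score is scikit-learn's implementation of the empirical R²):

```python
import numpy as np
from sklearn.metrics import r2_score

def quantize(y, num_bins=30):
    # Grid of 30 points equally spaced in [mu - 3*sigma, mu + 3*sigma];
    # each response is mapped to the index of its nearest grid point.
    y = np.asarray(y, dtype=float)
    mu, sigma = y.mean(), y.std()
    grid = np.linspace(mu - 3 * sigma, mu + 3 * sigma, num_bins)
    idx = np.abs(y[:, None] - grid[None, :]).argmin(axis=1)
    return idx, grid

def utility_r2(y_true, idx, grid):
    # Empirical R^2 of the (possibly laundered) quantized responses;
    # the score can be negative when predictions are worse than the mean.
    return r2_score(y_true, grid[idx])
```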
To illustrate the impact of the tradeoffs, we considered two cases corresponding to β2 = 1 and β2 = 20. We compute the transition laws $p_{K_2}(y\mid\tilde y)$ obtained from Algorithm 2 and illustrate them in the first row of Figure 7. We also take the snapshot at Ỹ = 69 and plot the conditional density function $p_{K_2}(\cdot\mid\tilde Y = 69)$ (as approximated by the quantizers) in the second row of Figure 7. The visualized results are aligned with our expectation that a larger penalty on model leakage will cause a more dispersed transition law.
The work was supported by the Army Research Office (ARO) under grant number W911NF-20-1-0222.
Definition 1 (Learned model): A learned model is a kernel $p: \mathcal{X}\times\mathcal{Y} \to [0,1]$, which induces a class of conditional distributions $\{p(\cdot\mid x) : x\in\mathcal{X}\}$.
Fig. 2. Illustration of Alice's information-laundered system for public use, by (a) altering the input only, and (b) altering the output only. The notations are similar to those in Figure 1.
K2 that is represented by $P^{(T)}$. Moreover, we proved the convergence to the global minimum for the alternating equations in the above scenario. The same technique can be emulated to show a similar result when we employ K1 (instead of K2) only. The result is summarized in Theorem 3.
Fig. 3. Visualization of Algorithm 2 in terms of the convergence (row 1) and the final transition probabilities (row 2), for β = 100, 10, 1 (corresponding to the three columns).
Fig. 4. Visualization of (a) Alice's out-sample performance against the tradeoff parameter β2 in information laundering, and (b) Alice's model utility-privacy tradeoffs under the information laundering technique and the random benchmark using Dirichlet-generated transition laws. Detailed parameters are summarized in Table I.
Fig. 5. Heatmap showing the transition law $p_{K_2}(y\mid\tilde y)$ for information laundering, under (a) 20% of class-0 labels, and (b) 1% of class-0 labels. In contrast with case (a), class-0 is negligible in (b), and thus the transition probabilities $p_{K_2}(0\mid\tilde y)$ for each ỹ become smaller (as indicated by darker colors).
TABLE II: Summary of the tradeoff parameters used for the OIL-Y algorithm and the random benchmark from Dirichlet distributions (averaged over 50 independent replications), and the corresponding model utility (as evaluated by the closeness of Alice's authentic and effective systems), as well as the model privacy leakage (as evaluated by Alice's out-sample accuracy). The underlying metric used is the empirical R², which can be less than zero.
Fig. 6. Visualization of (a) Alice's out-sample performance against the tradeoff parameter β2 in information laundering, and (b) Alice's model utility-privacy tradeoffs under the information laundering technique and the random benchmark using Dirichlet-generated transition laws. Detailed parameters are summarized in Table II.
Fig. 7. Heatmap (row 1) showing the transition laws optimized from information laundering, under (a) β2 = 1, and (b) β2 = 20. The snapshots of the probability mass functions of Y conditional on Ỹ = 69 are also visualized (row 2).
TABLE I: Summary of the tradeoff parameters used for the OIL-Y algorithm and the random benchmark from Dirichlet distributions (averaged over 50 independent replications), and the corresponding model utility (as evaluated by the closeness of Alice's authentic and effective systems), as well as the model privacy leakage (as evaluated by Alice's out-sample accuracy).

| Proposed | β | 0 | 1 | 2 | 5 | 20 | 50 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | Utility | 1.00 | 0.86 | 0.78 | 0.68 | 0.46 | 0.30 |
| | Leakage | 0.79 | 0.64 | 0.53 | 0.45 | 0.35 | 0.30 |
| Random Benchmark | a, b | 100, 1 | 20, 1 | 10, 1 | 5, 2 | 5, 3 | 10, 10 |
| | Utility | 0.96 | 0.88 | 0.79 | 0.49 | 0.39 | 0.23 |
| | Leakage | 0.77 | 0.70 | 0.62 | 0.40 | 0.34 | 0.27 |
C. Proofs

Proof 1 (Proof of Theorem 1): We introduce Lagrange multipliers: $\lambda_1(x)$ for the normalization of the conditional distributions $p_{K_1}(\cdot\mid x)$ at each x, and $\lambda_2(\tilde y)$ for the normalization of the conditional distributions $p_{K_2}(\cdot\mid\tilde y)$ at each ỹ. The Lagrangian of (1) can then be written as (13), up to an additive constant c that is determined by the known $p_X$ and $p_{K^*}$. It can be verified that the identities (14)-(17) hold. Using (14)-(17), for a given x and x̃, we calculate the derivatives of each term in (13) with respect to $p_{K_1}(\tilde x\mid x)$, which gives (18)-(22). Taking equations (18)-(22) into (13), we obtain the first-order equation (23), where $\tilde\lambda(x) = \lambda_1(x)/p_X(x) - \beta_2$. Rearranging the terms in Equation (23), we obtain an identity which implies Equation (2). Similarly, taking derivatives with respect to $p_{K_2}(y\mid\tilde y)$ for given ỹ and y, it can be verified that the first-order condition (24) holds. Letting Equation (24) be zero and rearranging it, we obtain Equation (3).

Proof 2 (Proof of Theorem 2): We define a functional J of four variables in (25). We will use the following known result [28, Lemma 10.8.1]. Suppose that X and Y have a joint distribution with density $p_{XY}$, and the marginal densities are $p_X$ and $p_Y$, respectively. Then the marginal density $p_Y$ minimizes $\mathbb{E}_{p_{XY}}\log\{1/r(Y)\}$ over density functions r. This implies that minimizing the objective function in (1) can be written as a quadruple minimization of J. It can be verified from (23) and its preceding identities that $J(p_{K_1}, p_{K_2}, h_1, h_2)$ is convex in each of the variables. We begin with a choice of initial $p_{K_2}, h_1, h_2$, and calculate the $p_{K_1}$ that minimizes the objective. Using the method of Lagrange multipliers for this minimization (in a way similar to (13)), we obtain the solution of $p_{K_1}$ shown in the first equation of Line 3, Algorithm 1. Similarly, we obtain the second equation in Algorithm 1. For the conditional distributions $p_{K_1}$ and $p_{K_2}$, we then calculate the marginal distribution $h_1$ (of x̃) that minimizes (25). Note that the terms of (25) involving $h_1$ may be rewritten in a form which, by the aforementioned lemma, is minimized by the third equation of Line 3, Algorithm 1. Similar arguments apply for $h_2$. Consequently, each iteration step in Algorithm 1 reduces J. By the nonnegativity of the KL-divergence, $J + c \ge L \ge 0$, where L is in (1) and c is introduced in (13). Therefore, J has a lower bound, and the algorithm will converge to a minimum. Note that $J(p_{K_1}, p_{K_2}, h_1, h_2)$ is convex in each of the variables independently, but not in the variables' product space. The current proof does not imply convergence to a global minimum.

Proof 3 (Proof of Theorem 3): Similar to the technique used in the above proof of Theorem 2, we cast the optimization problem in (6) as a double minimization. We only need to check that J is strongly convex in its arguments. Direct calculations of the second-order derivatives show that the determinant of the Hessian is positive, which further implies the convexity of J in the product space of $p_{K_2}$ and $h_2$.

D. Visualization of Algorithm 2

We provide a toy example to visualize Algorithm 2. In the simulation, we choose an alphabet of size 100, and $p_{\tilde Y}$, as described by $r \in [0,1]^a$, is uniform-randomly generated from the probability simplex. We independently replicate the experiment 50 times, each time running Algorithm 2 for 30 iterations, and calculate the average of the following results. First, we record $\|P^{(t+1)} - P^{(t)}\|_1 / a$ at each iteration t, which traces the convergence of the estimated transition probabilities. Second, we record the final transition probability matrix into a heat-map, where $P_{y,\tilde y}$ means the estimated $p_{K_2}(y\mid\tilde y)$.
The experiments are performed for β = 100, 10, 1, corresponding to columns 1-3 of Figure 3. The plots indicate the convergence of the algorithm, though the rate of convergence depends on β. They also imply the expected result that a small β induces an identity transition, while a large β induces a Ỹ that is nearly independent of Y.
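A self-contained sketch replicating one run of this diagnostic (alphabet size 100, r drawn uniformly from the probability simplex, 30 iterations; β = 10 shown; names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
a, beta2 = 100, 10.0
r = rng.dirichlet(np.ones(a))             # p_{Y~} drawn from the simplex
P = np.full((a, a), 1.0 / a)
q = np.full(a, 1.0 / a)
for t in range(30):
    P_new = np.outer(q, np.ones(a))
    np.fill_diagonal(P_new, np.diag(P_new) * np.exp(1.0 / (beta2 * np.diag(P))))
    P_new /= P_new.sum(axis=0, keepdims=True)
    q = P_new @ r
    delta = np.abs(P_new - P).sum() / a   # ||P^(t+1) - P^(t)||_1 / a
    P = P_new
    print(t, delta)                       # traces the convergence over iterations
```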
[1] M. M. Alabbadi, "Mobile learning (mlearning) based on cloud computing: mlearning as a service (mlaas)," in UbiComp, 2011.
[2] M. Ribeiro, K. Grolinger, and M. A. Capretz, "Mlaas: Machine learning as a service," in ICMLA, 2015, pp. 896-902.
[3] X. Xian, X. Wang, J. Ding, and R. Ghanadan, "Assisted learning and imitation privacy," arXiv preprint arXiv:2004.00566, 2020.
[4] P. Voigt and A. Von dem Bussche, "The EU general data protection regulation (GDPR)," A Practical Guide, 1st Ed., Cham: Springer International Publishing, 2017.
[5] N. Evans, S. Marcel, A. Ross, and A. B. J. Teoh, "Biometrics security and privacy protection [from the guest editors]," IEEE Signal Processing Magazine, vol. 32, no. 5, pp. 17-18, 2015.
[6] M. S. Cross and A. Cavallaro, "Privacy as a feature for body-worn cameras [in the spotlight]," IEEE Signal Processing Magazine, vol. 37, no. 4, pp. 145-148, 2020.
[7] Google, "Google security whitepaper," https://services.google.com/fh/files/misc/google_security_wp.pdf, Jan 2019.
[8] Facebook, "Communicating about privacy: Towards people-centered and accountable design," https://about.fb.com/wp-content/uploads/2020/07/Privacy-Transparency-White-Paper.pdf, July 2020.
[9] A. C. Yao, "Protocols for secure computations," in Proc. SFCS, IEEE, 1982, pp. 160-164.
[10] D. Chaum, C. Crépeau, and I. Damgard, "Multiparty unconditionally secure protocols," in Proc. STOC, 1988, pp. 11-19.
[11] A. Evfimievski, J. Gehrke, and R. Srikant, "Limiting privacy breaches in privacy preserving data mining," in Proc. SIGMOD/PODS03, 2003, pp. 211-222.
[12] S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. Smith, "What can we learn privately?" SIAM J. Comput., vol. 40, no. 3, pp. 793-826, 2011.
[13] C. Dwork and K. Nissim, "Privacy-preserving datamining on vertically partitioned databases," in Proc. CRYPTO, Springer, 2004, pp. 528-544.
[14] C. Dwork, "Differential privacy," Encyclopedia of Cryptography and Security, pp. 338-340, 2011.
[15] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, "Stealing machine learning models via prediction apis," in USENIX, 2016, pp. 601-618.
[16] V. Chandrasekaran, K. Chaudhuri, I. Giacomelli, S. Jha, and S. Yan, "Model extraction and active learning," arXiv preprint arXiv:1811.02054, 2018.
[17] N. Tishby, F. C. Pereira, and W. Bialek, "The information bottleneck method," arXiv preprint physics/0004057, 2000.
[18] N. Tishby and N. Zaslavsky, "Deep learning and the information bottleneck principle," in Proc. ITW, IEEE, 2015, pp. 1-5.
[19] J. C. Duchi, M. I. Jordan, and M. J. Wainwright, "Minimax optimal procedures for locally private estimation," J. Am. Stat. Assoc., vol. 113, no. 521, pp. 182-201, 2018.
[20] F. du Pin Calmon and N. Fawaz, "Privacy against statistical inference," in Proc. Allerton Conf. on Commun., Control and Computing, 2012, pp. 1401-1408.
[21] M. Sun, W. P. Tay, and X. He, "Towards information privacy for the internet of things," arXiv preprint arXiv:1611.04254, 2016.
[22] C. Dwork, F. McSherry, K. Nissim, and A. Smith, "Calibrating noise to sensitivity in private data analysis," in Theory of Cryptography Conference, Springer, 2006, pp. 265-284.
[23] N. Papernot, P. McDaniel, and I. Goodfellow, "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples," arXiv preprint arXiv:1605.07277, 2016.
[24] N. Narodytska and S. Kasiviswanathan, "Simple black-box adversarial attacks on deep neural networks," in Proc. CVPRW, IEEE, 2017, pp. 1310-1318.
[25] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against machine learning," in Proc. ASIA CCS, 2017, pp. 506-519.
[26] I. Rosenberg, A. Shabtai, L. Rokach, and Y. Elovici, "Generic black-box end-to-end attack against rnns and other API calls based malware classifiers," arXiv preprint arXiv:1707.05970, 2017.
[27] A. Chakraborty, M. Alam, V. Dey, A. Chattopadhyay, and D. Mukhopadhyay, "Adversarial attacks and defences: A survey," arXiv preprint arXiv:1810.00069, 2018.
[28] T. M. Cover, Elements of Information Theory. John Wiley & Sons, 1999.
[29] Scikit-learn, "The 20 newsgroups text dataset," https://tinyurl.com/y26m6dvw, 2020.
[30] A. Rajaraman and J. D. Ullman, Mining of Massive Datasets. Cambridge University Press, 2011.
[31] Kaggle, "Life expectancy dataset," https://tinyurl.com/yxgaa4go, 2020.
[32] Scikit-learn, "The R2 score," https://tinyurl.com/yy8m3u3d, 2020.
263,310,924 | ON THE POWER OF THE WEISFEILER-LEMAN TEST FOR GRAPH MOTIF PARAMETERS | Seminal research in the field of graph neural networks (GNNs) has revealed a direct correspondence between the expressive capabilities of GNNs and the k-dimensional Weisfeiler-Leman (kWL) test, a widely-recognized method for verifying graph isomorphism. This connection has reignited interest in comprehending the specific graph properties effectively distinguishable by the kWL test. A central focus of research in this field revolves around determining the least dimensionality k, for which kWL can discern graphs with different number of occurrences of a pattern graph P. We refer to such a least k as the WL-dimension of this pattern counting problem. This inquiry traditionally delves into two distinct counting problems related to patterns: subgraph counting and induced subgraph counting. Intriguingly, despite their initial appearance as separate challenges with seemingly divergent approaches, both of these problems are interconnected components of a more comprehensive problem: "graph motif parameters". In this paper, we provide a precise characterization of the WL-dimension of labeled graph motif parameters. As specific instances of this result, we obtain characterizations of the WL-dimension of the subgraph counting and induced subgraph counting problem for every labeled pattern P. Particularly noteworthy is our resolution of a problem left open in previous work concerning induced copies. We additionally demonstrate that in cases where the kWL test distinguishes between graphs with varying occurrences of a pattern P, the exact number of occurrences of P can be computed uniformly using only local information of the last layer of a corresponding GNN. We finally delve into the challenge of recognizing the WL-dimension of various graph parameters. We give a polynomial time algorithm for determining the WL-dimension of the subgraph counting problem for a given pattern P, answering an open question from previous work. We additionally show how to utilize deep results from the field of graph motif parameters, together with our characterization, to determine the WL-dimension of induced subgraph counting and counting k-graphlets. | [] | ON THE POWER OF THE WEISFEILER-LEMAN TEST FOR GRAPH MOTIF PARAMETERS
Matthias Lanzinger [email protected]
University of Oxford
Pablo Barceló [email protected]
Catholic University of Chile
ON THE POWER OF THE WEISFEILER-LEMAN TEST FOR GRAPH MOTIF PARAMETERS
Seminal research in the field of graph neural networks (GNNs) has revealed a direct correspondence between the expressive capabilities of GNNs and the k-dimensional Weisfeiler-Leman (kWL) test, a widely-recognized method for verifying graph isomorphism. This connection has reignited interest in comprehending the specific graph properties effectively distinguishable by the kWL test. A central focus of research in this field revolves around determining the least dimensionality k, for which kWL can discern graphs with different number of occurrences of a pattern graph P. We refer to such a least k as the WL-dimension of this pattern counting problem. This inquiry traditionally delves into two distinct counting problems related to patterns: subgraph counting and induced subgraph counting. Intriguingly, despite their initial appearance as separate challenges with seemingly divergent approaches, both of these problems are interconnected components of a more comprehensive problem: "graph motif parameters". In this paper, we provide a precise characterization of the WL-dimension of labeled graph motif parameters. As specific instances of this result, we obtain characterizations of the WL-dimension of the subgraph counting and induced subgraph counting problem for every labeled pattern P. Particularly noteworthy is our resolution of a problem left open in previous work concerning induced copies. We additionally demonstrate that in cases where the kWL test distinguishes between graphs with varying occurrences of a pattern P, the exact number of occurrences of P can be computed uniformly using only local information of the last layer of a corresponding GNN. We finally delve into the challenge of recognizing the WL-dimension of various graph parameters. We give a polynomial time algorithm for determining the WL-dimension of the subgraph counting problem for a given pattern P, answering an open question from previous work. We additionally show how to utilize deep results from the field of graph motif parameters, together with our characterization, to determine the WL-dimension of induced subgraph counting and counting k-graphlets.
INTRODUCTION
Context. Graph neural networks (GNNs) have gained increasing importance over the last decade due to their ability to process and operate over graph-structured data (Wu et al., 2021; Zhou et al., 2020; Joshi et al., 2020). Tasks in which GNNs excel include node classification (Xiao et al., 2022), graph classification (Errica et al., 2020), link prediction (Teru et al., 2020; Zhu et al., 2021), and query answering over knowledge graphs (Daza & Cochez, 2020; Galkin et al., 2022). Based on these, GNNs have found applications in many fields, including social network analysis (Kipf & Welling, 2018), recommender systems (Ying et al., 2018), chemistry (Gilmer et al., 2017), semantic web (Hogan et al., 2022; Schlichtkrull et al., 2018), natural language processing (Marcheggiani & Titov, 2017), and combinatorial optimization (Dai et al., 2021).
The practical importance of GNNs has spurred the community to study their expressive power. This refers to the ability of GNNs to distinguish pairs of non-isomorphic graphs. As observed early on in two landmark articles (Morris et al., 2019; Xu et al., 2019), the expressive power of so-called message-passing GNNs (MPGNNs) is precisely that of the Weisfeiler-Leman (WL) test (Weisfeiler & Leman, 1968), one of the most renowned methods for checking graph isomorphism.
As recently observed, this correspondence holds even in cases where graphs are node- and edge-labeled, naturally representing the rich structure present in knowledge graphs (Barceló et al., 2022). The above has ignited significant interest among experts in exploring which graph properties can be distinguished by the WL test, especially those that are crucial for the applications of MPGNNs (Arvind et al., 2020; Chen et al., 2020; Morris et al., 2020; Barceló et al., 2021; Huang et al., 2023; Bouritsas et al., 2023).
A class of graph properties that have received special attention in this context is the number of times a given "pattern" appears as a subgraph in a graph. The relevance of this class emanates from the fact that subgraph counts are used in several graph-related tasks, such as constructing graph kernels (Shervashidze et al., 2009;Kriege et al., 2018) and computing spectral graph information (Preciado & Jadbabaie, 2010). Subgraph counts also lie at the basis of some methods that measure the similarity between graphs (Alon et al., 2008). Therefore, it is crucial to comprehend the extent to which these methods align with the expressive capabilities of the WL test (or equivalently, MPGNNs).
In its more general version, the WL test is characterized by a parameterized dimension, k ≥ 1 (Cai et al., 1992;Morris et al., 2019). The k-dimensional WL test, or kWL test for short, iteratively colors the k-tuples of nodes in a graph until a fixpoint is reached. Two graphs are said to be distinguishable by kWL, if the multisets of colors of all k-tuples reached at this fixpoint are different.
A graph property f can be distinguished by kWL if, for any two graphs G, H, we have that f(G) ≠ f(H) only if G and H are distinguishable by kWL. We are then interested in the least k for which a parameter can be distinguished by kWL. We refer to this k as the WL-dimension of the parameter.
Several important results have been obtained over the last few years regarding the WL-dimension of counting the number of copies of a graph P (the pattern). We summarize some of these results next, distinguishing the case in which we count copies of P (subgraphs) from the one in which we count induced copies of P (induced subgraphs). For the sake of clarity, we call the former the subgraph WL-dimension of P and the latter the induced subgraph WL-dimension of P.
• Counting subgraphs: An important notion in graph theory is the treewidth of a graph, which intuitively measures its degree of acyclicity (Diestel, 2012). The hereditary treewidth of a pattern P is, in broad terms, the largest treewidth of any of the homomorphic images of P. Arvind et al. (2020) initiated the study of the ability of the WL test to count subgraphs in terms of the notion of hereditary treewidth. They established that if the pattern P has hereditary treewidth k, then P has subgraph WL-dimension at most k. Nevertheless, the sole instance in which they were able to demonstrate the validity of the converse was for k = 1, i.e., they showed that P has subgraph WL-dimension one iff P has hereditary treewidth one. In the meantime, some partial results were obtained for the case when k = 2 over particular classes of graphs. For instance, by combining results in Arvind et al. (2020) and Fürer (2017), one obtains that the largest cycle (respectively, path) with a subgraph WL-dimension of two is that of length seven. Very recently, however, this gap has been closed. In fact, Neuen (2023) proves the following for each k ≥ 1: if P is a pattern, then P has subgraph WL-dimension k iff P has hereditary treewidth k. This also provides an alternative explanation for the aforementioned result on cycles (resp., paths), as one can observe that a cycle (resp., path) has hereditary treewidth two iff it is of length at most seven.
• Counting induced subgraphs: Most of the existing results on counting induced subgraphs were obtained in Chen et al. (2020). The authors show that all patterns with k + 1 nodes have an induced subgraph WL-dimension bounded by k. Moreover, this is optimal for k = 1; i.e., no pattern with three or more nodes has induced subgraph WL-dimension one. It is not known if this correspondence continues to hold for k > 1, i.e., whether there are patterns with k + 2 or more nodes with induced subgraph WL-dimension k, for k > 1.
It is noteworthy that the previously mentioned results regarding counting induced subgraphs were achieved in a broader context compared to the results concerning counting subgraphs. Specifically, the former apply even to labeled graphs, where each node and edge is assigned a label, whereas the latter were derived for non-labeled graphs.
Therefore, we have different levels of understanding of the capabilities of the kWL test for counting subgraphs and induced subgraphs. Furthermore, these two issues have been addressed separately, employing distinct techniques, which enhances the perception that there is no inherent structural linkage between the two. However, as evidenced by research in the field of counting complexity, there exists a cohesive approach through which these types of problems can be examined. In fact, the counting of subgraphs and induced subgraphs are fundamentally interconnected, akin to two sides of the same coin, as they can be represented as linear combinations of one another (Lovász, 1967). Thus, we can achieve insight into either of them by exploring the linear combinations of the other.
Expanding upon this idea, Curticapean et al. (2017) introduced a comprehensive framework for graph motif parameters, which are defined as linear combinations of subgraph counts. In this paper, we study the ability of the WL test to count graph motifs, which provides us with a general approach for studying problems related to subgraph counting in this setting. Our main contributions are summarized below. It is worth noting that all such results are derived within the framework established in Chen et al. (2020), which focuses on labeled graphs. This introduces an additional level of intricacy into all the proofs presented in this paper.
• By building on tools developed by Neuen (2023) and Seppelt (2023), we establish a precise characterization for the class of labeled graph motifs with WL-dimension k, for k ≥ 1. Specifically, for subgraph counting, this class precisely corresponds to the patterns of hereditary treewidth k, aligning with the characterization presented by Neuen (2023) for the case of unlabeled graphs. For induced subgraph counting, this class contains precisely the patterns featuring k + 1 nodes, thus resolving the open issue posed by Chen et al. (2020).
• The previous result characterizes for which graph motifs Γ the kWL test is able to distinguish between graphs with different numbers of occurrences of Γ. A natural question arises: Is it possible to obtain the number of occurrences of a graph motif Γ in a graph G by computing a function over the multiset of colors of k-tuples of vertices obtained from the kWL test? We answer this question affirmatively. This result can be of interest to researchers working on MPGNN applications, as it suggests that by designing a suitable MPGNN architecture, one might be able to count the number of subgraphs that appear in a given graph.
• We finally move to the problem of determining the WL-dimension of counting the occurrences of a given graph pattern P. Our characterization shows that for counting induced subgraphs this problem is trivial, as the WL-dimension is precisely the number of vertices in P minus 1. For subgraph counting, in turn, the problem is nontrivial, as we have to check for each homomorphic image of P whether its treewidth is at most k. Since the number of homomorphic images of P is potentially exponential, this yields a naïve exponential time algorithm for the problem. We show that, in spite of this, the problem admits a polynomial time algorithm. The existence of such an algorithm was left open in Arvind et al. (2020) even for the case k = 2.
Since many of the proofs in the paper are extensive and complex, we have opted to place technical details in the appendix and offer proof sketches in the main body of the paper to conserve space.
PRELIMINARIES
Labeled graphs. We work with graphs that contain no self-loops and are node- and edge-labeled. Let Σ and ∆ be finite alphabets containing node and edge labels, respectively. A labeled graph is a tuple G = (V, E, λ, κ), where V is a finite set of nodes, E is a set of undirected edges, i.e., unordered pairs of nodes, λ : V → Σ is a function such that λ(v) represents the label of node v, for each v ∈ V, and κ : E → ∆ is a function such that κ(e) represents the label of edge e, for each e ∈ E.
Given labeled graphs $G = (V, E, \lambda, \kappa)$ and $G' = (V', E', \lambda', \kappa')$, a homomorphism from $G$ to $G'$ is a function $f : V \to V'$ such that: (1) $(u, v) \in E \Rightarrow (f(u), f(v)) \in E'$, for each $u, v \in V$; (2) $\lambda(v) = \lambda'(f(v))$, for each $v \in V$; and (3) for each edge $(u, v) \in E$, it holds that $\kappa(u, v) = \kappa'(f(u), f(v))$. If, in addition, $f$ is a bijection and the first condition is satisfied by the stronger statement that $(u, v) \in E \Leftrightarrow (f(u), f(v)) \in E'$, for each $u, v \in V$, then we say that $f$ is an isomorphism. We write $\mathrm{Hom}(G, G')$ for the set of all homomorphisms from $G$ to $G'$ and $\mathrm{homs}(G, G')$ for the number of homomorphisms in $\mathrm{Hom}(G, G')$.
For a $k$-tuple $\bar{v} = (v_1, \dots, v_k) \in V(G)^k$, we write $G[\bar{v}]$ as a shortcut for $G[\{v_1, \dots, v_k\}]$. The atomic type $\mathrm{atp}(G, \bar{v})$ of a $k$-tuple $\bar{v}$ of vertices in a labeled graph $G$ is some function such that, for $\bar{v} \in V(G)^k$ and $\bar{w} \in V(H)^k$, it holds that $\mathrm{atp}(G, \bar{v}) = \mathrm{atp}(H, \bar{w})$ if and only if the mapping $v_i \mapsto w_i$ is an isomorphism from $G[\bar{v}]$ into $H[\bar{w}]$.
Weisfeiler-Leman test. Let $G$ be a labeled graph. The kWL test, for $k > 0$, iteratively colors the elements in $V(G)^k$, that is, all tuples of $k$ nodes from $V(G)$. The color of tuple $\bar{v} \in V(G)^k$ after $i$ iterations, for $i \geq 0$, is denoted $c^k_i(\bar{v})$. This coloring is defined inductively, and its definition depends on whether $k = 1$ or $k > 1$.
• For $k = 1$, we have that
$$c^k_i(v) := \begin{cases} \mathrm{atp}(G, v) & \text{if } i = 0\\ \big(c^k_{i-1}(v),\; \{\{\, (\kappa(v, w),\, c^k_{i-1}(w)) \mid \{v, w\} \in E(G) \,\}\}\big) & \text{if } i \geq 1 \end{cases}$$
In other words, $c^k_i(v)$ consists of the color of vertex $v$ in the previous iteration, along with a multiset that includes, for each neighbor $w$ of $v$ in $G$, an ordered pair consisting of the color of $w$ in the previous iteration and the label of the edge connecting $v$ and $w$.
• For $k > 1$, we have that
$$c^k_i(\bar{v}) := \begin{cases} \mathrm{atp}(G, \bar{v}) & \text{if } i = 0\\ \big(c^k_{i-1}(\bar{v}),\; \{\{\, \mathrm{ct}(w, i-1, \bar{v}) \mid w \in V(G) \,\}\}\big) & \text{if } i \geq 1 \end{cases}$$
Here, $\mathrm{ct}(w, i, \bar{v})$ denotes the color tuple for $w \in V(G)$ and $\bar{v} \in V(G)^k$, and is defined as $\mathrm{ct}(w, i, \bar{v}) = \big(c^k_i(\bar{v}[w/1]), \dots, c^k_i(\bar{v}[w/k])\big)$, where $\bar{v}[w/j]$ denotes the tuple that is obtained from $\bar{v}$ by replacing its $j$th component with the element $w$. In other words, $c^k_i(\bar{v})$ is formed by the color of $\bar{v}$ in the previous iteration, along with a multiset that includes, for each node $w$ in $G$, a tuple containing the color in the previous iteration for each tuple that can be derived from $\bar{v}$ by substituting one of its components with the element $w$.
The kWL test stabilizes after finitely many steps. That is, for every labeled graph $G$ there exists a $t \geq 0$ such that, for each $\bar{v}, \bar{w} \in V(G)^k$, it holds that $c^k_t(\bar{v}) = c^k_t(\bar{w}) \iff c^k_{t+1}(\bar{v}) = c^k_{t+1}(\bar{w})$. We then define the color of tuple $\bar{v}$ in $G$ as $c^k(\bar{v}) := c^k_t(\bar{v})$. We say that two labeled graphs $G$ and $H$ are indistinguishable by kWL (written $G \equiv_{kWL} H$) if the kWL algorithm yields the same coloring on both graphs, i.e.,
$$\{\{\, c^k(\bar{v}) \mid \bar{v} \in V(G)^k \,\}\} = \{\{\, c^k(\bar{w}) \mid \bar{w} \in V(H)^k \,\}\}.$$
The version of the WL test used in this paper is also known as the folklore kWL test (as defined, for instance, by Cai et al. (1992)). It is essential to note that an alternative version of this test, known as the oblivious kWL test, has also been explored in the machine learning field (Morris et al., 2019). Notably, it is established that for each k ≥ 1, the folklore kWL test possesses the same distinguishing capability as the oblivious (k + 1)WL test (Grohe & Otto, 2015).
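To make the refinement above concrete, the following is a minimal Python sketch of the folklore kWL test for $k \geq 2$ on small labeled graphs. This is our own illustration, not from the paper: graphs are assumed to be given as a triple of vertices plus vertex- and edge-label dictionaries, colors are kept as explicit nested tuples rather than hashed, and only a fixed number of refinement rounds is run.

```python
# A minimal sketch (not from the paper) of the folklore kWL test for k >= 2.
# A labeled graph is a triple (verts, vlab, elab): vlab maps vertex -> label,
# elab maps frozenset({u, v}) -> edge label for every edge.
from itertools import product

def atomic_type(g, tup):
    """A proxy for atp(G, tup): equality pattern, vertex labels, edge labels."""
    verts, vlab, elab = g
    k = len(tup)
    eq = tuple(tup.index(v) for v in tup)          # which positions coincide
    vl = tuple(vlab[v] for v in tup)
    el = tuple(elab.get(frozenset({tup[i], tup[j]}), "-")
               for i in range(k) for j in range(i + 1, k) if tup[i] != tup[j])
    return (eq, vl, el)

def fkwl(g, k, rounds):
    """Run `rounds` refinement steps; |V|^k rounds always suffice to stabilize."""
    verts, vlab, elab = g
    tuples = list(product(verts, repeat=k))
    col = {t: atomic_type(g, t) for t in tuples}
    for _ in range(rounds):
        col = {t: (col[t],
                   tuple(sorted(tuple(col[t[:j] + (w,) + t[j + 1:]]
                                      for j in range(k))
                                for w in verts)))
               for t in tuples}
    return sorted(map(repr, col.values()))         # the multiset of colors

# Two graphs G, H are kWL-indistinguishable when fkwl(G, k, r) == fkwl(H, k, r)
# for r large enough (e.g. |V|^k).
```

In practice, implementations compress the nested colors with a hash function after each round; the explicit nesting here is only meant to mirror the definition literally.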
Graph motif parameters. A labeled graph parameter is a function that maps labeled graphs into $\mathbb{Q}$. A labeled graph motif parameter (Curticapean et al., 2017) is a labeled graph parameter such that, for a labeled graph $G$, its value in $G$ is equivalent to a (finite) linear combination of homomorphism counts into $G$. More formally, a function $\Gamma$ is a labeled graph motif parameter if there are fixed labeled graphs $F_1, \dots, F_\ell$ and constants $\mu_1, \dots, \mu_\ell \in \mathbb{Q} \setminus \{0\}$, such that for every labeled graph $G$:
$$\Gamma(G) = \sum_{i=1}^{\ell} \mu_i \cdot \mathrm{homs}(F_i, G). \tag{1}$$
We refer to the set $\{F_1, \dots, F_\ell\}$ as the support $\mathrm{Supp}(\Gamma)$ of $\Gamma$.
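As a quick illustration (our own, not from the paper), the number of triangle subgraphs of a simple graph $G$ is a graph motif parameter with a one-element support: every homomorphism from the triangle $K_3$ into a loopless graph is injective, so $\mathrm{Sub}_{K_3}(G) = \tfrac{1}{6}\,\mathrm{homs}(K_3, G)$, the factor $\tfrac{1}{6}$ accounting for the six automorphisms of $K_3$. The brute-force sketch below checks this on a small example.

```python
# A small sanity check (illustrative): counting triangle subgraphs is a graph
# motif parameter with support {K3} and coefficient mu = 1/6.
from itertools import product

def homs(F_edges, F_verts, G_edges, G_verts):
    """Brute-force count of homomorphisms F -> G (undirected, unlabeled)."""
    G_adj = {frozenset(e) for e in G_edges}
    count = 0
    for f in product(G_verts, repeat=len(F_verts)):
        assign = dict(zip(F_verts, f))
        if all(frozenset({assign[u], assign[v]}) in G_adj for (u, v) in F_edges):
            count += 1
    return count

K3 = ([(0, 1), (1, 2), (0, 2)], [0, 1, 2])
G = ([(0, 1), (1, 2), (0, 2), (2, 3)], [0, 1, 2, 3])  # one triangle + pendant edge
print(homs(*K3, *G) // 6)  # -> 1, i.e. Sub_triangle(G) = (1/6) * homs(K3, G)
```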
It is often not easy to see whether a function on graphs is a graph motif parameter. Fortunately, we know that certain model counting problems for large fragments of first-order logic are indeed labeled graph motif parameters. In the following, we view labeled graphs as structures. Logical formulas can then be defined over a signature that contains a unary relation symbol $U_\sigma$, for each label $\sigma \in \Sigma$, and a binary relation symbol $E_\delta$, for each label $\delta \in \Delta$. Let $\varphi$ be a formula with free variables $\bar{x}$ of the form
$$\exists \bar{y}\; \psi(\bar{y}, \bar{x}) \wedge \psi^*(\bar{x}),$$
where $\psi$ is a formula consisting of positive atoms, conjunction and disjunction, and $\psi^*$ is a conjunction of inequalities and possibly negated atoms. We call such formulas positive formulas with free constraints. We write $\#\varphi(G)$ for the number of assignments to $\bar{x}$ for which the formula is satisfied in labeled graph $G$ (Curticapean et al., 2017). Theorem 1 (Dell et al. (2019)). Let $\varphi$ be a positive formula with free constraints. Then the function $\#\varphi$ is a labeled graph motif parameter.
Example 1. We start by showing that counting subgraphs and counting induced subgraphs are examples of graph motif parameters (for each fixed pattern). Suppose for simplicity that we consider graphs without node labels and with a single edge label, and let $E$ be the corresponding relation symbol. Consider a graph $H = (V, E)$ for which we want to count occurrences as a subgraph. We can do this via the following positive formula with free constraints:
$$\varphi_H := \bigwedge_{(u,v) \in E} E(x_u, x_v) \;\wedge\; \bigwedge_{u \neq v,\; u, v \in V} x_u \neq x_v.$$
Notice that this formula has a free variable $x_v$, for each node $v \in V$. If now we want to count the number of occurrences of $H$ as an induced subgraph, we can simply extend $\varphi_H$ with the following conjunction:
$$\bigwedge_{(u,v) \notin E} \neg E(x_u, x_v).$$
We denote the resulting subgraph-counting graph motif parameter by $\mathrm{Sub}_H$ and the induced-subgraph-counting graph motif parameter by $\mathrm{Ind}_H$.
Consider the following formula with free variables $x_1, \dots, x_k$:
$$\varphi_{IS} := \bigwedge_{1 \leq i \neq j \leq k} \neg E(x_i, x_j).$$
The satisfying assignments to $\varphi_{IS}$ in graph $G$ are precisely its independent sets of size at most $k$. We see then that counting $k$-independent sets is a graph motif parameter.
To further illustrate the theorem, we consider a more ad-hoc parameter. Suppose we are interested in $k$-independent sets $I$ where all vertices in $I$ share a neighbor of a certain type, say with vertex label either $U_a$ or $U_b$. It is difficult to intuit a priori whether this is a graph motif parameter. However, it can be easily observed from the following formula that this is in fact the case:
$$\varphi^{\star}_{IS} := \Big( \bigwedge_{1 \leq i \neq j \leq k} \neg E(x_i, x_j) \Big) \wedge \exists y\, \Big( \big(U_a(y) \vee U_b(y)\big) \wedge \bigwedge_{i=1}^{k} E(y, x_i) \Big).$$
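The counting semantics of such formulas can be evaluated by brute force; the following illustrative sketch (our own, for the single-edge-label, no-vertex-label case) counts the assignments to $x_1, \dots, x_k$ satisfying $\varphi_{IS}$, i.e., those whose distinct images are pairwise non-adjacent.

```python
# Illustrative brute-force count of satisfying assignments to phi_IS.
from itertools import product

def count_phi_is(verts, edges, k):
    adj = {frozenset(e) for e in edges}
    return sum(
        all(frozenset({a, b}) not in adj
            for i, a in enumerate(x) for j, b in enumerate(x) if i != j)
        for x in product(verts, repeat=k)
    )

# On the path 0-1-2 with k = 2: the 9 ordered pairs minus the 4 ordered
# adjacent pairs leave 5 satisfying assignments.
print(count_phi_is([0, 1, 2], [(0, 1), (1, 2)], 2))  # -> 5
```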
Counting graph motifs with the WL test. Take $k > 0$. Consider a graph parameter $\Gamma$. We say that the kWL test can distinguish $\Gamma$ if, for any pair of labeled graphs $G$ and $H$ with $\Gamma(G) \neq \Gamma(H)$, it follows that $G \not\equiv_{kWL} H$. The WL-dimension of a graph parameter $\Gamma$ is the minimal $k > 0$ such that the kWL test can distinguish $\Gamma$.
CHARACTERIZING THE WL-DIMENSION OF GRAPH MOTIF PATTERNS
In this section, we provide a characterization of the WL-dimension of labeled graph motif parameters. To grasp the results presented in this section, it is crucial to first define the concept of the treewidth of a graph G. This concept, well-known in graph theory, aims to quantify the degree of acyclicity in G.
Let $G = (V, E, \lambda, \kappa)$ be a labeled graph. A tree decomposition of $G$ is a tuple $(T, \alpha)$, where $T$ is a tree and $\alpha$ is a function that maps each node $t$ of $T$ to a subset of the nodes in $V$, satisfying the following:
• For each $(u, v) \in E$, there exists a node $t \in T$ with $\{u, v\} \subseteq \alpha(t)$.
• For each node $v \in V$, the set $\{t \in T \mid v \in \alpha(t)\}$ is a subtree of $T$. In other words, the nodes $t$ of $T$ for which $\alpha(t)$ contains $v$ are connected in $T$.
The width of a tree decomposition $(T, \alpha)$ of $G$ is defined as $\max_{t \in T} |\alpha(t)| - 1$. The treewidth of $G$ is then defined as the minimum width of any of its tree decompositions. It is easy to see that $G$ has treewidth one if and only if its underlying graph is a tree.
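To make the two conditions concrete, here is a small self-contained checker (our own illustration; the bag map and the tree are given explicitly) together with a width-one decomposition of a path.

```python
# Illustrative check that a given (T, alpha) is a valid tree decomposition:
# every edge is covered by some bag, and each vertex's bags form a connected
# subtree. T is given as a list of edges on the bag indices.
def is_tree_decomposition(V, E, T_edges, alpha):
    bags = list(alpha)  # bag indices
    # Condition 1: every edge of G lies inside some bag.
    if not all(any({u, v} <= alpha[t] for t in bags) for (u, v) in E):
        return False
    # Condition 2: for each vertex, the bags containing it are connected in T.
    adj = {t: set() for t in bags}
    for (s, t) in T_edges:
        adj[s].add(t); adj[t].add(s)
    for v in V:
        occ = {t for t in bags if v in alpha[t]}
        if not occ:
            return False
        stack, seen = [next(iter(occ))], set()
        while stack:
            t = stack.pop()
            if t in seen:
                continue
            seen.add(t)
            stack.extend(adj[t] & occ)
        if seen != occ:
            return False
    return True

# Width-1 decomposition of the path 0-1-2: bags {0,1} and {1,2}.
V, E = [0, 1, 2], [(0, 1), (1, 2)]
print(is_tree_decomposition(V, E, [(0, 1)], {0: {0, 1}, 1: {1, 2}}))  # True
```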
The following is the main result in this section. It characterizes the WL-dimension of a graph motif parameter in terms of the treewidth of its support set. Theorem 2. Let Γ be a labeled graph motif parameter. The WL-dimension of Γ is the maximum treewidth of any labeled graph in Supp(Γ). We now discuss our proof of Theorem 2. The full argument requires us to extend various existing results in the field to the setting of labeled graphs. We defer those proofs to the Appendix.
Let $\mathcal{F}$ be a class of labeled graphs. We write $G \equiv_{\mathcal{F}} H$ if for every $F \in \mathcal{F}$ it holds that $\mathrm{homs}(F, G) = \mathrm{homs}(F, H)$. Following Roberson (2022), we define the labeled homomorphism-distinguishing closure of $\mathcal{F}$, denoted $\mathrm{homclosure}(\mathcal{F})$, as the set of all labeled graphs $L$ for which the following holds for all labeled graphs $G, H$: $G \equiv_{\mathcal{F}} H \Rightarrow \mathrm{homs}(L, G) = \mathrm{homs}(L, H)$. We say that $\mathcal{F}$ is labeled homomorphism-distinguishing closed if $\mathrm{homclosure}(\mathcal{F}) = \mathcal{F}$. A recent paper by Neuen establishes that the class of all (unlabeled) graphs of treewidth at most $k$ is homomorphism-distinguishing closed (Neuen, 2023). As we establish next, this extends to the labeled setting. Here we denote by $LT_k$ the class of all labeled graphs of treewidth at most $k$. Lemma 3. Fix $k > 0$. The class $LT_k$ is labeled homomorphism-distinguishing closed.
Proof Sketch. For our proof we show that a recent breakthrough result by Roberson (2022) holds also in the labeled case. The result depends on the notion of weak oddomorphisms which, roughly speaking, are a particular kind of homomorphism that satisfies additional constraints (see Appendix B for details). To simplify the presentation, we actually only show that Roberson's result holds for edge-labeled graphs, i.e., those labeled graphs where all vertices have the same label. We then show that this is enough and that we can lift the homomorphism-distinguishing closedness from the edge-labeled case to the general labeled case.
Formally, we show the labeled analogue of (Roberson, 2022, Theorem 6.2): namely, that every class of labeled graphs $\mathcal{F}$ that satisfies the following two closure properties is also labeled homomorphism-distinguishing closed. (1) If $F \in \mathcal{F}$ and $F$ has a weak oddomorphism into $G$, then $G \in \mathcal{F}$. (2) $\mathcal{F}$ is closed under restriction to connected components and disjoint union. Once we have established that the theorem still holds, we show that both properties are satisfied by $LT_k$. For the first, in particular, we can build on parts of Neuen's argument for the unlabeled graph case. The second property is a well-known property of treewidth and requires no further insights.
The second intermediate result that we need is the following lemma, proved recently by Seppelt.
Lemma 4 (Lemma 4 in (Seppelt, 2023)). Let $\mathcal{F}$ and $\mathcal{L}$ be classes of labeled graphs. Suppose $\mathcal{F}$ is finite and its elements are pairwise non-isomorphic. For each $F \in \mathcal{F}$, let $\mu_F$ be an element of $\mathbb{R}$. Then $\mathcal{F} \subseteq \mathrm{homclosure}(\mathcal{L})$ if for all labeled graphs $G, H$ we have that:
$$G \equiv_{\mathcal{L}} H \;\Rightarrow\; \sum_{F \in \mathcal{F}} \mu_F \cdot \mathrm{homs}(F, G) = \sum_{F \in \mathcal{F}} \mu_F \cdot \mathrm{homs}(F, H).$$
We also need the following result, which establishes that two labeled graphs $G$ and $H$ are indistinguishable by kWL if and only if $\mathrm{homs}(F, G) = \mathrm{homs}(F, H)$ for every labeled graph $F$ of treewidth at most $k$. The unlabeled analogue of this result is due to Dvorák (2010) (rediscovered by Dell et al. (2018)).
Lemma 5. For all labeled graphs $G, H$ we have $G \equiv_{kWL} H$ if and only if $G \equiv_{LT_k} H$.
Proof of Theorem 2. Let $\mathcal{F}$ be the support of $\Gamma$ and $k$ be the maximum treewidth of a labeled graph in $\mathcal{F}$, for $k > 0$. We first show that kWL can distinguish $\Gamma$. Take two labeled graphs $G, H$ with $G \equiv_{kWL} H$. By Lemma 5, we also have that $G \equiv_{LT_k} H$. In particular, $\mathrm{homs}(F, G) = \mathrm{homs}(F, H)$ for every $F \in \mathcal{F}$, and therefore:
$$\Gamma(G) = \sum_{F \in \mathcal{F}} \mu_F \cdot \mathrm{homs}(F, G) = \sum_{F \in \mathcal{F}} \mu_F \cdot \mathrm{homs}(F, H) = \Gamma(H).$$
Suppose now, for the sake of contradiction, that the $\ell$WL test can distinguish $\Gamma$, for $\ell < k$. Let $G, H$ be arbitrary labeled graphs. Notice that we have the following:
$$G \equiv_{LT_\ell} H \;\Rightarrow\; G \equiv_{\ell WL} H \;\Rightarrow\; \Gamma(G) = \Gamma(H) \;\Rightarrow\; \sum_{F \in \mathcal{F}} \mu_F \cdot \mathrm{homs}(F, G) = \sum_{F \in \mathcal{F}} \mu_F \cdot \mathrm{homs}(F, H).$$
The first implication follows from Lemma 5 and the second one since the $\ell$WL test can distinguish $\Gamma$. But then Lemma 4 tells us that $\mathcal{F} \subseteq \mathrm{homclosure}(LT_\ell)$, and Lemma 3 that $\mathrm{homclosure}(LT_\ell) = LT_\ell$. This is a contradiction since $\mathcal{F}$ contains at least one labeled graph of treewidth $k > \ell$.
COUNTING OCCURRENCES OF GRAPH MOTIF PATTERNS
In this section, we establish that if a graph motif parameter Γ has WL-dimension k, then one can actually obtain the number of occurrences of Γ in a graph G by looking independently only at the colors of the individual k-tuples, rather than the full multiset of stable colors for all k-tuples. This is of particular interest when looking at the kWL test from the perspective of MPGNNs. The natural expression of the kWL test in MPGNNs leads to a final layer that assigns a color to each k-tuple of vertices. So, if Γ is a labeled graph motif parameter, and we want to compute Γ(G) over a labeled graph G, then Theorem 6 shows that it is not necessary to combine the information of the final layer in a global way, but that there is some uniform function that can map each individual color to a rational number, the sum of which will exactly be Γ(G).
More formally, we show the following result.
Theorem 6. Let $\Gamma$ be a labeled graph motif parameter and suppose that the maximum treewidth of a labeled graph in $\mathrm{Supp}(\Gamma)$ is at most $k$. Also, let $C_k$ denote the set of possible colors produced by the kWL test. Then there exists a computable function $\theta_\Gamma : C_k \to \mathbb{Q}$ such that $\Gamma(G) = \sum_{\bar{v} \in V(G)^k} \theta_\Gamma(c^k(\bar{v}))$, for every labeled graph $G$.
Theorem 6 is obtained by using the following lemma, whose proof can be found in the appendix.
Lemma 7. Let $F$ be a labeled graph with treewidth at most $k$ and let $C_k$ denote the set of possible colors produced by the kWL test. There exists a computable function $\eta_F : C_k \to \mathbb{N}$ such that $\mathrm{homs}(F, G) = \sum_{\bar{v} \in V(G)^k} \eta_F(c^k(\bar{v}))$, for each labeled graph $G$.
Theorem 6 is then obtained from Lemma 7 as follows. Let us assume that $\mathrm{Supp}(\Gamma) = \{F_1, \dots, F_\ell\}$ and $\Gamma(G)$ is defined as in Equation (1). We define $\theta_\Gamma : C_k \to \mathbb{Q}$ as $\theta_\Gamma(c) = \sum_{i=1}^{\ell} \mu_i \cdot \eta_{F_i}(c)$, for each $c \in C_k$. We then have that
$$\Gamma(G) = \sum_{i=1}^{\ell} \mu_i \cdot \mathrm{homs}(F_i, G) \qquad \text{(Equation (1))}$$
$$= \sum_{i=1}^{\ell} \Big( \sum_{\bar{v} \in V(G)^k} \mu_i \cdot \eta_{F_i}(c^k(\bar{v})) \Big) \qquad \text{(Lemma 7)}$$
$$= \sum_{\bar{v} \in V(G)^k} \sum_{i=1}^{\ell} \mu_i \cdot \eta_{F_i}(c^k(\bar{v})) = \sum_{\bar{v} \in V(G)^k} \theta_\Gamma(c^k(\bar{v})).$$
DETERMINING THE WL-DIMENSION FOR SUBGRAPH COUNTING
We finally move on to the recognizability problem for the WL-dimension. That is, for a given graph motif parameter $\Gamma$, we want to know the WL-dimension of $\Gamma$. Arvind et al. (2020) previously raised the question for which labeled graph patterns $H$ the graph parameter $\mathrm{Sub}_H$ can be expressed by 2WL. Here, we give a polynomial time algorithm that decides the WL-dimension for all subgraph counting problems, thus also resolving this question as a special case. For the analogous problem $\mathrm{Ind}_H$, we have already seen in Example 2 that the recognition problem is, in fact, trivial. In addition, we determine the WL-dimension of counting $k$-graphlets, i.e., all connected induced subgraphs on $k$ vertices, as an illustrative example of how further such results can be derived from the deep body of work on graph motif parameters.
We will begin with the case of subgraph counting. Recall that a minor of a labeled graph $H$ is a labeled graph $H'$ that can be obtained from $H$ by removing nodes or edges, or contracting edges (that is, removing an edge and simultaneously merging its endpoints). By Theorem 2, and the discussion of $\mathrm{spasm}(H)$ (the set of all homomorphic images of $H$), recognizing the WL-dimension of $\mathrm{Sub}_H$ is precisely the problem of recognizing the maximum treewidth of the homomorphic images of $H$. We can use classic results in the field of graph minors to express this as a property checkable in monadic second-order logic (MSO), for which model checking is tractable in our setting. We give a more detailed sketch of the argument below; full details are provided in the appendix. Theorem 8. Fix $k > 0$. There is a polynomial time algorithm for the following problem: given a labeled graph $H$, checking if the WL-dimension of $\mathrm{Sub}_H$ is at most $k$.
Proof Sketch. The algorithm first checks whether the treewidth of $H$ is at most $k$. It is well known that this can be decided in linear time for fixed $k$ (Bodlaender, 1996). Obviously, $H \in \mathrm{spasm}(H)$, and thus if the treewidth of $H$ is strictly larger than $k$, we are done and the algorithm rejects. By the Robertson-Seymour Theorem (Robertson & Seymour, 2004), there is a finite set of graphs $\mathcal{F}_k$ such that some $F \in \mathcal{F}_k$ is a minor of a graph $G$ if and only if the treewidth of $G$ is greater than $k$. We then show that there is an MSO formula $\varphi_F$ such that
$$H \models \varphi_F \iff F \text{ is a minor of some homomorphic image of } H.$$
Clearly then $\varphi = \bigvee_{F \in \mathcal{F}_k} \varphi_F$ is an MSO formula such that $H \models \varphi$ if and only if the maximum treewidth in the spasm is greater than $k$. By a standard algorithmic metatheorem of Courcelle (Courcelle, 1990), deciding $H \models \varphi$ is possible in time $f(\ell, \varphi) \cdot \|H\|$, where $\ell$ is the treewidth of $H$. As $\ell \leq k$ by our initial check, and $\varphi$ depends only on $k$, the problem is in $O(\|H\|)$ for fixed $k$.
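For small patterns, the homomorphic images in question can also be enumerated directly: they are exactly the quotients of $H$ obtained by identifying vertices while never merging two adjacent ones (which would create a self-loop). The following illustrative Python sketch (our own, for unlabeled patterns) enumerates these quotients; a naive decision procedure would then check the treewidth of each image, which is exactly the exponential approach that Theorem 8 avoids.

```python
# Illustrative enumeration of the quotients underlying spasm(H) for a small
# unlabeled pattern H. Quotients are deduplicated only syntactically here,
# not up to isomorphism.
def partitions(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def quotients(V, E):
    images = set()
    for p in partitions(list(V)):
        block = {v: i for i, b in enumerate(p) for v in b}
        if any(block[u] == block[v] for (u, v) in E):
            continue  # merging adjacent vertices would create a self-loop
        images.add(frozenset(frozenset({block[u], block[v]}) for (u, v) in E))
    return images

# For the path on 4 vertices, identifying the two endpoints yields a triangle,
# so the hereditary treewidth of P4 is 2 and Sub_P4 has WL-dimension 2.
for q in quotients([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]):
    print(sorted(sorted(e) for e in q))
```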
We now move on to study the WL-dimension of counting $k$-graphlets, a popular problem in graph mining and the analysis of social networks (Bressan et al., 2017; Jin et al., 2018). Here, no dedicated algorithm is necessary, as the WL-dimension will be immediate from observations about the respective support when viewed as a graph motif parameter. While the problem of counting $k$-graphlets is itself popular, its analysis in our context serves as an example of how the recently established theory of graph motif parameters can answer these types of recognizability questions.
The logical view, via positive formulas with free constraints, is not the only natural way to see that a graph parameter is indeed also a graph motif parameter. An alternative notion that is known to correspond to graph motif parameters is the problem of counting all induced subgraphs of size $k$ that satisfy some arbitrary (computable) property $\varphi$ (Roth & Schmitt, 2020). Thus, with $\varphi$ being the property of the graph being connected, this immediately tells us that counting $k$-graphlets is a graph motif parameter. In fact, Roth & Schmitt (2020) not only show that every such property is a graph motif parameter, but they also determine precisely when the $k$-clique is in the support (i.e., has a coefficient $\neq 0$). Moreover, in the case of properties that are closed under the removal of edges, there is a strong connection to combinatorial topology that can be leveraged to make this determination. Proposition 9. For $k > 1$, counting the number of $k$-graphlets has WL-dimension $k - 1$.
Proof Sketch. Let $\mathrm{Ind}^{C}_k$ be the graph parameter that counts the number of induced subgraphs on $k$ vertices that are connected, i.e., the number of $k$-graphlets. For a graph property $\varphi$, let $E^{\varphi}_k$ be the set of all edge-subsets of the complete graph on $k$ vertices such that the corresponding subgraph satisfies property $\varphi$. Roth & Schmitt (2020) showed that the problem of counting all induced subgraphs on $k$ vertices that satisfy $\varphi$ is a graph motif parameter, and that the $k$-clique is in the support of the parameter precisely when
$$\sum_{A \in E^{\varphi}_k} (-1)^{|A|} \neq 0.$$
Furthermore, they show that if $\varphi$ is closed under removal of edges, then $E^{\varphi}_k \setminus \{\emptyset\}$ forms a so-called simplicial graph complex, and the reduced Euler characteristic of this simplicial complex is also non-zero exactly when the $k$-clique is in the support of the parameter (see (Jonsson, 2008) for details regarding these notions). Now, let us use $C$ for the property of being connected and $NC$ for its complement, i.e., disconnectedness. Property $NC$ is closed under removal of edges and its simplicial complexes are well understood (cf. (Jonsson, 2008)). In particular, their reduced Euler characteristic is known to be $\pm(k-1)!$, which is non-zero for natural $k$. Now, it is enough to observe that
$$\sum_{A \in E^{NC}_k} (-1)^{|A|} + \sum_{A \in E^{C}_k} (-1)^{|A|} = 0,$$
as the properties are complementary and the two sums together form an alternating sum of binomials. Since the left-hand sum is non-zero, so is the right-hand sum for property $C$. We can therefore conclude that the $k$-clique is in the support of $\mathrm{Ind}^{C}_k$, demonstrating that the maximal treewidth in the support is at least $k - 1$. The argument of Roth & Schmitt (2020) also implicitly shows, by construction, that the support contains no graphs with more than $k$ vertices, thus also confirming that the treewidth in $\mathrm{Supp}(\mathrm{Ind}^{C}_k)$ is no higher than $k - 1$. Applying Theorem 2 completes the argument.
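The alternating sum from the sketch can be verified numerically for small $k$; the following illustrative computation confirms that for $k = 3$ the sum over connected spanning edge-subsets of $K_3$ equals $(k-1)! = 2$, hence is non-zero.

```python
# Quick numeric check of the alternating sum from the proof of Proposition 9
# for k = 3: summing (-1)^|A| over edge sets A of K_3 whose spanning subgraph
# is connected gives +/-(k-1)! = 2.
from itertools import combinations

def connected(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = set(), [0]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v])
    return len(seen) == n

k = 3
all_edges = list(combinations(range(k), 2))
total = sum((-1) ** len(A)
            for r in range(len(all_edges) + 1)
            for A in combinations(all_edges, r)
            if connected(k, A))
print(total)  # -> 2 == (k-1)!
```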
Moreover, Theorem 1 is constructive, and in many cases it is feasible to determine the support from logical formulations of parameters as in Example 1. For example, following the construction of Dell et al. (2019) for the formula $\varphi_{IS}$ reveals that counting $k$-independent sets also has WL-dimension $k - 1$. What is particularly noteworthy in this context is that we can use the exact same methods that have been used to study the parameterized complexity of these problems. Just as the WL-dimension, the complexity of computing a graph motif parameter $\Gamma$ depends precisely on the maximum treewidth in $\mathrm{Supp}(\Gamma)$ (Curticapean et al., 2017). Analysis of the complexity thus revolves around the exact same question of how high the treewidth of the support can become.
CONCLUSIONS AND LIMITATIONS
We have shown that recent developments in graph theory and counting complexity can be brought together to provide a precise characterization of the WL-dimension of labeled graph motif parameters. We have also shown that if a graph motif parameter Γ belongs to such a class, then the number of "appearances" of Γ in a given labeled graph G can be computed uniformly only from local information about the individual k-tuples from the output of the kWL test. The main limitation of our work is that it only concerns the worst-case behavior of the algorithms considered. In fact, the counterexamples our proof constructs for cases in which the kWL test is not capable of counting the number of occurrences of a graph motif Γ are rather complicated and do not necessarily resemble the cases encountered in practice. Therefore, it would be interesting to understand to what extent these results relate to average-case scenarios, or how well they align with practical applications. We leave this for future work.
A TECHNICAL DETAILS FOR SECTION 4
This section focuses on the proof of Lemma 7. While it is well known that the kWL test of a graph $G$ determines the homomorphism counts from every graph $H$ with $\mathrm{tw}(H) \leq k$, we are interested in a slightly stronger statement. In particular, let $H$ be a graph and let $\bar{a}$ be some tuple of $k$ distinct vertices in $H$. We show that for any two $k$-tuples of vertices $\bar{v}, \bar{w}$, possibly from different graphs, that obtain the same stable color by the kWL test, there are the same number of homomorphisms extending $\bar{a} \mapsto \bar{v}$ and $\bar{a} \mapsto \bar{w}$ to the respective graphs of $\bar{v}$ and $\bar{w}$. As a consequence, we see that it is not necessary to know the full multiset of stable colors to determine the homomorphism count from some pattern $F$; rather, $\mathrm{homs}(F, \cdot)$ can be computed by computing a partial count for each $k$-tuple independently and simply summing them up. This is of particular interest when looking at the kWL test from the perspective of GNNs. The natural expression of the kWL test in MPGNNs leads to a final layer that assigns a color to each $k$-tuple of vertices. If we want to compute $\mathrm{homs}(F, \cdot)$ with a GNN, Lemma 7 shows that it is not necessary to combine the information of the final layer in some global way, but that there is some uniform function that can map each individual color to a rational number, the sum of which will exactly be $\mathrm{homs}(F, \cdot)$.
We proceed with some technical definitions. For a tree decomposition $(T, B)$ and a subtree $T'$ of $T$, we will write $B(T')$ to mean $\bigcup_{u \in V(T')} B(u)$. It will be convenient to consider nice tree decompositions. A nice TD of a labeled graph $G$ is a rooted TD where the bags of all leaves are singletons and all non-root nodes are of one of three types:
• An introduce node u is the only child of parent p, and has B(u) = B(p) ∪ {a} for some a ∈ V (G) ∖ B(p).
• A forget node u is the only child of parent p, and has B(u) = B(p)∖{a} for some a ∈ B(p).
• Split nodes u, v are the only two children of parent p, and have B(u) = B(v) = B(p).
It is well known that if a graph has treewidth $k$, then it also has a nice TD of width $k$.
Proposition 10. Let $(T, B)$ be a tree decomposition of a labeled graph $G$, let $T'$ be a subtree of $T$, and let $\alpha$ be a node of $T$ outside of $T'$ that is adjacent to a node of $T'$. Then every edge $\{y, z\} \in E(G)$ with $z \in B(\alpha) \setminus B(T')$ and $y \in B(T')$ satisfies $y \in B(\alpha)$.
Proof. Suppose the statement were false. Then there is an edge $\{y, z\}$ with $z$ in $B(\alpha)$ but not in $B(T')$, and $y \in B(T')$ with $y \notin B(\alpha)$. Let $T^*$ be the subtree of $T \setminus T'$ that contains $\alpha$. By connectedness, $z$ occurs only in bags of $T^*$, whereas $y$ occurs in none of them. Hence the edge $\{y, z\}$ cannot be a subset of any bag, contradicting that $(T, B)$ is a tree decomposition.
Our plan in the proof of Lemma 7 will be to connect nice tree decompositions to kWL colors. Intuitively, just as the kWL test moves "outward" one vertex at a time at each step, a nice tree decomposition changes only one vertex at a time from parent to child. It is known that Hom(F, ⋅) is determined by the local homomorphisms from F [B(u)] for each node u of a tree decomposition, together with the overlap of bags in neighboring nodes in the tree decomposition. Roughly speaking, we show that a color produced by the kWL test has enough information to determine these two facets.
Our formalization of this idea will require significant technical overhead for bookkeeping. To this end we introduce the following notions. Let $F, G, H$ be labeled graphs, let $\bar{v} \in V(G)^k$, $\bar{w} \in V(H)^k$, and let $\mu : V(F) \to V(G)$, $\nu : V(F) \to V(H)$. For a set $S \subseteq V(F)$ with $|S| \leq k$, we say that $\mu, \nu$ are in $(\bar{v}, \bar{w})$-strict $S$-agreement if $\mu$ maps vertices in $S$ only to vertices in $\bar{v}$ and $\mu(x) = v_i \iff \nu(x) = w_i$ for all $x \in S$. For a tree $T$ with node labels $L(u) \subseteq V(F)$, we say that two homomorphisms are in colorful strict $T$ leaf agreement if for every leaf $\ell$ of $T$ there exist $\bar{v} \in V(G)^k$, $\bar{w} \in V(H)^k$ such that $\mu, \nu$ are in $(\bar{v}, \bar{w})$-strict $L(\ell)$-agreement and, for some $i \geq 1$, either $c^k_i(\bar{v}) = c^k_i(\bar{w})$ if $|L(\ell)| \leq k$, or otherwise $\mathrm{ct}(a, i, \bar{v}^{\ominus}) = \mathrm{ct}(b, i, \bar{w}^{\ominus})$, where $\bar{v} = \bar{v}^{\ominus} a$ and $\bar{w} = \bar{w}^{\ominus} b$.
Lemma 11. Let $F$, $G$ and $H$ be labeled graphs, let $\bar{v} \in V(G)^k$, $\bar{w} \in V(H)^k$ with $c^k_i(\bar{v}) = c^k_i(\bar{w})$ for some $i \geq 1$, and let $S \subseteq V(F)$ with $|S| \leq k$. Let $\mu \in \mathrm{Hom}(F, G)$, $\nu \in \mathrm{Hom}(F, H)$ such that they are in $(\bar{v}, \bar{w})$-strict $S$-agreement. Let $F'$ be a labeled graph with vertex $x$ such that $F' - x = F$ and $x$ is only adjacent to vertices in $S$. Then there is a bijection $\iota$ between $\mathrm{Hom}(F', G)[\mu]$ and $\mathrm{Hom}(F', H)[\nu]$ such that
1. for all $\mu' \in \mathrm{Hom}(F', G)[\mu]$ and $\iota(\mu')$ there are $\bar{v}' = \bar{v}a$, $\bar{w}' = \bar{w}b$ for $a \in V(G)$, $b \in V(H)$, such that $\mu', \iota(\mu')$ are in $(\bar{v}', \bar{w}')$-strict $S \cup \{x\}$-agreement, and
2. $\mathrm{ct}(a, i-1, \bar{v}) = \mathrm{ct}(b, i-1, \bar{w})$.
Proof. Let $S' = S \cup \{x\}$. Suppose $\mu' = \mu \cup \{x \mapsto a\}$ is in $\mathrm{Hom}(F', G)$. Since $c^k_i(\bar{v}) = c^k_i(\bar{w})$, there must be some $b \in V(H)$ such that $\mathrm{ct}(a, i-1, \bar{v}) = \mathrm{ct}(b, i-1, \bar{w})$. Let $\nu' = \nu \cup \{x \mapsto b\}$. Observe that because of the equal color tuples we have, in particular, $\mathrm{atp}(G, \bar{v}a) = \mathrm{atp}(H, \bar{w}b)$. As $x$ is only adjacent to vertices in $S'$, we have for every $\{y, x\} \in E(F')$, say with $\mu'(y) = v_j$ (and hence $\nu'(y) = w_j$ by strict agreement), that
$$\{\mu'(y), \mu'(x)\} \in E(G) \iff \{v_j, a\} \in E(G) \iff \{w_j, b\} \in E(H) \iff \{\nu'(y), \nu'(x)\} \in E(H).$$
The middle equivalence follows from $\mathrm{atp}(G, \bar{v}a) = \mathrm{atp}(H, \bar{w}b)$. Similarly, it follows immediately from the equivalence of the atomic types that the vertex labels of $a$ and $b$ must be the same, and that $\kappa_G(\{v_j, a\}) = \kappa_H(\{w_j, b\})$. Additionally, we clearly have that $\mu', \nu'$ are in $(\bar{v}a, \bar{w}b)$-strict $S'$-agreement.
This concludes the argument for the individual homomorphisms. To see that this is indeed always a one-to-one correspondence, it is enough to observe that because $c^k_i(\bar{v}) = c^k_i(\bar{w})$, for every distinct choice of $a$ there must be an appropriate distinct choice of $b$ as above.
Lemma 12. Let $G$ and $H$ be labeled graphs, let $F$ be a labeled graph with a nice TD $(T^*, B)$ of width $k$, and let $\bar{a}$ be a tuple made up of $k$ vertices that match a bag of $(T^*, B)$. Let $\bar{v} \in V(G)^k$, $\bar{w} \in V(H)^k$ with $c^k(\bar{v}) = c^k(\bar{w})$. Then $|\mathrm{Hom}(F, G)[\bar{a} \mapsto \bar{v}]| = |\mathrm{Hom}(F, H)[\bar{a} \mapsto \bar{w}]|$.
Proof. Let $r$ be a node in $T^*$ with $B(r) = \{a_i \mid i \in [k]\}$ and assume, w.l.o.g., that it is the root of $T^*$. We argue inductively on subtrees $T$ of $T^*$ that contain the node $r$ that there is a bijection $\iota$ from $\mathrm{Hom}(F[T], G)[\bar{a} \mapsto \bar{v}]$ to $\mathrm{Hom}(F[T], H)[\bar{a} \mapsto \bar{w}]$ such that for every $\mu$ in the domain of $\iota$, it and $\iota(\mu)$ are in colorful strict $T$ leaf agreement. Recall that colorful strict agreement requires agreement on colors (or color tuples) at some round $i \geq 1$ of the kWL algorithm. We will not explicitly argue these indexes in the following, but instead note that because $c^k(\bar{v}) = c^k(\bar{w})$, which forms our base case in the following induction, we can start from assuming agreement at an arbitrarily high $i$. It will be clear from our induction that starting from an $i$ higher than the longest path from $r$ to a leaf will be sufficient.
In the base case, $T$ contains only $r$. Since $c^k(\bar{v}) = c^k(\bar{w})$ we have that $\{v_i \mapsto w_i\}_{i \in [k]}$ is an isomorphism from $G[\bar{v}]$ to $H[\bar{w}]$. Thus $\bar{a} \mapsto \bar{v} \in \mathrm{Hom}(F[T], G)$ iff $\bar{a} \mapsto \bar{w} \in \mathrm{Hom}(F[T], H)$.
For the step, assume the statement holds for a tree $T'$. Suppose we extend $T'$ by a node $\alpha$ that is adjacent to it in $T^*$, giving priority to split nodes (i.e., suppose we always extend first by all adjacent split nodes).
If $\alpha$ is a split node, then the leaf bags remain unchanged. The only change that needs to be discussed is the special case that a new leaf is introduced. In that case, by definition, the new leaf has the same bag as an existing leaf and the same witnesses for colorful strict $T'$ leaf agreement apply to the new bag as well; that is, $\iota$ satisfies the desired property also for $T$.
If $\alpha$ is not a split node, then let $\beta$ be the leaf of $T'$ that is adjacent (in $T^*$) to $\alpha$. Suppose $\alpha$ is a forget node. By assumption, any $\mu \in \mathrm{Hom}(F[T'], G)[\bar{a} \mapsto \bar{v}]$ and $\iota(\mu)$ are in colorful strict $T'$ leaf agreement. Since $B(T) = B(T')$, the sets of homomorphisms do not change between $T'$ and $T$. It is therefore sufficient to check that the colorful strict leaf agreement constraints are also satisfied for leaf $B(\alpha)$; in that case, $\iota$ satisfies the desired properties also for $T$. Since $B(\alpha) \subseteq B(\beta)$, any $\mu, \nu$ that are in $(\bar{d}, \bar{e})$-strict $B(\beta)$-agreement are also in $(\bar{d}, \bar{e})$-strict $B(\alpha)$-agreement. It remains to verify the color constraint on $\bar{d}, \bar{e}$. If $|B(\beta)| \leq k$, then we already have $c^k_i(\bar{d}) = c^k_i(\bar{e})$ by assumption and are done. Otherwise we only know that $\mathrm{ct}(t, i, \bar{d}^{\ominus}) = \mathrm{ct}(u, i, \bar{e}^{\ominus})$ for $\bar{d} = \bar{d}^{\ominus} t$, $\bar{e} = \bar{e}^{\ominus} u$. Let $z$ be the vertex from $B(\beta)$ that is forgotten by $\alpha$. Let $j$ be the index such that $\mu(z) = d_j$. We distinguish two cases: first, suppose that there is only one such index and there is at least one other $y \in B(\alpha)$ such that $\mu(y) = d_j$. Then there must be some index $j'$ such that $d_{j'}$ is not in $\mu(B(\alpha))$. By strict agreement, the same must hold for $e_{j'}$ and $\nu(B(\alpha))$. Hence, $\mu, \nu$ are also in $(\bar{d}^{\ominus}[t/j'], \bar{e}^{\ominus}[u/j'])$-strict $B(\alpha)$-agreement, and we have $c^k_i(\bar{d}^{\ominus}[t/j']) = c^k_i(\bar{e}^{\ominus}[u/j'])$ by equality of the color tuples. For the other case, if $z$ is the only vertex in $B(\beta)$ that $\mu$ maps to $d_j$, or there is more than one index that equals $\mu(z)$, then simply take $j' = j$ and proceed as in the other case. This concludes the case where $\alpha$ is a forget node.
Suppose that $\alpha$ is an introduce node that introduces vertex $z$ and is adjacent (in $T^*$) to leaf $\beta$ of $T'$. By Proposition 10, we have that all vertices of $F[T]$ adjacent (in $F$) to $z$ are in $B(\alpha)$. By assumption, there is a bijection $\iota'$ from $\mathrm{Hom}(F[T'], G)[\bar{a} \mapsto \bar{v}]$ to $\mathrm{Hom}(F[T'], H)[\bar{a} \mapsto \bar{w}]$ that maps every homomorphism to one that is in colorful strict $T'$ leaf agreement. Thus, in particular, all homomorphisms are in $(\bar{d}, \bar{e})$-strict $B(\beta)$-agreement with $c^k_i(\bar{d}) = c^k_i(\bar{e})$ (because we assume width $k$, an introduce node can only follow nodes with at most $k$ vertices in the bag). We can therefore apply Lemma 11 to every pair $\mu, \iota'(\mu)$ to see that there is a bijection $\iota'_\mu$ from $\mathrm{Hom}(F[T], G)[\mu]$ to $\mathrm{Hom}(F[T], H)[\iota'(\mu)]$ such that every $\mu' \in \mathrm{Hom}(F[T], G)[\mu]$ is in colorful strict $T$ leaf agreement with $\iota'_\mu(\mu')$. If $|B(\alpha)| \leq k$, we can apply the same reasoning as for forget nodes above to observe that the appropriate constraint on the color follows from the lemma.
Combining the bijections $\iota'_\mu$ for all $\mu$ yields an injection $\iota$ from $\mathrm{Hom}(F[T], G)[\bar{a} \mapsto \bar{v}]$ to $\mathrm{Hom}(F[T], H)[\bar{a} \mapsto \bar{w}]$. Suppose, for the sake of contradiction, that some $\nu' \in \mathrm{Hom}(F[T], H)[\bar{a} \mapsto \bar{w}]$ is not in the image of $\iota$; then the restriction of $\nu'$ to $V(F[T'])$ is not a homomorphism from $F[T']$ to $H$ (extending $\bar{a} \mapsto \bar{w}$). But homomorphisms are closed under projection to induced subgraphs, so this is impossible. It follows that $\iota$ is also surjective and therefore is the desired bijection.
As an alternative proof strategy, the main ideas from above can be used to define an algorithm that computes $|\mathrm{Hom}(F, G)[\bar{a} \mapsto \bar{v}]|$ from $c^k(\bar{v})$. However, in practice the colors in the kWL test are not maintained in their explicit form as nestings of tuples and multisets, but rather by compactly representing the ct tuples via some hash function. In that setting, the viability of such an algorithm would be limited.
Proof of Lemma 7. Observe that for any two distinct tuples $\bar{v}, \bar{w} \in V(G)^k$, the sets of homomorphisms $\mathrm{Hom}(F, G)[\bar{a} \mapsto \bar{v}]$ and $\mathrm{Hom}(F, G)[\bar{a} \mapsto \bar{w}]$ are disjoint. It is then straightforward to see that $\mathrm{Hom}(F, G) = \biguplus_{\bar{v} \in V(G)^k} \mathrm{Hom}(F, G)[\bar{a} \mapsto \bar{v}]$ and hence $\mathrm{homs}(F, G) = \sum_{\bar{v} \in V(G)^k} |\mathrm{Hom}(F, G)[\bar{a} \mapsto \bar{v}]|$. By Lemma 12, we have that $|\mathrm{Hom}(F, G)[\bar{a} \mapsto \bar{v}]|$ depends only on $c^k(\bar{v})$. This naturally induces a function $\eta_{F,\bar{a}}$ that maps the possible colorings of the kWL test to the respective number of homomorphisms, i.e., such that
$$\sum_{\bar{v} \in V(G)^k} \eta_{F,\bar{a}}(c^k(\bar{v})) = \sum_{\bar{v} \in V(G)^k} |\mathrm{Hom}(F, G)[\bar{a} \mapsto \bar{v}]| = \mathrm{homs}(F, G).$$
Another proof detail of Lemma 12 that we would like to emphasize is that it is not strictly necessary for the stable colors of $\bar{v}$ and $\bar{w}$ to be equal. Inspection of the proof reveals that it is sufficient for them to be equal for $c^k_i$, where $i$ is greater than the maximal number of introduce nodes on a path from $r$ to a leaf. Building on this observation, Lemma 12 actually also shows that, especially for small patterns (the number of introduce nodes is of course no greater than the number of vertices), it is sufficient for the graphs to be equivalent under a limited number of steps of the kWL test, rather than equivalent in the stable color, to deduce that $\mathrm{homs}(F, \cdot)$ is the same in both graphs.
Theorem 13. Let $F$, $G$, and $H$ be labeled graphs and let $n = |V(F)|$. Suppose that $G$ and $H$ have the same color multiset after $n$ steps of the kWL test. Then $\mathrm{homs}(F, G) = \mathrm{homs}(F, H)$.
B TECHNICAL DETAILS FOR SECTION 3
The goal of this section is to show that for any labeled graph $F$ that has treewidth greater than $k$, there are labeled graphs $G$ and $H$ such that $G \equiv_{kWL} H$ but $\mathrm{homs}(F, G) \neq \mathrm{homs}(F, H)$. Thus $G$ and $H$ are witnesses to the fact that the kWL test cannot express the function $\mathrm{homs}(F, \cdot)$.
To do so, we show that two important results on the functions homs for unlabeled graphs still hold in the labeled setting. To distinguish more clearly between the labeled and unlabeled case, we will use the term plain graphs for those labeled graphs where the vertex- and edge-label alphabets are singletons. For technical reasons, we will initially focus on edge-labeled graphs, which are labeled graphs where the vertex-label alphabet is a singleton. For a labeled graph $G$, we write $\mathrm{plain}(G)$ for the plain graph obtained by making all edge and vertex labels of $G$ the same (without changing the vertices or edges of $G$).
The main property of homs that we are interested in is equivalence for fixed classes of graphs. Formally, let $\mathcal{F}$ be a class of labeled graphs. We write $G \equiv_{\mathcal{F}} H$ if for every $F \in \mathcal{F}$ it holds that $\mathrm{homs}(F, G) = \mathrm{homs}(F, H)$. Two families of graph classes will be particularly important in the following. We write $T_k$ for the class of plain graphs with treewidth at most $k$, $LT_k$ for the class of labeled graphs of treewidth at most $k$, and $ET_k$ for the class of edge-labeled graphs with treewidth at most $k$.
The first result that we need is the labeled version of a classic result due to Dvorák (2010) and later rediscovered by Dell et al. (2018). We recall the lemma as stated in the main body.
Lemma 5. For all labeled graphs $G, H$ we have $G \equiv_{kWL} H$ if and only if $G \equiv_{LT_k} H$.
We wish to warn the reader about an unfortunate confluence of terminology at this point. Some of the literature that we build on also refers to labeled graphs. Unfortunately, our use and the uses in the immediately related literature are pairwise distinct and thus require extra care from the reader. We will add further clarification at points where this becomes directly relevant in this section.
For the second key result we need to recall what it means for a class of plain graphs to be homomorphism-distinguishing closed. A class of plain graphs $\mathcal{F}$ is homomorphism-distinguishing closed if for every graph $F \notin \mathcal{F}$, there are graphs $G, H$ such that $G \equiv_{\mathcal{F}} H$ and $\mathrm{homs}(F, G) \neq \mathrm{homs}(F, H)$. The notion naturally extends to the setting of (edge-)labeled graphs by taking $\mathcal{F}$ as a class of (edge-)labeled graphs and every mention of $G$ and $H$ as (edge-)labeled graphs. To clearly distinguish the property in the text going forward, we will refer to the case where all involved graphs are labeled as labeled homomorphism-distinguishing closed, and analogously we use edge-labeled homomorphism-distinguishing closed when all graphs are edge-labeled. For plain graphs, the following was very recently shown by Neuen (2023).
Theorem 14 (Neuen (2023)). The class $T_k$ is homomorphism-distinguishing closed.
Our second key ingredient will be to show the same holds in the labeled setting. That is, that LT k is labeled homomorphism-distinguishing closed. To simplify the most technical parts of the argument we in fact first show that ET k is edge-labeled homomorphism-distinguishing closed.
For our setting where every edge has exactly one label, we are not aware of any clear way to build directly on Theorem 14 to show the labeled versions. Our plan will therefore be to show that the machinery underlying the proof of Theorem 14 still works in our labeled setting.
The necessary machinery is due to Roberson (2022), who introduced the theory of oddomorphisms and showed their deep connection to homomorphism distinguishability. We recall the definition here for the labeled setting; note, however, that the definition does not respect labels, except implicitly through the definition of a homomorphism of labeled graphs. The neighborhood $N_G(v)$ of vertex $v$ in labeled graph $G$ is the set of adjacent vertices (ignoring any labels). Let $F$ and $G$ be labeled graphs and let $\varphi$ be a homomorphism from $F$ to $G$. A vertex $a$ of $F$ is odd/even (w.r.t. $\varphi$) if $|N_F(a) \cap \varphi^{-1}(v)|$ is odd/even for every $v \in N_G(\varphi(a))$. An oddomorphism from $F$ to $G$ is a homomorphism $\varphi$ such that:
1. every vertex $a$ of $F$ is either odd or even with respect to $\varphi$, and 2. for every $v \in V(G)$, $\varphi^{-1}(v)$ contains an odd number of odd vertices.
A homomorphism $\varphi$ is a weak oddomorphism if there is a subgraph $F'$ of $F$ such that $\varphi|_{V(F')}$ is an oddomorphism from $F'$ to $G$.
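To make the two conditions concrete, the following is an illustrative checker (our own; adjacency is given as dicts mapping each vertex to its neighbor set, and $\varphi$ is assumed to already be a homomorphism, which is not re-checked).

```python
# Illustrative check of the oddomorphism conditions; F_adj and G_adj map each
# vertex to its set of neighbors, phi maps V(F) -> V(G).
def is_oddomorphism(F_adj, G_adj, phi):
    preim = {v: set() for v in G_adj}
    for a, v in phi.items():
        preim[v].add(a)
    odd = set()
    for a in F_adj:
        parities = {len(F_adj[a] & preim[v]) % 2 for v in G_adj[phi[a]]}
        if len(parities) > 1:
            return False              # a is neither odd nor even w.r.t. phi
        if parities == {1}:
            odd.add(a)                # a is odd w.r.t. phi
    return all(len(preim[v] & odd) % 2 == 1 for v in G_adj)

# Any isomorphism is an oddomorphism: e.g. the identity on a single edge.
K2 = {0: {1}, 1: {0}}
print(is_oddomorphism(K2, K2, {0: 0, 1: 1}))  # -> True
```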
For plain graphs, Roberson showed that any class of graphs that is closed under weak oddomorphisms, disjoint unions, and restriction to connected components is homomorphism-distinguishing closed. It is one main goal of this section to show that the same also holds for edge-labeled graphs.
Theorem 15 (Edge-labeled Version of Theorem 6.2 (Roberson, 2022)). Let F be a class of edge-labeled graphs such that
1. if F ∈ F and there is a weak oddomorphism from F to G, then G ∈ F, and
2. F is closed under disjoint unions and restrictions to connected components.
Then F is edge-labeled homomorphism-distinguishing closed.
Before we move on to the proof of Theorem 15, we first observe that the class ET_k satisfies the conditions of the theorem. Indeed, Condition 2 is trivially satisfied, as the treewidth of a graph is the maximum treewidth over the graph's connected components (and labels do not influence treewidth). For Condition 1 we can reuse the following key part of the proof of Theorem 14.
Proposition 16 (Neuen (2023)). Let F be a plain graph in T_k. If there is a weak oddomorphism from F to G, then G ∈ T_k.
Recall that labels serve no explicit role in the definition of an oddomorphism beyond restricting the set of homomorphisms overall. Formalizing this observation, we can show the same also for edge-labeled graphs.
Lemma 17. Let F ∈ ET_k and let G be an edge-labeled graph. If there is a weak oddomorphism from F to G, then G ∈ ET_k.
Proof. First, observe that for labeled graphs F, G, if φ is an oddomorphism from F to G, then φ is also an oddomorphism from plain(F) to plain(G). Now, if there is only a weak oddomorphism φ from F to G, there is some subgraph F′ such that φ|_{V(F′)} is an oddomorphism from F′ to G. Hence it is also an oddomorphism from plain(F′) to plain(G), meaning that there is a weak oddomorphism from plain(F) to plain(G).
We have tw(F) = tw(plain(F)) ≤ k, and therefore by Proposition 16 plain(G) ∈ T_k. Therefore also tw(G) = tw(plain(G)) ≤ k, i.e., G ∈ ET_k.
Assuming Theorem 15, we then immediately obtain the key technical result for homomorphism-distinguishing closedness in labeled graphs. Theorem 18. ET_k is edge-labeled homomorphism-distinguishing closed.
The rest of this section is organized as follows. We first verify that Theorem 15 holds in Appendix B.1. We then lift Theorem 18 to the labeled homomorphism-distinguishing closedness of LT_k in Appendix B.2, thus proving Lemma 3. We argue Lemma 5 in Appendix B.3. Finally, in Appendix B.4 we briefly note why Lemma 4 also holds in the labeled setting.
B.1 PROOF OF THEOREM 15
We will closely follow the proof of Theorem 6.2 in (Roberson, 2022). We will see that adapting the initial definitions in the right way will actually leave most of the framework for plain graphs intact. In many situations the effect of the edge-labels on the argument is very subtle. For the sake of verifiability we therefore repeat the (in parts slightly) modified arguments along the critical path here. We wish to emphasize that, if not stated otherwise, all proofs are effectively due to Roberson.
We first recall the construction of Roberson for the case of plain graphs, roughly following the presentation of Neuen (2023). Let G be a plain graph and U ⊆ V(G), and write I(v) for the set of edges incident to a vertex v. Define δ_{v,U} as 1 if v ∈ U and 0 otherwise. The graph CFI(G, U) is the graph with vertices

V(CFI(G, U)) = {(v, S) : v ∈ V(G), S ⊆ I(v), |S| ≡ δ_{v,U} mod 2}.

The graph has the edge {(v, S), (u, T)} in E(CFI(G, U)) if and only if {v, u} ∈ E(G) and {v, u} ∉ S∆T.
The definition can be naturally adapted to the edge-labeled setting by simply inheriting the labels of the original edges in G. To this end, let G now be an edge-labeled graph. We will define the labeled graph CCFI(G, U) with V(CCFI(G, U)) = V(CFI(G, U)), that is, the vertices are unaffected by the labels. For the edge-labeling function κ′ we simply set κ′({(v, S), (u, T)}) = κ({v, u}). That is, κ′({(v, S), (u, T)}) = c if and only if κ({v, u}) = c and {v, u} ∉ S∆T. We will show that this definition preserves the key properties of CFI graphs in the edge-labeled setting. We will say that two vertices u, v are adjacent via label i if there is an edge {u, v} with label i. In fact, this will require almost no technical changes.
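The construction is directly implementable; the sketch below (edge set as a dict from frozensets to labels, a convention of ours) enumerates CCFI(G, U) exactly as defined, inheriting each edge's label:

```python
from itertools import combinations

def ccfi(V, edge_labels, U):
    """Build CCFI(G, U) for an edge-labeled graph G.
    edge_labels: dict frozenset({u, v}) -> label; U: subset of V."""
    incident = {v: [e for e in edge_labels if v in e] for v in V}
    # vertices (v, S) with S ⊆ I(v) and |S| ≡ [v ∈ U] (mod 2)
    nodes = [(v, frozenset(S))
             for v in V
             for r in range(len(incident[v]) + 1)
             for S in combinations(incident[v], r)
             if r % 2 == (v in U)]
    # edge between (v, S) and (u, T) iff {v, u} ∈ E(G) and {v, u} ∉ S∆T;
    # the new edge inherits the label of {v, u}
    new_edges = {}
    for (v, S), (u, T) in combinations(nodes, 2):
        e = frozenset({v, u})
        if e in edge_labels and e not in (S ^ T):
            new_edges[frozenset({(v, S), (u, T)})] = edge_labels[e]
    return nodes, new_edges
```

CCFI(G) and its twist CCFI×(G) are then ccfi(V, kap, set()) and ccfi(V, kap, {v}) for an arbitrary vertex v.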
Lemma 19. Let G be a connected edge-labeled graph and let U, U′ ⊆ V(G). Then CCFI(G, U) ≅ CCFI(G, U′) if |U| ≡ |U′| mod 2.

Proof. Let e = {u, v} ∈ E(G) and U′ = U∆e. We show that CCFI(G, U) ≅ CCFI(G, U′). Let φ : V(CCFI(G, U)) → V(CCFI(G, U′)) be

φ((a, S)) = (a, S∆{e})  if a = u or a = v,  and  φ((a, S)) = (a, S)  otherwise.
As in the plain graph case, it is straightforward to verify that this is a bijection. Suppose that (a, S) and (b, T) are adjacent via label i in CCFI(G, U), i.e., {a, b} ∈ E_i(G) and {a, b} ∉ S∆T. Let S′, T′ be such that φ((a, S)) = (a, S′) and φ((b, T)) = (b, T′). If {a, b} = e, then S′ = S∆{e} and T′ = T∆{e}, hence S′∆T′ = S∆T. Thus, {a, b} is also not in S′∆T′ and φ((a, S)) is adjacent to φ((b, T)). In the other case, where {a, b} ≠ e, S′ and T′ will both, individually, contain {a, b} exactly if S and T, respectively, did. So again {a, b} ∉ S′∆T′. In both cases, the label of the edge in CCFI(G, U) and CCFI(G, U′) is simply inherited from {a, b}. Thus φ preserves labeled adjacency.
If (a, S) and (b, T) are not adjacent via label i in CCFI(G, U), then either {a, b} ∉ E_i(G) or {a, b} ∈ E_i(G) ∩ (S∆T). In the first case, clearly φ((a, S)) and φ((b, T)) cannot be adjacent via label i either. In the latter case, we have shown in our argument above that {a, b} ∈ S′∆T′ if and only if {a, b} ∈ S∆T. Since {a, b} ∈ S∆T, φ((a, S)) and φ((b, T)) will not be adjacent via any label in CCFI(G, U′).
We have shown that CCFI(G, U) ≅ CCFI(G, U′) for U′ = U∆e. This implies the lemma via the same final argument as in (Roberson, 2022, Lemma 3.2).
As a consequence we can focus our investigation on two specific graphs CCFI(G) ∶= CCFI(G, ∅) and its twist CCFI × (G) ∶= CCFI(G, {v}) for some arbitrary v ∈ V (G).
As a next step we adapt (Roberson, 2022, Lemma 3.4) to the edge-labeled setting. We first recall a definition from Roberson (2022) that carries over unchanged from the plain graph setting. For any U ⊆ V(G), the mapping (v, S) ↦ v is a homomorphism from CCFI(G, U) to G. We will refer to this mapping as ρ in the following. For a homomorphism φ ∈ Hom(F, G) define Hom_φ(F, CCFI(G, U)) = {ψ ∈ Hom(F, CCFI(G, U)) : ρ ∘ ψ = φ} and observe that the sets Hom_φ(F, CCFI(G, U)) for φ ∈ Hom(F, G) partition Hom(F, CCFI(G, U)).
The following statement is subtle in our context. Except for qualifying the graphs G and F to be edge-labeled, the statement is verbatim the same as Lemma 3.4 in (Roberson, 2022). That is, the labels are not explicitly considered in the system of equations. Note however that the equations are relative to some homomorphism from F to G, which makes the second set of equations implicitly follow the labeling. The fact that this system of equations does not change is the key to requiring only few changes in the later lemmas, as the argument proceeds mainly algebraically via this system of equations.
Lemma 20 (Edge-labeled version of Lemma 3.4 (Roberson, 2022)). Let G be a connected edge-labeled graph, let U ⊆ V(G), and let F be any edge-labeled graph. For a given φ ∈ Hom(F, G), define variables x^a_e for all a ∈ V(F) and e ∈ I(φ(a)). Then the elements of Hom_φ(F, CCFI(G, U)) are in bijection with the solutions of the following equations over Z_2:
∑_{e ∈ I(φ(a))} x^a_e = δ_{φ(a),U}   for all a ∈ V(F)   (2)

x^a_e + x^b_e = 0   for all {a, b} ∈ E(F), where e = {φ(a), φ(b)} ∈ E(G)   (3)
Proof. Suppose the x^a_e are a solution of the system of equations in the statement of the lemma. For a ∈ V(F), let S(a) ⊆ E(G) be the set of edges incident to φ(a) (regardless of label) for which x^a_e = 1, i.e., the set {e ∈ I(φ(a)) : x^a_e = 1}. Let ψ be the mapping a ↦ (φ(a), S(a)). We first show that ψ ∈ Hom_φ(F, CCFI(G, U)). By Equation (2), ψ(a) is guaranteed to be a vertex of CCFI(G, U) for all a ∈ V(F). To see that ψ preserves labeled adjacency, suppose {a, b} ∈ E_i(F).
Then e = {φ(a), φ(b)} ∈ E_i(G), since φ ∈ Hom(F, G). From Equation (3) it follows that x^a_e = x^b_e. Then, by construction of CCFI(G, U), ψ(a) is adjacent via label i to ψ(b) exactly if {φ(a), φ(b)} ∉ S(a)∆S(b), meaning that ψ(a) is adjacent via label i to ψ(b) if e is either in both S(a) and S(b) or in neither, which is true because x^a_e = x^b_e. The arguments for injectivity and surjectivity from the original proof (Roberson, 2022, Lemma 3.4) apply verbatim.
Since we can use the same system of equations (which does not explicitly consider the edge labels) as used in Roberson (2022), the development from Section 3 of (Roberson, 2022) holds almost unchanged from here. For the sake of completeness we repeat the relevant parts for the proof of Theorem 15 here, including some discussion on why the proofs are unaffected by the labels.
Let G be a connected edge-labeled graph, F an edge-labeled graph, and let φ ∈ Hom(F, G). Let R be the set of pairs (a, e) such that a ∈ V(F) and e ∈ I(φ(a)). We define the matrices A_φ ∈ Z_2^{V(F)×R} and B_φ ∈ Z_2^{E(F)×R} as

(A_φ)_{b,(a,e)} = 1 if a = b, and 0 otherwise;
(B_φ)_{{b,c},(a,e)} = 1 if a ∈ {b, c} and e = {φ(b), φ(c)}, and 0 otherwise.

Let χ_U be the characteristic vector of φ^{-1}(U), i.e., the vector where the element corresponding to vertex a ∈ V(F) is 1 if a ∈ φ^{-1}(U) and 0 otherwise (i.e., the vector (δ_{φ(a),U})_{a∈V(F)}). The system of equations from Lemma 20 can then be expressed as
$\begin{pmatrix} A_φ \\ B_φ \end{pmatrix} x = \begin{pmatrix} χ_U \\ 0 \end{pmatrix}$   (6)
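Deciding which case of Theorem 21 below applies is exactly a solvability question for Equation (6) over Z_2, decidable by Gaussian elimination. A generic sketch, with matrix rows encoded as Python integer bitmasks (the encoding and function name are ours):

```python
def gf2_solvable(rows, rhs):
    """Decide whether M x = b has a solution over GF(2).
    rows: list of ints (bitmask rows of M), rhs: list of 0/1 entries of b."""
    pivots = []                          # echelon rows as (row, rhs) pairs
    for row, b in zip(rows, rhs):
        for prow, pb in pivots:
            if row & (prow & -prow):     # row contains the pivot's lowest set bit
                row ^= prow
                b ^= pb
        if row:
            pivots.append((row, b))      # new pivot with a fresh lowest bit
        elif b:
            return False                 # reduced to 0 = 1: inconsistent
    return True
```

Building the concrete bitmask rows of A_φ and B_φ from the index set R is direct bookkeeping.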
Theorem 21 (Edge-labeled version of Theorem 3.6 (Roberson, 2022)). Let G be a connected edge-labeled graph, let U ⊆ V(G), and let φ ∈ Hom(F, G) for some edge-labeled graph F. Then homs_φ(F, CCFI(G)) > 0 and

homs_φ(F, CCFI(G, U)) = homs_φ(F, CCFI(G)) if Equation (6) has a solution, and homs_φ(F, CCFI(G, U)) = 0 if Equation (6) has no solution.

Proof Sketch. The original proof (Roberson, 2022, Theorem 3.6) is a straightforward application of Lemma 20. The equations are exactly the same as in the unlabeled case and the argument works without modification in our setting.

Theorem 22 (Edge-labeled Version of Theorem 3.13 (Roberson, 2022)). Let G be a connected edge-labeled graph and F be an edge-labeled graph. Then homs(F, CCFI(G)) ≥ homs(F, CCFI×(G)). Furthermore, the inequality is strict if and only if there exists a weak oddomorphism from F to G. If such a weak oddomorphism φ exists, there is a connected subgraph F′ of F such that φ|_{V(F′)} is an oddomorphism from F′ to G.
Proof Sketch. The original proof of (Roberson, 2022, Theorem 3.13) works unchanged, although it might not be considered straightforward to observe that this is the case. We therefore repeat the key steps here with emphasis on the points where it needs to be observed that edge labels do not affect the argument. We wish to stress again that this proof is due to Roberson and we repeat it here only to allow for a more targeted discussion of why labels have no effect on the argument.
If homs(F, CCFI(G)) ≠ homs(F, CCFI×(G)), then by Theorem 21 there is a φ ∈ Hom(F, G) such that Equation (6) over Z_2 does not have a solution. The non-existence of such a solution is equivalent to the existence of a solution to the following system (by the Fredholm alternative):
$\begin{pmatrix} (A_φ)^T & (B_φ)^T \\ (χ_U)^T & 0 \end{pmatrix} \begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$   (7)
where U = {û} contains the vertex such that CCFI × (G) ∶= CCFI(G, {û}), y is indexed by vertices of F , and z is indexed by edges of F .
Since the argument creates a certificate for the non-equivalence of the homomorphism counts, it might be unintuitive that constructing a weak oddomorphism from this certificate still works in the presence of the additional constraints of the edge labels. However, the certificate is still relative to a φ ∈ Hom(F, G) that respects the edge labeling and as we will see, this is sufficient for the argument to still hold.
Observe that a solution (y, z)^T to this system satisfies (A_φ)^T y = (B_φ)^T z. Let O = {a ∈ V(F) : y_a = 1} and let F′ be the subgraph of F with all vertices and edges E(F′) = {e ∈ E(F) : z_e = 1}.
The goal of the argument now is to show that φ is an oddomorphism from F′ to G (and thus a weak oddomorphism with respect to F). Since F′ is a subgraph of F, it is clear that φ is still a homomorphism from F′ to G, regardless of edge labels. The arguments for the properties of the oddomorphism themselves are parity arguments on the number of edges between φ^{-1}(v) and φ^{-1}(u) for adjacent v, u ∈ V(G). These arguments also hold unchanged in the edge-labeled setting, as the labeled homomorphism φ still preserves all adjacencies. Finally, F′ as constructed above may not be connected. As in the unlabeled case, we can then replace F′ by a connected component, by showing that if there is a weak oddomorphism φ|_{V(F′)} for some subgraph F′, then we can assume w.l.o.g. that F′ is connected (Roberson, 2022, Lemma 3.12).
Lemma 23. Let G be a connected edge-labeled graph. Then homs(G, CCFI(G)) > homs(G, CCFI×(G)).
Proof. It is known that the identity id on V(G) is an oddomorphism from G to G in the plain graph case (Roberson, 2022, Lemma 3.14). The definition of oddomorphism does not take the labels of edges into account (only the neighborhoods of vertices, which are unaffected by labels), and id is still a homomorphism in the edge-labeled setting. Hence, id must also be an oddomorphism from G to G in the edge-labeled setting. The statement then follows immediately from Theorem 22.
The lemma also demonstrates that CCFI(G) ≇ CCFI×(G) for any connected edge-labeled graph G.
This concludes our replication of the key statements of (Roberson, 2022, Section 3). We will require one additional statement that is not necessary in Roberson's original argument for the proof of Theorem 15. For technical reasons we will require an edge-labeled graph G such that, for every F of bounded size over a fixed alphabet, we can guarantee that homs(F, G) > 0. For plain graphs this is straightforward by taking a large enough clique for G. In the edge-labeled case we need an alternative gadget.
Lemma 24. Let ∆ be a finite edge-label alphabet and n ≥ 1. There is a connected edge-labeled graph G such that for every edge-labeled graph F with edge-label alphabet ∆ and n vertices, homs(F, G) > 0.
Proof. Let C be the set of all (up to isomorphism) edge-labelings of the n-vertex clique with labels from ∆. Let G* be the disjoint union of all graphs in C. To get the desired graph G, fix an arbitrary vertex of each component of G* and replace them all by a single vertex with the same adjacency relations. Any n-vertex graph F with edge-labels in ∆ is a subgraph of some element G′ ∈ C, and hence homs(F, G′) > 0. Since G′ is a subgraph of G, also homs(F, G) > 0.
We are now finally ready to prove Theorem 15. Again we can follow the original proof of Theorem 6.2 in (Roberson, 2022) very closely. We give the proof in full as some changes are made due to our streamlined presentation of the preceding statements.
Proof of Theorem 15. Let G be a graph not in F. Let G_1, ..., G_ℓ be the connected components of G and let J be the connected graph from Lemma 24 such that homs(G_i, J) > 0 for all i ∈ [ℓ]. By Condition (2), there is at least one connected component of G that is not in F; thus assume w.l.o.g. that G_1 ∉ F.
Let H be the disjoint union of CCFI(G_1) and J, and let H′ be the disjoint union of CCFI×(G_1) and J. We will show that H ≡_F H′ and homs(G, H) ≠ homs(G, H′). For the former, suppose F ∈ F is connected. By Condition (1), there is no weak oddomorphism from F to G_1, and thus homs(F, CCFI(G_1)) = homs(F, CCFI×(G_1)) according to Theorem 22. Because F is connected, each homomorphism into H is either a homomorphism into J or into CCFI(G_1), i.e., homs(F, H) = homs(F, CCFI(G_1)) + homs(F, J). The same holds for H′, i.e., homs(F, H′) = homs(F, CCFI×(G_1)) + homs(F, J). Since both terms in both sums are equal, we see that homs(F, H) = homs(F, H′). Since this holds for all connected F ∈ F, by Condition (2) it holds for all F ∈ F.
We move on to show that homs(G, H) ≠ homs(G, H′). First, observe that homs(G, H) = ∏_{i=1}^{ℓ} homs(G_i, H) and similarly for H′. Our goal will be to show that homs(G_i, H) ≥ homs(G_i, H′) > 0 for all i ∈ [ℓ], with at least one inequality being strict. For each i ∈ [ℓ] we have
homs(G_i, H) = homs(G_i, CCFI(G_1)) + homs(G_i, J) ≥ homs(G_i, CCFI×(G_1)) + homs(G_i, J) = homs(G_i, H′) > 0.
The equalities are by connectedness, as already argued above. The first inequality follows from Theorem 21, and positivity from the choice of J (recall that homs(G_i, J) > 0). By Lemma 23 the first inequality is strict for i = 1, and thus homs(G, H) > homs(G, H′).
B.2 ON VERTEX LABELS
Our discussion so far has focused only on edge-labeled graphs, i.e., labeled graphs where the vertex labeling function λ maps all vertices to the same label. In this section, we show that the general case with arbitrary vertex labels can be reduced to the edge-labeled case.
Let G = (V, E, λ, κ) be a labeled graph. Define EL(G) as the labeled graph (V, E, λ′, κ′) where λ′(v) = 1 for all v ∈ V and κ′({a, b}) = (λ(a), λ(b), κ({a, b})) for all {a, b} ∈ E with λ(a) ≤_lex λ(b).
Proposition 25. Let F, G be labeled graphs. Then φ ∈ Hom(F, G) if and only if φ ∈ Hom(EL(F ), EL(G))
Proof. The implication left to right is straightforward. For the other direction, suppose φ ∈ Hom(EL(F ), EL(G)). It is clear that for any edge e ∈ E(F ), φ(e) ∈ E(G) as the sets are not changed by the EL function. Furthermore, since κ ′ (e) = κ ′ (φ(e)), the tuples must agree on the third component, i.e., κ(e) = κ(φ(e)). Now suppose e = {u, v} and, w.l.o.g., λ(u) ≤ λ(v).
Then the first component of κ ′ (e) is λ(u), and therefore so is the first component of κ ′ (φ(e)), i.e., λ(u) = λ(φ(u)). Similarly, from the second component we get λ(v) = λ(φ(v)).
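The EL encoding itself is a one-pass rewriting; the following is a minimal sketch under the same illustrative (V, lam, kap) dictionary encoding as before (function name and encoding are ours):

```python
def el(G):
    """EL encoding: fold vertex labels into edge labels.
    Vertex labels must be comparable (the ≤_lex order in the text)."""
    V, lam, kap = G
    lam2 = {v: 1 for v in V}                    # all vertices get the same label
    kap2 = {}
    for e, c in kap.items():
        a, b = sorted(e, key=lambda v: lam[v])  # order endpoints by vertex label
        kap2[e] = (lam[a], lam[b], c)           # triple (λ(a), λ(b), κ(e))
    return V, lam2, kap2
```

Proposition 25 then lets homomorphism counts be transported between the two settings unchanged.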
If we encode a labeled graph as an edge-labeled graph, it is clear that this encoding can be reverted again. We will need to observe that this "invertibility" carries over also to the CCFI construction.
Proposition 26. Let G be a labeled graph. For every U ⊆ V(G), there exists a labeled graph H such that EL(H) = CCFI(EL(G), U).
Proof. Let H be the labeled graph with the vertices and edges of CCFI(EL(G), U). For the labels of H, set λ_H((v, S)) = λ_G(v) and κ_H({(v, S), (u, T)}) = κ_G({v, u}), where the subscripts distinguish the labeling functions of G and H. Any edge between (v, S) and (u, T) with λ_G(v) ≤_lex λ_G(u) in EL(H) will thus be labeled as (λ_G(v), λ_G(u), κ_G({v, u})). This is precisely the label of the edge {v, u} in EL(G), and thus the label inherited for {(v, S), (u, T)} in the construction of CCFI(EL(G), U). That is, EL(H) and CCFI(EL(G), U) have the same vertices, edges, and labels.
We will also want a similar invertibility for the graph chosen from Lemma 24.
Lemma 27. Fix an integer n ≥ 1 and vertex- and edge-label alphabets Σ and ∆. There is a connected labeled graph G with vertex- and edge-labels from Σ and ∆ such that for every labeled graph F with n vertices and vertex- and edge-labels from Σ and ∆, homs(EL(F), EL(G)) > 0.
Proof. Let C be the set of all (up to isomorphism) labeled n-vertex cliques over vertex- and edge-label alphabets Σ and ∆. The desired graph G is then constructed by taking the disjoint union of C and adding a new vertex v that is adjacent to one vertex of every C ∈ C. The labels of v and its incident edges can be chosen arbitrarily. Clearly, for any considered F there is some C ∈ C such that homs(F, C) > 0, and thus also homs(EL(F), EL(C)) > 0. Since C is a subgraph of G, EL(C) is a subgraph of EL(G).
Proof of Lemma 3. Let G be a labeled graph of treewidth greater than k. By inspection of the proof of Theorem 15 and Lemma 17, there exist edge-labeled graphs H, H′ such that H ≡_{ET_k} H′, homs(EL(G), H) ≠ homs(EL(G), H′), H is the disjoint union of CCFI(EL(G_1)) and some J, and H′ is the disjoint union of CCFI×(EL(G_1)) and J, where G_1 is a connected component of G of maximal treewidth.
By Proposition 26, there are labeled graphs X, X′ such that EL(X) = CCFI(EL(G_1)) and EL(X′) = CCFI×(EL(G_1)). By Lemma 27, we can assume w.l.o.g. that there is a labeled graph Y such that EL(Y) = J. Let Z be the disjoint union of X and Y, and Z′ be the disjoint union of X′ and Y. We see that EL(Z) = H and EL(Z′) = H′ and thus
homs(G, Z) = homs(EL(G), H) ≠ homs(EL(G), H ′ ) = homs(G, Z ′ )
where the equalities are by Proposition 25.
We also know that for every edge-labeled graph F * ∈ ET k , homs(F * , H) = homs(F * , H ′ ). Thus also for every labeled graph F ∈ LT k , homs(F, Z) = homs(EL(F ), H) = homs(EL(F ), H ′ ) = homs(F, Z ′ ). In other words Z ≡ LT k Z ′ and the proof is complete.
B.3 LEMMA 5
In this section we will discuss why Lemma 5 holds in the labeled setting. From Lemma 7 we directly see that for labeled graphs G, H, G ≡_{kWL} H implies homs(F, G) = homs(F, H) for every F ∈ LT_k. We thus focus our attention on showing that G ≢_{kWL} H implies the existence of an F′ ∈ LT_k such that homs(F′, G) ≠ homs(F′, H).
As a first step, we consider the logic C^L_{k+1} of first-order formulas with k + 1 variables and counting quantifiers 4 over the signature that contains unary relation symbols U_σ for each label σ in the vertex label alphabet Σ, and binary relation symbols E_δ for each label δ in the edge label alphabet ∆. We add the superscript L here to distinguish from the usual use of C_{k+1} for unlabeled graphs. Cai et al. (1992) famously showed that for graphs G, H we have G ⊧ ϕ ⇐⇒ H ⊧ ϕ for all ϕ ∈ C_{k+1} if and only if G ≡_{kWL} H. It is folklore that this also holds in the setting of labeled graphs. We in fact only need one direction here: if G ≢_{kWL} H, then there is a ϕ ∈ C^L_{k+1} such that G ⊧ ϕ and H ⊭ ϕ. It is straightforward to verify that the original argument for this direction is unaffected by labels (Cai et al., 1992, Theorem 5.4, case ¬1 ⇒ ¬2).
In the rest of this section we will show that the existence of such a formula ϕ implies the existence of a labeled graph F for which the homomorphism count into G and H differs. For this purpose we can adapt an argument by Dvorák (2010).
We first recall the key definitions from (Dvorák, 2010). A k-marked 5 labeled graph G is a graph with a partial function mark_G : [k] → V(G). The set of active markings M_G is the subset of [k] for which mark_G is defined. Suppose that G, H are k-marked labeled graphs with M_G ⊆ M_H; we define homs(G, H) as the number of those homomorphisms φ that also preserve markings, i.e., where φ(mark_G(i)) = mark_H(i) for each active marking i ∈ M_G. A k-marked labeled quantum graph G is a finite linear combination with real coefficients of k-marked labeled graphs. We require all graphs in the linear combination to have the same set of active markings, which we refer to as M_G. The function homs is extended to the case where the first argument is a k-marked labeled quantum graph G = ∑_{i}^{ℓ} α_i G_i as homs(G, H) = ∑_{i}^{ℓ} α_i homs(G_i, H). The treewidth of a quantum graph ∑_i α_i G_i is max{tw(G_i) : α_i ≠ 0} 6.
A k-marked labeled quantum graph models a formula ϕ for graphs of size n if M G consists of the indices of the free variables of ϕ, and for each labeled graph H on n vertices with M G ⊆ M H :
• if H ⊧ ϕ then homs(G, H) = 1, and
• if H ⊭ ϕ then homs(G, H) = 0.
From here on we follow the same strategy as Dvorák, but require some changes to correctly handle labeled graphs.
A product G 1 G 2 of two k-marked labeled graphs G 1 , G 2 is constructed by taking the disjoint union of G 1 and G 2 , identifying the vertices with the same marking, and suppressing parallel edges with the same edge label. That is, we allow the product to have self-loops and to have parallel edges with distinct labels. If there are vertices v ∈ V (G 1 ), u ∈ V (G 2 ) with the same marking but different vertex labels (we say that G 1 , G 2 are incompatible), the product is a single vertex with a self-loop (and arbitrary labels).
For k-marked labeled quantum graphs G_1 = ∑_i α_{1,i} G_{1,i} and G_2 = ∑_i α_{2,i} G_{2,i}, the product G_1 G_2 is defined as ∑_{i,j} β_{i,j} G_{1,i} G_{2,j}. The coefficient β_{i,j} equals 0 if G_{1,i} G_{2,j} is the empty graph, contains a loop, or contains parallel edges with distinct labels; otherwise β_{i,j} = α_{1,i} α_{2,j}. The motivation for this distinction in the coefficients β_{i,j} is that if H is a labeled graph, then by definition it has no self-loops and any two vertices are connected by at most one edge with one label; hence there can be no homomorphisms into H from a product G_{1,i} G_{2,j} that produces such situations. Note that in the situation where G_{1,i}, G_{2,j} are incompatible, at least one of G_{1,i}, G_{2,j} must have 0 homomorphisms into H, which is why we (ab)use the self-loop as a failure state in this case. In particular, we have homs(G_1 G_2, H) = homs(G_1, H) homs(G_2, H) for every labeled graph H. We write 0 for a k-marked labeled quantum graph where all coefficients are zero.
Lemma 28 (Labeled version of Lemma 6, (Dvorák, 2010)). For each formula ϕ ∈ C^L_{k+1} and for each positive integer n, there exists a k-marked labeled quantum graph G of treewidth at most k such that G models ϕ for labeled graphs of size n.
Proof. We construct G inductively on the structure of ϕ. The base cases significantly differ from those in the proof of (Dvorák, 2010, Lemma 6). However, once we have established the fact for the base cases, the inductive steps remain the same as they rely only on the property that homs(G 1 G 2 , H) = homs(G 1 , H)homs(G 2 , H), as discussed above, and a technical lemma for quantum graphs (Dvorák, 2010, Lemma 5) that is not affected by labels.
If ϕ = true, let G be the empty graph; if ϕ = false, let G = 0. If ϕ = U_σ(x_i), let G be the graph with a single vertex v with mark_G(i) = v and λ(v) = σ.
If ϕ = (x_i = x_j), let G = ∑_{σ∈Σ} G_σ (the sum ranges over the elements of the vertex label alphabet), where G_σ is the k-marked labeled graph with a single vertex v, mark_G(i) = mark_G(j) = v, and λ(v) = σ. For any specific k-marked H, the vertex marked as i and j will have only one vertex label. Hence, there will be one homomorphism from exactly one term G_σ (where σ matches the label in H) and 0 homomorphisms from all other terms of G.
If ϕ = E δ (x i , x j ) there are two cases: if i = j, then let G = 0 as the predicate cannot be satisfied in a self-loop free H. If i ≠ j, let G = ∑ σ,σ ′ ∈Σ G σ,σ ′ where G σ,σ ′ is the graph with adjacent vertices v and u, marked as i, j and labeled with σ, σ ′ , respectively, and κ({v, u}) = δ. As above, for any given H, one term of G will have 1 homomorphism into H, and the others will all have 0.
That is, for every base case we have shown that the appropriate k-marked labeled quantum graph exists. It is clear that in all the base cases the treewidth is at most 1. The rest of the induction proceeds exactly as in (Dvorák, 2010, Lemma 6).
Proof of Lemma 5. As discussed above, we have that for labeled graphs G, H, G ≢_{kWL} H implies the existence of a ϕ ∈ C^L_{k+1} such that G ⊧ ϕ and H ⊭ ϕ. By Lemma 28, there is a k-marked labeled quantum graph F = ∑_i α_i F_i with treewidth at most k that models ϕ. Hence ∑_i α_i homs(F_i, G) = 1 ≠ 0 = ∑_i α_i homs(F_i, H), which means there must be some i such that homs(F_i, G) ≠ homs(F_i, H) (recall that the linear combination is always finite). Since F has treewidth at most k, F_i ∈ LT_k and therefore G ≢_{LT_k} H.
B.4 ON LEMMA 4 FOR LABELED GRAPHS
Seppelt (2023) originally showed Lemma 4 for unlabeled graphs. Here we briefly note why the lemma holds unchanged in the labeled case. In particular, the lemma relies on three facts.
1. First, that homs(F, G 1 × G 2 ) = homs(F, G 1 )homs(F, G 2 ). Where × is the direct product as defined in (Lovász, 1967).
2. Second, that the matrix (homs(K, L))_{K,L∈L_n} is invertible, where L_n is the class of all graphs with at most n vertices.
3. And finally, that if G ≡ F H, then also G × K ≡ F H × K for all graphs K.
The first two points are classic results by Lovász (1967), which were originally shown for all relational structures. They thus hold unchanged also for labeled graphs. For the final point, recall that G ≡ F H means that for every F ∈ F we have homs(F, G) = homs(F, H). Then by the first point also homs(F, G × K) = homs(F, G)homs(F, K) = homs(F, H)homs(F, K) = homs(F, H × K).
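The first and third facts can be sanity-checked numerically on small instances. Below is a sketch of the direct product for labeled graphs, restricted to label-consistent vertex pairs (which does not change homomorphism counts from totally labeled graphs); the (V, lam, kap) encoding matches the earlier sketches and is our own:

```python
def direct_product(G1, G2):
    """Direct (tensor) product of labeled graphs in the (V, lam, kap) encoding."""
    V1, lam1, kap1 = G1
    V2, lam2, kap2 = G2
    V = [(a, b) for a in V1 for b in V2 if lam1[a] == lam2[b]]
    lam = {v: lam1[v[0]] for v in V}
    kap = {}
    for e1, c1 in kap1.items():
        for e2, c2 in kap2.items():
            if c1 != c2:
                continue                          # edge labels must match
            (a, b), (x, y) = tuple(e1), tuple(e2)
            for u, w in (((a, x), (b, y)), ((a, y), (b, x))):
                if u in lam and w in lam:         # both endpoints label-consistent
                    kap[frozenset({u, w})] = c1
    return V, lam, kap
```

Together with the brute-force counter sketched earlier, one can verify homs(F, G1 × G2) = homs(F, G1) · homs(F, G2) on small examples.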
C PROOF OF THEOREM 8
This section will make some slight departures from the terminology used in the main body of the paper. For a more focused presentation, we will first discuss the case of unlabeled graphs and show how to additionally handle labels afterwards. In the main body, the spasm of a graph was defined as the set of all homomorphic images, but here it will be simpler to take an alternative (equivalent) perspective. A quotient G_τ of a graph G is obtained by taking a partition τ of V(G) and constructing the graph like G but with all vertices in the same block of τ identified. That is, the vertices of G_τ are the blocks of τ, and the incidence of a vertex in G_τ is the incidence of all vertices in the corresponding block of the partition. It is a standard observation that the set of all quotients of G without self-loops is precisely spasm(G) (see, e.g., Curticapean et al. (2017)). It will be convenient to also consider the set of all quotients of G, i.e., including those with self-loops, which we will refer to as spasm○(G).
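For small patterns the quotient view makes spasm○ and spasm directly computable; a naive sketch for plain graphs (partition enumeration is Bell-number-exponential, cf. the practical remark at the end of this section; names are ours):

```python
def partitions(xs):
    """Yield all set partitions of the list xs (Bell-number many)."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def spasm(V, E):
    """Naive spasm: all loop-free quotients of a plain graph (V, E).
    Returns a list of (num_blocks, edge set over block ids)."""
    out = []
    for part in partitions(list(V)):
        block = {v: i for i, b in enumerate(part) for v in b}
        if any(block[u] == block[v] for u, v in map(tuple, E)):
            continue                 # quotient would create a self-loop
        qedges = {frozenset({block[u], block[v]}) for u, v in map(tuple, E)}
        out.append((len(part), qedges))
    return out                       # note: may contain isomorphic duplicates
```

For labeled graphs one additionally rejects partitions whose blocks mix vertex labels or whose quotient would carry parallel edges with distinct labels, exactly the two conditions formalized under "Adding Labels" below.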
The arguments of this section will revolve around formulas of monadic second-order logic (MSO), that is, formulas of first-order logic that additionally allow for quantification over unary predicates. For a formal definition see, e.g., (Courcelle, 1990).
We will decide the treewidth of graphs by deciding whether they contain certain other graphs as minors. To that end we first define the notion of minors formally. A graph H is a minor of G if there is a minor model of H in G, that is, a mapping f from V(H) to subsets of V(G) such that: 1. ∀u ≠ v ∈ V(H): f(u) ∩ f(v) = ∅, 2. ∀v ∈ V(H): G[f(v)] is connected, and 3. ∀{u, v} ∈ E(H) there is an edge (in G) between some vertex in f(u) and some vertex in f(v).
Checking whether graph H is a minor of graph G via an MSO formula is a standard construction. Interestingly, to check whether H is a minor of some quotient of G is simpler, at least in terms of formulating the question in MSO. The key insight here is that the second condition of minor models can effectively be ignored, since there will always be a quotient where all vertices of G[f (v)] are identified. Instead it is enough to guarantee that the image is non-empty. In concrete terms, to check whether H is a minor of a graph in spasm ○ (G) we will use the following MSO formula.
QuotMinor_H ≡ ∃X_1, ..., X_c ⊆ V(G) ( ⋀_i |X_i| ≥ 1 ∧ ⋀_{i≠j} (X_i ∩ X_j = ∅) ∧ ⋀_{{v_i,v_j}∈E(H)} ∃x ∈ X_i ∃y ∈ X_j ((x, y) ∈ E(G)) )
Intuitively, the sets X_i correspond to the images of the minor model for the vertices v_i of H. The minor is then found in the quotient G_τ, where τ extends the disjoint sets X_1, ..., X_c by the singletons {v} for all v ∈ V(G) not contained in any X_i for i ∈ [c]. Lemma 29. Let G and H be graphs. Then H is a minor of a quotient G_τ ∈ spasm○(G) if and only if G ⊧ QuotMinor_H.
Proof. Suppose H is a minor of G_τ and let f be the respective minor model of H in G_τ. Let µ be the edge-surjective homomorphism (i.e., the homomorphic image map) from G onto G_τ induced by the quotient (i.e., every vertex maps to the block in τ that contains it). Let us refer to the vertices of H as v_1, ..., v_c. The formula is satisfied when for every i ∈ [c], the second-order variable X_i is assigned to {w ∈ V(G) : µ(w) ∈ f(v_i)}, that is, all vertices in G that are mapped by µ to vertices in f(v_i). Clearly, the sets X_i are disjoint, as the images of f are disjoint. Furthermore, since f is a minor model, for every {v_i, v_j} ∈ E(H) there is at least one edge {a, b} with a ∈ f(v_i), b ∈ f(v_j). Since µ is edge-surjective, there are a′ ∈ X_i and b′ ∈ X_j such that {a′, b′} ∈ E(G), µ(a′) = a, and µ(b′) = b.
For the other direction now assume that the formula holds. Let τ_X be the partition of V(G) induced by some satisfying choice of X_1, ..., X_c (any vertex v not in any of these sets corresponds to a singleton {v} in the partition). Consider the graph G_{τ_X} and let us refer to the vertex corresponding to the block defined by X_i as u_i. We claim that f : v_i ↦ {u_i} is a minor model of H in G_{τ_X}. Conditions 1 and 2 of a minor model are trivially satisfied. For Condition 3, observe that if there is an edge {v_i, v_j} ∈ E(H) but no edge {u_i, u_j} ∈ E(G_{τ_X}), then there can be no x ∈ X_i, y ∈ X_j that have an edge between them in E(G), contradicting the satisfaction of the final block of conjuncts in QuotMinor_H.
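For testing on small graphs, the semantics of QuotMinor_H can also be evaluated by brute force instead of MSO model checking; an exponential sketch (all names are ours):

```python
from itertools import product

def quot_minor(VH, EH, VG, EG):
    """Is H a minor of some quotient of G? Searches for disjoint non-empty
    sets X_1..X_c exactly as in the formula QuotMinor_H (exponential)."""
    VH, VG = list(VH), list(VG)
    c = len(VH)
    # assign each vertex of G to one of the X_i, or to "none" (index c)
    for assign in product(range(c + 1), repeat=len(VG)):
        X = [{VG[j] for j, k in enumerate(assign) if k == i} for i in range(c)]
        if any(not Xi for Xi in X):
            continue                              # every X_i must be non-empty
        ok = all(
            any(frozenset({x, y}) in EG for x in X[i] for y in X[j])
            for i, j in ((VH.index(u), VH.index(v)) for u, v in map(tuple, EH))
        )
        if ok:
            return True
    return False
```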
We can resolve the possibility of self-loops with a simple observation about minors of quotient graphs. A self-loop occurs in a quotient G_τ if there is a block B of τ that contains adjacent vertices v, u. W.l.o.g., assume for now that v, u are the only adjacent vertices in B. If H is a minor of G_τ, it is also a minor of G_{τ′}, where τ′ is like τ but with u as a singleton rather than in B. The vertex for {u} in G_{τ′} will be adjacent to the vertex for block B′ = B ∖ {u}. Contracting the vertices for B′ and {u} will produce exactly G_τ without the self-loop at B. Iterating this idea shows that G_τ without self-loops is a minor of some loop-free quotient of G. Proposition 30. Let G be a graph and let H be a loop-free graph. Suppose H is a minor of some graph in spasm○(G). Then H is a minor of some graph in spasm(G).
Robertson and Seymour famously showed that the graph minor relation is a well-quasi-ordering. Together with the classic observation that any well-quasi-ordering has a finite set of minimal elements, this leads to the following standard result. Theorem 31 (Robertson & Seymour (2004)). For every minor-closed class of graphs P, there exists a finite set of graphs F such that G ∈ P if and only if no F ∈ F is a minor of G. Theorem 32. Deciding max{tw(F) : F ∈ spasm(G)} ≥ k is feasible in fixed-parameter linear time when parameterised by k, and in linear time for fixed k.
Proof. We first check whether tw(G) ≤ k − 1. It is well known that this can be decided in linear time for fixed k (Bodlaender, 1996). If not, then tw(G) ≥ k and the algorithm returns true.
The property of having treewidth at most k − 1 is closed under graph minors. By Theorem 31 there exists a finite set F of forbidden minors for the class of graphs with treewidth at most k − 1. Furthermore, the set F depends only on the parameter k. Thus, a graph H has treewidth ≥ k iff it does not have treewidth ≤ k − 1 iff it has some graph of F as a minor. Combining this with Proposition 30, we can therefore decide whether the treewidth in the spasm is at least k by checking for every F ∈ F whether F is a minor of any quotient of G. According to Lemma 29, this is equivalent to checking whether the formula ϕ ≡ ⋁_{F∈F} QuotMinor_F is satisfied by G. By Courcelle's Theorem (Courcelle, 1990) we can decide G ⊧ ϕ in time f(k, ϕ) · |G|, where k is the treewidth of G. Since ϕ depends only on k, this is equivalent to f(k) · |G| and hence linear for fixed k.
Finally, we wish to note that this result is primarily of theoretical interest. For most practical pattern sizes, naive enumeration of the spasm and checking the treewidth of each element individually with state-of-the-art systems for treewidth computation is feasible. The bottleneck in practice is the size of the pattern, as enumerating the spasm via naive methods, e.g., by enumerating all partitions, quickly becomes infeasible: the number of partitions grows as the Bell numbers (see Sequence A000110 in the On-Line Encyclopedia of Integer Sequences (OEIS Foundation Inc., 2023)).
Adding Labels We finally discuss the necessary changes for labeled graphs in the above argument, from which Theorem 8 then follows immediately. By definition, the treewidth of labeled graphs is unaffected by the labels. As such, we can simply ignore labels when checking for minors. Where the labels make a difference, however, is in the set spasm itself. Not all loop-free quotients are homomorphic images in the labeled case; rather, we require additional restrictions.
A quotient G_τ will fail to be a homomorphic image of a labeled graph in two cases: first, if a block of τ contains two vertices with different vertex labels, and second, if the quotient would create parallel edges with different labels. To lift the previous argument to the labeled case it is therefore enough to enforce these two extra conditions in the QuotMinor_F formulas. To this end, recall that in a satisfying interpretation of the formula, the sets X_i correspond precisely to the non-trivial blocks of the quotient for which F is a minor. The two extra restrictions can then be enforced simply by conjunction with the following two formulas inside the scope of the second-order quantification (where Σ and ∆ are the vertex- and edge-label alphabets, respectively).
⋀_{i=1}^{c} ⋁_{σ∈Σ} ∀x ∈ X_i : U_σ(x)   (8)

¬ ⋁_{i,j∈[c], i≠j} ∃x_1, x_2 ∈ X_i ∃y_1, y_2 ∈ X_j ⋁_{δ,δ′∈∆, δ≠δ′} ((x_1, y_1) ∈ E_δ(G) ∧ (x_2, y_2) ∈ E_{δ′}(G))   (9)
The term (8) is easy to interpret: every block X_i of the partition must have uniform vertex labels. The term (9) states that there are no two blocks X_i, X_j with edges of two different labels between them, i.e., the quotient will not have parallel edges with different labels. The relation E in the original definition of QuotMinor can trivially be replaced by the disjunction over all edge-label relations. Thus we can easily adapt our argument above to also decide the existence of minor models in the spasm of labeled graphs.
A PROOF OF LEMMA 7
For labeled graphs F, G with ā ∈ V(F)^k, v̄ ∈ V(G)^k we write Hom(F, G)[ā ↦ v̄] for the set of all homomorphisms from F into G that extend the mapping ā ↦ v̄. For a vertex v of labeled graph F we write F − v to mean F[V(F) ∖ {v}]. Proposition 10. Let (T, B) be a tree decomposition of labeled graph F. Let T′ be a subtree of T and let α be a node adjacent to T′ in T. For any z ∈ B(α) ∖ B(T′) and y ∈ B(T′) it holds that if {y, z} ∈ E(F), then y ∈ B(α).
Let ι′ be the disjoint union of all the individual ι′_µ for all µ ∈ Hom(F[T′], G)[ā ↦ v̄]. To see that the ι′_µ are in fact disjoint it is enough to observe that the sets Hom(F[T], G)[µ] are disjoint for different choices of µ, as dom(µ) ⊆ V(F[T]). Since all ι′_µ are injective, so is ι′. To see that ι′ is also surjective, suppose there is a ν ∈ Hom(F[T], H)[ā ↦ w̄] not in the image of ι′. This means that ν cannot be in any set Hom(F[T], H)[ι(µ)] for µ ∈ Hom(F[T′], G)[ā ↦ v̄]. But ι is surjective and therefore this would mean that the projection of ν to
Before providing a proof sketch of this result, we show some examples of how the theorem can be applied to concrete instances of labeled graph motif patterns used in this paper.

Example 2. Let us consider first the case of counting subgraphs for a labeled graph H. We denote the corresponding labeled graph motif pattern by Sub_H. We assume that H does not have self-loops. We call a labeled graph H′ a homomorphic image of H if there exists a surjective homomorphism from H into H′. Notice that the set of homomorphic images of H contains a finite number of labeled graphs up to isomorphism. Following Curticapean et al. (2017), we denote by spasm(H) the set of all labeled graphs that are obtained from the homomorphic images of H by removing self-loops. As shown in Curticapean et al. (2017), the support of Sub_H is precisely the set spasm(H). It follows from Theorem 2 that the Hs for which kWL can distinguish subgraph counting for Sub_H are precisely those for which the maximum treewidth of a labeled graph in spasm(H) is at most k. This result has been established for unlabeled graphs in Neuen (2023). Here, we extend this result to labeled graphs.

Consider now the case of counting induced subgraphs for H. We write Ind_H for the corresponding labeled graph motif pattern. It can be shown that, in this case, the support for Ind_H is the set of all labeled graphs H′ that can be obtained from H by adding edges among its nodes. If H has k nodes, then the support for H contains the clique of size k, which is known to have treewidth k − 1. Furthermore, this is the labeled graph with the largest treewidth in the support of H. It follows from Theorem 2 that the Hs for which kWL can distinguish induced subgraph counting for Ind_H are precisely those with at most k + 1 nodes. This solves an open question from Chen et al. (2020).

Based on known results, this suggests that k-dimensional MPGNNs are capable of computing this number if equipped with the right pooling functions (Morris et al., 2019; Xu et al., 2019). Some work in this direction has already been carried out in Chen et al. (2020) by using local relational pooling. Finally, we have used our characterization to show that both the classes of patterns for which the kWL test can distinguish labeled subgraph counting and labeled induced subgraph counting can be recognized in polynomial time.
Dell et al. (2019) both additionally require ψ to be in CNF/DNF. For our purposes the potential blowup incurred by transformation into CNF is of no consequence.
Alternatively, it is also possible to show that the maximal treewidth in spasm H is minor-closed and obtain a different set of forbidden minors for H itself.
This observation was already mentioned implicitly but without details in Roth (2019).
4 For an extensive definition of first-order logic with counting quantifiers refer to Cai et al. (1992). 5 What we call k-marked here is referred to as k-labeled in the original, which could clearly clash with our use of labeled graphs.
Homomorphisms from quantum graphs can be seen as an alternative representation of graph motif parameters. Because of the two distinct natures of their uses in this paper and the disjoint connections to prior work we have decided to not unify the presentation of these two concepts here.
Noga Alon, P. Dao, I. Hajirasouliha, F. Hormozdiari, and S. C. Sahinalp. Biomolecular network motif counting and discovery by color coding. Bioinformatics, 13:1-24, 2008.
Vikraman Arvind, Frank Fuhlbrück, Johannes Köbler, and Oleg Verbitsky. On weisfeiler-leman invariance: Subgraph counts and related graph properties. J. Comput. Syst. Sci., 113:42-59, 2020.
Pablo Barceló, Floris Geerts, Juan L. Reutter, and Maksimilian Ryschkov. Graph neural networks with local graph parameters. In NeurIPS 2021, pp. 25280-25293, 2021.
Pablo Barceló, Mikhail Galkin, Christopher Morris, and Miguel A. Romero Orth. Weisfeiler and leman go relational. In LoG, volume 198 of Proceedings of Machine Learning Research, pp. 46. PMLR, 2022.
Hans L. Bodlaender. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM J. Comput., 25(6):1305-1317, 1996. doi: 10.1137/S0097539793251219.
Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M. Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Trans. Pattern Anal. Mach. Intell., 45(1):657-668, 2023.
Marco Bressan, Flavio Chierichetti, Ravi Kumar, Stefano Leucci, and Alessandro Panconesi. Counting graphlets: Space vs time. In WSDM, pp. 557-566. ACM, 2017. doi: 10.1145/3018661.
Jin-yi Cai, Martin Fürer, and Neil Immerman. An optimal lower bound on the number of variables for graph identification. Comb., 12(4):389-410, 1992.
Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? In NeurIPS, 2020.
Bruno Courcelle. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Inf. Comput., 85(1):12-75, 1990.
Radu Curticapean, Holger Dell, and Dániel Marx. Homomorphisms are a good basis for counting small subgraphs. In STOC, pp. 210-223. ACM, 2017.
Hanjun Dai, Artem Kozachenko, Mark Schmidt, Ioannis Tsochantaridis, and Petar Veličković. Learning combinatorial optimization algorithms over graphs. In ICML, 2021.
Daniel Daza and Michael Cochez. Message passing for query answering over knowledge graphs. CoRR, abs/2002.02406, 2020. URL https://arxiv.org/abs/2002.02406.
Holger Dell, Martin Grohe, and Gaurav Rattan. Lovász meets weisfeiler and leman. In Proc. ICALP, volume 107 of LIPIcs, pp. 40:1-40:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018. doi: 10.4230/LIPIcs.ICALP.2018.40.
Holger Dell, Marc Roth, and Philip Wellnitz. Counting answers to existential questions. In Proc. ICALP 2019, volume 132 of LIPIcs, pp. 113:1-113:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
Reinhard Diestel. Graph Theory, 4th Edition, volume 173 of Graduate Texts in Mathematics. Springer, 2012.
Zdenek Dvorák. On recognizing graphs by numbers of homomorphisms. J. Graph Theory, 64(4):330-342, 2010. doi: 10.1002/jgt.20461.
Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A fair comparison of graph neural networks for graph classification. In ICLR, 2020.
Martin Fürer. On the combinatorial power of the weisfeiler-lehman algorithm. CoRR, abs/1704.01023, 2017. URL http://arxiv.org/abs/1704.01023.
Michael Galkin, Zhaocheng Zhu, Hongyu Ren, and Jian Tang. Inductive logical query answering in knowledge graphs. In NeurIPS, 2022.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In ICML, 2017.
Martin Grohe and Martin Otto. Pebble games and linear equations. J. Symb. Log., 80(3):797-844, 2015.
Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard de Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, Axel-Cyrille Ngonga Ngomo, Axel Polleres, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan F. Sequeda, Steffen Staab, and Antoine Zimmermann. Knowledge graphs. ACM Comput. Surv., 54(4):71:1-71:37, 2022.
Xingyue Huang, Miguel A. Romero Orth, İsmail İlkan Ceylan, and Pablo Barceló. A theory of link prediction via relational weisfeiler-leman. CoRR, abs/2302.02209, 2023.
Jiashun Jin, Zheng Tracy Ke, and Shengming Luo. Network global testing by counting graphlets. In ICML, volume 80, pp. 2338-2346. PMLR, 2018. URL http://proceedings.mlr.press/v80/jin18b.html.
Jakob Jonsson. Simplicial complexes of graphs, volume 3. Springer, 2008.
Chaitanya K. Joshi, Yujia Li, and Bryan Perozzi. A comprehensive survey on graph neural networks. IEEE Communications Surveys & Tutorials, 2020.
Thomas Kipf and Max Welling. Community detection in graphs with graphsage. In ICLR, 2018. URL https://arxiv.org/abs/1706.02216.
Nils M. Kriege, Christopher Morris, Anja Rey, and Christian Sohler. A property testing framework for the theoretical expressivity of graph kernels. In IJCAI, pp. 2348-2354. ijcai.org, 2018.
László Lovász. Operations with structures. Acta Mathematica Hungarica, 18(3-4):321-328, 1967.
Diego Marcheggiani and Ivan Titov. Encoding sentences with graph convolutional networks for semantic role labeling. In EMNLP, 2017.
Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI, pp. 4602-4609, 2019.
Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and leman go sparse: Towards scalable higher-order graph embeddings. In NeurIPS 2020, 2020.
Daniel Neuen. Homomorphism-distinguishing closedness for graphs of bounded tree-width. CoRR, abs/2304.07011, 2023.
OEIS Foundation Inc. The On-Line Encyclopedia of Integer Sequences, 2023. Published electronically at https://oeis.org.
52,895,589 | HOW POWERFUL ARE GRAPH NEURAL NETWORKS? | Graph Neural Networks (GNNs) for representation learning of graphs broadly follow a neighborhood aggregation framework, where the representation vector of a node is computed by recursively aggregating and transforming feature vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs in capturing different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance. | [
3292002
] | HOW POWERFUL ARE GRAPH NEURAL NETWORKS?
Keyulu Xu (MIT) [email protected]
Weihua Hu (Stanford University) [email protected]
Jure Leskovec (Stanford University)
Stefanie Jegelka (MIT)
HOW POWERFUL ARE GRAPH NEURAL NETWORKS?
Published as a conference paper at ICLR 2019
Graph Neural Networks (GNNs) for representation learning of graphs broadly follow a neighborhood aggregation framework, where the representation vector of a node is computed by recursively aggregating and transforming feature vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs in capturing different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
INTRODUCTION
Learning with graph structured data, such as molecules, social, biological, and financial networks, requires effective representation of their graph structure (Hamilton et al., 2017b). Recently, there has been a surge of interest in Graph Neural Network (GNN) approaches for representation learning of graphs (Li et al., 2016;Hamilton et al., 2017a;Kipf & Welling, 2017;Velickovic et al., 2018;Xu et al., 2018). GNNs broadly follow a recursive neighborhood aggregation (or message passing) scheme, where each node aggregates feature vectors of its neighbors to compute its new feature vector (Gilmer et al., 2017;Xu et al., 2018). After k iterations of aggregation, a node is represented by its transformed feature vector, which captures the structural information within the node's k-hop network neighborhood. The representation of an entire graph can then be obtained through pooling, for example, by summing the representation vectors of all nodes in the graph.
Many GNN variants with different neighborhood aggregation and graph-level pooling schemes have been proposed (Scarselli et al., 2009b; Battaglia et al., 2016; Defferrard et al., 2016; Duvenaud et al., 2015; Hamilton et al., 2017a; Kearnes et al., 2016; Kipf & Welling, 2017; Li et al., 2016; Velickovic et al., 2018; Santoro et al., 2017; Xu et al., 2018; Santoro et al., 2018; Verma & Zhang, 2018; Ying et al., 2018). Empirically, these GNNs have achieved state-of-the-art performance in many tasks such as node classification, link prediction, and graph classification. However, the design of new GNNs is mostly based on empirical intuition, heuristics, and experimental trial-and-error. There is little theoretical understanding of the properties and limitations of GNNs, and formal analysis of GNNs' representational capacity is limited.
Here, we present a theoretical framework for analyzing the representational power of GNNs. We formally characterize how expressive different GNN variants are in learning to represent and distinguish between different graph structures. Our framework is inspired by the close connection between GNNs and the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler & Lehman, 1968), a powerful test known to distinguish a broad class of graphs (Babai & Kucera, 1979). Similar to GNNs, the WL test iteratively updates a given node's feature vector by aggregating feature vectors of its network neighbors. What makes the WL test so powerful is its injective aggregation update that maps different node neighborhoods to different feature vectors. Our key insight is that a GNN can have as large discriminative power as the WL test if the GNN's aggregation scheme is highly expressive and can model injective functions.
To mathematically formalize the above insight, our framework first abstracts the feature vectors of a node's neighbors as a multiset, i.e., a set with possibly repeating elements. Then, the neighbor aggregation in GNNs can be abstracted as a function over the multiset. We rigorously study different variants of multiset functions and theoretically characterize their discriminative power, i.e., how well different aggregation functions can distinguish different multisets. The more discriminative the multiset function is, the more powerful the representational power of the underlying GNN.
Our main results are summarized as follows:
1) We show that GNNs are at most as powerful as the WL test in distinguishing graph structures.
2) We establish conditions on the neighbor aggregation and graph pooling functions under which the resulting GNN is as powerful as the WL test.
3) We identify graph structures that cannot be distinguished by popular GNN variants, such as GCN (Kipf & Welling, 2017) and GraphSAGE (Hamilton et al., 2017a), and we precisely characterize the kinds of graph structures such GNN-based models can capture.
4) We develop a simple neural architecture, Graph Isomorphism Network (GIN), and show that its discriminative/representational power is equal to the power of the WL test.
We validate our theory via experiments on graph classification datasets, where the expressive power of GNNs is crucial to capture graph structures. In particular, we compare the performance of GNNs with various aggregation functions. Our results confirm that the most powerful GNN (our Graph Isomorphism Network (GIN)) has high representational power as it almost perfectly fits the training data, whereas the less powerful GNN variants often severely underfit the training data. In addition, the representationally more powerful GNNs outperform the others by test set accuracy and achieve state-of-the-art performance on many graph classification benchmarks.
PRELIMINARIES
We begin by summarizing some of the most common GNN models and, along the way, introduce our notation. Let $G = (V, E)$ denote a graph with node feature vectors $X_v$ for $v \in V$. There are two tasks of interest: (1) Node classification, where each node $v \in V$ has an associated label $y_v$ and the goal is to learn a representation vector $h_v$ of $v$ such that $v$'s label can be predicted as $y_v = f(h_v)$; (2) Graph classification, where, given a set of graphs $\{G_1, \ldots, G_N\} \subseteq \mathcal{G}$ and their labels $\{y_1, \ldots, y_N\} \subseteq \mathcal{Y}$, we aim to learn a representation vector $h_G$ that helps predict the label of an entire graph, $y_G = g(h_G)$.
Graph Neural Networks. GNNs use the graph structure and node features X v to learn a representation vector of a node, h v , or the entire graph, h G . Modern GNNs follow a neighborhood aggregation strategy, where we iteratively update the representation of a node by aggregating representations of its neighbors. After k iterations of aggregation, a node's representation captures the structural information within its k-hop network neighborhood. Formally, the k-th layer of a GNN is
$$a_v^{(k)} = \text{AGGREGATE}^{(k)}\left(\left\{h_u^{(k-1)} : u \in N(v)\right\}\right), \quad h_v^{(k)} = \text{COMBINE}^{(k)}\left(h_v^{(k-1)}, a_v^{(k)}\right), \quad (2.1)$$
where $h_v^{(k)}$ is the feature vector of node $v$ at the $k$-th iteration/layer. We initialize $h_v^{(0)} = X_v$, and $N(v)$ is a set of nodes adjacent to $v$. The choice of $\text{AGGREGATE}^{(k)}(\cdot)$ and $\text{COMBINE}^{(k)}(\cdot)$ in GNNs is crucial. A number of architectures for AGGREGATE have been proposed. In the pooling variant of GraphSAGE (Hamilton et al., 2017a), AGGREGATE has been formulated as
$$a_v^{(k)} = \text{MAX}\left(\left\{\text{ReLU}\left(W \cdot h_u^{(k-1)}\right), \forall u \in N(v)\right\}\right), \quad (2.2)$$
where $W$ is a learnable matrix, and MAX represents an element-wise max-pooling. The COMBINE step could be a concatenation followed by a linear mapping $W \cdot \left[h_v^{(k-1)}, a_v^{(k)}\right]$ as in GraphSAGE. In Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), the element-wise mean pooling is used instead, and the AGGREGATE and COMBINE steps are integrated as follows:
$$h_v^{(k)} = \text{ReLU}\left(W \cdot \text{MEAN}\left\{h_u^{(k-1)}, \forall u \in N(v) \cup \{v\}\right\}\right). \quad (2.3)$$
Many other GNNs can be similarly represented as Eq. 2.1 (Xu et al., 2018;Gilmer et al., 2017).
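To make the generic scheme concrete, below is a minimal PyTorch sketch (ours, not from the paper) of one layer implementing Eq. 2.1, with the GraphSAGE max-pooling aggregator of Eq. 2.2 and the GCN variant of Eq. 2.3 as selectable options; the dense adjacency representation, class name, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One AGGREGATE/COMBINE step (Eq. 2.1) over a dense adjacency matrix."""

    def __init__(self, in_dim, out_dim, aggregator="gcn_mean"):
        super().__init__()
        self.aggregator = aggregator
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        # COMBINE for the GraphSAGE variant: linear map of [h_v, a_v].
        self.W_combine = nn.Linear(in_dim + out_dim, out_dim, bias=False)

    def forward(self, h, adj):
        # h:   (n, in_dim) node features h^{(k-1)}
        # adj: (n, n) binary adjacency matrix without self-loops
        n = h.size(0)
        if self.aggregator == "sage_max":                  # Eq. 2.2
            msg = torch.relu(self.W(h))                    # ReLU(W h_u) per node
            # Broadcast every node's message to every row, mask non-neighbors
            # with -inf, then take an element-wise max over each neighborhood.
            expanded = msg.unsqueeze(0).expand(n, n, -1)
            masked = expanded.masked_fill(~adj.bool().unsqueeze(-1), float("-inf"))
            a = masked.max(dim=1).values                   # assumes no isolated nodes
            return self.W_combine(torch.cat([h, a], dim=-1))
        if self.aggregator == "gcn_mean":                  # Eq. 2.3
            adj_self = adj + torch.eye(n, dtype=h.dtype)   # N(v) ∪ {v}
            mean = adj_self @ h / adj_self.sum(dim=1, keepdim=True)
            return torch.relu(self.W(mean))
        raise ValueError(f"unknown aggregator: {self.aggregator}")
```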
For node classification, the node representation $h_v^{(K)}$ of the final iteration is used for prediction. For graph classification, the READOUT function aggregates node features from the final iteration to obtain the entire graph's representation $h_G$:
$$h_G = \text{READOUT}\left(\left\{h_v^{(K)} \mid v \in G\right\}\right). \quad (2.4)$$
READOUT can be a simple permutation invariant function such as summation or a more sophisticated graph-level pooling function (Ying et al., 2018).
Weisfeiler-Lehman test. The graph isomorphism problem asks whether two graphs are topologically identical. This is a challenging problem: no polynomial-time algorithm is known for it yet (Garey, 1979;Garey & Johnson, 2002;Babai, 2016). Despite some corner cases (Cai et al., 1992), the Weisfeiler-Lehman (WL) test of graph isomorphism (Weisfeiler & Lehman, 1968) is an effective and computationally efficient test that distinguishes a broad class of graphs (Babai & Kucera, 1979). Its 1-dimensional form, "naive vertex refinement", is analogous to neighborhood aggregation in GNNs.
Assuming each node has a categorical label, the WL test iteratively (1) aggregates the labels of nodes and their neighborhoods, and (2) hashes the aggregated labels into unique new labels. The algorithm decides that two graphs are different if at some iteration their node labels are different.
Based on the WL test, Shervashidze et al. (2011) proposed the WL subtree kernel that measures the similarity between graphs. The kernel uses the counts of node labels at different iterations of the WL test as the feature vector of a graph. Intuitively, a node's label at the k-th iteration of WL test represents a subtree structure of height k rooted at the node ( Figure 1). Thus, the graph features considered by the WL subtree kernel are essentially counts of different rooted subtrees in the graph.
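The refinement procedure is short enough to state in code. The following self-contained Python sketch (our illustration; the adjacency-list and label-dictionary layout is an assumption) runs 1-dimensional WL refinement jointly on several graphs with a shared relabeling table, returning the per-iteration label counts that the WL subtree kernel uses as features:

```python
from collections import Counter

def wl_label_counts(graphs, num_iters):
    """1-dimensional Weisfeiler-Lehman refinement run jointly on several graphs.

    graphs: list of (adj_list, labels) pairs; adj_list maps a node to its list
    of neighbors, labels maps a node to its initial categorical label.
    Returns, per graph, the multiset (Counter) of node labels at iterations
    0..num_iters -- the per-iteration features of the WL subtree kernel.
    A shared relabeling table keeps labels comparable across graphs.
    """
    labelings = [dict(lab) for _, lab in graphs]
    histories = [[Counter(lab.values())] for lab in labelings]
    for _ in range(num_iters):
        table = {}  # injective relabeling: signature -> fresh integer label
        for (adj, _), lab, hist in zip(graphs, labelings, histories):
            new = {}
            for v in adj:
                # Aggregate a node's label with the sorted multiset of its
                # neighbors' labels, then hash the pair to a unique new label.
                sig = (lab[v], tuple(sorted(lab[u] for u in adj[v])))
                new[v] = table.setdefault(sig, len(table))
            lab.update(new)
            hist.append(Counter(new.values()))
    return histories

def wl_decides_nonisomorphic(g1, g2, num_iters=3):
    """True once the label multisets of the two graphs differ at any iteration."""
    h1, h2 = wl_label_counts([g1, g2], num_iters)
    return any(c1 != c2 for c1, c2 in zip(h1, h2))

# Example: a triangle vs. a path on 3 nodes, all with the same initial label.
tri = ({0: [1, 2], 1: [0, 2], 2: [0, 1]}, {0: "a", 1: "a", 2: "a"})
path = ({0: [1], 1: [0, 2], 2: [1]}, {0: "a", 1: "a", 2: "a"})
print(wl_decides_nonisomorphic(tri, path))  # True
```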
THEORETICAL FRAMEWORK: OVERVIEW
We start with an overview of our framework for analyzing the expressive power of GNNs. A GNN recursively updates each node's feature vector to capture the network structure and features of other nodes around it, i.e., its rooted subtree structures (Figure 1). Throughout the paper, we assume node input features are from a countable universe. For finite graphs, we can recursively show that node feature vectors at deeper layers of any fixed model are also from a countable universe. For notational simplicity, we can assign each feature vector a unique label in $\{a, b, c, \ldots\}$. Then, feature vectors of a set of neighboring nodes form a multiset: the same element can appear multiple times since different nodes can have identical feature vectors.

Definition 1 (Multiset). A multiset is a generalized concept of a set that allows multiple instances for its elements. More formally, a multiset is a 2-tuple $X = (S, m)$ where $S$ is the underlying set of $X$ that is formed from its distinct elements, and $m : S \to \mathbb{N}_{\geq 1}$ gives the multiplicity of the elements.
In order to analyze the representational power of a GNN, we analyze when a GNN maps two nodes into the same location in the embedding space. Intuitively, the most powerful GNN maps two nodes to the same location only if they have identical subtree structures with identical features on the corresponding nodes. Since subtree structures are defined recursively via node neighborhoods (Figure 1), we can reduce our analysis recursively to the question when a GNN maps two neighborhoods to the same embedding. The most powerful GNN would never map two different neighborhoods, i.e., multisets of feature vectors, to the same location. This means its aggregation scheme is injective. Thus, we abstract a GNN's aggregation scheme as a class of functions over multisets that its neural networks can represent, and analyze whether they are able to represent injective multiset functions.
Next, we use this reasoning to develop a maximally powerful GNN. In Section 5, we study popular GNN variants and see that their aggregation schemes are inherently not injective and thus less powerful, but that they can capture other interesting properties of graphs.
BUILDING POWERFUL GRAPH NEURAL NETWORKS
Ideally, a GNN is able to (1) distinguish different graph structures by mapping them to different locations in the embedding space, and (2) capture their structural similarity in the embedding space.
In this paper, we are mainly concerned with the first part, and we will briefly discuss the second part. The ability to map different graphs to different embeddings, however, implies solving graph isomorphism. In our analysis, we characterize the representational capacity of GNNs via a slightly weaker criterion: the Weisfeiler-Lehman (WL) graph isomorphism test, which is known to work well in general, with a few exceptions, in particular, regular graphs (Cai et al., 1992;Douglas, 2011;Evdokimov & Ponomarenko, 1999). Proofs of all lemmas and theorems can be found in the appendix.

Lemma 2. Let $G_1$ and $G_2$ be any non-isomorphic graphs. If a graph neural network $A : \mathcal{G} \to \mathbb{R}^d$ following the neighborhood aggregation scheme maps $G_1$ and $G_2$ to different embeddings, the Weisfeiler-Lehman graph isomorphism test also decides $G_1$ and $G_2$ are not isomorphic.
Hence, any aggregation-based GNN is at most as powerful as the WL test in distinguishing different graphs. A natural follow-up question is whether there exist GNNs that are, in principle, as powerful as the WL test. Our answer, in Theorem 3, is yes: if the neighbor aggregation and graph pooling functions are injective, then the resulting GNN is as powerful as the WL test.
Theorem 3. Let $A : \mathcal{G} \to \mathbb{R}^d$ be a GNN following the neighborhood aggregation scheme. With sufficient iterations, $A$ maps any graphs $G_1$ and $G_2$ that the Weisfeiler-Lehman test of isomorphism decides as non-isomorphic to different embeddings if the following conditions hold:

a) $A$ aggregates and updates node features iteratively with
$$h_v^{(k)} = \phi\left(h_v^{(k-1)}, f\left(\left\{h_u^{(k-1)} : u \in N(v)\right\}\right)\right),$$
where the functions $f$, which operates on multisets, and $\phi$ are injective.

b) $A$'s graph-level readout, which operates on the multiset of node features $\left\{h_v^{(k)}\right\}$, is injective.
We prove Theorem 3 in the appendix. On countable sets, injectiveness well characterizes whether a function preserves the distinctness of inputs. On uncountable sets, where node features are continuous, the notions of injectiveness and discriminative power are "weakened". It would be useful to characterize how closely packed the learned features are in a function's image. In this paper, we assume that input node features are from a countable set (that can be a subset of an uncountable set such as $\mathbb{R}^n$). Given the countability assumption on the input node features, one may ask whether countability still holds for the node features at deeper layers of a GNN. Lemma 4 says yes, i.e., countability can propagate across layers.

Lemma 4. Assume the input feature space $\mathcal{X}$ is countable. Let $g^{(k)}$ be the function parameterized by a GNN's $k$-th layer for $k = 1, \ldots, L$, where $g^{(1)}$ is defined on finite multisets $X \subset \mathcal{X}$. The range of $g^{(k)}$, i.e., the space of node hidden features $h_v^{(k)}$, is also countable for all $k = 1, \ldots, L$.
Here, it is also worth discussing an important benefit of GNNs beyond distinguishing different graphs, that is, capturing similarity of graph structures. Note that node feature vectors in the WL test are essentially one-hot encodings and thus cannot capture the similarity between subtrees. In contrast, a GNN satisfying the criteria in Theorem 3 generalizes the WL test by learning to embed the subtrees to low-dimensional space. This enables GNNs to not only discriminate different structures, but also to learn to map similar graph structures to similar embeddings and capture dependencies between graph structures. Capturing structural similarity of the node labels is shown to be helpful for generalization particularly when the co-occurrence of subtrees is sparse across different graphs or there are noisy edges and node features (Yanardag & Vishwanathan, 2015).
GRAPH ISOMORPHISM NETWORK (GIN)
Next we develop a model that provably satisfies the conditions in Theorem 3 and thus generalizes the WL test. We name the resulting architecture Graph Isomorphism Network (GIN).
To model injective multiset functions for the neighbor aggregation, we develop a theory of "deep multisets", i.e., parameterizing universal multiset functions with neural networks. Our next lemma states that sum aggregators can represent injective, in fact, universal functions over multisets.

Lemma 5. Assume $\mathcal{X}$ is countable. There exists a function $f : \mathcal{X} \to \mathbb{R}^n$ so that $h(X) = \sum_{x \in X} f(x)$ is unique for each finite multiset $X \subset \mathcal{X}$. Moreover, any multiset function $g$ can be decomposed as $g(X) = \phi\left(\sum_{x \in X} f(x)\right)$ for some function $\phi$.

We prove Lemma 5 in the appendix. The proof extends the setting in (Zaheer et al., 2017) from sets to multisets. An important distinction between deep multisets and sets is that certain popular injective set functions, such as the mean aggregator, are not injective multiset functions. With the mechanism for modeling universal multiset functions in Lemma 5 as a building block, we can now come up with an aggregation scheme that can represent universal functions over the pairs of a node and the multiset of its neighbors, and thus will satisfy the injectiveness condition in Theorem 3a. Our next corollary provides a simple and concrete formulation among many such aggregation schemes.

Corollary 6. Assume $\mathcal{X}$ is countable. There exists a function $f : \mathcal{X} \to \mathbb{R}^n$ so that for infinitely many choices of $\epsilon$, including all irrational numbers, $h(c, X) = (1 + \epsilon) \cdot f(c) + \sum_{x \in X} f(x)$ is unique for each pair $(c, X)$, where $c \in \mathcal{X}$ and $X \subset \mathcal{X}$ is a finite multiset. Moreover, any function $g$ over such pairs can be decomposed as $g(c, X) = \phi\left((1 + \epsilon) \cdot f(c) + \sum_{x \in X} f(x)\right)$ for some function $\phi$.
We can use multi-layer perceptrons (MLPs) to model and learn $f$ and $\phi$ in Corollary 6, thanks to the universal approximation theorem (Hornik et al., 1989;Hornik, 1991). In practice, we model $f^{(k+1)} \circ \phi^{(k)}$ with one MLP, because MLPs can represent the composition of functions. In the first iteration, we do not need MLPs before summation if input features are one-hot encodings, as their summation alone is injective. We can make $\epsilon$ a learnable parameter or a fixed scalar. Then, GIN updates node representations as
$$h_v^{(k)} = \text{MLP}^{(k)}\left(\left(1 + \epsilon^{(k)}\right) \cdot h_v^{(k-1)} + \sum_{u \in N(v)} h_u^{(k-1)}\right). \quad (4.1)$$
Generally, there may exist many other powerful GNNs. GIN is one example among the maximally powerful GNNs, while being simple.
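For illustration, one GIN update of Eq. 4.1 reduces to a few lines of PyTorch. The sketch below is ours and assumes a dense adjacency matrix; the 2-layer MLP with batch normalization mirrors the experimental setup described later, but its exact layout is an assumption.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN update (Eq. 4.1): h_v <- MLP((1 + eps) * h_v + sum_{u in N(v)} h_u).

    eps is a learnable scalar when learn_eps=True (the GIN-eps variant) and a
    fixed 0 otherwise (GIN-0).
    """

    def __init__(self, in_dim, out_dim, learn_eps=True):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1)) if learn_eps else 0.0
        self.mlp = nn.Sequential(              # 2-layer MLP, layout assumed
            nn.Linear(in_dim, out_dim),
            nn.BatchNorm1d(out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, h, adj):
        # adj @ h computes sum_{u in N(v)} h_u for every node v at once.
        return self.mlp((1 + self.eps) * h + adj @ h)
```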
READOUT SUBTREE STRUCTURES OF DIFFERENT DEPTHS
An important aspect of the graph-level readout is that node representations, corresponding to subtree structures, get more refined and global as the number of iterations increases. A sufficient number of iterations is key to achieving good discriminative power. Yet, features from earlier iterations may sometimes generalize better. To consider all structural information, GIN uses information from all depths/iterations of the model. We achieve this by an architecture similar to Jumping Knowledge Networks (JK-Nets) (Xu et al., 2018), where we replace Eq. 2.4 with graph representations concatenated across all iterations:
$$h_G = \text{CONCAT}\left(\text{READOUT}\left(\left\{h_v^{(k)} \mid v \in G\right\}\right) \,\middle|\, k = 0, 1, \ldots, K\right). \quad (4.2)$$
By Theorem 3 and Corollary 6, if GIN replaces READOUT in Eq. 4.2 with summing all node features from the same iterations (we do not need an extra MLP before summation for the same reason as in Eq. 4.1), it provably generalizes the WL test and the WL subtree kernel.
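A minimal sketch of this readout (ours; the per-iteration feature-list layout is an assumption) is:

```python
import torch

def graph_readout(layer_features):
    """Graph-level readout of Eq. 4.2: concatenate a summed readout of the node
    features from every iteration k = 0, ..., K of a single graph.

    layer_features: list of K+1 tensors, each of shape (num_nodes, dim_k),
                    holding h_v^{(k)} for all nodes v of the graph.
    Returns a single graph embedding h_G of shape (sum_k dim_k,).
    """
    # Sum readout per iteration (permutation invariant), then concatenate.
    return torch.cat([h.sum(dim=0) for h in layer_features], dim=-1)
```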
LESS POWERFUL BUT STILL INTERESTING GNNS
Next we study GNNs that do not satisfy the conditions in Theorem 3, including GCN (Kipf & Welling, 2017) and GraphSAGE (Hamilton et al., 2017a). We conduct ablation studies on two aspects of the aggregator in Eq. 4.1: (1) 1-layer perceptrons instead of MLPs and (2) mean or max-pooling instead of the sum. We will see that these GNN variants get confused by surprisingly simple graphs and are less powerful than the WL test; Figure 2 gives reasoning about how different aggregators "compress" different graph structures/multisets. Nonetheless, models with mean aggregators like GCN perform well for node classification tasks. To better understand this, we precisely characterize what different GNN variants can and cannot capture about a graph and discuss the implications for learning with graphs.

Figure 2: Ranking by expressive power for sum, mean and max-pooling aggregators over a multiset. The left panel shows the input multiset and the three panels illustrate the aspects of the multiset a given aggregator is able to capture: sum captures the full multiset, mean captures the proportion/distribution of elements of a given type, and the max aggregator ignores multiplicities (reduces the multiset to a simple set).
1-LAYER PERCEPTRONS ARE NOT SUFFICIENT
The function $f$ in Lemma 5 helps map distinct multisets to unique embeddings. It can be parameterized by an MLP by the universal approximation theorem (Hornik, 1991). Nonetheless, many existing GNNs instead use a 1-layer perceptron $\sigma \circ W$ (Duvenaud et al., 2015; Kipf & Welling, 2017; Zhang et al., 2018), a linear mapping followed by a non-linear activation function such as a ReLU; such 1-layer mappings are examples of Generalized Linear Models (Nelder & Wedderburn, 1972). Therefore, we are interested in understanding whether 1-layer perceptrons are enough for graph learning. Lemma 7 suggests that there are indeed network neighborhoods (multisets) that models with 1-layer perceptrons can never distinguish.

Lemma 7. There exist finite multisets $X_1 \neq X_2$ so that for any linear mapping $W$,
$$\sum_{x \in X_1} \text{ReLU}(Wx) = \sum_{x \in X_2} \text{ReLU}(Wx).$$
The main idea of the proof for Lemma 7 is that 1-layer perceptrons can behave much like linear mappings, so the GNN layers degenerate into simply summing over neighborhood features. Our proof builds on the fact that the bias term is lacking in the linear mapping. With the bias term and sufficiently large output dimensionality, 1-layer perceptrons might be able to distinguish different multisets. Nonetheless, unlike models using MLPs, the 1-layer perceptron (even with the bias term) is not a universal approximator of multiset functions. Consequently, even if GNNs with 1-layer perceptrons can embed different graphs to different locations to some degree, such embeddings may not adequately capture structural similarity, and can be difficult for simple classifiers, e.g., linear classifiers, to fit. In Section 7, we will empirically see that GNNs with 1-layer perceptrons, when applied to graph classification, sometimes severely underfit training data and often underperform GNNs with MLPs in terms of test accuracy.
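This failure mode is easy to verify numerically. The snippet below (an illustration using the multisets $X_1 = \{1,1,1,1,1\}$ and $X_2 = \{2,3\}$ from the proof in Appendix F) checks that a bias-free 1-layer aggregator cannot separate them:

```python
import torch

# X1 and X2 have equal sums, so by the homogeneity of ReLU, a bias-free
# linear map followed by ReLU and summation maps them to the same vector.
torch.manual_seed(0)
X1 = torch.tensor([[1.], [1.], [1.], [1.], [1.]])
X2 = torch.tensor([[2.], [3.]])
W = torch.randn(16, 1)  # arbitrary linear map into R^16, no bias term

s1 = torch.relu(X1 @ W.t()).sum(dim=0)  # sum_x ReLU(W x) over X1
s2 = torch.relu(X2 @ W.t()).sum(dim=0)  # sum_x ReLU(W x) over X2
print(torch.allclose(s1, s2))  # True: the 1-layer aggregator cannot tell them apart
```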
STRUCTURES THAT CONFUSE MEAN AND MAX-POOLING
What happens if we replace the sum in $h(X) = \sum_{x \in X} f(x)$ with mean or max-pooling as in GCN and GraphSAGE? Mean and max-pooling aggregators are still well-defined multiset functions because they are permutation invariant. But, they are not injective. Figure 2 ranks the three aggregators by their representational power, and Figure 3 illustrates pairs of structures that the mean and max-pooling aggregators fail to distinguish. Here, node colors denote different node features, and we assume the GNNs aggregate neighbors first before combining them with the central node.
In Figure 3a, every node has the same feature $a$ and $f(a)$ is the same across all nodes (for any function $f$). When performing neighborhood aggregation, the mean or maximum over $f(a)$ remains $f(a)$ and, by induction, we always obtain the same node representation everywhere. Thus, mean and max-pooling aggregators fail to capture any structural information. In contrast, a sum aggregator distinguishes the structures because $2 \cdot f(a)$ and $3 \cdot f(a)$ give different values. The same argument can be applied to any unlabeled graph. If node degrees instead of a constant value are used as node input features, in principle, mean can recover sum, but max-pooling cannot. Fig. 3a suggests that mean and max have trouble distinguishing graphs with nodes that have repeating features. Let $h_{\text{color}}$ ($r$ for red, $g$ for green) denote node features transformed by $f$. Fig. 3b shows that maximum over the neighborhood of the blue nodes yields $\max(h_g, h_r)$ and $\max(h_g, h_r, h_r)$, which collapse to the same representation. Thus, max-pooling fails to distinguish them. In contrast, the mean aggregator still works because $\frac{1}{2}(h_g + h_r)$ and $\frac{1}{3}(h_g + h_r + h_r)$ are in general not equivalent. Similarly, in Fig. 3c, both mean and max fail as $\frac{1}{2}(h_g + h_r) = \frac{1}{4}(h_g + h_g + h_r + h_r)$.
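These collapses can be reproduced with a few lines of code. In the sketch below (ours), $h_g$ and $h_r$ are arbitrary distinct feature vectors standing in for the transformed node features of Figures 3b and 3c:

```python
import torch

# Figure 3b: neighborhoods {g, r} vs. {g, r, r}.
h_g, h_r = torch.tensor([1., 0.]), torch.tensor([0., 1.])
n1 = torch.stack([h_g, h_r])        # neighborhood of the first blue node
n2 = torch.stack([h_g, h_r, h_r])   # neighborhood of the second blue node

print(n1.max(dim=0).values, n2.max(dim=0).values)  # identical: max fails
print(n1.mean(dim=0), n2.mean(dim=0))              # differ:   mean distinguishes
print(n1.sum(dim=0), n2.sum(dim=0))                # differ:   sum distinguishes

# Figure 3c: {g, r} vs. {g, g, r, r} -- mean also collapses, sum still works.
n3 = torch.stack([h_g, h_g, h_r, h_r])
print(torch.allclose(n1.mean(dim=0), n3.mean(dim=0)))  # True:  mean fails
print(torch.equal(n1.sum(dim=0), n3.sum(dim=0)))       # False: sum distinguishes
```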
MEAN LEARNS DISTRIBUTIONS
To characterize the class of multisets that the mean aggregator can distinguish, consider the example $X_1 = (S, m)$ and $X_2 = (S, k \cdot m)$, where $X_1$ and $X_2$ have the same set of distinct elements, but $X_2$ contains $k$ copies of each element of $X_1$. Any mean aggregator maps $X_1$ and $X_2$ to the same embedding, because it simply takes averages over individual element features. Thus, the mean captures the distribution (proportions) of elements in a multiset, but not the exact multiset.
Corollary 8. Assume $\mathcal{X}$ is countable. There exists a function $f : \mathcal{X} \to \mathbb{R}^n$ so that for $h(X) = \frac{1}{|X|}\sum_{x \in X} f(x)$, $h(X_1) = h(X_2)$ if and only if finite multisets $X_1$ and $X_2$ have the same distribution. That is, assuming $|X_2| \geq |X_1|$, we have $X_1 = (S, m)$ and $X_2 = (S, k \cdot m)$ for some $k \in \mathbb{N}_{\geq 1}$.
The mean aggregator may perform well if, for the task, the statistical and distributional information in the graph is more important than the exact structure. Moreover, when node features are diverse and rarely repeat, the mean aggregator is as powerful as the sum aggregator. This may explain why, despite the limitations identified in Section 5.2, GNNs with mean aggregators are effective for node classification tasks, such as classifying article subjects and community detection, where node features are rich and the distribution of the neighborhood features provides a strong signal for the task.
MAX-POOLING LEARNS SETS WITH DISTINCT ELEMENTS
The examples in Figure 3 illustrate that max-pooling considers multiple nodes with the same feature as only one node (i.e., treats a multiset as a set). Max-pooling captures neither the exact structure nor the distribution. However, it may be suitable for tasks where it is important to identify representative elements or the "skeleton", rather than to distinguish the exact structure or distribution. Qi et al. (2017) empirically show that the max-pooling aggregator learns to identify the skeleton of a 3D point cloud and that it is robust to noise and outliers. For completeness, the next corollary shows that the max-pooling aggregator captures the underlying set of a multiset.
Corollary 9. Assume $\mathcal{X}$ is countable. Then there exists a function $f : \mathcal{X} \to \mathbb{R}^\infty$ so that for $h(X) = \max_{x \in X} f(x)$, $h(X_1) = h(X_2)$ if and only if $X_1$ and $X_2$ have the same underlying set.
REMARKS ON OTHER AGGREGATORS
There are other non-standard and complicated neighbor aggregation schemes that we do not cover in this paper, e.g., Graph Attention Networks (Velickovic et al., 2018) and LSTM pooling (Hamilton et al., 2017a;Murphy et al., 2018). We emphasize that our theoretical framework is general enough to characterize the representational power of any message-passing-based GNNs with different aggregation schemes. In the future, it would be interesting to apply our framework to analyze these aggregation schemes as well as others to gain theoretical understanding.
OTHER RELATED WORK
Despite the empirical success of GNNs, there have been few theoretical studies on them. Scarselli et al. (2009a) show that the earliest GNN model (Scarselli et al., 2009b) can approximate measurable functions in probability. Lei et al. (2017) show that their proposed architecture lies in the RKHS of graph kernels, but it does not tell which graphs can actually be distinguished. These works focus on their specific GNNs. In contrast, our paper provides a general framework for analyzing and characterizing the expressive power of a broad class of GNNs. Recently, a large number of GNN architectures have been proposed and applied. Not surprisingly, building blocks of GIN, e.g., sum aggregation and MLP encoding, also appeared in other models (Battaglia et al., 2016;Scarselli et al., 2009b;Duvenaud et al., 2015). While the previous architectures are often ad-hoc and complicated, our Graph Isomorphism Network (GIN) is simple and theoretically motivated.
EXPERIMENTS
We evaluate and compare the training and test performance of GIN and less powerful GNN variants.
Datasets. We use 9 graph classification benchmarks: 4 bioinformatics datasets (MUTAG, PTC, NCI1, PROTEINS) and 5 social network datasets (COLLAB, IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY and REDDIT-MULTI5K) (Yanardag & Vishwanathan, 2015). In the bioinformatic graphs, the nodes have categorical input features; in the social networks, they have no features. For the REDDIT datasets, we set all node feature vectors to be the same (thus, features here are uninformative); for the other social graphs, we use one-hot encodings of node degrees. Dataset statistics are summarized in Table 1, and more details of the data can be found in Appendix I.
Models and configurations.
We evaluate GINs (Eqs. 4.1 and 4.2) and the less powerful GNN variants. Under the GIN framework, we consider two variants: (1) a GIN that learns $\epsilon$ in Eq. 4.1 by gradient descent, which we call GIN-$\epsilon$, and (2) a simpler (slightly less powerful)¹ GIN, where $\epsilon$ in Eq. 4.1 is fixed to 0, which we call GIN-0. As we will see, GIN-0 shows strong empirical performance: not only does GIN-0 fit training data equally well as GIN-$\epsilon$, it also demonstrates good generalization, slightly but consistently outperforming GIN-$\epsilon$ in terms of test accuracy. For the less powerful GNN variants, we consider architectures that replace the sum in the GIN-0 aggregation with mean or max-pooling², or replace MLPs with 1-layer perceptrons, i.e., a linear mapping followed by ReLU. In Figure 4 and Table 1, a model is named by the aggregator/perceptron it uses. We apply the same graph-level readout (READOUT in Eq. 4.2) for GINs and all GNN variants, specifically, sum readout on bioinformatics datasets and mean readout on social datasets due to better test performance.
Following (Yanardag & Vishwanathan, 2015;Niepert et al., 2016), we perform 10-fold cross-validation with LIB-SVM (Chang & Lin, 2011). We report the average and standard deviation of validation accuracies across the 10 folds within the cross-validation. For all configurations, 5 GNN layers (including the input layer) are applied, and all MLPs have 2 layers. Batch normalization (Ioffe & Szegedy, 2015) is applied on every hidden layer. We use the Adam optimizer (Kingma & Ba, 2015) with initial learning rate 0.01 and decay the learning rate by 0.5 every 50 epochs. The hyper-parameters we tune for each dataset are: (1) the number of hidden units ∈ {16, 32} for bioinformatics graphs and 64 for social graphs; (2) batch size ∈ {32, 128}; (3) dropout ratio ∈ {0, 0.5} after the dense layer (Srivastava et al., 2014); (4) the number of epochs.
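For reference, the stated optimization schedule corresponds to the following PyTorch sketch (ours; the placeholder module and the epoch count are illustrative, since the number of epochs is itself a tuned hyper-parameter):

```python
import torch

# Adam with initial LR 0.01, halved every 50 epochs; `model` stands in for
# any of the GNNs above.
model = torch.nn.Linear(64, 2)  # placeholder module
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(350):  # illustrative epoch count
    # ... one pass over the training minibatches, optimizer.step() per batch ...
    scheduler.step()  # decay the learning rate by 0.5 every 50 epochs
```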
Baselines. We compare the GNNs above with a number of state-of-the-art baselines for graph classification: (1) the WL subtree kernel (Shervashidze et al., 2011), where C-SVM (Chang & Lin, 2011) was used as a classifier; the hyper-parameters we tune are C of the SVM and the number of WL iterations ∈ {1, 2, . . . , 6}; (2) state-of-the-art deep learning architectures, i.e., Diffusion-convolutional neural networks (DCNN) (Atwood & Towsley, 2016), PATCHY-SAN (Niepert et al., 2016) and Deep Graph CNN (DGCNN) (Zhang et al., 2018); and (3) Anonymous Walk Embeddings (AWL) (Ivanov & Burnaev, 2018). For the deep learning methods and AWL, we report the accuracies reported in the original papers.

For all the GNNs, the same configurations were used across datasets: 5 GNN layers (including the input layer), hidden units of size 64, minibatch of size 128, and 0.5 dropout ratio. For the WL subtree kernel, we set the number of iterations to 4, which is comparable to the 5 GNN layers.

Table 1: Classification accuracies (%). † indicates test accuracies (equal to chance rates) when all nodes have the same feature vector. We also report in parentheses the test accuracies when the node degrees are used as input node features. The best-performing GNNs are highlighted with boldface. On datasets where GINs' accuracy is not strictly the highest among GNN variants, GINs are comparable to the best because a paired t-test at significance level 10% does not distinguish GINs from the best. If a baseline performs better than all GNNs, we highlight it with boldface and an asterisk.
RESULTS
Training set performance. We validate our theoretical analysis of the representational power of GNNs by comparing their training accuracies. Figure 4 shows training curves of GINs and less powerful GNN variants with the same hyper-parameter settings. First, both the theoretically most powerful GNN, i.e. GIN-$\epsilon$ (Sum-MLP), and GIN-0 are able to almost perfectly fit all the training sets.
In our experiments, explicit learning of $\epsilon$ in GIN-$\epsilon$ brings no gain in fitting training data compared to fixing $\epsilon$ to 0 as in GIN-0. In comparison, the GNN variants using mean/max pooling or 1-layer perceptrons severely underfit on many datasets. In particular, the training accuracy patterns align with our ranking by the models' representational power: GNN variants with MLPs tend to have higher training accuracies than those with 1-layer perceptrons, and GNNs with sum aggregators tend to fit the training sets better than those with mean and max-pooling aggregators.
The training accuracies of the GNNs on our datasets, however, never exceed those of the WL subtree kernel, which has the same discriminative power as the WL test. For example, on IMDB-BINARY, none of the models can perfectly fit the training set, and the GNNs achieve at most the same training accuracy as the WL kernel. This pattern aligns with our result that the WL test provides an upper bound for the representational capacity of the aggregation-based GNNs. Our theoretical results focus on representational power and do not yet take into account optimization (e.g., local minima). Nonetheless, the empirical results align very well with our theory.
Test set performance. Next, we compare test accuracies. Although our theoretical results do not directly speak about generalization ability of GNNs, it is reasonable to expect that GNNs with strong expressive power can accurately capture graph structures of interest and thus generalize well. Table 1 compares test accuracies of GINs (Sum-MLP), other GNN variants, as well as the state-of-the-art baselines.
First, GINs, especially GIN-0, outperform (or achieve comparable performance to) the less powerful GNN variants on all the 9 datasets, achieving state-of-the-art performance. In particular, GINs shine on the social network datasets, which contain a relatively large number of training graphs. On the Reddit datasets, the same one-dimensional vector was used as node features. Here GINs and sum-aggregation GNNs accurately capture the graph structure (as predicted in Section 5.2) and significantly outperform other models. Mean-aggregation GNNs, however, fail to capture structural information and do not perform better than random guessing. Even if node degrees are provided as input features, mean-based GNNs perform much worse than sum-based GNNs, especially GINs. Comparing among GINs (GIN-$\epsilon$ and GIN-0), we observe that GIN-0 slightly but consistently outperforms GIN-$\epsilon$. Since both models fit training data equally well, the better generalization of GIN-0 may be explained by its simplicity compared to GIN-$\epsilon$.
CONCLUSION
In this paper, we developed theoretical foundations for reasoning about the expressive power of GNNs and proved tight bounds on the representational capacity of popular GNN variants. Along the way, we also designed a provably most powerful GNN under the neighborhood aggregation framework. An interesting direction for future work is to go beyond the neighborhood aggregation (or message passing) framework in order to pursue even more powerful architectures for learning with graphs. It would also be interesting to understand and improve the generalization properties of GNNs.

A PROOF FOR LEMMA 2
Proof. Suppose after $k$ iterations, a graph neural network $A$ has $A(G_1) \neq A(G_2)$ but the WL test cannot decide $G_1$ and $G_2$ are non-isomorphic. It follows that from iteration 0 to $k$ in the WL test, $G_1$ and $G_2$ always have the same collection of node labels. In particular, because $G_1$ and $G_2$ have the same WL node labels for iterations $i$ and $i+1$ for any $i = 0, \ldots, k-1$, $G_1$ and $G_2$ have the same collection, i.e. multiset, of WL node labels $\left\{l_v^{(i)}\right\}$ as well as the same collection of node neighborhoods $\left\{\left(l_v^{(i)}, \left\{l_u^{(i)} : u \in N(v)\right\}\right)\right\}$. Otherwise, the WL test would have obtained different collections of node labels at iteration $i+1$ for $G_1$ and $G_2$ as different multisets get unique new labels.

The WL test always relabels different multisets of neighboring nodes into different new labels. We show that on the same graph $G = G_1$ or $G_2$, if WL node labels $l_v^{(i)} = l_u^{(i)}$, we always have GNN node features $h_v^{(i)} = h_u^{(i)}$ for any iteration $i$. This apparently holds for $i = 0$ because WL and GNN start with the same node features. Suppose this holds for iteration $j$; if for any $u, v$, $l_v^{(j+1)} = l_u^{(j+1)}$, then it must be the case that
$$\left(l_v^{(j)}, \left\{l_w^{(j)} : w \in N(v)\right\}\right) = \left(l_u^{(j)}, \left\{l_w^{(j)} : w \in N(u)\right\}\right).$$
By our assumption on iteration $j$, we must have
$$\left(h_v^{(j)}, \left\{h_w^{(j)} : w \in N(v)\right\}\right) = \left(h_u^{(j)}, \left\{h_w^{(j)} : w \in N(u)\right\}\right).$$
In the aggregation process of the GNN, the same AGGREGATE and COMBINE are applied. The same input, i.e. neighborhood features, generates the same output. Thus, $h_v^{(j+1)} = h_u^{(j+1)}$. By induction, if WL node labels $l_v^{(i)} = l_u^{(i)}$, we always have GNN node features $h_v^{(i)} = h_u^{(i)}$ for any iteration $i$. This creates a valid mapping $\varphi$ such that $h_v^{(i)} = \varphi\left(l_v^{(i)}\right)$ for any $v \in G$. It follows from the fact that $G_1$ and $G_2$ have the same multiset of WL neighborhood labels that $G_1$ and $G_2$ also have the same collection of GNN neighborhood features
$$\left\{\left(h_v^{(i)}, \left\{h_u^{(i)} : u \in N(v)\right\}\right)\right\} = \left\{\left(\varphi\left(l_v^{(i)}\right), \left\{\varphi\left(l_u^{(i)}\right) : u \in N(v)\right\}\right)\right\}.$$
Thus, $\left\{h_v^{(i+1)}\right\}$ are the same. In particular, we have the same collection of GNN node features $\left\{h_v^{(k)}\right\}$ for $G_1$ and $G_2$. Because the graph-level readout function is permutation invariant with respect to the collection of node features, $A(G_1) = A(G_2)$. Hence we have reached a contradiction.
B PROOF FOR THEOREM 3
Proof. Let $A$ be a graph neural network where the condition holds. Let $G_1$, $G_2$ be any graphs which the WL test decides as non-isomorphic at iteration $K$. Because the graph-level readout function is injective, i.e. it maps distinct multisets of node features into unique embeddings, it suffices to show that $A$'s neighborhood aggregation process, with sufficient iterations, embeds $G_1$ and $G_2$ into different multisets of node features. Let us assume $A$ updates node representations as
$$h_v^{(k)} = \phi\left(h_v^{(k-1)}, f\left(\left\{h_u^{(k-1)} : u \in N(v)\right\}\right)\right)$$
with injective functions $f$ and $\phi$. The WL test applies a predetermined injective hash function $g$ to update the WL node labels $l_v^{(k)}$:
$$l_v^{(k)} = g\left(l_v^{(k-1)}, \left\{l_u^{(k-1)} : u \in N(v)\right\}\right).$$
We will show, by induction, that for any iteration $k$, there always exists an injective function $\varphi$ such that $h_v^{(k)} = \varphi\left(l_v^{(k)}\right)$. This apparently holds for $k = 0$ because the initial node features are the same for WL and GNN, $l_v^{(0)} = h_v^{(0)}$ for all $v \in G_1, G_2$. So $\varphi$ could be the identity function for $k = 0$. Suppose this holds for iteration $k - 1$; we show that it also holds for $k$. Substituting $h_v^{(k-1)}$ with $\varphi\left(l_v^{(k-1)}\right)$ gives us
$$h_v^{(k)} = \phi\left(\varphi\left(l_v^{(k-1)}\right), f\left(\left\{\varphi\left(l_u^{(k-1)}\right) : u \in N(v)\right\}\right)\right).$$
It follows from the fact that the composition of injective functions is injective that there exists some injective function $\psi$ so that
$$h_v^{(k)} = \psi\left(l_v^{(k-1)}, \left\{l_u^{(k-1)} : u \in N(v)\right\}\right).$$
Then we have
$$h_v^{(k)} = \psi \circ g^{-1}\left(g\left(l_v^{(k-1)}, \left\{l_u^{(k-1)} : u \in N(v)\right\}\right)\right) = \psi \circ g^{-1}\left(l_v^{(k)}\right).$$
$\varphi = \psi \circ g^{-1}$ is injective because the composition of injective functions is injective. Hence for any iteration $k$, there always exists an injective function $\varphi$ such that $h_v^{(k)} = \varphi\left(l_v^{(k)}\right)$. At the $K$-th iteration, the WL test decides that $G_1$ and $G_2$ are non-isomorphic, that is, the multisets $\left\{l_v^{(K)}\right\}$ are different for $G_1$ and $G_2$. The graph neural network $A$'s node embeddings $\left\{h_v^{(K)}\right\} = \left\{\varphi\left(l_v^{(K)}\right)\right\}$ must also be different for $G_1$ and $G_2$ because of the injectivity of $\varphi$.
C PROOF FOR LEMMA 4
Proof. Before proving our lemma, we first show a well-known result that we will later reduce our problem to: $\mathbb{N}^k$ is countable for every $k \in \mathbb{N}$, i.e. a finite Cartesian product of countable sets is countable. We observe that it suffices to show $\mathbb{N} \times \mathbb{N}$ is countable, because the proof then follows clearly from induction. To show $\mathbb{N} \times \mathbb{N}$ is countable, we construct a bijection $\phi$ from $\mathbb{N} \times \mathbb{N}$ to $\mathbb{N}$ as
$$\phi(m, n) = 2^{m-1} \cdot (2n - 1).$$
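This bijection can be sanity-checked numerically on a finite window (an illustrative snippet of ours):

```python
# phi(m, n) = 2^(m-1) * (2n - 1): every positive integer factors uniquely
# into a power of two times an odd number, so phi is a bijection N x N -> N.
phi = lambda m, n: 2 ** (m - 1) * (2 * n - 1)
images = {phi(m, n) for m in range(1, 8) for n in range(1, 8)}
assert len(images) == 49              # injective on the 7 x 7 window
assert set(range(1, 8)) <= images     # the small integers are all hit
```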
Now we go back to proving our lemma. If we can show that the range of any function $g$ defined on finite multisets of a countable set is also countable, then the lemma holds for any $g^{(k)}$ by induction. Thus, our goal is to show that the range of such $g$ is countable. First, it is clear that the mapping from $g(X)$ to $X$ is injective because $g$ is a well-defined function. It follows that it suffices to show the set of all finite multisets $X \subset \mathcal{X}$ is countable.
Because the union of two countable sets is countable, the following set $\mathcal{X}'$ is also countable:
$$\mathcal{X}' = \mathcal{X} \cup \{e\},$$
where $e$ is a dummy element that is not in $\mathcal{X}$. It follows from the result we showed above, i.e., $\mathbb{N}^k$ is countable for every $k \in \mathbb{N}$, that $\mathcal{X}'^k$ is countable for every $k \in \mathbb{N}$. It remains to show there exists an injective mapping from the set of finite multisets in $\mathcal{X}$ to $\mathcal{X}'^k$ for some $k \in \mathbb{N}$.
We construct an injective mapping $h$ from the set of finite multisets $X \subset \mathcal{X}$ to $\mathcal{X}'^k$ for some $k \in \mathbb{N}$ as follows. Because $\mathcal{X}$ is countable, there exists a mapping $Z : \mathcal{X} \to \mathbb{N}$ from $x \in \mathcal{X}$ to natural numbers. We can sort the elements $x \in X$ by $Z(x)$ as $x_1, x_2, \ldots, x_n$, where $n = |X|$. Because the multisets $X$ are finite, there exists $k \in \mathbb{N}$ so that $|X| < k$ for all $X$. We can then define $h$ as
$$h(X) = (x_1, x_2, \ldots, x_n, e, e, \ldots),$$
where the remaining $k - n$ coordinates are filled with the dummy element $e$. It is clear that $h$ is injective because for any finite multisets $X$ and $Y$, $h(X) = h(Y)$ only if $X$ is equivalent to $Y$. Hence it follows that the range of $g$ is countable, as desired.
D PROOF FOR LEMMA 5
Proof. We first prove that there exists a mapping $f$ so that $\sum_{x \in X} f(x)$ is unique for each finite multiset $X$. Because $\mathcal{X}$ is countable, there exists a mapping $Z : \mathcal{X} \to \mathbb{N}$ from $x \in \mathcal{X}$ to natural numbers. Because the multisets $X$ are finite, there exists a number $N \in \mathbb{N}$ so that $|X| < N$ for all $X$. Then an example of such $f$ is $f(x) = N^{-Z(x)}$. This $f$ can be viewed as a more compressed form of a one-hot vector or $N$-digit representation. Thus, $h(X) = \sum_{x \in X} f(x)$ is an injective function of multisets.
$\phi\left(\sum_{x \in X} f(x)\right)$ is permutation invariant, so it is a well-defined multiset function. For any multiset function $g$, we can construct such $\phi$ by letting $\phi\left(\sum_{x \in X} f(x)\right) = g(X)$. Note that such $\phi$ is well-defined because $h(X) = \sum_{x \in X} f(x)$ is injective.
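The construction $f(x) = N^{-Z(x)}$ can be sanity-checked numerically. The snippet below (ours, with an assumed toy universe of three elements and $N = 5$) verifies with exact rational arithmetic that all finite multisets below the size bound receive distinct sums:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

universe = ["a", "b", "c"]
Z = {x: i + 1 for i, x in enumerate(universe)}   # Z : X -> N
N = 5                                            # bound with |X| < N

def h(multiset):
    # f(x) = N^{-Z(x)}; exact rationals avoid floating-point ties.
    return sum(Fraction(1, N ** Z[x]) for x in multiset)

multisets = [ms for size in range(1, N)
             for ms in combinations_with_replacement(universe, size)]
values = [h(ms) for ms in multisets]
assert len(values) == len(set(values))  # every finite multiset gets a distinct sum
print(f"{len(values)} multisets, {len(set(values))} distinct values")
```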
E PROOF OF COROLLARY 6
Proof. Following the proof of Lemma 5, we consider $f(x) = N^{-Z(x)}$, where $N$ and $Z : \mathcal{X} \to \mathbb{N}$ are the same as defined in Appendix D. Let $h(c, X) \equiv (1 + \epsilon) \cdot f(c) + \sum_{x \in X} f(x)$. Our goal is to show that for any $(c', X') \neq (c, X)$ with $c, c' \in \mathcal{X}$ and finite multisets $X, X' \subset \mathcal{X}$, $h(c, X) \neq h(c', X')$ holds, if $\epsilon$ is an irrational number. We prove by contradiction. For any $(c, X)$, suppose there exists $(c', X')$ such that $(c', X') \neq (c, X)$ but $h(c, X) = h(c', X')$ holds. Let us consider the following two cases: (1) $c' = c$ but $X' \neq X$, and (2) $c' \neq c$. For the first case, $h(c, X) = h(c, X')$ implies $\sum_{x \in X} f(x) = \sum_{x \in X'} f(x)$. It follows from Lemma 5 that the equality will not hold, because with $f(x) = N^{-Z(x)}$, $X' \neq X$ implies $\sum_{x \in X} f(x) \neq \sum_{x \in X'} f(x)$. Thus, we reach a contradiction. For the second case, we can similarly rewrite $h(c, X) = h(c', X')$ as
$$\epsilon \cdot \left(f(c) - f(c')\right) = \left(f(c') + \sum_{x \in X'} f(x)\right) - \left(f(c) + \sum_{x \in X} f(x)\right). \quad (E.1)$$
Because $\epsilon$ is an irrational number and $f(c) - f(c')$ is a non-zero rational number, the L.H.S. of Eq. E.1 is irrational. On the other hand, the R.H.S. of Eq. E.1, the sum of a finite number of rational numbers, is rational. Hence the equality in Eq. E.1 cannot hold, and we have reached a contradiction.

For any function $g$ over the pairs $(c, X)$, we can construct such $\phi$ for the desired decomposition by letting $\phi\left((1 + \epsilon) \cdot f(c) + \sum_{x \in X} f(x)\right) = g(c, X)$. Note that such $\phi$ is well-defined because $h(c, X) = (1 + \epsilon) \cdot f(c) + \sum_{x \in X} f(x)$ is injective.
F PROOF FOR LEMMA 7
Proof. Let us consider the example $X_1 = \{1, 1, 1, 1, 1\}$ and $X_2 = \{2, 3\}$, i.e. two different multisets of positive numbers that sum up to the same value. We will be using the homogeneity of ReLU.
Let $W$ be an arbitrary linear transform that maps $x \in X_1, X_2$ into $\mathbb{R}^n$. It is clear that, at the same coordinates, $Wx$ are either positive or negative for all $x$ because all $x$ in $X_1$ and $X_2$ are positive. It follows that $\text{ReLU}(Wx)$ are either positive or 0 at the same coordinate for all $x$ in $X_1, X_2$. For the coordinates where $\text{ReLU}(Wx)$ are 0, we have $\sum_{x \in X_1} \text{ReLU}(Wx) = \sum_{x \in X_2} \text{ReLU}(Wx)$. For the coordinates where $Wx$ are positive, linearity still holds. It follows from linearity that
$$\sum_{x \in X} \text{ReLU}(Wx) = \text{ReLU}\left(W \sum_{x \in X} x\right),$$
where $X$ could be $X_1$ or $X_2$. Because $\sum_{x \in X_1} x = \sum_{x \in X_2} x$, we have the following as desired:
$$\sum_{x \in X_1} \text{ReLU}(Wx) = \sum_{x \in X_2} \text{ReLU}(Wx).$$
G PROOF FOR COROLLARY 8
Proof. Suppose multisets $X_1$ and $X_2$ have the same distribution; without loss of generality, let us assume $X_1 = (S, m)$ and $X_2 = (S, k \cdot m)$ for some $k \in \mathbb{N}_{\geq 1}$, i.e. $X_1$ and $X_2$ have the same underlying set and the multiplicity of each element in $X_2$ is $k$ times that in $X_1$. Then we have $|X_2| = k|X_1|$ and $\sum_{x \in X_2} f(x) = k \cdot \sum_{x \in X_1} f(x)$. Thus,
$$\frac{1}{|X_2|}\sum_{x \in X_2} f(x) = \frac{1}{k \cdot |X_1|} \cdot k \cdot \sum_{x \in X_1} f(x) = \frac{1}{|X_1|}\sum_{x \in X_1} f(x).$$
Now we show that there exists a function $f$ so that $\frac{1}{|X|}\sum_{x \in X} f(x)$ is unique for distributionally equivalent $X$. Because $\mathcal{X}$ is countable, there exists a mapping $Z : \mathcal{X} \to \mathbb{N}$ from $x \in \mathcal{X}$ to natural numbers. Because the multisets $X$ are finite, there exists a number $N \in \mathbb{N}$ so that $|X| < N$ for all $X$. Then an example of such $f$ is $f(x) = N^{-2Z(x)}$.
H PROOF FOR COROLLARY 9
Proof. Suppose multisets $X_1$ and $X_2$ have the same underlying set $S$; then we have
$$\max_{x \in X_1} f(x) = \max_{x \in S} f(x) = \max_{x \in X_2} f(x).$$
Now we show that there exists a mapping $f$ so that $\max_{x \in X} f(x)$ is unique for $X$s with the same underlying set. Because $\mathcal{X}$ is countable, there exists a mapping $Z : \mathcal{X} \to \mathbb{N}$ from $x \in \mathcal{X}$ to natural numbers. Then an example of such $f : \mathcal{X} \to \mathbb{R}^\infty$ is defined as $f_i(x) = 1$ for $i = Z(x)$ and $f_i(x) = 0$ otherwise, where $f_i(x)$ is the $i$-th coordinate of $f(x)$. Such an $f$ essentially maps a multiset to its one-hot embedding.
I DETAILS OF DATASETS
We give detailed descriptions of the datasets used in our experiments. Further details can be found in (Yanardag & Vishwanathan, 2015).
Social network datasets. IMDB-BINARY and IMDB-MULTI are movie collaboration datasets. Each graph corresponds to an ego-network for each actor/actress, where nodes correspond to actors/actresses and an edge is drawn between two actors/actresses if they appear in the same movie. Each graph is derived from a pre-specified genre of movies, and the task is to classify the genre the graph is derived from. REDDIT-BINARY and REDDIT-MULTI5K are balanced datasets where each graph corresponds to an online discussion thread and nodes correspond to users. An edge was drawn between two nodes if at least one of them responded to the other's comment. The task is to classify each graph to a community or a subreddit it belongs to. COLLAB is a scientific collaboration dataset, derived from 3 public collaboration datasets, namely, High Energy Physics, Condensed Matter Physics and Astro Physics. Each graph corresponds to an ego-network of different researchers from each field. The task is to classify each graph to a field the corresponding researcher belongs to.
Bioinformatics datasets. MUTAG is a dataset of 188 mutagenic aromatic and heteroaromatic nitro compounds with 7 discrete labels. PROTEINS is a dataset where nodes are secondary structure elements (SSEs) and there is an edge between two nodes if they are neighbors in the amino-acid sequence or in 3D space. It has 3 discrete labels, representing helix, sheet or turn. PTC is a dataset of 344 chemical compounds that reports the carcinogenicity for male and female rats and it has 19 discrete labels. NCI1 is a dataset made publicly available by the National Cancer Institute (NCI) and is a subset of balanced datasets of chemical compounds screened for ability to suppress or inhibit the growth of a panel of human tumor cell lines, having 37 discrete labels.
Figure 3: Examples of simple graph structures that mean and max-pooling aggregators fail to distinguish.
Figure 4: Training set performance of GINs, less powerful GNN variants, and the WL subtree kernel.
Figure 1: Subtree structures at the blue nodes in the Weisfeiler-Lehman graph isomorphism test. Two WL iterations can capture and distinguish the structure of rooted subtrees of height 2.
¹ There exist certain (somewhat contrived) graphs that GIN-$\epsilon$ and the WL test can distinguish but GIN-0 cannot.
² For REDDIT-BINARY, REDDIT-MULTI5K, and COLLAB, we did not run experiments for max-pooling due to GPU memory constraints.
ACKNOWLEDGMENTS

This research was supported by NSF CAREER award 1553284 and a DARPA D3M award. This research was also supported in part by NSF, ARO MURI, IARPA HFC, Boeing, Huawei, Stanford Data Science Initiative, and Chan Zuckerberg Biohub. We thank Prof. Ken-ichi Kawarabayashi and Prof. Masashi Sugiyama for supporting this research with computing resources and great advice. We thank Tomohiro Sonobe and Kento Nozawa for managing servers. We thank Rex Ying and William Hamilton for helpful reviews and positive comments. We thank Simon S. Du, Dr. Yasuo Tabei, Chengtao Li, and Jingling Li for helpful discussions and positive comments.
James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 1993-2001, 2016.
László Babai. Graph isomorphism in quasipolynomial time. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, pp. 684-697. ACM, 2016.
László Babai and Ludik Kucera. Canonical labelling of graphs in linear average time. In Foundations of Computer Science, 1979., 20th Annual Symposium on, pp. 39-46. IEEE, 1979.
Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems (NIPS), pp. 4502-4510, 2016.
Jin-Yi Cai, Martin Fürer, and Neil Immerman. An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4):389-410, 1992.
J. A. Nelder and R. W. M. Wedderburn. Generalized linear models. Journal of the Royal Statistical Society, Series A, General, 135:370-384, 1972.
Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International Conference on Machine Learning (ICML), pp. 2014-2023, 2016.
Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 1(2):4, 2017.
Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In Advances in Neural Information Processing Systems, pp. 4967-4976, 2017.
Adam Santoro, Felix Hill, David Barrett, Ari Morcos, and Timothy Lillicrap. Measuring abstract reasoning in neural networks. In International Conference on Machine Learning, pp. 4477-4486, 2018.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. Computational capabilities of graph neural networks. IEEE Transactions on Neural Networks, 20(1):81-102, 2009a.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009b.
Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. Weisfeiler-lehman graph kernels. Journal of Machine Learning Research, 12(Sep):2539-2561, 2011.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations (ICLR), 2018.
Saurabh Verma and Zhi-Li Zhang. Graph capsule convolutional neural networks. arXiv preprint arXiv:1805.08090, 2018.
Boris Weisfeiler and AA Lehman. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, 2(9):12-16, 1968.
Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning (ICML), pp. 5453-5462, 2018.
Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1365-1374. ACM, 2015.
Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems (NIPS), 2018.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in Neural Information Processing Systems, pp. 3391-3401, 2017.
Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In AAAI Conference on Artificial Intelligence, pp. 4438-4445, 2018.
222,379,753 | SELF-TRAINING FOR FEW-SHOT TRANSFER ACROSS EXTREME TASK DIFFERENCES | All few-shot learning techniques must be pre-trained on a large, labeled "base dataset". In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray images), one must resort to pre-training in a different "source" problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In this paper, we present a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain. We show that this improves one-shot performance on the target domain by 2.9 points on average on a challenging benchmark with multiple domains. | [
52912260,
202230734,
4009713,
49868626,
3507990
] | SELF-TRAINING FOR FEW-SHOT TRANSFER ACROSS EXTREME TASK DIFFERENCES
Cheng Perng Phoo
Department of Computer Science
Cornell University
[email protected]
Bharath Hariharan
Department of Computer Science
Cornell University
SELF-TRAINING FOR FEW-SHOT TRANSFER ACROSS EXTREME TASK DIFFERENCES
Under review as a conference paper at ICLR 2021
All few-shot learning techniques must be pre-trained on a large, labeled "base dataset". In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray images), one must resort to pre-training in a different "source" problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In this paper, we present a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain. We show that this improves one-shot performance on the target domain by 2.9 points on average on a challenging benchmark with multiple domains.
INTRODUCTION
Despite progress in visual recognition, training recognition systems for new classes in novel domains requires thousands of labeled training images per class and several hours of compute. For example, to train a recognition system for different kinds of pneumonia in chest X-rays, one would have to get radiologists to label thousands of X-ray images, and then spend several hours to train a neural network on high-end GPUs. The high cost of doing so precludes many downstream applications.
This issue has motivated research on few-shot learners: systems that can rapidly learn novel classes from a few examples. However, all few-shot learners must be trained on a large base dataset of classes from the same domain. This is a problem in many domains (such as medical imagery, satellite images), where no large labeled dataset of base classes exists. The only alternative is to train the few-shot learner on a different domain (a common choice is to use ImageNet). Unfortunately, few-shot learning techniques assume that novel and base classes share modes of variation, class-distinctive features (Snell et al., 2017), or other inductive biases. These assumptions are broken when the difference between base and novel classes is as extreme as the difference between object classification in internet photos and pneumonia detection in X-ray images. As such, recent work has found that all few-shot learners fail in the face of such extreme task/domain differences, underperforming even naive transfer learning from ImageNet (Guo et al., 2020).
Another alternative comes to light when one considers that many of these problem domains have unlabeled data (e.g., undiagnosed X-ray images, or unlabeled satellite imagery). This suggests the possibility of using self-supervised techniques on this unlabeled data to produce a good feature representation, which can then be used to train linear classifiers for the target classification task using just a few labeled examples. Indeed, recent work has explored self-supervised learning on a variety of domains (Wallace & Hariharan, 2020). However, self-supervised learning starts tabula rasa, and as such requires extremely large amounts of unlabeled data (on the order of millions of images). With more practical unlabeled datasets, self-supervised techniques still struggle to outcompete naive ImageNet transfer (Wallace & Hariharan, 2020). We are thus faced with a conundrum: on the one hand, few-shot learning techniques fail to bridge the extreme differences between ImageNet and domains such as X-rays. On the other hand, self-supervised techniques fail when they ignore inductive biases from ImageNet. A sweet spot in the middle, if it exists, is elusive.

Figure 1: Problem setup. In the representation learning phase (left), the learner has access to a large labeled "base dataset" in the source domain, and some unlabeled data in the target domain, on which to pre-train its representation. The learner must then rapidly learn/adapt to few-shot tasks in the target domain in the evaluation phase (right).
In this paper, we solve this conundrum by presenting a strategy that adapts feature representations trained on source tasks to extremely different target domains, so that target task classifiers can then be trained on the adapted representation with very little labeled data. Our key insight is that a pre-trained base classifier from the source domain, when applied to the target domain, induces a grouping of images in the target domain. This grouping captures what the pre-trained classifier thinks are similar or dissimilar in the target domain. Even though the classes of the pre-trained classifier are themselves irrelevant in the target domain, the induced notions of similarity and dissimilarity might still be relevant and informative. This induced notion of similarity is in contrast to current self-supervised techniques, which often function by considering each image as its own class and dissimilar from every other image in the dataset (Wu et al., 2018; Chen et al., 2020). We propose to train feature representations on the novel target domain to replicate this induced grouping. This approach produces a feature representation that is (a) adapted to the target domain, while (b) maintaining prior knowledge from the source task to the extent that it is relevant. A discerning reader might observe the similarity of this approach to self-training, except that our goal is to adapt the feature representation to the target domain, rather than improve the base classifier itself.
We call our approach "Self Training to Adapt Representations To Unseen Problems", or STARTUP. On a recently released benchmark consisting of datasets from extremely different domains (Guo et al., 2020), we show that STARTUP provides significant gains (up to 2.9 points on average) over state-of-the-art few-shot learning, transfer learning, and self-supervision baselines. To the best of our knowledge, ours is the first attempt to bridge such large task/domain gaps and successfully and consistently outperform naive transfer in cross-domain few-shot learning.
PROBLEM SETUP
Our goal is to build learners for novel domains that can be quickly trained to recognize new classes when presented with very few labeled data points ("few-shot"). Formally, the target domain is defined by a set of data points (e.g., images) $X_N$, an unknown set of classes (or label space) $Y_N$, and a distribution $D_N$ over $X_N \times Y_N$. A "few-shot learning task" in this domain will consist of a set of classes $Y \subset Y_N$, a very small training set ("support")
$$S = \{(x_i, y_i)\}_{i=1}^{n} \sim D_N^n, \quad y_i \in Y,$$
and a small test set ("query")
$$Q = \{x_i\}_{i=1}^{m} \sim D_N^m.$$
When presented with such a few-shot learning task, the learner must rapidly learn the classes presented and accurately classify the query images.
As with prior few-shot learning work, we will assume that before being presented with few-shot learning tasks in the target domain, the learner has access to a large annotated dataset $D_B$ known as the base dataset. However, crucially unlike prior work on few-shot learning, we assume that this base dataset is drawn from a very different distribution. In fact, we assume that the base dataset is drawn from a completely disjoint image space $X_B$ and a disjoint set of classes $Y_B$:
$$D_B = \{(x_i, y_i)\}_{i=1}^{N_B} \subset X_B \times Y_B,$$
where $X_B$ is the set of data (or the source domain) and $Y_B$ is the set of base classes. Because the base dataset is so different from the target domain, we introduce another difference vis-a-vis the conventional few-shot learning setup: the learner is given access to an additional unlabeled dataset from the target domain:
$$D_u = \{x_i\}_{i=1}^{N_u} \sim D_N^{N_u}.$$
Put together, the learner will undergo two phases. In the representation learning phase, the learner will pre-train its representation on $D_B$ and $D_u$; it then goes into the evaluation phase, where it is presented with few-shot tasks from the target domain on which it learns the novel classes (Figure 1).
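To make the evaluation-phase interface concrete, the following sketch shows how one few-shot task could be sampled from the target domain. It is an illustrative reconstruction rather than the benchmark's code; `images_by_class` is a hypothetical mapping from novel classes to their images.

import random

def sample_task(images_by_class, n_way=5, k_shot=1, n_query=15):
    """Sample one few-shot task: a labeled support set S and a query set Q."""
    classes = random.sample(sorted(images_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(images_by_class[cls], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]  # S: few labeled examples
        query += [(x, label) for x in examples[k_shot:]]    # Q: to be classified
    return support, query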
RELATED WORK
Few-shot Learning (FSL). This paper explores few-shot transfer, and as such the closest related work is on few-shot learning. Few-shot learning techniques are typically predicated on some degree of similarity between classes in the base dataset and novel classes. For example, they may assume that features that are discriminative for the base classes are also discriminative for the novel classes, suggesting a metric learning-based approach (Qi et al., 2018; Snell et al., 2017; Vinyals et al., 2016; Sung et al., 2018; Hou et al., 2019). Alternatively, they may assume that model initializations that lead to rapid convergence on the base classes are also good initializations for the novel classes (Finn et al., 2017; Ravi & Larochelle, 2017; Nichol & Schulman; Rusu et al., 2019). Other methods assume that modes of intra-class variation are shared, suggesting the possibility of learned, class-agnostic augmentation policies (Hariharan & Girshick, 2017; Chen et al., 2019b). Somewhat related is the use of a class-agnostic parametric model that can "denoise" few-shot models, be they from the base or novel classes (Gidaris & Komodakis, 2019). In contrast to such strong assumptions of similarity between base and novel classes, this paper tackles few-shot learning problems where base and novel classes come from very different domains, also called cross-domain few-shot learning.
Cross-domain Few-shot Classification (CD-FSL). When the domain gap between the base and novel datasets is large, recent work (Guo et al., 2020; Chen et al., 2019a) has shown that existing state-of-the-art few-shot learners fail to generalize. Tseng et al. (2020) attempt to address this problem by simulating cross-domain transfer during training. However, their approach assumes access to an equally diverse array of domains during training, and a much smaller domain gap at test time: for example, both base and novel datasets are from internet images. Our paper tackles a more extreme domain gap.
Few-shot learning with unlabeled data. This paper uses unlabeled data from the target domain to bridge the domain gap. Semi-supervised few-shot learning (SS-FSL) (Ren et al., 2018; Rodríguez et al., 2020; Wang et al., 2020) and transductive few-shot learning (T-FSL) (Liu et al., 2019; Dhillon et al., 2020; Hou et al., 2019; Wang et al., 2020; Rodríguez et al., 2020) do use such unlabeled data, but only during evaluation, assuming that representations trained on the base dataset are good enough. In contrast, our approach leverages the unlabeled data during representation learning. The two are orthogonal innovations and can be combined.
Self-Training. Our approach is closely related to self-training, which has been shown to be effective for semi-supervised training and knowledge distillation. In self-training, a teacher model trained on the labeled data is used to label the unlabeled data, and another student model is trained on both the original labeled data and the unlabeled data labeled by the teacher. Xie et al. (2020) and Yalniz et al. (2019) have shown that self-training can improve ImageNet classification performance. Knowledge distillation (Hinton et al., 2015) is similar but aims to compress a large teacher network by training a student network to mimic the predictions of the teacher network. A key difference between these and our work is that self-training / knowledge distillation focus on a single task of interest, i.e., there is no change in label space. Our approach is similar, but we are interested in transferring to novel domains with a wholly different label space: an unexplored scenario.
Domain Adaptation. Transfer to new domains is also in the purview of domain adaptation (Tzeng et al., 2017; Hoffman et al., 2018; Long et al., 2018; Xu et al., 2019; Laradji & Babanezhad, 2020; Wang & Deng, 2018; Wilson & Cook, 2020), where the goal is to transfer knowledge from the label-abundant source domain to a target domain where only unlabeled data is available. However, a key assumption in domain adaptation is that the source domain and target domain share the same label space, which does not hold for few-shot learning.
Self-supervised Learning. Learning from unlabeled data has seen a resurgence of interest with advances in self-supervised learning. Early self-supervised approaches were based on handcrafted "pretext tasks" such as solving jigsaw puzzles (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016), or predicting rotation (Gidaris et al., 2018). A more recent (and better performing) line of self-supervised learning is contrastive learning (Wu et al., 2018; Misra & Maaten, 2020; He et al., 2020; Chen et al., 2020), which aims to learn representations by considering each image together with its augmentations as a separate class. While self-supervision has been shown to boost few-shot learning (Gidaris et al., 2019; Su et al., 2020), its utility in cases of large domain gaps between base and novel datasets has not been evaluated. Our work focuses on this challenging scenario.
APPROACH
Consider a classification model $f_\theta = C \circ \phi$, where $\phi$ embeds input $x$ into $\mathbb{R}^d$ and $C$ is a (typically linear) classifier head that maps $\phi(x)$ to predicted probabilities $P(y|x)$; $\theta$ is a vector of parameters. During representation learning, STARTUP performs the following three steps:
1. Learn a teacher model $\theta_0$ on the base dataset $D_B$ by minimizing the cross-entropy loss.
2. Use the teacher model to construct a softly-labeled set
$$D_u^* = \{(x_i, \bar{y}_i)\}_{i=1}^{N_u}, \quad \text{where } \bar{y}_i = f_{\theta_0}(x_i)\ \ \forall x_i \in D_u. \tag{1}$$
Note that $\bar{y}_i$ is a probability distribution as described above.
3. Learn a new student model $\theta^*$ on $D_B$ and $D_u^*$ by optimizing:
$$\min_\theta\ \frac{1}{N_B} \sum_{(x_i, y_i) \in D_B} l_{CE}(f_\theta(x_i), y_i) + \frac{1}{N_u} \sum_{(x_j, \bar{y}_j) \in D_u^*} l_{KL}(f_\theta(x_j), \bar{y}_j) + l_{unlabeled}(D_u), \tag{2}$$
where $l_{CE}$ is the cross-entropy loss, $l_{KL}$ is the KL divergence, and $l_{unlabeled}$ is any unsupervised/self-supervised loss function (see below).
The third term, $l_{unlabeled}$, is intended to help the learner extract additional useful knowledge specific to the target domain. We use a state-of-the-art self-supervised loss function based on contrastive learning: SimCLR (Chen et al., 2020). The SimCLR loss encourages two augmentations of the same image to be closer in feature space to each other than to other images in the batch. We refer the reader to the paper for the detailed loss formulation.
The first two terms are similar to those in prior self-training literature (Yalniz et al., 2019; Xie et al., 2020). However, while in prior self-training work the second term ($l_{KL}$) is thought to mainly introduce noise during training, we posit that $l_{KL}$ has a more substantial role to play here: it encourages the model to learn feature representations that emphasize the groupings induced by the pseudo-labels $\bar{y}_i$ on the target domain. We analyze this intuition in Section 5.2.2.
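A minimal PyTorch sketch of one STARTUP update under Eq. 2 is given below. It is our illustrative reading of the three steps rather than the released code; `student`, `teacher`, and `simclr_loss` are assumed stand-ins for a trainable model $f_\theta$, the frozen teacher $f_{\theta_0}$, and a SimCLR-style criterion.

import torch
import torch.nn.functional as F

def startup_loss(student, teacher, base_x, base_y, unlab_x, unlab_views, simclr_loss):
    with torch.no_grad():                                 # step 2: soft pseudo-labels
        y_bar = F.softmax(teacher(unlab_x), dim=1)
    l_ce = F.cross_entropy(student(base_x), base_y)       # supervised term on D_B
    log_p = F.log_softmax(student(unlab_x), dim=1)
    l_kl = F.kl_div(log_p, y_bar, reduction="batchmean")  # match teacher grouping on D_u*
    l_ssl = simclr_loss(unlab_views)                      # self-supervised term on D_u
    return l_ce + l_kl + l_ssl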
EVALUATION
STARTUP is agnostic to the inference method used during evaluation; any inference method that relies on a learned representation (Snell et al., 2017) can be used with STARTUP. For simplicity, and based on results reported by Guo et al. (2020), we freeze the representation $\phi$ after performing STARTUP, train a linear classifier on the support set, and evaluate that classifier on the query set.

INITIALIZATION STRATEGIES

Prior self-training work (Xie et al., 2020) found that training the student from scratch sometimes yields better results for ImageNet classification. For STARTUP, the student is by default initialized to the teacher embedding with a randomly initialized classifier. We experimented with two other initialization strategies, from scratch (STARTUP-Rand) and from the full teacher model (STARTUP-T), and find that STARTUP-Rand hurts performance, whereas STARTUP-T performs slightly worse than STARTUP (see Appendix A.3).
EXPERIMENTS
We defer the implementation details to Appendix A.1.
FEW-SHOT TRANSFER ACROSS DRASTICALLY DIFFERENT DOMAINS
Benchmark. We experiment with the challenging BSCD-FSL benchmark introduced in Guo et al. (2020). The base dataset in this benchmark is miniImageNet (Vinyals et al., 2016), which is an object recognition task on internet images. There are 4 novel datasets in the benchmark, none of which involve objects, and all of which come from a very different domain than internet images: CropDiseases (recognizing plant diseases in leaf images), EuroSAT (predicting land use from satellite images), ISIC2018 (identifying melanoma from images of skin lesions), and ChestX (diagnosing chest X-rays). Guo et al. (2020) found that state-of-the-art few-shot learners fail on this benchmark.
To construct our setup, we randomly sample 20% of the data from each novel dataset to form the respective unlabeled datasets $D_u$. We use the rest for sampling tasks for evaluation. Following Guo et al. (2020), we evaluate 5-way k-shot classification tasks (the support set consists of 5 classes and k examples per class) for k ∈ {1, 5} and report the mean and 95% confidence interval over 600 few-shot tasks (conclusions generalize to k ∈ {20, 50}; see Appendix A.2).
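For reference, the reported statistic can be computed as below; this is the standard normal-approximation confidence interval, shown as an assumed reconstruction rather than the benchmark's exact code.

import numpy as np

def mean_and_ci95(task_accuracies):
    """Mean accuracy and 95% confidence interval over sampled few-shot tasks."""
    acc = np.asarray(task_accuracies, dtype=float)
    mean = acc.mean()
    ci95 = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))  # half-width of the 95% CI
    return mean, ci95  # reported as "mean ± ci95"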
Baselines. We compare to the techniques reported in Guo et al. (2020), which include most state-of-the-art approaches as well as a cross-domain few-shot technique (Tseng et al., 2020). The top performer among these is naive Transfer, which simply trains a convolutional network to classify the base dataset, and uses the resulting representation to learn a linear classifier when faced with novel few-shot tasks. These techniques do not use the novel-domain unlabeled data.
We also compare to an additional baseline, SimCLR, which uses the novel-domain unlabeled data $D_u$ to train a representation using SimCLR (Chen et al., 2020), and then uses the resulting representation to learn linear classifiers for few-shot tasks. This builds upon state-of-the-art self-supervised techniques.
Following the benchmark, all methods use a ResNet-10 (He et al., 2016) unless otherwise stated.
RESULTS
We present our main results on miniImageNet → BSCD-FSL in Table 1.

Table 1: 5-way k-shot classification accuracy on miniImageNet → BSCD-FSL. Mean and 95% confidence interval are reported. (no SS) indicates removal of SimCLR. ProtoNet: Snell et al. (2017); MAML: Finn et al. (2017); MetaOpt: Lee et al. (2019); FWT: Tseng et al. (2020). *Numbers reported in Guo et al. (2020); our re-implementation of Transfer uses a different batch size and 80% of the original test set for evaluation.

STARTUP vs. few-shot learning techniques. STARTUP performs significantly better than all few-shot techniques on most datasets (except ChestX, where all methods are similar). Compared to the previous state of the art, Transfer, we observe an average improvement of 2.9 points in the 1-shot case. The improvement is particularly large on CropDisease, where STARTUP provides almost a 6-point increase for 1-shot classification. This improvement is significant given the simplicity of our approach, and given that all meta-learning techniques underperform this baseline.

STARTUP vs. SimCLR. The SimCLR baseline in general tends to underperform naive transfer from miniImageNet, and consequently, STARTUP performs significantly better than SimCLR on ISIC and EuroSAT. The exception to this is CropDisease, where SimCLR produces a surprisingly good representation. We conjecture that the base embedding is not a good starting point for this dataset.

Different variants of STARTUP. In general, all variants of STARTUP outperform the transfer baseline. However, we find that using SimCLR as an auxiliary loss to train the student (STARTUP vs. STARTUP (no SS), and STARTUP-T vs. STARTUP-T (no SS)) is beneficial. Additionally, re-initializing the classifier head works better than using the pre-trained classifier (STARTUP vs. STARTUP-T), potentially because the pre-trained classifier is too overfitted to the base dataset.
Larger and stronger teachers. To unpack the impact of teacher quality, we experiment with a larger network and transfer from the full ILSVRC 2012 dataset (Deng et al., 2009) to BSCD-FSL. In particular, we use the publicly available pre-trained ResNet-18 (He et al., 2016) as a teacher and train a student via STARTUP. We compare this to a transfer baseline that uses the same network and ImageNet as the training set. The results can be found in Table 2. Surprisingly, larger, richer embeddings do not always transfer better, in contrast to the in-domain results reported by Hariharan & Girshick (2017). However, STARTUP is still useful in improving performance: the absolute improvement of STARTUP over Transfer remains about the same on most datasets, except EuroSAT and CropDisease, where larger improvements are observed.
WHY SHOULD STARTUP WORK?
While it is clear that STARTUP helps improve few-shot transfer across extreme domain differences, it is not clear why or how it achieves this improvement. Below, we look at a few possible hypotheses.
HYPOTHESIS 1: STARTUP ADDS NOISE WHICH INCREASES ROBUSTNESS.
Xie et al. (2020) posit that self-training introduces noise when training the student and thus yields a more robust student. More robust students may learn more generalizable representations, and this may allow STARTUP to bridge the domain gap. Under this hypothesis, the function of the unlabeled data is only to add noise during training. This in turn suggests that STARTUP should yield improvements on the target tasks even if trained on unlabeled data from a different domain.
To test this, we train a STARTUP ResNet-18 student on EuroSAT and ImageNet and evaluate it on CropDisease. This model yields a 5-way 1-shot performance of 70.40 ± 0.86 (88.78 ± 0.54 for 5-shot), significantly underperforming the naive Transfer baseline (Table 2; see Appendix A.6 for different combinations of unlabeled dataset and target dataset). This suggests that the hypothesis is incorrect: the unlabeled data is not merely functioning as noise. Rather, STARTUP is learning inherent structure in the target domain useful for downstream classification. The question now becomes what inherent structure STARTUP is learning, which leads us to the next hypothesis.
HYPOTHESIS 2: STARTUP ENHANCES TEACHER-INDUCED GROUPINGS
The teacher produces a meaningful grouping of the data from the target domain. The predictions made by the teacher essentially induce a grouping on the target domain. Even though the base label space and novel label space are disjoint, the groupings produced by the teacher might not be entirely irrelevant for the downstream classification task. To test this, we first assign each example in the novel datasets to its most probable prediction by the teacher (ResNet-18 trained on ImageNet). We then compute the adjusted mutual information (AMI) (Vinh et al., 2010) between the resulting grouping and the ground truth labels. AMI ranges from 0 for unrelated groupings to 1 for identical groupings. From Table 3, we see that on EuroSAT and CropDisease, there is quite a bit of agreement between the induced grouping and the ground truth labels. Interestingly, these are the two datasets where we observe the best transfer performance and the most improvement from STARTUP (Table 2), suggesting a correlation between this agreement and downstream classification performance.
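The AMI measurement in Table 3 can be reproduced with scikit-learn as sketched below; `teacher` and `loader` are hypothetical stand-ins for the ImageNet-trained ResNet-18 and a data loader over one novel dataset.

import torch
from sklearn.metrics import adjusted_mutual_info_score

@torch.no_grad()
def teacher_ami(teacher, loader):
    preds, labels = [], []
    for x, y in loader:
        preds += teacher(x).argmax(dim=1).tolist()  # teacher-induced grouping
        labels += y.tolist()                        # ground-truth novel labels
    return adjusted_mutual_info_score(labels, preds)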
STARTUP enhances the grouping induced by the teacher. Even though the groupings induced by the teacher can be meaningful, one could argue that those groupings are captured in the teacher model already, and no further action to update the representation is necessary. However, we posit that STARTUP encourages the feature representations to emphasize the grouping. To verify this, we plot the t-SNE (Maaten & Hinton, 2008) of the data prior to STARTUP and after STARTUP for the two datasets in Figure 2. From the t-SNE plot, we observe more separation after doing STARTUP, signifying a representation with stronger discriminability.
Put together, this suggests that STARTUP works by (a) inducing a potentially meaningful grouping on the target domain data, and (b) training a representation that emphasizes this grouping.
FEW-SHOT TRANSFER ACROSS SIMILAR DOMAINS
Is STARTUP still useful when the gap between the base and target domains is smaller? To answer this, we tested STARTUP on two popular within-domain few-shot learning benchmarks: miniImageNet (Vinyals et al., 2016) and tieredImageNet (Ren et al., 2018). For miniImageNet, we use 20% of the novel set as the unlabeled dataset and use the same teacher as in Section 5.1. For tieredImageNet, we use 10% of the novel set as unlabeled data and use ResNet-12 (Oreshkin et al., 2018) as our model architecture. We follow the same evaluation protocol as in Section 5.1.
We report the results in Table 4. We find that on miniImageNet, STARTUP neither helps nor hurts, indicating that the representation is already well matched. On tieredImageNet, we find that the STARTUP-T variant does outperform naive transfer, but the default STARTUP variant does not. This might be because in this case the pre-trained classifier head does in fact carry useful information for the target domain and therefore should not be discarded. In sum, these results show the potential of STARTUP variants to boost few-shot transfer even when the base and target domains are close.
Additional Ablation Studies: We conducted two additional ablation studies: (a) training the student with various amounts of unlabeled data and (b) training the student without the base dataset. We show that STARTUP benefits from more unlabeled data (Appendix A.4) and that training the student without the base dataset can hurt performance on certain datasets but not all (Appendix A.5).
CONCLUSION
We investigate the use of unlabeled data from novel target domains to mitigate the performance degradation of few-shot learners due to large domain/task differences. We introduce STARTUP, a simple yet effective approach that allows few-shot learners to adapt feature representations to the target domain while retaining the class grouping induced by the base classifier. We show that STARTUP outperforms prior art on extreme cross-domain few-shot transfer.
ACKNOWLEDGEMENT
This work was funded by the DARPA LwLL program.
A APPENDIX

A.1 IMPLEMENTATION DETAILS

We implemented STARTUP by modifying the publicly available implementation of BSCD-FSL by Guo et al. (2020) (https://github.com/IBM/cdfsl-benchmark).

A.1.1 TRAINING THE TEACHER

1. miniImageNet: We train the teacher model using the code provided in the BSCD-FSL benchmark. We keep everything the same except increasing the batch size from 16 to 256.
2. tieredImageNet: We use the same setup as miniImageNet except we reduce the number of epochs to 90. We do not use any image augmentation for tieredImageNet.
3. ImageNet: We use the pretrained ResNet-18 available in PyTorch (Paszke et al., 2019).

A.1.2 TRAINING THE STUDENT

Optimization Details. Regardless of the base and novel datasets, the student model is trained for 1000 epochs, where an epoch is defined to be a complete pass over the unlabeled data. We use a batch size of 256 on the unlabeled dataset and a batch size of 256 for the base dataset if applicable. We use the SGD optimizer with momentum 0.9 and weight decay 1e-4. To pick a suitable starting learning rate, 10% of the unlabeled data and 5% of the labeled data (1% when using ImageNet as the base dataset) are set aside as our internal validation set. We select the starting learning rate by training the student with each lr ∈ {1e-1, 5e-2, 3e-2, 1e-2, 5e-3, 3e-3, 1e-3} for k epochs, where k is the smallest number of epochs that guarantees at least 50 updates to the model, and picking the learning rate that yields the lowest loss on the validation set. We reduce the learning rate by a factor of 2 when the training loss has not decreased for 20 epochs. The model that achieves the lowest loss on the internal validation set throughout the 1000 epochs of training is picked as the final model.
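A sketch of this optimization recipe in PyTorch is shown below; ReduceLROnPlateau is used here as an assumed approximation of the halving-on-plateau rule, not necessarily the exact mechanism in the released code.

import torch

def make_optimizer(student, starting_lr):
    optimizer = torch.optim.SGD(student.parameters(), lr=starting_lr,
                                momentum=0.9, weight_decay=1e-4)
    # halve the learning rate when the monitored training loss stalls for 20 epochs
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=20)
    return optimizer, scheduler

# after each epoch: scheduler.step(train_loss)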
SimCLR. Our implementation of SimCLR's loss function is based on a publicly available implementation of SimCLR (https://github.com/sthalles/SimCLR). We add the two-layer projection head on top of the embedding function $\phi$. The temperature of NT-Xent is set to 1, since there is no validation set for BSCD-FSL for hyperparameter selection, and we also use a temperature of 1 when inferring the soft labels of the unlabeled set. For the stochastic image augmentations for SimCLR, we use the augmentations defined for each novel dataset in Guo et al. (2020). These augmentations include the commonly used random resized crop, color jittering, and random horizontal flipping. For tieredImageNet and miniImageNet, we use the stochastic transformations implemented in the BSCD-FSL benchmark. We refer readers to the BSCD-FSL implementation for more details.
When training the student on the base dataset, we use the augmentation used for training the teacher for fair comparison. The batch size for SimCLR is set to 256.
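For completeness, a compact NT-Xent loss with temperature 1 is sketched below. This is a generic SimCLR-style formulation, not the specific third-party implementation referenced above.

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=1.0):
    """z1, z2: projected embeddings of two views of the same batch, shape (n, d)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # 2n x d
    sim = z @ z.t() / tau                                  # pairwise similarities
    mask = torch.eye(2 * z1.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))             # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)                   # positive: the other view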
A.1.3 TRAINING THE LINEAR CLASSIFIER

We use the implementation provided by BSCD-FSL, i.e., we train the linear classifier with the standard cross-entropy loss and the SGD optimizer. The linear classifier is trained for 100 epochs with learning rate 0.01, momentum 0.9, and weight decay 1e-4.
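A minimal version of this linear-probe training loop, with the hyperparameters stated above, might look as follows; it is an illustrative sketch on pre-extracted, frozen features, not the benchmark code.

import torch
import torch.nn.functional as F

def train_linear_probe(feats, labels, n_way, epochs=100):
    clf = torch.nn.Linear(feats.size(1), n_way)
    opt = torch.optim.SGD(clf.parameters(), lr=0.01,
                          momentum=0.9, weight_decay=1e-4)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(clf(feats), labels)  # standard cross entropy
        loss.backward()
        opt.step()
    return clf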
A.1.4 BASELINES
We use the same evaluation method for all baselines: a linear classifier. Please see A.1.3 for classifier training.
Transfer. This baseline uses the teacher model as the feature extractor. Please see A.1.1 for details.
SimCLR. This is implemented similarly to the SimCLR loss described in A.1.2.
A.1.5 T-SNE
We use the publicly available scikit-learn implementation of t-SNE (Buitinck et al., 2013). We used the default parameters except for the perplexity, which we set to 50. To speed up the experiment, we randomly sampled 25% of the data used for sampling few-shot tasks (80% of the full dataset) and ran t-SNE on this subset.
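The corresponding scikit-learn call is sketched below; `features` is a hypothetical stand-in for the extracted embeddings.

import numpy as np
from sklearn.manifold import TSNE

features = np.random.rand(1000, 512)  # stand-in for the extracted embeddings
subset = np.random.choice(len(features), size=len(features) // 4, replace=False)
embedding_2d = TSNE(perplexity=50).fit_transform(features[subset])  # defaults otherwise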
A.2 HIGH SHOTS RESULTS FOR MINIIMAGENET → BSCD-FSL
We present the results on miniImageNet → BSCD-FSL for higher shots in Table 5 and the results on ImageNet → BSCD-FSL for higher shots in Table 6. The conclusions from Section 5.1 still hold for higher shots in general.
A.3 RANDOM INITIALIZATION FOR THE STUDENT
We investigate the impact of different initialization strategies for the student on STARTUP. For this experiment, we remove SimCLR from STARTUP and consider three initialization strategies for the student: from scratch (STARTUP-Rand (no SS)), from the teacher embedding with a randomly initialized classifier (STARTUP (no SS)), and from the full teacher model (STARTUP-T (no SS)). We repeated the experiment of Section 5.1 on miniImageNet → BSCD-FSL and report the results in Table 7. We find that STARTUP (no SS) and STARTUP-T (no SS) perform similarly, whereas STARTUP-Rand (no SS) performs significantly worse than its counterparts on ChestX and ISIC. Interestingly, STARTUP-Rand (no SS) performs better than the rest on CropDisease, suggesting again that the base embedding trained on miniImageNet might not be a good starting point for this dataset, as we found in Section 5.1 (comparing SimCLR to STARTUP).
Figure 3: 5-way 5-shot classification accuracy of STARTUP for miniImageNet → ISIC with various amounts of unlabeled data. Mean and 95% confidence interval over 600 tasks are plotted.
Figure 2: t-SNE plot of EuroSAT and CropDisease prior to and after STARTUP.
Table 2: 5-way k-shot classification accuracy on ImageNet (ILSVRC 2012) → BSCD-FSL.

Methods           | ChestX k=1   | ChestX k=5   | ISIC k=1     | ISIC k=5
Transfer          | 21.97 ± 0.39 | 25.85 ± 0.41 | 30.27 ± 0.51 | 43.88 ± 0.56
STARTUP (no SS)   | 22.90 ± 0.40 | 26.74 ± 0.46 | 30.18 ± 0.56 | 44.19 ± 0.57
STARTUP           | 23.03 ± 0.42 | 27.24 ± 0.46 | 31.69 ± 0.59 | 46.02 ± 0.59

Methods           | EuroSAT k=1  | EuroSAT k=5  | CropDisease k=1 | CropDisease k=5
Transfer          | 66.08 ± 0.81 | 85.58 ± 0.48 | 74.17 ± 0.82    | 92.46 ± 0.42
STARTUP (no SS)   | 70.08 ± 0.80 | 87.12 ± 0.45 | 80.13 ± 0.77    | 94.51 ± 0.38
STARTUP           | 73.83 ± 0.77 | 89.70 ± 0.41 | 85.10 ± 0.74    | 96.06 ± 0.33
Table 3: Adjusted Mutual Information (AMI) between the grouping induced by the teacher and the ground truth labels. AMI ranges from 0 to 1, with a higher value indicating more agreement.

     | ChestX | ISIC   | EuroSAT | CropDisease
AMI  | 0.0075 | 0.0427 | 0.3079  | 0.2969
Table 4: 5-way k-shot classification accuracy on miniImageNet and tieredImageNet.

Methods            | miniImageNet k=1 | miniImageNet k=5 | tieredImageNet k=1 | tieredImageNet k=5
Transfer           | 54.18 ± 0.79     | 76.20 ± 0.64     | 57.29 ± 0.83       | 79.05 ± 0.65
STARTUP-T (no SS)  | 53.91 ± 0.79     | 76.26 ± 0.64     | 60.39 ± 0.86       | 80.14 ± 0.65
STARTUP (no SS)    | 53.74 ± 0.80     | 76.42 ± 0.63     | 55.49 ± 0.85       | 78.36 ± 0.66
STARTUP            | 54.20 ± 0.81     | 76.48 ± 0.63     | 55.33 ± 0.85       | 77.78 ± 0.67
Table 5: 5-way k-shot classification accuracy on miniImageNet → BSCD-FSL for higher shots. Mean and 95% confidence interval are reported. *Methods reported in Guo et al. (2020); despite using their code, differences in batch size and test set (80% of the original test set) have resulted in discrepancies between our Transfer and their Transfer*. (no SS) indicates removal of SimCLR.

Methods            | ChestX k=20  | ChestX k=50  | ISIC k=20    | ISIC k=50
MAML*              | 27.53 ± 0.43 | -            | 52.36 ± 0.57 | -
ProtoNet*          | 28.21 ± 1.15 | 29.32 ± 1.12 | 49.50 ± 0.55 | 51.99 ± 0.52
ProtoNet + FWT*    | 26.87 ± 0.43 | 30.12 ± 0.46 | 43.78 ± 0.47 | 49.84 ± 0.51
MetaOpt*           | 25.53 ± 1.02 | 29.35 ± 0.99 | 49.42 ± 0.60 | 54.80 ± 0.54
Transfer*          | 30.83 ± 1.05 | 36.04 ± 0.46 | 52.78 ± 0.58 | 57.34 ± 0.56
Transfer           | 31.99 ± 0.46 | 35.74 ± 0.47 | 54.28 ± 0.59 | 60.26 ± 0.56
SimCLR             | 29.62 ± 0.44 | 32.69 ± 0.42 | 47.17 ± 0.58 | 52.55 ± 0.56
STARTUP-T (no SS)  | 31.77 ± 0.44 | 35.57 ± 0.47 | 55.80 ± 0.59 | 61.15 ± 0.56
STARTUP (no SS)    | 33.02 ± 0.47 | 36.72 ± 0.47 | 57.41 ± 0.57 | 62.71 ± 0.56
STARTUP-T          | 32.79 ± 0.46 | 36.66 ± 0.47 | 56.43 ± 0.60 | 61.76 ± 0.58
STARTUP            | 33.19 ± 0.46 | 36.91 ± 0.50 | 58.63 ± 0.58 | 64.16 ± 0.58

Methods            | EuroSAT k=20 | EuroSAT k=50 | CropDisease k=20 | CropDisease k=50
Table 6: 5-way k-shot classification accuracy on ImageNet (ILSVRC 2012) → BSCD-FSL for higher shots.

Methods          | ChestX k=20  | ChestX k=50  | ISIC k=20    | ISIC k=50
Transfer         | 30.28 ± 0.45 | 32.55 ± 0.46 | 55.14 ± 0.60 | 60.99 ± 0.60
STARTUP (no SS)  | 31.98 ± 0.47 | 34.22 ± 0.47 | 55.54 ± 0.57 | 61.54 ± 0.55
STARTUP          | 32.40 ± 0.45 | 34.95 ± 0.48 | 57.06 ± 0.58 | 62.94 ± 0.56

Methods          | EuroSAT k=20 | EuroSAT k=50 | CropDisease k=20 | CropDisease k=50
Transfer         | 91.78 ± 0.33 | 93.76 ± 0.29 | 96.96 ± 0.25     | 98.10 ± 0.19
STARTUP (no SS)  | 92.60 ± 0.31 | 94.53 ± 0.26 | 97.94 ± 0.20     | 98.62 ± 0.16
STARTUP          | 94.27 ± 0.26 | 95.61 ± 0.23 | 98.55 ± 0.17     | 99.07 ± 0.13
Table 7: 5-way k-shot classification accuracy on miniImageNet → BSCD-FSL for different initialization strategies. Mean and 95% confidence interval are reported.

Methods               | ChestX k=1   | ChestX k=5   | ISIC k=1     | ISIC k=5
STARTUP-Rand (no SS)  | 22.38 ± 0.41 | 24.96 ± 0.41 | 29.76 ± 0.60 | 40.45 ± 0.59
STARTUP-T (no SS)     | 22.79 ± 0.41 | 26.03 ± 0.43 | 32.37 ± 0.61 | 45.20 ± 0.61
STARTUP (no SS)       | 22.87 ± 0.41 | 26.68 ± 0.45 | 32.24 ± 0.62 | 46.48 ± 0.61

Methods               | EuroSAT k=1  | EuroSAT k=5  | CropDisease k=1 | CropDisease k=5
Table 8: 5-way k-shot classification accuracy on miniImageNet → BSCD-FSL. We compare STARTUP to fine-tuning.

Methods      | ChestX k=1   | ChestX k=5   | ISIC k=1     | ISIC k=5
STARTUP      | 23.09 ± 0.43 | 26.94 ± 0.44 | 32.66 ± 0.60 | 47.22 ± 0.61
Fine-tuning  | 22.76 ± 0.41 | 27.05 ± 0.45 | 32.45 ± 0.61 | 45.73 ± 0.61

Methods      | EuroSAT k=1  | EuroSAT k=5  | CropDisease k=1 | CropDisease k=5
STARTUP      | 63.88 ± 0.84 | 82.29 ± 0.60 | 75.93 ± 0.80    | 93.02 ± 0.45
Fine-tuning  | 62.86 ± 0.85 | 82.36 ± 0.61 | 76.13 ± 0.78    | 93.01 ± 0.44
REFERENCES

Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake VanderPlas, Arnaud Joly, Brian Holt, and Gaël Varoquaux. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pp. 108-122, 2013.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML, 2020.

Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In ICLR, 2019a.

Zitian Chen, Yanwei Fu, Yu-Xiong Wang, Lin Ma, Wei Liu, and Martial Hebert. Image deformation meta-networks for one-shot learning. In CVPR, 2019b.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248-255, 2009.

Guneet S. Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. In ICLR, 2020.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, pp. 1126-1135, 2017.

Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In NeurIPS, pp. 9516-9527, 2018.

Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In CVPR, pp. 4367-4375, 2018.

Spyros Gidaris and Nikos Komodakis. Generating classification weights with GNN denoising autoencoders for few-shot learning. In CVPR, pp. 21-30, 2019.

Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In ICLR, 2018.

Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick Pérez, and Matthieu Cord. Boosting few-shot visual learning with self-supervision. In ICCV, pp. 8059-8068, 2019.

Yunhui Guo, Noel C. F. Codella, Leonid Karlinsky, John R. Smith, Tajana Rosing, and Rogerio Feris. A new benchmark for evaluation of cross-domain few-shot learning. In ECCV, 2020.

Bharath Hariharan and Ross Girshick. Low-shot visual recognition by shrinking and hallucinating features. In ICCV, pp. 3018-3027, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pp. 770-778, 2016.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, pp. 9729-9738, 2020.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. CyCADA: Cycle-consistent adversarial domain adaptation. In ICML, pp. 1989-1998, 2018.

Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Cross attention network for few-shot classification. In NeurIPS, pp. 4003-4014, 2019.

Issam H. Laradji and Reza Babanezhad. M-ADDA: Unsupervised domain adaptation with deep metric learning. In Domain Adaptation for Visual Understanding, pp. 17-31. Springer, 2020.

Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In CVPR, pp. 10657-10665, 2019.

Xinzhe Li, Qianru Sun, Yaoyao Liu, Qin Zhou, Shibao Zheng, Tat-Seng Chua, and Bernt Schiele. Learning to self-train for semi-supervised few-shot classification. In NeurIPS, pp. 10276-10286, 2019.

Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, and Yi Yang. Learning to propagate labels: Transductive propagation network for few-shot learning. In ICLR, 2019.

Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional adversarial domain adaptation. In NeurIPS, pp. 1640-1650, 2018.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.

Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In CVPR, pp. 6707-6717, 2020.

Alex Nichol and John Schulman. Reptile: a scalable metalearning algorithm.

Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, pp. 69-84. Springer, 2016.

Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. TADAM: Task dependent adaptive metric for improved few-shot learning. In NeurIPS, pp. 721-731, 2018.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS 32, pp. 8024-8035, 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.

Hang Qi, Matthew Brown, and David G. Lowe. Low-shot learning with imprinted weights. In CVPR, pp. 5822-5830, 2018.

Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.

Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, and Richard S. Zemel. Meta-learning for semi-supervised few-shot classification. In ICLR, 2018.

Pau Rodríguez, Issam Laradji, Alexandre Drouin, and Alexandre Lacoste. Embedding propagation: Smoother manifold for few-shot classification. In ECCV, 2020.

Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In ICLR, 2019.

Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In NeurIPS, pp. 4077-4087, 2017.

Jong-Chyi Su, Subhransu Maji, and Bharath Hariharan. When does self-supervision improve few-shot learning? In ECCV, 2020.

Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. Meta-transfer learning for few-shot learning. In CVPR, pp. 403-412, 2019.

Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, pp. 1199-1208, 2018.

Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, and Ming-Hsuan Yang. Cross-domain few-shot classification via learned feature-wise transformation. In ICLR, 2020.

Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In CVPR, pp. 7167-7176, 2017.

Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11:2837-2854, 2010.

Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In NeurIPS, pp. 3630-3638, 2016.

Bram Wallace and Bharath Hariharan. Extending and analyzing self-supervised learning across domains. In ECCV, 2020.

Mei Wang and Weihong Deng. Deep visual domain adaptation: A survey. Neurocomputing, 312:135-153, 2018.

Yikai Wang, Chengming Xu, Chen Liu, Li Zhang, and Yanwei Fu. Instance credibility inference for few-shot learning. In CVPR, pp. 12836-12845, 2020.

Yu-Xiong Wang, Ross Girshick, Martial Hebert, and Bharath Hariharan. Low-shot learning from imaginary data. In CVPR, pp. 7278-7286, 2018.

Garrett Wilson and Diane J. Cook. A survey of unsupervised deep domain adaptation. ACM Transactions on Intelligent Systems and Technology (TIST), 11(5):1-46, 2020.

Zhirong Wu, Yuanjun Xiong, Stella X. Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, pp. 3733-3742, 2018.

Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. Self-training with noisy student improves ImageNet classification. In CVPR, pp. 10687-10698, 2020.

Ruijia Xu, Guanbin Li, Jihan Yang, and Liang Lin. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In ICCV, pp. 1426-1435, 2019.

I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi-supervised learning for image classification. arXiv preprint arXiv:1905.00546, 2019.

Zhongjie Yu, Lin Chen, Zhongwei Cheng, and Jiebo Luo. TransMatch: A transfer-learning scheme for semi-supervised few-shot learning. In CVPR, pp. 12856-12864, 2020.

Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. In ECCV, pp. 649-666. Springer, 2016.
As with all learning techniques, it should perform better with more unlabeled data. To investigate how the amount of unlabeled examples impacts STARTUP, we repeated the miniImageNet → ISIC experiments in 5.1 with various amount of unlabeled data (20% of the dataset. A.4 IMPACT OF DIFFERENT AMOUNT OF UNLABELED EXAMPLES STARTUP uses unlabeled data to adapt feature representations to novel domains. examples) is set aside for evaluation. The verdict is clear -STARTUP benefits from more unlabeled data (Figure 3A.4 IMPACT OF DIFFERENT AMOUNT OF UNLABELED EXAMPLES STARTUP uses unlabeled data to adapt feature representations to novel domains. As with all learning techniques, it should perform better with more unlabeled data. To investigate how the amount of un- labeled examples impacts STARTUP, we repeated the miniImageNet → ISIC experiments in 5.1 with various amount of unlabeled data (20% of the dataset (2003 examples) is set aside for evaluation). The verdict is clear -STARTUP benefits from more unlabeled data (Figure 3).
But in many cases, the base dataset may not be available. Removing the cross entropy loss on the base dataset when training the student essentially boils down to a fine-tuning paradigm. For miniImageNet → BSCD-FSL (Table 8), we find no discernible difference between all datasets except on ISIC where we observe significant degradation in 5-shot performance. A.5 TRAINING THE STUDENT WITHOUT THE BASE DATASET STARTUP requires joint training on both the base dataset as well as the target domain. STARTUP-Randno SSA.5 TRAINING THE STUDENT WITHOUT THE BASE DATASET STARTUP requires joint training on both the base dataset as well as the target domain. But in many cases, the base dataset may not be available. Removing the cross entropy loss on the base dataset when training the student essentially boils down to a fine-tuning paradigm. For miniImageNet → BSCD-FSL (Table 8), we find no discernible difference between all datasets except on ISIC where we observe significant degradation in 5-shot performance. STARTUP-Rand (no SS)
A.6 STARTUP ON DIFFERENT UNLABELED DATA

We consider the ImageNet → CD-FSL experiment. We perform STARTUP on unlabeled data different from the target domain and present the result in Table 9. We found that it is crucial that the unlabeled data to perform STARTUP on should be from the target domain of interest.

Table 9:
Methods              ChestX        ISIC          EuroSAT       CropDisease
STARTUP-ChestX       23.03 ± 0.42  31.02 ± 0.55  65.20 ± 0.87  70.36 ± 0.86
STARTUP-EuroSAT      22.23 ± 0.38  30.38 ± 0.59  73.83 ± 0.77  70.40 ± 0.86
STARTUP-CropDisease  22.51 ± 0.40  30.59 ± 0.55  62.56 ± 0.83  85.10 ± 0.74

Methods              ChestX        ISIC          EuroSAT       CropDisease
STARTUP-ChestX       27.24 ± 0.46  44.14 ± 0.59  83.76 ± 0.56  89.95 ± 0.49
STARTUP-EuroSAT      25.15 ± 0.43  43.90 ± 0.58  89.70 ± 0.41  88.78 ± 0.54
STARTUP-CropDisease  25.21 ± 0.44  44.34 ± 0.60  83.13 ± 0.54  96.06 ± 0.33
246,210,276 | POST-TRAINING DETECTION OF BACKDOOR ATTACKS FOR TWO-CLASS AND MULTI-ATTACK SCENARIOS | Backdoor attacks (BAs) are an emerging threat to deep neural network classifiers. A victim classifier will predict to an attacker-desired target class whenever a test sample is embedded with the same backdoor pattern (BP) that was used to poison the classifier's training set. Detecting whether a classifier is backdoor attacked is not easy in practice, especially when the defender is, e.g., a downstream user without access to the classifier's training set. This challenge is addressed here by a reverse-engineering defense (RED), which has been shown to yield state-of-the-art performance in several domains. However, existing REDs are not applicable when there are only two classes or when multiple attacks are present. These scenarios are first studied in the current paper, under the practical constraints that the defender neither has access to the classifier's training set nor to supervision from clean reference classifiers trained for the same domain. We propose a detection framework based on BP reverse-engineering and a novel expected transferability (ET) statistic. We show that our ET statistic is effective using the same detection threshold, irrespective of the classification domain, the attack configuration, and the BP reverse-engineering algorithm that is used. The excellent performance of our method is demonstrated on six benchmark datasets. Notably, our detection framework is also applicable to multi-class scenarios with multiple attacks. Code is available at https://github.com/zhenxianglance/2ClassBADetection. | [] | POST-TRAINING DETECTION OF BACKDOOR ATTACKS FOR TWO-CLASS AND MULTI-ATTACK SCENARIOS
Zhen Xiang
School of EECS
Pennsylvania State University
David J Miller
School of EECS
Pennsylvania State University
George Kesidis
School of EECS
Pennsylvania State University
POST-TRAINING DETECTION OF BACKDOOR ATTACKS FOR TWO-CLASS AND MULTI-ATTACK SCENARIOS
Published as a conference paper at ICLR 2022
Backdoor attacks (BAs) are an emerging threat to deep neural network classifiers. A victim classifier will predict to an attacker-desired target class whenever a test sample is embedded with the same backdoor pattern (BP) that was used to poison the classifier's training set. Detecting whether a classifier is backdoor attacked is not easy in practice, especially when the defender is, e.g., a downstream user without access to the classifier's training set. This challenge is addressed here by a reverse-engineering defense (RED), which has been shown to yield state-of-the-art performance in several domains. However, existing REDs are not applicable when there are only two classes or when multiple attacks are present. These scenarios are first studied in the current paper, under the practical constraints that the defender neither has access to the classifier's training set nor to supervision from clean reference classifiers trained for the same domain. We propose a detection framework based on BP reverse-engineering and a novel expected transferability (ET) statistic. We show that our ET statistic is effective using the same detection threshold, irrespective of the classification domain, the attack configuration, and the BP reverse-engineering algorithm that is used. The excellent performance of our method is demonstrated on six benchmark datasets. Notably, our detection framework is also applicable to multi-class scenarios with multiple attacks. Code is available at https://github.com/zhenxianglance/2ClassBADetection.
INTRODUCTION
Despite the success of deep neural network (DNN) classifiers in many research areas, their vulnerabilities have recently been exposed Xu et al. (2020); Miller et al. (2020). One emerging threat to DNN classifiers is a backdoor attack (BA) Li et al. (2020). Here, a classifier will predict to the attacker's target class when a test sample is embedded with the same backdoor pattern (BP) that was used to poison the classifier's training set. On the other hand, the classifier's clean test set accuracy (i.e., on samples without an embedded BP) is largely uncompromised Gu et al. (2019); Chen et al. (2017); Liu et al. (2018b), which makes the attack difficult to detect.

Early BA defenses aim to cleanse the possibly poisoned training set of the victim classifier Tran et al. (2018); Chen et al. (2018). But deployment of these defenses is not feasible when the defender is the user/consumer of the classifier, without access to its training set or to any prior knowledge of the backdoor pattern Wang et al. (2020). A major defense approach suitable for this scenario is a reverse-engineering defense (RED). In general, a RED treats each class as a putative BA target class and trial-reverse-engineers a BP. Then, detection statistics are derived from the estimated pattern for each putative target class. If there is a BA, the pattern estimated for the true BA target class will likely be correlated with the true BP used by the attacker, such that the associated statistics will likely be anomalous referenced against the statistics for the other (non-BA) classes Wang et al. (2019); Xiang et al. (2020). Notably, RED-based anomaly detection does not require supervision from clean classifiers trained for the same domain like, e.g., Kolouri et al. (2020).

Although existing REDs have achieved leading performance in many practical detection tasks, a fundamental limitation still remains. RED-based anomaly detection typically requires estimating a null distribution used to assess anomalies, with the number of detection statistics (samples) used to estimate this null a function of the number of classes K. Wang et al. (2019) uses O(K) statistics, and Xiang et al. (2020) uses even more - O(K^2) statistics. These methods are not applicable for domains with K = 2, e.g. sentiment classification Gao et al. (2019b), disease diagnosis Li et al. (2014), etc., because there are insufficient statistics for estimating the null. More generally, their accuracy is affected when the number of classes is small (K > 2, but small).
In this paper, we focus on BA detection for the two-class (and possibly multi-attack) scenario, under two practical constraints: a) the defender has no access to the training set of the victim classifier (or to any BP used by the attacker); b) supervision from clean classifiers trained for the same domain is not available. This scenario is clearly more challenging than the one considered by existing REDs, for which there is at most one BA target class and a sufficient number of non-target classes to learn an accurate null. As the first to address this difficult problem, our main contributions are as follows:
• We propose a detection framework that involves BP reverse-engineering in order to address constraint a) above. However, instead of performing anomaly detection as existing REDs do, we process each class independently using a novel detection statistic dubbed expected transferability (ET). This allows our method to be applicable to the two-class, multi-attack scenario (as well as to the multi-class and (possibly) multi-attack scenario).
• We show that for ET there is a large range of effective choices for the detection threshold, which commonly contains a particular threshold value effective for detecting BAs irrespective of the classification domain or the particulars of the BA. This common threshold is mathematically derived and is based on properties of general classification tasks that have been verified empirically by many existing works; thus, constraint b) that no domain-specific supervision is needed is well-addressed.
• Our ET statistic is obtained by BP reverse-engineering and does not strongly depend on the type of BP, or on the particular RED objective function and optimization technique used to reverse-engineer putative BPs. Thus, our detection framework can incorporate existing reverse-engineering techniques and potentially their future advances.
• We show the effectiveness of our detection framework on six popular benchmark image datasets.
RELATED WORK
Backdoor Attack (BA). For a classification task with sample space X and label space C, a BA aims to have the victim classifier f : X → C predict to the target class t ∈ C whenever a test sample x ∈ X is embedded with the backdoor pattern (BP) Gu et al. (2019). BAs were initially proposed for image classification, but have also been proposed for other domains and tasks Xiang et al. (2021a); Zhai et al. (2021); Li et al. (2022). While we focus on images experimentally, our detector is also applicable to other domains. Major image BPs include: 1) an additive perturbation v embedded by

x̃ = [x + v]_c,    (1)

where ||v||_2 is small (for imperceptibility) and [·]_c denotes clipping to the valid pixel range; and 2) a patch u specified by a binary mask m and embedded by

x̃ = (1 − m) ⊙ x + m ⊙ u,    (2)

where ||m||_1 is small (for imperceptibility) and ⊙ represents element-wise multiplication Gu et al. (2019); Wang et al. (2019); Xiang et al. (2021c). A minimal sketch of these two embedding mechanisms is given at the end of this section. Typically, BAs are launched by inserting into the training set a small set of samples labeled to the target class and embedded with the same BP that will be used during testing. Such data poisoning can be achieved e.g. when DNN training is outsourced to parties that are possibly malicious Gu et al. (2019). BAs are also stealthy since they do not degrade the classifier's accuracy on clean test samples; hence they are not detectable based on validation set accuracy.

Backdoor Defense. Some BA defenses aim to separate backdoor training samples from clean ones during the training process Tran et al. (2018); Huang et al. (2022). However, their deployment is not feasible in many practical scenarios where the defender is a downstream user who has no access to the training process. A family of pruning-based methods "removes" the backdoor mapping by inspecting neuron activations (and removing some neurons) Liu et al. (2018a); Li et al. (2021). These methods cause non-negligible degradation to classification accuracy, they may remove neurons even when there is no backdoor, and moreover they do not make explicit detection inferences. Kolouri et al. (2020) and Xu et al. (2021) train a binary classifier to classify models as "with" and "without" BAs; but such training requires clean classifiers from the same domain, or a significant number of labeled samples and heavy computation to train these clean classifiers. Chou et al. (2018) and Gao et al. (2019a) can detect triggering of a backdoor by an observed test sample. Unlike these methods, the existing RED-based detector (introduced next) reliably detects backdoors without the need to observe any test samples.

Reverse-Engineering Defense (RED). RED is a family of BA defenses that do not need access to the classifier's training set, nor to any prior knowledge of the BP that may be used in an attack.
Without knowing whether the classifier has been backdoor-attacked, a RED trial-reverse-engineers a BP for each putative BA target class (or possibly for each (source, target) BA class pair) using a small, clean dataset independently collected by the defender. Then, for each class, a detection statistic is derived from the estimated pattern and used to infer if the class is a BA target class or not. For example, to detect BAs with an additive perturbation BP (Eq. (1)), Xiang et al. (2020) finds, for each putative target class, the perturbation with the minimum l_2 norm that induces a large misclassification fraction to this class when added to images from another class (the source class). Since BPs embedded by Eq. (1) usually have a very small l_2 norm for imperceptibility, the pattern estimated for the true BA target class (if a BA exists) will likely have a much smaller l_2 norm than for non-target classes. Thus, unsupervised anomaly detection based on these perturbation sizes is performed - when there is one BA target class and a sufficient number of non-target classes, the statistic for the BA target class will likely be detected as an anomaly compared with the others. Apart from the RED above proposed by Xiang et al. (2020), existing REDs also include Neural Cleanse, proposed by Wang et al. (2019).
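To make the two BP embedding mechanisms in Eqs. (1) and (2) concrete, below is a minimal Python/NumPy sketch (illustrative only, not code from the paper; the array shapes, the [0, 1] pixel range, and the particular perturbation/patch choices are assumptions):

import numpy as np

def embed_additive(x, v):
    # Eq. (1): add the perturbation v, then clip to the valid pixel range.
    return np.clip(x + v, 0.0, 1.0)

def embed_patch(x, m, u):
    # Eq. (2): replace the pixels selected by binary mask m with patch u.
    return (1.0 - m) * x + m * u

# Example: a 32x32x3 image, a faint global perturbation, and a 5x5 patch.
x = np.random.rand(32, 32, 3)
v = 0.03 * np.sign(np.random.randn(32, 32, 3))   # small-magnitude perturbation
m = np.zeros((32, 32, 3)); m[:5, :5, :] = 1.0    # small l1-norm mask
u = np.ones((32, 32, 3))                         # unicolor (white) patch
x_bd_additive = embed_additive(x, v)
x_bd_patch = embed_patch(x, m, u)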
METHOD
Our main goals are detecting whether a given classifier is backdoor attacked or not and, if so, finding out all the BA target classes. We focus on a practical "post-training" scenario: S1) the defender has no access to the classifier's training set, nor any prior knowledge of the true BP used by an attacker; S2) there are no clean classifiers trained for the same domain for reference, and the defender is not capable of training such clean classifiers; S3) the classification domain has two classes, both of which can be BA targets. While we focus on two-class scenarios in this section, our method is more generally applicable to multi-class scenarios, with an arbitrary number of attacks (see experiments in Sec. 4.4).

To address S1, our detection framework involves BP reverse-engineering using a small, clean dataset independently collected by the defender, like existing REDs. To address S3, unlike existing REDs that perform anomaly detection involving statistics from all classes, we inspect each class independently using a novel detection statistic called expected transferability (ET), which can be empirically estimated for each class independently. To address S2, we show that ET possesses a theoretically-grounded detection threshold value for distinguishing BA target classes from non-target classes, one which depends neither on the domain nor on the attack configuration. This is very different from existing REDs, for which a suitable detection threshold for their proposed statistics may be both domain- and attack-dependent. For example, the range of the l_1 norm of the estimated mask used by Wang et al. (2019) depends on the image size. The practical import here is that the detection threshold is a hyperparameter, but setting this threshold in a supervised fashion (e.g. to achieve a specified false positive rate on a group of clean classifiers) is generally infeasible due to S2. Use of ET thus obviates the need for such hyperparameter setting.

In the following, we define ET in Sec. 3.1; the constant threshold on ET for BA detection is derived in Sec. 3.2; and our detection procedure, which consists of ET estimation and an inference step, is given in Sec. 3.3. Note that our detection framework is effective for various types of BPs (as will be shown experimentally). Solely for clarity, in this section we focus on BAs with additive perturbation BPs (Eq. (1)). A similar derivation for BPs embedded by Eq. (2) is deferred to Apdx. C.
EXPECTED TRANSFERABILITY (ET)
Consider a classifier f : X → C to be inspected, with category space C = {0, 1} and continuous sample distribution P_i on X for class i ∈ C. For any x ∈ X from any class, the optimal solution to

minimize_v ||v||_2  subject to  f(x + v) ≠ f(x)    (3)

is defined as v*(x). In practice, (3) can be viewed as a typical BP reverse-engineering problem and solved using methods in existing REDs like Xiang et al. (2020). It can also be practically solved by creating an adversarial example for x using methods in, e.g., Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017). We present the following definition for the set of practical solutions to (3).

Definition 3.1. (ε-solution set) For any sample x from any class, regardless of the method being used, the ε-solution set to problem (3) is defined by

V_ε(x) ≜ {v ∈ X : ||v||_2 − ||v*(x)||_2 ≤ ε, f(x + v) ≠ f(x)},    (4)

where ε > 0 is the "quality gap" of practical solutions, which is usually small for existing methods.
A practical solution to (3) for sample x may or may not cause a misclassification when embedded in another sample y from the same class. In the following, we first present the definition regarding such a "transferability" property. Then, we define the ET statistic for any class i ∈ C.

Definition 3.2. (Transferable set) The transferable set for any sample x and ε > 0 is defined by

T_ε(x) ≜ {y ∈ X : f(y) = f(x), ∃v ∈ V_ε(x) s.t. f(y + v) ≠ f(y)}.    (5)

Definition 3.3. (ET statistic) For any class i ∈ C = {0, 1} and ε > 0, considering i.i.d. random samples X, Y ∼ P_i, the ET statistic for class i is defined by ET_{i,ε} ≜ E[P(Y ∈ T_ε(X) | X)].
USING ET FOR BA DETECTION
Here, we show that for any i ∈ C = {0, 1} and small ε, we will likely have ET_{i,ε} > 1/2 when class (1 − i) is a BA target class, and ET_{i,ε} ≤ 1/2 otherwise. Note that this constant threshold does not rely on any specific data domain, classifier architecture, or attack configuration; even for BPs embedded by Eq. (2), the same threshold can be obtained following a similar derivation (see Apdx. C). We first present the following theorem showing the connection between ET and the threshold 1/2 regardless of the presence of any BA. Then, we discuss the attack and non-attack cases, respectively.

Theorem 3.1. For any class i ∈ C and ε > 0:

ET_{i,ε} = 1/2 + (1/2)(P_{MT,i} − P_{NT,i}),    (6)
with P_{MT,i} a "mutual-transfer probability" and P_{NT,i} a "non-transfer probability", defined by

P_{MT,i} ≜ P(Y ∈ T_ε(X), X ∈ T_ε(Y))  and  P_{NT,i} ≜ P(Y ∉ T_ε(X), X ∉ T_ε(Y)),    (7)

respectively, where X and Y are i.i.d. random samples following distribution P_i for class i.

1) Non-attack case: class (1 − i) is not a BA target class. We first focus on P_{NT,i}.

Property 3.1. For general two-class domains in practice and small ε, if class (1 − i) is not a BA target class, P_{NT,i} for class i will likely be larger than 1/2 (see Sec. 4.3 for empirical support).

For any i ∈ C, P_{NT,i} is the probability that two independent samples from class i are mutually "not transferable". For such a pair of samples, the event associated with P_{NT,i} is that the pattern estimated for one (by solving (3)) does not induce the other to be misclassified, and vice versa. Accordingly, if we solve a problem similar to (3) but requiring a common v that induces both samples to be misclassified, the solution should have a larger norm than the solution to (3) for each of them. Property 3.1 has been verified by many existing works for general classification tasks commonly using highly non-linear classifiers. For example, the universal adversarial perturbation studied by Moosavi-Dezfooli et al. (2017) can be viewed as the solution to problem (3) (in the absence of BA) for a group of samples instead of one. Compared with the minimum sample-wise perturbation required for each individual to be misclassified, the minimum universal perturbation required for high group misclassification typically has a much larger norm. Similar empirical results have also been shown by Xiang et al. (2020) and inspired their proposed RED. Despite the evidence in existing works, we also verify this property experimentally in Sec. 4.3.

Next, we focus on P_{MT,i}. Note that P_{MT,i} is upper bounded by 1 − P_{NT,i}; thus it will likely be smaller than 1/2 (and even possibly close to 0) for the non-attack case, based on Property 3.1. The asymptotic behavior of P_{MT,i} for small ε is characterized by the following theorem. Note that in practice, a small ε is not difficult to achieve, since a solution to problem (3) obtained using, e.g., algorithms for generating adversarial samples Szegedy et al. (2014) usually has a small norm.
Theorem 3.2. For X and Y i.i.d. following distribution P_i, P_{MT,i} → 0 as ε → 0.

Algorithm 1 BA detection using ET statistics.
1: Input: classifier f : X → C for inspection; clean dataset D_i = {x_1^(i), ..., x_{N_i}^(i)} for each i ∈ C.
2: Initialization: attacked = False; BA_targets = ∅.
3: for each putative target class t ∈ C = {0, 1} do
4:     Step 1: obtain the empirical estimate ÊT_{i,ε} using D_i for i = 1 − t.
5:     for n = 1 : N_i do
6:         T̂(x_n^(i)) = ∅; converge = False
7:         while not converge do
8:             Obtain an empirical solution v̂(x_n^(i)) to problem (3) using random initialization.
9:             T̂(x_n^(i)) ← T̂(x_n^(i)) ∪ {x_m^(i) | m ∈ {1, ..., N_i} \ n, f(x_m^(i) + v̂(x_n^(i))) ≠ f(x_m^(i))}
10:            if T̂(x_n^(i)) unchanged for τ iterations then
11:                converge ← True
12:        p̂_n^(i) = |T̂(x_n^(i))| / (N_i − 1)
13:     ÊT_{i,ε} = (1/N_i) Σ_{n=1}^{N_i} p̂_n^(i)
14:     Step 2: determine if class t is a BA target class or not.
15:     if ÊT_{i,ε} > 1/2 then attacked ← True; BA_targets ← BA_targets ∪ {t}
16: Output: attacked; BA_targets.
In summary, for the non-attack case with small ε, we will likely have P_{NT,i} ≥ 1/2 ≥ P_{MT,i}; and thus, ET_{i,ε} ≤ 1/2 (based on Thm. 3.1) - this will be shown by our experiments. Moreover, in Apdx. B, for a simplified (yet still relatively general) domain and a nearest prototype classifier, we show that 1/2 upper-bounds the ET statistic when there is no BA.

2) Attack case: class (1 − i) is the target class of a successful BA. Suppose v_0 is the BP for this attack, such that for any X ∼ P_i, f(X) = i, while f(X + v_0) ≠ f(X). Intuitively, P_{MT,i} will be large and possibly close to 1 because, different from the non-attack case, there is a special pattern - the BP v_0 - that could likely be an element of V_ε(X) and V_ε(Y) simultaneously, for X and Y i.i.d. following P_i. In this case, Y ∈ T_ε(X) and X ∈ T_ε(Y) jointly hold (i.e. mutually transferable) by Definition 3.2. Accordingly, P_{NT,i}, which is upper bounded by 1 − P_{MT,i}, will likely be small and possibly close to 0; then, we will have P_{MT,i} > P_{NT,i} and consequently, ET_{i,ε} > 1/2 by Thm. 3.1. Beyond the intuitive analysis above, the following theorem gives a guaranteed large ET statistic (being exactly 1) in the attack case when the backdoor pattern has a (sufficiently) small norm (which is in fact desired in order to have imperceptibility of the attack).
Theorem 3.3. If class (1 − i) is the target class of a successful BA with BP v_0 such that f(X + v_0) ≠ f(X) for all X ∼ P_i, and if ||v_0||_2 ≤ ε, we will have P(V_ε(X) ∩ V_ε(Y) ≠ ∅) = 1 for X and Y i.i.d. following P_i; and furthermore, ET_{i,ε} = 1.
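The identity in Thm. 3.1 is a direct consequence of the inclusion-exclusion rule (see Apdx. A.1) and holds exactly for any transfer relation. The following minimal Python sketch (illustrative, not from the paper; the random relation and sample size are arbitrary) checks the identity empirically:

import numpy as np

rng = np.random.default_rng(0)
n = 200
# trans[a, b] = 1 means "sample b is in the transferable set of sample a".
trans = (rng.random((n, n)) < 0.3).astype(int)
np.fill_diagonal(trans, 0)

pairs = [(a, b) for a in range(n) for b in range(n) if a != b]
et = np.mean([trans[a, b] for a, b in pairs])                              # E[P(Y in T(X) | X)]
p_mt = np.mean([trans[a, b] * trans[b, a] for a, b in pairs])              # mutual transfer
p_nt = np.mean([(1 - trans[a, b]) * (1 - trans[b, a]) for a, b in pairs])  # neither transfers
assert abs(et - (0.5 + 0.5 * (p_mt - p_nt))) < 1e-12                       # Eq. (6) holds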
DETECTION PROCEDURE
Our detection procedure is summarized in Alg. 1. Basically, for each putative target class t ∈ C = {0, 1}, we estimate the ET statistic ÊT_{i,ε} (with a "hat" representing empirical estimation) for class i = 1 − t, and claim a detection if ÊT_{i,ε} > 1/2. In particular, the core of estimating ÊT_{i,ε} is to estimate P(Y ∈ T_ε(X) | X = x_n^(i)) for each clean sample x_n^(i) ∈ D_i used for detection (lines 6-12, Alg. 1). To do so, we propose to find, for each x_n^(i) ∈ D_i, the subset T̂(x_n^(i)) which contains all samples in D_i \ x_n^(i) belonging to the transferable set T_ε(x_n^(i)) of sample x_n^(i). Then, P(Y ∈ T_ε(X) | X = x_n^(i)) can be estimated by p̂_n^(i) = |T̂(x_n^(i))| / (|D_i| − 1) (line 12, Alg. 1). However, by Def. 3.2, a sample y is in the transferable set T_ε(x) of a sample x as long as there exists a practical solution to problem (3) (with some intrinsic quality gap ε) that induces y to be misclassified as well. Thus, it is insufficient to decide whether or not a sample is in the transferable set of another sample according to merely one solution realization to problem (3). To address this, for each x_n^(i), we repeatedly solve problem (3) with random initialization and update T̂(x_n^(i)) accordingly (lines 8-9, Alg. 1). Such repetition stops when T̂(x_n^(i)) stays unchanged for some τ iterations. This procedure is summarized as the "while" loop in Alg. 1, which is guaranteed to converge in (N_i − 1) × τ iterations. Finally, we obtain the estimated ET by averaging p̂_n^(i) - the empirical estimate of P(Y ∈ T_ε(X) | X = x_n^(i)) - over all samples in D_i. A compact sketch of this estimation loop is given below.
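The sketch is illustrative Python, not code from the paper; reverse_engineer stands for any routine solving problem (3) with random initialization, e.g. those of Xiang et al. (2020) or Carlini & Wagner (2017), and is an assumed callable:

import numpy as np

def estimate_et(f, D, reverse_engineer, tau=4):
    """Empirically estimate ET for the class whose clean samples are in D.

    f:                returns the predicted label of a single sample.
    D:                list of clean samples from one class.
    reverse_engineer: returns a randomly initialized practical solution v
                      to problem (3) for a sample x, i.e., f(x + v) != f(x).
    """
    N = len(D)
    p = np.zeros(N)
    for n in range(N):
        transfer_set = set()
        unchanged = 0
        while unchanged < tau:  # "patience"-based convergence (Alg. 1)
            v = reverse_engineer(f, D[n])
            before = len(transfer_set)
            for m in range(N):
                if m != n and f(D[m] + v) != f(D[m]):
                    transfer_set.add(m)
            unchanged = unchanged + 1 if len(transfer_set) == before else 0
        p[n] = len(transfer_set) / (N - 1)
    return p.mean()

# Detection rule for a putative target class t (two-class case): estimate ET on
# the clean samples of the other class and flag class t if the estimate exceeds 1/2:
# attacked = estimate_et(f, D[1 - t], reverse_engineer) > 0.5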
Our detection framework has the following generalization capabilities. 1) BP embedding mechanism. As mentioned before, our detection rule can be adapted to BPs embedded by Eq. (2), with the same constant threshold 1/2 on ET and only a little modification to the while loop in Alg. 1 (see Apdx. C.3). 2) BP reverse-engineering algorithm. We do not limit our detector to any specific algorithm when solving problem (3) (line 8 in Alg. 1) - existing algorithms proposed by, e.g., Xiang et al. (2020); Carlini & Wagner (2017), and even future BP reverse-engineering algorithms can be used.
3) Adoption for the multi-class scenario. When there are multiple classes, each of which can possibly be a BA target, Alg. 1 can still be used for detection by, for each putative target class t ∈ C, treating all the classes other than t as a super-class and estimating ET on ∪_{i∈C\t} D_i. In our experiments (next), we will evaluate our detection framework considering a variety of these extensions.

EXPERIMENTS

Attack configuration. Like most related works, we mainly focus on classical BAs launched by poisoning the classifier's training set with a small set of samples embedded with a BP and labeled to some target class Gu et al. (2019). Effectiveness of our detector against another type of clean-label BA is shown in Apdx. E. For each 2-class domain generated from CIFAR-10, CIFAR-100, STL-10, and TinyImageNet, we create two attack instances, one for BA with additive perturbation BP embedded by Eq. (1), and the other for BA with patch replacement BP embedded by Eq. (2). For each 2-class domain generated from FMNIST and MNIST, we create one attack instance with additive perturbation BP. For convenience, the six ensembles of attack instances with additive perturbation BP and for 2-class domains generated from CIFAR-10, CIFAR-100, STL-10, TinyImageNet, FMNIST, and MNIST are denoted as A1-A6 respectively. The four ensembles of attack instances with patch replacement BP and for 2-class domains generated from CIFAR-10, CIFAR-100, STL-10, and TinyImageNet are denoted as A7-A10 respectively. The BPs used in our experiments include many popular ones in the BA literature - examples of some BPs and images embedded with them are shown in Fig. 1 (details for generating these BPs are deferred to Apdx. D.3).

We consider 2-class scenarios where both classes can possibly be a BA target class. For each attack instance in ensembles A1, A2, A3, A7, A8, and A10, we create two attacks, each with one of the two classes being the BA target class. For each instance in ensembles A4, A5, A6, and A9, we create one attack with the second class being the BA target class. For each of these attacks, the BP is randomly selected from the candidate BPs (part of which are shown in Fig. 1), with the specified BP type for the ensemble with which the attack is associated. However, to avoid confusion for learning the backdoor mapping during training, for all 2-attack instances with additive perturbation BPs, we ensure that the BPs for the two attacks have different shapes (e.g., two "X" BPs are not allowed). Similarly, for all 2-attack instances where the two attacks both choose to use the unicolor patch BP (Fig. 1e), we ensure that the colors for the two BPs are significantly different. Other attack configurations, including e.g. the poisoning rate for each attack, are detailed in Apdx. D.4.

Training configurations. We train one classifier for each attack instance using the poisoned training set. For each 2-class domain, we also train a clean classifier to evaluate false detections. We denote the six ensembles of clean instances for datasets CIFAR-10, CIFAR-100, STL-10, TinyImageNet, FMNIST, and MNIST as C1-C6 respectively. For classifier training, we consider a variety of DNN architectures (with two output neurons, one for each of the two classes); the architecture used for the 2-class domains associated with each dataset, together with other training details, is deferred to the appendix.

Detection configurations. For BP reverse-engineering (line 8 of Alg. 1), we use the algorithm proposed by Xiang et al. (2020) for additive perturbation BPs (problem (3)) and the algorithm proposed by Wang et al. (2019) for patch replacement BPs (problem (46) in Apdx. C.1). For convenience, we denote detection configurations with these two algorithms as RE-AP and RE-PR, respectively. Details for these two algorithms are both introduced in the original papers and reviewed in Apdx. D.6. Irrespective of the BP reverse-engineering algorithm, we use only 20 clean images per class (similar to most existing REDs) for detection. Finally, we set the "patience" parameter for determination of convergence in Alg. 1 to τ = 4 - this choice is independent of the presence of BA; a larger τ will not change the resulting ET much, but only increase the execution time.

Table 1: Detection accuracy for RE-AP and RE-PR on attack ensembles A1-A10, and on clean ensembles C1-C6, using the common threshold 1/2 on the ET statistic. "n/a" represents "not applicable".
       A1     A2     A3     A4     A5     A6     A7     A8     A9     A10    C1     C2     C3     C4     C5     C6
RE-AP  45/45  18/20  16/20  17/20  20/20  20/20  n/a    n/a    n/a    n/a    45/45  20/20  20/20  20/20  20/20  20/20
RE-PR  n/a    n/a    n/a    n/a    n/a    n/a    45/45  20/20  19/20  19/20  39/45  19/20  20/20  16/20  18/20  19/20
Detection performance (using the common ET threshold 1/2). In practice, our detector with RE-AP and with RE-PR can be deployed in parallel to cover both additive perturbation BPs and patch replacement BPs. Here, for simplicity, we apply our detector with RE-AP to classifiers (with BAs using additive perturbation BPs) in ensembles A1-A6, and apply our detector with RE-PR to classifiers (with BAs using patch replacement BPs) in ensembles A7-A10. For each classifier in clean ensembles C1-C6, we apply our detector with both configurations. In Tab. 1, for each ensemble of attack instances, we report the fraction of classifiers for which the attack and all BA target classes are both successfully detected; for each ensemble of clean instances, we report the fraction of classifiers that are inferred to be not attacked. Given the large variety of classification domains, attack configurations, DNN architectures, and defense generalizations mentioned above, using the common ET threshold 1/2, we successfully detect most attacks with only very few false detections (see Tab. 1). More discussions regarding the detection performance are deferred to Apdx. K.
COMPARE ET WITH OTHER STATISTICS
The results for existing REDs applied to the attacks we created are omitted for brevity, since these REDs cannot detect BAs for 2-class domains by their design. However, we compare our ET statistic with some popular types of statistics used by existing REDs in terms of their potential for distinguishing BA target classes from non-target classes. The types of statistics for comparison include: 1) the l_2 norm of the estimated additive perturbation used by Xiang et al. (2020) (denoted L2); 2) the l_1 norm of the estimated mask used by Wang et al. (2019) (denoted L1); and 3) the CS statistic (detailed in Apdx. D.7).

For each type of statistic mentioned above and each benchmark dataset, we consider all classifiers (with and without BA) trained for all 2-class domains generated from the dataset, and plot a double histogram of the statistics obtained for all BA target classes and all non-target classes across these classifiers. For example, in the 1st column of Fig. 2, for our detector with RE-AP, we plot a double histogram of ET statistics obtained for all BA target classes and all non-target classes from all classifiers in each of A1&C1, A2&C2, A3&C3, A4&C4, A5&C5, and A6&C6. Note that all classifiers in, e.g., A1&C1 are trained for 2-class domains generated from CIFAR-10. Here, for simplicity, for ET (with RE-AP) and L2 (both designated for additive perturbation BPs), we do not consider BA target classes with patch replacement BPs (e.g. associated with classifiers in A7-A10); for ET (with RE-PR), L1, and CS (designated for patch replacement BPs), we do not consider BA target classes with additive perturbation BPs (e.g. associated with classifiers in A1-A6).

Based on Fig. 2, for all types of statistics including our ET, the statistics obtained for BA target classes are generally separable from those obtained for non-target classes for classifiers (with and without BA) trained for 2-class domains generated from the same benchmark dataset (using the same training configurations, including DNN architecture). But only for our ET (obtained by both RE-AP and RE-PR) is there a common range (irrespective of the classification domain, attack configurations, and training configurations) for choosing a detection threshold to effectively distinguish BA target classes from non-target classes for all instances; and such a common range clearly includes the constant threshold 1/2 (marked by red dashed lines in Fig. 2) derived mathematically in Sec. 3. By contrast, for both L1 and L2, a proper choice of detection threshold for distinguishing BA target classes from non-target classes is domain-dependent. For example, in the 4th column of Fig. 2, the l_1 norm of masks estimated for both BA target classes and non-target classes for domains with larger image size (e.g. 96×96 for domains in A9&C3 generated from STL-10) is commonly larger than for domains with smaller image size (e.g. 32×32 for domains in A7&C1 generated from CIFAR-10). The CS statistic, on the other hand, not only relies on the domain, but also depends on the DNN architecture. More details regarding CS are deferred to Apdx. D.7.

In summary, all the above-mentioned types of statistics are suitable for BA detection if there is supervision from the same domain for choosing a proper detection threshold. However, in most practical scenarios where such supervision is not available, only our ET statistic with the common detection threshold 1/2 can still be used to achieve good detection performance. Finally, in Fig. 3, we show the receiver operating characteristic (ROC) curves associated with Fig. 2 for each of ET, L1, L2, and CS - our ET statistic has a clearly larger area under the ROC curve (very close to 1) than the other types of statistics.
EXPERIMENTAL VERIFICATION OF PROPERTY 3.1
We verify Property 3.1 by showing that, if a class is not a BA target class, for any two clean images from the other class (considering 2-class domains), the minimum additive perturbation required to induce both images to be misclassified has a larger norm than the minimum perturbation required for each of these two images individually. Here, we randomly choose one clean classifier from each of C1-C6. For each classifier, we randomly choose 50 pairs of clean images from a random class of the associated 2-class domain - these images are also used for detection in Sec. 4.1. For each pair of images, we apply the same RE-AP algorithm (i.e. the BP reverse-engineering algorithm proposed by Xiang et al. (2020) for additive perturbation BPs) to the two images both jointly (to get a pair-wise common perturbation) and separately (to get two sample-wise perturbations for the two images respectively). Then, we divide the l_2 norm of the pair-wise perturbation by the maximum l_2 norm of the two sample-wise perturbations to get a ratio (which is more scale-insensitive than taking an absolute difference). In Fig. 4, we plot the histogram of this ratio over all image pairs for all six classifiers - the ratio for most of the image pairs is greater than 1 (marked by the red dashed line in Fig. 4). For these pairs, it is very likely that the perturbation estimated for one sample cannot induce the other sample to be misclassified and vice versa (otherwise the expected ratio would likely be 1).
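A sketch of this ratio computation follows (illustrative only; reverse_engineer is the sample-wise routine assumed in Sec. 3.3, and reverse_engineer_joint, standing for a joint variant that seeks one perturbation misclassifying a group of samples, is also an assumed callable):

import numpy as np

def pairwise_ratio(f, x1, x2, reverse_engineer, reverse_engineer_joint):
    # Sample-wise minimum-norm perturbations (problem (3)) for each image.
    v1 = reverse_engineer(f, x1)
    v2 = reverse_engineer(f, x2)
    # One common perturbation inducing both images to be misclassified.
    v12 = reverse_engineer_joint(f, [x1, x2])
    # Ratio > 1 supports Property 3.1 (the common perturbation is larger).
    return np.linalg.norm(v12) / max(np.linalg.norm(v1), np.linalg.norm(v2))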
MULTI-CLASS, MULTI-ATTACK BA DETECTION USING ET STATISTIC
Since our detector inspects each class independently, it can also be used for BA detection in more general scenarios with more than two classes and an arbitrary number of attacks (and BA target classes). For demonstration, for each of CIFAR-10, CIFAR-100, and STL-10, we create three attack instances with one, two, and three attacks, respectively (on the original domain). Additional experiments with aggregated results are deferred to Apdx. L due to space limitations. Here, we consider BAs with additive perturbation BPs for simplicity. The target class and the shape of the BP for each attack are randomly selected. We train one classifier for each attack instance (thus, nine classifiers being attacked in total). More details for these attacks and training configurations are in Apdx. D.8. For each domain, we also train a clean classifier (without BA) for evaluating false detections.
Following the description at the end of Sec. 3, we apply the generalized Alg. 1 with RE-AP to these classifiers. For CIFAR-10 and STL-10, we use three clean images per class for detection; for CIFAR-100, we use only one clean image per class for detection. Other detection configurations, including the detection threshold 1/2, are the same as in Sec. 4.1. Since a classifier is deemed attacked if the ET obtained for any class is greater than the threshold 1/2, for each classifier we show the maximum ET over all classes in Tab. 2. Clearly, the maximum ET is greater than 1/2 for all attacked classifiers and less than 1/2 for all clean classifiers. Thus, our detection framework (with the same constant threshold on ET) is also applicable to multi-class, multi-attack scenarios. A minimal sketch of the generalized procedure is given below.
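The sketch is illustrative Python; estimate_et is assumed to be the ET estimation routine sketched in Sec. 3.3, here assumed to close over the BP reverse-engineering module:

def detect_multiclass(f, D_per_class, estimate_et):
    """D_per_class: dict mapping each class label to its list of clean samples.
    Returns the set of detected BA target classes (empty if no detection)."""
    targets = set()
    for t in D_per_class:
        # Pool clean samples of all classes other than t into a super-class.
        pooled = [x for c, xs in D_per_class.items() if c != t for x in xs]
        if estimate_et(f, pooled) > 0.5:   # common threshold 1/2 on ET
            targets.add(t)
    return targets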
CONCLUSIONS
We proposed the first BA detector for two-class, multi-attack scenarios without access to the classifier's training set or any supervision from clean reference classifiers trained for the same domain.
Our detection framework is based on BP reverse-engineering and a novel ET statistic. The ET statistic can be used to effectively distinguish BA target classes from non-target classes, with a particular, theoretically-grounded threshold value 1/2, irrespective of the classification domain, the DNN architecture, and the attack configuration. Our detection framework can also be generalized to incorporate a variety of BP reverse-engineering algorithms to address different BP types, and can be applied to multi-class scenarios with an arbitrary number of attacks.
ETHICS STATEMENT
The main purpose of this research is to understand the behavior of deep learning systems facing malicious activities, and to enhance their safety level by unsupervised means. The backdoor attack considered in this paper is well-known, with open-sourced implementation code. Thus, publication of this paper (with code released at https://github.com/zhenxianglance/2ClassBADetection) will be beneficial to the community in defending against backdoor attacks.

A.1 PROOF OF THEOREM 3.1

For random variables X and Y i.i.d. following distribution P_i, for any realization x of X, we have
P(Y ∈ T_ε(x)) = P(Y ∈ T_ε(X) | X = x) = E[1(Y ∈ T_ε(X)) | X = x].    (8)

Thus, ET, the left hand side of Eq. (6), can be written as

E[P(Y ∈ T_ε(X) | X)] = E[1(Y ∈ T_ε(X))] = P(Y ∈ T_ε(X)).    (9)

Based on the inclusion-exclusion rule,

P(Y ∈ T_ε(X)) + P(X ∈ T_ε(Y)) = 1 + P_{MT,i} − P_{NT,i}.    (10)

Since X and Y are i.i.d. random variables, P(Y ∈ T_ε(X)) = P(X ∈ T_ε(Y)); thus, we get Eq. (6).
A.2 PROOF OF THEOREM 3.2
Step 1: We show that for any i ∈ C and X, Y i.i.d. following distribution P_i, X ∈ T_ε(Y) and Y ∈ T_ε(X) if and only if V_ε(X) ∩ V_ε(Y) ≠ ∅.

(a) If V_ε(X) ∩ V_ε(Y) ≠ ∅, there exists v such that v ∈ V_ε(X) and v ∈ V_ε(Y). By Definition 3.1,

v ∈ V_ε(X) ⇒ f(X + v) ≠ f(X),    (11)
v ∈ V_ε(Y) ⇒ f(Y + v) ≠ f(Y).    (12)

Then, by Definition 3.2, the existence of such v yields X ∈ T_ε(Y) and Y ∈ T_ε(X).

(b) We prove the "only if" part by contradiction. Given X ∈ T_ε(Y) and Y ∈ T_ε(X), suppose V_ε(X) ∩ V_ε(Y) = ∅. By Definition 3.2, if Y ∈ T_ε(X), there exists v ∈ V_ε(X) such that f(Y + v) ≠ f(Y). Since V_ε(X) ∩ V_ε(Y) = ∅, v ∉ V_ε(Y); hence, by Definition 3.1,

||v||_2 − ||v*(Y)||_2 > ε.    (13)

Similarly, there exists v′ ∈ V_ε(Y) such that f(X + v′) ≠ f(X) and v′ ∉ V_ε(X); hence

||v′||_2 − ||v*(X)||_2 > ε.    (14)

By Definition 3.1, for all u ∈ V_ε(Y), ||u||_2 − ||v*(Y)||_2 ≤ ε. Thus, we have

||v||_2 − ||v*(Y)||_2 > ||u||_2 − ||v*(Y)||_2,    (15)

and accordingly ||v||_2 > ||u||_2 for all u ∈ V_ε(Y). Similarly, for all u′ ∈ V_ε(X), we have

||v′||_2 − ||v*(X)||_2 > ||u′||_2 − ||v*(X)||_2,    (16)

and thus, ||v′||_2 > ||u′||_2 for all u′ ∈ V_ε(X). Clearly, there is a contradiction, since there cannot exist an element in one set having a larger norm than all elements in another set and vice versa; therefore, V_ε(X) ∩ V_ε(Y) ≠ ∅.
Step 2: We show that the upper bound for P_{MT,i} goes to 0 as ε → 0. Note that ε is the "quality gap" between the practical solution and the optimal solution.

Note that by Definition 3.1, V_ε(X) ∩ V_ε(Y) ≠ ∅ only if | ||v*(X)||_2 − ||v*(Y)||_2 | ≤ ε; also, based on Step 1,

P_{MT,i} = P(V_ε(X) ∩ V_ε(Y) ≠ ∅) ≤ P(| ||v*(X)||_2 − ||v*(Y)||_2 | ≤ ε).    (17)

For X following continuous distribution P_i, ||v*(X)||_2 also follows some continuous distribution on (0, +∞). Since X and Y are i.i.d., the right hand side of Eq. (17) goes to 0 as ε → 0.
A.3 PROOF OF THEOREM 3.3
For any X ∼ P_i, since v_0 satisfies f(X + v_0) ≠ f(X) and

||v_0||_2 − ||v*(X)||_2 < ||v_0||_2 ≤ ε,    (18)

v_0 ∈ V_ε(X) by Definition 3.1. Thus, v_0 ∈ V_ε(X) ∩ V_ε(Y) for X and Y i.i.d. following P_i; and

P(V_ε(X) ∩ V_ε(Y) ≠ ∅) = 1.    (19)

Thus, based on Step 1 (and Eq. (17)) in Apdx. A.2, we have P_{MT,i} = 1. Since P_{MT,i} + P_{NT,i} ≤ 1, Eq. (6) yields

ET_{i,ε} ≥ 1/2 + (1/2)(P_{MT,i} − 1 + P_{MT,i}) = P_{MT,i}.    (20)

Then we have

ET_{i,ε} = 1.    (21)
B ANALYSIS ON A SIMPLIFIED CLASSIFICATION PROBLEM
In this section, we consider a simplified analogue of practical 2-class classification problems. Although assumptions are imposed for simplicity, we still keep the problem relatively general by allowing freedom in, e.g., the sample distribution in the latent space. With this simplification, we are able to analytically derive the condition for a sample belonging to the transferable set of another sample from the same class. Based on this, we show that 1/2 is the supremum of the ET statistic for this problem when there is no attack. Note that we will reuse some notations that appeared in the main paper; the notations in this section are all self-contained.
B.1 PROBLEM SETTINGS
We consider sample space R^n with an orthonormal basis {α_1, ..., α_d, β_1, ..., β_{n−d}}. We assume that samples from the two classes are distributed on sub-spaces V_0 = {Ac | c ∈ R^d} and V_1 = {Be | e ∈ R^{n−d}} respectively, with A = [α_1, ..., α_d] and B = [β_1, ..., β_{n−d}]. By the definition of V_0 and V_1, we have also defined the latent spaces, i.e. R^d and R^{n−d}, for the two classes. Here, we do not constrain the form or parameters of the distributions for the two classes in either the sample space R^n or the latent spaces, as long as the distributions are continuous. Such an analogue may correspond to some simple classification domains in practice. Moreover, the latent space may correspond to an internal layer space of some deep neural network (DNN) classifier. For example, for a typical ReLU DNN classifier, it is possible that a subset of nodes in the penultimate layer are mainly activated for one class, while another subset of nodes are mainly activated for the other class.
For this simplified domain, we consider a nearest prototype classifier that is capable of classifying the two classes perfectly for any continuous sample distributions. To achieve this, any point x ∈ R^n on the decision boundary of the classifier should have equal distance to V_0 and V_1, i.e.:

||AA^T x − x||_2 = ||BB^T x − x||_2.    (22)

By expanding both sides of Eq. (22) and rearranging terms (using the fact that A^T A = I and B^T B = I), we obtain the decision boundary of the classifier, which is {x ∈ R^n | x^T (AA^T − BB^T) x = 0}. In other words, the classifier f : R^n → {0, 1} is defined by

f(x) = 0 if x^T (AA^T − BB^T) x > 0;  f(x) = 1 if x^T (AA^T − BB^T) x ≤ 0.    (23)

Note that this classifier is not necessarily linear; and the region for each class may not be convex.
B.2 DERIVATION OF TRANSFER CONDITION
We derive the condition under which one sample belongs to the transferable set of another sample from the same class. Since the two classes are symmetric, we focus on class 0 for brevity. Moreover, in this section, we refer to samples by their latent space representation; i.e., instead of "a sample x ∈ R^n from class 0", we use "a sample c ∈ R^d".

First, we present the following modified definition of the transferable set (compared with Def. 3.2 in the main paper) using the latent space representation.

Definition B.1. (Transferable set in latent space) The transferable set for any sample c ∈ R^d from class 0 is defined by

T(c) = {c′ ∈ R^d : f(Ac′) = f(Ac), f(Ac′ + v*(c)) ≠ f(Ac′)},    (24)

where v*(c) is the optimal solution to

minimize_{v∈R^n} ||v||_2  subject to  f(Ac + v) ≠ f(Ac).    (25)

Note that the above definition is in a similar form to Def. 3.2 in the main paper, though here, the quality of the solution to problem (25) is no longer considered. This is because, different from problem (3) in the main paper (which is the prerequisite of Def. 3.2), problem (25) here can be solved analytically, yielding a closed-form solution instead of a solution set with some intrinsic quality bound ε. Accordingly, we present the following theorem, which gives the condition for one sample belonging to the transferable set of another sample from the same class.

Theorem B.1. For any c, c′ ∈ R^d, c′ ∈ T(c) if and only if ||c′ − c/2||_2 ≤ ||c/2||_2.

Proof. First, we derive the solution to problem (25). Note that v ∈ R^n can be decomposed (using the orthonormal basis specified by A and B) as
v = A v_a + B v_b, with v_a ∈ R^d and v_b ∈ R^{n−d}. We substitute this decomposition into the constraint of problem (25). In words, the constraint means that v induces Ac to be (mis)classified to class 1; thus, according to the expression of the classifier in Eq. (23), the constraint can be written as

(Ac + A v_a + B v_b)^T (AA^T − BB^T)(Ac + A v_a + B v_b) ≤ 0.    (26)

Expanding the left hand side of the above inequality and using the fact that A^T B = 0 for simplification, we get a much simpler expression of the constraint:

||v_a + c||_2 ≤ ||v_b||_2.    (27)

Thus, a lower bound of the (square of the) objective to be minimized in problem (25) can be derived as

||v||_2^2 = ||A v_a + B v_b||_2^2 = ||v_a||_2^2 + ||v_b||_2^2 ≥ ||v_a||_2^2 + ||v_a + c||_2^2,    (28)

with equality holding if and only if ||v_b||_2 = ||v_a + c||_2, i.e. Ac + v lies on the decision boundary of the classifier. Note that the right hand side of the inequality above is minimized when v_a = −c/2. Then, the optimal solution v*(c) = A v_a*(c) + B v_b*(c) to problem (25) satisfies:

v_a*(c) = −c/2,  ||v_b*(c)||_2 = ||c||_2 / 2.    (29)
Next, for any c, c′ ∈ R^d, we derive the condition for c′ ∈ T(c). Since f(Ac′) = f(Ac) is already satisfied, by Def. B.1, c′ ∈ T(c) if and only if f(Ac′ + v*(c)) ≠ f(Ac′), which is equivalent to (by Eq. (23))

(Ac′ + v*(c))^T (AA^T − BB^T)(Ac′ + v*(c)) ≤ 0.    (30)

Using the decomposition of v*(c), expanding and rearranging terms on the left hand side of the above, we obtain

c′^T c′ + v_a*(c)^T v_a*(c) − v_b*(c)^T v_b*(c) + 2 c′^T v_a*(c) ≤ 0.    (31)

With the optimal solution in Eq. (29) substituted in, we get

c′^T (c′ − c) ≤ 0,    (32)

or, equivalently,

||c′ − c/2||_2 ≤ ||c/2||_2.    (33)
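A quick numerical check of Thm. B.1 follows (illustrative Python, not from the paper; the dimensions and random seed are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 2
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random orthonormal basis
A, B = Q[:, :d], Q[:, d:]                          # class-0 / class-1 subspaces

def f(x):
    # Nearest prototype classifier of Eq. (23).
    return 0 if x @ (A @ A.T - B @ B.T) @ x > 0 else 1

agree, trials = 0, 1000
for _ in range(trials):
    c, c_prime = rng.standard_normal(d), rng.standard_normal(d)
    # Optimal v*(c) from Eq. (29): v_a* = -c/2; v_b* is any direction of norm ||c||/2.
    w = rng.standard_normal(n - d)
    v_star = A @ (-c / 2) + B @ (w / np.linalg.norm(w) * np.linalg.norm(c) / 2)
    transferred = f(A @ c_prime + v_star) != f(A @ c_prime)
    condition = np.linalg.norm(c_prime - c / 2) <= np.linalg.norm(c / 2)
    agree += int(transferred == condition)
print(agree / trials)   # expected: 1.0, matching Thm. B.1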
B.3 UPPER BOUND ON ET STATISTIC
In this section, we show that 1/2 is the minimum upper bound on the ET statistic for this problem when there is no BA. To do so, we first present a definition of the ET statistic modified from Def. 3.3 in the main paper, merely in adaptation to the latent space representation used here (see Def. B.2). Then, we show the tightness of the bound by giving a concrete example where ET equals 1/2 in Lem. B.1. Finally, we prove that the ET statistic cannot be greater than 1/2.

Definition B.2. For i.i.d. random samples C and C′ following some continuous distribution G on R^d, the ET statistic is defined by

ET = E_{C∼G}[P(C′ ∈ T(C) | C)].    (34)

Lemma B.1. For latent space R^d with dimension d = 1 and distribution G continuous on R, the ET statistic satisfies

1/4 ≤ ET ≤ 1/2,    (35)

where ET = 1/2 if and only if G(0) = 0 or G(0) = 1.
Proof. For d = 1, C and C′ are scalar random variables (so we do not use bold notation), and by Thm. B.1, C′ ∈ T(C) if and only if C′ lies in the interval [C/2 − |C|/2, C/2 + |C|/2]. Thus,

ET_(d=1) = E_{C∼G}[G(C/2 + |C|/2) − G(C/2 − |C|/2)]
         = ∫_{−∞}^{+∞} [G(c/2 + |c|/2) − G(c/2 − |c|/2)] g(c) dc
         = ∫_{−∞}^{0} [G(0) − G(c)] g(c) dc + ∫_{0}^{+∞} [G(c) − G(0)] g(c) dc
         = ∫_{0}^{G(0)} [G(0) − G(c)] dG(c) + ∫_{G(0)}^{1} [G(c) − G(0)] dG(c)
         = G(0)^2 − (1/2) G(0)^2 + (1/2)[1 − G(0)^2] − G(0)[1 − G(0)]
         = 1/2 − G(0) + G(0)^2,    (36)

where g(·) is the density function of distribution G. Note that the last line of Eq. (36) is a quadratic function of G(0) on [0, 1]: it attains its minimum 1/4 at G(0) = 1/2 and its maximum 1/2 at G(0) = 0 or G(0) = 1, which completes the proof.
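A Monte Carlo sanity check of Eq. (36) follows (illustrative Python; the Gaussian choice of G and its parameters are arbitrary assumptions):

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
mu, sigma, n = 0.8, 1.0, 2_000_000
c = rng.normal(mu, sigma, n)
c_prime = rng.normal(mu, sigma, n)
# Empirical ET via the transfer condition of Thm. B.1 (d = 1).
et_mc = np.mean(np.abs(c_prime - c / 2) <= np.abs(c / 2))
# Closed form of Eq. (36): ET = 1/2 - G(0) + G(0)^2, with G(0) the CDF at 0.
g0 = 0.5 * (1 + erf((0.0 - mu) / (sigma * sqrt(2))))
et_cf = 0.5 - g0 + g0**2
print(et_mc, et_cf)   # the two values agree to about three decimal places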
Theorem B.2. For any latent dimension d and any continuous distribution G on R^d,

ET ≤ 1/2,    (37)

and by Lem. B.1, the bound 1/2 is the minimum such upper bound.

Proof. By Lem. B.1, there exist d = 1 and G with G(0) = 0 or G(0) = 1 such that ET = 1/2. Hence, we only need to show that ET ≤ 1/2 for any d and G. Again, by the definition of ET (Def. B.2) and Thm. B.1, we can write ET as

ET = E_{C∼G}[P(||C′ − C/2||_2 ≤ ||C/2||_2 | C)],    (38)

for C and C′ i.i.d. following distribution G. Similar to our proof of Thm. 3.1 in Apdx. A.1,

P(||C′ − c/2||_2 ≤ ||c/2||_2) = E_{C′∼G}[1{||C′ − C/2||_2 ≤ ||C/2||_2} | C = c].    (39)

Thus, based on Eq. (38) and (39), ET can be written as

ET = E_{C,C′∼G}[1{||C′ − C/2||_2 ≤ ||C/2||_2}] = P(||C′ − C/2||_2 ≤ ||C/2||_2).    (40)

Since C and C′ are i.i.d.,

P(||C′ − C/2||_2 ≤ ||C/2||_2) = P(||C − C′/2||_2 ≤ ||C′/2||_2).    (41)

Also note that for a continuous distribution,

P(||C′ − C/2||_2 ≤ ||C/2||_2, ||C − C′/2||_2 ≤ ||C′/2||_2)
    ≤ P(||C′ − C/2||_2^2 + ||C − C′/2||_2^2 ≤ ||C/2||_2^2 + ||C′/2||_2^2)
    = P(||C − C′||_2^2 ≤ 0) = 0.    (42)

Then, by the inclusion-exclusion rule,

P(||C′ − C/2||_2 ≤ ||C/2||_2) + P(||C − C′/2||_2 ≤ ||C′/2||_2)
    − P(||C′ − C/2||_2 ≤ ||C/2||_2, ||C − C′/2||_2 ≤ ||C′/2||_2) ≤ 1.    (43)

Substituting Eq. (41) and (42) into Eq. (43), we have

P(||C′ − C/2||_2 ≤ ||C/2||_2) ≤ 1/2;    (44)

thus, by Eq. (40),

ET ≤ 1/2,    (45)

which finishes the proof.
C USING ET TO DETECT BA WITH PATCH REPLACEMENT BP
In the main paper, we described our detection framework in detail considering additive perturbation BPs embedded by Eq. (1). In fact, our detection framework is not specific to any particular backdoor embedding mechanism. In this section, we repeat our derivation and analysis from Sec. 3, considering patch replacement BPs embedded by Eq. (2). Basically, the definitions, theorems and algorithms presented in this section are matched with those in the main paper - they are under the same framework, which is generally independent of the backdoor pattern embedding mechanism.

Like the organization of Sec. 3, in this section, we first introduce several definitions related to ET but for patch replacement BPs - these definitions are similar to those in the main paper and are customized to patch replacement BPs. In particular, the ET statistic is defined in the same fashion as in Def. 3.3 of the main paper. Then, we show that the same constant detection threshold 1/2 on the ET statistic used for detecting BAs with additive perturbation BPs can be used for detecting BAs with patch replacement BPs as well. Finally, we present the detailed procedure of our detection for patch replacement BPs (as the counterpart of Alg. 1 of the main paper). Again, this section can be viewed as an independent section, where the notations are self-contained.
C.1 DEFINITION OF ET FOR PATCH REPLACEMENT BP
Consider the same classifier f : X → C to be inspected (as in the main paper), with the same label space C = {0, 1} and continuous sample distribution P_i on X for class i ∈ C. For any sample x from any class, the optimal solution to

minimize_{s={m,u}} ||m||_1  subject to  f(Δ(x; m, u)) ≠ f(x)    (46)

is defined as s*(x) = {m*(x), u*(x)}, where Δ(x; m, u) = (1 − m) ⊙ x + m ⊙ u is an alternative expression of the patch replacement embedding formula in Eq. (2) (for notational convenience only). Again, m ∈ M is the binary mask and u ∈ X is the patch for replacement.
Existing methods for solving problem (46) include the one proposed by Wang et al. (2019); again, the practical solutions are usually sub-optimal. Thus, similar to Def. 3.1 in the main paper, we present the following definition in adaptation to patch replacement BPs.
Definition C.1. (ε-solution set for patch replacement BPs) For any sample x from any class, regardless of the method being used, the ε-solution set to problem (46) is defined by

S_ε(x) ≜ {{m, u} : ||m||_1 − ||m*(x)||_1 ≤ ε, f(Δ(x; m, u)) ≠ f(x)},    (47)

where ε > 0 is the "quality" bound of the solutions, which is usually small for existing methods.
Similar to Def. 3.2 in the main paper, we present the following definition (with abused notation) for the transferable set for patch replacement BPs.

Definition C.2. (Transferable set for patch replacement BPs) The transferable set for any sample x and ε > 0 is defined by

T_ε(x) ≜ {y ∈ X : f(y) = f(x), ∃{m, u} ∈ S_ε(x) s.t. f(Δ(y; m, u)) ≠ f(y)}.    (48)
Finally, we present the following definition of the ET statistic for patch replacement BPs, which looks exactly the same as Def. 3.3 in the main paper. However, here, the definition of the transferable set has been customized for patch replacement BPs. Even so, the similarity between these definitions and their counterparts in the main paper already highlights the generalization capability of our detection framework.

Definition C.3. (ET statistic for patch replacement BPs) For any class i ∈ C = {0, 1} and ε > 0, considering i.i.d. random samples X, Y ∼ P_i, the ET statistic for class i is defined by ET_{i,ε} ≜ E[P(Y ∈ T_ε(X) | X)].
C.2 DETECTING BA WITH PATCH REPLACEMENT BP USING ET
For the ET statistic for patch replacement BPs defined above, the same constant detection threshold 1/2 can be used for distinguishing BA target classes from non-target classes. The connection between the ET statistic for patch replacement BPs and the constant threshold 1/2 is established by the same Thm. 3.1 in Sec. 3.2 of the main paper, with the same proof as in Apdx. A.1. Thus, these details are not included here for brevity. Note that P_{MT,i} and P_{NT,i} for class i are defined in the same way as in Thm. 3.1, though the transferable set T_ε(x) is customized for patch replacement BPs in the current section - this is the main reason why we abuse the notation for the transferable set for patch replacement BPs in Def. C.2.

In the following, like in Sec. 3.2 of the main paper, we discuss the non-attack case and the attack case respectively. Readers should notice that the theorems (and the associated proofs) and discussions are similar to those in the main paper. Such similarity further highlights the generalization capability of our detection framework.

Non-attack case. Property 3.1 from the main paper is also applicable here. That is, if class (1 − i), where i ∈ C = {0, 1}, is not a BA target class, P_{NT,i} for class i will likely be larger than 1/2. Similar to our verification of Property 3.1 for additive perturbation BPs in Sec. 4.3 of the main paper, we verify Property 3.1 for patch replacement BPs in Apdx. F. Using a similar protocol as in Sec. 4.3 of the main paper, we show that the common minimum-norm mask required for two samples to be misclassified usually has a larger norm than the minimum-norm mask required for each of them individually.

As for P_{MT,i}, again, Thm. 3.2 from the main paper is also applicable here, but with a slightly different proof (shown below), in adaptation to the modified definitions for the patch replacement BPs in Apdx. C.1.
Proof. Step 1: Similar to the proof in Apdx. A.2, we show that for any i ∈ C and X, Y i.i.d. following distribution P_i, X ∈ T_ε(Y) and Y ∈ T_ε(X) if and only if S_ε(X) ∩ S_ε(Y) ≠ ∅.

(a) If S_ε(X) ∩ S_ε(Y) ≠ ∅, there exists s = {m, u} such that s ∈ S_ε(X) and s ∈ S_ε(Y). By Def. C.1,

s ∈ S_ε(X) ⇒ f(Δ(X; m, u)) ≠ f(X),    (49)
s ∈ S_ε(Y) ⇒ f(Δ(Y; m, u)) ≠ f(Y).    (50)

Then, by Definition C.2, the existence of such s yields X ∈ T_ε(Y) and Y ∈ T_ε(X).

(b) We prove the "only if" part also by contradiction. Given X ∈ T_ε(Y) and Y ∈ T_ε(X), suppose S_ε(X) ∩ S_ε(Y) = ∅. By Definition C.2, if Y ∈ T_ε(X), there exists s = {m, u} ∈ S_ε(X) such that f(Δ(Y; m, u)) ≠ f(Y). Since S_ε(X) ∩ S_ε(Y) = ∅, such s ∉ S_ε(Y); hence, by Definition C.1,

||m||_1 − ||m*(Y)||_1 > ε.    (51)

Similarly, there exists s′ = {m′, u′} ∈ S_ε(Y) such that f(Δ(X; m′, u′)) ≠ f(X) and s′ ∉ S_ε(X) (since it is assumed that S_ε(X) ∩ S_ε(Y) = ∅); hence

||m′||_1 − ||m*(X)||_1 > ε.    (52)

By Definition C.1, for all s̃ = {m̃, ũ} ∈ S_ε(Y), ||m̃||_1 − ||m*(Y)||_1 ≤ ε. Thus, we have

||m||_1 − ||m*(Y)||_1 > ||m̃||_1 − ||m*(Y)||_1,    (53)

and accordingly ||m||_1 > ||m̃||_1 for all s̃ ∈ S_ε(Y). Similarly, for all s̃′ = {m̃′, ũ′} ∈ S_ε(X), we have

||m′||_1 − ||m*(X)||_1 > ||m̃′||_1 − ||m*(X)||_1,    (54)

and thus, ||m′||_1 > ||m̃′||_1 for all s̃′ ∈ S_ε(X). Clearly, there is a contradiction, since there cannot exist an element in one set having a larger norm than all elements in another set and vice versa; therefore, S_ε(X) ∩ S_ε(Y) ≠ ∅.
Step 2: We show that the upper bound for P_{MT,i} goes to 0 as ε → 0.
Note that by Definition C.1, S_ε(X) ∩ S_ε(Y) ≠ ∅ only if | ||m*(X)||_1 − ||m*(Y)||_1 | ≤ ε; also, based on Step 1:

P_{MT,i} = P(S_ε(X) ∩ S_ε(Y) ≠ ∅) ≤ P( | ||m*(X)||_1 − ||m*(Y)||_1 | ≤ ε ).   (55)
For X following continuous distribution P_i, ||m*(X)||_1 also follows some continuous distribution on (0, +∞). Since X and Y are i.i.d., the right hand side of Eq. (55) goes to 0 as ε → 0.
Based on the above, we have reached the same conclusions for patch replacement BPs as in the main paper. That is, for the non-attack case, we will likely have P_{NT,i} ≥ P_{MT,i} and, consequently, ET_{i,ε} ≤ 1/2 based on Thm. 3.1.

Attack case. For class i ∈ C = {0, 1}, we consider a successful BA with target class (1 − i). The BP used by the attacker is specified by s_0 = {m_0, u_0} with mask m_0 and patch u_0. Thus, for any X ∼ P_i, due to the success of the BA, f(X) = i and f(∆(X; m_0, u_0)) ≠ f(X). Similar to our discussion in the main paper, s_0 will likely be a common element in both S_ε(X) and S_ε(Y) for X, Y i.i.d. following P_i. For the same reason, in this case, the ET statistic will likely be greater than 1/2. Similar to additive perturbation BPs, for patch replacement BPs we also have a guarantee for a large ET statistic (ET_{i,ε} = 1) when the mask size of the BP used by the attacker is sufficiently small. This property is summarized in the theorem below.
Theorem C.1. If class (1 − i) is the target class of a BA with patch replacement BP s_0 = {m_0, u_0} such that ||m_0||_1 ≤ ε, we will have P(S_ε(X) ∩ S_ε(Y) ≠ ∅) = 1 for X and Y i.i.d. following P_i; and furthermore, ET_{i,ε} = 1.
Proof. For any X ∼ P_i, since s_0 = {m_0, u_0} satisfies f(∆(X; m_0, u_0)) ≠ f(X), and also because

||m_0||_1 − ||m*(X)||_1 < ||m_0||_1 ≤ ε,   (56)

by Definition C.1, we have s_0 ∈ S_ε(X). Thus, s_0 ∈ S_ε(X) ∩ S_ε(Y) for X and Y i.i.d. following P_i; and P(S_ε(X) ∩ S_ε(Y) ≠ ∅) = 1.
The rest of the proof (showing that ET_{i,ε} = 1 for this attack scenario) is exactly the same as the proof of Thm. 3.3 shown in Apdx. A.3, and is thus omitted here.

C.3 DETECTION PROCEDURE FOR PATCH REPLACEMENT BP

The same procedure, i.e. Alg. 1 in the main paper, can be used for detecting BAs with patch replacement BPs, with only the following two modifications. We only need to first replace line 8 of Alg. 1 by: "Obtain an empirical solution ŝ(x_n^{(i)}) = {m̂(x_n^{(i)}), û(x_n^{(i)})} to problem (46) using random initialization." and then change line 9 of Alg. 1 to: "T_ε(x_n^{(i)}) ← T_ε(x_n^{(i)}) ∪ {x_k^{(i)} | k ∈ {1, · · · , N_i} \ n, f(∆(x_k^{(i)}; m̂(x_n^{(i)}), û(x_n^{(i)}))) ≠ f(x_k^{(i)})}".
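As a minimal sketch of the modified line 9 above, the snippet below updates the transferable set of a sample x_n given a mask/patch pair estimated on it. The function name, the PyTorch setting, and the batch-of-one evaluation are our own illustrative assumptions, not the exact implementation.

```python
import torch

def update_transferable_set(model, x_n, others, mask, patch):
    """Patch replacement version of the transferable set update: every other
    clean sample that is misclassified once the BP {mask, patch} estimated
    on x_n is embedded (Eq. (2) style) joins the set. `mask` is in [0, 1]
    with the images' spatial shape; `patch` is an image-shaped tensor."""
    transferable = []
    with torch.no_grad():
        for x_k in others:
            x_bp = (1 - mask) * x_k + mask * patch  # patch replacement embedding
            pred_clean = model(x_k.unsqueeze(0)).argmax(1).item()
            pred_bp = model(x_bp.unsqueeze(0)).argmax(1).item()
            if pred_bp != pred_clean:  # misclassification induced by the BP
                transferable.append(x_k)
    return transferable
```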
Such a simple "module-based" modification allows our detection framework to be applicable to a variety of BP embedding mechanisms, which again shows the generalization capability of our detection framework.

D.1 DETAILS OF DATASETS

Our experiments are conducted on six popular benchmark image datasets: CIFAR-10, CIFAR-100 Krizhevsky (2012), STL-10 Coates et al. (2011), TinyImageNet, FMNIST Xiao et al. (2017), and MNIST Lecun et al. (1998). All the datasets are associated with the torchvision package, except that STL-10 is downloaded from the official website https://cs.stanford.edu/~acoates/stl10/. Though the details of these datasets can be easily found online, we summarize them in Tab. 3.
D.2 DETAILS FOR GENERATING THE 2-CLASS DOMAINS
In Sec. 4.1, we generate 45 2-class domains from CIFAR-10, and 20 2-class domains from each of CIFAR-100, STL-10, TinyImageNet, FMNIST, and MNIST. Here we provide more details about how these 2-class domains are generated.
As mentioned in Sec. 4.1, for CIFAR-10, the 45 2-class domains are corresponding to the 45 unordered class pairs of CIFAR-10 respectively. For each of CIFAR-100, FMNIST, and MNIST, we randomly sample 20 unordered class pairs, each forming a 2-class domain. For TinyImageNet, due to high image resolution and data scarcity, we generate 20 "super class" pairs -for each pair, we randomly sample 20 classes from the original category space and then evenly assign them to the two super classes (each getting 10 classes from the original category space). Similarly, for STL-10 with 10 classes, we generate 20 super class pairs by randomly and evenly dividing the 10 classes into two groups (of 5 classes from the original category space) for each pair. For each generated 2-class domain, we use the subset of data associated with these two (super) classes from the original dataset, with the original train-test split.
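A minimal sketch of this pair-sampling step is given below; the function name and the seeding convention are illustrative assumptions.

```python
import itertools
import random

def two_class_domains(num_classes, num_domains=None, seed=0):
    """Enumerate (or randomly subsample) unordered class pairs, each pair
    defining a 2-class domain. For CIFAR-10 all 45 pairs are used; for the
    other datasets 20 random pairs are drawn."""
    pairs = list(itertools.combinations(range(num_classes), 2))
    if num_domains is None or num_domains >= len(pairs):
        return pairs
    rng = random.Random(seed)
    return rng.sample(pairs, num_domains)

print(len(two_class_domains(10)))       # 45 domains from CIFAR-10
print(len(two_class_domains(100, 20)))  # 20 domains from CIFAR-100
```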
D.3 DETAILS OF BPS
In this paper, we consider both additive perturbation BPs embedded by Eq. (1) and patch replacement BPs embedded by Eq. (2) that are frequently used in existing backdoor papers. In addition to the BPs (with images embedded with them) illustrated in Fig. 1 of the main paper, in Fig. 5 we show the BPs used in our experiments that are not included in the main paper due to space limitations. In the following, we also provide details for each of these BPs (in both Fig. 1 and Fig. 5).
First, we provide the details for all the additive perturbation BPs. The "chessboard" pattern in Fig. 1a is a "global" pattern that has been used by Xiang et al. (2020). Here, one and only one of every two adjacent pixels is perturbed positively by 3/255 in all color channels. Another global pattern is the "static" pattern in Fig. 5a, considered by both Zhong et al. (2020) and Xiang et al. (2021b): for pixel indices starting from 0, a pixel (i, j) is perturbed positively if and only if i and j are both even numbers; again, the perturbation size is 3/255 for all pixels being perturbed. Other additive perturbation BPs are all "localized" patterns. The "L" pattern in Fig. 1b and the "X" pattern in Fig. 1c have been used by both Tran et al. (2018) and Wang et al. (2020). For the "L" pattern, we perturb all the color channels by 50/255. For the "X" pattern, for each attack, we randomly choose a channel (for all images to be embedded in for this particular attack) and perturb the associated pixels positively by 50/255. The "pixel" BP in Fig. 1d has been used by Tran et al. (2018); Chen et al. (2018), where a single pixel is perturbed in all channels by 50/255 for color images and 70/255 for gray-scale images. The "chessboard patch" pattern in Fig. 5b, the "cross" in Fig. 5c, and the "square" in Fig. 5d have all been previously considered. For the cross and the chessboard patch, the perturbation sizes for each pixel being perturbed are 50/255 and 5/255, respectively; and perturbation is applied to all channels. For the square pattern, one channel is randomly selected for each attack and the perturbation size is 50/255. The spatial locations of all these localized patterns are randomly selected over the entire image (and fixed for all images to be embedded in) for each attack. Only for gray-scale images, the pixels being perturbed are restricted to one of the four corners, such that these pixels will likely be black (with pixel value close to 0) originally.
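As an illustration of the additive embedding described above, the sketch below applies the global "chessboard" perturbation with clipping to [0, 1]. The helper name and the HxWxC float-image convention are our own assumptions.

```python
import numpy as np

def embed_chessboard(x, size=3 / 255):
    """Additive 'chessboard' BP: one and only one of every two adjacent
    pixels is perturbed positively (here, in all channels), and the result
    is clipped back to the valid range. `x` is an HxWxC image in [0, 1]."""
    h, w, _ = x.shape
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    v = ((ii + jj) % 2 == 0).astype(x.dtype)[..., None] * size
    return np.clip(x + v, 0.0, 1.0)

x = np.random.rand(32, 32, 3)
print(np.abs(embed_chessboard(x) - x).max() <= 3 / 255)  # True
```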
Next, we provide the details for the two patch replacement BPs considered in our experiments. The BP in Fig. 1e is a small, monochromatic patch located near the margin of the images to be embedded in. Similar BPs have been considered by Gu et al. (2019) and Wang et al. (2019). The color is randomly chosen and fixed for each attack. The BP in Fig. 1f is a small noisy patch located near the margin of the images to be embedded in. Similar BPs have been considered by Turner et al. (2019) and Saha et al. (2020). For both BPs, once the location is selected, the same location will be applied to all images to be embedded in for the same attack. Also, for both BPs, the size of the patch is 3 × 3 for the 2-class domains generated from CIFAR-10 and CIFAR-100; 4 × 4 for the 2-class domains generated from TinyImageNet; and 10 × 10 for the 2-class domains generated from STL-10.
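A minimal sketch of the patch replacement embedding x̃ = (1 − m) x + m u for these BPs follows; the helper name and NumPy image convention are illustrative assumptions.

```python
import numpy as np

def embed_patch(x, patch, top, left):
    """Patch replacement BP embedding: a binary mask m equal to 1 on a small
    k x k square (fixed location per attack) selects where the image content
    is replaced by the patch u."""
    k = patch.shape[0]
    m = np.zeros(x.shape[:2], dtype=x.dtype)
    m[top:top + k, left:left + k] = 1.0
    u = np.zeros_like(x)
    u[top:top + k, left:left + k] = patch
    return (1 - m[..., None]) * x + m[..., None] * u

x = np.random.rand(32, 32, 3)
noisy_patch = np.random.rand(3, 3, 3)  # e.g. a 3x3 noisy patch as in Fig. 1f
print(embed_patch(x, noisy_patch, 1, 1).shape)  # (32, 32, 3)
```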
D.4 OTHER ATTACK CONFIGURATIONS
In the main paper, we defined a "code" for each ensemble of attack instances based on both the type of BP being used and the dataset from which the associated 2-class domains are generated. Here, we summarize these codes in Tab. 4 for reference. Also in the main paper, we have described most of the attack configurations for the attack instances in each ensemble. For some ensembles, each attack instance is associated with two attacks, each with one of the two classes being the target class. Also, the BPs used by the two attacks, though of the same type, are guaranteed to be sufficiently different in shape (for additive perturbation BPs) or color (for patch replacement BPs). For other ensembles, there is only one attack for each attack instance. Here, we provide a summary of these configurations in Tab. 5. Also in Tab. 5, for attack instances in each ensemble, we summarize the number of training samples embedded with the BP and labeled to the target class that are used for poisoning the classifier's training set for the associated attacks.
D.5 TRAINING DETAILS
Here we provide the training details that are not included in the main paper due to space limitations. For each generated 2-class domain, we use the same training configuration irrespective of the existence of BA. In Tab. 6, we show the training details, including learning rate, batch size, number of epochs, whether or not training data augmentation is used, and the choice of optimizer (Adam Kingma & Ba (2015) or stochastic gradient descent (SGD)), for 2-class domains generated from CIFAR-10, CIFAR-100, STL-10, TinyImageNet, FMNIST, and MNIST, respectively. Training data augmentations for 2-class domains generated from TinyImageNet include random cropping and random horizontal flipping; these augmentations are helpful for the backdoor mapping to be learned without compromising the classifier's accuracy on clean test samples. Otherwise, we may not easily produce an effective attack to evaluate the performance of our defense.
We also show the effectiveness of the attacks we created for evaluating our defense. Commonly, the effectiveness of a BA is evaluated by attack success rate (ASR) and clean test accuracy (ACC) Xiang et al. (2020); Wang et al. (2020). ASR is defined (for each attack) as the probability that a test image from the source class is (mis)classified to the target class of BA when the BP is embedded. ACC is defined (for each classifier being attacked, regardless of the number of attacks) as the classification accuracy on test samples with no BP. In our experiments, we evaluate ASR and ACC using images from the test set associated with each 2-class domain; these images are not used during training. For each attack instance, we evaluate ASR for each attack (since there can be either one attack or two attacks with different BA target classes) separately. In Tab. 7, for each of ensembles A1-A10, we show the average ACC for the classifiers being attacked over all instances in the ensemble; we also show the mean and minimum ASR over all attacks of all instances in the ensemble.

Table 6: Training details, including learning rate, batch size, number of epochs, whether or not using training data augmentation, and choice of optimizer (Adam Kingma & Ba (2015) or stochastic gradient descent (SGD)), for 2-class domains generated from CIFAR-10, CIFAR-100, STL-10, TinyImageNet, FMNIST, and MNIST, respectively.

As a reference, in Tab. 8, for each of ensembles C1-C6 of clean instances, we show the average ACC over all the clean classifiers for the ensemble. Based on the results in both Tab. 7 and Tab. 8, all the attacks we created are successful, with high ASR and almost no degradation in ACC.
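The sketch below evaluates ASR and ACC as defined above; the function name, the data-loader interface, and the embed_bp callback are illustrative assumptions.

```python
import torch

@torch.no_grad()
def asr_and_acc(model, test_loader, source_class, target_class, embed_bp):
    """ASR: fraction of source-class test images (mis)classified to the BA
    target class once the BP is embedded. ACC: accuracy on clean test images.
    `embed_bp` applies the attacker's BP to a batch of images."""
    asr_hits = asr_n = acc_hits = acc_n = 0
    for x, y in test_loader:
        pred = model(x).argmax(1)
        acc_hits += (pred == y).sum().item()
        acc_n += len(y)
        src = x[y == source_class]
        if len(src):
            pred_bp = model(embed_bp(src)).argmax(1)
            asr_hits += (pred_bp == target_class).sum().item()
            asr_n += len(src)
    return asr_hits / max(asr_n, 1), acc_hits / acc_n
```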
D.6 BP REVERSE-ENGINEERING ALGORITHMS
In the main paper, we evaluate our detection framework with two BP reverse-engineering algorithms, which are denoted as RE-AP and RE-PR, respectively. RE-AP is proposed by Xiang et al. (2020) for reverse-engineering additive perturbation BPs. The general form of RE-AP estimates a common perturbation that induces a group of images to be misclassified to a common target class. When there is a single image in such a group, and when there are only two classes, the optimization problem solved by RE-AP is reduced to (3). To solve this problem for some target class i ∈ C and image x ∈ X from the class other than i, RE-AP minimizes the following surrogate objective function:
L_AP(v) = − log p(i | x + v),   (58)
using gradient descent with v initialized from 0, until the constraint of (3) is satisfied. Here, p(i|x) denotes the classifier's posterior of class i given any input sample x ∈ X. The step size for minimization is set small to ensure a good "quality" for the solution; otherwise, the resulting perturbation may have a much larger norm than the minimum-norm perturbation required for inducing a misclassification. Moreover, for each domain and each classifier to be inspected, choosing a proper step size can be done based on the norm of the solution and without any knowledge of the presence of BA.
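A minimal PyTorch sketch of this minimization is given below; the early-stopping rule stands in for the constraint of problem (3), and the function name and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def re_ap(model, x, target, step=1e-3, max_iter=5000):
    """RE-AP sketch: minimize L_AP(v) = -log p(target | x + v) by gradient
    descent from v = 0, stopping once x + v is classified to `target`.
    A small step size keeps the solution close to minimum norm."""
    v = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.SGD([v], lr=step)
    for _ in range(max_iter):
        logits = model((x + v).unsqueeze(0))
        if logits.argmax(1).item() == target:  # constraint satisfied
            break
        loss = -F.log_softmax(logits, dim=1)[0, target]
        opt.zero_grad()
        loss.backward()
        opt.step()
    return v.detach()
```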
Another BP reverse-engineering algorithm, RE-PR, is proposed by Wang et al. (2019) for patch replacement BPs. Similarly, RE-PR solves problem (46) in Apdx. C.1, which is the counterpart of problem (3) for patch replacement BPs. Formally, for some target class i ∈ C and image x ∈ X from the class other than i, RE-PR minimizes the following surrogate objective function:
L_PR(m, u) = − log p(i | ∆(x; m, u)) + λ||m||_1,   (59)
using gradient descent, where λ is the Lagrange multiplier and ∆(x; m, u) is the alternative expression to Eq. (2) (see the description below Eq. (46)) for patch replacement BP embedding.

D.7 LIMITATIONS OF THE COSINE SIMILARITY STATISTIC IN BA DETECTION

In the main paper, we compared our ET statistic with three other types of detection statistics, including a cosine similarity (CS) statistic proposed by Wang et al. (2020). As an important work addressing unsupervised backdoor detection without access to the training set, this method can effectively detect backdoor attacks when there are multiple classes with only a few of them being backdoor target classes.
For general classification domains with an arbitrary number of classes, the CS statistic is obtained for each putative target class t ∈ C as follows. First, a common (patch replacement) BP is estimated to: a) induce a group of images from classes other than t to be misclassified in an untargeted fashion (i.e., to any class other than their originally labeled classes); b) not induce any class t images to be misclassified; and c) have as small a mask size (measured by l1 norm) as possible. Accordingly, Wang et al. (2020) proposed to minimize the following loss:
L_CSC(m, u) = Σ_{i∈C\t} Σ_{x∈D_i} max{ h_i(∆(x; m, u)) − max_{j≠i} h_j(∆(x; m, u)), −κ } + Σ_{x∈D_t} max{ max_{j≠t} h_j(∆(x; m, u)) − h_t(∆(x; m, u)), −κ } + λ||m||_1,   (60)
where h_i(·) : X → R is the logit (right before softmax) of class i ∈ C Carlini & Wagner (2017). We denote the estimated (common) BP for class t as s*_t = {m*_t, u*_t}. Then, for each image not from class t, a sample-wise BP is estimated to: a) induce the image to be misclassified to class t; and b) have as small a mask size (measured by l1 norm) as possible. Thus, the following loss is minimized for each x ∈ ∪_{i∈C\t} D_i:
L_CSS(m, u) = max{ max_{j≠t} h_j(∆(x; m, u)) − h_t(∆(x; m, u)), −κ } + λ||m||_1.   (61)
We denote the sample-wise BP estimated for class t and sample x as s̃*_t(x) = {m̃*_t(x), ũ*_t(x)}. Finally, the cosine similarity statistic for class t is computed by:
CS_t = (1 / |∪_{i∈C\t} D_i|) Σ_{x ∈ ∪_{i∈C\t} D_i} cos( z(∆(x; m*_t, u*_t)), z(∆(x; m̃*_t(x), ũ*_t(x))) ),   (62)
where z(·) : X → R^d is the mapping from the input layer to the penultimate layer with some dimension d, and cos(·, ·) : R^d × R^d → [−1, 1] is the cosine similarity between two real vectors.
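A minimal sketch of evaluating Eq. (62), given already estimated common and sample-wise BPs, is shown below; the penultimate-feature extractor and the embed callback are assumptions of this illustration.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def cs_statistic(penultimate, images, common_bp, samplewise_bps, embed):
    """Average cosine similarity, over images not from the putative target
    class, between penultimate-layer features of each image embedded with
    the common BP and with its own sample-wise BP (Eq. (62) style).
    `penultimate` maps a batch to R^d; `embed(x, bp)` applies a BP
    (mask/pattern pair) to a single image."""
    sims = []
    for x, bp_x in zip(images, samplewise_bps):
        z_common = penultimate(embed(x, common_bp).unsqueeze(0))
        z_sample = penultimate(embed(x, bp_x).unsqueeze(0))
        sims.append(F.cosine_similarity(z_common, z_sample, dim=1).item())
    return sum(sims) / len(sims)
```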
Based on our results in Fig. 2, CS for BA target classes and non-target classes are separable for most domains. This is not surprising because when class t ∈ C is a BA target class, the estimated common BP will likely be highly correlated with the sample-wise BPs estimated for images from classes other than t; thus, the resulting CS will be large and possibly close to 1. If class t ∈ C is not a BA target class, the estimated common BP may induce images from classes other than t to be misclassified to some arbitrary classes (possibly the "semantically" closest class for each individual image). Thus, the common BP may be very different from sample-wise BP estimated to only induce misclassifications to class t. Consequently, the CS statistic will likely be small.
However, considering our 2-class problem, with no sufficient number of statistics to inform estimation of a null distribution, and also our assumption that there is no domain-specific supervision (e.g. using clean classifiers trained for the same domain) for setting a proper detection threshold, the CS statistic may not be effective, since it is sensitive to domains and DNN architectures.
First, when there is no BA, for ReLU networks, the penultimate layer features are always non-negative, such that the cosine similarity is guaranteed to be non-negative. However, for DNNs with sigmoid or leaky ReLU activation functions, the penultimate layer features can be negative. Thus, the CS statistic may be distributed in the entire interval of [−1, 1]. Such a large difference in the null distribution of the CS statistic makes the choice of a detection threshold very difficult without domain-specific knowledge.
Second, CS is also sensitive to the classification domain, especially the number of classes. Considering some putative target class t ∈ C, when there are a large number of classes in the domain, the common BP estimated for a group of images from classes other than t will likely be very different from the sample-wise BP estimated for each individual in the group. In particular, most images will likely be misclassified to some class other than t when the common BP is embedded; but the sample-wise BP is estimated for each of these images to induce them to be misclassified to class t. Thus, the penultimate layer features associated with the common BP and the sample-wise BP will likely be different for most images. However, when there are only two classes, i.e. C = {0, 1} in our case, the images used for BP estimation for class t are all from class (1 − t). Moreover, both the common BP and the sample-wise BPs estimated for these images will induce them to be misclassified to class t. Thus, CS obtained in this case will likely be larger than in cases with a large number of classes when there is no BA. We demonstrate this phenomenon in the following.
We construct five domains from CIFAR-10. The first four domains contain 2, 4, 6, and 8 classes randomly selected from the 10 classes of CIFAR-10, respectively, and the fifth domain is the original CIFAR-10 with 10 classes. We train a classifier without BA for each domain using the same configurations as in Sec. 4.1. For each classifier, we obtain CS statistics for all classes. In Fig. 6, we show the average CS statistic over all classes for the five classifiers. In general, the CS statistic decreases as the number of classes grows; thus, it is highly domain-dependent.
D.8 DETAILS FOR MULTI-CLASS EXPERIMENTS
In Sec. 4.4, we evaluate the performance of our detection framework for multi-class scenarios with arbitrary number of attacks. In other words, the classifier has more than two classes, and each class can possibly be a BA target class.
For each of CIFAR-10, CIFAR-100, and STL-10, we create three attack instances with one, two, and three attacks, respectively. Like the attacks we created for the 2-class domains, the attacks here are created following the same data poisoning protocol that has been widely considered in existing works. That is, we create backdoor training images by embedding a BP into a small set of images from classes other than the target class. These backdoor training images are labeled to a target class and inserted into the training set of the classifier Gu et al. (2019). In our experiment here, for each attack of each instance, we randomly select a target class. For simplicity, we consider only additive perturbation BPs here. We randomly select a shape (and location for localized BPs) for the BP to be used from our pool of candidate BPs. Note that for any two attacks of the same attack instance, the target classes and the BPs should both be different from each other. For attacks on CIFAR-10, CIFAR-100, and STL-10, the backdoor training images are created using 60, 10, and 100 clean images per class (not including the target class), respectively. For each domain, the three attack instances and the clean classifier use the same training configuration. For CIFAR-10 and STL-10, we use ResNet-18 as the DNN architecture; for CIFAR-100, we use the ResNet-34 architecture. For all three domains, training data augmentations including random cropping and random horizontal flipping are adopted. Other configurations for classifier training for these domains are the same as for the 2-class domains generated from these original domains (shown in Tab. 6). Using these training configurations, the resulting nine classifiers being attacked (three classifiers for the three attack instances respectively for each domain) all have high attack success rates (ASR). Compared with the three clean classifiers (without BA) trained for the three domains respectively, there is also no significant degradation in clean test accuracy (ACC) for the classifiers being attacked. ASR and ACC for these classifiers are shown in Tab. 9.
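For reference, a minimal sketch of the data poisoning protocol described above is given below; the function name and the list-of-pairs dataset format are illustrative assumptions.

```python
import random

def poison_training_set(train_set, target_class, embed_bp, n_poison, seed=0):
    """Standard poisoning protocol: embed the BP into a small set of images
    from classes other than the target class, relabel them to the target
    class, and append them to the training set. `train_set` is a list of
    (image, label) pairs; `embed_bp` applies the attacker's BP."""
    rng = random.Random(seed)
    donors = [(x, y) for (x, y) in train_set if y != target_class]
    chosen = rng.sample(donors, n_poison)
    poisoned = [(embed_bp(x), target_class) for (x, _) in chosen]
    return train_set + poisoned
```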
E USING ET TO DETECT CLEAN-LABEL BAS
In this section, we demonstrate the effectiveness of our detection framework against a recent clean-label BA proposed by Turner et al. (2019). Clean-label BAs are motivated by the possible human/machine inspection of the training set. For example, backdoor training samples labeled to some target class inserted by a typical backdoor attacker are originally from classes other than the target class. Such "mislabeling" may be noticed by a human expert who inspects the training set manually, or may be detected by a shallow neural network trained on a small held-out validation set that is guaranteed to be clean. Thus, Turner et al. (2019) proposed to create backdoor training samples (to be inserted into the classifier's training set) by embedding the BP only into target class samples. However, for target class samples embedded with the BP, there is no guarantee that it is the BP, rather than the features associated with the target class, that will be learned by the classifier. Thus, Turner et al. (2019) proposed to "destroy" these target class features before embedding the BP when creating backdoor training samples. Then, the classifier will learn the BP and classify any test sample embedded with the BP to the target class.
One simple yet effective approach proposed by Turner et al. (2019) to destroy the features in the backdoor training samples associated with the target class is inspired by a method for creating adversarial examples. The backdoor attacker needs to first train a surrogate classifier using an independently collected clean dataset. Then, for each of a small set of samples from the target class used for creating backdoor training samples, the attacker independently launches a projected gradient descent (PGD) attack (Madry et al. (2018)) to have the sample be predicted to any class other than its original class (i.e. the target class) by the surrogate classifier. These samples with the target class features destroyed are then embedded with the BP, are still labeled to the target class, and are inserted into the classifier's training set.

In our experiment here, we randomly generate ten 2-class domains from CIFAR-10 following its original train-test split. For each 2-class domain, we first train a surrogate classifier using a subset of the training set (2000 training images per class). The remaining samples (3000 per class) are assumed to be possessed by the trainer for training the victim classifier. For each 2-class domain, we create one attack instance with one BA targeting the second class and using an additive perturbation BP. The candidate BPs to be used are the same as in our experiments in the main paper.
Here, for each BA, we use 1500 images from the target class (i.e. the second class of the associated 2-class domain) to create backdoor training images. These images are randomly sampled from the images used for training the surrogate classifier. For each of these 1500 images, we independently generate an adversarial perturbation using the surrogate classifier, following the standard protocol of PGD Madry et al. (2018). In particular, we set the maximum perturbation size to 8/255, the number of perturbation steps to 10, and the step size to 1/255. Most of the perturbed images are misclassified by the surrogate classifier. Then, for each attack, we embed the BP randomly selected from the candidates into these adversarially perturbed images and still label them to the target class, from which they originally come. The created backdoor training images are inserted into the training set of the victim classifier. They will be barely noticeable to human inspectors, since they visually look like standard target class images and the embedded BP is almost imperceptible to humans. Some examples of backdoor training images are shown in Fig. 7.
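A minimal sketch of creating one such clean-label backdoor training image is shown below, with PGD written out explicitly; the function name and tensor conventions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def clean_label_poison(surrogate, x, y, embed_bp,
                       eps=8 / 255, steps=10, alpha=1 / 255):
    """Destroy the target-class features of x with an untargeted PGD attack
    on the surrogate classifier, then embed the BP; the label stays y (the
    target class). `x` is a CxHxW tensor in [0, 1]."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv.unsqueeze(0)),
                               torch.tensor([y]))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return embed_bp(x_adv.detach()), y
```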
For each attack instance, we use the same training configurations as in Sec. 4.1 of the main paper to train the victim classifier. We also train a clean classifier for each instance to evaluate false detections. Moreover, we apply our detection framework with the BP reverse-engineering algorithm RE-AP and the same defense configurations as in the main paper to both the classifiers being attacked and the clean classifiers. In Fig. 8, we show the maximum ET (over the two classes) for all these classifiers.

F VERIFICATION OF PROPERTY 3.1 FOR PATCH REPLACEMENT BPS

We verify Property 3.1 for patch replacement BPs embedded by Eq. (2). Similar to Sec. 4.3 of the main paper, here, we show that when class (1 − i) for i ∈ C = {0, 1} is not a BA target class, for any two samples from class i, the minimum l1 norm for any common mask that induces both samples to be misclassified will likely be larger than the minimum l1 norm for the masks inducing each individual sample to be misclassified. The masks are obtained by solving problem (46). We randomly choose one clean classifier from each of C1-C4. For each classifier, we randomly choose 10 pairs of clean images from a random class of the associated 2-class domain. For each pair of images, we apply the RE-PR algorithm (for reverse-engineering patch replacement BPs) on the two images jointly (to get a pair-wise common mask) as well as separately (to get two sample-wise masks for the two images respectively). Again, we divide the l1 norm of the pair-wise (common) mask by the maximum l1 norm of the two sample-wise masks to get a ratio. In Fig. 9, we plot the histogram of this ratio for all image pairs for all four classifiers; the ratio for most of the image pairs is greater than 1 (marked by the red dashed line in Fig. 9). For these pairs, it is very likely that the BP (in particular, the mask) estimated for one sample cannot induce the other sample to be misclassified and vice versa. Note that if, for any image pair, the mask (and some associated pattern) estimated for one sample can induce the other to be misclassified, such a mask (and the associated pattern) would be a common mask (and pattern) inducing both images to be misclassified. Then, we would expect the ratio computed above to be very close to 1 for such an image pair.
G INFLUENCE OF THE NUMBER OF CLEAN IMAGES FOR DETECTION
The core of our detector is to estimate the ET statistic for each class. Note that the ET statistic is in fact an expectation. In principle, with fewer clean images per class for ET estimation, the variance of the estimated ET will be larger, though the execution time for ET estimation may be smaller. Thus, we would expect that, sometimes, the ET estimated using only a few clean images may be smaller than 1/2 for the attack case, or larger than 1/2 for the clean case. In Sec. 4.1, we used 20 images per class (40 images in total) for backdoor detection for all attack instances and all clean instances, and achieved good detection accuracy. Here, we show the influence of the number of clean images per class on detection accuracy. In particular, for all 45 attack instances and all 45 clean instances of two-class domains generated from CIFAR-10 (i.e. A1 and C1), we apply the same detector as in Sec. 4.1 with detection threshold 1/2, but varying the number of clean images per class (in [2, 5, 10, 15]) used for detection (i.e. ET estimation). As shown in Fig. 10, with 5 clean images per class (10 images in total), our method achieves relatively good detection accuracy. Even with only 2 clean images per class (which is the minimum sample size for empirical ET estimation), our detector catches ∼80% of attacks with less than 10% false detection rate⁵.
H CHOICE OF THE PATIENCE PARAMETER τ
In all experiments in the main paper, we set the patience parameter in Alg. 1 to τ = 4 and claim that larger τ will not induce much change to the estimated ET. Here, we provide some empirical evidence to support this claim.
We apply Alg. 1 with RE-AP to a classifier being attacked in A 1 and a classifier being attacked in A 2 . We also apply Alg. 1 with RE-PR to a classifier being attacked in A 7 and a classifier being attacked in A 8 . Moreover, Alg. 1 with both RE-AP and RE-PR are applied to a clean classifier in C 1 and a clean classifier in C 2 . Note that classification domains associated with A 1 , A 7 , and C 1 are generated from CIFAR-10; while classification domains associated with A 2 , A 8 , and C 2 are generated from CIFAR-100.
For all experiments in this section, we set the patience parameter to τ = 8 instead of the τ = 4 used in the main paper. The purpose is to get a better observation of the asymptotic behavior of p_n^{(i)} = |T_ε(x_n^{(i)})| / (N_i − 1) during ET estimation for each clean sample x_n^{(i)} used for detection (see lines 7-12 of Alg. 1 for the definition of related quantities). The number of clean samples used for detection is 20.
As shown in Fig. 11, when applying our method to classifiers being attacked, p_n^{(i)} (for some class i) quickly grows to 1 (in very few iterations) for most clean samples used for detection (see (a)(b)(e)(f) of Fig. 11). For a few clean samples, p_n^{(i)} quickly grows to a large value close to 1 and then slowly reaches 1 (see (e)(f) of Fig. 11). Only for very few samples does p_n^{(i)} stay at some value in between 0 and 1 (see (b)(e) of Fig. 11). Based on these observations, which are generally true for the other domains we investigated, τ = 4 is not a critical choice for our detection performance. The estimated ET, which is the average p_n^{(i)} over all clean samples used for detection, will likely be greater than 1/2, as determined by the majority of clean samples used for detection.
On the other hand, when applying our method to clean classifiers, with RE-AP, p_n^{(i)} stays at 0 (or some small values close to 0) for all clean samples used for detection (see (c)(g) of Fig. 11). When applying our method with RE-PR to the same classifiers, p_n^{(i)} stays at 0 or some small values close to 0 for most samples and shows a trend of convergence (see (d)(h) of Fig. 11). Again, reducing τ from 8 to 4 or further increasing τ will not change the estimated ET much; the estimated ET for these clean instances will still be clearly less than 1/2.

⁵ Here, we claim a failure in detection if ET is exactly 1/2 for both clean instances and attack instances. Thus, the actual detection accuracy should be higher than those in Fig. 10 if we either do or do not trigger an alarm when ET is exactly 1/2.

I USING SYNTHESIZED IMAGES FOR BACKDOOR DETECTION

For most REDs, the defender is assumed to possess a small, clean dataset (collected independently) for detection Wang et al. (2019); Xiang et al. (2020); Guo et al. (2019); Wang et al. (2020). Although this assumption is relatively mild and feasible in most practical scenarios, it may be unnecessary if the defender is able to synthesize the images used for detection. The first trial was made by Chen et al. (2019), where, on simple datasets like MNIST Lecun et al. (1998), images for backdoor pattern reverse-engineering are synthesized by model inversion Fredrikson et al. (2015). However, images generated in such a way are not guaranteed to be visually typical of their designated classes, especially for complicated domains like ImageNet. Thus we ask the following question: Can our ET framework be generalized to involve backdoor pattern reverse-engineering using synthesized images?
Here, we consider the 20 two-class domains generated from MNIST due to this domain's simplicity. We apply RE-AP to the 20 classifiers being attacked in A6 and the 20 clean classifiers in C6, respectively. The clean images used for detection are generated using a simpler version of the model inversion method used by Chen et al. (2019). To synthesize an image for detection, we first initialize an all-zero image added with some small random positive noise. Then, we maximize (over the image values) the posterior of the designated class of the image using the classifier to be inspected, until the posterior is greater than 0.9. Examples of our synthesized images are shown in Fig. 12. Note that without the "auxiliary constraints" on images during their generation process (Chen et al. (2019)), the generated images are usually atypical of their designated classes.
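A minimal sketch of this simplified model inversion is shown below; the function name, the Adam optimizer, and the iteration cap are our own illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target, shape=(1, 28, 28),
                 lr=0.1, thresh=0.9, max_iter=2000):
    """Start from an all-zero image plus small positive noise and ascend the
    posterior of the designated class until it exceeds `thresh`. No auxiliary
    image priors are used, so the result may be atypical of the class."""
    x = (0.01 * torch.rand(shape)).requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(max_iter):
        post = F.softmax(model(x.unsqueeze(0)), dim=1)[0, target]
        if post.item() > thresh:
            break
        loss = -torch.log(post)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach().clamp(0.0, 1.0)
```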
In Fig. 13, we show the ET statistic for the 80 classes associated with the 40 binary classifiers (20 classifiers being attacked and 20 clean classifiers). Among these 80 classes, there are 60 backdoor target classes and 20 non-target classes (please see the settings in Sec. 4.1). Despite one backdoor target class evading our detection and one ET for a backdoor target class lying close to the threshold 1/2, the separation of ET for backdoor target classes and non-target classes is even better than that in the bottom left figure of Fig. 2 (where backdoor pattern reverse-engineering is performed on typical MNIST images). The possible reasons may be:
• For non-attack cases, two independently synthesized samples predicted to the same class will likely be both atypical to their predicted class. They will likely share fewer features associated with their predicted class than two independent typical samples from this class. They can also be located anywhere in the input space. Thus, they will be more likely to be "mutually not transferable".
• The synthesized samples are also far from the data manifold of classes other than the class they are predicted to. So the minimum perturbation size required to induce them to be misclassified will have a large norm, possibly larger than the norm of the backdoor pattern when there is an attack. If this is the case, the ET statistic will be exactly 1 based on Thm. 3.3.
Future works along this research line include further investigation of the phenomenon we observed, and also improvement to the model inversion techniques, such that more complicated domains can be handled.
J COMPUTATIONAL COMPLEXITY
Here, we discuss the computational complexity of Alg. 1. More generally, we consider Alg. 1 for multi-class scenarios. Let K be the number of classes, N be the number of samples per class for detection, and T be the maximum number of forward/backward propagations for backdoor pattern reverse-engineering (i.e. "solving problem (3)" in line 8 of Alg. 1). For each of the K classes, we compute the p_n^{(i)} quantity (line 12 of Alg. 1) for each of the KN samples used for detection. To compute each p_n^{(i)}, the transferable set (line 9 of Alg. 1) is updated at most (KN − 1) × τ times; and each update involves at most T forward/backward propagations. Thus, the theoretical upper bound on the number of forward/backward propagations for Alg. 1 is of the order O(K³N²Tτ), where τ is the patience parameter.
However, the actual complexity of our method in practice is much lower than the theoretical bound. First, the purpose of using a sufficiently large number of samples for detection is to reduce the variance of the estimated ET (see Apdx. G for more discussion and empirical results). Thus, for sufficiently large K, we can set N = 1 and use only K samples for detection; or even use fewer samples randomly selected from these K samples.
Second, the actual number of iterations for updating the transferable set is much smaller than (KN − 1) × τ in practice. In Apdx. H, we discussed the influence of the choice of τ on the estimated ET statistic and provided empirical results. For attack cases, p_n^{(i)} reaches 1 (and thus terminates the updating of the transferable set) very quickly (even in one or two iterations) for most samples (see (a)(b)(e)(f) of Fig. 11). For non-attack cases, convergence is also reached quickly for most samples (see (c)(d)(g)(h) of Fig. 11). In these examples, 20 images are used for estimating the ET statistic with patience τ = 8. Thus the theoretical maximum number of iterations of transferable set updating for an image is (20 − 1) × 8 = 152, which is several times larger than the actual maximum number of iterations (<25).
In Fig. 14, we show the curve of the execution time growing with the number of samples used for detection. Specifically, we apply our detector to the clean binary classifiers in C 1 and record the average execution time, with the number of images for detection varying in [2,5,10,15,20]. Execution time is measured on a dual card RTX2080-Ti (11GB) GPU. Comparing with Fig. 10 where we show the effectiveness of our detector with only a few samples for detection, the actual time required for our detector to achieve good performance is only a few minutes. Moreover, for attack cases, the total number of iterations (for all samples used for detection) required for ET estimation is generally much smaller than for clean cases (as shown in Fig. 11). Thus, the actual execution time when there is an attack should be much smaller than the time shown in Fig. 14.
K INFLUENCE OF BACKDOOR PATTERN REVERSE-ENGINEERING ON DETECTION PERFORMANCE
Like most existing REDs, our method cannot achieve 100% detection accuracy in practice. As shown in both Tab. 1 and Fig. 2, our method, though achieving generally good detection accuracy, suffers from a few false negatives and false positives.
One main reason for the false negatives is that the existing BP reverse-engineering techniques used in our detection framework cannot always recover the key features of the true BP used by the attacker⁶. In such cases, the "non-transfer probability" for a backdoor target class will be large, as if there were no attack; and ET will likely be less than 1/2. In Fig. 15a, we show an example of such a failure of BP reverse-engineering. We consider a classifier being attacked, associated with a two-class domain generated from CIFAR-100, that evaded our detection (from a blue bar less than 1/2 in the second figure of the left row in Fig. 2). For a random sample, we observed that the estimated BP is visually uncorrelated with the true BP used by the attacker. For comparison, we also show an example of a successful BP reverse-engineering in Fig. 15b, where the estimated pattern contains some key features of the true BP used by the attacker; the ET statistic for this class is larger than 1/2 and the attack is successfully detected.
We notice that most false positives happen when applying RE-PR (i.e. the reverse-engineering method of Wang et al. (2019) for patch replacement BPs) to clean classifiers. As introduced in Sec. D.6, RE-PR searches for a small image patch that induces high group misclassification to a putative target class. But it is possible that, for some domains and some classes, there are common key features associated with the class that are easy to reverse-engineer on a small spatial support/mask. Such features, if reverse-engineered on one sample, will likely also induce another sample to be misclassified. Although this hypothesis requires further validation in the future, we have shown empirically that such false detections have a low frequency, given that our experiments are performed on a large number of different two-class domains generated from six benchmark datasets.
L ADDITIONAL EXPERIMENTS FOR MULTI-CLASS SCENARIOS WITH ARBITRARY NUMBER OF ATTACKS
In Sec. 4.4, we showed the performance of our ET framework against BAs for multi-class scenarios with arbitrary number of attacks. We considered the original domains of CIFAR-10, CIFAR-100, and STL-10. For each domain, we showed that the maximum ET statistic over all classes is larger than 1 2 if there is an attack, and less than 1 2 if there is no attack. Here, we further investigate the capability of our ET framework on more multi-class domains. In particular, we generate 10 five-class domains from CIFAR-10, with the five classes for each domain randomly selected from the original ten classes of CIFAR-10. For each domain, we create an attack instance with one attack, an attack instance with two attacks, and a clean instance. The protocols for attack creation, classifier training, and defense configurations are the same as in Sec. 4.4. Moreover, we compare our ET (with RE-AP) with the existing RED proposed by Xiang et al. (2020) (with its original protocol including a confidence threshold 0.05 (indicating a confidence 0.95)), as well as the same RED proposed by Xiang et al. (2020) but with the anomaly detection method changed to the one based on median absolute deviation (MAD) proposed by Wang et al. (2019) (with the same threshold 2 (also indicating a confidence 0.95) used by Wang et al. (2019)). For simplicity, we name these two methods as "RED-AP (original)" and "RED-AP (MAD)" respectively. All three methods use 3 clean images per class for detection.
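For reference, a minimal sketch of the MAD-based anomaly scoring behind "RED-AP (MAD)" is given below, assuming the usual 1.4826 consistency constant and the detection threshold 2 used by Wang et al. (2019); the function name is illustrative.

```python
import numpy as np

def mad_anomaly_scores(stats):
    """Score each per-class statistic by its absolute deviation from the
    median, normalized by 1.4826 * MAD; scores above 2 would flag a class.
    As discussed below, the median itself is biased when several of the few
    classes are backdoor targets."""
    stats = np.asarray(stats, dtype=float)
    med = np.median(stats)
    mad = 1.4826 * np.median(np.abs(stats - med))
    return np.abs(stats - med) / max(mad, 1e-12)

print(mad_anomaly_scores([0.10, 0.12, 0.11, 0.95]))  # last class stands out
```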
In Tab. 10, we show the detection accuracy of our ET compared with RED-AP (original) and RED-AP (MAD). Our ET detects all attacks with no false detections. For each of the three ensembles with ten classifiers, the histogram of the maximum ET statistic over all five classes is shown in Fig. 16. On the other hand, RED-AP (original) achieved good accuracy in detecting classifiers with 1 attack, with a very low false detection rate, but it fails to detect any classifiers with 2 attacks. This is because the anomaly detection setting of RED-AP (original) relies on the 1-attack assumption to estimate a null distribution for non-backdoor class pairs. For RED-AP (MAD), there is also a clear gap in performance compared with our ET. This is because the median of the five statistics is largely biased by statistics associated with backdoor target classes.
[·]_c is a clipping function Chen et al. (2017); Zhong et al. (2020); Xiang et al. (2020); or 2) a patch u inserted using an image-wide binary mask m via x̃ = (1 − m) ⊙ x + m ⊙ u, (2018); Chen et al. (2018); Du et al. (2020); Xiang et al. (2019); (NC) proposed by Wang et al. (2019), which reverse-engineers BPs embedded by Eq. (2) and uses the l1 norm of the estimated mask as the detection statistic. Guo et al. (2019) adds constraints to NC's BP reverse-engineering problem by considering various properties of BPs. Liu et al. (2019) proposed a novel objective function for BP reverse-engineering, leveraging the abnormal internal neuron activations caused by the backdoor mapping. Chen et al. (2019) constructs clean images for detection using model inversion. Dong et al. (2021) queries the classifier for BP reverse-engineering to address the "black-box" scenario. Wang et al. (2020) proposes a detection statistic based on the similarity between universal and sample-wise BP estimation.

Limitation of Existing REDs. The anomaly detection of REDs heavily relies on the assumption that there is a relatively large number of non-target classes, thus providing a sufficient number of statistics to inform estimation of a null distribution. But this assumption does not hold for domains with only two classes. For example, Xiang et al. (2020) and Wang et al. (2019) exploit O(K²) and O(K) statistics in estimating a null respectively, where K is the number of classes. Both of these methods are unsuitable for 2-class problems (K = 2).
For each sample x_n^{(i)}, we solve problem (3) repeatedly with random initialization. For each practical solution, we embed it into all elements in D_i \ x_n^{(i)} and find those that are misclassified; these samples are included into the subset T_ε(x_n^{(i)}).

Figure 1: Part of the BPs used in our experiments (others in Apdx. D.3) and images with these BPs embedded. (a)-(d) are additive perturbation BPs and (e)-(f) are patch replacement BPs. BP in (a) is amplified for visualization. Spatial locations for BPs in (b)-(f) are randomly selected for each attack.
Our experiments involve six common benchmark image datasets with a variety of image sizes and color scales: CIFAR-10, CIFAR-100 Krizhevsky (2012), STL-10 Coates et al. (2011), TinyImageNet, FMNIST Xiao et al. (2017), MNIST Lecun et al. (1998). Details of these datasets are in Apdx. D.1.

4.1 MAIN EXPERIMENT: 2-CLASS, MULTI-ATTACK BA DETECTION USING ET STATISTIC

Generating 2-class domains. From CIFAR-10, we generate 45 different 2-class domains (for all 45 unordered class pairs of CIFAR-10). From each of the other five datasets, we generate 20 different random 2-class domains. More details are provided in Apdx. D.2.
(denoted by L2); 2) the l1 norm of the estimated mask used by Wang et al. (2019) (denoted by L1); and 3) the (cosine) similarity between the BP estimated group-wise and the BP estimated for each sample in terms of the classifier's internal layer representation Wang et al. (2020) (denoted by CS).
Figure 2: Comparison between our ET statistic (for both RE-AP and RE-PR configurations) and statistic types used by existing REDs (L1, L2, and CS). Only for ET, there is a common range for all 2-class domains for choosing a threshold to distinguish BA target classes (blue) from non-target classes (orange). Such common range also contains the constant threshold 1/2 (red dashed line).

Figure 3: ROC curves for ET, L1, L2, and CS in distinguishing BA target classes from non-target classes for the large variety of classification domains and attack configurations considered in Fig. 2.

Figure 4: Histogram of l2 norm ratio between the pair-wise additive perturbation and the maximum of the two sample-wise perturbations for each random image pair for clean classifiers.
Theorem B.1. For any c, c′ ∈ R^d, c′ ∈ T(c) if and only if ||c′ − c/2||_2 ≤ ||c/2||_2.

... is strictly in the interval [1/4, 1/2] for G(·) with range in [0, 1]. The upper bound of ET when d = 1 is 1/2, which is achieved if and only if G(0) = 0 or G(0) = 1.

Theorem B.2. For arbitrary d ∈ Z^+ and continuous distribution G on R^d ...
Figure 5: BPs used in our experiments that are not shown in Fig. 1 of the main paper due to space limitations, and images with these BPs embedded. BPs in (a) and (b) are amplified for visualization.
Figure 6: Average CS statistic versus the number of classes in the domain.
Figure 7: Examples of backdoor training images for clean-label BAs. These images are originally from the BA target class, perturbed (in human-imperceptible fashion) to be misclassified by a surrogate classifier, embedded with the BP, and still labeled to the target class.
Figure 8: Effectiveness of our defense, with the constant ET threshold 1/2, against the clean-label BA proposed by Turner et al. (2019). Classifiers being attacked have a maximum ET (over the two classes) greater than 1/2; clean classifiers have a maximum ET (over the two classes) less than 1/2.
Figure 9: Histogram of l1 norm ratio between the pair-wise common mask and the maximum of the two sample-wise masks for each random image pair for clean classifiers.

Figure 10: Accuracy of detection inference on (a) the ensemble of attack instances A1 and (b) the ensemble of clean instances C1, when the number of images used for detection varies in [2, 5, 10, 15].
Figure 11: Example growing curves of p_n^{(i)} (with patience τ = 8) for (a) an attack instance in A1 (RE-AP), (b) an attack instance in A7 (RE-PR), (c) a clean instance in C1 (RE-AP), (d) a clean instance in C1 (RE-PR), (e) an attack instance in A2 (RE-AP), (f) an attack instance in A8 (RE-PR), (g) a clean instance in C2 (RE-AP), (h) a clean instance in C2 (RE-PR). In each figure, there are 20 curves, each corresponding to a clean sample used for detection. ET is the average final p_n^{(i)}.

Figure 12: Images synthesized using a simpler version of the model inversion method used by Chen et al. (2019).
Figure 13: Histogram of ET statistics for classifiers in A6 and C6, when the images for backdoor pattern reverse-engineering are synthesized.
Figure 14: Execution time versus the number of samples for detection.
Figure 15: Example of (a) a failed BP reverse-engineering and (b) a successful BP reverse-engineering. For both examples, the estimated BP is on the top, while the true BP used by the attacker is at the bottom.
Table 10: Detection accuracy of ET with RE-AP on the 10 five-class domains with 1 attack, 2 attacks, and no attack, compared with RED-AP (original) and RED-AP (MAD).

Figure 16: Histogram of the maximum ET over the five classes for classifier ensembles with (a) 1 attack, (b) 2 attacks, and (c) no attack.
15: if ET_{i,ε} > 1/2 then
16:   attacked = True; BA_targets ← BA_targets ∪ {t}
17: Output: attacked; BA_targets

Theorem 3.2. For any class i ∈ C with continuous sample distribution ...
Table 2: Maximum ET statistic over all classes for classifiers with one, two, and three attacks respectively, and a clean classifier, for CIFAR-10, CIFAR-100, and STL-10.

            1 attack   2 attacks   3 attacks   clean
CIFAR-10    0.91       0.92        0.87        0
CIFAR-100   0.95       0.99        0.99        0.27
STL-10      0.65       0.83        0.77        4.3e-3
A. Saha, A. Subramanya, and H. Pirsiavash. Hidden trigger backdoor attacks. In AAAI, 2020.
N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (SP), pp. 39-57, 2017.
B. Chen, W. Carvalho, N. Baracaldo, H. Ludwig, B. Edwards, T. Lee, I. Molloy, and B. Srivastava. Detecting backdoor attacks on deep neural networks by activation clustering. http://arxiv.org/abs/1811.03728, 2018.
H. Chen, C. Fu, J. Zhao, and F. Koushanfar. DeepInspect: A black-box trojan detection and mitigation framework for deep neural networks. In IJCAI, pp. 4658-4664, 2019.
X. Chen, C. Liu, B. Li, K. Lu, and D. Song. Targeted backdoor attacks on deep learning systems using data poisoning. https://arxiv.org/abs/1712.05526v1, 2017.
E. Chou, F. Tramèr, G. Pellegrino, and D. Boneh. SentiNet: Detecting physical attacks against deep learning systems, 2018. URL http://arxiv.org/abs/1812.00292.
A. Coates, H. Lee, and A. Y. Ng. An analysis of single layer networks in unsupervised feature learning. In AISTATS, 2011.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Y. Dong, X. Yang, Z. Deng, T. Pang, Z. Xiao, H. Su, and J. Zhu. Black-box detection of backdoor attacks with limited information and data. In ICCV, 2021.
M. Du, R. Jia, and D. Song. Robust anomaly detection and backdoor attack detection via differential privacy. In ICLR, 2020.
M. Fredrikson, S. Jha, and T. Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In CCS, 2015.
Y. Gao, C. Xu, D. Wang, S. Chen, D. C. Ranasinghe, and S. Nepal. STRIP: A defence against trojan attacks on deep neural networks. In ACSAC, 2019a.
Z. Gao, A. Feng, X. Song, and X. Wu. Target-dependent sentiment classification with BERT. IEEE Access, 7:154290-154299, 2019b.
T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg. BadNets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7:47230-47244, 2019.
W. Guo, L. Wang, X. Xing, M. Du, and D. Song. TABOR: A highly accurate approach to inspecting and restoring Trojan backdoors in AI systems. https://arxiv.org/abs/1908.01763, 2019.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
K. Huang, Y. Li, B. Wu, Z. Qin, and K. Ren. Backdoor defense via decoupling the training process. In ICLR, 2022.
S. Kolouri, A. Saha, H. Pirsiavash, and H. Hoffmann. Universal litmus patterns: Revealing backdoor attacks in CNNs. In CVPR, pp. 298-307, 2020.
A. Krizhevsky. Learning multiple layers of features from tiny images. University of Toronto, 2012.
Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
R. Li, W. Zhang, H.-I. Suk, L. Wang, J. Li, D. Shen, and S. Ji. Deep learning based imaging data completion for improved brain disease diagnosis. In MICCAI, pp. 305-312, 2014.
Y. Li, B. Wu, Y. Jiang, Z. Li, and S.-T. Xia. Backdoor learning: A survey, 2020. URL https://arxiv.org/pdf/2004.04692.pdf.
Y. Li, X. Lyu, N. Koren, L. Lyu, B. Li, and X. Ma. Neural attention distillation: Erasing backdoor triggers from deep neural networks. In ICLR, 2021.
Y. Li, H. Zhong, X. Ma, Y. Jiang, and S.-T. Xia. Few-shot backdoor attacks on visual object tracking. In ICLR, 2022.
K. Liu, B. Dolan-Gavitt, and S. Garg. Fine-pruning: Defending against backdoor attacks on deep neural networks. In RAID, 2018a.
Y. Liu, S. Ma, Y. Aafer, W.-C. Lee, and J. Zhai. Trojaning attack on neural networks. In NDSS, 2018b.
Y. Liu, W.-C. Lee, G. Tao, S. Ma, Y. Aafer, and X. Zhang. ABS: Scanning neural networks for back-doors by artificial brain stimulation. In CCS, 2019.
S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In CVPR, 2016.
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
D. J. Miller, Z. Xiang, and G. Kesidis. Adversarial learning in statistical classification: A comprehensive review of defenses against attacks. Proceedings of the IEEE, 108:402-433, 2020.
S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Universal adversarial perturbations. In CVPR, 2017.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2014.
B. Tran, J. Li, and A. Madry. Spectral signatures in backdoor attacks. In NIPS, 2018.
A. Turner, D. Tsipras, and A. Madry. Clean-label backdoor attacks. https://people.csail.mit.edu/madry/lab/cleanlabel.pdf, 2019.
B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Y. Zhao. Neural Cleanse: Identifying and mitigating backdoor attacks in neural networks. In IEEE Symposium on Security and Privacy, 2019.
R. Wang, G. Zhang, S. Liu, P.-Y. Chen, J. Xiong, and M. Wang. Practical detection of trojan neural networks: Data-limited and data-free cases. In ECCV, 2020.
Z. Xiang, D. J. Miller, and G. Kesidis. A benchmark study of backdoor data poisoning defenses for deep neural network classifiers and a novel defense. In IEEE MLSP, 2019.
Z. Xiang, D. J. Miller, and G. Kesidis. Detection of backdoors in trained classifiers without access to the training set. IEEE Transactions on Neural Networks and Learning Systems, pp. 1-15, 2020.
Z. Xiang, D. J. Miller, S. Chen, X. Li, and G. Kesidis. A backdoor attack against 3D point cloud classifiers. In ICCV, pp. 7597-7607, 2021a.
Z. Xiang, D. J. Miller, and G. Kesidis. Reverse engineering imperceptible backdoor attacks on deep neural networks for detection and training set cleansing. Computers and Security, 106, 2021b.
Z. Xiang, D. J. Miller, H. Wang, and G. Kesidis. Detecting scene-plausible perceptible backdoors in trained DNNs without access to the training set. Neural Computation, 33(5):1329-1371, 2021c.
H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms, 2017. URL http://arxiv.org/abs/1708.07747.
H. Xu, Y. Ma, H.-C. Liu, D. Deb, H. Liu, J.-L. Tang, and A. K. Jain. Adversarial attacks and defenses in images, graphs and text: A review. International Journal of Automation and Computing, 17:151-178, 2020.
X. Xu, Q. Wang, H. Li, N. Borisov, C. A. Gunter, and B. Li. Detecting AI trojans using meta neural analysis. In IEEE Symposium on Security and Privacy, 2021.
T. Zhai, Y. Li, Z. Zhang, B. Wu, Y. Jiang, and S.-T. Xia. Backdoor attack against speaker verification. In ICASSP, 2021.
H. Zhong, C. Liao, A. Squicciarini, S. Zhu, and D. J. Miller. Backdoor embedding in convolutional neural network models via invisible perturbation. In CODASPY, 2020.

A PROOF OF THEOREMS IN THE MAIN PAPER

A.1 PROOF OF THEOREM 3.1
Proof. Based on Thm. B.1, for latent space dimension d = 1 and two scalar i.i.d. random samples C and C′ with continuous distribution G, C′ ∈ T(C) if and only if |C′ − C/2| ≤ |C/2|. Thus, by Def. B.2, we have
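For intuition, here is a minimal Monte Carlo sketch of the 1-D membership condition above; the standard normal for G is an illustrative assumption (any continuous distribution satisfies the hypothesis), and the variable names are ours:

```python
import numpy as np

# Empirically estimate P(C' in T(C)) for i.i.d. scalar samples, where
# membership is |C' - C/2| <= |C/2| (the d = 1 condition from the proof).
rng = np.random.default_rng(0)
C = rng.normal(size=100_000)        # illustrative choice of continuous G
C_prime = rng.normal(size=100_000)
in_T = np.abs(C_prime - C / 2) <= np.abs(C / 2)
print(f"empirical P(C' in T(C)) = {in_T.mean():.3f}")
```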
Table 3: Details of the CIFAR-10, CIFAR-100, STL-10, TinyImageNet, FMNIST, and MNIST datasets.

| Dataset | Color | Image size | # Classes | # Images/class | # Training images/class |
|---|---|---|---|---|---|
| CIFAR-10 | yes | 32 × 32 | 10 | 6000 | 5000 |
| CIFAR-100 | yes | 32 × 32 | 100 | 600 | 500 |
| STL-10 | yes | 96 × 96 | 10 | 1300 | 500 |
| TinyImageNet | yes | 64 × 64 | 200 | 600 | 500 |
| FMNIST | no | 28 × 28 | 10 | 7000 | 6000 |
| MNIST | no | 28 × 28 | 10 | 7000 | 6000 |
C.3 DETECTION PROCEDURE FOR PATCH REPLACEMENT BP

The same procedure, i.e. Alg. 1 in the main paper, can be used for detecting BAs with patch replacement BP, with only the following two modifications. First, line 8 of Alg. 1 is replaced by: "Obtain an empirical solution ŝ(x^(i)) …
Table 4: Shorthand "code" for each ensemble of attack instances, based on both the BP being used and the dataset from which the associated 2-class domains are generated. "n/a" represents "not applicable".

| Dataset | Additive perturbation BP | Patch replacement BP |
|---|---|---|
| CIFAR-10 | A1 | A7 |
| CIFAR-100 | A2 | A8 |
| STL-10 | A3 | A9 |
| TinyImageNet | A4 | A10 |
| FMNIST | A5 | n/a |
| MNIST | A6 | n/a |
Table 5: Summary of attack configurations for instances in each of ensembles A1-A10. For each ensemble, we show the number of attacks for each instance in the ensemble. We also show, for each ensemble, the number of samples (embedded with the BP and labeled to the target class) used to poison the training set for each attack associated with the ensemble, as well as the corresponding poisoning rate. The poisoning rate is defined as the number of samples inserted into the training set by the attacker divided by the total number of training samples from the target class after poisoning.

| | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10 |
|---|---|---|---|---|---|---|---|---|---|---|
| # Attacks | 2 | 2 | 2 | 1 | 1 | 1 | 2 | 2 | 1 | 2 |
| # Poison samples | 500 | 150 | 1000 | 1500 | 1000 | 1000 | 500 | 50 | 750 | 500 |
| Poisoning rate | 9.1% | 23.0% | 28.6% | 23.0% | 14.3% | 14.3% | 9.1% | 9.1% | 23.1% | 9.1% |
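As a worked check of this definition, a small Python sketch using the A1/CIFAR-10 numbers from Tables 3 and 5 (pairing A1 with a single CIFAR-10 target class is our reading of the tables, not stated explicitly here):

```python
# Poisoning rate = inserted samples / (clean target-class samples + inserted samples).
poison = 500                 # poison samples per attack in ensemble A1 (Table 5)
clean_target_class = 5000    # CIFAR-10 training images per class (Table 3)
rate = poison / (clean_target_class + poison)
print(f"{rate:.1%}")         # -> 9.1%, matching the A1 entry above
```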
Table 7: Average clean test accuracy (ACC, in percentage) over all classifiers being attacked, and average and minimum attack success rate (ASR, in percentage) over all attacks, for each of ensembles A1-A10 of attack instances.

| | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10 |
|---|---|---|---|---|---|---|---|---|---|---|
| ASR (average) | 95.7±3.6 | 91.6±4.5 | 98.8±1.0 | 93.9±2.8 | 96.8±3.3 | 99.8±0.4 | 99.2±1.1 | 96.4±4.0 | 97.9±1.4 | 94.5±4.6 |
| ASR (minimum) | 82.3 | 80.0 | 95.9 | 88.0 | 87.6 | 98.4 | 92.5 | 82.0 | 95.1 | 83.2 |
| ACC (average) | 94.6±4.2 | 90.7±4.5 | 79.6±2.9 | 77.0±3.2 | 99.0±1.1 | 99.8±0.1 | 96.7±2.5 | 93.2±4.1 | 78.7±3.4 | 77.2±3.5 |
Table 8: Average clean test accuracy (ACC, in percentage) over the classifiers for the clean instances in each of ensembles C1-C6.

| | C1 | C2 | C3 | C4 | C5 | C6 |
|---|---|---|---|---|---|---|
| ACC (average) | 95.4±3.6 | 93.3±3.3 | 80.5±3.2 | 78.2±3.0 | 99.3±1.0 | 99.8±0.1 |
D.7 LIMITATIONS OF THE COSINE SIMILARITY STATISTIC IN BA DETECTION
Table 9: Attack success rate (ASR) and clean test accuracy (ACC) for the classifiers being attacked (with one, two, and three attacks/target classes) for CIFAR-10, CIFAR-100, and STL-10, respectively; and ACC for the clean classifiers trained for the three domains, respectively.

| | CIFAR-10 | CIFAR-100 | STL-10 |
|---|---|---|---|
| one attack | ASR: 97.0; ACC: 93.7 | ASR: 97.7; ACC: 71.5 | ASR: 95.1; ACC: 79.7 |
| two attacks | ASR: 96.0, 93.7; ACC: 92.5 | ASR: 89.6, 95.3; ACC: 71.9 | ASR: 99.6, 96.0; ACC: 80.8 |
| three attacks | ASR: 94.5, 80.2, 97.9; ACC: 92.8 | ASR: 95.7, 74.7, 97.4; ACC: 70.5 | ASR: 78.1, 99.0, 95.1; ACC: 79.1 |
| no attack | ACC: 92.5 | ACC: 70.4 | ACC: 78.8 |
(Figure panels: (a) bird, (b) deer, (c) frog, (d) horse, (e) ship, (f) truck.)
F EXPERIMENTAL VERIFICATION OF PROPERTY 3.1 FOR PATCH REPLACEMENT BPS

We verify Property 3.1 for patch replacement BPs embedded by Eq. (2). Similar to Sec. 4.3 of the main paper, we show the maximum ET (over the two classes) for all these classifiers. Using ET and the constant detection threshold 1/2, we perfectly detect all clean-label BAs with no false detections.
A clean-label BA with a different strategy (Turner et al., 2019) is also detectable by our method (Apdx. E).
Images from these two datasets commonly have a large area of "black" background. Positively perturbing a few background pixels, which is a common practice to achieve a successful BA (Chen et al., 2018), is equivalent to replacing these pixels with a gray patch using Eq. (2).
One can construct very extreme cases such that P(||v*(X)||_2 = d) > 0 for some constant d > 0. In other words, there is a set of samples from class i with non-negligible probability that are equidistant from the decision boundary. However, the probability of these cases is zero for practical domains and highly non-linear classifiers.
Possible reasons can be related to the design of the BP reverse-engineering algorithm, the attack configurations (which, e.g., cause a low attack success rate), etc. |
263,334,074 | LEGO-PROVER: NEURAL THEOREM PROVING WITH GROWING LIBRARIES | Despite the success of large language models (LLMs), the task of theorem proving still remains one of the hardest reasoning tasks that is far from being fully solved. Prior methods using language models have demonstrated promising results, but they still struggle to prove even middle school level theorems. One common limitation of these methods is that they assume a fixed theorem library during the whole theorem proving process. However, as we all know, creating new useful theorems or even new theories is not only helpful but crucial and necessary for advancing mathematics and proving harder and deeper results. In this work, we present LEGO-Prover, which employs a growing skill library containing verified lemmas as skills to augment the capability of LLMs used in theorem proving. By constructing the proof modularly, LEGO-Prover enables LLMs to utilize existing skills retrieved from the library and to create new skills during the proving process. These skills are further evolved (by prompting an LLM) to enrich the library on another scale. Modular and reusable skills are constantly added to the library to enable tackling increasingly intricate mathematical problems. Moreover, the learned library further bridges the gap between human proofs and formal proofs by making it easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 50.0%). During the proving process, LEGO-Prover also manages to generate over 20,000 skills (theorems/lemmas) and adds them to the growing library. Our ablation study indicates that these newly added skills are indeed helpful for proving theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We also release our code and all the generated skills. | [
231879554,
259370805,
52967399
] | LEGO-PROVER: NEURAL THEOREM PROVING WITH GROWING LIBRARIES
27 Oct 2023
Haiming Wang
Sun Yat-sen University
Huajian Xin
Sun Yat-sen University
Chuanyang Zheng [email protected]
The Chinese University of Hong Kong
Lin Li
Zhengying Liu [email protected]
Huawei Noah's Ark Lab
Qingxing Cao
Sun Yat-sen University
Yinya Huang [email protected]
City University
Jing Xiong [email protected]
Sun Yat-sen University
Han Shi [email protected]
Huawei Noah's Ark Lab
Enze Xie [email protected]
Huawei Noah's Ark Lab
Jian Yin
Sun Yat-sen University
Zhenguo Li [email protected]
Huawei Noah's Ark Lab
Heng Liao [email protected]
Xiaodan Liang [email protected]
Sun Yat-sen University
Hong Kong
Huawei Hisilicon
LEGO-PROVER: NEURAL THEOREM PROVING WITH GROWING LIBRARIES
27 Oct 2023. arXiv:2310.00656v3 [cs.AI]
Despite the success of large language models (LLMs), the task of theorem proving still remains one of the hardest reasoning tasks that is far from being fully solved. Prior methods using language models have demonstrated promising results, but they still struggle to prove even middle school level theorems. One common limitation of these methods is that they assume a fixed theorem library during the whole theorem proving process. However, as we all know, creating new useful theorems or even new theories is not only helpful but crucial and necessary for advancing mathematics and proving harder and deeper results. In this work, we present LEGO-Prover, which employs a growing skill library containing verified lemmas as skills to augment the capability of LLMs used in theorem proving. By constructing the proof modularly, LEGO-Prover enables LLMs to utilize existing skills retrieved from the library and to create new skills during the proving process. These skills are further evolved (by prompting an LLM) to enrich the library on another scale. Modular and reusable skills are constantly added to the library to enable tackling increasingly intricate mathematical problems. Moreover, the learned library further bridges the gap between human proofs and formal proofs by making it easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 50.0%). During the proving process, LEGO-Prover also manages to generate over 20,000 skills (theorems/lemmas) and adds them to the growing library. Our ablation study indicates that these newly added skills are indeed helpful for proving theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We also release our code and all the generated skills.
INTRODUCTION
The automation of formal reasoning tasks, such as theorem proving and mathematical proof formalization, represents a formidable challenge and an active area of research within the domain of artificial intelligence (Polu & Sutskever, 2020a; Han et al., 2022; Jiang et al., 2022a; First et al., 2023; Bansal et al., 2019; Lample et al., 2022; Jiang et al., 2022b; 2021; Zhao et al., 2023; Yang et al., 2023; Wang et al., 2023b; Liu et al., 2023). The process of formalizing mathematical proofs typically relies on human experts to transcribe intricate mathematical concepts into structured formal languages verifiable by interactive theorem provers like Lean (de Moura et al., 2015) or Isabelle (Paulson, 1994). This process, while robust, is often labor-intensive and demands a high level of expertise. In the past few years, large language models (LLMs) have emerged as a promising avenue, with their capacity to process and produce human-like text, opening doors to the idea of LLM-based neural
In the past few years, large language models (LLMs) have emerged as a promising avenue, with their capacity to process and produce human-like text, opening doors to the idea of LLM-based neural LEGO Prover (* lemma 1. Retrieved from , proves the inequality of arithmetic and geometric means) lemma : fixes x y :: real assumes "x >= 0" "y >= 0" shows "x^2 + y^2 >= 2*x*y" using assms by <ATP> (* lemma 2. Synthesized by , proves a special form of the AM-GM inequality required in the proof of the target theorem.*) lemma :
fixes x y :: real assumes "x > 0" "y > 0"
shows "x/y + y/x >= 2" proof -... have "x/y + y/x >= 2 * sqrt(x/y) * sqrt(y/x)" using [of "sqrt(x/y)" "sqrt(y/x)"] c0 c1 by <ATP> ... qed theorem amc12a_2003_p24:
fixes a b::real assumes "b≤a" "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b ≤0" (is "?L ≤ _") proof -... also have "... ≤ 0" using ‹0 < x› ‹0 < y› by <ATP> finally show ?thesis .qed theorem amc12a_2003_p24: fixes a b::real assumes "b≤a" "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b ≤0" (is "?L ≤ _") proof -... (* Prove a special form of the AM-GM inequality as an intermediate result that is equivalent to the lemma on the right *) also have "... ≤ 0" proofdefine a1 a2 where "a1=sqrt(x/y)" "a2=sqrt(y/x)" have c2: "a1 \<ge> 0" using c0 c1 a1_a2_def by <ATP> have c3: "a2 \<ge> 0" using c0 c1 a1_a2_def by <ATP> then have "a1^2 + a2^2 ≥ 2 * a1 * a2" using c2 c3 by <ATP> then have "sqrt(x/y)^2 + sqrt(y/x)^2 ≥ 2 * sqrt(x/y) *sqrt(y/x)" using a1_a2_def by force then have "x/y + y/x ≥ 2 * sqrt(x/y * y/x)" by <ATP> then have "x/y + y/x ≥ 2" using c0 c1 by auto LEGO-Prover contains the prover that proves the theorem modularly using the retrieved skill and the evolver that transforms the skill for reusability and generalizability.These two components are bridged by the growing skill library.
theorem proving.Specifically, two predominant paradigms have been extensively explored in neural theorem proving.One stream of work involves step-by-step proof generation (Polu & Sutskever, 2020a;Han et al., 2022;Polu et al., 2022;Lample et al., 2022;Wang et al., 2023b;Yang et al., 2023;Jiang et al., 2022a), where fine-tuned models provide single-step proof actions coupled with search algorithms to find the complete proofs.Another paradigm leverages the coding capabilities of LLM to construct entire proofs in a single decoding process (Jiang et al., 2022b;Zhao et al., 2023;First et al., 2023).As shown in Fig. 1(a) left, these approaches share common proving strategies that synthesize the proof sequentially, with each step building upon the previous proof step, and stocking all the proof code into one large proof block.We denoted these approaches as plain provers since they generate the whole proof directly.Despite their promising results, plain provers still have several shortcomings.On one hand, plain provers attempt to prove theorems using static LLMs independently, while different problems can usually provide some insights into others.In other words, different problems may share the same lemmas while plain provers cannot utilize the proved lemmas once again even though it has been proved.On the other hand, even though plain provers can generate short-chain proofs with the help of advanced large language models like ChatGPT or GPT-4 (OpenAI, 2023) , it usually fails when it comes to long-chain proofs due to the reasoning difficulty.
To overcome the shortcomings of plain provers, and inspired by the modularity of LEGO building blocks, we present LEGO-Prover, a novel approach designed to prove theorems in a block-by-block manner backed by a growing skill library. As shown in Fig. 1(a) right, LEGO-Prover tackles the problem of proving a theorem by first proving sub-goal lemmas and then finalizing the problem using these lemmas. These lemmas can be retrieved from the skill library or newly constructed during the proving process. Specifically, Fig. 1(b) shows the overall process of LEGO-Prover, containing a prover and an evolver, which are bridged by the growing skill library. The prover takes the problem's formal statement as input and retrieves skills to prompt the LLM to generate the modular proof, with the generated lemmas accumulated into the skill library. However, lemmas created by the prover are often problem-specific, with low reusability. Thus, LEGO-Prover incorporates an evolver that transforms the skills in the library for better generality, reusability, and complexity. The evolved new skills are also verified and added back to the skill library.
We conduct extensive experiments on the popular theorem-proving dataset miniF2F (Zheng et al., 2021) to validate the effectiveness of our proposed approach. LEGO-Prover significantly outperforms previous approaches, achieving pass rates of 57.0% and 50.0% on the miniF2F valid and test sets, respectively, a 6.75% absolute improvement on average over the previous state-of-the-art methods. In addition, our case study reveals that LLMs prove theorems modularly, akin to LEGO block assembly, utilizing a retrieved skill either by directly copying it or by using it as a reference to construct the proof. Moreover, the learned skill library contains 22532 skills, encompassing many useful high-level lemmas broadly applicable to various problems, as shown in our case study and ablation study.
RELATED WORKS
Machine learning for formal mathematics. Modern formal mathematics environments often center around Interactive Theorem Provers (ITPs) like Lean (de Moura et al., 2015), Isabelle (Paulson, 1994), Metamath (Megill & Wheeler, 2019), and Coq (Barras et al., 1997). These ITPs often include specific formal languages, accompanying formal verifiers, and automated provers like Sledgehammer. ITPs provide machine-human interactive interfaces, which give verification results during formal proof construction for specific theorems, so that human coders can correct errors or continue to fill gaps in proofs under the guidance of error messages and local proof states, respectively.
Research leveraging machine learning techniques atop these formal mathematics environments generally falls into two predominant paradigms. The first focuses on proof search strategies and premise selection, epitomized by GPT-f (Polu & Sutskever, 2020a), where a language model advises single-step actions based on the current proving state, and a tree search finds a sequence of correct steps using the actions given by the language model. The follow-up works PACT (Han et al., 2022) and Expert Iteration (Polu et al., 2022) incorporate supplemental pre-training tasks like theorem naming to enhance the policy model's reasoning ability. HTPS (Lample et al., 2022) applies Monte-Carlo tree search coupled with online training to optimize the exploration of the proof space. DT-Solver (Wang et al., 2023b) enhances search efficiency by dynamically adjusting search budgets to accommodate varying levels of state complexity. Thor (Jiang et al., 2022a) blends traditional Automated Theorem Provers (ATPs) with neural policy models to prove theorems in a neural-symbolic manner. Magnushammer (Mikuła et al., 2023) augments Thor's performance by integrating premise selection, thereby boosting the performance of rule-based ATPs.
Autoformalization. Our LEGO-Prover aligns closely with the second paradigm in machine learning for formal mathematics, which leverages the capabilities of large language models (LLMs) for the formalization of mathematical proofs. Several notable works have paved the way in this domain. For instance, Wang et al. (2018) delved into both supervised and unsupervised translation techniques for autoformalization tasks. Wu et al. (2022) made the first attempt at employing a large language model to translate natural-language mathematical problems into formal theorem statements. Building on this, Draft, Sketch, and Prove (Jiang et al., 2022b) develops a three-step approach that aims to fully formalize proofs, using natural language as guidance. First et al. (2023) goes a step further by generating complete proofs in a single pass and introducing a proof-repair model to enhance theorem-proving capabilities. Zhao et al. (2023) advances Jiang et al. (2022b) by incorporating cross-verified informal proofs to better inform the generation of formal proof sketches. Despite their contributions, none of the aforementioned methods establishes a learning paradigm that incrementally formalizes increasingly complex problems via a growing skill library, a gap that our work seeks to fill.
Skill-based agents. LEGO-Prover is also related to trending AI agents powered by large language models like GPT-3.5 and GPT-4 (Shen et al., 2023; Park et al., 2023; Wang et al., 2023c; Zahedi & Kambhampati, 2021). These AI agents are characterized by their ability to perform complex tasks through a combination of task planning, logical reasoning, and skill accumulation. For example, Voyager (Wang et al., 2023a) creates an AI agent capable of autonomously playing Minecraft; it has a dynamically growing skill library that empowers the in-game character to tackle increasingly intricate tasks. Similarly, Cai et al. (2023) showcases the ability to generate reusable Python tools and documentation, which can then be leveraged by weaker GPT models, increasing their capability.
METHOD
In this section, we introduce the detailed implementation of our proposed LEGO-Prover. Following the setting of Jiang et al. (2022b), we assume that each theorem is equipped with an informal statement, a human-written informal proof, and a formal statement defining the problem. As illustrated in Fig. 2,
LEGO-Prover consists of two main components: the prover and the evolver. The prover decomposes the problem into possible sub-goal lemmas and proves the problem in a block-by-block style, aided by the skill library. The evolver refines skills in the skill library for enhanced diversity, generality, and reusability, and also resolves the decomposed sub-goals from the prover to create new skills. These components are linked via a skill library housing lemmas and requests. In the following sections, we introduce in detail the skill library, the prover, and the evolver.
SKILL LIBRARY
The skill library contains various vector stores that are optimized for retrieval. Every vector store maintains its data in pairs consisting of documents and their corresponding embeddings, encoded with an embedding language model. Upon receipt of a query, the vector store employs the embedding model to encode the query and leverages the k-NN algorithm to retrieve relevant documents stored within. The skill library used in LEGO-Prover comprises three distinct vector stores. 1) The lemma vector store contains the Isabelle-verified lemmas, encompassing both each lemma's statement and its proof. This is the core of the skill library, and it facilitates the perpetual enhancement of LLMs' capabilities in constructing increasingly intricate theorems. For simplicity of presentation, the notions of lemmas in the lemma vector store and skills in the skill library are used interchangeably in this paper. 2) The request vector store preserves the lemma statements proposed by the decomposer. These requests are crucial to the success of LEGO-Prover: they work as in-depth reasoned queries for retrieving useful skills for the prover, as well as possible complete lemmas once they are solved by the evolver. 3) The problem vector store houses the formal statements in the miniF2F dataset. The evolver utilizes these problem statements as heuristics to guide the LLMs in generating more beneficial new lemmas.
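A minimal sketch of one such vector store, assuming an embed(text) function backed by the embedding language model (the class and method names are illustrative, not the paper's released implementation):

```python
import numpy as np

class VectorStore:
    """Stores (document, embedding) pairs and answers k-NN queries."""

    def __init__(self, embed):
        self.embed = embed          # embed: str -> 1-D np.ndarray
        self.docs, self.vecs = [], []

    def add(self, doc: str):
        self.docs.append(doc)
        self.vecs.append(self.embed(doc))

    def query(self, text: str, k: int = 4):
        q = self.embed(text)
        vecs = np.stack(self.vecs)
        # cosine-similarity k-NN over the stored embeddings
        sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-8)
        return [self.docs[i] for i in np.argsort(-sims)[:k]]
```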
PROVER
As illustrated in Fig. 2(a), the prover employs a three-step process to construct the proof. Initially, an informal solver is deployed to draft a solution in natural language corresponding to the informal statement. Akin to Jiang et al. (2022b), LEGO-Prover experiments with the use of ground-truth human-written proofs as alternatives to model-generated proofs. After obtaining the informal proof, LEGO-Prover constructs the formal proof using the decomposer and the formalizer sequentially, which we detail in the following.
Decomposer. The decomposer aims to break down the formalization task, transforming the informal proof into a decomposed step-by-step informal proof and decomposing the problem into formal goals. A concrete example of the decomposer is shown in Fig. 5 in Appendix A.1. Specifically, the decomposer prompts the LLM to refine the informal proof, producing a step-by-step informal proof that more closely aligns with the structure of actual Isabelle proof code. We posit that this alignment is crucial, as it considerably reduces the complexity encountered during formalization. Concurrently, the decomposer tasks the LLM with generating requests: potential lemma or theorem statements that could be useful in addressing the given problem. Each request is composed of chain-of-thought reasoning on what kind of lemma is required for solving the problem, followed by the formal statement of the lemma. Subsequently, LEGO-Prover puts these requests into the request vector store.
Formalizer. The process of formalization involves translating an informal proof into Isabelle proof sketches, as depicted in Fig. 6 (refer to Appendix A.1). In addition to the problem statement, the refined informal proof, and the formal statement, the formalizer is designed to incorporate useful lemmas retrieved from the lemma vector store as part of the input. The formalizer employs the proposed requests originating from the decomposer and the formal statement of the problem as query texts and, in total, retrieves n_f skills. Upon collecting all the necessary input, the LLM is tasked with providing the proof code. Unlike the settings in Jiang et al. (2022b) and Zhao et al. (2023), we prompt the LLM to construct the complete content of the source file in Isabelle. This may encompass the requisite imports, definitions, or lemmas before the elucidation of the main problem to be proven. Consequently, the LLM possesses the capability to generate or apply useful lemmas before embarking on the resolution of the problem. Empirical evaluations demonstrate that our model exhibits a more modular problem-solving approach compared to established baseline methods. This modularity facilitates recycling smaller lemma components, thereby enhancing the LLM's capability to tackle progressively intricate problems.
After obtaining the formalized proof code, LEGO-Prover employs the Isabelle theorem prover to verify its correctness. In instances where a particular proof tactic (such as "by ...") falls short of proving the given conjecture, we resort to 11 heuristic tactics alongside the sledgehammer method to facilitate auto-correction. The heuristic selection we employ is consistent with that presented in Jiang et al. (2022b). After verifying the code, all validated lemmas or theorems are added to the skill vector store, while the statements of any failed lemmas are added to the request vector store. We consider a formalized proof valid if and only if (a) the proof does not contain "cheating" keywords (sorry or oops) that exit a proof without completing it, and (b) Isabelle can verify the proof code containing the corresponding formal statement.
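A minimal sketch of this auto-correction step; the checker interface is a PISA-like stand-in, and the tactic list is illustrative (the paper uses the 11 heuristics of Jiang et al. (2022b) but does not enumerate them here):

```python
HEURISTIC_TACTICS = [
    "by auto", "by simp", "by blast", "by force", "by fastforce",
    "by arith", "by linarith", "by presburger",
]  # illustrative subset; the actual list follows Jiang et al. (2022b)

def auto_correct(checker, proof_with_placeholder: str):
    """Try heuristic tactics for a failing step, then fall back to sledgehammer.

    `checker.check(code) -> (success, message)` is a hypothetical PISA-style API.
    """
    for tactic in HEURISTIC_TACTICS + ["sledgehammer"]:
        success, _ = checker.check(proof_with_placeholder.replace("<ATP>", tactic))
        if success:
            return tactic
    return None  # proof attempt fails; the lemma statement becomes a request
```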
EVOLVER
The lemmas extracted from the prover are mostly problem-specific, rendering them barely reusable, with limited applicability; the number of these lemmas is also very limited. The objective of the evolver is to create or refine these skills, enhancing their reusability and expanding their functional coverage. As shown in Fig. 2(b), the evolver has two functionalities: the directional transformer transforms an existing skill, and the request solver directly solves a request proposed by the prover to create new lemmas. We detail each in the following.
Directional transformer. The objective of the directional transformer is to facilitate the evolution of a skill along various predefined trajectories, thereby augmenting the reusability and usefulness of the skill. It comprises four distinct trajectories: extension of dimensions, identification of key concepts, parameterization, and enhancement of complexity. Table 2 shows the detailed functionality of each evolving direction. Each instance of the directional transformer adheres to a unified prompt template, depicted in Fig. 8; the adaptation involves substituting the core description and its in-context examples for the corresponding evolving direction. Specifically, the directional transformer begins by randomly selecting the least-evolved skill (the one selected for evolution the fewest times). Subsequently, the transformer employs this skill to retrieve the n_d most relevant pending problems' formal statements from the problem vector store and relevant requests' formal statements from the request vector store. Upon assembling the inputs for the LLM, the transformer arbitrarily selects one direction of evolution and prompts the LLM to generate a novel skill.
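A minimal sketch of one directional-transformer step under these rules (llm, the stores, and the field names are stand-ins, not the paper's API):

```python
import random

DIRECTIONS = ["identify key concepts", "parameterize",
              "scale complexity", "extend dimensions"]

def directional_transform(skill_store, problem_store, request_store, llm, n_d=4):
    # Pick the skill that has been selected for evolution the fewest times.
    skill = min(skill_store.all(), key=lambda s: s.times_evolved)
    # Retrieve related pending problems and requests as heuristics for the LLM.
    refs = (problem_store.query(skill.statement, k=n_d)
            + request_store.query(skill.statement, k=n_d))
    direction = random.choice(DIRECTIONS)
    new_skill = llm.prompt(direction=direction, skill=skill, references=refs)
    skill.times_evolved += 1
    return new_skill  # still has to pass Isabelle verification before admission
```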
Request solver. The request solver is designed to facilitate the development of new skills by directly addressing the sub-goals proposed by the prover. As depicted in Fig. 7, the process initiated by the request solver involves the random selection of a minimally solved request (the one selected for solving the fewest times). After this selection, the request is employed to query the lemma vector store to retrieve pertinent skills that can serve as references. Finally, the request solver prompts the LLM to generate the proof for the request.
After obtaining a new skill (an evolved lemma or a solved request) generated by the LLM, the evolver uses Isabelle to verify the generated code. To mitigate the risk of redundancy within the skill library, the newly acquired skill is compared against existing ones. This is accomplished by employing the SequenceMatcher method from the difflib Python library, which quantifies the similarity between the new and existing skills. Only skills that have been verified and exhibit a similarity below the predetermined threshold of 0.85 with every existing skill are incorporated into the skill library.
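A minimal sketch of this novelty check (difflib.SequenceMatcher and the 0.85 threshold are from the text; the wrapper function is ours):

```python
from difflib import SequenceMatcher

def is_novel(new_skill: str, library: list, threshold: float = 0.85) -> bool:
    """Admit a verified skill only if no existing skill is too similar."""
    return all(SequenceMatcher(None, new_skill, old).ratio() < threshold
               for old in library)
```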
EXPERIMENTS
EXPERIMENTAL SETUP
Implementation details. To expedite the experimental procedure, both the prover and the evolver are executed in a multi-processing manner, maintaining a process number ratio of 3 : 8, respectively. Consistent with Jiang et al. (2022b) and Zhao et al. (2023), each problem undergoes 100 proving attempts. To maximally leverage the expanding skill library, problems are formalized through successive rounds, with each round addressing each valid/test set problem once.
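A minimal sketch of the 3 : 8 process split, assuming hypothetical run_prover/run_evolver worker functions that share the skill library:

```python
from multiprocessing import Process

def launch(run_prover, run_evolver, shared_library):
    workers = (
        [Process(target=run_prover, args=(shared_library,)) for _ in range(3)]
        + [Process(target=run_evolver, args=(shared_library,)) for _ in range(8)]
    )
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```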
For the execution of the prover and the evolver, ChatGPT is utilized as the LLM. The temperature is consistently set at T = 0.7 across all procedures. Within the prover, 3-shot examples are leveraged for the decomposer. Regarding the formalizer, the quantity of reference skills n_f is set to 6 for the valid set and 4 for the test set, paired with 2 formalization in-context examples. For the directional transformer, the number of reference problem statements n_d is set to 4, supplemented by two directional-transformation in-context examples. For the request solver, 3 skills are retrieved for reference.
Dataset and evaluation. For a more accurate comparison, we follow Jiang et al. (2022b) and Zhao et al. (2023) and adopt the miniF2F dataset (Zheng et al., 2021). This dataset includes 488 problems sourced from high-school mathematical competitions. These problems vary in difficulty, ranging from basic algebra and number theory, originating from the MATH dataset (Hendrycks et al., 2021), to more challenging problems found in the American Invitational Mathematics Examination (AIME) and the International Mathematical Olympiad (IMO). The problems are divided into valid and test sets, with 244 problems each. In this paper, we utilize the updated version of the miniF2F dataset from Jiang et al. (2022b). Each problem in this updated dataset contains a formal statement in the Isabelle language, an informal statement, and a human-written informal proof. For interacting with the Isabelle theorem prover, we employ the PISA environment (Jiang et al., 2021). PISA is a flexible Python REPL wrapper for Isabelle, capable of verifying Isabelle code and providing information such as proof states or error messages from Isabelle.
Baseline methods. We include baselines that represent the state of the art in neural theorem proving in Isabelle. Thor (Jiang et al., 2022a) and Thor with expert iteration on auto-formalized data (Wu et al., 2022) follow the proof-search paradigm, using a fine-tuned 700M language model to prove theorems. Draft, Sketch, and Prove (Jiang et al., 2022b) and Subgoal-Learning (Zhao et al., 2023) use Codex or ChatGPT to prove theorems directly. The model-generated informal proofs are pre-generated using GPT-4, with up to 20 informal proofs per problem. For each proving attempt, we randomly select one proof as the informal proof to feed into the decomposer procedure. For the ablation, we remove the growing skill library to validate the effectiveness of LEGO-Prover. Due to limited resources and the expense of OpenAI API calls, we perform the ablation only with 50 proving attempts per problem on the miniF2F validation set.
MAIN RESULT
In Table 1, we illustrate the proportion of successful formal proofs found on the miniF2F dataset. Thor and Thor + expert iteration display the performance of fine-tuned search-based proving methods. The results indicate that all LLM-based methods significantly outperform the search-based methods by around 4.7%. The efficacy of search-based proving methods is limited by the short proof steps that the policy language model generates, which drastically increase the search space and therefore hinder the prover from finding long proofs.
Our proposed LEGO-Prover significantly outperforms both search-based and LLM-based methods.
With proofs written by humans, LEGO-Prover improves over the Subgoal-Learning method by 7.3% and 4.5% on the miniF2F-valid and miniF2F-test datasets, respectively. A total of 257 out of 488 problems were solved by LEGO-Prover with human-written proofs. When replacing human-written informal proofs with model-generated ones, LEGO-Prover still achieves 52.4% and 45.5% pass rates on the valid and test sets, respectively, close to the results with human-written informal proofs.
Effects of the growing skill library. The growing skill library greatly enhances the proving ability of static LLMs like ChatGPT or GPT-4. As the major contribution of LEGO-Prover, we are interested in how much it contributes to solving more problems and improving the LLM's ability. Specifically, we remove the growing skill library (including the evolver). As shown in Table 1, in 50 attempts, LEGO-Prover achieves 50.4% on the validation set, whereas LEGO-Prover without a skill library achieves 47.1%. For a more intuitive representation, we further plot the number of problems solved against the number of proving attempts for both settings, shown in Fig. 3(a).
Compared to the problems solved without a growing skill library, the advantage of adding the skill library is initially minimal, as the library is still underdeveloped and lacks useful skills. However, as the skill library expands, the gap between LEGO-Prover and the ablation method widens consistently. This outcome supports our hypothesis that the prover becomes increasingly adept at formalizing theorems as more skills are added to the skill library.
ANALYSIS
WHAT DOES THE GROWING SKILL LIBRARY LOOK LIKE?
Figure 3(c) illustrates an example of a skill-evolving tree in the skill library. The grown skill library is a massive forest containing numerous evolved trees like this one. The lemmas, originating from either the prover or the evolver's request solver sub-task (as in the example shown in the figure), become the root nodes of these skill-evolving trees. The evolver's directional transformation generalizes these lemmas and creates child nodes. In terms of statistics, there are 22532 skills in the skill library in total, with 10.8% of the skills originating from the prover, 38.2% from the evolver's request solver sub-task, and 51.1% from the evolver's directional transformation sub-tasks. Although some lemmas are trivially true or already exist in Isabelle's theorem library, LEGO-Prover generates many more unique, interesting, and useful lemmas. The gap between natural and formal language is the greatest challenge in formalizing natural-language mathematical proofs: a simple proving step described in natural language can result in numerous lines of code written in the formal language. However, this gap can gradually diminish as we accumulate more lemmas and theorems. To conclude, the grown skill library generated by LEGO-Prover significantly enlarges the available lemma library and further bridges the gap between informal and formal mathematical languages.
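As a back-of-the-envelope consistency check on these percentages (counts are approximate because the percentages are rounded in the text):

```python
total = 22532
for origin, frac in [("prover", 0.108),
                     ("request solver", 0.382),
                     ("directional transformer", 0.511)]:
    print(origin, round(total * frac))
# -> roughly 2433, 8607, and 11514 skills, respectively
```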
HOW DOES THE SKILL BOOST THE ABILITY OF LLMS?
To investigate closely how these learned skills help and boost the performance of the LLM, we manually inspect the successfully proven problems in the miniF2F valid set. The conclusions are as follows:
Skill as directly reusable blocks. This is the most straightforward way of utilizing the generated skills. Since every skill in the skill library has been verified by the Isabelle prover, the LLM can directly copy the lemma code from the input prompt without fear of causing an error. As shown in Fig. 4 left, the final proof of the problem algebra_amgm_faxinrrp2msqrt2geq2mxm1div2x directly copies the retrieved skill am_gm's code as part of the proof and uses this lemma to help prove the problem.
Skill as reference for solving the problem. Many skills cannot be directly reused but are very helpful as reference examples for formalizing the main problem. As shown in Fig. 4 right, the retrieved skill example prod_1n_4n provides great clues for solving the conjecture prod_frac_common_factor. Since the provided skills are lemmas with verified steps, these steps drastically increase the accuracy of the LLM in generating correct proof steps.

Fig. 3(b) first compares two scenarios: directly applying retrieved skills to the proofs and constructing new lemmas by imitating retrieved skills to assist in theorem proving (represented by the light blue and light green lines). It then examines the skill evolution pattern of the lemmas used in the proofs (corresponding to Fig. 2(b) and Table 2). Out of 135 problems of the miniF2F-valid dataset passing the validation of the Isabelle verifier, 24% are completed with the aid of retrieved skills. Within this subset, 51% of the problems directly incorporate the retrieved skills into their proofs, while the remaining 49% formulate new lemmas that are specifically tailored to address the distinct problems at hand. Regarding the skills directly applied in the proofs, 71% are procured by the "do requests" procedure. The skills derived through the evolution techniques of "identifying key concepts" and "scaling complexity" each contribute 12%, while those acquired through "parameterization" constitute 6%. Although skill as directly reusable blocks is the most ideal usage of skills in the library, the number of problems solved by directly reusing a skill is not substantial. That is because many trivial problems in the miniF2F dataset can be solved without requiring any skill as a reference.
CONCLUSIONS
In this work, we introduced a new theorem-proving method, LEGO-Prover, which uses a growing skill library to continuously boost the capability of LLMs for formalization. The prover utilizes the refined structural informal proof and retrieved lemmas to correctly formalize the proof. The evolver solves the requests proposed by the prover or evolves existing skills into new skills. LEGO-Prover introduces a fundamentally different theorem-proving paradigm to the community: whereas previous approaches all struggle to complete the proof at once, LEGO-Prover proves the theorem in a block-by-block manner, akin to a divide-and-conquer approach. Extensive experiments show that our method indeed improves pass rates on the miniF2F dataset. Our ablation studies and detailed analysis showcase the effectiveness of each component proposed in LEGO-Prover.
A APPENDIX
A.1 PROMPT EXAMPLES
In this section, we illustrate the prompts used in LEGO-Prover. For the prover, the prompts used are the decomposer (Fig. 5) and the formalizer (Fig. 6). For the evolver, the prompts used are the directional evolver (Fig. 8) and the request solver (Fig. 7). The blue line separates the LLMs' inputs and outputs.
For the directional evolver, we list all the core statements to be substituted in Table 2.
Table 2: The core description of each directional evolve. The description is substituted into the directional evolve prompt template.

| Evolve type | Description |
|---|---|
| Identify key concepts | Determine the essential ideas, methods, or theorems that are crucial to solving the initial problem. |
| Parameterize | If the problem involves specific numbers, generalize it by replacing these with variables. |
| Scale complexity | Try both simpler and more complicated versions of the problem to see how the approach adapts. |
| Extend dimensions | If the problem is defined in a specific number of dimensions, consider if it holds in more or fewer dimensions. |
Input:
System message: As a mathematician and expert in the Isabelle theorem prover, your task is to analyze the given theorem (including the problem's informal statement, human-written informal proof, and formal statement). Provide a better-structured step-by-step proof that is closer to Isabelle, and request relevant lemmas and theorems that might help in proving this problem.
Statement: If a ≥ b > 1, what is the largest possible value of log_a(a/b) + log_b(b/a)? Show that it is 0.
Informal proof: Using logarithmic rules, we see that log_a a − log_a b + log_b b − log_b a = 2 − (log_a b + log_b a) = 2 − (log_a b + 1/log_a b). Since a and b are both greater than 1, using [[AM-GM]] gives that the term in parentheses must be at least 2, so the largest possible value is 2 − 2 = 0. Note that the maximum occurs when a = b.
Formal statement:
theorem
  fixes a b::real
  assumes "b\<le>a" and "1<b"
  shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
Structure proof:
Step 1: Introduce variables x and y to represent the natural logarithms of a and b respectively.
Step 2: Given that b > 1, conclude that the natural logarithm of b, y, is greater than 0.
Step 3: Using the given assumptions, establish that x ≥ y. This implies x > 0.
Step 4: Express the LHS of the main inequality in terms of x and y.
Step 5: Simplify the expression derived in Step 4.
Step 6: Demonstrate that the above expression is less than or equal to 0. The Arithmetic Mean-Geometric Mean Inequality (AM-GM) is used here to argue that the term (y/x + x/y) is at least 2, hence the expression is at most 0.
Required skills:
Thoughts 1: A base lemma proving the AM-GM inequality that is useful for Step 6.
Code 1:
lemma am_gm:
  fixes x y :: real
  assumes "x >= 0" "y >= 0"
  shows "xˆ2 + yˆ2 >= 2 * x * y"
Thoughts 2: From Step 6, we need a special form of the AM-GM inequality, which proves the conjecture x/y + y/x >= 2 required in Step 6.
Code 2:
lemma am_gm_divide_form:
  fixes x y :: real
  assumes "x > 0" "y > 0"
  shows "x/y + y/x >= 2"
[two more examples like above...]
Statement: What is the least possible value of (xy − 1)^2 + (x + y)^2 for real numbers x and y? Show that it is 1.
Informal proof: Expanding, we get that the expression is x^2 + 2xy + y^2 + x^2y^2 − 2xy + 1 or x^2 + y^2 + x^2y^2 + 1. By the Trivial Inequality (all squares are nonnegative), the minimum value for this is 1, which can be achieved at x = y = 0.
Formal statement:
theorem amc12a_2021_p7:
  fixes x y ::real
  shows "1 \<le> ((x * y) -1)ˆ2 + (x + y)ˆ2"
  apply (auto simp:algebra_simps power2_eq_square)
Output:
Structure proof:
Step 1: Expand the expression (xy − 1)^2 + (x + y)^2 to obtain x^2 + 2xy + y^2 + x^2y^2 − 2xy + 1.
Step 2: Simplify the expression derived in Step 1 to obtain x^2 + y^2 + x^2y^2 + 1.
Step 3: Apply the Trivial Inequality, which states that all squares are nonnegative, to argue that the minimum value of the expression is 1.
Step 4: Show that the minimum value of 1 can be achieved when x = y = 0.
Required skills:
Thoughts 1: The Trivial Inequality is a key concept in this proof. Understanding and applying this inequality is crucial to show that the minimum value of the expression is 1.
Code 1:
lemma trivial_inequality:
  fixes a :: real
  shows "0 \<le> aˆ2"
Thoughts 2: The ability to expand and simplify algebraic expressions is important in this proof.
Code 2:
lemma expand_expression:
  fixes x y :: real
  shows "(x * y - 1)ˆ2 + (x + y)ˆ2 = xˆ2 + 2 * x * y + yˆ2 + xˆ2 * yˆ2 - 2 * x * y + 1"
Thoughts 3: The ability to substitute values into an expression and evaluate it is necessary to show that the minimum value of 1 can be achieved when x = y = 0.
Code 3:
lemma substitute_values:
  fixes x y :: real
  assumes "x = 0" "y = 0"
  shows "(x * y - 1)ˆ2 + (x + y)ˆ2 = 1"
Figure 5: Prompt example for decomposer.
Input:
System message: As a mathematician familiar with Isabelle, your task is to provide a formal proof in response to a given problem statement. Your proof should be structured and clearly written, meeting the following criteria: -It can be verified by Isabelle.
-Each step of the proof should be explained in detail using comments enclosed in "(*" and "*)".
-The explanation for each step should be clear and concise, avoiding any unnecessary or apologetic language.
-You are strongly encouraged to create useful and reusable lemmas to solve the problem.
-The lemmas should be as general as possible (generalizable) and able to cover a large step in proofs (non-trivial). Please ensure that your proof is well-organized and easy to follow, with each step building upon the previous one.
Here are some useful skills for reference:
Useful skills 1:
lemma square_diff_identity:
  fixes x y z :: real
  shows "(x - y)ˆ2 + (y - z)ˆ2 + (z - x)ˆ2 = 2 * (xˆ2 + yˆ2 + zˆ2 - x * y - y * z - z * x)"
proof -
  have "(x - y)ˆ2 + (y - z)ˆ2 + (z - x)ˆ2 = (xˆ2 - 2 * x * y + yˆ2) + (yˆ2 - 2 * y * z + zˆ2) + (zˆ2 - 2 * z * x + xˆ2)" by (simp add: power2_diff)
  also have "... = 2 * (xˆ2 + yˆ2 + zˆ2 - x * y - y * z - z * x)" by (simp add: algebra_simps)
  finally show ?thesis by auto
qed
[5 more useful skills...]
Statement: Show that for positive integer n, 2 divides 4^n.
Informal Proof:
Step 1: Since n is positive, we can find a natural number m where m + 1 = n. Then we can show that 2 divides 4^(m+1).
Formal statement:
theorem numbertheory_2dvd4expn:
  fixes n :: nat
  assumes h0 : "n \<noteq> 0"
  shows "(2::nat) dvd 4ˆn"
Formal proof:
theory Scratch imports Complex_Main begin
theorem numbertheory_2dvd4expn:
  fixes n :: nat
  assumes h0 : "n \<noteq> 0"
  shows "(2::nat) dvd 4ˆn"
proof -
  obtain m::nat where c0: "m+1=n" sledgehammer
  have "(2::nat) dvd 4ˆ(m+1)" sledgehammer
  then show ?thesis unfolding c0 sledgehammer
qed
end
[1 more formalization example...]
Statement: What is the least possible value of (xy − 1)^2 + (x + y)^2 for real numbers x and y? Show that it is 1.
Informal proof:
Step 1: Expand the expression (xy − 1)^2 + (x + y)^2 to obtain x^2 + 2xy + y^2 + x^2y^2 − 2xy + 1.
Step 2: Simplify the expression derived in Step 1 to obtain x^2 + y^2 + x^2y^2 + 1.
Step 3: Apply the Trivial Inequality, which states that all squares are nonnegative, to argue that the minimum value of the expression is 1.
Step 4: Show that the minimum value of 1 can be achieved when x = y = 0. Formal statement: theorem amc12a_2021_p7: fixes x y ::real shows "1 \<le> ((x * y) -1)ˆ2 + (x + y)ˆ2"
Output:
Formal proof:
theory Scratch imports Complex_Main begin
lemma am_gm:
  fixes x y :: real
  shows "xˆ2 + yˆ2 \<ge> 2 * x * y"
proof -
  have "(x - y)ˆ2 \<ge> 0" by simp
  then have "xˆ2 - 2 * x * y + yˆ2 \<ge> 0" by (simp add: algebra_simps power2_diff)
  then have "xˆ2 + yˆ2 \<ge> 2 * x * y" by simp
  then show ?thesis by simp
qed

theorem amc12a_2021_p7:
  fixes x y ::real
  shows "1 \<le> ((x * y) -1)ˆ2 + (x + y)ˆ2"
  apply (auto simp:algebra_simps power2_eq_square)
  by (metis am_gm add.commute add.left_commute add_mono_thms_linordered_semiring(1) add_pos_nonneg less_add_same_cancel1 power2_sum power_mono zero_le_power2)
end
Input:
System message: As a mathematician familiar with Isabelle, your task is to provide a formal proof in response to a given formal statement. Your proof should be structured and clearly written, meeting the following criteria:
-It can be verified by Isabelle.
-Please ensure that your proof is well-organized and easy to follow, with each step building upon the previous one.
Formal statement:
lemma power_eq_imp_eq:
  fixes a::real
  assumes "a>0" "a\<noteq>1" "aˆm = aˆn"
  shows "m = n"
Formal proof:
theory Scratch imports Complex_Main begin
lemma power_eq_imp_eq:
  fixes a::real
  assumes "a>0" "a\<noteq>1" "aˆm = aˆn"
  shows "m = n"
proof -
  have "aˆ(m-n) = 1" using assms(3) assms(2) by (metis assms(1) diff_is_0_eq diff_self_eq_0 le_numeral_extra(3) less_imp_diff_less linorder_le_less_linear nat_int_comparison(2) nle_le one_le_power order_le_less order_less_irrefl order_refl power_0 power_0_left power_decreasing_iff power_eq_0_iff power_inject_exp real_root_ge_1_iff real_root_pos_unique real_root_power zero_le_power)
  hence "m-n=0" by (smt (verit) assms(1) assms(2) assms(3) cancel_comm_monoid_add_class.diff_cancel diff_is_0_eq' power_decreasing_iff power_inject_exp verit_comp_simplify1(3))
  thus "m = n" by (smt (verit) assms(1) assms(2) assms(3) less_numeral_extra(3) nat_neq_iff power_inject_exp power_strict_decreasing_iff zero_less_diff)
qed
end
[3 more request solving examples...]
Formal statement:
lemma exponent_properties:
  fixes a b :: real
  assumes "0 < a \<and> 0 < b"
  shows "aˆn * aˆm = aˆ(n + m) \<and> (aˆn)ˆm = aˆ(n * m)"
Output:
Formal proof:
theory Scratch imports Complex_Main begin
lemma exponent_properties:
  fixes a b :: real
  assumes "0 < a \<and> 0 < b"
  shows "aˆn * aˆm = aˆ(n + m) \<and> (aˆn)ˆm = aˆ(n * m)"
proof
  show "aˆn * aˆm = aˆ(n + m)" by (simp add: assms(1) power_add)
next
  show "(aˆn)ˆm = aˆ(n * m)" by (simp add: assms(1) power_mult)
qed
end
Input:
System Message: As an expert mathematician who is proficient in Isabelle theorem proving, your task is to modify the given lemma, theorem, function, or definition given in the code to aid in solving one or more of the problems provided.You should accomplish this by [evolve description].
Here are some reference problems:
Problem 1:
theorem mathd_algebra_131:
  fixes a b :: real and f :: "real \<Rightarrow> real"
  assumes h0 : "\<And>x. f x = 2 * xˆ2 - 7 * x + 2"
    and h1 : "f a = 0"
    and h2 : "f b = 0"
    and h3 : "a \<noteq> b"
  shows "1 / (a-1) + 1 / (b-1) = -1"
[3 more reference problems...]
Figure 1: (a) Comparison between plain provers and LEGO-Prover. Unlike plain provers that prove the theorem sequentially, LEGO-Prover constructs the proof in a modular paradigm. Useful lemmas can be directly retrieved from the skill library and used as part of the proof. A newly constructed lemma can also be added to the skill library to help the proof of other theorems. (b) Overall framework of our proposed LEGO-Prover. LEGO-Prover contains the prover, which proves the theorem modularly using retrieved skills, and the evolver, which transforms skills for reusability and generalizability. These two components are bridged by the growing skill library.
Figure 2: Overview of LEGO-Prover. (a) The prover takes three consecutive steps to prove the problem. The informal solver produces an informal solution for the problem statement, followed by the decomposer, which decomposes it into a step-by-step proof and proposes useful sub-goals in the form of lemma statements. We use these lemma statements to retrieve useful lemmas from the skill library. The formalizer constructs the final proof with the help of the decomposed informal proof and retrieved lemmas in a block-by-block manner. Newly constructed lemmas are extracted from the final proof and added to the lemma vector store. (b) The evolver contains two evolution approaches: the directional transformer transforms an existing skill into a more general and reusable form along four predefined directions, and the request solver directly solves the sub-goals requested by the prover. Newly generated and formally verified lemmas from the evolver are also added to the lemma vector store.
Figure 3: (a) A comparison of proof success rate between LEGO-Prover with and without the growing skill library. As the number of skills grows (shown as the green dotted line), the prover's performance gap between with and without the skill library becomes increasingly evident. (b) Distribution of skill origins in successful proofs, plotted against prover attempts and the percentage contribution of each skill origin to the total successful proofs. The solid lines show the distribution of the two major methods by which the skill is used (direct use or proposing a lemma by imitation). The dotted lines show the detailed skill origins used in the successful proofs. (c) A skill-evolving tree gradually generated through multiple steps for imo_1988_p6, revealing how a relatively large-scale skill library is produced from a few seed theorems.
Figure 4: Two primary forms of utilizing the skills. (a) Direct use: we retrieved the am_gm skill from the skill library and provided it as a reference skill for the formalizer. The formalizer synthesizes the proof by directly copying the lemma as part of the proof code and using the lemma in the main proof. (b) Propose lemma by imitation: the retrieved lemma prod_1n_4n serves as a useful reference for the prover to synthesize the lemma prod_frac_common_factor.
Statement: If a ≥ b > 1, what is the largest possible value of $\log_a(a/b) + \log_b(b/a)$? Show that it is 0.
Informal proof: Using logarithmic rules, we see that $\log_a a - \log_a b + \log_b b - \log_b a = 2 - (\log_a b + \log_b a) = 2 - (\log_a b + \frac{1}{\log_a b})$. Since a and b are both greater than 1, using [[AM-GM]] gives that the term in parentheses must be at least 2, so the largest possible value is 2 - 2 = 0. Note that the maximum occurs when a = b.
Formal statement: theorem fixes a b::real assumes "b\<le>a" and "1<b" shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
Structure proof: Step 1: Introduce variables x and y to represent the natural logarithms of a and b respectively.
skills 1:
lemma square_diff_identity:
  fixes x y z :: real
  shows "(x - y)^2 + (y - z)^2 + (z - x)^2 = 2 * (x^2 + y^2 + z^2 - x * y - y * z - z * x)"
proof -
  have "(x - y)^2 + (y - z)^2 + (z - x)^2 = (x^2 - 2 * x * y + y^2) + (y^2 - 2 * y * z + z^2) + (z^2 - 2 * z * x + x^2)"
    by (simp add: power2_diff)
  also have "... = 2 * (x^2 + y^2 + z^2 - x * y - y * z - z * x)"
    by (simp add: algebra_simps)
  finally show ?thesis by auto
qed
[5 more useful skills...]
Figure 6: Prompt example for formalization.
) assms(3) cancel_comm_monoid_add_class.diff_cancel diff_is_0_eq' power_decreasing_iff power_inject_exp verit_comp_simplify1(3))
  thus "m = n"
    by (smt (verit) assms(1) assms(2) assms(3) less_numeral_extra(3) nat_neq_iff power_inject_exp power_strict_decreasing_iff zero_less_diff)
qed
end
[3 more request solving examples...]
Formal statement:
lemma exponent_properties:
  fixes a b :: real
  assumes "0 < a \<and> 0 < b"
  shows "a^n * a^m = a^(n + m) \<and> (a^n)^m = a^(n * m)"
proof
  show "a^n * a^m = a^(n + m)" by (simp add: assms(1) power_add)
next
  show "(a^n)^m = a^(n * m)" by (simp add: assms(1) power_mult)
qed
end
Figure 7: Prompt example for solving request.
  fixes n d :: nat
  assumes "d > 0"
  shows "n mod d \<in> {0..d-1}"
proof -
  have "n mod d < d" by (rule mod_less_divisor[OF assms(1)])
  moreover have "n mod d \<ge> 0" by simp
  ultimately show ?thesis by auto
qed
end
[2 more directional evolve examples]
Skill to evolve:
theory Scratch imports Complex_Main begin
lemma divide_cross_mul:
  fixes a b c d :: real
  assumes "b \<noteq> 0" and "d \<noteq> 0" and "a / b = c / d"
  shows "a * d = b * c"
  using assms by (auto simp: field_simps)
end
  and "a / b = c / d" and "a = x * b" and "c = y * d"
  shows "x * d = y * b"
  using assms by (auto simp: field_simps)
end
Figure 8: Prompt examples for directional evolve.
(Figure 2 diagram; only the recoverable labels are kept.) (a) Prover: informal statement → formalizer → formal statement; the decomposer splits the informal proof into step-by-step goals and emits skill requests; a retrieved skill (e.g. a lemma proving "x^2 + y^2 >= 2*x*y" for non-negative reals) is inserted into the final formal proof of amc12a_2003_p24. (b) Evolver: the directional transformer turns a retrieved skill into an evolved skill, and the request solver answers pending requests using similar skills; the skill library comprises lemma, request, and problem vector stores.
Table 1: Proving success rates on the miniF2F dataset with Isabelle. The table displays the success rates of previous works and the LEGO-Prover. The highest success rates for each set are highlighted in bold. LEGO-Prover* denotes the cumulative pass rate on the miniF2F dataset, considering the total number of problems solved using model-generated and human-written proofs.
| Method | LLM | miniF2F-valid | miniF2F-test |
|---|---|---|---|
| *Baselines* | | | |
| Thor (Jiang et al., 2022a) | - | 28.3% | 29.9% |
| Thor + expert iteration (Wu et al., 2022) | Codex | 37.3% | 35.2% |
| Draft, Sketch, and Prove (Jiang et al., 2022b) | Codex | 42.6% | 39.3% |
| Subgoal-Learning (Zhao et al., 2023) | ChatGPT | 48.0% | 45.5% |
| *Ours (100 attempts)* | | | |
| LEGO-Prover (model informal proof) | ChatGPT | 52.4% | 45.5% |
| LEGO-Prover (human informal proof) | ChatGPT | 55.3% | 50.0% |
| LEGO-Prover* | ChatGPT | 57.0% | 50.0% |
| *Ablations (50 attempts)* | | | |
| LEGO-Prover | ChatGPT | 50.4% | - |
| - Skill Library | ChatGPT | 47.1% | - |
Following Jiang et al. (2022b), we test the LEGO-Prover with both model-generated and human-written informal proofs.
Practically, ChromaDB serves as our vector store, coupled with OpenAI's text-davinci-ada embedding model.
A combination of gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k, and gpt-3.5-turbo-16k-0613 is employed, with a model being selected randomly during calls to the OpenAI API.
Estimated to be around 300 dollars for one experiment with 100 proof attempts.
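To make the retrieval machinery concrete, below is a minimal sketch of such a skill vector store. It assumes ChromaDB's Python client with an OpenAI embedding function; the collection name, skill id, example lemma, and query string are illustrative rather than taken from the released code, and the embedding model name is an assumption.

```python
# Minimal sketch of a skill vector store (assumed interfaces, not the released code).
import chromadb
from chromadb.utils import embedding_functions

# Hypothetical embedding model name; the paper only states an OpenAI embedding model is used.
openai_ef = embedding_functions.OpenAIEmbeddingFunction(
    api_key="YOUR_KEY", model_name="text-embedding-ada-002"
)
client = chromadb.Client()
skills = client.create_collection(name="skills", embedding_function=openai_ef)

# Add a verified lemma (skill) to the store.
skills.add(
    ids=["am_gm"],
    documents=['lemma am_gm: fixes x y :: real assumes "x >= 0" "y >= 0" '
               'shows "x^2 + y^2 >= 2*x*y" by (simp add: sum_squares_bound)'],
)

# Retrieve the skills most relevant to a new formal statement (request).
hits = skills.query(query_texts=["x/y + y/x >= 2 for positive reals"], n_results=3)
print(hits["documents"])
```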
Jesse Alama, Michael Kohlhase, Lionel Mamane, Adam Naumowicz, Piotr Rudnicki, Josef Urban. Licensing the Mizar Mathematical Library. In Intelligent Computer Mathematics (Calculemus 2011 and MKM 2011), LNCS 6824, Springer, 2011.
Alexander A. Alemi, François Chollet, Niklas Eén, Geoffrey Irving, Christian Szegedy, Josef Urban. DeepMath - deep sequence models for premise selection. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS 2016), Curran Associates Inc., 2016.
Jeremy Avigad. Mathematics and the formal turn. 2023.
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, Stewart Wilcox. HOList: An environment for machine learning of higher-order logic theorem proving. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), PMLR, 2019.
Kshitij Bansal, Christian Szegedy, Markus Norman Rabe, Sarah M. Loos, Viktor Toman. Learning to Reason in Large Theories without Imitation. 2020.
Bruno Barras, Samuel Boutin, Cristina Cornes, Judicaël Courant, Jean-Christophe Filliâtre, Eduardo Giménez, Hugo Herbelin, Gérard Huet, César Muñoz, Chetan Murthy, Catherine Parent, Christine Paulin-Mohring, Amokrane Saïbi, Benjamin Werner. The Coq Proof Assistant Reference Manual: Version 6.1. Technical report, INRIA, 1997.
Clark Barrett, Christopher L. Conway, Morgan Deters, Liana Hadarean, Dejan Jovanović, Tim King, Andrew Reynolds, Cesare Tinelli. CVC4. In Computer Aided Verification (CAV 2011), Springer, 2011.
Alexander Bentkamp, Jasmin Blanchette, Sophie Tourret, Petar Vukmirović. Superposition for Full Higher-order Logic. In Automated Deduction - CADE 28, LNCS, Springer, 2021.
Garrett Birkhoff. Rings of sets. Duke Mathematical Journal, 3, 1937.
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, et al. On the Opportunities and Risks of Foundation Models. arXiv:2108.07258, 2021.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, et al. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4. 2023.
Aydar Bulatov, Yuri Kuratov, Mikhail S. Burtsev. Recurrent memory transformer. 2022.
Aydar Bulatov, Yuri Kuratov, Mikhail S. Burtsev. Scaling transformer to 1M tokens and beyond with RMT. 2023.
Kevin Buzzard, Johan Commelin, Patrick Massot. Formalising perfectoid spaces. arXiv:1910.12320, 2019.
Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, Denny Zhou. Large language models as tool makers. arXiv:2305.17126, 2023.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, et al. Evaluating large language models trained on code. arXiv:2107.03374, 2021.
Noam Chomsky. Three models for the description of language. IRE Transactions on Information Theory, 2(3), 1956.
Thierry Coquand, Gérard Huet. The calculus of constructions. Information and Computation, 76, 1988.
Thierry Coquand, Christine Paulin. Inductively defined types. In COLOG-88, LNCS, Springer, 1990.
Sander R. Dahmen, Johannes Hölzl, Robert Y. Lewis. Formalizing the solution to the cap set problem. In 10th International Conference on Interactive Theorem Proving (ITP 2019), Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
Giannis Daras, Alexandros G. Dimakis. Discovering the Hidden Vocabulary of DALLE-2. arXiv:2206.00169, 2022.
Martin Davis, George Logemann, Donald Loveland. A machine program for theorem-proving. Communications of the ACM, 5(7), 1962.
Leonardo de Moura, Nikolaj Bjørner. Z3: An efficient SMT solver. In Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2008), LNCS, Springer, 2008.
Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, Jakob von Raumer. The Lean Theorem Prover (System Description). In Automated Deduction - CADE-25, LNCS, Springer, 2015.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, Association for Computational Linguistics, 2019.
Gabriel Ebner, Sebastian Ullrich, Jared Roesch, Jeremy Avigad, Leonardo de Moura. A metaprogramming framework for formal verification. Proceedings of the ACM on Programming Languages, 1(ICFP), 2017.
Emily First, Markus N. Rabe, Talia Ringer, Yuriy Brun. Baldur: Whole-proof generation and repair with large language models. 2023.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy. The Pile: An 800GB dataset of diverse text for language modeling. arXiv:2101.00027, 2020.
Vijay K. Garg. Introduction to Lattice Theory with Computer Science Applications. Wiley Publishing, 1st edition, 2015.
Thibault Gauthier, Cezary Kaliszyk, Josef Urban. TacticToe: Learning to reason with HOL4 tactics. In LPAR-21, 21st International Conference on Logic for Programming, Artificial Intelligence and Reasoning, 2017.
Emmanuel Gunther, Miguel Pagano, Pedro Sánchez Terraf, Matías Steinberg. The independence of the continuum hypothesis in Isabelle/ZF. Archive of Formal Proofs, 2022.
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W. Ayers, Stanislas Polu. Proof artifact co-training for theorem proving with language models. In The Tenth International Conference on Learning Representations (ICLR 2022), 2022.
John Harrison. HOL Light: An overview. In Theorem Proving in Higher Order Logics (TPHOLs 2009), LNCS 5674, Springer, 2009.
John Harrison, Josef Urban, Freek Wiedijk. History of interactive theorem proving. In Computational Logic, volume 9 of Handbook of the History of Logic, Elsevier, 2014.
Chadi Helwe, Chloé Clavel, Fabian Suchanek. Reasoning with Transformer-based Models: Deep Learning, but Shallow Reasoning. 2021.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. 2021.
William Alvin Howard. The Formulae-as-Types Notion of Construction. In To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus, and Formalism, Academic Press, 1980.
Geoffrey Jefferson. The mind of mechanical man. British Medical Journal, 1(4616):1105, 1949.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, Pascale Fung. Survey of hallucination in natural language generation. arXiv:2202.03629, 2022.
Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygóźdź, Piotr Miłoś, Yuhuai Wu, Mateja Jamnik. Thor: Wielding hammers to integrate language models and automated theorem provers. arXiv:2205.10893, 2022a.
Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, Guillaume Lample. Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. 2022b.
Albert Qiaochu Jiang, Wenda Li, Jesse Michael Han, Yuhuai Wu. LISA: Language models of Isabelle proofs. 2021.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa. Large Language Models are Zero-Shot Reasoners. arXiv:2205.11916, 2022.
Laura Kovács, Andrei Voronkov. First-Order Theorem Proving and Vampire. In Computer Aided Verification (CAV 2013), LNCS, Springer, 2013.
Guillaume Lample, François Charton. Deep Learning for Symbolic Mathematics. arXiv:1912.01412, 2019.
Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet, Amaury Hayat, Gabriel Ebner, Aurélien Rodriguez, Timothée Lacroix. HyperTree Proof Search for Neural Theorem Proving. arXiv:2205.11491, 2022.
Yann LeCun, Yoshua Bengio, Geoffrey Hinton. Deep learning. Nature, 521(7553), 2015.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. 2019.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra. Solving quantitative reasoning problems with language models. In Advances in Neural Information Processing Systems (NeurIPS 2022), 2022.
Wenda Li, Lei Yu, Yuhuai Wu, Lawrence C. Paulson. IsarStep: a Benchmark for High-level Mathematical Reasoning. In ICLR 2021.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, et al. Competition-Level Code Generation with AlphaCode. arXiv:2203.07814, 2022.
Chengwu Liu, Jianhao Shen, Huajian Xin, Zhengying Liu, Ye Yuan, Haiming Wang, Wei Ju, Chuanyang Zheng, Yichun Yin, Lin Li, et al. FIMO: A challenge formal dataset for automated theorem proving. arXiv:2309.04295, 2023.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692, 2019.
Sarah M. Loos, Geoffrey Irving, Christian Szegedy, Cezary Kaliszyk. Deep network guided proof search. In LPAR-21, 21st International Conference on Logic for Programming, Artificial Intelligence and Reasoning, 2017.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang. WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct. arXiv:2308.09583, 2023.
Carlin MacKenzie, Jacques D. Fleuriot, James Vaughan. An evaluation of the Archive of Formal Proofs. arXiv:2104.01052, 2021.
The mathlib Community. The Lean mathematical library. In Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs and Proofs (CPP 2020), ACM, 2020.
John McCarthy, Marvin L. Minsky, Nathaniel Rochester, Claude E. Shannon. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 2006.
Norman Megill, David A. Wheeler. Metamath: a computer language for mathematical proofs. Lulu Press, 2019.
Maciej Mikuła, Szymon Antoniak, Szymon Tworkowski, Albert Qiaochu Jiang, Jin Peng Zhou, Christian Szegedy, Łukasz Kuciński, Piotr Miłoś, Yuhuai Wu. Magnushammer: A transformer-based approach to premise selection. arXiv:2303.04488, 2023.
OpenAI. GPT-4 technical report. arXiv:2303.08774, 2023.
Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. 2023.
Lawrence C. Paulson. Isabelle: A Generic Theorem Prover. Springer-Verlag, 1994.
Stanislas Polu, Ilya Sutskever. Generative language modeling for automated theorem proving. arXiv:2009.03393, 2020.
Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, Ilya Sutskever. Formal Mathematics Statement Curriculum Learning. arXiv:2202.01344, 2022.
Jianing Qiu, Lin Li, Jiankai Sun, Jiachuan Peng, Peilun Shi, Ruiyang Zhang, Yinzhao Dong, Kyle Lam, Frank P.-W. Lo, Bo Xiao. Large AI models in health informatics: Applications, challenges, and the future. arXiv:2303.11568, 2023.
Markus Norman Rabe, Dennis Lee, Kshitij Bansal, Christian Szegedy. Mathematical reasoning via self-supervised skip-tree training. In 9th International Conference on Learning Representations (ICLR 2021), 2021.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. 2020.
Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gómez Colmenarejo, Alexander Novikov, et al. A Generalist Agent. arXiv:2205.06175, 2022.
Alexandre Riazanov, Andrei Voronkov. The design and implementation of Vampire. AI Communications, 15(2-3), 2002.
J. A. Robinson. A Machine-Oriented Logic Based on the Resolution Principle. Journal of the ACM, 12(1), 1965.
Stephan Schulz. E - a brainiac theorem prover. AI Communications, 15(2-3), 2002.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. 2023.
David Silver, Aja Huang, Chris J. Maddison, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 2016.
David Silver, Thomas Hubert, Julian Schrittwieser, et al. Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815, 2017.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. LLaMA: Open and Efficient Foundation Language Models. arXiv:2302.13971, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, et al. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv:2307.09288, 2023b.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv:2305.16291, 2023a.
Haiming Wang, Ye Yuan, Zhengying Liu, Jianhao Shen, Yichun Yin, Jing Xiong, Enze Xie, Han Shi, Yujun Li, Lin Li, et al. DT-Solver: Automated theorem proving with dynamic-tree sampling guided by proof-level value function. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), 2023b.
Mingzhe Wang, Jia Deng. Learning to Prove Theorems by Learning to Generate Theorems. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.
Qingxiang Wang, Cezary Kaliszyk, Josef Urban. First experiments with neural translation of informal to formal mathematics. In Intelligent Computer Mathematics (CICM 2018), Springer, 2018.
Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji. Unleashing cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. 2023c.
Christoph Weidenbach, Dilyana Dimova, Arnaud Fietzke, Rohit Kumar, Martin Suda, Patrick Wischnewski. SPASS version 3.5. In Automated Deduction - CADE-22, Springer, 2009.
Daniel Whalen. Holophrasm: a neural automated theorem prover for higher-order logic. arXiv:1608.02644, 2016.
Wikipedia contributors. Recursion (computer science) - Wikipedia, the free encyclopedia, 2023.
Yuhuai Wu, Albert Qiaochu Jiang, Jimmy Ba, Roger Grosse. INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving. In ICLR 2021.
Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy. Autoformalization with large language models. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
Jing Xiong, Chengming Li, Min Yang, Xiping Hu, Bin Hu. Expression syntax information bottleneck for math word problems. In SIGIR '22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2022.
Kaiyu Yang, Jia Deng. Learning to prove theorems via interacting with proof assistants. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), PMLR, 2019.
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar. LeanDojo: Theorem proving with retrieval-augmented language models. arXiv:2306.15626, 2023.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. 2023.
Zahra Zahedi, Subbarao Kambhampati. Human-AI symbiosis: A survey of current approaches. 2021.
Xueliang Zhao, Wenda Li, Lingpeng Kong. Decomposing the enigma: Subgoal-based demonstration learning for formal theorem proving. arXiv:2305.16366, 2023.
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, Yu Li. Progressive-hint prompting improves reasoning in large language models. arXiv:2304.09797, 2023.
Kunhao Zheng, Jesse Michael Han, Stanislas Polu. miniF2F: a cross-system benchmark for formal Olympiad-level mathematics. 2021. |
238,198,403 | DIFFUSION-BASED VOICE CONVERSION WITH FAST MAXIMUM LIKELIHOOD SAMPLING SCHEME | Voice conversion is a common speech synthesis task which can be solved in different ways depending on a particular real-world scenario. The most challenging one often referred to as one-shot many-to-many voice conversion consists in copying target voice from only one reference utterance in the most general case when both source and target speakers do not belong to the training dataset. We present a scalable high-quality solution based on diffusion probabilistic modeling and demonstrate its superior quality compared to state-of-the-art one-shot voice conversion approaches. Moreover, focusing on real-time applications, we investigate general principles which can make diffusion models faster while keeping synthesis quality at a high level. As a result, we develop a novel Stochastic Differential Equations solver suitable for various diffusion model types and generative tasks as shown through empirical studies and justify it by theoretical analysis. The code is publicly available at https://github.com/huawei-noah/Speech-Backbones/tree/main/DiffVC. | [
227209335,
222140788,
221447287
] | DIFFUSION-BASED VOICE CONVERSION WITH FAST MAXIMUM LIKELIHOOD SAMPLING SCHEME
Vadim Popov [email protected]
Huawei Noah's Ark Lab
MoscowRussia
Ivan Vovk [email protected]
Huawei Noah's Ark Lab
MoscowRussia
Vladimir Gogoryan [email protected]
Huawei Noah's Ark Lab
MoscowRussia
Tasnima Sadekova [email protected]
Huawei Noah's Ark Lab
MoscowRussia
Mikhail Kudinov [email protected]
Huawei Noah's Ark Lab
MoscowRussia
Jiansheng Wei [email protected]
Huawei Noah's Ark Lab
MoscowRussia
DIFFUSION-BASED VOICE CONVERSION WITH FAST MAXIMUM LIKELIHOOD SAMPLING SCHEME
Preprint. Work in progress.
Voice conversion is a common speech synthesis task which can be solved in different ways depending on a particular real-world scenario. The most challenging one often referred to as one-shot many-to-many voice conversion consists in copying target voice from only one reference utterance in the most general case when both source and target speakers do not belong to the training dataset. We present a scalable high-quality solution based on diffusion probabilistic modeling and demonstrate its superior quality compared to state-of-the-art one-shot voice conversion approaches. Moreover, focusing on real-time applications, we investigate general principles which can make diffusion models faster while keeping synthesis quality at a high level. As a result, we develop a novel Stochastic Differential Equations solver suitable for various diffusion model types and generative tasks as shown through empirical studies and justify it by theoretical analysis. The code is publicly available at https://github.com/huawei-noah/Speech-Backbones/tree/main/DiffVC.
INTRODUCTION
Voice conversion (VC) is the task of copying the target speaker's voice while preserving the linguistic content of the utterance pronounced by the source speaker. Practical VC applications often require a model which is able to operate in one-shot mode (i.e. when only one reference utterance is provided to copy the target speaker's voice) for any source and target speakers. Such models are usually referred to as one-shot many-to-many models (or sometimes zero-shot many-to-many models, or just any-to-any VC models). It is challenging to build such a model since it should be able to adapt to a new unseen voice having only one spoken utterance pronounced with it, so it was not until recently that successful one-shot VC solutions started to appear.
Conventional one-shot VC models are designed as autoencoders whose latent space ideally contains only the linguistic content of the encoded utterance, while target voice identity information (usually taking the shape of a speaker embedding) is fed to the decoder as conditioning. Whereas in the pioneering AutoVC model (Qian et al., 2019) only the speaker embedding from a pre-trained speaker verification network was used as conditioning, several other models improved on AutoVC by enriching the conditioning with phonetic features such as pitch and loudness (Qian et al., 2020; Nercessian, 2020), or by training the voice conversion and speaker embedding networks jointly (Chou & Lee, 2019). Also, several papers (Lin et al., 2021; Ishihara & Saito, 2020; Liu et al., 2021b) made use of the attention mechanism to better fuse specific features of the reference utterance into the source utterance, thus improving the decoder performance. Apart from providing the decoder with sufficiently rich information, one of the main problems autoencoder VC models face is to disentangle the source speaker identity from the speech content in the encoder. Some models (Qian et al., 2019; Nercessian, 2020) solve this problem by introducing an information bottleneck. Among other popular solutions of the disentanglement problem one can mention applying the vector quantization technique to the content information (Wu et al., 2020; Wang et al., 2021) and utilizing features of Variational AutoEncoders (Luong & Tran, 2021).

The model we propose in this paper solves the disentanglement problem by employing an encoder predicting "average voice": it is trained to transform mel features corresponding to each phoneme into mel features corresponding to this phoneme averaged across a large multi-speaker dataset. As for the decoder, in our VC model it is designed as a part of a Diffusion Probabilistic Model (DPM), since this class of generative models has shown very good results in speech-related tasks like raw waveform generation (Chen et al., 2021a; Kong et al., 2021) and mel feature generation (Popov et al., 2021; Jeong et al., 2021). However, this decoder choice poses a problem of slow inference because the DPM forward pass scheme is iterative, and to obtain high-quality results it is typically necessary to run it for hundreds of iterations (Ho et al., 2020; Nichol & Dhariwal, 2021). Addressing this issue, we develop a novel inference scheme that significantly reduces the number of iterations sufficient to produce samples of decent quality and does not require model re-training. Although several attempts have been recently made to reduce the number of DPM inference steps (Song et al., 2021a; San-Roman et al., 2021; Watson et al., 2021; Kong & Ping, 2021; Chen et al., 2021a), most of them apply only to some particular types of DPMs. In contrast, our approach generalizes to all popular kinds of DPMs and has a strong connection with likelihood maximization.

This paper has the following structure: in Section 2 we present a one-shot many-to-many VC model and describe the DPM it relies on; Section 3 introduces a novel DPM sampling scheme and establishes its connection with likelihood maximization; the experiments regarding the voice conversion task, as well as those demonstrating the benefits of the proposed sampling scheme, are described in Section 4; we conclude in Section 5.
VOICE CONVERSION DIFFUSION MODEL
As with many other VC models, the one we propose belongs to the family of autoencoders. In fact, any conditional DPM with data-dependent prior (i.e. terminal distribution of forward diffusion) can be seen as such: forward diffusion gradually adding Gaussian noise to data can be regarded as encoder while reverse diffusion trying to remove this noise acts as a decoder. DPMs are trained to minimize the distance (expressed in different terms for different model types) between the trajectories of forward and reverse diffusion processes thus, speaking from the perspective of autoencoders, minimizing reconstruction error. Data-dependent priors have been proposed by Popov et al. (2021) and Lee et al. (2021), and we follow the former paper due to the flexibility of the continuous DPM framework used there. Our approach is summarized in Figure 1.
Figure 1: VC model training and inference. $Y$ stands for the training mel-spectrogram at training and the target mel-spectrogram at inference. Speaker conditioning in the decoder is enabled by the speaker conditioning network $g_t(Y)$ where $Y = \{Y_t\}_{t\in[0,1]}$ is the whole forward diffusion trajectory starting at $Y_0$. Dotted arrows denote operations performed only at training.
ENCODER
We choose average phoneme-level mel features as speaker-independent speech representation. To train the encoder to convert input mel-spectrograms into those of "average voice", we take three steps: (i) first, we apply Montreal Forced Aligner (McAuliffe et al., 2017) to a large-scale multi-speaker LibriTTS dataset (Zen et al., 2019) to align speech frames with phonemes; (ii) next, we obtain average mel features for each particular phoneme by aggregating its mel features across the whole LibriTTS dataset; (iii) the encoder is then trained to minimize mean square error between output mel-spectrograms and ground truth "average voice" mel-spectrograms (i.e. input mel-spectrograms where each phoneme mel feature is replaced with the average one calculated on the previous step).
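A minimal sketch of steps (ii) and (iii)'s target construction is given below. It assumes frame-level phoneme labels already produced by the forced aligner; the helper names (`phoneme_means`, `average_voice_target`) and the data layout are ours, not the released implementation.

```python
# Sketch of building "average voice" targets from MFA alignments (assumed data layout).
import numpy as np
from collections import defaultdict

def phoneme_means(utterances, n_mels=80):
    # utterances: list of (mel [T, n_mels], phoneme_id_per_frame [T]) pairs.
    sums, counts = defaultdict(lambda: np.zeros(n_mels)), defaultdict(int)
    for mel, phonemes in utterances:
        for frame, ph in zip(mel, phonemes):
            sums[ph] += frame
            counts[ph] += 1
    # Dataset-wide average mel feature vector for each phoneme.
    return {ph: sums[ph] / counts[ph] for ph in sums}

def average_voice_target(mel, phonemes, means):
    # Replace every frame's mel features with the mean of its phoneme;
    # the encoder is trained with MSE against this target.
    return np.stack([means[ph] for ph in phonemes])
```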
The encoder has exactly the same Transformer-based architecture used in Grad-TTS (Popov et al., 2021) except that its inputs are mel features rather than character or phoneme embeddings. Note that unlike Grad-TTS the encoder is trained separately from the decoder described in the next section.
DECODER
Whereas the encoder parameterizes the terminal distribution of the forward diffusion (i.e. the prior), the reverse diffusion is parameterized with the decoder. Following Song et al. (2021c) we use Itô calculus and define diffusions in terms of stochastic processes rather than discrete-time Markov chains.
The general DPM framework we utilize consists of forward and reverse diffusions given by the following Stochastic Differential Equations (SDEs):
$$dX_t = \frac{1}{2}\beta_t(\bar{X} - X_t)\,dt + \sqrt{\beta_t}\,d\overrightarrow{W}_t, \qquad (1)$$
$$d\hat{X}_t = \Big(\frac{1}{2}(\bar{X} - \hat{X}_t) - s_\theta(\hat{X}_t, \bar{X}, t)\Big)\beta_t\,dt + \sqrt{\beta_t}\,d\overleftarrow{W}_t, \qquad (2)$$
where $t \in [0, 1]$, $\overrightarrow{W}$ and $\overleftarrow{W}$ are two independent Wiener processes in $\mathbb{R}^n$, $\beta_t$ is a non-negative function referred to as the noise schedule, $s_\theta$ is the score function with parameters $\theta$, and $\bar{X}$ is an $n$-dimensional vector. It can be shown (Popov et al., 2021) that the forward SDE (1) admits an explicit solution:
$$\mathrm{Law}(X_t|X_0) = \mathcal{N}\Big(e^{-\frac{1}{2}\int_0^t\beta_s ds}X_0 + \big(1 - e^{-\frac{1}{2}\int_0^t\beta_s ds}\big)\bar{X},\ \big(1 - e^{-\int_0^t\beta_s ds}\big)I\Big), \qquad (3)$$
where $I$ is the $n \times n$ identity matrix. Thus, if the noise follows the linear schedule $\beta_t = \beta_0 + t(\beta_1 - \beta_0)$ for $\beta_0$ and $\beta_1$ such that $e^{-\int_0^1\beta_s ds}$ is close to zero, then $\mathrm{Law}(X_1)$ is close to $\mathcal{N}(\bar{X}, I)$, which is the prior in this DPM. The reverse diffusion (2) is trained by minimizing the weighted $L_2$ loss:
$$\theta^* = \arg\min_\theta \mathcal{L}(\theta) = \arg\min_\theta \int_0^1 \lambda_t\,\mathbb{E}_{X_0, X_t}\big\|s_\theta(X_t, \bar{X}, t) - \nabla\log p_{t|0}(X_t|X_0)\big\|_2^2\,dt, \qquad (4)$$
where $p_{t|0}(X_t|X_0)$ is the probability density function (pdf) of the conditional distribution (3) and $\lambda_t = 1 - e^{-\int_0^t\beta_s ds}$. The distribution (3) is Gaussian, so we have
$$\nabla\log p_{t|0}(X_t|X_0) = -\frac{X_t - X_0 e^{-\frac{1}{2}\int_0^t\beta_s ds} - \bar{X}\big(1 - e^{-\frac{1}{2}\int_0^t\beta_s ds}\big)}{1 - e^{-\int_0^t\beta_s ds}}. \qquad (5)$$
At training, the time variable $t$ is sampled uniformly from $[0, 1]$, noisy samples $X_t$ are generated according to formula (3), and formula (5) is used to calculate the loss $\mathcal{L}$ on these samples. Note that $X_t$ can be sampled without calculating the intermediate values $\{X_s\}_{0<s<t}$, which makes the optimization task (4) time and memory efficient. A well-trained reverse diffusion (2) has trajectories close to those of the forward diffusion (1), so generating data with this DPM amounts to sampling $\hat{X}_1$ from the prior $\mathcal{N}(\bar{X}, I)$ and solving SDE (2) backwards in time.
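For illustration, the following sketch implements one training step according to (3)-(5). The decoder interface `score_fn` and the linear-schedule endpoints (beta0 = 0.05, beta1 = 20, the Grad-TTS defaults) are assumptions, not the authors' exact released code.

```python
# Sketch of an MR-VP training step implementing Eqs. (3)-(5); shapes are [B, 80, T].
import torch

def train_step(score_fn, x0, x_bar, beta0=0.05, beta1=20.0):
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device)                # t ~ U[0, 1]
    cum = beta0 * t + 0.5 * (beta1 - beta0) * t ** 2   # \int_0^t beta_s ds
    alpha = torch.exp(-0.5 * cum).view(b, 1, 1)        # e^{-0.5 \int beta}
    var = (1.0 - torch.exp(-cum)).view(b, 1, 1)        # 1 - e^{-\int beta}
    z = torch.randn_like(x0)
    xt = alpha * x0 + (1.0 - alpha) * x_bar + var.sqrt() * z   # sample of Eq. (3)
    target = -(xt - alpha * x0 - (1.0 - alpha) * x_bar) / var  # Eq. (5)
    lam = var.view(b)                                  # lambda_t = 1 - e^{-\int beta}
    loss = (lam * ((score_fn(xt, x_bar, t) - target) ** 2).mean(dim=(1, 2))).mean()
    return loss
```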
The DPM described above was introduced by Popov et al. (2021) for the text-to-speech task, and we adapt it for our purposes. We put $\bar{X} = \varphi(X_0)$ where $\varphi$ is the encoder, i.e. $\bar{X}$ is the "average voice" mel-spectrogram which we want to transform into that of the target voice. We condition the decoder $s_\theta = s_\theta(\hat{X}_t, \bar{X}, g_t(Y), t)$ on a trainable function $g_t(Y)$ to provide it with information about the target speaker ($Y$ stands for forward trajectories of the target mel-spectrogram at inference and those of the training mel-spectrogram at training). This function is a neural network trained jointly with the decoder. We experimented with three input types for this network:
• d-only - the input is the speaker embedding extracted from the target mel-spectrogram $Y_0$ with the pre-trained speaker verification network employed in (Jia et al., 2018);
• wodyn - in addition, the noisy target mel-spectrogram $Y_t$ is used as input;
• whole - in addition, the whole dynamics of the target mel-spectrogram under forward diffusion $\{Y_s\,|\,s = 0.5/15, 1.5/15, .., 14.5/15\}$ is used as input.
The decoder architecture is based on U-Net (Ronneberger et al., 2015) and is the same as in Grad-TTS but with four times more channels to better capture the whole range of human voices. The speaker conditioning network $g_t(Y)$ is composed of 2D convolutions and MLPs and is described in detail in Appendix H. Its output is a 128-dimensional vector which is broadcast-concatenated to the concatenation of $\hat{X}_t$ and $\bar{X}$ as additional 128 channels.
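A minimal sketch of this broadcast-concatenation is shown below; the tensor shapes are illustrative.

```python
# Sketch of assembling the decoder input described above (shapes are illustrative).
import torch

def condition_decoder_input(xt, x_bar, spk_emb):
    # xt, x_bar: [B, 80, T] noisy and "average voice" mel-spectrograms.
    # spk_emb:   [B, 128] output of the speaker conditioning network g_t(Y).
    spk = spk_emb.unsqueeze(-1).expand(-1, -1, xt.shape[-1])  # broadcast to [B, 128, T]
    return torch.cat([xt, x_bar, spk], dim=1)                 # [B, 80 + 80 + 128, T]
```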
RELATED VC MODELS
To the best of our knowledge, there exist two diffusion-based voice conversion models: VoiceGrad (Kameoka et al., 2020) and DiffSVC (Liu et al., 2021a). The one we propose differs from them in several important aspects. First, neither of the mentioned papers considers a one-shot many-to-many voice conversion scenario. Next, these models take no less than 100 reverse diffusion steps at inference, while we pay special attention to reducing the number of iterations (see Section 3), achieving good quality with as few as 6 iterations. Furthermore, VoiceGrad performs voice conversion by running Langevin dynamics starting from the source mel-spectrogram, thus implicitly assuming that forward diffusion trajectories starting from the mel-spectrogram we want to synthesize are likely to pass through the neighborhood of the source mel-spectrogram on their way to Gaussian noise. Such an assumption, allowing to have only one network instead of an encoder-decoder architecture, is too strong and hardly holds for real voices. Finally, DiffSVC performs singing voice conversion and relies on PPGs as speaker-independent speech representation.
MAXIMUM LIKELIHOOD SDE SOLVER
In this section, we develop a fixed-step first-order reverse SDE solver that maximizes the log-likelihood of sample paths of the forward diffusion. This solver differs from the general-purpose Euler-Maruyama SDE solver (Kloeden & Platen, 1992) by infinitesimally small values which can, however, become significant when we sample from a diffusion model using only a few iterations.
Consider the following forward and reverse SDEs defined in Euclidean space $\mathbb{R}^n$ for $t \in [0, 1]$:
$$dX_t = -\frac{1}{2}\beta_t X_t\,dt + \sqrt{\beta_t}\,d\overrightarrow{W}_t, \qquad (6\text{-}F)$$
$$d\hat{X}_t = \Big(-\frac{1}{2}\beta_t\hat{X}_t - \beta_t s_\theta(\hat{X}_t, t)\Big)dt + \sqrt{\beta_t}\,d\overleftarrow{W}_t, \qquad (6\text{-}R)$$
where $\overrightarrow{W}$ is a forward Wiener process (i.e. its forward increments $\overrightarrow{W}_t - \overrightarrow{W}_s$ are independent of $\overrightarrow{W}_s$ for $t > s$) and $\overleftarrow{W}$ is a backward Wiener process (i.e. backward increments $\overleftarrow{W}_s - \overleftarrow{W}_t$ are independent of $\overleftarrow{W}_t$ for $s < t$).
Following Song et al. (2021c), we will call the DPM (6) Variance Preserving (VP). For simplicity we will derive the maximum likelihood solver for this particular type of diffusion models. The equation (1) underlying the VC diffusion model described in Section 2 can be transformed into the equation (6-F) by a constant shift, and we will call such diffusion models Mean Reverting Variance Preserving (MR-VP). The VP model analysis carried out in this section can be easily extended (see Appendices D, E and F) to the MR-VP model as well as to other common diffusion model types such as sub-VP and VE described by Song et al. (2021c).
The forward SDE (6-F) admits an explicit solution:
$$\mathrm{Law}(X_t|X_s) = \mathcal{N}\big(\gamma_{s,t}X_s,\ (1-\gamma_{s,t}^2)I\big), \qquad \gamma_{s,t} = \exp\Big(-\frac{1}{2}\int_s^t\beta_u\,du\Big), \qquad (7)$$
for all $0 \le s < t \le 1$. This formula is derived by means of Itô calculus in Appendix A.
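As a quick numerical sanity check of (7), one can simulate the forward VP SDE on a fine Euler-Maruyama grid and compare the empirical moments of $X_t$ with the closed-form ones; the linear noise schedule below is an assumption made for illustration only.

```python
# Monte-Carlo check of Eq. (7) against a fine Euler-Maruyama simulation.
import numpy as np

beta0, beta1, T, steps, n = 0.05, 20.0, 0.5, 5000, 200_000
dt = T / steps
rng = np.random.default_rng(0)
x = np.ones(n)  # start from the constant X_0 = 1
for k in range(steps):
    t = k * dt
    beta = beta0 + t * (beta1 - beta0)  # assumed linear schedule
    x += -0.5 * beta * x * dt + np.sqrt(beta * dt) * rng.standard_normal(n)

# gamma_{0,T} from the closed form in Eq. (7).
gamma = np.exp(-0.5 * (beta0 * T + 0.5 * (beta1 - beta0) * T ** 2))
print(x.mean(), gamma)                   # empirical vs. analytic mean
print(x.std(), np.sqrt(1 - gamma ** 2))  # empirical vs. analytic std
```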
The reverse SDE (6-R), parameterized with a neural network $s_\theta$, is trained to approximate the gradient of the log-density of the noisy data $X_t$:
$$\theta^* = \arg\min_\theta \int_0^1 \lambda_t\,\mathbb{E}_{X_t}\big\|s_\theta(X_t, t) - \nabla\log p_t(X_t)\big\|_2^2\,dt, \qquad (8)$$
where the expectation is taken with respect to the noisy data distribution $\mathrm{Law}(X_t)$ with pdf $p_t(\cdot)$ and $\lambda_t$ is some positive weighting function. Note that certain Lipschitz constraints should be satisfied by the coefficients of SDEs (6) to guarantee the existence of strong solutions (Liptser & Shiryaev, 1978), and throughout this section we assume these conditions are satisfied, as well as those from (Anderson, 1982) which guarantee that the paths $\hat{X}$ generated by the reverse SDE (6-R) for the optimal $\theta^*$ equal the forward SDE (6-F) paths $X$ in distribution.
The generative procedure of a VP DPM consists in solving the reverse SDE (6-R) backwards in time starting from $\hat{X}_1 \sim \mathcal{N}(0, I)$. The common Euler-Maruyama solver introduces a discretization error (Kloeden & Platen, 1992) which may harm sample quality when the number of iterations is small. At the same time, it is possible to design unbiased (Henry-Labordère et al., 2017) or even exact (Beskos & Roberts, 2005) numerical solvers for some particular SDE types. Theorem 1 shows that in the case of diffusion models we can make use of the forward diffusion (6-F) and propose a reverse SDE solver which is better than the general-purpose Euler-Maruyama one in terms of likelihood.
The solver proposed in Theorem 1 is expressed in terms of the following values:
$$\mu_{s,t} = \gamma_{s,t}\frac{1-\gamma_{0,s}^2}{1-\gamma_{0,t}^2}, \qquad \nu_{s,t} = \gamma_{0,s}\frac{1-\gamma_{s,t}^2}{1-\gamma_{0,t}^2}, \qquad \sigma_{s,t}^2 = \frac{(1-\gamma_{0,s}^2)(1-\gamma_{s,t}^2)}{1-\gamma_{0,t}^2}, \qquad (9)$$
$$\kappa_{t,h}^* = \frac{\nu_{t-h,t}(1-\gamma_{0,t}^2)}{\gamma_{0,t}\beta_t h} - 1, \qquad \omega_{t,h}^* = \frac{\mu_{t-h,t}-1}{\beta_t h} + \frac{1+\kappa_{t,h}^*}{1-\gamma_{0,t}^2} - \frac{1}{2},$$
$$(\sigma_{t,h}^*)^2 = \sigma_{t-h,t}^2 + \frac{1}{n}\nu_{t-h,t}^2\,\mathbb{E}_{X_t}\big[\mathrm{Tr}\big(\mathrm{Var}(X_0|X_t)\big)\big], \qquad (10)$$
where $n$ is the data dimensionality, $\mathrm{Var}(X_0|X_t)$ is the covariance matrix of the conditional data distribution $\mathrm{Law}(X_0|X_t)$ (so $\mathrm{Tr}(\mathrm{Var}(X_0|X_t))$ is the overall variance across all $n$ dimensions), and the expectation $\mathbb{E}_{X_t}[\cdot]$ is taken with respect to the unconditional noisy data distribution $\mathrm{Law}(X_t)$.
Theorem 1. Consider a DPM characterized by the SDEs (6) with the reverse diffusion trained till optimality. Let $N \in \mathbb{N}$ be any natural number and $h = 1/N$. Consider the following class of fixed step size $h$ reverse SDE solvers parameterized with triplets of real numbers $\{(\hat{\kappa}_{t,h}, \hat{\omega}_{t,h}, \hat{\sigma}_{t,h})\,|\,t = h, 2h, .., 1\}$:
$$\hat{X}_{t-h} = \hat{X}_t + \beta_t h\Big(\big(\tfrac{1}{2} + \hat{\omega}_{t,h}\big)\hat{X}_t + (1 + \hat{\kappa}_{t,h})s_{\theta^*}(\hat{X}_t, t)\Big) + \hat{\sigma}_{t,h}\xi_t, \qquad (11)$$
where $\theta^*$ is given by (8), $t = 1, 1-h, .., h$ and $\xi_t$ are i.i.d. samples from $\mathcal{N}(0, I)$. Then:
(i) The log-likelihood of sample paths $X = \{X_{kh}\}_{k=0}^N$ under the generative model $\hat{X}$ is maximized for $\hat{\kappa}_{t,h} = \kappa^*_{t,h}$, $\hat{\omega}_{t,h} = \omega^*_{t,h}$ and $\hat{\sigma}_{t,h} = \sigma^*_{t,h}$.
(ii) Assume that the SDE solver (11) starts from a random variable $\hat{X}_1 \sim \mathrm{Law}(X_1)$. If $X_0$ is a constant or a Gaussian random variable with a diagonal isotropic covariance matrix (i.e. $\delta^2 I$ for $\delta > 0$), then the generative model $\hat{X}$ is exact for $\hat{\kappa}_{t,h} = \kappa^*_{t,h}$, $\hat{\omega}_{t,h} = \omega^*_{t,h}$ and $\hat{\sigma}_{t,h} = \sigma^*_{t,h}$.

Theorem 1 provides an improved DPM sampling scheme which comes at no additional computational cost compared to standard methods (except for the data-dependent term in $\sigma^*$ as discussed in Section 4.3) and requires neither model re-training nor an extensive search over the noise schedule space. The proof of this theorem is given in Appendix C. Note that it establishes optimality of the reverse SDE solver (11) with the parameters (10) in terms of likelihood of discrete paths $X = \{X_{kh}\}_{k=0}^N$, while the optimality of the continuous model (6-R) on continuous paths $\{\hat{X}_t\}_{t\in[0,1]}$ is guaranteed for a model with parameters $\theta = \theta^*$ as shown in (Song et al., 2021c).
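The optimal parameters (9)-(10) are cheap to evaluate in closed form. The sketch below computes them for the VP model under an assumed linear noise schedule (the endpoints are illustrative), dropping the data-dependent term in $\sigma^*$, which is the same simplification the ML-N scheme in Section 4 uses.

```python
# Sketch of the maximum likelihood coefficients of Eqs. (9)-(10) for the VP model.
import math

def gamma(s, t, beta0=0.05, beta1=20.0):
    # gamma_{s,t} = exp(-0.5 * \int_s^t beta_u du) for beta_u = beta0 + u*(beta1 - beta0).
    integral = beta0 * (t - s) + 0.5 * (beta1 - beta0) * (t ** 2 - s ** 2)
    return math.exp(-0.5 * integral)

def ml_coefficients(t, h, beta0=0.05, beta1=20.0):
    beta_t = beta0 + t * (beta1 - beta0)
    g0s, g0t = gamma(0, t - h, beta0, beta1), gamma(0, t, beta0, beta1)
    gst = gamma(t - h, t, beta0, beta1)
    mu = gst * (1 - g0s ** 2) / (1 - g0t ** 2)               # mu_{t-h,t}, Eq. (9)
    nu = g0s * (1 - gst ** 2) / (1 - g0t ** 2)               # nu_{t-h,t}, Eq. (9)
    sigma2 = (1 - g0s ** 2) * (1 - gst ** 2) / (1 - g0t ** 2)  # sigma^2_{t-h,t}
    kappa = nu * (1 - g0t ** 2) / (g0t * beta_t * h) - 1      # kappa*, Eq. (10)
    omega = (mu - 1) / (beta_t * h) + (1 + kappa) / (1 - g0t ** 2) - 0.5  # omega*
    # Data-dependent term of (sigma*)^2 is dropped here (assumed small).
    return kappa, omega, math.sqrt(sigma2)
```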
The class of reverse SDE solvers considered in Theorem 1 is rather broad: it is the class of all fixed-step solvers whose increments at time $t$ are a linear combination of $\hat{X}_t$, $s_\theta(\hat{X}_t, t)$ and Gaussian noise with zero mean and a diagonal isotropic covariance matrix. As a particular case it includes the Euler-Maruyama solver ($\hat{\kappa}_{t,h} \equiv 0$, $\hat{\omega}_{t,h} \equiv 0$, $\hat{\sigma}_{t,h} \equiv \sqrt{\beta_t h}$), and for fixed $t$ and $h \to 0$ we have $\kappa^*_{t,h} = \bar{o}(1)$, $\omega^*_{t,h} = \bar{o}(1)$ and $\sigma^*_{t,h} = \sqrt{\beta_t h}(1 + \bar{o}(1))$ (the proof is given in Appendix B), so the optimal SDE solver significantly differs from the general-purpose Euler-Maruyama solver only when $N$ is rather small or $t$ has the same order as $h$, i.e. on the final steps of DPM inference. Appendix G contains toy examples demonstrating the difference between the proposed optimal SDE solver and the Euler-Maruyama one depending on the step size.
The result (ii) from Theorem 1 strengthens the result (i) for some particular data distributions, but it may seem useless since in practice the data distribution is far from being constant or Gaussian. However, in the case of generation with strong conditioning (e.g. mel-spectrogram inversion) the assumptions on the data distribution may become viable: in the limiting case when our model is conditioned on $c = \psi(X_0)$ for an injective function $\psi$, the random variable $X_0|c$ becomes a constant $\psi^{-1}(c)$.
EXPERIMENTS
We trained two groups of models: Diff-VCTK models on the VCTK (Yamagishi et al., 2019) dataset containing 109 speakers (9 speakers were held out for testing purposes) and Diff-LibriTTS models on LibriTTS (Zen et al., 2019) containing approximately 1100 speakers (10 speakers were held out). For every model, both the encoder and the decoder were trained on the same dataset. Training hyperparameters, implementation, and data processing details can be found in Appendix I. For mel-spectrogram inversion, we used the pre-trained universal HiFi-GAN vocoder (Kong et al., 2020) operating at 22.05kHz. All subjective human evaluation was carried out on Amazon Mechanical Turk (AMT) with Master assessors to ensure the reliability of the obtained Mean Opinion Scores (MOS). In all AMT tests we considered unseen-to-unseen conversion with 25 unseen (for both Diff-VCTK and Diff-LibriTTS) speakers: 9 VCTK speakers, 10 LibriTTS speakers and 6 internal speakers. For VCTK source speakers we also ensured that source phrases were unseen during training. We place other details of the AMT listening tests in Appendix J. A small subset of the speech samples used in them is available at our demo page https://diffvc-fast-ml-solver.github.io which we encourage the reader to visit.
As for sampling, we considered the following class of reverse SDE solvers:
$$\hat{X}_{t-h} = \hat{X}_t + \beta_t h\Big(\big(\tfrac{1}{2} + \hat{\omega}_{t,h}\big)(\hat{X}_t - \bar{X}) + (1 + \hat{\kappa}_{t,h})s_\theta(\hat{X}_t, \bar{X}, g_t(Y), t)\Big) + \hat{\sigma}_{t,h}\xi_t, \qquad (12)$$
where $t = 1, 1-h, .., h$ and $\xi_t$ are i.i.d. samples from $\mathcal{N}(0, I)$.
For $\hat{\kappa}_{t,h} = \kappa^*_{t,h}$, $\hat{\omega}_{t,h} = \omega^*_{t,h}$ and $\hat{\sigma}_{t,h} = \sigma^*_{t,h}$ (where $\kappa^*_{t,h}$, $\omega^*_{t,h}$ and $\sigma^*_{t,h}$ are given by (10)) it becomes the maximum likelihood reverse SDE solver for the MR-VP DPM (1)-(2), as shown in Appendix D. In practice it is not trivial to estimate the variance of the conditional distribution $\mathrm{Law}(X_0|X_t)$, so we skipped this term in $\sigma^*_{t,h}$, assuming this variance to be rather small because of the strong conditioning on $g_t(Y)$, and just used $\hat{\sigma}_{t,h} = \sigma_{t-h,t}$, calling this sampling method ML-N ($N = 1/h$ is the number of SDE solver steps).
We also experimented with the Euler-Maruyama solver EM-N (i.e. $\hat{\kappa}_{t,h} = 0$, $\hat{\omega}_{t,h} = 0$, $\hat{\sigma}_{t,h} = \sqrt{\beta_t h}$) and "probability flow sampling" from (Song et al., 2021c), which we denote by PF-N ($\hat{\kappa}_{t,h} = -0.5$, $\hat{\omega}_{t,h} = 0$, $\hat{\sigma}_{t,h} = 0$).
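Putting the three parameterizations together, a sketch of the $N$-step inference loop implementing (12) is given below. It reuses the `ml_coefficients` helper sketched in Section 3, and `score_fn`, the speaker conditioning `g`, and the schedule endpoints are assumed interfaces, not the released code.

```python
# Sketch of N-step inference via Eq. (12) with the ML-N / EM-N / PF-N schemes.
import torch
# ml_coefficients is the pure-Python helper from the Section 3 sketch.

def sample(score_fn, x_bar, g, N=6, scheme="ML", beta0=0.05, beta1=20.0):
    h = 1.0 / N
    x = x_bar + torch.randn_like(x_bar)        # X_1 ~ N(x_bar, I), the MR-VP prior
    for k in range(N, 0, -1):                  # t = 1, 1-h, ..., h
        t = k * h
        beta_t = beta0 + t * (beta1 - beta0)
        if scheme == "ML":                     # maximum likelihood solver
            kappa, omega, sigma = ml_coefficients(t, h, beta0, beta1)
        elif scheme == "EM":                   # Euler-Maruyama
            kappa, omega, sigma = 0.0, 0.0, (beta_t * h) ** 0.5
        else:                                  # "PF": probability flow
            kappa, omega, sigma = -0.5, 0.0, 0.0
        drift = (0.5 + omega) * (x - x_bar) + (1 + kappa) * score_fn(x, x_bar, g, t)
        x = x + beta_t * h * drift + sigma * torch.randn_like(x)
    return x
```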
SPEAKER CONDITIONING ANALYSIS
For each dataset we trained three models, one for each input type of the speaker conditioning network $g_t(Y)$ (see Section 2.2). Although none of these input types had much influence on either speaker similarity or speech naturalness, we did two experiments to choose the best models (one for each training dataset) in terms of speaker similarity for further comparison with baseline systems. We compared voice conversion results (produced by the ML-30 sampling scheme) on 92 source-target pairs. AMT workers were asked which of the three models (if any) sounded most similar to the target speaker and which of them (if any) sounded least similar. For Diff-VCTK and Diff-LibriTTS models each conversion pair was evaluated 4 and 5 times respectively. Table 1 demonstrates that for both Diff-VCTK and Diff-LibriTTS the best option is wodyn, i.e. to condition the decoder at time $t$ on the speaker embedding together with the noisy target mel-spectrogram $Y_t$. Conditioning on $Y_t$ allows making use of diffusion-specific information about how the noisy target sounds, whereas the embedding from the pre-trained speaker verification network contains information only about the clean target. Taking these results into consideration, we used Diff-VCTK-wodyn and Diff-LibriTTS-wodyn in the remaining experiments.
ANY-TO-ANY VOICE CONVERSION
We chose four recently proposed VC models capable of one-shot many-to-many synthesis as the baselines:
• AGAIN-VC (Chen et al., 2021b), an improved version of a conventional autoencoder AdaIN-VC solving the disentanglement problem by means of instance normalization;
• FragmentVC (Lin et al., 2021), an attention-based model relying on wav2vec 2.0 (Baevski et al., 2020) to obtain speech content from the source utterance;
• VQMIVC (Wang et al., 2021), a state-of-the-art approach among those employing vector quantization techniques;
• BNE-PPG-VC (Liu et al., 2021b), an improved variant of PPG-based VC models combining a bottleneck feature extractor obtained from a phoneme recognizer with a seq2seq-based synthesis module.
As shown in (Liu et al., 2021b), PPG-based VC models provide high voice conversion quality, competitive even with that of the state-of-the-art VC models taking the text transcription corresponding to the source utterance as input. Therefore, we can consider BNE-PPG-VC a state-of-the-art model in our setting.
Baseline voice conversion results were produced by the pre-trained VC models provided in the official GitHub repositories. Since only BNE-PPG-VC has a model pre-trained on a large-scale dataset (namely, LibriTTS + VCTK), we ran two subjective human evaluation tests: the first comparing Diff-VCTK with AGAIN-VC, FragmentVC and VQMIVC trained on VCTK, and the second comparing Diff-LibriTTS with BNE-PPG-VC. The results of these tests are given in Tables 2 and 3 respectively. Speech naturalness and speaker similarity were assessed separately. AMT workers evaluated voice conversion quality on 350 source-target pairs on a 5-point scale. In the first test, each pair was assessed 6 times on average both in the speech naturalness and the speaker similarity evaluation; in the second one, each pair was assessed 8 and 9 times on average in the speech naturalness and speaker similarity evaluations correspondingly. No fewer than 41 unique assessors took part in each test.

Table 2 demonstrates that our model performs significantly better than the baselines both in terms of naturalness and speaker similarity, even when only 6 reverse diffusion iterations are used. Despite working almost equally well on VCTK speakers, the best baseline VQMIVC shows poor performance on other speakers, perhaps because it fails to generalize to different domains with lower recording quality. Although Diff-VCTK performance also degrades on non-VCTK speakers, it achieves good speaker similarity of MOS 3.6 on VCTK speakers when the ML-30 sampling scheme is used, and only a slightly worse MOS 3.5 when 5x fewer iterations are used at inference.

Table 3 contains human evaluation results of Diff-LibriTTS for four sampling schemes: ML-30 with 30 reverse SDE solver steps, and ML-6, EM-6 and PF-6 with 6 steps of reverse diffusion. The three schemes taking 6 steps achieved a real-time factor (RTF) around 0.1 on GPU (i.e. inference was 10 times faster than real time), while the one taking 30 steps had an RTF around 0.5. The proposed model Diff-LibriTTS-ML-30 and the baseline BNE-PPG-VC show the same performance on the VCTK test set in terms of speech naturalness, the latter being slightly better in terms of speaker similarity, which can perhaps be explained by the fact that BNE-PPG-VC was trained on the union of VCTK and LibriTTS whereas our model was trained only on LibriTTS. As for the whole test set, which also contains unseen LibriTTS and internal speakers, Diff-LibriTTS-ML-30 outperforms the BNE-PPG-VC model, achieving MOS 4.0 and 3.4 in terms of speech naturalness and speaker similarity respectively. Due to employing a PPG extractor trained on the large-scale ASR dataset LibriSpeech (Panayotov et al., 2015), BNE-PPG-VC has fewer mispronunciation issues than our model, but its synthesized speech suffers from more sonic artifacts. This observation makes us believe that incorporating PPG features in the proposed diffusion VC framework is a promising direction for future research. Table 3 also demonstrates the benefits of the proposed maximum likelihood sampling scheme over other sampling methods for a small number of inference steps: only the ML-N scheme allows us to use as few as N = 6 iterations with an acceptable quality degradation of MOS 0.2 and 0.1 in terms of naturalness and speaker similarity respectively, while the two other competing methods lead to much more significant quality degradation.

Figure 2: CIFAR-10 images randomly sampled from VP DPM by running 10 reverse diffusion steps with the following schemes (from left to right): "euler-maruyama", "probability flow", "maximum likelihood (τ = 0.5)", "maximum likelihood (τ = 1.0)".
MAXIMUM LIKELIHOOD SAMPLING
To show that the maximum likelihood sampling scheme proposed in Section 3 generalizes to different tasks and DPM types, we took the models trained by Song et al. (2021c) on the CIFAR-10 image generation task and compared our method with the other sampling schemes described in that paper in terms of Fréchet Inception Distance (FID).
The main difficulty in applying the maximum likelihood SDE solver is estimating the data-dependent term $\mathbb{E}[\mathrm{Tr}(\mathrm{Var}(X_0|X_t))]$ in $\sigma^*_{t,h}$. Although in the current experiments we just set this term to zero, we can think of two possible ways to estimate it: (i) approximate $\mathrm{Var}(X_0|X_t)$ with $\mathrm{Var}(X_0|X_t = x_t)$: sample noisy data $x_t$, solve the reverse SDE with a sufficiently small step size starting from the terminal condition $\hat X_t = x_t$ several times, and calculate the sample variance of the resulting solutions at the initial point $\hat X_0$; (ii) use the formula (58) from Appendix C to calculate $\mathrm{Var}(X_0|X_t)$, assuming that $X_0$ is distributed normally with mean and variance equal to the sample mean and sample variance computed on the training dataset. Experimenting with these techniques and exploring new ones seems to be an interesting direction for future research.
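Strategy (i) can be sketched as a small Monte Carlo routine; `forward_sample` and `reverse_solve` are hypothetical interfaces standing in for the forward corruption process and a fine-step stochastic reverse solver:

```python
import numpy as np

def estimate_trace_var(x0_batch, t, forward_sample, reverse_solve, n_rep=8):
    """Monte Carlo estimate of E_{X_t}[Tr(Var(X_0 | X_t))], strategy (i).

    forward_sample(x0, t) draws X_t | X_0 = x0 from the forward diffusion;
    reverse_solve(x_t, t)  runs a fine-step stochastic reverse solver from
    terminal condition X_t = x_t down to t = 0 (assumed interface).
    """
    traces = []
    for x0 in x0_batch:
        x_t = forward_sample(x0, t)                    # one noisy sample
        recs = np.stack([reverse_solve(x_t, t) for _ in range(n_rep)])
        traces.append(recs.var(axis=0, ddof=1).sum())  # Tr of sample variance
    return float(np.mean(traces))
```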
Another important practical consideration is that the proposed scheme is proven to be optimal only for score matching networks trained till optimality. Therefore, in the experiments whose results are reported in Table 4 we apply the maximum likelihood sampling scheme only when t ≤ τ, while using the standard Euler-Maruyama solver for t > τ, for some hyperparameter τ ∈ [0, 1]. This modification relies on the assumption that the score matching network is closer to being optimal for smaller noise. Table 4 shows that although likelihood and FID are two metrics that do not perfectly correlate (Song et al., 2021b), in most cases our maximum likelihood SDE solver performs best in terms of FID. Also, it is worth mentioning that although τ = 1 is always a rather good choice, tuning this hyperparameter can lead to even better performance. Randomly chosen generated images for various sampling methods can be found in Figure 2.
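The per-step switch can be sketched as follows, reusing the hypothetical `solver_coeffs` helper from the earlier snippet (shown for the VP case only; the sub-VP and VE variants would use their own coefficient formulas from Appendices E and F):

```python
def hybrid_coeffs(t, h, beta, tau=1.0):
    """Maximum likelihood coefficients for t <= tau, Euler-Maruyama above tau."""
    return solver_coeffs(t, h, beta, scheme="ML" if t <= tau else "EM")
```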
CONCLUSION
In this paper, a novel one-shot many-to-many voice conversion model has been presented. Its encoder design and powerful diffusion-based decoder make it possible to achieve good results both in terms of speaker similarity and speech naturalness, even on out-of-domain unseen speakers. Subjective human evaluation verified that the proposed model delivers a scalable VC solution with competitive performance. Furthermore, aiming at fast synthesis, we have developed and theoretically justified a novel sampling scheme. The main idea behind it is to modify the general-purpose Euler-Maruyama SDE solver so as to maximize the likelihood of discrete sample paths of the forward diffusion. Due to the proposed sampling scheme, our VC model is capable of high-quality voice conversion with as few as 6 reverse diffusion steps. Moreover, experiments on the image generation task show that all known diffusion model types can benefit from the proposed SDE solver.
A FORWARD VP SDE SOLUTION
Since the function $\gamma_{0,t}^{-1} X_t$ is linear in $X_t$, taking its differential does not require the second-order derivative term in Itô's formula:
$$d\left(\gamma_{0,t}^{-1} X_t\right) = d\left(e^{\frac{1}{2}\int_0^t\beta_u du}\, X_t\right) = e^{\frac{1}{2}\int_0^t\beta_u du}\cdot\frac{1}{2}\beta_t X_t\, dt + e^{\frac{1}{2}\int_0^t\beta_u du}\left(-\frac{1}{2}\beta_t X_t\, dt + \sqrt{\beta_t}\, d\overrightarrow{W}_t\right) = \sqrt{\beta_t}\, e^{\frac{1}{2}\int_0^t\beta_u du}\, d\overrightarrow{W}_t. \quad (13)$$
Integrating this expression from s to t results in an Itô's integral:
$$e^{\frac{1}{2}\int_0^t\beta_u du}\, X_t - e^{\frac{1}{2}\int_0^s\beta_u du}\, X_s = \int_s^t \sqrt{\beta_\tau}\, e^{\frac{1}{2}\int_0^\tau\beta_u du}\, d\overrightarrow{W}_\tau, \quad (14)$$
or
$$X_t = e^{-\frac{1}{2}\int_s^t\beta_u du}\, X_s + \int_s^t \sqrt{\beta_\tau}\, e^{-\frac{1}{2}\int_\tau^t\beta_u du}\, d\overrightarrow{W}_\tau. \quad (15)$$
The integrand on the right-hand side is deterministic and belongs to L²[0, 1] (for practical noise schedule choices), so its Itô integral is a normal random variable, a martingale (meaning it has zero mean) and satisfies Itô's isometry, which allows us to calculate its variance:
$$\mathrm{Var}(X_t|X_s) = \int_s^t \beta_\tau\, e^{-\int_\tau^t\beta_u du}\, d\tau\; I = \left(1 - e^{-\int_s^t\beta_u du}\right) I. \quad (16)$$
Thus
$$\mathrm{Law}(X_t|X_s) = \mathcal{N}\left(e^{-\frac{1}{2}\int_s^t\beta_u du}\, X_s,\ \left(1 - e^{-\int_s^t\beta_u du}\right) I\right) = \mathcal{N}\left(\gamma_{s,t} X_s,\ (1-\gamma_{s,t}^2)\, I\right). \quad (17)$$
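The closed form (17) is easy to sanity-check by simulating the forward VP SDE on a fine Euler-Maruyama grid and comparing empirical moments; a minimal sketch assuming a linear noise schedule:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = lambda t: 0.05 + t * (20.0 - 0.05)      # linear noise schedule (assumption)
s, t, n_paths, n_grid = 0.2, 0.8, 20000, 2000
xs = 1.5                                        # arbitrary scalar start X_s

# Fine-grained simulation of dX = -0.5*beta*X dt + sqrt(beta) dW from s to t
dt = (t - s) / n_grid
x = np.full(n_paths, xs)
for u in np.linspace(s, t, n_grid, endpoint=False):
    x += -0.5 * beta(u) * x * dt + np.sqrt(beta(u) * dt) * rng.standard_normal(n_paths)

# Closed form (17): mean gamma_{s,t} * X_s, variance 1 - gamma_{s,t}^2
grid = np.linspace(s, t, 1000)
g = np.exp(-0.5 * np.trapz(beta(grid), grid))
print(x.mean(), g * xs)        # should agree up to Monte Carlo error
print(x.var(), 1 - g * g)
```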
B THE OPTIMAL COEFFICIENTS ASYMPTOTICS
First we derive asymptotics for γ:
$$\gamma_{t-h,t} = e^{-\frac{1}{2}\int_{t-h}^t\beta_u du} = 1 - \frac{1}{2}\beta_t h + \bar o(h), \quad (18)$$
$$\gamma^2_{0,t-h} = e^{-\int_0^{t-h}\beta_u du} = e^{-\int_0^t\beta_u du}\, e^{\int_{t-h}^t\beta_u du} = \gamma^2_{0,t}(1+\beta_t h) + \bar o(h), \quad (19)$$
$$\gamma_{0,t-h} = e^{-\frac{1}{2}\int_0^{t-h}\beta_u du} = e^{-\frac{1}{2}\int_0^t\beta_u du}\, e^{\frac{1}{2}\int_{t-h}^t\beta_u du} = \gamma_{0,t}\left(1+\frac{1}{2}\beta_t h\right) + \bar o(h), \quad (20)$$
$$\gamma^2_{t-h,t} = e^{-\int_{t-h}^t\beta_u du} = 1 - \beta_t h + \bar o(h). \quad (21)$$

Then we find asymptotics for μ, ν and σ²:
$$\mu_{t-h,t} = \left(1-\frac{1}{2}\beta_t h + \bar o(h)\right)\frac{1-\gamma^2_{0,t}-\gamma^2_{0,t}\beta_t h + \bar o(h)}{1-\gamma^2_{0,t}} = 1 - \frac{1}{2}\beta_t h - \frac{\gamma^2_{0,t}}{1-\gamma^2_{0,t}}\beta_t h + \bar o(h), \quad (22)$$
$$\nu_{t-h,t} = \frac{\left(\gamma_{0,t}\left(1+\frac{1}{2}\beta_t h\right)+\bar o(h)\right)\left(\beta_t h + \bar o(h)\right)}{1-\gamma^2_{0,t}} = \frac{\gamma_{0,t}}{1-\gamma^2_{0,t}}\beta_t h + \bar o(h), \quad (23)$$
$$\sigma^2_{t-h,t} = \frac{1}{1-\gamma^2_{0,t}}\left(\beta_t h + \bar o(h)\right)\left(1-\gamma^2_{0,t}(1+\beta_t h)+\bar o(h)\right) = \beta_t h + \bar o(h). \quad (24)$$

Finally we get asymptotics for κ*, ω* and σ*:
$$\kappa^*_{t,h} = \frac{\nu_{t-h,t}(1-\gamma^2_{0,t})}{\gamma_{0,t}\beta_t h} - 1 = \frac{\gamma_{0,t-h}(1-\gamma^2_{t-h,t})}{\gamma_{0,t}\beta_t h} - 1 = \frac{(\beta_t h + \bar o(h))\left(\left(1+\frac{1}{2}\beta_t h\right)\gamma_{0,t}+\bar o(h)\right)}{\gamma_{0,t}\beta_t h} - 1 = \bar o(1), \quad (25)$$
$$\beta_t h\,\omega^*_{t,h} = \mu_{t-h,t} - 1 + \frac{\nu_{t-h,t}}{\gamma_{0,t}} - \frac{1}{2}\beta_t h = \beta_t h\left(-1 - \frac{\gamma^2_{0,t}}{1-\gamma^2_{0,t}} + \frac{1}{1-\gamma^2_{0,t}}\right) + \bar o(h) = \bar o(h), \quad (26)$$
$$(\sigma^*_{t,h})^2 = \sigma^2_{t-h,t} + \nu^2_{t-h,t}\,\mathbb{E}_{X_t}\left[\mathrm{Tr}(\mathrm{Var}(X_0|X_t))\right]/n = \beta_t h + \bar o(h) + \frac{\gamma^2_{0,t}}{(1-\gamma^2_{0,t})^2}\beta_t^2 h^2\,\mathbb{E}_{X_t}\left[\mathrm{Tr}(\mathrm{Var}(X_0|X_t))\right]/n = \beta_t h\,(1+\bar o(1)). \quad (27)$$
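These limits can also be verified numerically from the exact expressions for μ, ν and σ; a short sketch (linear schedule assumed, data-dependent term in σ* set to zero) evaluating κ*, ω* and σ²_{t-h,t}/(β_t h) for decreasing h, which should approach 0, 0 and 1 respectively:

```python
import numpy as np

beta = lambda u: 0.05 + u * (20.0 - 0.05)   # linear schedule (assumption)

def gamma(s, t):
    u = np.linspace(s, t, 200)
    return np.exp(-0.5 * np.trapz(beta(u), u))

t = 0.5
for h in [0.1, 0.01, 0.001]:
    g0t, g0s, gst = gamma(0, t), gamma(0, t - h), gamma(t - h, t)
    mu = gst * (1 - g0s**2) / (1 - g0t**2)
    nu = g0s * (1 - gst**2) / (1 - g0t**2)
    sigma2 = (1 - g0s**2) * (1 - gst**2) / (1 - g0t**2)
    kappa = nu * (1 - g0t**2) / (g0t * beta(t) * h) - 1
    omega = (mu - 1) / (beta(t) * h) + (1 + kappa) / (1 - g0t**2) - 0.5
    print(h, kappa, omega, sigma2 / (beta(t) * h))   # -> 0, 0, 1
```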
C PROOF OF THE THEOREM 1
The key fact necessary to prove the Theorem 1 is established in the following lemma.

Lemma 1. Let $p_{0|t}(\cdot|x)$ be the pdf of the conditional distribution $\mathrm{Law}(X_0|X_t = x)$. Then for any $t \in [0, 1]$ and $x \in \mathbb{R}^n$
$$s_{\theta^*}(x, t) = -\frac{1}{1-\gamma_{0,t}^2}\left(x - \gamma_{0,t}\,\mathbb{E}_{p_{0|t}(\cdot|x)} X_0\right). \quad (28)$$
Proof of the Lemma 1. As mentioned in (Song et al., 2021c), an expression alternative to (8) can be derived for θ * under mild assumptions on the data density (Hyvärinen, 2005;Vincent, 2011):
$$\theta^* = \arg\min_\theta \int_0^1 \lambda_t\, \mathbb{E}_{X_0\sim p_0(\cdot)}\,\mathbb{E}_{X_t\sim p_{t|0}(\cdot|X_0)}\left\|s_\theta(X_t, t) - \nabla\log p_{t|0}(X_t|X_0)\right\|_2^2\, dt, \quad (29)$$
where Law (X 0 ) is data distribution with pdf p 0 (·) and Law (X t |X 0 = x 0 ) has pdf p t|0 (·|x 0 ). By Bayes formula we can rewrite this in terms of pdfs p t (·) and p 0|t (·|x t ) of distributions Law (X t ) and
Law (X 0 |X t = x t ) correspondingly:
$$\theta^* = \arg\min_\theta \int_0^1 \lambda_t\, \mathbb{E}_{X_t\sim p_t(\cdot)}\,\mathbb{E}_{X_0\sim p_{0|t}(\cdot|X_t)}\left\|s_\theta(X_t, t) - \nabla\log p_{t|0}(X_t|X_0)\right\|_2^2\, dt. \quad (30)$$
For any n-dimensional random variable ξ with finite second moment and deterministic vector a we have
$$\mathbb{E}\|\xi - a\|_2^2 = \mathbb{E}\|\xi - \mathbb{E}\xi + \mathbb{E}\xi - a\|_2^2 = \mathbb{E}\|\xi - \mathbb{E}\xi\|_2^2 + 2\left\langle\mathbb{E}[\xi - \mathbb{E}\xi], \mathbb{E}\xi - a\right\rangle + \|\mathbb{E}\xi - a\|_2^2 = \mathbb{E}\|\xi - \mathbb{E}\xi\|_2^2 + \|\mathbb{E}\xi - a\|_2^2. \quad (31)$$
In our case $\xi = \nabla\log p_{t|0}(X_t|X_0)$ and $a = s_\theta(X_t, t)$, so $\mathbb{E}\|\xi - \mathbb{E}\xi\|_2^2$ is independent of θ. Thus
$$\theta^* = \arg\min_\theta \int_0^1 \lambda_t\, \mathbb{E}_{X_t\sim p_t(\cdot)}\left\|s_\theta(X_t, t) - \mathbb{E}_{X_0\sim p_{0|t}(\cdot|X_t)}\nabla\log p_{t|0}(X_t|X_0)\right\|_2^2\, dt. \quad (32)$$
Therefore, the optimal score estimation network s θ * can be expressed as
$$s_{\theta^*}(x, t) = \mathbb{E}_{p_{0|t}(\cdot|x)}\nabla\log p_{t|0}(x|X_0) \quad (33)$$
for all t ∈ [0, 1] and x ∈ supp {p t } = R n .
As proven in Appendix A, Law (X t |X 0 ) is Gaussian with mean vector γ 0,t X 0 and covariance matrix (1 − γ 2 0,t ) I, so finally we obtain
$$s_{\theta^*}(x, t) = \mathbb{E}_{p_{0|t}(\cdot|x)}\left[-\frac{1}{1-\gamma_{0,t}^2}\left(x - \gamma_{0,t} X_0\right)\right] = -\frac{1}{1-\gamma_{0,t}^2}\left(x - \gamma_{0,t}\,\mathbb{E}_{p_{0|t}(\cdot|x)} X_0\right). \quad (34)$$
Now let us prove the Theorem 1.
Proof of the Theorem 1. The sampling scheme (11) consists in adding Gaussian noise to a linear combination ofX t and s θ * (X t , t). Combining (11) and the Lemma 1 we get
$$\begin{aligned}
\hat X_{t-h} &= \hat\sigma_{t,h}\xi_t + \hat X_t + \beta_t h\left[\left(\frac{1}{2}+\hat\omega_{t,h}\right)\hat X_t + (1+\hat\kappa_{t,h})\, s_{\theta^*}(\hat X_t, t)\right] \\
&= \hat\sigma_{t,h}\xi_t + \left(1+\beta_t h\left(\frac{1}{2}+\hat\omega_{t,h}\right)\right)\hat X_t + \beta_t h(1+\hat\kappa_{t,h})\left(-\frac{1}{1-\gamma_{0,t}^2}\right)\left(\hat X_t - \gamma_{0,t}\,\mathbb{E}_{p_{0|t}(\cdot|\hat X_t)} X_0\right) \\
&= \hat\sigma_{t,h}\xi_t + \left(1+\beta_t h\left(\frac{1}{2}+\hat\omega_{t,h} - \frac{1+\hat\kappa_{t,h}}{1-\gamma_{0,t}^2}\right)\right)\hat X_t + \frac{\gamma_{0,t}\beta_t h(1+\hat\kappa_{t,h})}{1-\gamma_{0,t}^2}\,\mathbb{E}_{p_{0|t}(\cdot|\hat X_t)} X_0,
\end{aligned} \quad (35)$$
where ξ_t are i.i.d. random variables from the standard normal distribution N(0, I) for t = 1, 1 − h, .., h. Thus, the distribution $\hat X_{t-h}|\hat X_t$ is also Gaussian:
$$\mathrm{Law}(\hat X_{t-h}|\hat X_t) = \mathcal{N}\left(\hat\mu_{t,h}(\hat\kappa_{t,h},\hat\omega_{t,h})\,\hat X_t + \hat\nu_{t,h}(\hat\kappa_{t,h})\,\mathbb{E}_{p_{0|t}(\cdot|\hat X_t)} X_0,\ \hat\sigma^2_{t,h}\, I\right), \quad (36)$$
$$\hat\mu_{t,h}(\hat\kappa_{t,h},\hat\omega_{t,h}) = 1 + \beta_t h\left(\frac{1}{2}+\hat\omega_{t,h} - \frac{1+\hat\kappa_{t,h}}{1-\gamma_{0,t}^2}\right), \quad (37)$$
$$\hat\nu_{t,h}(\hat\kappa_{t,h}) = \frac{\gamma_{0,t}\beta_t h(1+\hat\kappa_{t,h})}{1-\gamma_{0,t}^2}, \quad (38)$$
which leads to the following formula for the transition densities of the reverse diffusion:
$$\hat p_{t-h|t}(x_{t-h}|x_t) = \frac{1}{\left(\sqrt{2\pi}\,\hat\sigma_{t,h}\right)^n}\exp\left(-\frac{\left\|x_{t-h} - \hat\mu_{t,h} x_t - \hat\nu_{t,h}\,\mathbb{E}_{p_{0|t}(\cdot|x_t)} X_0\right\|_2^2}{2\hat\sigma^2_{t,h}}\right). \quad (39)$$
Moreover, comparing $\hat\mu_{t,h}$ and $\hat\nu_{t,h}$ with $\mu_{t-h,t}$ and $\nu_{t-h,t}$ defined in (9), we deduce that
$$\hat\nu_{t,h} = \nu_{t-h,t} \iff \frac{\gamma_{0,t}\beta_t h(1+\hat\kappa_{t,h})}{1-\gamma_{0,t}^2} = \nu_{t-h,t} \iff \hat\kappa_{t,h} = \kappa^*_{t,h}. \quad (40)$$
If we also want $\hat\mu_{t,h} = \mu_{t-h,t}$ to be satisfied, then we should have
$$1 + \beta_t h\left(\frac{1}{2}+\hat\omega_{t,h} - \frac{1+\kappa^*_{t,h}}{1-\gamma_{0,t}^2}\right) = \mu_{t-h,t} \iff \left(\frac{\mu_{t-h,t}-1}{\beta_t h} - \omega^*_{t,h} + \hat\omega_{t,h}\right)\beta_t h + 1 = \mu_{t-h,t}, \quad (41)$$
i.e. $\hat\nu_{t,h} = \nu_{t-h,t}$ and $\hat\mu_{t,h} = \mu_{t-h,t}$ iff $\hat\kappa_{t,h} = \kappa^*_{t,h}$ and $\hat\omega_{t,h} = \omega^*_{t,h}$ for the parameters $\kappa^*_{t,h}$ and $\omega^*_{t,h}$ defined in (10).
As for the corresponding densities of the forward process X, they are Gaussian when conditioned on the initial data point X 0 :
$$\mathrm{Law}(X_{t-h}|X_t, X_0) = \mathcal{N}\left(\mu_{t-h,t} X_t + \nu_{t-h,t} X_0,\ \sigma^2_{t-h,t}\, I\right), \quad (42)$$
where coefficients µ t−h,t , ν t−h,t and σ t−h,t are defined in (9). This formula for Law (X t−h |X t , X 0 ) follows from the general fact about Gaussian distributions appearing in many recent works on diffusion probabilistic modeling (Kingma et al., 2021): if Z t |Z s ∼ N (α t|s Z s , σ 2 t|s I) and Z t |Z 0 ∼ N (α t|0 Z 0 , σ 2 t|0 I) for 0 < s < t, then
$$\mathrm{Law}(Z_s|Z_t, Z_0) = \mathcal{N}\left(\frac{\sigma^2_{s|0}}{\sigma^2_{t|0}}\alpha_{t|s} Z_t + \frac{\sigma^2_{t|s}}{\sigma^2_{t|0}}\alpha_{s|0} Z_0,\ \frac{\sigma^2_{s|0}\sigma^2_{t|s}}{\sigma^2_{t|0}}\, I\right). \quad (43)$$
This fact is a result of applying Bayes formula to normal distributions. In our case α t|s = γ s,t and σ 2 t|s = 1 − γ 2 s,t .
To get an expression for the densities p t−h|t (x t−h |x t ) similar to (39), we need to integrate out the dependency on data X 0 from Gaussian distribution Law (X t−h |X t , X 0 ):
$$p_{t-h|t}(x_{t-h}|x_t) = \int p_{t-h,0|t}(x_{t-h}, x_0|x_t)\, dx_0 = \int p_{t-h|t,0}(x_{t-h}|x_t, x_0)\, p_{0|t}(x_0|x_t)\, dx_0 = \mathbb{E}_{X_0\sim p_{0|t}(\cdot|x_t)}\left[p_{t-h|t,0}(x_{t-h}|x_t, X_0)\right], \quad (44)$$
which implies the following formula:
$$p_{t-h|t}(x_{t-h}|x_t) = \frac{1}{\left(\sqrt{2\pi}\,\sigma_{t-h,t}\right)^n}\,\mathbb{E}_{p_{0|t}(\cdot|x_t)}\exp\left(-\frac{\left\|x_{t-h} - \mu_{t-h,t} x_t - \nu_{t-h,t} X_0\right\|_2^2}{2\sigma^2_{t-h,t}}\right). \quad (45)$$
Note that in contrast with the transition densities (39) of the reverse processX, the corresponding densities (45) of the forward process X are not normal in general.
Our goal is to find the parameters $\hat\kappa$, $\hat\omega$ and $\hat\sigma$ that maximize the log-likelihood of sample paths of X under the probability measure with transition densities $\hat p$. Put $t_k = kh$ for $k = 0, 1, .., N$ and write down this log-likelihood:
$$\int p(x_1, x_{1-h}, .., x_0)\left[\sum_{k=0}^{N-1}\log\hat p_{t_k|t_{k+1}}(x_{t_k}|x_{t_{k+1}}) + \log\hat p_1(x_1)\right] dx_1\, dx_{1-h}\, ..\, dx_0 = \sum_{k=0}^{N-1}\int p(x_{t_k}, x_{t_{k+1}})\log\hat p_{t_k|t_{k+1}}(x_{t_k}|x_{t_{k+1}})\, dx_{t_{k+1}}\, dx_{t_k} + \int p(x_1)\log\hat p_1(x_1)\, dx_1. \quad (46)$$
The last term does not depend on $\hat\kappa$, $\hat\omega$ and $\hat\sigma$, so we can ignore it. Let $R_k$ be the k-th term in the sum above. Since we are free to have different coefficients $\hat\kappa_{t,h}$, $\hat\omega_{t,h}$ and $\hat\sigma_{t,h}$ for different steps, we can maximize each $R_k$ separately. The terms $R_k$ can be expressed as
$$R_k = \int p(x_{t_k}, x_{t_{k+1}})\log\hat p_{t_k|t_{k+1}}(x_{t_k}|x_{t_{k+1}})\, dx_{t_{k+1}}\, dx_{t_k} = \int p(x_{t_{k+1}})\, p_{t_k|t_{k+1}}(x_{t_k}|x_{t_{k+1}})\log\hat p_{t_k|t_{k+1}}(x_{t_k}|x_{t_{k+1}})\, dx_{t_{k+1}}\, dx_{t_k} = \mathbb{E}_{X_{t_{k+1}}}\int p_{t_k|t_{k+1}}(x_{t_k}|X_{t_{k+1}})\log\hat p_{t_k|t_{k+1}}(x_{t_k}|X_{t_{k+1}})\, dx_{t_k}. \quad (47)$$
From now on we will skip the subscripts of $\mu$, $\nu$, $\sigma$, $\hat\mu$, $\hat\nu$, $\hat\sigma$, $\hat\kappa$, $\hat\omega$, $\kappa^*$ and $\omega^*$ for brevity. Denote
$$Q(x_{t_k}, X_{t_{k+1}}, X_0) = \frac{1}{\left(\sqrt{2\pi}\,\sigma\right)^n}\exp\left(-\frac{\left\|x_{t_k} - \mu X_{t_{k+1}} - \nu X_0\right\|_2^2}{2\sigma^2}\right)\log\hat p_{t_k|t_{k+1}}(x_{t_k}|X_{t_{k+1}}). \quad (48)$$
Using the formula (44) for the densities of X together with the explicit expression for the Gaussian density p t k |t k+1 ,0 (x t k |X t k+1 , X 0 ) and applying Fubini's theorem to change the order of integration, we rewrite R k as
$$\begin{aligned}
R_k &= \mathbb{E}_{X_{t_{k+1}}}\int p_{t_k|t_{k+1}}(x_{t_k}|X_{t_{k+1}})\log\hat p_{t_k|t_{k+1}}(x_{t_k}|X_{t_{k+1}})\, dx_{t_k} \\
&= \mathbb{E}_{X_{t_{k+1}}}\int \mathbb{E}_{X_0\sim p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})}\left[p_{t_k|t_{k+1},0}(x_{t_k}|X_{t_{k+1}}, X_0)\log\hat p_{t_k|t_{k+1}}(x_{t_k}|X_{t_{k+1}})\right] dx_{t_k} \\
&= \mathbb{E}_{X_{t_{k+1}}}\int \mathbb{E}_{X_0\sim p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})}\left[Q(x_{t_k}, X_{t_{k+1}}, X_0)\right] dx_{t_k} = \mathbb{E}_{X_{t_{k+1}}}\,\mathbb{E}_{X_0\sim p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})}\int Q(x_{t_k}, X_{t_{k+1}}, X_0)\, dx_{t_k}.
\end{aligned} \quad (49)$$
The formula (48) implies that the integral of $Q(x_{t_k}, X_{t_{k+1}}, X_0)$ with respect to $x_{t_k}$ can be seen as the expectation of $\log\hat p_{t_k|t_{k+1}}(\xi|X_{t_{k+1}})$ with respect to a normal random variable ξ with mean $\mu X_{t_{k+1}} + \nu X_0$ and covariance matrix $\sigma^2 I$. Plugging the expression (39) into (48), we can calculate this integral:
$$\mathbb{E}_\xi\left[-n\log\sqrt{2\pi} - n\log\hat\sigma - \frac{\left\|\xi - \hat\mu X_{t_{k+1}} - \hat\nu\,\mathbb{E}_{X_0'\sim p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})} X_0'\right\|_2^2}{2\hat\sigma^2}\right] = -n\log\sqrt{2\pi} - n\log\hat\sigma - \frac{\mathbb{E}_\xi\left\|\xi - \hat\mu X_{t_{k+1}} - \hat\nu\,\mathbb{E}_{p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})} X_0'\right\|_2^2}{2\hat\sigma^2}. \quad (50)$$
Thus, the terms $R_k$ we want to maximize equal
$$R_k = -n\log\sqrt{2\pi} - n\log\hat\sigma - \mathbb{E}_{X_{t_{k+1}}}\,\mathbb{E}_{X_0\sim p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})}\frac{\mathbb{E}_\xi\left\|\xi - \hat\mu X_{t_{k+1}} - \hat\nu\,\mathbb{E}_{p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})} X_0'\right\|_2^2}{2\hat\sigma^2}. \quad (51)$$
Maximizing $R_k$ with respect to $(\hat\kappa, \hat\omega, \hat\sigma)$ is equivalent to minimizing $\mathbb{E}_{X_{t_{k+1}}} S_k$ where $S_k$ is given by
$$S_k = n\log\hat\sigma + \frac{1}{2\hat\sigma^2}\,\mathbb{E}_{X_0\sim p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})}\,\mathbb{E}_\xi\left\|\xi - \hat\mu X_{t_{k+1}} - \hat\nu\,\mathbb{E}_{p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})} X_0'\right\|_2^2, \quad (52)$$
where the expectation with respect to $\xi \sim \mathcal{N}(\mu X_{t_{k+1}} + \nu X_0,\ \sigma^2 I)$ can be calculated using the fact that for every vector $\hat a$ we can express $\mathbb{E}_\xi\|\xi - \hat a\|_2^2$ as
$$\mathbb{E}\|\xi - \mathbb{E}\xi + \mathbb{E}\xi - \hat a\|_2^2 = \mathbb{E}\|\xi - \mathbb{E}\xi\|_2^2 + 2\left\langle\mathbb{E}[\xi - \mathbb{E}\xi], \mathbb{E}\xi - \hat a\right\rangle + \|\mathbb{E}\xi - \hat a\|_2^2 = n\sigma^2 + \|\mathbb{E}\xi - \hat a\|_2^2. \quad (53)$$
So, the outer expectation with respect to Law(X 0 |X t k+1 ) in (52) can be simplified:
$$\begin{aligned}
&\mathbb{E}_{X_0\sim p_{0|t_{k+1}}(\cdot|X_{t_{k+1}})}\left[n\sigma^2 + \left\|(\mu-\hat\mu)X_{t_{k+1}} + \nu X_0 - \hat\nu\,\mathbb{E}_{X_0'} X_0'\right\|_2^2\right] \\
&\quad= n\sigma^2 + \mathbb{E}_{X_0}\left\|(\mu-\hat\mu)X_{t_{k+1}} + \nu X_0 - \hat\nu\,\mathbb{E}_{X_0'} X_0'\right\|_2^2 \\
&\quad= n\sigma^2 + (\mu-\hat\mu)^2\left\|X_{t_{k+1}}\right\|_2^2 + 2\left\langle(\mu-\hat\mu)X_{t_{k+1}},\ (\nu-\hat\nu)\,\mathbb{E}_{X_0} X_0\right\rangle + \mathbb{E}_{X_0}\left\|\nu X_0 - \hat\nu\,\mathbb{E}_{X_0'} X_0'\right\|_2^2 \\
&\quad= (\mu-\hat\mu)^2\left\|X_{t_{k+1}}\right\|_2^2 + 2\left\langle(\mu-\hat\mu)X_{t_{k+1}},\ (\nu-\hat\nu)\,\mathbb{E}_{X_0} X_0\right\rangle + \nu^2\,\mathbb{E}_{X_0}\left\|X_0\right\|_2^2 + \hat\nu^2\left\|\mathbb{E}_{X_0'} X_0'\right\|_2^2 + n\sigma^2 - 2\nu\hat\nu\left\langle\mathbb{E}_{X_0} X_0,\ \mathbb{E}_{X_0'} X_0'\right\rangle \\
&\quad= \left\|(\mu-\hat\mu)X_{t_{k+1}} + (\nu-\hat\nu)\,\mathbb{E}_{X_0} X_0\right\|_2^2 + \nu^2\,\mathbb{E}_{X_0}\left\|X_0\right\|_2^2 - \nu^2\left\|\mathbb{E}_{X_0} X_0\right\|_2^2 + n\sigma^2,
\end{aligned} \quad (54)$$
where all the expectations in the formula above are taken with respect to the conditional data distribution $\mathrm{Law}(X_0|X_{t_{k+1}})$. So, the resulting expression for the terms $S_k$ whose expectation with respect to $\mathrm{Law}(X_{t_{k+1}})$ we want to minimize is
$$S_k = n\log\hat\sigma + \frac{1}{2\hat\sigma^2}\left[n\sigma^2 + \left\|(\mu-\hat\mu)X_{t_{k+1}} + (\nu-\hat\nu)\,\mathbb{E}[X_0|X_{t_{k+1}}]\right\|_2^2 + \nu^2\left(\mathbb{E}\left[\|X_0\|_2^2\,\middle|\,X_{t_{k+1}}\right] - \left\|\mathbb{E}[X_0|X_{t_{k+1}}]\right\|_2^2\right)\right]. \quad (55)$$
Now it is clear that
$\kappa^*_{t_{k+1},h}$ and $\omega^*_{t_{k+1},h}$ are optimal, because $\hat\mu_{t_{k+1},h}(\kappa^*_{t_{k+1},h}, \omega^*_{t_{k+1},h}) = \mu_{t_k,t_{k+1}}$ and $\hat\nu_{t_{k+1},h}(\kappa^*_{t_{k+1},h}) = \nu_{t_k,t_{k+1}}$.
For this choice of parameters we have
$$\mathbb{E}_{X_{t_{k+1}}} S_k = n\log\hat\sigma + \frac{1}{2\hat\sigma^2}\left[n\sigma^2 + \nu^2\,\mathbb{E}_{X_{t_{k+1}}}\left(\mathbb{E}\left[\|X_0\|_2^2\,\middle|\,X_{t_{k+1}}\right] - \left\|\mathbb{E}[X_0|X_{t_{k+1}}]\right\|_2^2\right)\right]. \quad (56)$$
Note that $\mathbb{E}[\|X_0\|_2^2|X_{t_{k+1}}] - \|\mathbb{E}[X_0|X_{t_{k+1}}]\|_2^2 = \mathrm{Tr}(\mathrm{Var}(X_0|X_{t_{k+1}}))$ is the overall variance of $\mathrm{Law}(X_0|X_{t_{k+1}})$ along all n dimensions. Differentiating $\mathbb{E}_{X_{t_{k+1}}} S_k$ with respect to $\hat\sigma$ shows that the optimal $\sigma^*_{t_{k+1},h}$ should satisfy
$$\frac{n}{\sigma^*_{t_{k+1},h}} - \frac{1}{(\sigma^*_{t_{k+1},h})^3}\left[n\sigma^2_{t_k,t_{k+1}} + \nu^2_{t_k,t_{k+1}}\,\mathbb{E}_{X_{t_{k+1}}}\mathrm{Tr}(\mathrm{Var}(X_0|X_{t_{k+1}}))\right] = 0, \quad (57)$$
which is indeed satisfied by the parameters $\sigma^*_{t,h}$ defined in (10). Thus, the statement (i) is proven.
When it comes to proving that $\hat X$ is exact, we have to show that $\mathrm{Law}(\hat X_{t_k}) = \mathrm{Law}(X_{t_k})$ for every k = 0, 1, .., N. By the assumption that $\mathrm{Law}(\hat X_1) = \mathrm{Law}(X_1)$ it is sufficient to prove that $\hat p_{t_k|t_{k+1}}(x_{t_k}|x_{t_{k+1}}) \equiv p_{t_k|t_{k+1}}(x_{t_k}|x_{t_{k+1}})$, since the exactness will follow from this fact by mathematical induction. If $X_0$ is a constant random variable, $\mathrm{Law}(X_0|X_t) = \mathrm{Law}(X_0)$ also corresponds to the same constant, so $\mathrm{Var}(X_0|X_t) = 0$, meaning that $\sigma^*_{t,h} = \sigma_{t-h,t}$, and the formulae (39) and (45) imply the desired result.
Let us now consider the second case when $X_0 \sim \mathcal{N}(\bar\mu, \delta^2 I)$. It is a matter of simple but lengthy computations to prove another property of Gaussian distributions similar to (43): if $Z_0 \sim \mathcal{N}(\bar\mu, \delta^2 I)$ and $Z_t|Z_0 \sim \mathcal{N}(a_t Z_0, b_t^2 I)$, then
$$Z_0|Z_t \sim \mathcal{N}\left(\frac{b_t^2}{b_t^2+\delta^2 a_t^2}\bar\mu + \frac{\delta^2 a_t}{b_t^2+\delta^2 a_t^2} Z_t,\ \frac{\delta^2 b_t^2}{b_t^2+\delta^2 a_t^2}\, I\right)$$
and $Z_t \sim \mathcal{N}(\bar\mu a_t, (b_t^2 + \delta^2 a_t^2)\, I)$. In our case $a_t = \gamma_{0,t}$ and $b_t^2 = 1 - \gamma_{0,t}^2$, therefore
$$\mathrm{Law}(X_0|X_t) = \mathcal{N}\left(\frac{1-\gamma_{0,t}^2}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\bar\mu + \frac{\delta^2\gamma_{0,t}}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2} X_t,\ \frac{\delta^2(1-\gamma_{0,t}^2)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\, I\right). \quad (58)$$
So, $\mathrm{Var}(X_0|X_t)$ does not depend on $X_t$ and
$$(\sigma^*_{t,h})^2 = \sigma^2_{t-h,t} + \frac{\nu^2_{t-h,t}}{n}\,\mathbb{E}_{X_t}\left[\mathrm{Tr}(\mathrm{Var}(X_0|X_t))\right] = \sigma^2_{t-h,t} + \nu^2_{t-h,t}\frac{\delta^2(1-\gamma_{0,t}^2)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}. \quad (59)$$
Since Law (X t |X t−h ), Law (X t−h ) and Law (X t ) are Gaussian, Bayes formula implies that Law (X t−h |X t ) is Gaussian as well with the following mean and covariance matrix:
$$\mathbb{E}[X_{t-h}|X_t] = \frac{\gamma_{0,t-h}(1-\gamma_{t-h,t}^2)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\bar\mu + \frac{\gamma_{t-h,t}\left(1-\gamma_{0,t-h}^2+\delta^2\gamma_{0,t-h}^2\right)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2} X_t, \quad (60)$$
$$\mathrm{Var}(X_{t-h}|X_t) = \frac{(1-\gamma_{t-h,t}^2)\left(1-\gamma_{0,t-h}^2+\delta^2\gamma_{0,t-h}^2\right)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\, I. \quad (61)$$
The distribution $\mathrm{Law}(\hat X_{t-h}|\hat X_t)$ is also Gaussian by the formula (39), so to conclude the proof we just need to show that $\mathbb{E}[\hat X_{t-h}|\hat X_t = x] = \mathbb{E}[X_{t-h}|X_t = x]$ and $\mathrm{Var}(\hat X_{t-h}|\hat X_t = x) = \mathrm{Var}(X_{t-h}|X_t = x)$ for every $x \in \mathbb{R}^n$ for the optimal parameters (10). Recall that for $\kappa^*_{t,h}$ and $\omega^*_{t,h}$ we have $\hat\mu_{t,h}(\kappa^*_{t,h}, \omega^*_{t,h}) = \mu_{t-h,t}$ and $\hat\nu_{t,h}(\kappa^*_{t,h}) = \nu_{t-h,t}$.
Utilizing the formulae (9), (39), (58) and the fact that $\gamma_{0,t-h}\cdot\gamma_{t-h,t} = \gamma_{0,t}$ (following from the definition of γ in (7)), we conclude that
$$\begin{aligned}
\mathbb{E}[\hat X_{t-h}|\hat X_t = x] &= \hat\mu_{t,h}(\kappa^*_{t,h}, \omega^*_{t,h})\, x + \hat\nu_{t,h}(\kappa^*_{t,h})\,\mathbb{E}_{p_{0|t}(\cdot|x)} X_0 \\
&= \gamma_{t-h,t}\frac{1-\gamma_{0,t-h}^2}{1-\gamma_{0,t}^2}\, x + \gamma_{0,t-h}\frac{1-\gamma_{t-h,t}^2}{1-\gamma_{0,t}^2}\left(\frac{1-\gamma_{0,t}^2}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\bar\mu + \frac{\delta^2\gamma_{0,t}}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\, x\right) \\
&= \frac{\gamma_{0,t-h}(1-\gamma_{t-h,t}^2)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\bar\mu + \gamma_{t-h,t}\frac{(1-\gamma_{0,t-h}^2)(1-\gamma_{0,t}^2) + \delta^2\gamma_{0,t}^2(1-\gamma_{0,t-h}^2) + \delta^2\gamma_{0,t-h}^2(1-\gamma_{t-h,t}^2)}{(1-\gamma_{0,t}^2)(1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2)}\, x \\
&= \frac{\gamma_{0,t-h}(1-\gamma_{t-h,t}^2)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\bar\mu + \gamma_{t-h,t}\frac{(1-\gamma_{0,t-h}^2)(1-\gamma_{0,t}^2) + \delta^2\gamma_{0,t-h}^2(1-\gamma_{0,t}^2)}{(1-\gamma_{0,t}^2)(1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2)}\, x \\
&= \frac{\gamma_{0,t-h}(1-\gamma_{t-h,t}^2)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\bar\mu + \gamma_{t-h,t}\frac{1-\gamma_{0,t-h}^2+\delta^2\gamma_{0,t-h}^2}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\, x = \mathbb{E}[X_{t-h}|X_t = x],
\end{aligned} \quad (62)$$
$$\begin{aligned}
\mathrm{Var}(\hat X_{t-h}|\hat X_t = x) &= (\sigma^*_{t,h})^2\, I = \left(\sigma^2_{t-h,t} + \nu^2_{t-h,t}\frac{\delta^2(1-\gamma_{0,t}^2)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\right) I \\
&= \left(\frac{(1-\gamma_{0,t-h}^2)(1-\gamma_{t-h,t}^2)}{1-\gamma_{0,t}^2} + \frac{\gamma_{0,t-h}^2\,\delta^2(1-\gamma_{t-h,t}^2)^2}{(1-\gamma_{0,t}^2)(1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2)}\right) I \\
&= \frac{1-\gamma_{t-h,t}^2}{(1-\gamma_{0,t}^2)(1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2)}\left((1-\gamma_{0,t-h}^2)(1-\gamma_{0,t}^2) + \delta^2\gamma_{0,t}^2(1-\gamma_{0,t-h}^2) + \delta^2\gamma_{0,t-h}^2(1-\gamma_{t-h,t}^2)\right) I \\
&= \frac{1-\gamma_{t-h,t}^2}{(1-\gamma_{0,t}^2)(1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2)}\left((1-\gamma_{0,t-h}^2)(1-\gamma_{0,t}^2) + \delta^2\gamma_{0,t-h}^2(1-\gamma_{0,t}^2)\right) I \\
&= \frac{(1-\gamma_{t-h,t}^2)\left(1-\gamma_{0,t-h}^2+\delta^2\gamma_{0,t-h}^2\right)}{1-\gamma_{0,t}^2+\delta^2\gamma_{0,t}^2}\, I = \mathrm{Var}(X_{t-h}|X_t = x).
\end{aligned} \quad (63)$$
D REVERSE MR-VP SDE SOLVER

MR-VP DPM is characterized by the following forward and reverse diffusions:
$$dX_t = \frac{1}{2}\beta_t(\bar X - X_t)\, dt + \sqrt{\beta_t}\, d\overrightarrow{W}_t, \quad (64)$$
$$d\hat X_t = \left(\frac{1}{2}\beta_t(\bar X - \hat X_t) - \beta_t\, s_\theta(\hat X_t, \bar X, t)\right) dt + \sqrt{\beta_t}\, d\overleftarrow{W}_t. \quad (65)$$
Using the same method as in Appendix A, we can show that for s < t
$$\mathrm{Law}(X_t|X_s) = \mathcal{N}\left(\gamma_{s,t} X_s + (1-\gamma_{s,t})\bar X,\ (1-\gamma_{s,t}^2)\, I\right), \quad \gamma_{s,t} = e^{-\frac{1}{2}\int_s^t\beta_u du}. \quad (66)$$
With the following notation:
$$\mu_{s,t} = \gamma_{s,t}\frac{1-\gamma_{0,s}^2}{1-\gamma_{0,t}^2}, \quad \nu_{s,t} = \gamma_{0,s}\frac{1-\gamma_{s,t}^2}{1-\gamma_{0,t}^2}, \quad \sigma^2_{s,t} = \frac{(1-\gamma_{0,s}^2)(1-\gamma_{s,t}^2)}{1-\gamma_{0,t}^2} \quad (67)$$
we can write down the parameters of Gaussian distribution X s |X t , X 0 :
$$\mathbb{E}[X_s|X_t, X_0] = \bar X + \mu_{s,t}(X_t - \bar X) + \nu_{s,t}(X_0 - \bar X), \quad \mathrm{Var}(X_s|X_t, X_0) = \sigma^2_{s,t}\, I. \quad (68)$$
The Lemma 1 for MR-VP DPMs takes the following shape:
$$s_{\theta^*}(x, \bar X, t) = -\frac{1}{1-\gamma_{0,t}^2}\left(x - (1-\gamma_{0,t})\bar X - \gamma_{0,t}\,\mathbb{E}_{p_{0|t}(\cdot|x)} X_0\right) = -\frac{1}{1-\gamma_{0,t}^2}\left((x - \bar X) - \gamma_{0,t}\left(\mathbb{E}_{p_{0|t}(\cdot|x)} X_0 - \bar X\right)\right). \quad (69)$$
The class of reverse SDE solvers we consider is
$$\hat X_{t-h} = \hat X_t + \beta_t h\left[\left(\frac{1}{2}+\hat\omega_{t,h}\right)(\hat X_t - \bar X) + (1+\hat\kappa_{t,h})\, s_\theta(\hat X_t, \bar X, t)\right] + \hat\sigma_{t,h}\,\xi_t, \quad (70)$$
where t = 1, 1 − h, .., h and ξ t are i.i.d. samples from N (0, I). Repeating the argument of the Theorem 1 leads to the following optimal (in terms of likelihood of the forward diffusion sample paths) parameters:
$$\kappa^*_{t,h} = \frac{\nu_{t-h,t}(1-\gamma_{0,t}^2)}{\gamma_{0,t}\beta_t h} - 1, \quad \omega^*_{t,h} = \frac{\mu_{t-h,t}-1}{\beta_t h} + \frac{1+\kappa^*_{t,h}}{1-\gamma_{0,t}^2} - \frac{1}{2}, \quad (\sigma^*_{t,h})^2 = \sigma^2_{t-h,t} + \frac{1}{n}\nu^2_{t-h,t}\,\mathbb{E}_{X_t}\left[\mathrm{Tr}(\mathrm{Var}(X_0|X_t))\right], \quad (71)$$
which are actually the same as the optimal parameters (10) for VP DPM. It is of no surprise since MR-VP DPM and VP-DPM differ only by a constant shift.
E REVERSE SUB-VP SDE SOLVER
Sub-VP DPM is characterized by the following forward and reverse diffusions:
$$dX_t = -\frac{1}{2}\beta_t X_t\, dt + \sqrt{\beta_t\left(1-e^{-2\int_0^t\beta_u du}\right)}\, d\overrightarrow{W}_t, \quad (72)$$
$$d\hat X_t = \left(-\frac{1}{2}\beta_t\hat X_t - \beta_t\left(1-e^{-2\int_0^t\beta_u du}\right) s_\theta(\hat X_t, t)\right) dt + \sqrt{\beta_t\left(1-e^{-2\int_0^t\beta_u du}\right)}\, d\overleftarrow{W}_t. \quad (73)$$
Using the same method as in Appendix A, we can show that for s < t
$$\mathrm{Law}(X_t|X_s) = \mathcal{N}\left(\gamma_{s,t} X_s,\ \left(1+\gamma_{0,t}^4-\gamma_{s,t}^2(1+\gamma_{0,s}^4)\right) I\right), \quad \gamma_{s,t} = e^{-\frac{1}{2}\int_s^t\beta_u du}. \quad (74)$$
Note that for s = 0 this expression simplifies to
$$\mathrm{Law}(X_t|X_0) = \mathcal{N}\left(\gamma_{0,t} X_0,\ (1-\gamma_{0,t}^2)^2\, I\right). \quad (75)$$
With the following notation:
$$\mu_{s,t} = \gamma_{s,t}\left(\frac{1-\gamma_{0,s}^2}{1-\gamma_{0,t}^2}\right)^2, \quad \nu_{s,t} = \gamma_{0,s}\frac{1+\gamma_{0,t}^4-\gamma_{s,t}^2(1+\gamma_{0,s}^4)}{(1-\gamma_{0,t}^2)^2}, \quad \sigma^2_{s,t} = \frac{(1-\gamma_{0,s}^2)^2\left(1+\gamma_{0,t}^4-\gamma_{s,t}^2(1+\gamma_{0,s}^4)\right)}{(1-\gamma_{0,t}^2)^2} \quad (76)$$
we can write down the parameters of Gaussian distribution X s |X t , X 0 :
$$\mathbb{E}[X_s|X_t, X_0] = \mu_{s,t} X_t + \nu_{s,t} X_0, \quad \mathrm{Var}(X_s|X_t, X_0) = \sigma^2_{s,t}\, I. \quad (77)$$
The Lemma 1 for sub-VP DPMs takes the following shape:
$$s_{\theta^*}(x, t) = -\frac{1}{(1-\gamma_{0,t}^2)^2}\left(x - \gamma_{0,t}\,\mathbb{E}_{p_{0|t}(\cdot|x)} X_0\right). \quad (78)$$
The class of reverse SDE solvers we consider is
$$\hat X_{t-h} = \hat X_t + \beta_t h\left[\left(\frac{1}{2}+\hat\omega_{t,h}\right)\hat X_t + (1+\hat\kappa_{t,h})\left(1-e^{-2\int_0^t\beta_u du}\right) s_\theta(\hat X_t, t)\right] + \hat\sigma_{t,h}\,\xi_t, \quad (79)$$
where t = 1, 1 − h, .., h and ξ t are i.i.d. samples from N (0, I). Repeating the argument of the Theorem 1 leads to the following optimal (in terms of likelihood of the forward diffusion sample paths) parameters:
$$\kappa^*_{t,h} = \frac{\nu_{t-h,t}(1-\gamma_{0,t}^2)}{\gamma_{0,t}\beta_t h(1+\gamma_{0,t}^2)} - 1, \quad \omega^*_{t,h} = \frac{\mu_{t-h,t}-1}{\beta_t h} + \frac{(1+\kappa^*_{t,h})(1+\gamma_{0,t}^2)}{1-\gamma_{0,t}^2} - \frac{1}{2}, \quad (\sigma^*_{t,h})^2 = \sigma^2_{t-h,t} + \frac{1}{n}\nu^2_{t-h,t}\,\mathbb{E}_{X_t}\left[\mathrm{Tr}(\mathrm{Var}(X_0|X_t))\right]. \quad (80)$$
F REVERSE VE SDE SOLVER

VE DPM is characterized by the following forward and reverse diffusions:
$$dX_t = \sqrt{(\sigma_t^2)'}\, d\overrightarrow{W}_t, \quad (81)$$
$$d\hat X_t = -(\sigma_t^2)'\, s_\theta(\hat X_t, t)\, dt + \sqrt{(\sigma_t^2)'}\, d\overleftarrow{W}_t. \quad (82)$$

A similar argument as in Appendix A allows showing that
$$\mathrm{Law}(X_t|X_s) = \mathcal{N}\left(X_s,\ I\cdot\int_s^t (\sigma_u^2)'\, du\right) = \mathcal{N}\left(X_s,\ (\sigma_t^2-\sigma_s^2)\, I\right). \quad (84)$$

With the following notation (obtained from the general Gaussian fact (43) with $\alpha_{t|s} = 1$ and $\sigma^2_{t|s} = \sigma_t^2 - \sigma_s^2$):
$$\mu_{s,t} = \frac{\sigma_s^2-\sigma_0^2}{\sigma_t^2-\sigma_0^2}, \quad \nu_{s,t} = \frac{\sigma_t^2-\sigma_s^2}{\sigma_t^2-\sigma_0^2}, \quad \sigma^2_{s,t} = \frac{(\sigma_s^2-\sigma_0^2)(\sigma_t^2-\sigma_s^2)}{\sigma_t^2-\sigma_0^2}$$
we can write down the parameters of the Gaussian distribution $X_s|X_t, X_0$:
$$\mathbb{E}[X_s|X_t, X_0] = \mu_{s,t} X_t + \nu_{s,t} X_0, \quad \mathrm{Var}(X_s|X_t, X_0) = \sigma^2_{s,t}\, I.$$

The Lemma 1 for VE DPMs takes the following shape:
$$s_{\theta^*}(x, t) = -\frac{1}{\sigma_t^2-\sigma_0^2}\left(x - \mathbb{E}_{p_{0|t}(\cdot|x)} X_0\right).$$

Repeating the argument of the Theorem 1 leads to the reverse SDE solver that is optimal in terms of likelihood of the forward diffusion sample paths: its coefficients are characterized by $\hat\mu_{t,h} = \mu_{t-h,t}$, $\hat\nu_{t,h} = \nu_{t-h,t}$ and
$$(\sigma^*_{t,h})^2 = \sigma^2_{t-h,t} + \frac{1}{n}\nu^2_{t-h,t}\,\mathbb{E}_{X_t}\left[\mathrm{Tr}(\mathrm{Var}(X_0|X_t))\right].$$

G TOY EXAMPLES

In this section we consider toy examples where the data distribution X_0 is represented by a single point (corresponding to the case (ii) of the Theorem 1) and by two points (corresponding to the more general case (i)). In the first case the point is the unit vector i = (1, 1, .., 1) of dimensionality 100; in the second one, the two points i and −2i have the same probability. We compare the performance of two solvers, Euler-Maruyama and the proposed Maximum Likelihood, depending on the number N ∈ {1, 2, 5, 10, 100, 1000} of solver steps. The output of the perfectly trained score matching network s_{θ*} is computed analytically, and Gaussian noise with variance ε ∈ {0.0, 0.1, 0.5} is added to approximate the realistic case when the network s_θ we use is not trained till optimality. We considered the VP diffusion model (6) with β_0 = 0.05 and β_1 = 20.0.

Table 5: Maximum Likelihood (ML) and Euler-Maruyama (EM) solvers comparison in terms of Mean Square Error (MSE). MSE < 0.001 is denoted by conv, MSE > 1.0 is denoted by div. N is the number of SDE solver steps, ε is the variance of the Gaussian noise added to the perfect scores s_{θ*}. Each cell shows ML / EM.

               X_0 = {i}                                X_0 = {i, −2i}
               ε = 0.0      ε = 0.1      ε = 0.5        ε = 0.0      ε = 0.1      ε = 0.5
  N = 1        conv/div     div/div      div/div        div/div      div/div      div/div
  N = 2        conv/div     div/div      div/div        0.15/div     div/div      div/div
  N = 5        conv/div     0.017/div    0.085/div      conv/div     0.017/div    0.085/div
  N = 10       conv/0.57    0.001/0.59   0.005/0.67     conv/0.57    0.001/0.59   0.006/0.67
  N = 100      conv/0.01    conv/0.01    conv/0.01      conv/0.01    conv/0.01    conv/0.01
  N = 1000     conv/conv    conv/conv    conv/conv      conv/conv    conv/conv    conv/conv

The results of the comparison are given in Table 5 and can be summarized in the following:
• for both methods, larger N means better quality;
• for both methods, a more accurate score matching network (smaller ε) means better quality;
• for a large number of steps, both methods perform the same;
• it takes fewer steps for the proposed Maximum Likelihood solver to converge with good accuracy to the data distribution than it does for the Euler-Maruyama solver;
• in accordance with the statement (ii) of the Theorem 1, the optimal Maximum Likelihood solver leads to exact data reconstruction in the case when the data distribution is constant and the score matching network is trained till optimality (i.e. ε = 0.0), irrespective of the number of steps N.
Also, in the second example where X_0 ∈ {i, −2i}, the Maximum Likelihood SDE solver reconstructs the probabilities of these two points better than Euler-Maruyama, which tends to output "i-samples" (which are closer to the origin) more frequently than "−2i-samples". E.g. for ε = 0.0 and N = 10 the frequency of "i-samples" generated by the Euler-Maruyama scheme is 54%, while this frequency for the Maximum Likelihood scheme is 50% (500k independent runs were used to calculate these frequencies).
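Since for the constant-data case the conditional expectation E[X_0|X_t] equals the data point itself, the optimal score (28) is available in closed form and the experiment can be reproduced along the following lines; a sketch reusing the `gamma` and `solver_coeffs` helpers from the earlier sampling snippet (all names are assumptions):

```python
import numpy as np

def exact_score(x, t, x0, eps, rng):
    """Optimal VP score (28) for constant data X_0 = x0, plus N(0, eps) noise."""
    g = gamma(0, t, beta)
    s = -(x - g * x0) / (1 - g * g)
    return s + np.sqrt(eps) * rng.standard_normal(x.shape) if eps > 0 else s

def run_toy(n_steps, eps, scheme="ML", dim=100, n_paths=1000, seed=0):
    rng = np.random.default_rng(seed)
    x0 = np.ones(dim)                        # the data point i = (1, .., 1)
    h = 1.0 / n_steps
    x = rng.standard_normal((n_paths, dim))  # X_1 ~ N(0, I)
    for t in np.arange(1.0, h / 2, -h):
        kappa, omega, sigma = solver_coeffs(t, h, beta, scheme)
        drift = (0.5 + omega) * x + (1 + kappa) * exact_score(x, t, x0, eps, rng)
        x = x + beta(t) * h * drift + sigma * rng.standard_normal(x.shape)
    return np.mean((x - x0) ** 2)            # MSE as reported in Table 5

beta = lambda u: 0.05 + u * (20.0 - 0.05)    # beta_0 = 0.05, beta_1 = 20.0
print(run_toy(n_steps=10, eps=0.0, scheme="ML"))
```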
H SPEAKER CONDITIONING NETWORK
The function x · tanh(softplus(x)) is used as a non-linearity in the speaker conditioning network g_t(Y). First, the time embedding t_e is obtained by the following procedure: time t ∈ [0, 1] is encoded with positional encoding (Song et al., 2021c); the resulting 256-dimensional vector is passed through the first linear module with 1024 units, a non-linearity is applied to it, and it is then passed through the second linear module with 256 units. Next, the noisy mel-spectrogram Y_t for the wodyn input type, or Y_t concatenated with {Y_s | s = 0.5/15, 1.5/15, .., 14.5/15} for whole, is passed through 6 blocks consisting of 2D convolutional layers each followed by instance normalization and a Gated Linear Unit. The numbers of input and output channels of these convolutions are (1, 64), (32, 64), (32, 128), (64, 128), (64, 256), (128, 256) for the wodyn input type, and the same but with 16 input channels in the first convolution for the whole input type. After the 2nd and 4th blocks MLP_1(t_e) and MLP_2(t_e) are broadcast-added, where MLP_1 (MLP_2) is composed of a non-linearity followed by a linear module with 32 (64) units. After the last, 6th block the result is passed through a final convolution with 128 output channels, and average pooling along both the time and frequency axes is applied, resulting in a 128-dimensional vector. All convolutions except for the final one have (kernel, stride, zero padding) = (3, 1, 1), while for the final one the corresponding parameters are (1, 0, 0). Denote the result of such processing of Y by c for the wodyn and whole input types.
The clean target mel-spectrogram Y_0 is used to obtain a 256-dimensional speaker embedding d with the pre-trained speaker verification network (Jia et al., 2018), which is not trained further. The vectors d, c and t_e are concatenated (except for the d-only input type, where we concatenate only d and t_e), passed through a linear module with 512 units followed by a non-linearity, and then through a linear module with 128 units. The resulting 128-dimensional vector is the output of the speaker conditioning network g_t(Y).
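A minimal PyTorch sketch of the wodyn variant of this network is given below; the module names are our own, and the frozen speaker verification network is represented only by its output embedding d:

```python
import torch
import torch.nn as nn

def act(x):  # the non-linearity used throughout: x * tanh(softplus(x))
    return x * torch.tanh(nn.functional.softplus(x))

class Block(nn.Module):
    """Conv2d -> InstanceNorm -> GLU block (GLU halves the channel count)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1)
        self.norm = nn.InstanceNorm2d(c_out)
    def forward(self, x):
        return nn.functional.glu(self.norm(self.conv(x)), dim=1)

class SpeakerConditioning(nn.Module):
    def __init__(self):
        super().__init__()
        self.t_lin1, self.t_lin2 = nn.Linear(256, 1024), nn.Linear(1024, 256)
        chans = [(1, 64), (32, 64), (32, 128), (64, 128), (64, 256), (128, 256)]
        self.blocks = nn.ModuleList(Block(ci, co) for ci, co in chans)
        self.t_mlp1, self.t_mlp2 = nn.Linear(256, 32), nn.Linear(256, 64)
        self.final = nn.Conv2d(128, 128, kernel_size=1)
        self.out1 = nn.Linear(256 + 128 + 256, 512)
        self.out2 = nn.Linear(512, 128)
    def forward(self, y_t, t_pos, d):
        # y_t: (B, 1, 80, T) noisy target mel; t_pos: (B, 256) positional
        # encoding of t; d: (B, 256) frozen speaker-verification embedding.
        t_e = self.t_lin2(act(self.t_lin1(t_pos)))
        h = y_t
        for i, block in enumerate(self.blocks):
            h = block(h)
            if i == 1:  # broadcast-add MLP_1(t_e) after the 2nd block
                h = h + self.t_mlp1(act(t_e))[:, :, None, None]
            if i == 3:  # broadcast-add MLP_2(t_e) after the 4th block
                h = h + self.t_mlp2(act(t_e))[:, :, None, None]
        c = self.final(h).mean(dim=(2, 3))        # average pooling -> (B, 128)
        z = torch.cat([d, c, t_e], dim=1)
        return self.out2(act(self.out1(z)))       # (B, 128)
```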
I TRAINING HYPERPARAMETERS AND OTHER DETAILS
Encoders and decoders were trained with batch sizes 128 and 32 respectively, using the Adam optimizer with initial learning rates 0.0005 and 0.0001 correspondingly. Encoders and decoders in the VCTK models were trained for 500 and 200 epochs respectively; as for the LibriTTS models, they were trained for 300 and 110 epochs. The datasets were downsampled to 22.05kHz, which was the operating rate of our VC models. VCTK recordings were preprocessed by removing silence at the beginning and end of the utterances. To fit GPU memory, decoders were trained on random speech segments of approximately 1.5 seconds rather than on whole utterances. The training segments for reconstruction and the ones used as input to the speaker conditioning network g_t(Y) were different random segments extracted from the same training utterances. The noise schedule parameters β_0 and β_1 were set to 0.05 and 20.0.
Our VC models operated on mel-spectrograms with 80 mel features and a sampling rate of 22.05kHz. The Short-Time Fourier Transform was used to calculate spectra with 1024 frequency bins. A Hann window of length 1024 was applied with hop size 256.
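The equivalent feature extraction can be written with torchaudio (the library choice and any unstated mel-filter defaults are assumptions):

```python
import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050,            # operating rate of the VC models
    n_fft=1024,                   # 1024 frequency bins
    win_length=1024,              # Hann window of length 1024
    hop_length=256,
    n_mels=80,
    window_fn=torch.hann_window,
)
# waveform: (channels, samples) tensor at 22.05 kHz
# spectrogram = mel(waveform)  ->  (channels, 80, frames)
```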
For the Diff-LibriTTS models we used a simple spectral subtraction algorithm in the mel domain with spectral floor parameter β = 0.02 as post-processing, to reduce the background noise sometimes produced by these models. The noise spectrum was estimated on speech fragments automatically detected as the ones corresponding to silence in the source mel-spectrogram.
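A sketch of such mel-domain post-processing; the energy-threshold silence detector used here is an assumed stand-in for the authors' automatic detection:

```python
import numpy as np

def spectral_subtract(mel, source_mel, floor_beta=0.02):
    """Subtract a noise estimate from mel (linear-magnitude mel domain).

    mel, source_mel: (n_mels, T) arrays with aligned frames. The noise
    spectrum is estimated on output frames aligned with source frames
    detected as silence by a simple energy threshold (an assumption).
    """
    energy = source_mel.mean(axis=0)
    silence = energy < np.quantile(energy, 0.1)      # crude silence detector
    noise = mel[:, silence].mean(axis=1, keepdims=True) if silence.any() \
            else np.zeros((mel.shape[0], 1))
    cleaned = mel - noise
    return np.maximum(cleaned, floor_beta * mel)     # spectral floor beta
```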
J DETAILS OF AMT TESTS
For fair comparison with the baselines, all the recordings were downsampled to 16kHz; we also normalized their loudness. In the speech naturalness tests, workers chosen by a geographic criterion were asked to assess the overall quality of the synthesized speech, i.e. to estimate how clean and natural (human-sounding) it was. A five-point Likert scale was used: 1 - "Bad", 2 - "Poor", 3 - "Fair", 4 - "Good", 5 - "Excellent". Assessors were asked to wear headphones and work in a quiet environment. As for the speaker similarity tests, workers were asked to assess how similar the synthesized samples sounded to target speech samples in terms of speaker similarity. Assessors were asked not to pay attention to the overall quality of the synthesized speech (e.g. background noise or incorrect pronunciation). A five-point scale was used: 1 - "Different: absolutely sure", 2 - "Different: moderately sure", 3 - "Cannot decide, more same or more different", 4 - "Same: moderately sure", 5 - "Same: absolutely sure".
Table 1: Input types for speaker conditioning g_t(Y) compared in terms of speaker similarity.

                 Diff-VCTK                      Diff-LibriTTS
                 d-only   wodyn    whole        d-only   wodyn    whole
  Most similar   27.0%    38.0%    34.1%        27.2%    46.7%    23.6%
  Least similar  28.9%    29.3%    38.5%        25.3%    23.9%    48.6%
Table 2: Subjective evaluation (MOS) of one-shot VC models trained on VCTK. Ground truth recordings were evaluated only for VCTK speakers.

                     VCTK test (9 speakers, 54 pairs)      Whole test (25 speakers, 350 pairs)
                     Naturalness     Similarity            Naturalness     Similarity
  AGAIN-VC           1.98 ± 0.05     1.97 ± 0.08           1.87 ± 0.03     1.75 ± 0.04
  FragmentVC         2.20 ± 0.06     2.45 ± 0.09           1.91 ± 0.03     1.93 ± 0.04
  VQMIVC             2.89 ± 0.06     2.60 ± 0.10           2.48 ± 0.04     1.95 ± 0.04
  Diff-VCTK-ML-6     3.73 ± 0.06     3.47 ± 0.09           3.39 ± 0.04     2.69 ± 0.05
  Diff-VCTK-ML-30    3.73 ± 0.06     3.57 ± 0.09           3.44 ± 0.04     2.71 ± 0.05
  Ground truth       4.55 ± 0.05     4.52 ± 0.07           4.55 ± 0.05     4.52 ± 0.07
Table 3: Subjective evaluation (MOS) of one-shot VC models trained on large-scale datasets.

                        VCTK test (9 speakers, 54 pairs)      Whole test (25 speakers, 350 pairs)
                        Naturalness     Similarity            Naturalness     Similarity
  Diff-LibriTTS-EM-6    1.68 ± 0.06     1.53 ± 0.07           1.57 ± 0.02     1.47 ± 0.03
  Diff-LibriTTS-PF-6    3.11 ± 0.07     2.58 ± 0.11           2.99 ± 0.03     2.50 ± 0.04
  Diff-LibriTTS-ML-6    3.84 ± 0.08     3.08 ± 0.11           3.80 ± 0.03     3.27 ± 0.05
  Diff-LibriTTS-ML-30   3.96 ± 0.08     3.23 ± 0.11           4.02 ± 0.03     3.39 ± 0.05
  BNE-PPG-VC            3.95 ± 0.08     3.27 ± 0.12           3.83 ± 0.03     3.03 ± 0.05
Table 4: Reverse SDE solvers compared in terms of FID. N is the number of SDE solver steps.

                                  VP DPM             sub-VP DPM         VE DPM
                                  N=10      N=100    N=10      N=100    N=10      N=100
  Euler-Maruyama                  229.6     19.68    312.3     19.83    462.1     24.77
  Reverse Diffusion               679.8     65.95    312.2     19.74    461.1     303.2
  Probability Flow                88.92     5.70     64.22     4.42     495.3     214.5
  Ancestral Sampling              679.8     68.35    -         -        454.7     17.83
  Maximum Likelihood (τ = 0.1)    260.3     4.34     317.0     6.63     461.9     23.63
  Maximum Likelihood (τ = 0.5)    24.45     7.82     30.90     6.43     462.0     10.07
  Maximum Likelihood (τ = 1.0)    41.78     7.94     48.02     6.51     48.51     12.37
REFERENCES

Brian D.O. Anderson. Reverse-time Diffusion Equation Models. Stochastic Processes and their Applications, 12(3):313-326, 1982.

Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations. In Advances in Neural Information Processing Systems, volume 33, pp. 12449-12460, 2020.

Alexandros Beskos and Gareth O. Roberts. Exact Simulation of Diffusions. The Annals of Applied Probability, 15(4):2422-2444, 2005.

Nanxin Chen, Yu Zhang, Heiga Zen, Ron J. Weiss, Mohammad Norouzi, and William Chan. WaveGrad: Estimating Gradients for Waveform Generation. In International Conference on Learning Representations, 2021a.

Yen-Hao Chen, D. Wu, Tsung-Han Wu, and Hung-yi Lee. Again-VC: A One-Shot Voice Conversion Using Activation Guidance and Adaptive Instance Normalization. In ICASSP 2021, pp. 5954-5958, 2021b.

Ju-Chieh Chou and Hung-yi Lee. One-Shot Voice Conversion by Separating Speaker and Content Representations with Instance Normalization. In Interspeech 2019, pp. 664-668, 2019.

Pierre Henry-Labordère, Xiaolu Tan, and Nizar Touzi. Unbiased Simulation of Stochastic Differential Equations. The Annals of Applied Probability, 27(6):3305-3341, 2017.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In Advances in Neural Information Processing Systems, volume 33, 2020.

Aapo Hyvärinen. Estimation of Non-Normalized Statistical Models by Score Matching. Journal of Machine Learning Research, 6(24):695-709, 2005.

Tatsuma Ishihara and Daisuke Saito. Attention-Based Speaker Embeddings for One-Shot Voice Conversion. In Interspeech 2020, pp. 806-810, 2020.

Myeonghun Jeong, Hyeongju Kim, Sung Jun Cheon, Byoung Jin Choi, and Nam Soo Kim. Diff-TTS: A Denoising Diffusion Model for Text-to-Speech. In Proc. Interspeech 2021, pp. 3605-3609, 2021.

Ye Jia, Yu Zhang, Ron Weiss, Quan Wang, Jonathan Shen, Fei Ren, Zhifeng Chen, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, and Yonghui Wu. Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis. In Advances in Neural Information Processing Systems 31, pp. 4480-4490, 2018.

Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo, and Shogo Seki. VoiceGrad: Non-Parallel Any-to-Many Voice Conversion with Annealed Langevin Dynamics, 2020.

Kang-wook Kim, Seung-won Park, and Myun-chul Joe. Assem-VC: Realistic Voice Conversion by Assembling Modern Speech Synthesis Techniques, 2021.

Diederik P. Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational Diffusion Models, 2021.

Peter E. Kloeden and Eckhard Platen. Numerical Solution of Stochastic Differential Equations, volume 23 of Stochastic Modelling and Applied Probability. Springer-Verlag Berlin Heidelberg, 1992.

Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), virtual, 2020.

Zhifeng Kong and Wei Ping. On Fast Sampling of Diffusion Probabilistic Models. In ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, 2021.

Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. DiffWave: A Versatile Diffusion Model for Audio Synthesis. In International Conference on Learning Representations, 2021.

Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, and Tie-Yan Liu. PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Driven Adaptive Prior, 2021.

Yist Y. Lin, Chung-Ming Chien, Jheng-Hao Lin, Hung-yi Lee, and Lin-Shan Lee. FragmentVC: Any-To-Any Voice Conversion by End-To-End Extracting and Fusing Fine-Grained Voice Fragments with Attention. In ICASSP 2021, pp. 5939-5943, 2021.

Robert S. Liptser and Albert N. Shiryaev. Statistics of Random Processes, volume 5 of Stochastic Modelling and Applied Probability. Springer-Verlag, 1978.

Songxiang Liu, Yuewen Cao, Dan Su, and Helen Meng. DiffSVC: A Diffusion Probabilistic Model for Singing Voice Conversion, 2021a.

Songxiang Liu, Yuewen Cao, Disong Wang, Xixin Wu, Xunying Liu, and Helen Meng. Any-to-Many Voice Conversion With Location-Relative Sequence-to-Sequence Modeling. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:1717-1728, 2021b.

Manh Luong and Viet Anh Tran. Many-to-Many Voice Conversion Based Feature Disentanglement Using Variational Autoencoder. In Proc. Interspeech 2021, pp. 851-855, 2021.

Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. Montreal Forced Aligner: Trainable Text-Speech Alignment Using Kaldi. In Proc. Interspeech 2017, pp. 498-502, 2017.

Shahan Nercessian. Improved Zero-Shot Voice Conversion Using Explicit Conditioning Signals. In Interspeech 2020, pp. 4711-4715, 2020.

Alexander Quinn Nichol and Prafulla Dhariwal. Improved Denoising Diffusion Probabilistic Models. In Proceedings of the 38th International Conference on Machine Learning, PMLR 139, pp. 8162-8171, 2021.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: An ASR Corpus Based on Public Domain Audio Books. In ICASSP 2015, pp. 5206-5210, 2015.

Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail Kudinov. Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech. In Proceedings of the 38th International Conference on Machine Learning, PMLR 139, pp. 8599-8608, 2021.

Kaizhi Qian, Yang Zhang, Shiyu Chang, Xuesong Yang, and Mark Hasegawa-Johnson. AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss. In Proceedings of the 36th International Conference on Machine Learning, PMLR 97, pp. 5210-5219, 2019.

Kaizhi Qian, Zeyu Jin, Mark Hasegawa-Johnson, and Gautham J. Mysore. F0-Consistent Many-To-Many Non-Parallel Voice Conversion Via Conditional Autoencoder. In ICASSP 2020, pp. 6284-6288, 2020.

Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI 2015, pp. 234-241, 2015.

Yuki Saito, Yusuke Ijima, Kyosuke Nishida, and Shinnosuke Takamichi. Non-Parallel Voice Conversion Using Variational Autoencoders Conditioned by Phonetic Posteriorgrams and D-Vectors. In ICASSP 2018, pp. 5274-5278, 2018.

Robin San-Roman, Eliya Nachmani, and Lior Wolf. Noise Estimation for Generative Diffusion Models, 2021.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. In International Conference on Learning Representations, 2021a.

Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum Likelihood Training of Score-Based Diffusion Models, 2021b.

Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-Based Generative Modeling through Stochastic Differential Equations. In International Conference on Learning Representations, 2021c.

Pascal Vincent. A Connection Between Score Matching and Denoising Autoencoders. Neural Computation, 23(7):1661-1674, 2011.

Disong Wang, Liqun Deng, Yu Ting Yeung, Xiao Chen, Xunying Liu, and Helen Meng. VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-Shot Voice Conversion. In Proc. Interspeech 2021, pp. 1344-1348, 2021.

Daniel Watson, Jonathan Ho, Mohammad Norouzi, and William Chan. Learning to Efficiently Sample from Diffusion Probabilistic Models, 2021.

Da-Yi Wu, Yen-Hao Chen, and Hung-yi Lee. VQVC+: One-Shot Voice Conversion by Vector Quantization and U-Net Architecture. In Interspeech 2020, pp. 4691-4695, 2020.

Junichi Yamagishi, Christophe Veaux, and Kirsten MacDonald. CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit (version 0.92), 2019.

Heiga Zen, Rob Clark, Ron J. Weiss, Viet Dang, Ye Jia, Yonghui Wu, Yu Zhang, and Zhifeng Chen. LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech. In Interspeech, 2019.
52,900,202 | ADVERSARIAL DOMAIN ADAPTATION FOR STABLE BRAIN-MACHINE INTERFACES | Brain-Machine Interfaces (BMIs) have recently emerged as a clinically viable option to restore voluntary movements after paralysis. These devices are based on the ability to extract information about movement intent from neural signals recorded using multi-electrode arrays chronically implanted in the motor cortices of the brain. However, the inherent loss and turnover of recorded neurons requires repeated recalibrations of the interface, which can potentially alter the day-to-day user experience. The resulting need for continued user adaptation interferes with the natural, subconscious use of the BMI. Here, we introduce a new computational approach that decodes movement intent from a low-dimensional latent representation of the neural data. We implement various domain adaptation methods to stabilize the interface over significantly long times. This includes Canonical Correlation Analysis used to align the latent variables across days; this method requires prior point-to-point correspondence of the time series across domains. Alternatively, we match the empirical probability distributions of the latent variables across days through the minimization of their Kullback-Leibler divergence. These two methods provide a significant and comparable improvement in the performance of the interface. However, implementation of an Adversarial Domain Adaptation Network trained to match the empirical probability distribution of the residuals of the reconstructed neural signals outperforms the two methods based on latent variables, while requiring remarkably few data points to solve the domain adaptation problem. | [] | ADVERSARIAL DOMAIN ADAPTATION FOR STABLE BRAIN-MACHINE INTERFACES
Ali Farshchian [email protected]
Northwestern University
EvanstonILUSA
Juan A Gallego [email protected]
Northwestern University
EvanstonILUSA
Lee E Miller
Northwestern University
EvanstonILUSA
Sara A Solla [email protected]
Northwestern University
EvanstonILUSA
Joseph Paul Cohen [email protected]
University of Montreal
MontrealCanada
Yoshua Bengio [email protected]
University of Montreal
MontrealCanada
ADVERSARIAL DOMAIN ADAPTATION FOR STABLE BRAIN-MACHINE INTERFACES
Brain-Machine Interfaces (BMIs) have recently emerged as a clinically viable option to restore voluntary movements after paralysis. These devices are based on the ability to extract information about movement intent from neural signals recorded using multi-electrode arrays chronically implanted in the motor cortices of the brain. However, the inherent loss and turnover of recorded neurons requires repeated recalibrations of the interface, which can potentially alter the day-to-day user experience. The resulting need for continued user adaptation interferes with the natural, subconscious use of the BMI. Here, we introduce a new computational approach that decodes movement intent from a low-dimensional latent representation of the neural data. We implement various domain adaptation methods to stabilize the interface over significantly long times. This includes Canonical Correlation Analysis used to align the latent variables across days; this method requires prior point-to-point correspondence of the time series across domains. Alternatively, we match the empirical probability distributions of the latent variables across days through the minimization of their Kullback-Leibler divergence. These two methods provide a significant and comparable improvement in the performance of the interface. However, implementation of an Adversarial Domain Adaptation Network trained to match the empirical probability distribution of the residuals of the reconstructed neural signals outperforms the two methods based on latent variables, while requiring remarkably few data points to solve the domain adaptation problem.
INTRODUCTION
Individuals with tetraplegia due to spinal cord injury identify restoration of hand function as their highest priority (Anderson, 2004). Over 50% of respondents with a C1-C4 injury would be willing to undergo brain surgery to restore grasp. Brain Machine Interfaces (BMIs) aim to restore motor function by extracting movement intent from neural signals. Despite its great promise, current BMI technology has significant limitations. A BMI that maps neural activity in primary motor cortex (M1) onto motor intent commands should ideally provide a stable day-to-day user experience. However, the gradual turnover of neurons recorded by chronically implanted multi-electrode arrays, due to neuron death or to electrode movement and failure (Barrese et al., 2013), causes considerable variation in the actions produced by the BMI. This turnover is estimated to be on the order of 40% over two weeks (Dickey et al., 2009), and its compensation requires daily recalibration (Ajiboye et al., 2017) and thus ongoing user adaptation to the changing interface.
There is a high degree of correlation across the M1 neural signals. This redundancy implies that the dimensionality of the underlying motor command is much lower than the number of M1 signals. A low-dimensional representation of neural activity can be extracted from the recorded neuronal population (Yu et al., 2009;Shenoy et al., 2013;Sadtler et al., 2014;Gallego et al., 2017a) and exploited to identify movement intent. Here we develop a deep learning architecture to extract a low-dimensional representation of recorded M1 activity that includes features related to movement intent. This is achieved by the simultaneous training of a deep, nonlinear Auto-Encoder (AE) network based on neural signals from M1, and a network that predicts movement intent from the inferred low-dimensional signals. We show that this architecture significantly improves predictions over the standard sequential approach of first extracting a low-dimensional, latent representation of neural signals, followed by training a movement predictor based on the latent signals.
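A minimal sketch of such joint training, with an autoencoder over binned firing rates and an EMG (movement-intent) predictor reading the latent code; the layer sizes, latent dimensionality and loss weight lam are illustrative assumptions:

```python
import torch
import torch.nn as nn

class JointAE(nn.Module):
    def __init__(self, n_neurons=96, n_latent=10, n_emg=14):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_neurons, 64), nn.Tanh(),
                                 nn.Linear(64, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                                 nn.Linear(64, n_neurons))
        self.pred = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                                  nn.Linear(64, n_emg))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.pred(z), z

def joint_loss(model, rates, emg, lam=1.0):
    recon, emg_hat, _ = model(rates)
    # reconstruction + movement-intent prediction, trained simultaneously
    return nn.functional.mse_loss(recon, rates) + \
           lam * nn.functional.mse_loss(emg_hat, emg)
```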
To stabilize the resulting BMI against continuous changes in the neural recordings, we introduce a novel approach based on the Generative Adversarial Network (GAN) architecture (Goodfellow et al., 2014). This new approach, the Adversarial Domain Adaptation Network (ADAN), focuses on the probability distribution function (PDF) of the residuals of the reconstructed neural signals, aligning the residuals' PDF at a later day to the PDF of the first day, when the BMI was built. The alignment of residual PDFs serves as a proxy for the alignment of the PDFs of the neural data and of their latent representation across multiple days. We show that this method results in a significantly more stable performance of the BMI over time than the stability achieved using several other domain adaptation methods. The use of an ADAN thus results in a BMI that remains stable and consistent to the user over long periods of time. A successful domain adaptation of the neural data eliminates the need for frequent recalibration of the BMI, which remains fixed. This substantially alleviates the cognitive burden on the user, who would no longer need to learn novel strategies to compensate for a changing interface.
RELATED WORK
Current approaches to solving the BMI stability problem include gradually updating interface parameters using an exponentially weighted sliding average (Orsborn et al., 2012; Dangi et al., 2013), automatically adjusting interface parameters by tracking recording nonstationarities (Bishop et al., 2014) or by retrospectively inferring the user intention among a set of fixed targets (Jarosiewicz et al., 2015), and training the interface with large data volumes collected over a period of several months to achieve robustness against future changes in neural recordings (Sussillo et al., 2016).
Other approaches are based on the assumption that the relationship between latent dynamics and movement intent will remain stable despite changes in the recorded neural signals. Recent studies reveal the potential of latent dynamics for BMI stability. Kao et al. (2017) use past information about population dynamics to partially stabilize a BMI even under severe loss of recorded neurons, by aligning the remaining neurons to previously learned dynamics. Pandarinath et al. (2017) extract a single latent space from concatenating neural recordings over five months, and show that a predictor of movement kinematics based on these latent signals is reliable across all the recorded sessions.
EXPERIMENTAL SETUP
A male rhesus monkey (Macaca mulatta) sat on a chair with the forearm restrained and its hand secured into a padded, custom fit box. A torque cell with six degrees of freedom was mounted onto the box. The monkey was trained to generate isometric torques that controlled a computer cursor displayed on a screen placed at eye-level, and performed a 2D center-out task in which the cursor moved from a central target to one of eight peripheral targets equally spaced along a circle centered on the central target (Figure 1A). To record neural activity, we implanted a 96-channel microelectrode array (Blackrock Microsystems, Salt Lake City, Utah) into the hand area of primary motor cortex (M1). Prior to implanting the array, we intraoperatively identified the hand area of M1 through sulcal landmarks, and by stimulating the surface of the cortex to elicit twitches of the wrist and hand muscles. We also implanted electrodes in 14 muscles of the forearm and hand, allowing us to record the electromyograms (EMGs) that quantify the level of activity in each of the muscles. Data was collected in five experimental sessions spanning 16 days. All methods were approved by Northwestern University's IACUC and carried out in accordance with the Guide for the Care and Use of Laboratory Animals.
METHODS
COMPUTATIONAL INTERFACE
Our goal is to reliably predict the actual patterns of muscle activity during task execution, based on the recorded neural signals. Similar real-time predictions of kinematics are the basis of BMIs used to provide some degree of control of a computer cursor or a robotic limb to a paralyzed person (Taylor et al., 2002; Hochberg et al., 2006; Collinger et al., 2013). Predictions of muscle activity have been used to control the intensity of electrical stimulation of muscles that are temporarily paralyzed by a pharmacological peripheral nerve block (Ethier et al., 2012), a procedure that effectively bypasses the spinal cord to restore voluntary control of the paralyzed muscles. Similar methods have been attempted recently in humans (Bouton et al., 2016; Ajiboye et al., 2017).

Figure 1: Experimental setup and methods. A. The isometric wrist center-out task with its eight targets, color coded. BMI schematics: recorded neural activity predicts muscle activity. B. The BMI consists of two networks: a neural AE and an EMG predictor. Recorded neural activity is binned and smoothed to provide an input to the AE. The activity of the low-dimensional latent space provides an input to the predictor of muscle activity. C. The ADAN architecture that aligns the firing rates of day-k to those of day-0, when the BMI was built.
The BMI is a computational interface that transforms neural signals into command signals for movement control, in this case the EMG patterns. Here we propose an interface that consists of two components, a neural Auto-Encoder (AE) and an EMG predictor (Figure 1B). The AE is a fully connected multilayer network consisting of an input layer, five layers of hidden units, and an output layer. The reconstruction aims at minimizing the mean square error (MSE) between input and output signals. Units in the latent and output layers implement linear readouts of the activity of the preceding layer. Units in the remaining hidden layers implement a linear readout followed by an exponential nonlinearity. To provide inputs to the AE, we start with neural data $s_t$ consisting of the spike trains recorded from $n$ electrodes at time $t$. We bin neural spikes at 50 ms intervals and apply a Gaussian filter with a standard deviation of 125 ms to the binned spike counts to obtain $n$-dimensional smoothed firing rates $x_t$. The output layer provides $n$-dimensional estimates $\hat{x}_t$ of the inputs $x_t$. The latent layer activity $z_t$ consists of $l$ latent variables, with $l < n$.
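As an illustration of the preprocessing just described, the following sketch bins spike trains at 50 ms and smooths them with a 125 ms Gaussian filter. The bin and filter widths come from the text; the function name and array layout are our own choices, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_firing_rates(spike_times, duration_s, bin_ms=50.0, sigma_ms=125.0):
    """Bin spikes at 50 ms and smooth with a 125 ms Gaussian, as in the text.

    spike_times: list of n arrays of spike times (seconds), one per electrode.
    Returns an array of shape (n_bins, n) of smoothed firing rates x_t.
    """
    n_bins = int(duration_s * 1000.0 / bin_ms)
    edges = np.linspace(0.0, duration_s, n_bins + 1)
    counts = np.stack(
        [np.histogram(ts, bins=edges)[0] for ts in spike_times], axis=1
    ).astype(np.float64)
    # Gaussian std expressed in bins: 125 ms / 50 ms = 2.5 bins.
    return gaussian_filter1d(counts, sigma=sigma_ms / bin_ms, axis=0)
```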
The EMG data $y_t$ is the envelope of the muscle activity recorded from $m$ muscles, with $m < n$. The $l$-dimensional latent activity $z_t$ is mapped onto the $m$-dimensional EMGs through a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) layer followed by a linear layer, $y_t = W^T \mathrm{LSTM}(z_t)$. To train the model, we minimize a loss function $L$ that simultaneously accounts for two losses: $L_x: \mathbb{R}^n \rightarrow \mathbb{R}^+$ is the MSE of the reconstruction of the smooth firing rates, and $L_y: \mathbb{R}^m \rightarrow \mathbb{R}^+$ is the MSE of the EMG predictions:
$$L = \lambda L_x + L_y = \frac{1}{T} \sum_{t=1}^{T} \left( \lambda \, \| \hat{x}_t - x_t \|^2 + \| \hat{y}_t - y_t \|^2 \right) \qquad (1)$$
Here $T$ is the number of time samples. The factor $\lambda$ that multiplies the AE loss adjusts for the different units and different value ranges of firing rates and EMGs; in addition, it equalizes the contributions of the two terms in the loss function so that the learning algorithm does not prioritize one over the other. The value of $\lambda$ is updated for each new training iteration; it is computed as the ratio $\lambda = L_y / L_x$ of the respective losses at the end of the preceding iteration. Once the neural AE and the EMG predictor networks have been trained, their weights remain fixed.
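For concreteness, the sketch below implements the joint objective of equation (1) with a neural AE and an LSTM-based EMG predictor, including the adaptive update of $\lambda$. The network depth and layer widths are reduced for brevity (the paper's AE has five hidden layers), and all module names, sizes, and training details are our assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralAE(nn.Module):
    """Fully connected AE: exponential hidden units, linear latent/output readouts."""
    def __init__(self, n=96, l=10, hidden=64):
        super().__init__()
        self.enc1, self.enc2 = nn.Linear(n, hidden), nn.Linear(hidden, l)
        self.dec1, self.dec2 = nn.Linear(l, hidden), nn.Linear(hidden, n)

    def forward(self, x):                          # x: (batch, T, n)
        z = self.enc2(torch.exp(self.enc1(x)))     # linear readout + exponential
        x_hat = self.dec2(torch.exp(self.dec1(z)))
        return z, x_hat

class EMGPredictor(nn.Module):
    def __init__(self, l=10, m=14, hidden=50):
        super().__init__()
        self.lstm = nn.LSTM(l, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, m)        # y_t = W^T LSTM(z_t)

    def forward(self, z):
        h, _ = self.lstm(z)
        return self.readout(h)

def training_step(ae, predictor, opt, x, y, lam):
    """One simultaneous update of AE and predictor under equation (1)."""
    z, x_hat = ae(x)
    y_hat = predictor(z)
    L_x = ((x_hat - x) ** 2).mean()
    L_y = ((y_hat - y) ** 2).mean()
    loss = lam * L_x + L_y
    opt.zero_grad(); loss.backward(); opt.step()
    # lambda for the next iteration is the ratio of the current losses
    return (L_y / L_x).detach()
```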
DOMAIN ADAPTATION
To stabilize a fixed BMI, we need to align the latent space of later days to that of the first day, when the fixed interface was initially built. This step is necessary to provide statistically stationary inputs to the EMG predictor. We first use two different approaches to align latent variables across days: Canonical Correlation Analysis (CCA) between latent trajectories and Kullback-Leibler (KL) divergence minimization between latent distributions. We then use an ADAN to align either the distribution of latent variables or the distributions of the residuals of the reconstructed neural data, the latter as a proxy for the alignment of the neural and latent variables.
CANONICAL CORRELATION ANALYSIS (CCA)
Consider the latent activities $Z_0$ corresponding to day-0 and $Z_k$ corresponding to a later day-k; the AE is fixed after being trained with day-0 data. Both $Z_0$ and $Z_k$ are matrices of dimension $l$ by $8\tau$, where $l$ is the dimensionality of the latent space and $\tau$ is the average time duration of each trial; the factor of 8 arises from concatenating the averaged latent activities for each of the eight targets. The goal of CCA is to find a linear transformation of the latent variables $Z_k$ so that they are maximally correlated with a linear transformation of the latent variables $Z_0$ (Bach & Jordan, 2002). This well established approach, involving only linear algebra, has been successfully applied to the analysis of M1 neural data (Sussillo et al., 2015; Gallego et al., 2017b; Russo et al., 2018). In summary, the analysis starts with a QR decomposition of the transposed latent activity matrices,
$$Z_0^T = Q_0 R_0, \qquad Z_k^T = Q_k R_k.$$
Next, we construct the inner product matrix of $Q_0$ and $Q_k$, and perform a singular value decomposition to obtain $Q_0^T Q_k = U S V^T$. The new latent space directions along which correlations are maximized are given by $M_0 = R_0^{-1} U$ and $M_k = R_k^{-1} V$, respectively. The implementation of CCA requires a one-to-one correspondence between data points in the two sets; this restricts its application to cases where neural data can be matched across days. Matching is achieved here through the repeated execution of highly stereotypic movements; the correspondence is then established by pairing average trials to a given target across different days. In a real-life scenario, motor behaviors are not structured and moment-to-moment movement intent is less clear, interfering with the possibility of establishing such a correspondence. Alignment using CCA requires a supervised calibration through the repetition of stereotyped tasks, but ideally the alignment would be achieved based on data obtained during natural, voluntary movements. A successful unsupervised approach to the alignment problem is thus highly desirable.
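The QR/SVD recipe above translates directly into a few lines of NumPy. Variable names follow the notation in the text, but the function itself is our own sketch.

```python
import numpy as np

def cca_align(Z0, Zk):
    """CCA alignment of latent trajectories.

    Z0, Zk: arrays of shape (l, 8*tau) of day-0 and day-k latents with
    point-to-point correspondence. Returns transforms M0, Mk such that
    M0.T @ Z0 and Mk.T @ Zk are maximally correlated.
    """
    Q0, R0 = np.linalg.qr(Z0.T)          # Z0^T = Q0 R0
    Qk, Rk = np.linalg.qr(Zk.T)          # Zk^T = Qk Rk
    U, S, Vt = np.linalg.svd(Q0.T @ Qk)  # Q0^T Qk = U S V^T
    M0 = np.linalg.solve(R0, U)          # M0 = R0^{-1} U
    Mk = np.linalg.solve(Rk, Vt.T)       # Mk = Rk^{-1} V
    return M0, Mk
```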
KULLBACK-LEIBLER DIVERGENCE MINIMIZATION (KLDM)
For the unsupervised approach, we seek to match the probability distribution of the latent variables of day-k to that of day-0, without a need for the point-to-point correspondence provided in CCA by their time sequence. We use the fixed AE trained on day-0 data to obtain the latent variables $z_0$ on day-0 and $z_k$ on day-k. We then compute the mean and covariance matrix for each of these two empirical distributions, and capture their first and second order statistics by approximating these two distributions by multivariate Gaussians: $p_0(z_0) \sim \mathcal{N}(z_0; \mu_0, \Sigma_0)$ and $p_k(z_k) \sim \mathcal{N}(z_k; \mu_k, \Sigma_k)$. We then minimize the KL divergence between them,
$$D_{KL}(p_k(z_k) \,\|\, p_0(z_0)) = \frac{1}{2}\left[ \mathrm{tr}(\Sigma_0^{-1}\Sigma_k) + (\mu_0 - \mu_k)^T \Sigma_0^{-1}(\mu_0 - \mu_k) - l + \ln\frac{|\Sigma_0|}{|\Sigma_k|} \right] \qquad (2)$$
This process aligns the day-k latent PDF to that of day-0 through two global linear operations: a translation through the match of the means, and a rotation through the match of the eigenvectors of the covariance matrix; a nonuniform scaling follows from the match of the eigenvalues of the covariance matrix.
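Equation (2) can be written as a differentiable loss and minimized with respect to an affine transform of the day-k latents, which implements the translation, rotation, and scaling described above. This is a sketch under our own naming; the text does not specify the optimization procedure.

```python
import torch

def gaussian_kl_to_day0(z_k, mu0, Sigma0):
    """D_KL( N(mu_k, Sigma_k) || N(mu0, Sigma0) ), equation (2).

    z_k: (N, l) batch of (possibly transformed) day-k latents, from which the
    day-k moments are estimated; mu0, Sigma0: fixed day-0 moments.
    """
    l = z_k.shape[1]
    mu_k = z_k.mean(dim=0)
    zc = z_k - mu_k
    Sigma_k = zc.T @ zc / (z_k.shape[0] - 1)
    Sigma0_inv = torch.linalg.inv(Sigma0)
    d = mu0 - mu_k
    return 0.5 * (torch.trace(Sigma0_inv @ Sigma_k)
                  + d @ Sigma0_inv @ d
                  - l
                  + torch.logdet(Sigma0) - torch.logdet(Sigma_k))
```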
To improve on the Gaussian assumption for the distribution of latent variables, we have trained an alternative BMI in which the AE (Figure 1B) is replaced by a Variational AE (Kingma & Welling, 2013). We train the VAE by adding to the interface loss function (equation 1) a regularizer term: the Kullback-Leibler (KL) divergence $D_{KL}(p_0(z_0) \,\|\, q(z_0))$ between the probability distribution $p_0(z_0)$ of the latent activity on day-0 and $q(z_0) = \mathcal{N}(z_0; 0, I)$. The latent variables of the VAE are thus subject to the additional soft constraint of conforming to a Normal distribution.
ADVERSARIAL DOMAIN ADAPTATION NETWORK (ADAN)
In addition to matching the probability distributions of latent variables of day-k to those of day-0, we seek an alternative approach: to match the probability distributions of the residuals of the reconstructed firing rates (Zhao et al., 2016), as a proxy for matching the distributions of the neural recordings and their corresponding latent variables. To this end, we train an ADAN whose architecture is very similar to that of a Generative Adversarial Network (GAN): it consists of two deep neural networks, a distribution alignment module and a discriminator module (Figure 1C).
The discriminator is an AE (Zhao et al., 2016) with the same architecture as the one used for the BMI (Figure 1B). The discriminator parameters $\theta_D$ are initialized with the weights of the BMI AE, trained on the day-0 neural data. The goal of the discriminator is to maximize the difference between the neural reconstruction losses of day-k and day-0.
The distribution alignment module works as an adversary to the discriminator by minimizing the neural reconstruction losses of day-k (Warde-Farley & Bengio, 2017). It consists of a hidden layer with exponential units and a linear readout layer, each with $n$ fully connected units. The aligner parameters $\theta_A$, the weights of the $n$ by $n$ connectivity matrices from input to hidden and from hidden to output, are initialized as the corresponding identity matrices. The aligner module receives as inputs the firing rates $X_k$ of day-k. During training, the gradients through the discriminator bring the output $A(X_k)$ of the aligner closer to $X_0$. The adversarial mechanism provided by the discriminator allows us to achieve this alignment in an unsupervised manner.
To train the ADAN, we need to quantify the reconstruction losses. Given input data $X$, the discriminator outputs $\hat{X} = \hat{X}(X, \theta_D)$, with residuals $R(X, \theta_D) = X - \hat{X}(X, \theta_D)$. Consider the scalar reconstruction losses $r$ obtained by taking the $L_1$ norm of each column of $R$. Let $\rho_0$ and $\rho_k$ be the distributions of the scalar losses for day-0 and day-k, respectively, and let $\mu_0$ and $\mu_k$ be their corresponding means. We measure the dissimilarity between these two distributions by a lower bound to the Wasserstein distance (Arjovsky et al., 2017), provided by the absolute value of the difference between the means: $W(\rho_0, \rho_k) \geq |\mu_0 - \mu_k|$ (Berthelot et al., 2017). The discriminator is trained to learn features that discriminate between the two distributions by maximizing the corresponding Wasserstein distance. The discriminator initialization implies $\mu_k > \mu_0$ when the ADAN training begins. By maximizing $(\mu_k - \mu_0)$, equivalent to minimizing $(\mu_0 - \mu_k)$, this relation is maintained during training. Since scalar residuals and their means are nonnegative, the maximization of $W(\rho_0, \rho_k)$ is achieved by decreasing $\mu_0$ while increasing $\mu_k$.
Given discriminator and aligner parameters $\theta_D$ and $\theta_A$, respectively, the discriminator and aligner loss functions $L_D$ and $L_A$ to be minimized can be expressed as
$$L_D = \mu_0(X_0; \theta_D) - \mu_k(A(X_k; \theta_A); \theta_D) \quad \text{for } \theta_D, \qquad L_A = \mu_k(A(X_k; \theta_A); \theta_D) \quad \text{for } \theta_A \qquad (3)$$
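A minimal sketch of one ADAN update implied by equation (3) is shown below. The aligner architecture and identity initialization follow the text; the discriminator is assumed to be an AE module initialized from the day-0 BMI autoencoder, and the optimizers and names are our assumptions rather than the authors' code.

```python
import torch

class Aligner(torch.nn.Module):
    """n -> n network with exponential hidden units; both weight matrices
    are initialized as the identity, as described in the text."""
    def __init__(self, n=96):
        super().__init__()
        self.h, self.out = torch.nn.Linear(n, n), torch.nn.Linear(n, n)
        torch.nn.init.eye_(self.h.weight); torch.nn.init.zeros_(self.h.bias)
        torch.nn.init.eye_(self.out.weight); torch.nn.init.zeros_(self.out.bias)

    def forward(self, x):
        return self.out(torch.exp(self.h(x)))

def residual_mean(disc, X):
    """Mean L1 residual of the discriminator-AE reconstruction."""
    return (X - disc(X)).abs().sum(dim=1).mean()

def adan_step(aligner, disc, opt_A, opt_D, X0, Xk):
    """One ADAN update on day-0 / day-k firing-rate batches of shape (batch, n)."""
    # Discriminator: L_D = mu_0 - mu_k (maximizes the Wasserstein lower bound).
    L_D = residual_mean(disc, X0) - residual_mean(disc, aligner(Xk).detach())
    opt_D.zero_grad(); L_D.backward(); opt_D.step()

    # Aligner: L_A = mu_k (makes aligned day-k data easy to reconstruct).
    L_A = residual_mean(disc, aligner(Xk))
    opt_A.zero_grad(); L_A.backward(); opt_A.step()
    return L_D.item(), L_A.item()
```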
RESULTS

Figure 2A illustrates the firing rates ($n = 96$), the latent variables ($l = 10$), the reconstructed firing rates, and the actual and predicted (day-0) muscle activity for two representative muscles, a flexor and an extensor, over a set of eight trials (one trial per target location) of a test data set. The overall performance of the interface is summarized in Figure 2B, quantified using the percentage of the variance accounted for (%VAF) for five-fold cross-validated data. The blue bar shows the accuracy of EMG predictions directly from the smooth firing rates; this EMG predictor consists of an LSTM layer with $n$ inputs, followed by a linear layer. The green bar shows the accuracy of EMG predictions from the latent space provided by the trained AE; the predictor now receives only $l$ inputs instead of $n$. The latter predictions are worse; the difference is small but significant (paired t-test, p=0.006).
In contrast, when the EMG predictor is trained simultaneously with the AE (red bar), there is no significant difference (paired t-test, p=0.971) in performance between EMG predictions based on the $n$-dimensional neural activity or on the $l$-dimensional latent variables. Therefore, the supervision of the dimensionality reduction step through the integration of relevant movement information leads to a latent representation that better captures neural variability related to movement intent.
Figure 2: Neural to muscle BMI. A. Example firing rates recorded from the hand area of primary motor cortex while the monkey performed the isometric wrist task; we also show latent variables and reconstructed firing rates. The quality of EMG predictions is illustrated by comparison to actual EMGs for two representative muscles, for each of the eight target directions. B. Performance comparison between EMG predictions from n-dimensional firing rates (blue) and EMG predictions from l-dimensional latent variables, obtained either with training the predictor sequentially (green) or simultaneously (red) with the neural AE. Error bars represent standard deviation of the mean.
Two of the methods used for domain adaptation, CCA and KLDM, achieve alignment based on latent variables; while KLDM explicitly seeks to match first and second order statistics of the latent variables through a Gaussian approximation, CCA aligns the latent variables using a point-to-point correspondence provided by the latent trajectories. The effect of CCA alignment is illustrated in Figure 3A, where we show 2D t-SNE visualizations of 10D latent trajectories. Each trajectory is an average over all trials for a given target. The differences between day-16 and day-0 latent trajectories reflect the impact of turnover in the recorded neurons. Comparison between these two sets of trajectories reveals a variety of transformations, including nonuniform rotation, scaling, and skewing. In spite of the complexity of these transformations, the available point-to-point correspondence along these trajectories allows CCA to achieve a good alignment. The mechanism underlying KLDM alignment is illustrated in Figure 3B, where we show the empirical probability distribution of the latent variables along a randomly chosen, representative direction within the 10D latent space. Results are shown for day-0 (blue), for day-16 (red), and for day-16 after alignment with KLDM (yellow). The effects of using a BMI based on a VAE instead of the AE are shown in Supplementary Figure S1.

Figure 3: Domain adaptation. A. 2D t-SNE visualization of the averaged 10D latent neural trajectories as the monkey performed the isometric wrist task for day-0, day-16 before alignment, and day-16 after alignment with CCA. B. Probability distribution of the 10D latent variables along a randomly chosen, representative direction. We show the distribution at day-0, and the distribution at day-16 before and after alignment using KLDM. C. Probability distribution of the L1 norm of the vector residuals of the reconstructed firing rates for day-0 and day-16, before and after adversarial alignment using ADAN. D. 2D t-SNE visualization of the vector residuals of the reconstructed firing rates for day-0 and day-16, before and after adversarial alignment using ADAN. E. Same as B, but for alignment using ADAN.
To investigate the mechanism underlying ADAN alignment, we focus on the residuals for the reconstruction of firing rates. We show in Figure 3C the 1D distribution of the $L_1$ norm of the vector residuals, and in Figure 3D a 2D t-SNE (Maaten & Hinton, 2008) visualization of the vector residuals based on 1000 randomly sampled data points. Residuals correspond to the errors in firing rate reconstructions using the day-0 fixed AE for both day-0 data (blue) and day-16 data (red). Residuals for the day-16 data after alignment with ADAN are shown in yellow. Figure 3E shows the empirical probability distribution of the latent variables along the same representative, randomly chosen dimension within the 10D latent space used in Figure 3B. Results are shown for latent variables on day-0 using the fixed AE (blue), for the latent variables on day-16 along the same dimension using the same, fixed AE (red), and for day-16 latent variables after alignment with ADAN (yellow). A 2D t-SNE visualization of latent variables aligned with ADAN is shown in comparison to the results of a simple center-and-scale alignment in Supplementary Figure S2.
The performance of the BMI before and after domain adaptation with CCA, KLDM, and ADAN is summarized in Figure 4A and quantified using the %VAF in EMG predictions. We report mean and standard deviation for five-fold cross-validated data. Blue symbols indicate the performance of an interface that is updated on each day; this provides an upper bound for the potential benefits of neural domain adaptation. Red symbols illustrate the natural deterioration in the performance of a fixed interface due to the gradual deterioration of neural recordings. Green, orange, and purple symbols indicate the extent to which the performance of a fixed interface improves after alignment using CCA, KLDM, and ADAN, respectively. The comparable performance of CCA and KLDM reflects that both methods achieve alignment based on latent statistics; the use of ADAN directly for latent space alignment does not produce better results than these two methods. In contrast, when ADAN is used for alignment based on residual statistics, interface stability improves. This ADAN provides a better alignment because the residuals amplify the mismatch that results when a fixed day-0 AE is applied to later day data (see Figures 3C and D). Although the improvement achieved with ADAN is small, about 6%, it is statistically significant, and more importantly, it is meaningful to the BMI user. We have been unable to achieve this degree of improvement with any of the many other domain adaptation approaches we tried. This improvement is even more remarkable given that domain adaptation with ADAN requires a surprisingly small amount of data. Figure 4B shows the percentage improvement in EMG predictions as a function of the amount of training data. Subsequent symbols are obtained by adding 6 s of data (120 samples) to the training set, and computing the average percentage improvement for the entire day (20 min recordings), for all days after day-0. EMG prediction accuracy saturates at ∼1 min; this need for only a small training set is ideal for practical applications.
CONCLUSION
We address the problem of stabilizing a fixed Brain-Machine Interface against performance deterioration due to the loss and turnover of recorded neural signals. We introduce a new approach to extracting a low-dimensional latent representation of the neural signals while simultaneously inferring movement intent. We then implement various domain adaptation methods to stabilize the latent representation over time, including Canonical Correlation Analysis and the minimization of a Kullback-Leibler divergence. These two methods provide comparable improvement in the performance of the interface. We find that an Adversarial Domain Adaptation Network trained to match the empirical probability distribution of the residuals of the reconstructed neural recordings restores the latent representation of neural trajectories and outperforms the two methods based on latent variables, while requiring remarkably little data to solve the domain adaptation problem. This is the first successful application of an unsupervised method to the problem of aligning neural recordings in a manner that is not task specific, and thus potentially applicable to unconstrained movements.
SUPPLEMENTARY MATERIAL
Figure S1: Variational Autoencoder. A. EMG prediction performance using a different BMI, based on a VAE decoder, in comparison to the performance of a BMI based on the traditional autoencoder. Blue symbols represent the sustained performance of an interface retrained on a daily basis. Red symbols illustrate the deterioration in the performance of a fixed interface without domain adaptation. Orange symbols represent the performance of a fixed interface when the latent variables are aligned using KLDM. For each of these three cases, solid lines represent the performance of an AE based BMI, and dashed lines that of a VAE based BMI. Error bars represent standard deviation of the mean. B. Probability distribution of the 10D latent variables along the same dimension used in Figure 3B, now obtained with the fixed VAE trained on day-0. We show the distribution at day-0, and the distribution at day-16 before and after alignment using KLDM. In comparison to Figure 3B, the use of a VAE greatly improves the Gaussian nature of the latent variables' distribution. However, this additional constraint in the autoencoder results in a slight deterioration of the BMI's ability to predict EMGs, as shown in A.

Figure S2: Visualization of the probability distribution of latent variables in 2D using t-SNE. A. Latent variables on day-0. B. Latent variables on day-16, before alignment. C. Latent variables on day-16, after alignment to those of day-0 using T&S: a global translation to match the respective means followed by a global scaling to match the respective variances (yellow). Also shown, latent variables on day-0 (blue) on the same projection. D. Latent variables on day-16, after alignment to those of day-0 using ADAN (yellow). Also shown, latent variables on day-0 (blue) on the same projection.
Figure 4: A. EMG prediction performance using a fixed BMI decoder. Blue symbols represent the sustained performance of interfaces retrained on a daily basis. Red symbols illustrate the deterioration in performance of a fixed interface without domain adaptation. The performance of a fixed interface after domain adaptation is shown for CCA (green), KLDM (orange), and ADAN (purple). Error bars represent standard deviation of the mean. B. Average improvements in EMG prediction performance for alignment using ADAN as a function of the amount of training data needed for domain adaptation at the beginning of each day, averaged over all days after day-0. Shading represents standard deviation of the mean.
REFERENCES

A. Bolu Ajiboye, Francis R. Willett, Daniel R. Young, William D. Memberg, Brian A. Murphy, Jonathan P. Miller, Benjamin L. Walter, Jennifer A. Sweet, Harry A. Hoyen, and Michael W. Keith. Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration. The Lancet, 389(10081):1821-1830, 2017.
Kim D. Anderson. Targeting recovery: priorities of the spinal cord-injured population. Journal of Neurotrauma, 21(10):1371-1383, 2004.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proceedings of Machine Learning Research, volume 70, pp. 214-223, 2017.
Francis R. Bach and Michael I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3(Jul):1-48, 2002.
James C. Barrese, Naveen Rao, Kaivon Paroo, Corey Triebwasser, Carlos Vargas-Irwin, Lachlan Franquemont, and John P. Donoghue. Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates. Journal of Neural Engineering, 10(6):066014, 2013.
David Berthelot, Thomas Schumm, and Luke Metz. BEGAN: boundary equilibrium generative adversarial networks. arXiv:1703.10717, 2017.
William Bishop, Cynthia C. Chestek, Vikash Gilja, Paul Nuyujukian, Justin D. Foster, Stephen I. Ryu, Krishna V. Shenoy, and Byron M. Yu. Self-recalibrating classifiers for intracortical brain-computer interfaces. Journal of Neural Engineering, 11(2):026001, 2014.
Christine H. Blabe, Vikash Gilja, Cindy A. Chestek, Krishna V. Shenoy, Kim D. Anderson, and Jaimie M. Henderson. Assessment of brain machine interfaces from the perspective of people with paralysis. Journal of Neural Engineering, 12(4):043002, 2015.
Chad E. Bouton, Ammar Shaikhouni, Nicholas V. Annetta, Marcia A. Bockbrader, David A. Friedenberg, Dylan M. Nielson, Gaurav Sharma, Per B. Sederberg, Bradley C. Glenn, and W. Jerry Mysiw. Restoring cortical control of functional movement in a human with quadriplegia. Nature, 533(7602):247, 2016.
Jennifer L. Collinger, Brian Wodlinger, John E. Downey, Wei Wang, Elizabeth C. Tyler-Kabara, Douglas J. Weber, Angus J. C. McMorland, Meel Velliste, Michael L. Boninger, and Andrew B. Schwartz. High-performance neuroprosthetic control by an individual with tetraplegia. The Lancet, 381(9866):557-564, 2013.
Siddharth Dangi, Amy L. Orsborn, Helene G. Moorman, and Jose M. Carmena. Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces. Neural Computation, 25(7):1693-1731, 2013.
Adam S. Dickey, Aaron Suminski, Yali Amit, and Nicholas G. Hatsopoulos. Single-unit stability using chronically implanted multielectrode arrays. Journal of Neurophysiology, 102(2):1331-1339, 2009.
Christian Ethier, Emily R. Oby, M. J. Bauman, and Lee E. Miller. Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature, 485(7398):368, 2012.
Juan A. Gallego, Matthew G. Perich, Lee E. Miller, and Sara A. Solla. Neural manifolds for the control of movement. Neuron, 94(5):978-984, 2017a.
Juan A. Gallego, Matthew G. Perich, Stephanie N. Naufel, Christian Ethier, Sara A. Solla, and Lee E. Miller. Multiple tasks viewed from the neural manifold: Stable control of varied behavior. bioRxiv:176081, 2017b.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Leigh R. Hochberg, Mijail D. Serruya, Gerhard M. Friehs, Jon A. Mukand, Maryam Saleh, Abraham H. Caplan, Almut Branner, David Chen, Richard D. Penn, and John P. Donoghue. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442(7099):164, 2006.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Beata Jarosiewicz, Anish A. Sarma, Daniel Bacher, Nicolas Y. Masse, John D. Simeral, Brittany Sorice, Erin M. Oakley, Christine Blabe, Chethan Pandarinath, and Vikash Gilja. Virtual typing by people with tetraplegia using a self-calibrating intracortical brain-computer interface. Science Translational Medicine, 7(313):313ra179, 2015.
Jonathan C. Kao, Stephen I. Ryu, and Krishna V. Shenoy. Leveraging neural dynamics to extend functional lifetime of brain-machine interfaces. Scientific Reports, 7(1):7395, 2017.
Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. arXiv:1312.6114, 2013.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.
Emily R. Oby, Christian Ethier, and Lee E. Miller. Movement representation in the primary motor cortex and its contribution to generalizable EMG predictions. Journal of Neurophysiology, 109(3):666-678, 2012.
Amy L. Orsborn, Siddharth Dangi, Helene G. Moorman, and Jose M. Carmena. Closed-loop decoder adaptation on intermediate time-scales facilitates rapid BMI performance improvements independent of decoder initialization conditions. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20(4):468-477, 2012.
Chethan Pandarinath, Daniel J. O'Shea, Jasmine Collins, Rafal Jozefowicz, Sergey D. Stavisky, Jonathan C. Kao, Eric M. Trautmann, Matthew T. Kaufman, Stephen I. Ryu, Leigh R. Hochberg, et al. Inferring single-trial neural population dynamics using sequential auto-encoders. bioRxiv:152884, 2017.
Abigail A. Russo, Sean R. Bittner, Sean M. Perkins, Jeffrey S. Seely, Brian M. London, Antonio H. Lara, Andrew Miri, Najja J. Marshall, Adam Kohn, Thomas M. Jessell, et al. Motor cortex embeds muscle-like commands in an untangled population response. Neuron, 97(4):953-966, 2018.
Patrick T. Sadtler, Kristin M. Quick, Matthew D. Golub, Steven M. Chase, Stephen I. Ryu, Elizabeth C. Tyler-Kabara, Byron M. Yu, and Aaron P. Batista. Neural constraints on learning. Nature, 512(7515):423, 2014.
Krishna V. Shenoy, Maneesh Sahani, and Mark M. Churchland. Cortical control of arm movements: a dynamical systems perspective. Annual Review of Neuroscience, 36:337-359, 2013.
David Sussillo, Mark M. Churchland, Matthew T. Kaufman, and Krishna V. Shenoy. A neural network that finds a naturalistic solution for the production of muscle activity. Nature Neuroscience, 18(7):1025, 2015.
David Sussillo, Sergey D. Stavisky, Jonathan C. Kao, Stephen I. Ryu, and Krishna V. Shenoy. Making brain-machine interfaces robust to future neural variability. Nature Communications, 7:13749, 2016.
Dawn M. Taylor, Stephen I. Helms Tillery, and Andrew B. Schwartz. Direct cortical control of 3D neuroprosthetic devices. Science, 296(5574):1829-1832, 2002.
David Warde-Farley and Yoshua Bengio. Improving generative adversarial networks with denoising feature matching. In International Conference on Learning Representations (ICLR), 2017.
Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, and Maneesh Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. In Advances in Neural Information Processing Systems, pp. 1881-1888, 2009.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv:1609.03126, 2016. |
60,441,438 | LEARNING WHAT YOU CAN DO BEFORE DOING ANYTHING | Intelligent agents can learn to represent the action spaces of other agents simply by observing them act. Such representations help agents quickly learn to predict the effects of their own actions on the environment and to plan complex action sequences. In this work, we address the problem of learning an agent's action space purely from visual observation. We use stochastic video prediction to learn a latent variable that captures the scene's dynamics while being minimally sensitive to the scene's static content. We introduce a loss term that encourages the network to capture the composability of visual sequences and show that it leads to representations that disentangle the structure of actions. We call the full model with composable action representations Composable Learned Action Space Predictor (CLASP). We show the applicability of our method to synthetic settings and its potential to capture action spaces in complex, realistic visual settings. When used in a semi-supervised setting, our learned representations perform comparably to existing fully supervised methods on tasks such as action-conditioned video prediction and planning in the learned action space, while requiring orders of magnitude fewer action labels. 1 * Equal contribution. Ordering determined by a coin flip. 1 Project website: https://daniilidis-group.github.io/learned_action_spaces | [
204922497,
5037032,
16326763,
14124313,
9128667,
24069181
] | LEARNING WHAT YOU CAN DO BEFORE DOING ANYTHING
Oleh Rybkin
University of Pennsylvania
Karl Pertsch
University of Southern California
Konstantinos G Derpanis
Ryerson University
Samsung AI Centre Toronto
Kostas Daniilidis
University of Pennsylvania
Andrew Jaegle
University of Pennsylvania
arXiv:1806.09655v2 [cs.LG], 12 Feb 2019
Intelligent agents can learn to represent the action spaces of other agents simply by observing them act. Such representations help agents quickly learn to predict the effects of their own actions on the environment and to plan complex action sequences. In this work, we address the problem of learning an agent's action space purely from visual observation. We use stochastic video prediction to learn a latent variable that captures the scene's dynamics while being minimally sensitive to the scene's static content. We introduce a loss term that encourages the network to capture the composability of visual sequences and show that it leads to representations that disentangle the structure of actions. We call the full model with composable action representations Composable Learned Action Space Predictor (CLASP). We show the applicability of our method to synthetic settings and its potential to capture action spaces in complex, realistic visual settings. When used in a semi-supervised setting, our learned representations perform comparably to existing fully supervised methods on tasks such as action-conditioned video prediction and planning in the learned action space, while requiring orders of magnitude fewer action labels. 1 * Equal contribution. Ordering determined by a coin flip. 1 Project website: https://daniilidis-group.github.io/learned_action_spaces
INTRODUCTION
Agents behaving in real-world environments rely on perception to judge what actions they can take and what effect these actions will have. Purely perceptual learning may play an important role in how these action representations are acquired and used. In this work, we focus on the problem of learning an agent's action space from unlabeled visual observations. To see the usefulness of this strategy, consider an infant that is first learning to walk. From around 10 months of age, infants rapidly progress from crawling, to irregular gaits with frequent falling, and finally to reliable locomotion (Adolph et al. (2012)). But before they first attempt to walk, infants have extensive sensory exposure to adults walking. Unsupervised learning from sensory experience of this type appears to play a critical role in how humans acquire representations of actions before they can reliably reproduce the corresponding behaviour (Ullman et al. (2012)). Infants need to relate the set of motor primitives they can generate to the action spaces exploited by adults (Dominici et al. (2011)), and a representation acquired by observation may allow an infant to more efficiently learn to produce natural, goal-directed walking behavior.
Reinforcement learning (RL) provides an alternative to the (passive) unsupervised learning approach as it implicitly discovers an agent's action space and the consequences of its actions. Recent breakthroughs in model-free and model-based RL suggest that end-to-end training can be used to learn mappings between sensory input and actions (Mnih et al. (2015); Lillicrap et al. (2016); Levine et al. (2016); Finn & Levine (2017); Schulman et al. (2015)). However, these methods require active observations and the sensorimotor mappings learned in this way cannot be easily generalized to new agents with different control interfaces. Methods for sensorimotor learning from purely visual data may facilitate learning where action information is not available, such as when using video data collected from the Internet. Such methods may also be useful for imitation learning, where ground truth actions are often hard or impossible to collect other than by visual observation (Finn et al. (2017); Pathak et al. (2018)). More generally, learning from passive observations may make it easier to reuse action representations between systems with different effectors and goals. The representations learned by unsupervised methods are invariant to these choices because the model does not have access to motor commands or goals during training.

Figure 1: a) Without requiring labels, our model learns to represent the action in sequences like these identically. We train a representation $z$ to capture the dynamics of the scene and its compositional structure: applying ($z_1$ and $z_2$) should have the same effect as applying the composed representation $g(z_1, z_2)$. These properties capture the fact that effector systems, such as a robot arm, use the same composable action space in many different states. b) The learned action space $z$ recovered by our method (PCA visualization). Points are colored by the true action $u$: true actions can be easily decoded from $z$, validating that the structure of the action space has been captured.
In this work, we evaluate the proposal that learning what you can do before doing anything can lead to action space representations that make subsequent learning more efficient. To this end, we develop a model that learns to represent an agent's action space given only unlabeled videos of the agent. The resulting representation enables direct planning in the latent space. Given a small number of action-labeled sequences we can execute the plan by learning a simple mapping from latent action representations to the agent's controls. This representation may be analogous to those in the parietal and premotor areas of cortex, which include populations of neurons that represent the structure of actions produced both by the self and by others (Rizzolatti et al. (1996); Romo et al. (2004)) and that are critical for reliably producing flexible, voluntary motor control (see Kandel et al. (2012), Chapter 38). In the brain, representations of this kind could plausibly be learned using specialized loss functions (Marblestone et al. (2016)) whose effect is to induce the prior needed to determine the structure of actions in observation data.
In contrast to most approaches to unsupervised learning of dynamics, which focus on learning the statistical structure of the environment, we focus on disentangling action information from the instantaneous state of the environment (Fig. 1). We base our work on recent stochastic video prediction methods (Babaeizadeh et al. (2018); Denton & Fergus (2018); Lee et al. (2018)) and impose two properties on the latent representation. First, we train the representation to be minimal, i.e. containing minimal information about the current world state. This forces the representation to focus on dynamic properties of the sensory input. A similar objective has been used in previous work to constrain the capacity of video prediction models (Denton & Fergus (2018)). Second, we train the representation to be composable by introducing a novel loss term that enforces that the cumulative effect of a sequence of actions can be computed from the individual actions' representations (Fig. 1, left). Composability encourages disentangling: as a composed representation does not have access to the static content of the intermediate frames, a representation is composable only if the individual action representations are disentangled from the static content. Taken together, these two properties lead to a representation of sensory dynamics that captures the structure of the agent's actions.
We make the following three contributions. First, we introduce a method for unsupervised learning of an agent's action space by training the latent representation of a stochastic video prediction model for the desiderata of minimality and composability. Second, we show that our method learns a representation of actions that is independent of scene content and visual characteristics on (i) a simulated robot with one degree of freedom and (ii) the BAIR robot pushing dataset (Ebert et al. (2017)). Finally, we demonstrate that the learned representation can be used for action-conditioned video prediction and planning in the learned action space, while requiring orders of magnitude fewer action-labeled videos than extant supervised methods.
RELATED WORK
Learning structured and minimal representations. Several groups have recently shown how an adaptation of the variational autoencoder (VAE, Kingma & Welling (2014); Rezende et al. (2014)) can be used to learn representations that are minimal in the information-theoretic sense. Alemi et al. (2017) showed that the Information Bottleneck (IB) objective function (Tishby et al. (1999); Shwartz-Ziv & Tishby (2017)) can be optimized with a variational approximation that takes the form of the VAE objective with an additional weighting hyperparameter. In parallel, Higgins et al. (2017) showed that a similar formulation can be used to produce disentangled representations. The connection between disentanglement and minimality of representations was further clarified by Burgess et al. (2018). In this work, we apply the IB principle to temporal models to enforce minimality of the representation.
Several groups have proposed methods to learn disentangled representations of static content and pose from video (Denton & Birodkar (2017); Tulyakov et al. (2018)). Jaegle et al. (2018) learn a motion representation by enforcing that the motion acts on video frames as a group-theoretic action.
In contrast, we seek a representation that disentangles the motion from the static pose. Thomas et al. (2017) attempt to learn a disentangled representation of controllable factors of variation.
While the goals of their work are similar to ours, their model relies on active learning and requires an embodied agent with access to the environment. In contrast, our model learns factors of variation purely from passive temporal visual observations, and thus can be applied even if access to the environment is costly or impossible.
Unsupervised learning with video data. Several recent works have exploited temporal information for representation learning. Srivastava et al. (2015) used the Long Short-Term Memory (LSTM, Hochreiter & Schmidhuber (1997)) recurrent neural network architecture to predict future frames and showed that the learned representation was useful for action recognition. Vondrick et al. (2016) showed that architectures using convolutional neural networks (CNNs) can be used to predict actions and objects several seconds into the future. A related line of work focuses on learning action-conditioned predictive models. In contrast, our focus is on the unsupervised discovery of the space of possible actions from video data.
Our model is inspired by methods for stochastic video prediction that, given a sequence of past frames, capture the multimodal distribution of future images (Goroshin et al. (2015); Henaff et al. (2017)). We use the recently proposed recurrent latent variable models based on the variational autoencoder (Babaeizadeh et al. (2018); Denton & Fergus (2018); Lee et al. (2018)). We develop these methods and propose a novel approach to unsupervised representation learning designed to capture an agent's action space.
Sensorimotor abstractions for behavior. There is a long history of work developing sensorimotor representations for applications in reinforcement learning and robotics. Previous work in this domain has primarily focused on introducing hand-crafted abstractions and hierarchies to make sensorimotor mappings more generalizable. Methods for aggregating low-level controls into higher-level representations on which planning and RL can be performed are well-studied in the robotics literature: notable examples include the options framework for hierarchical RL (Sutton et al. (1999); Bacon et al. (2017)), dynamic motion primitives for manipulation (Schaal et al. (2005); Schaal (2006); Niekum et al. (2015)), and recent work that abstracts a learned policy away from low-level control and perception to ease simulation-to-real transfer (Clavera & Abbeel (2017); Müller et al. (2018)).
Other work has learned to separate robot-instance specific controls from task-related skills through modular policies, but this work does not enforce any structure onto the intermediate representation and requires extensive interaction with the environment (Devin et al. (2017)).
APPROACH
In this section, we describe our architecture for learning an action representation that is minimal and composable. In Sec. 3.1, we describe a variational video prediction model similar to that of Denton & Fergus (2018) that provides us with a framework for learning a latent representation $z_t$ at time $t$ of the change between the past and the current frames. No labeled actions are considered at this stage. In Sec. 3.2, we introduce an unsupervised method for imposing composability of the latent that allows us to recover a structured representation that defines CLASP. To verify that the learned representation corresponds to the executed control, we show that we can learn a bijective mapping between the latent representation and the control output executed at that time using a small number of labeled data points (Sec. 3.3). In the experimental section, we describe how the learned bijective mapping can be used for tasks such as action-conditioned video prediction (Sec. 4.2) and planning in the learned action space (Sec. 4.3).
VIDEO PREDICTION MODEL
At the core of our method is a recurrent latent variable model for video prediction based on a temporal extension of the conditional VAE proposed by Chung et al. (2015). We consider the generative model shown in Fig. 2 (left). At each timestep $t$, the model outputs a latent variable $z_t \sim p(z) = \mathcal{N}(0, I)$ associated with this timestep. Given a history of frames $x_{1:t-1}$ and latent samples $z_{2:t}$, the generative distribution over possible next frames is given by
$$x_t \sim p_\theta(x_t \,|\, x_{1:t-1}, z_{2:t}) = \mathcal{N}(\mu_\theta(x_{1:t-1}, z_{2:t}), I).$$
In practice, we generate the next frame by taking the mean of the conditional distribution: $\hat{x}_t = \mu_\theta(x_{1:t-1}, z_{2:t})$.
To optimize the log-likelihood of this generative model, we introduce an additional network approximating the posterior of the latent variable
$$z_t \sim q_\phi(z_t \,|\, x_t, x_{t-1}) = \mathcal{N}(\mu_\phi(x_t, x_{t-1}), \sigma_\phi(x_t, x_{t-1})).$$
We can optimize the model using the variational lower bound of the log-likelihood in a formulation similar to the original VAE. However, as has been shown recently by Alemi et al. (2018), the standard VAE formulation does not constrain the amount of information contained in the latent variable $z$. To overcome this, and to learn a minimal representation of $z$, we reformulate the standard VAE objective in terms of the Information Bottleneck (IB) (Shwartz-Ziv & Tishby (2017)).
IB minimizes the mutual information, $I$, between the action representation, $z_t$, and input frames, $x_{t-1:t}$, while maximizing the ability to reconstruct the frame $x_t$ as measured by the mutual information between $(z_t, x_{t-1})$ and $x_t$:
max p θ ,q φ I((z t , x t−1 ), x t ) − β z I(z t , x t−1:t ).(1)
The two components of the objective are balanced with a Lagrange multiplier β_z. When the value of β_z is higher, the model learns representations that are more efficient, i.e. minimal in the information-theoretic sense. We use this property to achieve our first objective of minimality of z.
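For intuition, the two mutual-information terms are handled with the standard variational bounds (a sketch following the usual variational IB argument; the authors' exact derivation is in the cited appendix):

```latex
% Reconstruction term: lower-bounded by a decoder log-likelihood (H(x_t) is constant).
% Compression term: upper-bounded by a KL to the prior, which becomes the beta-weighted penalty.
\begin{align}
I\big((z_t, x_{t-1}),\, x_t\big) &\ge \mathbb{E}_{q_\phi}\big[\log p_\theta(x_t \mid x_{t-1}, z_t)\big] + H(x_t), \\
I\big(z_t,\, x_{t-1:t}\big) &\le \mathbb{E}_{x_{t-1:t}}\, D_{\mathrm{KL}}\big(q_\phi(z_t \mid x_{t-1:t}) \,\|\, p(z)\big).
\end{align}
```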
The variational IB (Alemi et al. (2017)) provides a variational approximation of the IB objective, which simply takes the form of the original VAE objective with an additional constant β_z. Aggregating over a sequence of frames, the video prediction objective for our model is given by:
$$\mathcal{L}^{\text{pred}}_{\theta,\phi}(x_{1:T}) = \sum_{t=1}^{T} \Big[\, \mathbb{E}_{q_\phi(z_{2:t} \mid x_{1:t})} \log p_\theta(x_t \mid x_{1:t-1}, z_{2:t}) - \beta_z D_{\mathrm{KL}}\big(q_\phi(z_t \mid x_{t-1:t}) \,\|\, p(z)\big) \Big]. \quad (2)$$
The full derivation of the variational lower bound is given in the appendix of Denton & Fergus (2018)². The full model for one prediction step is shown in the left part of Fig. 2.
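In code, the per-timestep training loss of Eq. (2) reduces to a reconstruction term (for a Gaussian likelihood with identity covariance this is a squared error, up to constants) plus a β_z-weighted closed-form KL; a hedged sketch with illustrative tensor names:

```python
import torch

def prediction_loss_step(x_t, x_hat_t, mu_q, sigma_q, beta_z=1e-2):
    """One term of Eq. (2); sigma_q holds per-dimension posterior std deviations (> 0)."""
    recon = 0.5 * ((x_t - x_hat_t) ** 2).sum()  # -log p_theta(x_t | ...) up to constants
    # Closed-form D_KL( N(mu_q, diag(sigma_q^2)) || N(0, I) )
    kl = 0.5 * (mu_q ** 2 + sigma_q ** 2 - 2 * torch.log(sigma_q) - 1).sum()
    return recon + beta_z * kl
```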
CLASP: LEARNING ACTION REPRESENTATIONS WITH COMPOSABILITY
Given a history of frames, the latent variable z_t represents the distribution over possible next frames. It can thus be viewed as a representation of possible changes between the previous and the current frame. We will associate the latent variable z_t with the distribution of such changes. In video data of an agent executing actions in an environment, the main source of change is the agent itself. Our model is inspired by the observation that a natural way to represent z_t in such settings is by the agents' actions at time t. In this section, we describe an objective that encourages the previously described model (Sec. 3.1) to learn action representations.
To encourage composability of action representations, we use the procedure illustrated in Fig. 2 (right).
We define an additional random variable
$$\nu_t \sim q_\zeta(\nu_t \mid z_t, z_{t-1}) = \mathcal{N}\big(\mu_\zeta(z_t, z_{t-1}),\, \sigma_\zeta(z_t, z_{t-1})\big)$$
that is a representation of the trajectory z_{t−1:t}. The process of composing latent samples into a single trajectory can be repeated several times in an iterative fashion, where the inference model q_ζ observes a trajectory representation ν_{t−1} and the next latent z_t to produce the composed trajectory representation $\nu_t \sim q_\zeta(\nu_t \mid \nu_{t-1}, z_t)$. The inference model q_ζ is parameterized with a multilayer perceptron, MLP_comp.
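A sketch of this iterative composition (names and layer sizes are assumptions; the real MLP_comp outputs distribution parameters for ν):

```python
import torch
import torch.nn as nn

# Maps a concatenated pair (nu_{t-1}, z_t), each 10-D, to (mu, log_sigma) of nu_t.
mlp_comp = nn.Sequential(nn.Linear(20, 32), nn.LeakyReLU(), nn.Linear(32, 20))

def compose(z_seq):
    """Fold a sequence of action latents z_1..z_C into one trajectory latent nu."""
    nu = z_seq[0]
    for z in z_seq[1:]:
        mu, log_sigma = mlp_comp(torch.cat([nu, z], dim=-1)).chunk(2, dim=-1)
        nu = mu + torch.exp(log_sigma) * torch.randn_like(mu)  # reparameterized sample
    return nu

nu = compose([torch.randn(10) for _ in range(4)])  # C = 4 latents per trajectory
```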
We want ν to encode entire trajectories, but we also require it to have minimal information about individual latent samples. We can encourage these two properties by again using the IB objective:

$$\max_{p_\theta, q_{\phi,\zeta}}\; I\big((\nu_t, x_1),\, x_t\big) - \beta_\nu I\big(z_{2:t},\, \nu_t\big). \quad (3)$$
We maximize this objective using the following procedure. Given a trajectory of T frames, we use MLP_infer to retrieve the action representations z. Next, we generate a sequence of trajectory representations ν_t, each of which is composed from C consecutive action representations z_{t−C:t}. We obtain T_C = T/C such representations. Finally, we use ν_t to produce the corresponding frames $\hat{x}_t = p_\theta(x_t \mid x_{t-C}, \nu_t)$³.
The variational approximation to (3) that we use to impose composability takes the following form:
$$\mathcal{L}^{\text{comp}}_{\theta,\phi,\zeta}(x_{1:T}) = \sum_{t=1}^{T_C} \Big[\, \mathbb{E}_{q_{\phi,\zeta}(\nu_{1:t} \mid x_{1:T})} \log p_\theta\big(x_{t \times T_C} \mid x_{1:(t-1) \times T_C}, \nu_{1:t}\big) - \beta_\nu D_{\mathrm{KL}}\big(q_{\phi,\zeta}(\nu_t \mid x_{(t-1) \times T_C : t \times T_C}) \,\|\, p(\nu)\big) \Big], \quad (4)$$
where the prior distribution over ν is given by the unit Gaussian $\nu \sim p(\nu) = \mathcal{N}(0, I)$.
The objective above encourages the model to find a minimal representation for the trajectories ν. As the trajectories are composed from only the action representations z, this encourages z to assume a form suitable for efficient composition. This allows us to recover an action representation that is composable. Our overall training objective is the sum of the two objectives:
$$\mathcal{L}^{\text{total}}_{\theta,\phi,\zeta} = \mathcal{L}^{\text{comp}}_{\theta,\phi,\zeta} + \mathcal{L}^{\text{pred}}_{\theta,\phi}. \quad (5)$$
We call the full model with composable action representations Composable Learned Action Space Predictor (CLASP).
GROUNDING THE CONTROL MAPPING
Our approach allows us to learn a latent representation z that is minimal and disentangled from the content of previous images. To use such a learned representation for control, we want to know which action u a certain sample z corresponds to, or vice versa. To determine this correspondence, we learn a simple bijective mapping from a small number of action-annotated frame sequences from the training data. We train the bijection using two lightweight multilayer perceptrons, $\hat{z}_t = \text{MLP}_{\text{lat}}(u_t)$ and $\hat{u}_t = \text{MLP}_{\text{act}}(z_t)$. Note that only the MLP_lat and MLP_act networks are trained in this step, as we do not propagate the gradients into the video prediction model. Because we do not have to re-train the video prediction model, this step requires far less data than models with full action supervision (Section 4.3).
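A minimal sketch of this grounding step (our illustration): two small regressors are fit on the labeled pairs while the video prediction model stays frozen; the one-dimensional action here matches the reacher's single degree of freedom:

```python
import torch
import torch.nn as nn

mlp_lat = nn.Sequential(nn.Linear(1, 32), nn.LeakyReLU(), nn.Linear(32, 10))
mlp_act = nn.Sequential(nn.Linear(10, 32), nn.LeakyReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(
    list(mlp_lat.parameters()) + list(mlp_act.parameters()), lr=2e-4)

def grounding_step(u, z):
    """u: (B, 1) labeled actions; z: (B, 10) latents inferred by the frozen model."""
    z = z.detach()  # no gradients flow into the video prediction model
    loss = ((mlp_lat(u) - z) ** 2).mean() + ((mlp_act(z) - u) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```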
We note that standard image-based representations of motion, e.g., optical flow, do not directly form a bijection with actions in most settings. For example, the flow field produced by a reacher (as in Fig. 5) rotating from 12 o'clock to 9 o'clock is markedly different from the flow produced by rotating from 3 o'clock to 12 o'clock, even though the actions producing the two flow fields are identical (a 90 degree counter-clockwise rotation in both cases). In contrast, our representation easily learns a bijection with the true action space.
EMPIRICAL EVALUATION
For evaluation, we consider tasks that involve regression from the latent variable z to actions u and vice versa. By learning this bijection we show that our model finds a representation that directly corresponds to actions and is disentangled from the static scene content. We show that after CLASP is trained, it can be used for both action-conditioned video prediction and planning (see Fig. 4), and provide a procedure to plan in the learned representation. We also validate that our approach requires orders of magnitude fewer labels than supervised approaches, and that it is robust to certain visual characteristics of the agent or the environment. Please refer to Appendix B for the exact architectural parameters.
Datasets. We conduct experiments on a simple simulated reacher dataset and the real-world Berkeley AI Research (BAIR) robot pushing dataset from Ebert et al. (2017). The reacher dataset consists of sequences of a robot reacher arm with one degree of freedom rotating counter-clockwise with random angular distances between consecutive images. We simulate it using OpenAI's Roboschool environment (Klimov & Schulman (2018)). The actions u are encoded as relative angles between images, and constrained to the range u ∈ [0°, 40°].

Baselines. We compare to the original model of Denton & Fergus (2018) that does not use the proposed composability objective. To obtain an upper bound on our method's performance we also compare to fully supervised approaches that train with action annotations: our implementations are based on Oh et al. (2015) for the reacher dataset and the more complex Finn & Levine (2017) for the BAIR dataset. For planning, we also compare to a model based on the approach of Agrawal et al. (2016) that learns the forward and inverse dynamics with direct supervision.
Metrics. In the case of action-conditioned video prediction we use the absolute angular position (obtained using a simple edge detection algorithm, see Appendix D) for the reacher dataset and the change of end effector position (obtained via manual annotation) for the BAIR dataset. We choose these metrics as they capture the direct consequences of applied actions, as opposed to more commonly used visual appearance metrics like PSNR or SSIM. For visual servoing in the reacher environment we measure the angular distance to the goal state at the end of servoing.
LEARNED STRUCTURE OF THE ACTION REPRESENTATIONS
First, we inspect the structure of the learned action space for our model. To do so, we train CLASP on the reacher dataset and visualize the learned representation. In Fig. 3 we show projections of samples, z, from the inference network, q, colored by the corresponding ground truth action, u. To find the two-dimensional subspace with maximal variability, we conducted Principal Component Analysis (PCA) on the means of the distributions generated by q. The first PCA dimension captures 99% of the variance, which is explained by the fact that the robot in consideration has one degree of freedom. While the baseline without composability training fails to learn a representation disentangled from the static content, our method correctly recovers the structure of possible actions of the robot.

We further verify that our model recovers a representation of actions and show that this allows us to use the model for two tasks. First, we show that it is possible to transplant the action representations z from a given sequence into one with a different initial state. We run the approximate inference network MLP_infer on the donor sequence to get the corresponding action representation z. We then use this sequence of representations z together with a different conditioning image sequence to produce the recipient sequence. While the content of the scene changes, the executed actions should remain the same. Second, we show how our model can be used for action-conditioned video prediction. Given a ground truth sequence annotated with actions u, we infer the representations z directly from u using MLP_lat. The inferred representations are fed into the generative model p_θ and the resulting sequences are compared to the original sequence.
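The PCA inspection at the start of this section can be reproduced schematically as follows (a sketch; the latent means and actions below are random stand-ins for the trained model's outputs):

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

mu_z = np.random.randn(1000, 10)  # stand-in: posterior means of z for 1000 frame pairs
u = np.random.rand(1000)          # stand-in: ground-truth actions used for coloring

proj = PCA(n_components=2).fit_transform(mu_z)  # 2-D subspace of maximal variability
plt.scatter(proj[:, 0], proj[:, 1], c=u, cmap="viridis")
plt.colorbar(label="ground truth action u")
plt.show()
```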
ACTION-CONDITIONED VIDEO PREDICTION
The quantitative results in Table 1 show that the model trained with the composability objective on the reacher dataset successfully performs the task, with performance similar to the fully supervised model. Denton & Fergus (2018) performs the task only slightly better than random guessing. This shows that it is infeasible to infer the latent z_t learned by the baseline model given only the action u_t, and confirms our intuition about this from Fig. 3. The qualitative results in Fig. 5 (additional results in Figs. 12, 13 and 14 in the Appendix and on the website) further support this conclusion.
On the BAIR dataset, our model performs better than the baseline of Denton & Fergus (2018), reducing the difference between the best unsupervised method and the supervised baseline by 30%. This is reflected in qualitative results, as frames generated by the baseline model often contain artifacts such as blurriness when the arm is moving or ghosting effects with two arms present in the scene (Figs. 13 and 14 in the Appendix, videos on the website). These results demonstrate the promise of our approach in settings involving more complex, real-world interactions.

PLANNING IN THE LEARNED ACTION SPACE

Similarly to the true action space u, we can use the learned action space z for planning. We demonstrate this on a visual servoing task. The objective of visual servoing is to move an agent from a start state to a goal state, given by images x_0 and x_goal, respectively. We use a planning algorithm similar to that of Finn & Levine (2017), but plan trajectories in the latent space z instead of true actions u. We use MLP_act to retrieve the actions that correspond to a planned trajectory.

Our planning algorithm, based on Model Predictive Control (MPC), is described in Appendix C. The controller plans by sampling a number of action trajectories and iteratively refining them with the Cross Entropy Method (CEM, Rubinstein & Kroese (2004)). The state trajectories are estimated by using the learned predictive model. We select the trajectory whose final state is closest to the goal and execute its first action. The distance between the states is measured using the cosine distance between VGG16 representations (Simonyan & Zisserman (2015)). Servoing terminates once the goal is reached or the maximum number of steps is executed. The baseline of Agrawal et al. (2016) uses a different procedure, as described in the original paper.
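A hedged sketch of this planner (our illustration of the described procedure, not the authors' implementation): `rollout`, `perceptual_distance`, and `mlp_act` are assumed helpers wrapping the frozen video model, the VGG16 cosine distance, and the grounding network:

```python
import torch

def plan_action(x_hist, x_goal, rollout, perceptual_distance, mlp_act,
                M=100, K=10, N=3, n_elite=10, z_dim=10):
    mu, sigma = torch.zeros(K, z_dim), torch.ones(K, z_dim)  # start from the prior
    for _ in range(N):                                       # CEM refinement iterations
        z = mu + sigma * torch.randn(M, K, z_dim)            # M candidate latent trajectories
        final_frames = rollout(x_hist, z)                    # predicted end states, (M, ...)
        costs = perceptual_distance(final_frames, x_goal)    # (M,) distances to the goal
        elites = z[costs.argsort()[:n_elite]]                # keep the best trajectories
        mu, sigma = elites.mean(0), elites.std(0)            # refit per-step diagonal Gaussians
    best = elites[0]                                         # lowest-cost trajectory
    return mlp_act(best[0])                                  # decode its first latent to u*
```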
We show qualitative results of a servoing rollout in the reacher environment in Fig. 6 (left) and quantitative results in Table 2. The agent not only reaches the target but also plans accurate trajectories at each intermediate time step. The trajectory planned in the learned space can be correctly decoded into actions, u.
DATA EFFICIENCY
To validate the benefits of learning from passive observations, we measure the data efficiency of CLASP in Fig. 6 (right). In this setup, we train the methods on a large dataset of passive observations and a varied number of observations labeled with actions (100, 1000, 10000 videos). The supervised baselines, which cannot leverage pre-training with passive observations, perform poorly in the low-data regime. In contrast, our model only needs a small number of action-labeled training sequences to achieve good performance, as it learns the structure of actions from passive observations. In the abundant data regime, our model still performs on par with both supervised baselines. We observed similar results for action-conditioned prediction experiments, summarized in Table 3 in the Appendix. These results suggest that our planning approach can be used when the action-labeled data are limited.
ROBUSTNESS TO VARYING VISUAL CHARACTERISTICS
To test the robustness of our approach to different kinds of visual variability in the environment, we conduct experiments on two versions of the reacher dataset with additional variability. In the first, the background of each sequence is replaced with a randomly drawn CIFAR-10 image (Krizhevsky (2009)). In the second, we vary the width and length of the reacher arm in each sequence. We test models trained on these datasets on sequences with variations not seen during training but drawn from the same distribution. The experimental setup is described in more detail in Appendix E, with additional quantitative results in Table 4 in the appendix.
As shown in Table 2, our model can reliably discover the agent's action space and perform visual servoing under increased visual variability. The transplantation sequences in Fig. 5 show that the action semantics are preserved across changes to the appearance of the environment that do not alter the dynamics. This is evidence that the learned representation captures the dynamics of the environment and is not sensitive to changes in visual characteristics that do not affect the agent's action space. In these two settings, CLASP also requires orders of magnitude less action-conditioned data than the supervised baselines (see Table 4 in the appendix). Our results, combined with the data efficiency result, suggest that our method is robust to visual changes and can be used for passive learning from videos that are obtained under different visual conditions, or even videos of different agents, such as videos obtained from the Internet, as long as the action space of the observed agents coincides with the target agent.
CONCLUSION
We have shown a way of learning the structure of an agent's action space from visual observations alone by imposing the properties of minimality and composability on a latent variable for stochastic video prediction. This strategy offers a data-efficient alternative to approaches that rely on fully supervised action-conditioned methods. The resulting representation can be used for a range of tasks, such as action-conditioned video prediction and planning in the learned latent action space. The representation is insensitive to the static scene content and visual characteristics of the environment. It captures meaningful structure in synthetic settings and achieves promising results in realistic visual settings.
A STOCHASTIC VIDEO PREDICTION
We use an architecture similar to SVG-FP of Denton & Fergus (2018). Input images x_t are encoded using a convolutional neural network CNN_e(·) to produce a low-dimensional representation CNN_e(x_t); output image encodings can be decoded with a neural network with transposed convolutions CNN_d(·). We use a Long Short-Term Memory network LSTM(·, ·) for the generative network, $\mu_\theta(x_{t-1}, z_t) = \text{CNN}_d(\text{LSTM}(\text{CNN}_e(x_{t-1}), z_t))$, and a multilayer perceptron MLP_infer for the approximate inference network, $[\mu_\phi(x_t, x_{t-1}), \sigma_\phi(x_t, x_{t-1})] = \text{MLP}_{\text{infer}}(\text{CNN}_e(x_t), \text{CNN}_e(x_{t-1}))$.
During training, our model first observes K past input frames. From these observations, the model generates K − 1 corresponding latents z_{2:K} and predicts K − 1 images $\hat{x}_{2:K} = \mu_\theta(x_{1:K-1}, z_{2:K})$. The model generates T − K further future images: $\hat{x}_{K+1:T} = \mu_\theta(x_{1:T-1}, z_{1:T})$. At test time, latents z_t are sampled from the prior N(0, I), and the model behaves identically otherwise. We show samples from the stochastic video prediction model in Fig. 11.
Unlike in Denton & Fergus (2018), the generating network p_θ does not observe ground truth frames x_{K+1:T−1} in the future during training but autoregressively takes its own predicted frames $\hat{x}_{K+1:T-1}$ as inputs. This allows the network LSTM to generalize to observing the generated frame encodings LSTM(CNN_e(x_{t−1}), z_t) at test time when no ground truth future frames are available. We use a recurrence relation of the form LSTM(LSTM(x_{t−2}, z_{t−1}), z_t). To overcome the generalization problem, Denton & Fergus (2018) instead re-encode the produced frames with a recurrence relation of the form LSTM(CNN_e(CNN_d(LSTM(x_{t−2}, z_{t−1}))), z_t). Our approach omits the re-encoding, which saves a considerable amount of computation.
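Schematically (illustrative pseudocode; names are assumptions), the two recurrence relations differ only in whether the decoded frame is re-encoded before the next step:

```python
# Ours: feed the LSTM its own hidden output directly (no decode/encode round trip).
def step_ours(lstm, h_prev, z_t):
    return lstm(h_prev, z_t)  # LSTM(LSTM(x_{t-2}, z_{t-1}), z_t)

# Denton & Fergus (2018): decode to a frame, then re-encode it before the next step.
def step_reencode(lstm, cnn_e, cnn_d, h_prev, z_t):
    return lstm(cnn_e(cnn_d(h_prev)), z_t)
```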
B EXPERIMENTAL PARAMETERS
For all experiments, we condition our model on five images and roll out ten future images. We use images with a resolution of 64 × 64 pixels. The dimension of the image representation is dim(g(x)) = 128, and the dimensions of the learned representation are dim(z) = dim(ν) = 10. For the reacher dataset, we use the same architecture as Denton & Fergus (2018) for the f, g and LSTM networks. For experiments with the standard blue background (i.e. all except the varied background experiment) we do not use temporal skip-connections. For the BAIR dataset, we do not use f, g and use the same model as Lee et al. (2018) for LSTM. The MLP_infer has two hidden layers with 256 and 128 units, respectively. The MLP_comp, MLP_lat, and MLP_act networks each have two hidden layers with 32 units. For MLP_lat and MLP_act, we tried wider and deeper architectures, but this did not seem to improve performance of either our method or the baseline without composability. This is probably because the latent space in our experiments had either a simple representation that did not need a more powerful network to interpret it, or was entangled with static content, in which case even a more powerful network could not learn the bijection. The number of latent samples z used to produce a trajectory representation ν is C = 4. For all datasets, β_z = 10^{−2} and β_ν = 10^{−8}. We use the leaky ReLU activation function in the g, f, and MLP networks. We optimize the objective function using the Adam optimizer with parameters β_1 = 0.9, β_2 = 0.999 and a learning rate of 2 × 10^{−4}. All experiments were conducted on a single high-end NVIDIA GPU. We trained the models for 4 hours on the reacher dataset, and for one day on the BAIR dataset.
We found the following rule for choosing both bottleneck parameters β_z and β_ν to be both intuitive and effective in practice: they should be set to the highest value at which samples from the approximate inference q produce high-quality images. If the value is too high, the latent samples will not contain enough information to specify the next image. If the value is too low, the divergence between the approximate inference and the prior will be too large and therefore the samples from the prior will be of inferior quality. We note that the problem of determining β is not unique to this work and occurs in all stochastic video prediction methods, as well as VIB and β-VAE.
C VISUAL SERVOING
We use Algorithm 1 for visual servoing. At each time step, we initially sample M latent sequences z_0 from the prior N(0, I) and use the video prediction model to retrieve M corresponding image sequences τ, each with K frames. We define the cost of an image trajectory as the cosine distance between the VGG16 (Simonyan & Zisserman (2015)) feature representations of the target image and the final image of each trajectory. This is a perceptual distance, as in Johnson et al. (2016). In the update step of the Cross Entropy Method (CEM) algorithm, we rank the trajectories based on their cost and fit a diagonal Gaussian distribution to the latents z that generated the M best sequences. We fit one Gaussian for each prediction time step k ∈ K. After sampling a new set of latents z_{n+1} from the fitted Gaussian distributions we repeat the procedure for a total of N iterations.
Finally, we pick the latent sequence corresponding to the best rollout of the last iteration and map its first latent sample to the output control action using the learned mapping: $u^* = \text{MLP}_{\text{act}}(z^*_{N,0})$. This action is then executed in the environment. The action at the next time step is chosen using the same procedure with the next observation as input. The algorithm terminates when the specified number of servoing steps T has been executed.
D ANGLE DETECTION ALGORITHM
We employ a simple, hand-engineered algorithm to quickly retrieve the absolute angle values from the images of the reacher environment. First we convert the input to a grayscale image and run a simple edge detector to obtain a binary image of the reacher arm. We smooth out noise by morphological opening. We compute the Euclidean distance to the image center for all remaining non-zero pixels and locate the reacher tip at the pixel closest to the known reacher arm length. This gives us the absolute reacher arm angle.
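A rough re-implementation sketch of this detector (our guess at the details; the edge-detector thresholds, kernel size, and arm length in pixels are assumptions, and the arm is assumed anchored at the image center):

```python
import cv2
import numpy as np

def detect_angle(frame_rgb, arm_length_px=20):
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 50, 150)                     # simple edge detector
    edges = cv2.morphologyEx(edges, cv2.MORPH_OPEN,
                             np.ones((3, 3), np.uint8))  # morphological opening
    ys, xs = np.nonzero(edges)
    cy, cx = frame_rgb.shape[0] / 2, frame_rgb.shape[1] / 2
    dists = np.hypot(xs - cx, ys - cy)                   # edge-pixel distances to center
    tip = np.argmin(np.abs(dists - arm_length_px))       # pixel nearest the known arm length
    # Image y grows downward, so flip the sign for a conventional angle.
    return np.degrees(np.arctan2(cy - ys[tip], xs[tip] - cx))
```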
To evaluate the accuracy of our angle detection algorithm, we estimated the angle for all images of the simulated training dataset and compared it to ground truth. A histogram of the angle errors of our algorithm is displayed in Fig. 7. All errors are below 10 degrees and the majority are smaller than 5 degrees. This suggests the output of this algorithm is of a suitable quality to serve as surrogate ground truth. A second approach that used a neural network to regress the angle directly from the pixels achieved similar performance. We attribute the errors to discretization effects at low image resolutions: it is impossible to achieve accuracy below a certain level due to the discretization.
E EXPERIMENTS WITH VARYING ENVIRONMENTS

E.1 ROBUSTNESS TO CHANGING STATIC BACKGROUND
We test the robustness of our method to different static backgrounds by replacing the uniform blue background with images from the CIFAR-10 training set (Krizhevsky (2009)). For each sequence we sample a single background image that is constant over the course of the entire sequence. At test time we use background images that the model has not seen at training time, i.e. sampled from a held-out subset of the CIFAR-10 training set. As in previous experiments, we first train our model on pure visual observations without action annotations. We then train the networks MLP_lat and MLP_act on a small set of action-annotated sequences to convergence. For the visual servoing we follow the same algorithm as in the previous experiments (see Appendix C).
Qualitative servoing results of our method on the dataset with varied backgrounds are shown in Fig. 9 and quantitative results in Figure 6 (right). The model accurately predicts the background image into the future and successfully discovers and controls the action space of the agent. The fact that the same bijective mapping between latents and actions works for all backgrounds suggests that the network is able to disentangle the static content of the scene and the dynamics attributed to the moving reacher arm. In addition, we show trajectory transplantation between different backgrounds in Fig. 8 (top), which further validates the claim that the learned latent represents the action consistently, independent of the background.
E.2 LEARNING FROM AGENTS WITH DIFFERENT VISUAL APPEARANCE
We test the ability of our method to learn from agents that differ in their visual appearance from the agent used at test time, but that share a common action space. We construct a dataset in which we vary parameters that determine the visual characteristics of the reacher arm, specifically its thickness and length (see Fig. 10, left). In total our training dataset comprises 72 different configurations spanning a wide variety of visual appearances. We show a qualitative example of a servoing trajectory in Fig. 10 (right). We additionally evaluate the efficacy of training on the novel dataset by following the procedure employed in Section 4.4: we train the mapping between latent representation z and actions to convergence on action-annotated subsets of the training data of varying sizes. The servoing errors in Figure 6 (right) show that we achieve comparable performance independent of whether we train on the target agent we test on or on a set of agents with different and varied visual appearances. Our model is able to learn a latent representation that captures the action space shared between all the agents seen at training time. We can then learn the mapping between this abstract action space and the actions of the agent with the novel visual appearance from a small number of action-annotated sequences. In addition, we show trajectory transplantation between different agents in Fig. 8 (bottom), which further validates our claim that the learned latent represents the action consistently, independent of the agent.
Figure 1: Using latent composition to recover actions from passive data. a) Two sequences starting from different initial states but changing according to the same actions. Without requiring labels, our model learns to represent the action in sequences like these identically. We train a representation z to capture the dynamics of the scene and its compositional structure: applying (z_1 and z_2) should have the same effect as applying the composed representation g(z_1, z_2). These properties capture the fact that effector systems, such as a robot arm, use the same composable action space in many different states. b) The learned action space z recovered by our method (PCA visualization). Points are colored by the true action u: true actions can be easily decoded from z, validating that the structure of the action space has been captured.
Figure 2: Components of the proposed architecture. Left: The stochastic video prediction model, shown for one timestep. During training, we estimate the latent variable z_t using the approximate inference network (MLP_infer, CNN_e) from the current and previous image. At test time, we produce z_t using the prior distribution p(z) ∼ N(0, I). Future frames are estimated by passing z_t together with images x_{t−1} through the generative network (LSTM, CNN_d). Please refer to Appendices A and B for architectural details. Right: Composability training. Latent samples z are concatenated pairwise and passed through the composition network MLP_comp that defines a distribution over ν in the trajectory space. A sampled value of ν is decoded into an image through the same generative network (LSTM and CNN_d) and matched to the final image in the composed sequence.
Figure 3: Visualization of the learned action space, z, on the reacher dataset. Each of the 1000 points depicts a value of z for a different frame pair from the dataset. We plot the projection of z onto the first two principal components of the data. Each point is colored by the value of the ground truth rotation, in radians, depicted in the two images used to infer z for that point. a) The latent space learned by the baseline model has no discernible correspondence to the ground truth actions. b) Our method learns a latent space with a clear correspondence to the ground truth actions. In the Appendix, Fig. 15 further investigates why the baseline fails to produce a disentangled representation.
Figure 4: Illustration of how the learned representation can be used for a) action-conditioned prediction by inferring the latent variable, z_t, from the action, and b) visual servoing by solving the control problem in latent space through iterated rollouts and then mapping the latent variable to robot control actions, u_t.
Figure 5: Transplantation of action representations z from one sequence to another. We infer action representations from the donor sequence and use them to create the recipient sequences from a different initial state. a) the reacher dataset. The previous frame is superimposed onto each frame to illustrate the movement. b) the BAIR dataset. The previous and the current position of the end effector are annotated in each frame (red and blue dots, respectively) to illustrate the movement. c) reacher with varying backgrounds. d) reacher with varying agent shape. The synchronization of movement in the sequences suggests that the learned action representation is disentangled from static content. Best viewed on a screen. Additional generated videos are available at: https://daniilidis-group.github.io/learned_action_spaces/.
Figure 6: Visual servoing on the reacher task. Left: Planned and executed servoing trajectories. Each of the first five rows shows the trajectory re-planned at the corresponding timestep. The first image of each sequence is the current state of the system, and the images to the right of it show the model prediction with the lowest associated cost. The target state (the reacher pointing to the upper left) is shown superimposed over each image. Right: Data efficiency measured as final distance to the goal after servoing, shown depending on the number of videos used in training. Each point represents a model trained on a dataset with a restricted number of action-annotated training sequences. Full results are in Table 4 in the appendix.
Algorithm 1: Planning in the learned action space

Require: Video prediction model $\hat{x}_{t:t+K} = \mu_\theta(x_{1:t-1}, z_{2:t+K})$
Require: Start and goal images $i_0$ and $i_{goal}$
1: for t = 1 ... T do
2:   Sample M latent sequences $z_0$ from the prior N(0, I)
3:   for n = 1 ... N do
4:     Roll out the prediction model for K steps, obtain M future sequences $\tau = \hat{x}_{t:t+K}$
5:     Compute cosine distance between final and goal image: $c(\tau) = \cos(\hat{x}_{t+K}, i_{goal})$
6:     Choose M best sequences, refit Gaussian distribution: $\mu_{n+1}, \sigma_{n+1} = \text{fit}(z_n)$
7:     Sample new latents from updated distribution: $z_{n+1} \sim N(\mu_{n+1}, \sigma_{n+1})$
8:   end for
9:   Map first latent of best sequence to action: $u^* = \text{MLP}_{\text{act}}(z^*_{N,0})$
10:  Execute $u^*$ and observe next image
11: end for

Table 5: Hyperparameters for the visual servoing experiments. We sample an angle uniformly from the angle difference range [0°, 40°] to create each subsequent image in a sequence. The parameters used for our visual servoing experiments are listed in Tab. 5.
Figure 7: Error histogram of the angle detection algorithm on the reacher training set. The output of this algorithm is used as a form of surrogate ground truth to evaluate model performance.
Figure 8: Trajectory transplantation with differing visual characteristics. The trajectory from the top sequence is transplanted to a different environment and initial state in each of the two bottom sequences. Our model achieves almost perfect accuracy, which validates that it has indeed learned a representation of actions disentangled from the static content, such as the background, agent's appearance, and the initial state. The previous frame is superimposed onto each frame to illustrate the movement. Top: dataset with varying backgrounds. Bottom: dataset with varying robots. Additional generated videos are available at: https://daniilidis-group.github.io/learned_action_spaces/.
Figure 9: Servoing examples with randomly sampled static CIFAR-10 backgrounds. The figure layout follows the layout of Fig. 6 (left).
Figure 10: Learning from agents with varied visual appearance. Left: Sample agent configurations from the training set. We cover a variety of visual appearances (i.e. arm lengths and widths) but not the configuration used for testing. Right: Test time servoing example after pre-training on observations of agents with varied visual appearances. The figure layout follows the layout of Fig. 6 (left).
Figure 13: Failure cases of the baseline model on trajectory transplantation. Each example shows top: the ground truth sequence, middle: our predictions, bottom: predictions of the baseline model (Denton & Fergus (2018)). The position of the end effector at the current (blue) and previous (red) timestep is annotated in each frame. The baseline often produces images with two different robot arms and other artifacts. Only six of ten predicted frames are shown for clarity. Best viewed on a computer; additional generated videos are available at: https://daniilidis-group.github.io/learned_action_spaces/.
Figure 15: Visualization of the structure of the learned latent space of the baseline model without composability training on the reacher dataset. The visualization is done in the same manner as in Fig. 3. Here, action representations z_t are shown as a function of the absolute angle (α) of the reacher arm at time t − 1 and the relative angle between the reacher at time t and t − 1. We see that the encoding of action learned by the baseline is entangled with the absolute position of the reacher arm. While this representation can be used to predict the consequences of actions given the previous frame, it is impossible to establish a bijection between u_t and z_t, as the correspondence depends on the previous frame x_{t−1}. Moreover, it is impossible to compose two samples of such a z without access to the intermediate frame. This representation is minimal, as it is a linear transformation (a rotation) of the known optimal representation u_t (the ground truth actions). This suggests that composability plays an important role in learning a disentangled representation of actions.
Table 1: Action-conditioned video prediction results (mean ± standard deviation across predicted sequences). The "supervised" baseline is taken from Oh et al. (2015) for the reacher dataset and Finn & Levine (2017) for BAIR.

Method             Reacher Abs. Error [deg]   BAIR Rel. Error [px]
Start State        90.1 ± 51.8                -
Random             26.6 ± 21.5                -
Denton & Fergus    22.6 ± 17.7                3.6 ± 4.0
CLASP (Ours)       2.9 ± 2.1                  3.0 ± 2.1
Supervised         2.6 ± 1.8                  2.0 ± 1.3
Table 2: Visual servoing performance measured as distance to the goal at the end of servoing (mean ± standard deviation).

Method                       Reacher Distance [deg]
Start Position               97.8 ± 23.7
Random                       27.0 ± 26.8
Denton & Fergus (2018)       14.1 ± 10.7
CLASP (Ours)                 1.6 ± 1.0
Agrawal et al. (2016)        2.0 ± 1.5
Oh et al. (2015)             1.8 ± 1.5
CLASP (varied background)    3.0 ± 2.2
CLASP (varied agents)        2.8 ± 2.9
Table 3: Angle error (mean ± standard deviation) on the reacher dataset for action-conditioned video prediction as a function of the number of action-annotated training sequences. Note that we could not detect angles on some sequences for the action-conditioned baseline of Oh et al. (2015) trained on only 100 sequences due to bad prediction quality.

Method                      100           1000          10,000
Start Position              90.6 ± 52.0
Random                      27.7 ± 22.2
Denton & Fergus (2018)      27.6 ± 22.8   23.8 ± 18.6   23.6 ± 18.3
CLASP (Ours)                2.9 ± 2.0     2.9 ± 2.0     3.0 ± 2.0
Oh et al. (2015)            -             5.6 ± 4.5     2.7 ± 1.9

Table 4: Visual servoing performance (distance to goal, in degrees) and data efficiency as a function of the number of action-annotated training sequences.

Method                       100           1000          10,000
Start Position               97.8 ± 23.7
Random                       27.0 ± 26.8
Denton & Fergus (2018)       20.9 ± 13.0   15.5 ± 13.1   14.1 ± 10.7
CLASP (Ours)                 2.0 ± 2.2     2.2 ± 1.8     1.6 ± 1.0
Agrawal et al. (2016)        32.7 ± 21.7   3.6 ± 3.1     2.0 ± 1.5
Oh et al. (2015)             21.8 ± 12.9   2.6 ± 2.6     1.8 ± 1.5
CLASP (varied background)    1.5 ± 1.3     3.8 ± 3.5     3.0 ± 2.2
CLASP (varied agents)        2.0 ± 1.0     2.3 ± 3.4     2.8 ± 2.9
² Denton & Fergus (2018) use the objective with β_z, but formulate this objective in terms of the original VAE.
³ To allow the generative model to distinguish between individual action representations z and trajectory representations ν, we concatenate them with a binary indicator set to 0 for z and 1 for ν. With the binary indicator, we can control whether the generative network interprets an input latent as the representation of a single action or a whole trajectory.
⁴ The original dataset provides two additional discrete actions: gripper closing and lifting. However, we found that, in this dataset, the spatial position in the horizontal plane explains most of the variance in the end effector position and therefore ignore the discrete actions in this work.
ACKNOWLEDGEMENTS

We thank Nikos Kolotouros and Karl Schmeckpeper for help with annotation, Kenneth Chaney and Nikos Kolotouros for computing support, Stephen Phillips and Nikos Kolotouros for helpful comments on the document, and the members of the GRASP laboratory and CLVR laboratory for many fruitful discussions. We also thank the audiences of the 2018 R:SS workshop on Learning and Inference in Robotics, 2018 International Computer Vision Summer School, and 2018 NeurIPS workshop on Probabilistic Reinforcement Learning and Structured Control for useful feedback. We are grateful for support through the following grants: NSF-DGE-0966142 (IGERT), NSF-IIP-1439681 (I/UCRC), NSF-IIS-1426840, NSF-IIS-1703319, NSF MRI 1626008, ARL RCTA W911NF-10-2-0016, ONR N00014-17-1-2093, and by Honda Research Institute. K.G.D. is supported by a Canadian NSERC Discovery grant. K.G.D. contributed to this article in his personal capacity as an Associate Professor at Ryerson University.

Imagination-augmented agents for deep reinforcement learning. Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, David Silver, Daan Wierstra. Proceedings of Neural Information Processing Systems, 2017.

Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Shi Xingjian, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, Wang-chun Woo. Proceedings of Neural Information Processing Systems, 2015.
How do you learn to walk? Thousands of steps and dozens of falls per day. Karen Adolph, Whitney Cole, Meghana Komati, Jessie Garciaguirre, Daryaneh Badaly, Jesse Lingeman, Gladys Chan, Rachel Sotsky. Psychological Science, 23(11), 2012.
Learning to poke by poking: Experiential learning of intuitive physics. Pulkit Agrawal, Ashvin V Nair, Pieter Abbeel, Jitendra Malik, Sergey Levine, Proceedings of Neural Information Processing Systems. Neural Information Processing Systems2016
Fixing a broken ELBO. Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, Kevin Murphy, Proceedings of International Conference on Machine Learning. International Conference on Machine Learning2018
Deep variational information bottleneck. Ian Alexander A Alemi, Joshua V Fischer, Kevin Dillon, Murphy, Proceedings of International Conference on Learning Representations. International Conference on Learning Representations2017
Stochastic variational video prediction. Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H Campbell, Sergey Levine, Proceedings of International Conference on Learning Representations. International Conference on Learning Representations2018
The option-critic architecture. Pierre-Luc Bacon, Jean Harb, Doina Precup, Proceedings of AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence2017
Irina Christopher P Burgess, Arka Higgins, Loic Pal, Nick Matthey, Guillaume Watters, Alexander Desjardins, Lerchner, arXiv:1804.03599Understanding disentangling in β-VAE. 2018
Recurrent environment simulators. Silvia Chiappa, Sébastien Racanière, Daan Wierstra, Shakir Mohamed. Proceedings of International Conference on Learning Representations, 2017.
A recurrent latent variable model for sequential data. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, Yoshua Bengio, Proceedings of Neural Information Processing Systems. Neural Information Processing Systems2015
Policy transfer via modularity. Ignasi Clavera, Pieter Abbeel. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017.
Unsupervised learning of disentangled representations from video. Emily Denton, Vighnesh Birodkar, Proceedings of Neural Information Processing Systems. Neural Information Processing Systems2017
Stochastic Video Generation with a Learned Prior. Emily Denton, Rob Fergus, Proceedings of International Conference on Machine Learning. International Conference on Machine Learning2018
Learning modular neural network policies for multi-task and multi-robot transfer. Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, Sergey Levine. Proceedings of IEEE International Conference on Robotics and Automation, 2017.
Locomotor primitives in newborn babies and their development. Nadia Dominici, Yuri Ivanenko, Germana Cappellini, Andrea d'Avella, Vito Mondì, Marika Cicchese, Adele Fabiano, Tiziana Silei, Ambrogio Di Paolo, Carlo Giannini, Richard Poppele, Francesco Lacquaniti. Science, 334, 2011.
Self-supervised visual planning with temporal skip connections. Frederik Ebert, Chelsea Finn, Alex X Lee, Sergey Levine, Conference on Robotic Learning. 2017
Deep visual foresight for planning robot motion. Chelsea Finn, Sergey Levine, Proceedings of IEEE International Conference on Robotics and Automation. IEEE International Conference on Robotics and Automation2017
Unsupervised learning for physical interaction through video prediction. Chelsea Finn, Ian Goodfellow, Sergey Levine, Proceedings of Neural Information Processing Systems. Neural Information Processing Systems2016
One-shot visual imitation learning via meta-learning. Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, Sergey Levine, Conference on Robotic Learning. 2017
Learning to linearize under uncertainty. Ross Goroshin, Michael F Mathieu, Yann Lecun, Proceedings of Neural Information Processing Systems. Neural Information Processing Systems2015
World models. David Ha, Jürgen Schmidhuber. arXiv:1803.10122, 2018.
Prediction under uncertainty with error-encoding networks. Mikael Henaff, Junbo Zhao, Yann Lecun, arXiv:1711.049942017
beta-VAE: Learning basic visual concepts with a constrained variational framework. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner, Proceedings of International Conference on Learning Representations. International Conference on Learning Representations2017
Long short-term memory. Sepp Hochreiter, Jürgen Schmidhuber. Neural Computation, 9(8), 1997.
Understanding image motion with group representations. Andrew Jaegle, Stephen Phillips, Daphne Ippolito, Kostas Daniilidis, Proceedings of International Conference on Learning Representations. International Conference on Learning Representations2018
Perceptual losses for real-time style transfer and super-resolution. Justin Johnson, Alexandre Alahi, Li Fei-Fei, Proceedings of European Conference on Computer Vision. European Conference on Computer Vision2016
Principles of Neural Science. Eric R. Kandel, James H. Schwartz, Thomas M. Jessell, Steven A. Siegelbaum, A. J. Hudspeth. McGraw-Hill Education, 2012.
Auto-encoding variational Bayes. P Diederik, Max Kingma, Welling, Proceedings of International Conference on Learning Representations. International Conference on Learning Representations2014
Roboschool: Open-source software for robot simulation. Oleg Klimov, John Schulman, 2018
Learning multiple layers of features from tiny images. Alex Krizhevsky, 2009Technical report
Alex X Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, Sergey Levine, arXiv:1804.01523Stochastic adversarial video prediction. 2018
End-to-end training of deep visuomotor policies. Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel. The Journal of Machine Learning Research, 17(1), 2016.
Continuous control with deep reinforcement learning. Jonathan J Timothy P Lillicrap, Alexander Hunt, Nicolas Pritzel, Tom Heess, Yuval Erez, David Tassa, Daan Silver, Wierstra, Proceedings of International Conference on Learning Representations. International Conference on Learning Representations2016
Toward an integration of deep learning and neuroscience. Greg Adam H Marblestone, Konrad P Wayne, Kording, Frontiers in computational neuroscience. 102016
Human-level control through deep reinforcement learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis. Nature, 518, 2015.
Driving policy transfer via modularity and abstraction. Matthias Müller, Alexey Dosovitskiy, Bernard Ghanem, Vladen Koltun, Conference on Robotic Learning. 2018
Learning grounded finite-state representations from unstructured demonstrations. Scott Niekum, Sarah Osentoski, George Konidaris, Sachin Chitta, Bhaskara Marthi, Andrew G. Barto. International Journal of Robotics Research, 34(2), 2015.
Action-conditional video prediction using deep networks in atari games. Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, Satinder Singh, Proceedings of Neural Information Processing Systems. Neural Information Processing Systems2015
Zero-shot visual imitation. Deepak Pathak, Parsa Mahmoudieh, Guanghao Luo, Pulkit Agrawal, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A Efros, Trevor Darrell, Proceedings of International Conference on Learning Representations. International Conference on Learning Representations2018
Stochastic backpropagation and approximate inference in deep generative models. Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra, Proceedings of International Conference on Machine Learning. International Conference on Machine Learning2014
Premotor cortex and the recognition of motor actions. Giacomo Rizzolatti, Luciano Fadiga, Vittorio Gallese, Leonardo Fogassi. Cognitive Brain Research, 3(2), 1996.
Neuronal correlates of a perceptual decision in ventral premotor cortex. Ranulfo Romo, Adrián Hernández, Antonio Zainos. Neuron, 41(1), 2004.
The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning. Reuven Y Rubinstein, Dirk P Kroese, 2004Springer-VerlagNew York
Dynamic Primitives -A Framework for Motor Control in Humans and Humanoid Robotics. Stefan Schaal, 2006SpringerTokyo
Learning movement primitives. Stefan Schaal, Jan Peters, Jun Nakanishi, Auke Ijspeert, Robotics Research. Paolo Dario, Raja Chatila, Berlin HeidelbergSpringer2005
Trust region policy optimization. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, Philipp Moritz, International Conference on Machine Learning. 2015
Opening the black box of deep neural networks via information. Ravid Shwartz-Ziv, Naftali Tishby. arXiv:1703.00810, 2017.
Very deep convolutional networks for large-scale image recognition. Karen Simonyan, Andrew Zisserman, Proceedings of International Conference on Learning Representations. International Conference on Learning Representations2015
Unsupervised learning of video representations using LSTMs. Nitish Srivastava, Elman Mansimov, Ruslan Salakhudinov, Proceedings of International Conference on Machine Learning. International Conference on Machine Learning2015
Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Richard S. Sutton, Doina Precup, Satinder Singh. Artificial Intelligence, 112, 1999.
Valentin Thomas, Jules Pondard, Emmanuel Bengio, Marc Sarfati, Philippe Beaudoin, Marie-Jean Meurs, Joelle Pineau, arXiv:1708.01289Doina Precup, and Yoshua Bengio. Independently controllable features. 2017
The information bottleneck method. N. Tishby, F. C. Pereira, W. Bialek. Allerton Conference on Communication, Control, and Computing, 1999.
MoCoGAN: Decomposing motion and content for video generation. Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, Jan Kautz, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. IEEE Conference on Computer Vision and Pattern Recognition2018
From simple innate biases to complex visual concepts. Shimon Ullman, Daniel Harari, Nimrod Dorfman. Proceedings of the National Academy of Sciences, 109(44), 2012.
Decomposing motion and content for natural video sequence prediction. Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee, Proceedings of International Conference on Learning Representations. International Conference on Learning Representations2017
Anticipating visual representations from unlabeled video. Carl Vondrick, Hamed Pirsiavash, Antonio Torralba, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. IEEE Conference on Computer Vision and Pattern Recognition2016
Unsupervised predictive memory in a goal-directed agent. Greg Wayne, Chia-Chun Hung, David Amos, Mehdi Mirza, Arun Ahuja, Agnieszka Grabska-Barwinska, Jack W Rae, Piotr Mirowski, Joel Z Leibo, Adam Santoro, Mevlana Gemici, Malcolm Reynolds, Tim Harley, Josh Abramson, Shakir Mohamed, Danilo Jimenez Rezende, David Saxton, Adam Cain, Chloe Hillier, David Silver, Koray Kavukcuoglu, Matthew Botvinick, Demis Hassabis, Timothy P Lillicrap, arXiv:1803.107602018 |
219,572,891 | Partitioned Learned Bloom Filters | Learned Bloom filters enhance standard Bloom filters by using a learned model for the represented data set. However, a learned Bloom filter may under-utilize the model by not taking full advantage of the output. The learned Bloom filter uses the output score by simply applying a threshold, with elements above the threshold being interpreted as positives, and elements below the threshold subject to further analysis independent of the output score (using a smaller backup Bloom filter to prevent false negatives). While recent work has suggested additional heuristic approaches to take better advantage of the score, the results are only heuristic. Here, we instead frame the problem of optimal model utilization as an optimization problem. We show that the optimization problem can be effectively solved efficiently, yielding an improved partitioned learned Bloom filter, which partitions the score space and utilizes separate backup Bloom filters for each region. Experimental results from both simulated and real-world datasets show significant performance improvements from our optimization approach over both the original learned Bloom filter constructions and previously proposed heuristic improvements. | [] | Partitioned Learned Bloom Filters
Kapil Vaidya [email protected]
CSAIL MIT

Eric Knorr [email protected]
School of Engineering and Applied Sciences
Harvard University

Tim Kraska [email protected]
CSAIL MIT

Michael Mitzenmacher [email protected]
School of Engineering and Applied Sciences
Harvard University
Partitioned Learned Bloom Filters
Learned Bloom filters enhance standard Bloom filters by using a learned model for the represented data set. However, a learned Bloom filter may under-utilize the model by not taking full advantage of the output. The learned Bloom filter uses the output score by simply applying a threshold, with elements above the threshold being interpreted as positives, and elements below the threshold subject to further analysis independent of the output score (using a smaller backup Bloom filter to prevent false negatives). While recent work has suggested additional heuristic approaches to take better advantage of the score, the results are only heuristic. Here, we instead frame the problem of optimal model utilization as an optimization problem. We show that the optimization problem can be effectively solved efficiently, yielding an improved partitioned learned Bloom filter, which partitions the score space and utilizes separate backup Bloom filters for each region. Experimental results from both simulated and real-world datasets show significant performance improvements from our optimization approach over both the original learned Bloom filter constructions and previously proposed heuristic improvements.
Introduction
Bloom filters are space-efficient probabilistic data structures that are used to test whether an element is a member of a set [Bloom (1970)]. A Bloom filter may allow false positives, but will not give false negative matches, which makes them suitable for numerous applications in networks, databases and other systems areas. Indeed, there are many thousands of papers describing applications of Bloom filters [Dayan et al. (2018), Dillinger and Manolios (2004), Broder and Mitzenmacher (2003)]. For standard Bloom filters, there are known theoretical lower bounds on the space used [Pagh et al. (2005)]. However, these lower bounds assume the Bloom filter could store any possible set. If the data set or the membership queries have specific structure, it may be possible to beat the lower bounds in practice [Mitzenmacher (2002), Bruck et al. (2006), Mitzenmacher et al. (2020)]. In particular, Kraska et al. (2018) and Mitzenmacher (2018) propose using machine learning models to reduce the space further, by using the learned model to provide a suitable pre-filter for the membership queries. Rae et al. (2019) propose a neural Bloom Filter that learns to write to memory using a distributed write scheme and achieves compression gains over the classical Bloom filter.
Here, we focus on the work of Kraska et al. (2018), which studies how standard index structures, including Bloom filters, can be improved using machine learning models. Given an input the model outputs a score which is supposed to represent the confidence of the input being in the set. Thus, the inputs in the set (keys) should have a higher score value compared to the rest of the inputs (non-keys). This model is used as a pre-filter, where inputs with score values above a threshold are directly classified as being in the set. A smaller backup Bloom filter is used for rest of the inputs, which maintains the property that there are no false negatives. The threshold value is essentially used to partition the space of scores into two regions, with one region above the the threshold and one region below. The input is processed differently depending on which region its score falls. With a sufficiently accurate model, the size of the backup Bloom filter can be reduced significantly over the size of a standard Bloom filter while maintaining overall accuracy. The later work in Mitzenmacher (2018) grows out of Kraska et al. (2018) by providing a formal analysis for the learned Bloom filter structure and proposing some improvements, based on "sandwiching" the pre-filter between two Bloom filters (instead of just a backup). Intuitively, we might be able to do even better by partitioning the score space into more than two regions, and optimizing the processing for each region. That is, simply using a single threshold underutilizes the model, by only comparing the score to a single threshold value. We are not the first to notice this. The work Dai and Shrivastava (2019) suggests using multiple thresholds to divide the space of scores into multiple regions, with a different backup Bloom filter for each score region. The parameters for each of the backup Bloom filters are chosen to improve the overall tradeoff between the false positives and size. However, Dai and Shrivastava (2019) only suggests heuristics for how to divide up the score space and how to choose the false positive rate (and corresponding Bloom filter size) for each region of the score space.
In this work, we show how to frame the overall problem as an optimization problem, which can significantly outperform the heuristic used in Dai and Shrivastava (2019). Moreover, we show that our space savings is linearly proportional to the KL Divergence of the key and non-key score distributions. We present a dynamic programming algorithm to find optimal parameters (up to the discretization used for the dynamic programming) and demonstrate performance improvements over a synthetic dataset and two real world datasets: URL's and EMBER. We note that Dai and Shrivastava (2019) refers to their structure as an adaptive learned Bloom filter, but adaptive has been in multiple papers in multiple ways in the context of Bloom filters; we therefore refer to our approach as a partitioned learned Bloom filter (PLBF). Experimental results show the amount of space saved by PLBF is 2x more than sandwiching approach and 1.7x more than the adaptive LBF heuristics.
Review
We start by reviewing the standard Bloom filter and some variants of the learned Bloom filter.
Standard Bloom Filter
A Bloom filter representing a set $S = \{x_1, x_2, \ldots, x_n\}$ of $n$ keys consists of an array of $m$ bits and uses $k$ independent hash functions $\{h_1, h_2, \ldots, h_k\}$, with the range of each $h_i$ being integer values between $0$ and $m-1$. We assume the hash functions are fully random. Initially all $m$ bits are 0. For every key $x \in S$, the array bits $h_i(x)$ are set to 1 for all $i \in \{1, 2, \ldots, k\}$.
A membership query for $y$ returns true (that is, reports $y \in S$) if $h_i(y) = 1$ for all $i \in \{1, 2, \ldots, k\}$, and false otherwise. This ensures that the Bloom filter has no false negatives, but a non-key ($y \notin S$) might still return true, resulting in a false positive. The false positive rate depends on the space $m$ used by the Bloom filter. The expected false positive rate is given in Eq. 1; if one uses the optimal number of hash functions $k = (m/n) \ln 2$, the false positive rate is as in Eq. 2.¹
$$\mathbb{E}[FPR] = \left(1 - \left(1 - \frac{1}{m}\right)^{kn}\right)^{k} \quad (1)$$

$$\mathbb{E}[FPR] = \left(\frac{1}{2}\right)^{\frac{m}{n} \ln 2} \quad (2)$$
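To make the mechanics concrete, here is a minimal sketch of a standard Bloom filter. Simulating the $k$ independent hash functions by salting a single cryptographic hash is our implementation shortcut, not something prescribed by the paper.

```python
import hashlib
import math

class BloomFilter:
    """Minimal standard Bloom filter: m bits, k (salted) hash functions."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, x):
        # Simulate k independent hash functions by salting one hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{x}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, x):
        for p in self._positions(x):
            self.bits[p] = True

    def query(self, x):
        # Never a false negative for inserted keys; false positives possible.
        return all(self.bits[p] for p in self._positions(x))

def optimal_num_hashes(m, n):
    # k = (m / n) * ln 2 minimizes the expected false positive rate (Eq. 2).
    return max(1, round((m / n) * math.log(2)))
```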
Learned Bloom Filter
We are given a set of keys $S = \{x_1, x_2, \ldots, x_n\}$ for which to build a Bloom filter. We are also given a sample of the non-keys, $Q$, for training. The learned model is then trained on $S \cup Q$ for a binary classification task and produces a score $s(x) \in [0, 1]$. This score can be viewed as an estimate of the probability that the element $x$ is in the set $S$, so a key ($x \in S$) would ideally have a higher score value than a non-key ($x \in Q$).
As discussed above, Kraska et al. (2018) set a threshold $t$; inputs satisfying $s(x) > t$ are classified as keys. This alone might result in false negatives, as some keys ($x \in S$) might have score values below the threshold. A backup Bloom filter is therefore built for just the keys in $S$ satisfying $s(x) \le t$; if few keys have scores below the threshold, the backup Bloom filter can be much smaller than a Bloom filter for the original set. The threshold value divides the score space into two regions ($s(x) > t$ and $s(x) \le t$), with inputs being subjected to different processing based on which region their score values fall in. This design is represented in Fig. 1(A). Mitzenmacher (2018) proposes using another Bloom filter before the learned model along with the backup Bloom filter. As the learned model is used between two Bloom filters, as shown in Fig. 1(B), this is referred to as the 'sandwiching' approach. They also provide the analysis of the optimal false positive rates for both learned and sandwiched Bloom filters. Interestingly, the sandwiching approach and its analysis can be seen as a special case of our approach and analysis, as we describe later in subsection 3.4.1.
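The query path of the learned Bloom filter just described can be sketched as follows; `model_score` and the threshold `t` are assumed to come from training and tuning, and `BloomFilter` is the sketch above.

```python
def build_learned_filter(keys, model_score, t, backup_bits, backup_hashes):
    # The backup filter holds exactly the keys the model scores at or below t,
    # which restores the no-false-negative guarantee.
    backup = BloomFilter(backup_bits, backup_hashes)
    for x in keys:
        if model_score(x) <= t:
            backup.add(x)
    return backup

def learned_filter_query(x, model_score, t, backup):
    # High-scoring inputs are accepted outright; the rest consult the backup.
    return model_score(x) > t or backup.query(x)
```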
Partitioned Learned Bloom Filter (PLBF)
Design
In the original learned Bloom filter, the scores of the inputs are divided into two regions based on a single threshold; the information in the score value is never used beyond comparing it to that threshold. As the score value represents the probability of the input being a key, we can utilize the score better if we segment the score space into more regions using multiple thresholds, as shown in Fig. 1(C), where we use a separate backup Bloom filter for each region. We can choose a different target false positive rate for each region.² The parameters associated with each region are its threshold boundaries and its false positive rate. Setting good values for these parameters is crucial for performance. Our aim is to analyze the performance of the learned Bloom filter with respect to these parameters, and to find methods that determine optimal or near-optimal parameters.
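A sketch of the corresponding PLBF query path: `thresholds` holds $[t_0, \ldots, t_k]$ and `region_filters[i]` is the backup Bloom filter for the $(i+1)$-th region (with `None` standing for a region whose inputs are accepted outright, i.e. $fpr = 1$). The names here are ours, not the paper's.

```python
import bisect

def plbf_query(x, model_score, thresholds, region_filters):
    s = model_score(x)
    # Region i covers scores in (t_{i-1}, t_i]; locate it in O(log k).
    i = min(max(bisect.bisect_left(thresholds, s) - 1, 0),
            len(region_filters) - 1)
    backup = region_filters[i]
    return True if backup is None else backup.query(x)
```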
Interestingly, we find in our formulation that the resulting parameters correspond to quite natural quantities. Specifically, the optimal false positive rate of a region is proportional to the ratio of key and non-key density in the region. Further, the optimal threshold values turn out to be those maximizing the KL divergence between the key and non-key distributions of the regions.
In what follows, we formulate the problem as an optimization problem in subsection 3.2. In subsection 3.3.1, we find the optimal solution of a relaxed problem, which helps us gain insight into the general problem. We propose a solution for the general problem in subsection 3.3.3.
General Optimization Formulation
To formulate the overall problem as an optimization problem, we consider the variant which minimizes the space used by the Bloom filters in PLBF in order to achieve an overall target false positive rate ($FPR$). Here we assume the learned model is given. We could similarly have framed it as minimizing the false positive rate given a fixed amount of space.
We assume normalized score values in $[0, 1]$ for convenience. We have region boundaries given by threshold values $0 = t_0 \le t_1 \le t_2 \le \cdots \le t_{k-1} \le t_k = 1$, with score values in $[t_{i-1}, t_i]$ falling into the $i$-th region.
We assume the target number of regions $k$ is given. The $i$-th region has false positive rate $fpr_i$. Let $f$ be the probability distribution of the keys over the score space and $F$ the CDF corresponding to $f$. Similarly, let $g$ and $G$ be the PDF and CDF of the non-keys, respectively. Fig. 1(D) represents a standard learned Bloom filter within this framework, while Fig. 1(E) represents $k$ score regions and the parameters of each region. The following optimization problem finds the thresholds $t_i$ and the false positive rates $fpr_i$:
$$\min_{t_1, \ldots, t_{k-1},\; fpr_1, \ldots, fpr_k} \;\; \sum_{i=1}^{k} |S| \cdot (F(t_i) - F(t_{i-1})) \cdot \log_2\frac{1}{fpr_i} \quad (3)$$

$$\text{subject to} \quad \sum_{i=1}^{k} (G(t_i) - G(t_{i-1})) \cdot fpr_i \le FPR \quad (4)$$

$$fpr_i \le 1, \quad i = 1, \ldots, k \quad (5)$$

$$t_i - t_{i-1} \ge 0, \quad i = 1, \ldots, k; \qquad t_0 = 0; \quad t_k = 1 \quad (6)$$
The minimized term (Eq. 3) represents the total size of the backup Bloom filters, obtained by summing the individual backup Bloom filter sizes.³ The first constraint (Eq. 4) ensures that the overall false positive rate stays below the target $FPR$; the overall false positive rate is obtained by summing the appropriately weighted rates of each region. The next constraint (Eq. 5) ensures that the false positive rate of each region stays below one. The last two constraints (Eq. 6) ensure that the threshold values are increasing and cover the interval $[0, 1]$.
Solving the Optimization Problem
Solving a Relaxed Problem
In this subsection, we look at the optimal solution of a relaxed problem, obtained after removing the false positive rate constraints (Eq. 5, $fpr_i \le 1$), as shown in Eq. 7. This relaxation is useful because it allows us to use the Karush-Kuhn-Tucker (KKT) conditions to obtain an exact solution in terms of the $t_i$ values, which we use to design algorithms for finding near-optimal solutions. Throughout this section, we assume the relaxed problem yields a solution for the original problem; we return to this issue in subsection 3.3.3.
$$\min_{t_1, \ldots, t_{k-1},\; fpr_1, \ldots, fpr_k} \;\; \sum_{i=1}^{k} |S| \cdot (F(t_i) - F(t_{i-1})) \cdot \log_2\frac{1}{fpr_i}$$

$$\text{subject to} \quad \sum_{i=1}^{k} (G(t_i) - G(t_{i-1})) \cdot fpr_i \le FPR; \qquad t_i - t_{i-1} \ge 0, \; i = 1, \ldots, k; \quad t_0 = 0; \; t_k = 1 \quad (7)$$
The optimal $fpr_i$ values obtained by using the KKT conditions yield Eq. 8 (as shown in Appendix A), giving the exact solution in terms of the $t_i$'s:
$$\frac{fpr_i}{FPR} = \frac{F(t_i) - F(t_{i-1})}{G(t_i) - G(t_{i-1})} \quad (8)$$

The numerator $F(t_i) - F(t_{i-1})$ is the key density in the $i$-th region and the denominator $G(t_i) - G(t_{i-1})$ is the non-key density in the $i$-th region. The optimal $fpr_i$ is therefore proportional to the ratio of key density to non-key density in a region. This is intuitive, as a region with very high key density should allow most inputs to pass (most of them are keys), permitting a high false positive rate for its backup Bloom filter. Similarly, a region with low key density requires a Bloom filter with a low false positive rate to prevent non-keys from being let through.
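Eq. 8 translates directly into code. The sketch below takes the per-region key and non-key densities (each summing to one over the regions) and returns the relaxed-optimal rates; note these are not yet clipped to one.

```python
def relaxed_optimal_fpr(key_density, nonkey_density, target_fpr):
    # Eq. 8: fpr_i = FPR * (key density of region i) / (non-key density of region i).
    return [target_fpr * f / g for f, g in zip(key_density, nonkey_density)]
```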
Since we have the optimal $fpr_i$ in terms of $t_i$, we can substitute for $fpr_i$ in the original problem to get a problem only in terms of $t_i$. Eq. 9 shows the rearrangement of the minimization term after substitution.
$$\text{Minimization Term} = \sum_{i=1}^{k} |S| \cdot (F(t_i) - F(t_{i-1})) \cdot \log_2\frac{G(t_i) - G(t_{i-1})}{(F(t_i) - F(t_{i-1})) \cdot FPR} = |S| \cdot \log_2\frac{1}{FPR} - |S| \cdot D_{KL}\left(\hat f(t), \hat g(t)\right) \quad (9)$$

Here $\hat f(t)$ and $\hat g(t)$ denote the discretized key and non-key distributions over the $k$ regions induced by the thresholds $t$.
Again, the term being minimized (Eq. 9) represents the total space occupied by the backup Bloom filters; the total space used by our method, given in Eq. 10, is the sum of the backup Bloom filter space and the space occupied by the learned model:
$$|S| \cdot \log_2\frac{1}{FPR} - |S| \cdot D_{KL}\left(\hat f(t), \hat g(t)\right) + \text{Size of Learned Model} \quad (10)$$
The space occupied by a standard Bloom filter is $|S| \cdot \log_2(1/FPR)$. Thus, the space saved by PLBF in comparison to the standard Bloom filter is:
$$|S| \cdot D_{KL}\left(\hat f(t), \hat g(t)\right) - \text{Size of Learned Model} \quad (11)$$
The space saved by PLBF is therefore linearly proportional to the KL divergence $D_{KL}(\hat f(t), \hat g(t))$ between the key and non-key densities of the regions.
Finding the Optimal Thresholds
We have shown that, given a set of thresholds, we can find the optimal false positive rates in the relaxed problem. Here we turn to the question of finding optimal thresholds. We assume that we are given $k$, the number of regions desired. (We consider the importance of choosing $k$ further in our experimental section.) Given our results above, the optimal thresholds correspond to the points that maximize the KL divergence $D_{KL}(\hat f(t), \hat g(t))$. This KL divergence is the sum of the terms $\hat f_i \log_2 (\hat f_i / \hat g_i)$, one term per region. Note each term depends only on the proportion of keys and non-keys in that region and is otherwise independent of the other regions. This property allows a recursive definition of the KL divergence that is suitable for dynamic programming.
We divide the score space $[0, 1]$ into $N$ consecutive small segments; this provides a discretization of the score space, with larger $N$ more closely approximating the real interval. Given $k$, we can find the set of $k$ optimal thresholds using dynamic programming. Let $DP_{KL}(n, j)$ denote the maximum divergence obtainable when dividing the first $n$ segments into $j$ regions. Our optimal divergence corresponds to $DP_{KL}(N, k)$. The idea behind the algorithm is that we can recursively define $DP_{KL}(n, j)$ as in Eq. 12, where the last region spans segments $i$ through $n$. Here $\hat f$ and $\hat g$ represent the probability distributions of keys and non-keys over these $N$ segments.
$$DP_{KL}(n, j) = \max_{i} \left[ DP_{KL}(i - 1, j - 1) + \left(\sum_{t=i}^{n} \hat f(t)\right) \log_2 \frac{\sum_{t=i}^{n} \hat f(t)}{\sum_{t=i}^{n} \hat g(t)} \right] \quad (12)$$
The time complexity of computing $DP_{KL}(N, k)$ and the optimal thresholds is $O(N^2 k)$. Hence one can increase the value of $N$ to get more precise thresholds at the cost of higher computation time.
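A sketch of the dynamic program of Eq. 12 over the $N$ discretized segments, running in $O(N^2 k)$; `f` and `g` are the per-segment key and non-key probabilities, and the suffix sums make each candidate region's divergence an $O(1)$ lookup. All names are ours.

```python
import math

def max_divergence_thresholds(f, g, k):
    """DP of Eq. 12: split N segments into k regions so that
    sum_i f_i * log2(f_i / g_i) is maximized; returns the k-1 interior cuts."""
    N = len(f)
    F = [0.0] * (N + 1)  # F[i] = sum of f[i:], via suffix sums
    G = [0.0] * (N + 1)
    for i in range(N - 1, -1, -1):
        F[i] = F[i + 1] + f[i]
        G[i] = G[i + 1] + g[i]

    def term(i, n):
        # Divergence contribution of one region spanning segments [i, n).
        fi, gi = F[i] - F[n], G[i] - G[n]
        if fi == 0.0:
            return 0.0
        if gi == 0.0:
            return -math.inf  # keys but no non-key mass: disallow
        return fi * math.log2(fi / gi)

    dp = [[-math.inf] * (k + 1) for _ in range(N + 1)]  # dp[n][j]
    back = [[0] * (k + 1) for _ in range(N + 1)]
    dp[0][0] = 0.0
    for n in range(1, N + 1):
        for j in range(1, min(k, n) + 1):
            for i in range(j - 1, n):  # last region spans segments [i, n)
                cand = dp[i][j - 1] + term(i, n)
                if cand > dp[n][j]:
                    dp[n][j], back[n][j] = cand, i
    cuts, n = [], N
    for j in range(k, 0, -1):  # walk the backpointers to recover boundaries
        n = back[n][j]
        cuts.append(n)
    return sorted(cuts)[1:]  # drop the leading 0
```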
The Relaxed Problem and the General Problem
Once we obtain the threshold values that maximize the divergence, we can get the optimal $fpr_i$ values using Eq. 8. In many cases the relaxed solution will also be the optimal solution of the general problem. Specifically, if $f(x)/g(x) < 1/FPR$ for all $x$, then $fpr_i < 1$ for all intervals in the relaxed solution. (This is because $f(x)/g(x) < 1/FPR$ implies $FPR \cdot (F(t_i) - F(t_{i-1}))/(G(t_i) - G(t_{i-1})) < 1$ for all $i$.) In this case, the optimal solution of the relaxed problem is also optimal for the general problem. Hence if we are aiming for a sufficiently low false positive rate, the relaxed problem suffices.
In some cases we may first assign false positive rates based on the relaxed problem and Eq. 8, but find that $fpr_i \ge 1$ for some regions. For such regions, we can set $fpr_i = 1$, re-solve the relaxed problem with these additional constraints (that is, excluding these regions), and use the result as a solution for the general problem. Some regions might again have a false positive rate above one, so we repeat the process; the algorithm stops when no new region has a false positive rate greater than one. Given a set of thresholds, this procedure yields optimal false positive rates, but the overall algorithm may not be optimal for the general problem (as the relaxed problem formulation is used to determine the thresholds). However, we expect it to perform very well in most cases (we provide pseudo-code in Appendix C).
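The iterative procedure just described (it mirrors Algorithm 2 of Appendix C) can be sketched as follows: regions whose relaxed rate reaches one are clipped, and the relaxed problem is re-solved over the rest, with the FPR budget reduced by the non-key mass that now passes freely. We assume strictly positive non-key density per region.

```python
def optimal_fpr(key_density, nonkey_density, target_fpr):
    k = len(key_density)
    F_total = sum(key_density)
    fpr = [0.0] * k
    clipped = [False] * k
    while True:
        F_sum = sum(f for f, c in zip(key_density, clipped) if c)
        G_sum = sum(g for g, c in zip(nonkey_density, clipped) if c)
        # Relaxed solution over un-clipped regions, against the reduced budget.
        for i in range(k):
            if not clipped[i]:
                fpr[i] = (key_density[i] * (target_fpr - G_sum)
                          / (nonkey_density[i] * (F_total - F_sum)))
        newly = [i for i in range(k) if not clipped[i] and fpr[i] >= 1.0]
        if not newly:
            return [1.0 if clipped[i] else fpr[i] for i in range(k)]
        for i in newly:
            clipped[i] = True
```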
With this enhanced algorithm in mind, we can find additional ways for the relaxed solution to lead us to the optimal solution. Again, the issue is that there might be cases where the $fpr_i$ constraint is violated, and here we extend the solution to account for that. In an interval where $fpr_i = 1$, we would want to use no Bloom filter. In this part of the score space, keys occur dramatically more frequently than non-keys, and we may simply choose to accept such inputs as keys from our set. (Indeed, this is what is done in the standard learned Bloom filter.) We may thus improve upon our previous solution by assuming that at most one rightmost region of the score space has no Bloom filter. In particular, if $f$ and $g$ are monotonically increasing and decreasing functions, respectively, then without loss of generality the last ($k$-th) region will be the only one with $fpr_k = 1$. Moreover, it is not unreasonable to believe that in practice $f$ and $g$ will be monotonic or nearly so; this simply corresponds to keys being more concentrated at higher scores, and non-keys being more concentrated at lower scores.
The problem of finding the optimal thresholds becomes easier after this assumption, as we can safely remove all the $fpr_i \le 1$ constraints for $i \ne k$ (and hence apply the KKT conditions), and we try all possible boundaries for the $k$-th region, the region with $fpr_k = 1$. Specifically, as before we divide the score space into $N$ discrete segments. The algorithm then iterates over all $O(N)$ possibilities for $t_{k-1}$, setting the false positive rate of the $k$-th region to 1, and uses the dynamic programming algorithm of subsection 3.3.2 for the rest of the segments. The worst-case time complexity is then $O(N^3 k)$. After obtaining a set of thresholds from the DP algorithm, we set $fpr_k = 1$ for the rightmost region and use Eq. 8 for the rest (we provide pseudo-code in Appendix B).
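Putting the pieces together, the $O(N^3 k)$ search can be sketched with the two helpers above (`max_divergence_thresholds` and `optimal_fpr`). Space is accounted via the $|S| \cdot \hat f_i \cdot \log_2(1/fpr_i)$ proxy of Eq. 3; the structure follows our reading of Algorithm 1 and is not the authors' released code.

```python
import math

def solve_plbf(f, g, k, target_fpr, num_keys):
    """Try every starting segment of the rightmost (filter-free) region,
    then solve the rest with the DP; keep the configuration with least space."""
    N = len(f)
    best = (math.inf, None, None)
    for last in range(k - 1, N):  # first segment of the k-th region
        G_last = sum(g[last:])
        if target_fpr <= G_last:
            continue  # infeasible: the free region alone exceeds the budget
        cuts = max_divergence_thresholds(f[:last], g[:last], k - 1)
        bounds = [0] + cuts + [last, N]
        F = [sum(f[a:b]) for a, b in zip(bounds, bounds[1:])]
        G = [sum(g[a:b]) for a, b in zip(bounds, bounds[1:])]
        # Rightmost region uses no backup filter (fpr = 1); the others get
        # the clipped-optimal rates against the reduced budget.
        fpr = optimal_fpr(F[:-1], G[:-1], target_fpr - G_last) + [1.0]
        space = sum(num_keys * Fi * math.log2(1.0 / p)
                    for Fi, p in zip(F, fpr) if p < 1.0)
        if space < best[0]:
            best = (space, bounds, fpr)
    return best  # (backup-filter space, segment boundaries, per-region fprs)
```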
Additional Considerations
First, we show how the sandwiching approach is a special case of our design. Next, we examine how performance varies with respect to the number of regions.
Sandwiching: A Special Case
We show here that the sandwiching approach can actually be interpreted as a special case of our method. In the sandwiching approach, the learned model is sandwiched between two Bloom filters, as shown in Fig. 1(B). The input first goes through a Bloom filter and the negatives are discarded. The positives are passed through the learned model, where, based on their score $s(x)$, they are either directly accepted when $s(x) > t$ or passed through another backup Bloom filter when $s(x) \le t$. In our setting, we note that the pre-filter in the sandwiching approach can be merged with the backup filters to yield backup filters with modified false positive rates. Fig. 1(F) shows what an equivalent design with modified false positive rates would look like. (Here equivalence means we obtain the same false positive rate with the same bit budget; we do not consider compute time.) Thus, we see that the sandwiching approach can be viewed as a special case of the PLBF with two regions.
However, this also tells us we can make the PLBF more efficient by using sandwiching. Specifically, if we find when constructing a PLBF with $k$ regions that $fpr_i < 1$ for all $i$, we may set $fpr_0 = \max_{1 \le i \le k} fpr_i$. We may then use an initial Bloom filter with false positive rate $fpr_0$, and change the target false positive rates of all other intervals to $fpr_i / fpr_0$, while keeping the same bit budget. This approach will be somewhat more efficient computationally, as we avoid computing the learned model for some fraction of the non-key elements.
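The conversion just described is a two-line transformation; dividing through by $fpr_0$ keeps every region's end-to-end rate $fpr_0 \cdot (fpr_i / fpr_0)$, and hence the bit budget, unchanged.

```python
def to_sandwiched(region_fprs):
    # Valid when every region has fpr_i < 1 (otherwise fpr_0 would be 1).
    fpr0 = max(region_fprs)
    return fpr0, [p / fpr0 for p in region_fprs]
```

The initial filter with rate `fpr0` rejects most non-keys before the learned model is ever evaluated, which is where the computational saving comes from.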
Performance against number of regions k
Earlier, we saw that the space saved by using the PLBF is linearly proportional to $D_{KL}(\hat f(t), \hat g(t))$. If we split any region into two regions, the overall divergence cannot decrease, because the sum of the divergences of the two split regions is always at least the original divergence, as shown in Eq. 13. Eq. 13 is an application of Jensen's inequality.
$$(f_1 + f_2) \cdot \log_2\frac{f_1 + f_2}{g_1 + g_2} \;\le\; f_1 \cdot \log_2\frac{f_1}{g_1} + f_2 \cdot \log_2\frac{f_2}{g_2} \quad (13)$$
Increasing the number of regions therefore always improves performance, but the quantity $D_{KL}(\hat f(t), \hat g(t))$ has an upper bound $D_{KL}(f, g)$. Hence

$$\lim_{k \to \infty} D_{KL}\left(\hat f(t), \hat g(t)\right) = D_{KL}(f, g) \quad (14)$$
We would hope that in practice a small number of regions $k$ would suffice. This seems to be the case in our experience; we detail one such experiment (Experiment 4.2.1) in our evaluation.
Evaluation
The baselines used in our evaluation are the standard Bloom filter [Bloom (1970)], the sandwiching LBF [Mitzenmacher (2018)], and the adaptive learned Bloom filter (AdaBF) [Dai and Shrivastava (2019)]. For the evaluation we use 3 different datasets:
URLs: Our first dataset is the URLs dataset, which was also used for evaluation in previous papers [Kraska et al. (2018), Dai and Shrivastava (2019)]. It contains 450,176 URLs, of which 103,520 (23%) are malicious and the remaining 346,646 (77%) are benign. We extract 17 features in total from these URLs, such as the length of the host name, use of URL shortening, and counts of various special characters in the URL.
EMBER: Bloom filters are widely used to match file signatures against a virus signature database. Ember (Endgame Malware Benchmark for Research) [Anderson and Roth (2018)] is an open-source collection of 1.1 million portable executable (PE) file sha256 hashes that were scanned by VirusTotal in 2017. Of the 1.1 million files, 400K are malicious, 400K are benign, and 300K are unlabeled. The features of the files are already included in the dataset.
Synthetic: An appealing scenario for our method is when the key density increases and the non-key density decreases monotonically with respect to the score value. We simulate this by generating the key and non-key score distributions using Zipfian distributions, as in Fig. 2(A). Since we work directly with the score distribution, the size of the learned model is zero.
As the standard Bloom filter does not use a learned model, to make a fair comparison we include the size of the learned model in the size of each learned Bloom filter.
Overall Performance
Here, we compare the performance of PLBF with the other baselines. We do this by fixing the target $FPR$ and determining the space used by each method. First, we train the model using 40% of the non-key set and the entire key set as training samples. The parameters of each method are then tuned using this model with the aim of achieving the fixed target $FPR$. The rest of the non-keys are used to evaluate the actual false positive rate achieved by these methods. We plot this actual false positive rate against the space used by each method.
All the methods can function regardless of what type of model is used. We choose the random forest classifier from sklearn [Pedregosa et al. (2011)] as it yields good accuracy on these datasets. The F1 scores of the learned models used for the synthetic, URLs, and EMBER datasets were 0.99, 0.97, and 0.85, respectively. We consider the size of the model to be the size of its pickle⁴ file on disk. This size has been added to the learned Bloom filter baselines as well as to our method. Again, no model is used for the synthetic dataset, so the size of the model is zero in this case. We use five regions ($k = 5$) for both PLBF and AdaBF, as this is usually enough to achieve good performance, as shown in 4.2.1.
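A sketch of this setup; dataset loading is elided, the labels are 1 for keys and 0 for the 40% training sample of non-keys, and the file name is ours.

```python
import os
import pickle
from sklearn.ensemble import RandomForestClassifier

def fit_and_measure(X_train, y_train, path="model.pkl"):
    clf = RandomForestClassifier().fit(X_train, y_train)
    with open(path, "wb") as fh:
        pickle.dump(clf, fh)
    model_bytes = os.path.getsize(path)  # counted against every learned variant
    score = lambda x: clf.predict_proba([x])[0, 1]  # s(x) in [0, 1]
    return score, model_bytes
```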
The results of the experiment are shown in Fig. 2, along with the distributions of the scores of keys and non-keys ($f, g$) for each dataset. As we can see from the figure, PLBF gives a better Pareto curve than the other baselines on these datasets. On the synthetic dataset, our performance is much better than the other baselines. The URLs dataset has a score distribution similar to the synthetic dataset, and we see a similar performance trend. The score distribution for the EMBER dataset indicates that the model here is not as helpful. Due to the limited power of the model, the performance benefit of every method over a standard Bloom filter is much less significant, but the PLBF still offers gains. AdaBF is not consistent with respect to the false positive rate, as the performance of its heuristics is not predictable.
The space benefit provided by PLBF over the standard Bloom filter first increases but converges to a constant times the set size, as given in Eq. 11. As the $FPR$ tends to zero, the space saved by PLBF is 1.75x, 2.3x, and 1.9x that of the sandwiching approach, and 1.5x, 2x, and 1.6x that of AdaBF, on the synthetic, URLs, and EMBER datasets, respectively.
Performance and Model Quality
Here we provide an exemplary experiment showing how the performance of the various methods varies with the quality of the model. As discussed earlier, a good model will have the distributions $f$ and $g$ highly skewed towards extreme values. We therefore vary the skew parameter of the Zipfian distribution to simulate model quality, and we measure the quality of the model using the standard F1 score. Fig. 3(B) shows the space required by the various methods to achieve a fixed false positive rate of 0.001 as we vary the F1 score of the model. The figure shows that as model quality (in terms of F1 score) increases, the space required by every method decreases (except for the standard Bloom filter, which does not use a model). The space used by all the methods goes to zero as the F1 score goes to 1, since for the synthetic dataset there is no space cost for the model. The data point corresponding to an F1 score of 0.99 was used to plot Fig. 2(A).
Conclusion
Our analysis of the partitioned learned Bloom filter gives a formal framework for improving learned Bloom filter performance, with substantially better results than previous heuristics. As Bloom filters are used across thousands of applications, we hope the PLBF may find many uses where the data set is amenable to a learned model.
APPENDIX
A Solving the Relaxed Problem using KKT conditions
As mentioned in the main text, if we relax the constraint $fpr_i \le 1$, we can obtain the optimal $fpr_i$ values using the stationary KKT conditions. Here we show this derivation. The appropriate Lagrangian is given in Eq. 15. In this case, the KKT conditions tell us that the optimal solution is a stationary point of the Lagrangian. Therefore, we find where the derivative of the Lagrangian with respect to $fpr_i$ is zero.
$$L(t_i, fpr_i, \lambda, \nu_i) = \sum_{i=1}^{k} (F(t_i) - F(t_{i-1})) \cdot \log_2\frac{1}{fpr_i} + \lambda \left( \sum_{i=1}^{k} (G(t_i) - G(t_{i-1})) \cdot fpr_i - FPR \right) + \sum_{i=1}^{k} \nu_i \cdot (t_{i-1} - t_i) \quad (15)$$
$$\frac{\partial L(t_i, fpr_i, \lambda, \nu_i)}{\partial fpr_i} = 0 \quad (16)$$

$$\frac{\partial \left[(F(t_i) - F(t_{i-1})) \log_2\frac{1}{fpr_i}\right]}{\partial fpr_i} = -\lambda \, \frac{\partial \left[(G(t_i) - G(t_{i-1})) \cdot fpr_i\right]}{\partial fpr_i} \quad (17)$$

$$fpr_i = \frac{F(t_i) - F(t_{i-1})}{\ln(2) \cdot \lambda \cdot (G(t_i) - G(t_{i-1}))} \quad (18)$$

$$\lambda = \frac{\sum_{i=1}^{k} (F(t_i) - F(t_{i-1}))}{\ln(2) \cdot FPR} = \frac{1}{FPR \cdot \ln 2} \quad (19)$$

$$fpr_i = \frac{(F(t_i) - F(t_{i-1})) \cdot FPR}{G(t_i) - G(t_{i-1})} \quad (20)$$
Eq. 18 expresses $fpr_i$ in terms of $\lambda$. Summing Eq. 18 over all $i$ and using the constraint relating $FPR$ and $G$ (Eq. 4, at equality), we get Eq. 19. Thus the optimal $fpr$ values turn out to be as given in Eq. 20.
B Algorithm for finding thresholds
We provide the pseudocode for the algorithm that finds the solution of the relaxed problem; Alg. 1 finds the thresholds and false positive rates. As described in the main text, this algorithm provides the optimal parameter values if the key and non-key densities are monotonically increasing and decreasing, respectively.
The idea is that only the false positive rate of the rightmost region can be one. The algorithm receives discretized key and non-key densities. It first iterates over all the possibilities for the rightmost region. For each possibility, it finds the thresholds that maximize the KL divergence over the rest of the array, for which the dynamic programming algorithm of subsection 3.3.2 applies. After calculating these thresholds, it finds the optimal false positive rate for each region using Alg. 2. Having calculated the thresholds and false positive rates, the algorithm computes the space used by the backup Bloom filters in PLBF, and remembers the boundary index for which the space used is minimal.
C Optimal FPR for given thresholds
We provide the pseudocode for the algorithm that finds the optimal false positive rates when the threshold values are given. The corresponding optimization problem is given in Eq. 21. As the boundaries of the regions are already defined, one only needs to find the optimal false positive rate for the backup Bloom filter of each region.
Algorithm 1 Solving the general problem
Input F_dis: the array containing the discretized key density of each segment
Input G_dis: the array containing the discretized non-key density of each segment
Input FPR: target overall false positive rate
Input k: number of regions
Output t: the array of threshold boundaries of the regions
Output fpr: the array of false positive rates of the regions
Subroutine ThresMaxDivDP: DP algorithm that returns the thresholds maximizing the divergence between the key and non-key distributions
Subroutine CalcDensity: returns the per-region densities given the thresholds of the regions
Subroutine OptimalFPR: returns the optimal false positive rates of the regions given the thresholds
Subroutine SpaceUsed: returns the space used by the backup Bloom filters given thresholds and per-region false positive rates
1: procedure SOLVE(F_dis, G_dis, FPR, k)
2:   MinSpaceUsed ← ∞   (stores the minimum space used so far)
3:   index ← −1   (stores the index corresponding to the minimum space used)
4:   for i in k−1, k, ..., N−1 do   (iterate over the possibilities for the last region; N is the number of segments)
5:     F_last ← Σ_{j=i}^{N} F_dis[j]   (key density of the last region)
6:     G_last ← Σ_{j=i}^{N} G_dis[j]   (non-key density of the last region)
7:     t ← ThresMaxDivDP(F_dis[1..(i−1)], G_dis[1..(i−1)], k−1)   (optimal thresholds for the rest of the array)
8:     t.append(i)
9:     F′, G′ ← CalcDensity(F_dis, G_dis, t)
10:    fpr ← OptimalFPR(F′, G′, FPR, k)   (optimal false positive rates for this configuration)
11:    if SpaceUsed(F_dis, G_dis, t, fpr) < MinSpaceUsed then
12:      MinSpaceUsed ← SpaceUsed(F_dis, G_dis, t, fpr); index ← i   (remember the best configuration so far)
13:  t ← ThresMaxDivDP(F_dis[1..(index−1)], G_dis[1..(index−1)], k−1); t.append(index)
14:  F′, G′ ← CalcDensity(F_dis, G_dis, t)
15:  fpr ← OptimalFPR(F′, G′, FPR, k)
16:  return t, fpr
Alg. 2 gives the pseudocode. We first assign false positive rates based on the relaxed problem, but may find that $fpr_i \ge 1$ for some regions. For such regions, we set $fpr_i = 1$, re-solve the relaxed problem with these additional constraints (that is, excluding these regions), and use the result as a solution for the general problem. Some regions might again have a false positive rate above one, so we repeat the process. The algorithm stops when there is no new region with a false positive rate greater than one. This algorithm finds the optimal false positive rates for the regions when the thresholds are fixed.
Figure 1: (A), (B), (C) represent the original LBF, LBF with sandwiching, and PLBF designs, respectively. Each region in (C) is defined by score boundaries $t_i, t_{i+1}$ and a false positive rate $fpr_i$ of the Bloom filter used for that region. (D), (E) show the LBF and PLBF with score-space distributions. (F) represents a PLBF design equivalent to the sandwiching approach, used in 3.4.1.
Figure 3: (A) Space saved as we increase the number of regions $k$. (B) Space occupied against F1 score for the Zipfian distribution.
The optimization problem for Appendix C (Eq. 21) is:

$$\min_{fpr_1, \ldots, fpr_k} \sum_{i=1}^{k} (F(t_i) - F(t_{i-1})) \cdot \log_2\frac{1}{fpr_i} \quad \text{subject to} \quad \sum_{i=1}^{k} (G(t_i) - G(t_{i-1})) \cdot fpr_i = FPR; \qquad fpr_i \le 1, \; i = 1, \ldots, k \quad (21)$$
We note that these expressions are very accurate approximations; see Broder and Mitzenmacher (2003) and Bose et al. (2008) for further details.
The different false positive rates for each region can be realized in a variety of ways: we can either use a separate Bloom filter for each region or a common Bloom filter with a varying number of hash functions per region.
The size of a Bloom filter is proportional to $|S| \cdot \log_2(1/fpr)$, where $S$ is the set it represents and $fpr$ is the false positive rate it achieves. See e.g. Mitzenmacher (2018) for related discussion.
Pickle is the standard way of serializing objects in Python.
Micro-Benchmarks

Here, we see how the performance of PLBF varies with the number of regions ($k$) and with model quality. The following experiments were done on the synthetic dataset.

Performance and the Number of Regions

Here, we see how the performance (amount of space saved) of our approach varies as we change the number of regions $k$. We have seen that the space savings obtained by a learned model is linearly proportional to the divergence $D_{KL}(\hat f(t), \hat g(t))$, and this divergence strictly increases with the number of regions but is upper bounded by $D_{KL}(f, g)$. Fig. 3(A) shows the space saved as we increase the number of regions $k$ on the synthetic dataset (F1 score = 0.99). The red line in the figure represents the upper bound on space savings, which corresponds to the divergence $D_{KL}(f, g)$. We observe that performance is quite close to optimal even with five regions. In practice, our experience suggests that using 4-6 regions should be sufficient to obtain reasonable performance.

Algorithm 2 Finding optimal fpr's given thresholds
Input F′: the array containing the key density of each region
Input G′: the array containing the non-key density of each region
Input FPR: target overall false positive rate
Input k: number of regions
Output fpr: the array of false positive rates of the regions
1: procedure OPTIMALFPR(F′, G′, FPR, k)
2:   F_sum ← 0   (sum of key densities of regions with fpr_i = 1)
3:   G_sum ← 0   (sum of non-key densities of regions with fpr_i = 1)
4:   for i in 1...k do fpr[i] ← F′[i] · FPR / G′[i]   (assign relaxed-problem solution)
5:   while some region i with fpr[i] > 1 remains do
6:     for each not-yet-clipped region i with fpr[i] ≥ 1 do fpr[i] ← 1; F_sum ← F_sum + F′[i]; G_sum ← G_sum + G′[i]
7:     for each region i with fpr[i] < 1 do fpr[i] ← F′[i] · (FPR − G_sum) / (G′[i] · (1 − F_sum))   (modify the fpr of the remaining regions to ensure the target false positive rate is FPR)
8:   return fpr
Hyrum S. Anderson and Phil Roth. 2018. EMBER: An Open Dataset for Training Static PE Malware Machine Learning Models. arXiv:cs.CR/1804.04637.
Burton H. Bloom. 1970. Space/Time Trade-Offs in Hash Coding with Allowable Errors. Commun. ACM 13, 7 (July 1970), 422-426. https://doi.org/10.1145/362686.362692
Prosenjit Bose, Hua Guo, Evangelos Kranakis, Anil Maheshwari, Pat Morin, Jason Morrison, Michiel Smid, and Yihui Tang. 2008. On the false-positive rate of Bloom filters. Inform. Process. Lett. 108, 4 (2008), 210-213.
Andrei Z. Broder and Michael Mitzenmacher. 2003. Network Applications of Bloom Filters: A Survey. Internet Math. 1, 4 (2003), 485-509. https://doi.org/10.1080/15427951.2004.10129096
Jehoshua Bruck, Jie Gao, and Anxiao Jiang. 2006. Weighted Bloom filter. In Proceedings of the 2006 IEEE International Symposium on Information Theory (ISIT 2006), Seattle, Washington, USA. IEEE, 2304-2308. https://doi.org/10.1109/ISIT.2006.261978
Zhenwei Dai and Anshumali Shrivastava. 2019. Adaptive Learned Bloom Filter (Ada-BF): Efficient Utilization of the Classifier. arXiv preprint arXiv:1910.09131.
Niv Dayan, Manos Athanassoulis, and Stratos Idreos. 2018. Optimal Bloom Filters and Adaptive Merging for LSM-Trees. ACM Trans. Database Syst. 43, 4, Article 16 (Dec. 2018), 48 pages. https://doi.org/10.1145/3276980
Peter C. Dillinger and Panagiotis Manolios. 2004. Bloom Filters in Probabilistic Verification. In Formal Methods in Computer-Aided Design (FMCAD 2004), Lecture Notes in Computer Science, Vol. 3312, Alan J. Hu and Andrew K. Martin (Eds.). Springer, 367-381. https://doi.org/10.1007/978-3-540-30494-4_26
Tim Kraska, Alex Beutel, Ed H. Chi, Jeffrey Dean, and Neoklis Polyzotis. 2018. The Case for Learned Index Structures. In Proceedings of the 2018 International Conference on Management of Data (SIGMOD '18). ACM, New York, NY, USA, 489-504. https://doi.org/10.1145/3183713.3196909
Michael Mitzenmacher. 2002. Compressed Bloom filters. IEEE/ACM Trans. Netw. 10, 5 (2002), 604-612. https://doi.org/10.1109/TNET.2002.803864
Michael Mitzenmacher. 2018. A Model for Learned Bloom Filters and Optimizing by Sandwiching. In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.). Curran Associates, Inc., 464-473.
Michael Mitzenmacher, Salvatore Pontarelli, and Pedro Reviriego. 2020. Adaptive Cuckoo Filters. J. Exp. Algorithmics 25, 1, Article 1.1 (March 2020), 20 pages. https://doi.org/10.1145/3339504
Anna Pagh, Rasmus Pagh, and S. Srinivasa Rao. 2005. An Optimal Bloom Filter Replacement. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '05). Society for Industrial and Applied Mathematics, USA, 823-829.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12 (2011), 2825-2830.
Jack Rae, Sergey Bartunov, and Timothy Lillicrap. 2019. Meta-Learning Neural Bloom Filters. In Proceedings of the 36th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 97), Kamalika Chaudhuri and Ruslan Salakhutdinov (Eds.). PMLR, Long Beach, California, USA, 5271-5280.
264,490,454 | HOW DO LANGUAGE MODELS BIND ENTITIES IN CONTEXT? | To correctly use in-context information, language models (LMs) must bind entities to their attributes. For example, given a context describing a "green square" and a "blue circle", LMs must bind the shapes to their respective colors. We analyze LM representations and identify the binding ID mechanism: a general mechanism for solving the binding problem, which we observe in every sufficiently large model from the Pythia and LLaMA families. Using causal interventions, we show that LMs' internal activations represent binding information by attaching binding ID vectors to corresponding entities and attributes. We further show that binding ID vectors form a continuous subspace, in which distances between binding ID vectors reflect their discernability. Overall, our results uncover interpretable strategies in LMs for representing symbolic knowledge in-context, providing a step towards understanding general in-context reasoning in large-scale LMs. | [
246285344,
155092004,
7478738,
255749430,
56657817,
990233
] | HOW DO LANGUAGE MODELS BIND ENTITIES IN CONTEXT?
26 Oct 2023
Jiahai Feng
Jacob Steinhardt
UC Berkeley
HOW DO LANGUAGE MODELS BIND ENTITIES IN CONTEXT?
26 Oct 2023 arXiv:2310.17191v1 [cs.LG]
To correctly use in-context information, language models (LMs) must bind entities to their attributes. For example, given a context describing a "green square" and a "blue circle", LMs must bind the shapes to their respective colors. We analyze LM representations and identify the binding ID mechanism: a general mechanism for solving the binding problem, which we observe in every sufficiently large model from the Pythia and LLaMA families. Using causal interventions, we show that LMs' internal activations represent binding information by attaching binding ID vectors to corresponding entities and attributes. We further show that binding ID vectors form a continuous subspace, in which distances between binding ID vectors reflect their discernability. Overall, our results uncover interpretable strategies in LMs for representing symbolic knowledge in-context, providing a step towards understanding general in-context reasoning in large-scale LMs.
INTRODUCTION
Modern language models (LMs) excel at many reasoning benchmarks, suggesting that they can perform general purpose reasoning across many domains. However, the mechanisms that underlie LM reasoning remain largely unknown (Räuker et al., 2023). The deployment of LMs in society has led to calls to better understand these mechanisms (Hendrycks et al., 2021), so as to know why they work and when they fail (Mu & Andreas, 2020; Hernandez et al., 2021; Vig et al., 2020b).

In this work, we seek to understand binding, a foundational skill that underlies reasoning. How humans solve binding, i.e. recognize features of an object as bound to that object and not to others, is a fundamental problem in psychology (Treisman, 1996). Here, we study binding in LMs.

Binding arises any time the LM has to reason about two or more objects of the same kind. For example, consider the following passage involving two people and two countries:

Context: Alice lives in the capital city of France. Bob lives in the capital city of Thailand. Question: Which city does Bob live in?
(1)
In this example the LM has to represent the associations lives(Alice, Paris) and lives(Bob, Bangkok).
We call this the binding problem: for the predicate lives, Alice is bound to Paris and Bob to Bangkok. Since predicates are bound in-context, binding must occur in the activations, rather than in the weights as with factual recall (Meng et al., 2022). This raises the question: how do LMs represent binding information in the context such that it can be later recalled?

Overall, our key technical contribution is the identification of a robust general mechanism in LMs for solving the binding problem. The mechanism relies on binding IDs, which are abstract concepts that LMs use internally to mark variables in the same predicate apart from variables in other predicates (Fig. 1). Using causal mediation analysis we empirically verify two key properties of the binding ID mechanism (Section 3).

Turning to the structure of binding IDs, we find that binding IDs are represented as vectors which are bound to variables by simple addition (Section 4). Further, we show that binding IDs occupy a subspace, in the sense that linear combinations of binding IDs are still valid binding IDs, even though random vectors are not.
Figure 1: The binding ID mechanism. The LM learns abstract binding IDs (drawn as triangles or squares) which distinguish between entity-attribute pairs. Binding functions $\Gamma_E$ and $\Gamma_A$ bind entities and attributes to their abstract binding ID, and store the results in the activations. To answer queries, the LM identifies the attribute that shares the same binding ID as the queried entity.

Lastly, we find that binding IDs are ubiquitous and transferable (Section 5). They are used by every sufficiently large model in the LLaMA (Touvron et al., 2023) and Pythia (Biderman et al., 2023) families, and their fidelity increases with scale. They are used for a variety of synthetic binding tasks with different surface forms, and binding vectors from one task transfer to other tasks. Finally, we qualify our findings by showing that despite their ubiquity, binding IDs are not universal: we exhibit a question-answering task where an alternate mechanism, "direct binding", is used instead.
PRELIMINARIES
In this section we define the binding task and explain causal mediation analysis, our main experimental technique.
Binding task. Formally, the binding task consists of a set of entities $E$ and a set of attributes $A$. An $n$-entity instance of the binding task consists of a context constructed from $n$ entities $e_0, \ldots, e_{n-1} \in E$ and $n$ attributes $a_0, \ldots, a_{n-1} \in A$, and we denote the corresponding context as $c = \text{ctxt}(e_0 \leftrightarrow a_0, \ldots, e_{n-1} \leftrightarrow a_{n-1})$. For a context $c$, we use $E_k(c)$ and $A_k(c)$ to denote the $k$-th entity and the $k$-th attribute of the context $c$, for $k \in [0, n-1]$. We will drop the dependence on $c$ for brevity when the choice of $c$ is clear from context.

In the CAPITALS task, which is the main task we study for most of the paper, $E$ is a set of single-token names, and $A$ is a set of single-token countries. Quote 1 is an example instance of the CAPITALS task with context $c = \text{ctxt}(Alice \leftrightarrow France, Bob \leftrightarrow Thailand)$. In this context, $E_0$ is Alice, $A_0$ is France, etc.

Given a context $c$, we are interested in the model's behavior when queried with each of the $n$ entities present in $c$. For any $k \in [0, n-1]$, when queried with the entity $E_k$ the model should place high probability on the answer matching $A_k$. In our running example, the model should predict "Paris" when queried with "Alice", and "Bangkok" when queried with "Bob".

To evaluate a model's behavior on a binding task, we sample $N = 100$ contexts. For each context $c$, we query the LM with every entity mentioned in the context, which returns a vector of log probabilities over every token in the vocabulary. The mean log prob metric measures the mean of the log probability assigned to the correct attribute token. Top-1 accuracy measures the proportion of queries where the correct attribute token has the highest log probability out of all attribute tokens. However, we will instead use the median-calibrated accuracy (Zhao et al., 2021), which calibrates the log probabilities with the median log probability before taking the top-1 accuracy. We discuss this choice in Appendix A.
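The three metrics can be sketched as follows, under one natural reading of the calibration: `logprobs` is assumed to be a (queries x attribute-tokens) array of log probabilities, with `targets` holding the index of the correct attribute token for each query. This is our rendering, not the authors' released code.

```python
import numpy as np

def mean_log_prob(logprobs, targets):
    return logprobs[np.arange(len(targets)), targets].mean()

def top1_accuracy(logprobs, targets):
    return (logprobs.argmax(axis=1) == targets).mean()

def median_calibrated_accuracy(logprobs, targets):
    # Subtract each attribute token's median log prob across queries
    # before taking the argmax (Zhao et al., 2021).
    calibrated = logprobs - np.median(logprobs, axis=0, keepdims=True)
    return (calibrated.argmax(axis=1) == targets).mean()
```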
Causality in autoregressive LMs. We utilize the inherent causal structure of autoregressive LMs. Let an LM have $n_{layers}$ transformer layers and a $d_{model}$-dimensional activation space. For every token position $p$, we use $Z_p \in \mathbb{R}^{n_{layers} \times d_{model}}$ to denote the stacked set of internal activations¹ at token $p$ (see Fig. 2a). We refer to the collective internal activations of the context as $Z_{context}$. In addition, we denote the activations at the token for the $k$-th entity as $Z_{E_k}$, and the $k$-th attribute as $Z_{A_k}$. We sometimes write $Z_{A_k}(c)$, $Z_{context}(c)$, etc. to make clear the dependence on the context $c$.

[Figure 2: causal mediation analysis; panel (c) shows the intervention $Z_{A_0} \to Z'_{A_0}$ using activations from $Z'_{context}$.]

Fig. 2a shows that $Z_{context}$ contains all the information about the context that the LM uses. We thus study the structure of $Z_{context}$ using causal mediation analysis, a widely used tool for understanding neural networks (Vig et al., 2020a; Geiger et al., 2021; Meng et al., 2022). Causal mediation analysis involves substituting one set of activations in a network for another, and we adopt the /. notation (from Mathematica) to denote this. For example, for activations $Z^* \in \mathbb{R}^{n_{layers} \times d_{model}}$, $Z_{context}/.\{Z_{A_0} \to Z^*\}$ denotes the result of replacing $Z_{A_0}$ with $Z^*$ in $Z_{context}$. Given a causal graph, causal mediation analysis determines the role of an intermediate node by experimentally intervening on the value of the node and measuring the model's output on various queries. For convenience, when the model answers queries in accordance with a context $c$, we say that the model believes² $c$. If there is no context consistent with the language model's behavior, then we say that the LM is confused.
As an example, suppose we are interested in the role of the activations $Z_{A_0}$ in Fig. 2a. To apply causal mediation analysis, we would:

1. Obtain $Z_{context}$ by running the model on the original context $c$ (which we also refer to as the target context) (Fig. 2a)
2. Obtain $Z'_{context}$ by running the model on a different context $c'$ (i.e. the source context) (Fig. 2b)
3. Modify $Z_{context}$ by replacing $Z_{A_0}$ from the target context with $Z'_{A_0}$ from the source context (Fig. 2c), while keeping all other aspects of $Z_{context}$ the same, resulting in $Z^{intervened}_{context} = Z_{context}/.\{Z_{A_0} \to Z'_{A_0}\}$
4. Evaluate the model's beliefs based on the new $Z^{intervened}_{context}$

We can infer the causal role of $Z_{A_0}$ from how the intervention $Z_{context}/.\{Z_{A_0} \to Z'_{A_0}\}$ changes the model's beliefs. Intuitively, if the model retains its original beliefs $c$, then $Z_{A_0}$ has no causal role in the model's behavior on the binding task. On the other hand, if the model now believes the source context $c'$, then $Z_{A_0}$ contains all the information in the context. In reality both hypothetical extremes are implausible, and in Section 3 we discuss a more realistic hypothesis.

A subtle point is that we study how different components of $Z_{context}$ store information about the context (and thus influence behavior), and not how $Z_{context}$ itself is constructed. We thus suppress the causal influence that $Z_{A_0}$ has on downstream parts of $Z_{context}$ (such as $Z_{E_1}$ and $Z_{A_1}$) by freezing the values of $Z_{E_1}$ and $Z_{A_1}$ in $Z^{intervened}_{context}$ instead of recomputing them based on $Z'_{A_0}$.
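A sketch of the substitution $Z_{context}/.\{Z_{A_0} \to Z'_{A_0}\}$ using forward hooks; the module tree (`model.model.layers`, LLaMA-style), the shape of `source_acts`, and the position bookkeeping are schematic assumptions rather than the authors' released code.

```python
import torch

def patch_activations(model, target_ids, source_acts, positions):
    """Run model on target_ids while overwriting the residual stream at the
    given token positions (every layer) with activations from a source run."""
    handles = []
    for layer_idx, layer in enumerate(model.model.layers):
        def hook(module, inputs, output, layer_idx=layer_idx):
            hidden = output[0] if isinstance(output, tuple) else output
            for pos in positions:
                # source_acts[layer_idx]: [batch, seq, d_model] from the source run
                hidden[:, pos, :] = source_acts[layer_idx][:, pos, :]
            return output
        handles.append(layer.register_forward_hook(hook))
    try:
        with torch.no_grad():
            logits = model(target_ids).logits
    finally:
        for h in handles:
            h.remove()
    return logits
```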
EXISTENCE OF BINDING IDS
In this section, we first describe our hypothesized binding ID mechanism. Then, we identify two key predictions of the mechanism, factorizability and position independence, and verify them experimentally. We provide an informal argument in Appendix B for why this binding ID mechanism is the only mechanism consistent with factorizability and position independence.

Binding ID mechanism. We claim that to bind attributes to entities, the LM learns abstract binding IDs that it assigns to entities and attributes, so that entities and attributes bound together have the same binding ID (Fig. 1). In more detail, our informal description of the binding ID mechanism is:

1. For entity $E_k$, encode both the entity $E_k$ and the binding ID $k$³ in the activations $Z_{E_k}$.
2. For attribute $A_k$, encode both the attribute $A_k$ and the binding ID $k$ in the activations $Z_{A_k}$.
3. To answer a query for entity $E_k$, retrieve from $Z_{context}$ the attribute that shares the same binding ID as $E_k$.

Further, for activations $Z_{E_k}$ and $Z_{A_k}$, the binding ID and the entity/attribute are the only information they contain that affects the query behavior.

More formally, there are binding functions $\Gamma_E(e, k)$ and $\Gamma_A(a, k)$ that fully specify how $Z_E$ and $Z_A$ bind entities/attributes with binding IDs. Specifically, if $E_k = e \in E$, then we can replace $Z_{E_k}$ with $\Gamma_E(e, k)$ without changing the query behavior, and likewise for $Z_A$.

More generally, given $Z_{context}$ with entity representations $\Gamma_E(e_0, 0), \ldots, \Gamma_E(e_{n-1}, n-1)$ and attribute representations $\Gamma_A(a_0, \pi(0)), \ldots, \Gamma_A(a_{n-1}, \pi(n-1))$ for a permutation $\pi$, the LM should answer queries according to the context $c = \text{ctxt}(e_0 \leftrightarrow a_{\pi^{-1}(0)}, \ldots, e_{n-1} \leftrightarrow a_{\pi^{-1}(n-1)})$. This implies two properties in particular, which we will test in the following subsections:
• Factorizability: if we replace $Z_{A_k}$ with $Z_{A'_k}$, then the model will bind $E_k$ to $A'_k$ instead of $A_k$, i.e. it will believe $c/.\{A_k \to A'_k\}$. This is because $Z_{A'_k}$ encodes $\Gamma_A(A'_k, k)$ and $Z_{A_k}$ encodes $\Gamma_A(A_k, k)$. Substituting $Z_{A_k} \to Z_{A'_k}$ will overwrite $\Gamma_A(A_k, k)$ with $\Gamma_A(A'_k, k)$, causing the model to bind $E_k$ to $A'_k$.

• Position independence: if we e.g. swap $Z_{A_0}$ and $Z_{A_1}$, the model still binds $A_0 \leftrightarrow E_0$ and $A_1 \leftrightarrow E_1$, because it looks up attributes based on binding ID and not position in the context.

In Section 4, we construct fine-grained modifications to the activations $Z$ that modify the binding ID but not the attributes, allowing us to test the binding hypothesis more directly. In Section 5 we extend this further by showing that binding IDs can be transplanted from entirely different tasks.
FACTORIZABILITY OF ACTIVATIONS
The first property of $Z_{context}$ we test is factorizability. In our claimed mechanism, information is highly localized: $Z_{A_k}$ contains all relevant information about $A_k$, and likewise for $Z_{E_k}$. Therefore, we expect LMs that implement this mechanism to have factorizable activations: for any contexts $c, c'$, substituting $Z_{E_k} \to Z_{E_k}(c')$ into $Z_{context}(c)$ will cause the model to believe $c/.\{E_k \to E'_k\}$, and substituting $Z_{A_k} \to Z_{A_k}(c')$ will cause the model to believe $c/.\{A_k \to A'_k\}$.
To test this concretely, we considered the CAPITALS task from Section 2 with $n = 2$ entity-attribute pairs. We computed representations for two contexts $c = \text{ctxt}(e_0 \leftrightarrow a_0, e_1 \leftrightarrow a_1)$ and $c' = \text{ctxt}(e'_0 \leftrightarrow a'_0, e'_1 \leftrightarrow a'_1)$, and used causal mediation analysis (Section 2) to swap representations from the source context $c'$ into the target context $c$. Specifically, we fix $k \in \{0, 1\}$ and intervene on either just the entity ($Z_{E_k} \to Z'_{E_k}$), just the attribute, neither, or both. We then measure the mean log probs for all possible queries ($E_0, E_1, E'_0, E'_1$). For instance, swapping $A_k$ with $A'_k$ in $Z_{context}$ should lead $A'_k$ (and not $A_k$) to have high log-probability when $E_k$ is queried.

[Figure 3: Heatmaps of mean log probs over queries ($E_0, E_1, E'_0, E'_1$) and attributes ($A_0, A_1, A'_0, A'_1$) under the None, Attribute, Entity, and Both interventions; panel (a) swaps the entity/attribute for $(E_0, A_0)$ and panel (b) for $(E_1, A_1)$.]

[Figure 4: Mean log probs under the position interventions $\{X_0 \to x, X_1 \to X_1 - (x - X_0)\}$; the grey line is the control condition with no interventions, and the green line is the swapped condition where $Z_0$ and $Z_1$ have swapped positions.]
Results are shown in Fig. 3 and support the factorizability hypothesis. As an example, consider Fig. 3a. In the None setting (no intervention), we see high log probs for $A_0$ when queried for $E_0$, and for $A_1$ when queried for $E_1$. This indicates that the LM is able to solve this task. Next, consider the Attribute intervention setting ($A_0 \to A'_0$): querying for $E_0$ now gives high log probs for $A'_0$, and querying for $E_1$ gives $A_1$ as usual. Finally, in the Both setting (where both entity and attribute are swapped), querying $E'_0$ returns $A'_0$, while querying $E_0$ leads to approximately uniform predictions.

Experiment details. We use LLaMA-30b here and elsewhere unless otherwise stated. In practice, we found that activations for both the entity token and the subsequent token encode the entity binding information. Thus for all experiments in this paper, we expand the definition of $Z_{E_k}$ to include the token activations immediately after $E_k$.
POSITION INDEPENDENCE
We next turn to position independence, which is the other property we expect LMs implementing the binding ID mechanism to have. This says that permuting the order of the $Z_{E_k}$ and $Z_{A_k}$ should have no effect on the output, because the LM looks only at the binding IDs and not the positions of entity or attribute activations.
To apply causal interventions to the positions, we use the fact that transformers use positional embeddings to encode the (relative) position of each token in the input. We can thus intervene on these embeddings to "move" one of the $Z_k$'s to another location $k'$. Formally, we let $X_k$ denote the position embedding for $Z_k$, and denote the position intervention as $\{X_k \to k'\}$. In Appendix C we describe how to do this for rotary position embeddings (RoPE), which underlie all the models we study. For now, we will take this intervention as a primitive and discuss experimental results.
For our experiments, we again consider the CAPITALS task with $n = 2$. Let $X_{E_0}$ and $X_{E_1}$ denote the positions of the two entities. We apply interventions of the form $\{X_{E_0} \to x,\; X_{E_1} \to X_{E_1} - (x - X_{E_0})\}$, for $x \in \{X_{E_0}, X_{E_0} + 1, \ldots, X_{E_1}\}$.
This measures the effect of gradually moving the two entity positions past each other: when $x = X_{E_0}$, no intervention is performed (control condition), and when $x = X_{E_1}$ the entity positions are swapped (swapped condition). We repeat the same experiment with attribute activations, and measure the mean log probs in both cases.
Results are shown in Fig. 4. As predicted under position independence, position interventions result in little change in model behavior. Consider the swapped condition at the green line. Had the binding information been entirely encoded in position, we would expect a complete switch in beliefs compared to the control condition. In reality, we observe almost no change in mean log probs for entities, and a small change in mean log probs for attributes that seems to be part of an overall gradual trend.

We interpret this gradual trend as an artifact of position-dependent bias, and not as evidence against position independence. We view it as a bias because it affects all attributes regardless of how they are bound: attributes that are shifted to later positions always have higher log probs. We provide further discussion of this bias, as well as other experimental details, in Appendix C.
STRUCTURE OF BINDING ID
The earlier section shows evidence for the binding ID mechanism. Here, we investigate two hypotheses on the structure of binding IDs and binding functions. The first is that the binding functions $\Gamma_A$ and $\Gamma_E$ are additive, which lets us think of binding IDs as binding vectors. The second is contingent on the first, and asks if binding vectors have a geometric relationship to each other.
ADDITIVITY OF BINDING FUNCTIONS
Prior interpretability research has proposed that transformers represent features linearly (Elhage et al., 2021). Therefore a natural hypothesis is that both entity/attribute representations and abstract binding IDs are vectors in activation space, and that the binding function simply adds the vectors for the entity/attribute and the binding ID. We let the binding ID $k$ be represented by the pair of vectors $[b_E(k), b_A(k)]$, and the representations of entity $e$ and attribute $a$ be $f_E(e)$ and $f_A(a)$ respectively. Then, we hypothesize that the binding functions can be linearly decomposed as:

$$\Gamma_A(a, k) = f_A(a) + b_A(k), \qquad \Gamma_E(e, k) = f_E(e) + b_E(k). \quad (1)$$
Binding ID vectors seem intuitive and plausibly implementable by transformer circuits. To test this experimentally, we seek to extract b_A(k) and b_E(k) in order to perform vector arithmetic on them. We use (1) to extract the differences

∆_E(k) := b_E(k) − b_E(0),    ∆_A(k) := b_A(k) − b_A(0).

Rearranging (1), we obtain

∆_A(k) = Γ_A(a, k) − Γ_A(a, 0),    ∆_E(k) = Γ_E(e, k) − Γ_E(e, 0).    (2)
We estimate ∆_A(k) by sampling E_{c,c′}[Z_{A_k}(c) − Z_{A_0}(c′)], and likewise for ∆_E(k).

Mean interventions. With the difference vectors, we can modify binding IDs by performing mean interventions, and observe how model behavior changes. The attribute mean intervention switches the binding ID vectors in Z_{A_0} and Z_{A_1} with the interventions Z_{A_0} → Z_{A_0} + ∆_A(1), Z_{A_1} → Z_{A_1} − ∆_A(1). The entity mean intervention similarly switches the binding ID vectors in Z_{E_0} and Z_{E_1}. Additivity predicts that performing either mean intervention will reverse the model behavior: E_0 will be associated with A_1, and E_1 with A_0. Table 1 shows that both interventions behave as predicted.

As a further check, we perform both attribute and entity mean interventions simultaneously, which should cancel out and thus restore accuracy. Indeed, Table 1 shows that accuracy for Both is above 97%. Finally, to show that the specific directions obtained by the difference vectors matter, we sample random vectors with the same magnitude but random directions, and perform the same mean interventions with the random vectors. These random vectors have no effect on the model behavior.
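A minimal numpy sketch of this estimation and of the attribute mean intervention follows; the activation samples are synthetic stand-ins, and all variable names are ours.

```python
import numpy as np

# Estimate Delta_A(1) = b_A(1) - b_A(0) by averaging Z_{A_1}(c) - Z_{A_0}(c')
# over sampled context pairs, then apply the attribute mean intervention.
rng = np.random.default_rng(0)
n_samples, d = 500, 64
z_a0_samples = rng.normal(size=(n_samples, d))            # Z_{A_0}(c') samples
z_a1_samples = rng.normal(loc=0.3, size=(n_samples, d))   # Z_{A_1}(c) samples
delta_a_1 = (z_a1_samples - z_a0_samples).mean(axis=0)

def attribute_mean_intervention(z_a0, z_a1, delta):
    # Swap the binding ID vectors carried by Z_{A_0} and Z_{A_1}.
    return z_a0 + delta, z_a1 - delta

z_a0_new, z_a1_new = attribute_mean_intervention(z_a0_samples[0],
                                                 z_a1_samples[0], delta_a_1)
```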
THE GEOMETRY OF BINDING ID VECTORS
Section 4.1 shows that we can think of binding IDs as pairs of ID vectors, and that randomly chosen vectors do not function as binding IDs. We next investigate the geometric structure of valid binding vectors and find that linear interpolations and extrapolations of binding vectors are often also valid binding vectors. This suggests that binding vectors occupy a continuous binding subspace. We find evidence of a metric structure in this space, such that nearby binding vectors are hard for the model to distinguish, while far-away vectors can be reliably distinguished and thus used for the binding task.

To perform our investigation, we apply variants of the mean interventions in Section 4.1. As before, we start with an n = 2 context, thus obtaining representations Z_0 = (Z_{E_0}, Z_{A_0}) and Z_1 = (Z_{E_1}, Z_{A_1}). We first erase the binding information by subtracting (∆_E(1), ∆_A(1)) from Z_1, which reduces accuracy to chance. Next, we add vectors v_0 = (v_{E_0}, v_{A_0}) and v_1 = (v_{E_1}, v_{A_1}) to the representations Z; if doing so restores accuracy, then we view (v_{E_0}, v_{A_0}) and (v_{E_1}, v_{A_1}) as valid binding pairs.

To generate different choices of v, we take linear combinations across a two-dimensional space. The basis vectors for this space are (∆_E(1), ∆_A(1)) and (∆_E(2), ∆_A(2)), obtained by averaging across an n = 3 context. Fig. 5 shows the results for several different combinations, where the coordinates of v_0 are fixed (shown in green) while the coordinates of v_1 vary. When v_1 is close to v_0, the LM gets close to 50% accuracy, which indicates confusion. Far away from v_0, the network consistently achieves high accuracy, demonstrating that linear combinations of binding IDs (even with negative coefficients) are themselves valid binding IDs. See Appendix G for details. The geometry of the binding subspace hints at circuits (Elhage et al., 2021) in LMs that process binding vectors. For example, we speculate that certain attention heads might be responsible for comparing binding ID vectors, since the attention mechanism computes attention scores using a quadratic form, which could provide the metric over the binding subspace.
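The sketch below illustrates how candidate binding vectors for the grid in Fig. 5 can be generated as linear combinations of the basis difference vectors; the deltas are random stand-ins, and the model evaluation itself is omitted.

```python
import numpy as np

# Candidate binding vectors are linear combinations of the basis vectors
# (Delta_E(1), Delta_A(1)) and (Delta_E(2), Delta_A(2)) estimated on n = 3.
rng = np.random.default_rng(0)
d = 64
basis_1 = {"E": rng.normal(size=d), "A": rng.normal(size=d)}  # (Delta_E(1), Delta_A(1))
basis_2 = {"E": rng.normal(size=d), "A": rng.normal(size=d)}  # (Delta_E(2), Delta_A(2))

def candidate(eta, nu):
    """A point (v_E, v_A) in the two-dimensional binding subspace."""
    return {s: eta * basis_1[s] + nu * basis_2[s] for s in ("E", "A")}

v0 = candidate(0.0, 0.0)  # fixed binding vector pair (the green circle)
grid = [(eta, nu) for eta in np.linspace(-1.0, 2.0, 7)
                  for nu in np.linspace(-1.0, 2.0, 7)]
v1s = [candidate(eta, nu) for eta, nu in grid]  # varied binding vector pairs
```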
GENERALITY AND LIMITATIONS OF BINDING ID
The earlier sections investigate binding IDs for one particular task, the CAPITALS task. In this section, we evaluate their generality. We first show that binding vectors are used across a variety of tasks and models. We then show evidence that the binding vectors are task-agnostic: vectors from one task transfer across many different tasks. However, we show that our mechanism is not fully universal, by exhibiting a question-answering task that uses an alternative binding mechanism.

Generality of binding ID vectors. We evaluate the generality of binding vectors across models and tasks. For a (model, task) pair, we compute the median-calibrated accuracy on the n = 3 context under three conditions: (1) the control condition, in which no interventions are performed, and the (2) entity and (3) attribute conditions, in which entity or attribute mean interventions (Section 4.1) are performed. We use the mean interventions to permute binding pairs by a cyclic shift and measure accuracy according to this shift (see Appendix F). As shown in Figure 6, the interventions induce the expected behavior on most tasks; moreover, their effectiveness increases with model scale, suggesting that larger models have more robust structured representations.
Transfer across tasks. We next show that binding vectors often transfer across tasks. Without access to the binding vectors [b_E(k), b_A(k)], we instead test whether the difference vectors [∆^src_E(k), ∆^src_A(k)] from a source task, when applied to a target task, result in valid binding IDs. To do so, we follow a procedure similar to Section 4.2: first, we erase binding information by subtracting [∆^tar_E(k), ∆^tar_A(k)] for the target task from each target-task representation [Z_{E_k}, Z_{A_k}], which results in near-chance accuracy. Then, we add back in [∆^src_E(k), ∆^src_A(k)] computed from the source task, with the hope of restoring performance.
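In array form, this erase-and-restore procedure looks like the following sketch, with all difference vectors as synthetic placeholders:

```python
import numpy as np

# Erase the target task's binding information, then restore it with difference
# vectors estimated on a source task; accuracy should recover if they transfer.
rng = np.random.default_rng(0)
d = 64
z_e_k, z_a_k = rng.normal(size=d), rng.normal(size=d)           # target-task reps
delta_tar = {"E": rng.normal(size=d), "A": rng.normal(size=d)}  # target deltas
delta_src = {"E": rng.normal(size=d), "A": rng.normal(size=d)}  # source deltas

z_e_erased, z_a_erased = z_e_k - delta_tar["E"], z_a_k - delta_tar["A"]
z_e_restored = z_e_erased + delta_src["E"]   # add back source-task vectors
z_a_restored = z_a_erased + delta_src["A"]
```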
Table 2 shows results for a variety of source tasks when using CAPITALS as the target task. Accuracy is consistently high, even when the source task has limited surface similarity to the target task. For example, the SHAPES task contains descriptions of geometrical shapes and their colors, and PARALLEL puts all entities before any attributes instead of interleaving them as in CAPITALS. We include two baselines for comparison: replacing ∆^src(k) with the zero vector ("Zeros"), or picking a randomly oriented difference vector as in Table 1 ("Random"). Both lead to chance accuracy. See Appendix D for more details on the tasks. The fact that binding vectors transfer across tasks, together with the results from Section 4, suggests that there could be a task-agnostic subspace in the model's activations reserved for binding vectors.
Direct binding in MCQ. While binding IDs are used for many tasks, they are not universal. We briefly identify an alternate binding mechanism, the direct binding mechanism, that is used for a multiple-choice question-answering task (MCQ). In MCQ, each label (A or B) has to be bound to its associated option text. In this task, instead of binding variables to an abstract binding ID, the model directly binds the label to the option (Fig. 7). We provide the full details of this task and further explanations in Appendix E.
RELATED WORK
Causal mediation analysis. In recent years, causal methods have gained popularity in post hoc interpretability (Meng et al., 2022; Geiger et al., 2021). Instead of relying on correlations, which could lead to spurious features (Hewitt & Liang, 2019), causal mediation analysis (Vig et al., 2020a) performs causal interventions on the internal states of LMs to understand their causal role in LM behavior. Our work shares the causal perspective adopted by many in this field.

Knowledge recall. A line of work studies recalling factual associations that LMs learn from pretraining (Geva et al., 2020; Dai et al., 2021; Meng et al., 2022; Geva et al., 2023; Hernandez et al., 2023b). This is spiritually related to binding, as entities must be associated with facts about them. However, that work studies factual relations learned from pretraining and how they are recalled from model weights. In contrast, we study representations of relations learned from context, and how they are recalled from model activations.

More recently, Hernandez et al. (2023a) found a method to construct bound representations by directly binding attribute representations to entity representations. In contrast, our work investigates bound representations constructed by the LM itself, and identifies the binding ID mechanism (and not direct binding) as the mechanism that LM representations predominantly use. An avenue for future work is to study how the bound representations constructed by Hernandez et al. (2023a) relate to the direct binding mechanism we identified in the MCQ task.

Symbolic representations in connectionist systems. Many works have studied how neural networks represent symbolic concepts in activation space (Mikolov et al., 2013; Tenney et al., 2019; Belinkov & Glass, 2019; Rogers et al., 2021; Patel & Pavlick, 2021). To gain deeper insight into how these representations are used for reasoning, recent works have studied representations used for specialized reasoning tasks (Nanda et al., 2023; Li et al., 2022; 2021). Our work shares the motivation of uncovering how neural networks implement structured representations that enable reasoning.

Mechanistic interpretability. Mechanistic interpretability aims to uncover circuits (Elhage et al., 2021; Wang et al., 2022; Wu et al., 2023), often composed of attention heads, that are embedded in language models. In our work, we study language model internals at a more coarse-grained level. We identify structures in representations that have causal influences on model behavior, but how circuits construct or utilize these representations is left to future work.
CONCLUSION
In this paper we identify and study the binding problem, a common and fundamental reasoning subproblem. We find that pretrained LMs can solve the binding task by binding entities and attributes to abstract binding IDs. Then, we identify that the binding IDs are vectors from a binding subspace with a notion of distance. Lastly, we find that the binding IDs are used broadly for a variety of binding tasks and are present in all sufficiently large models that we studied.

Taking a broader view, we see our work as part of the endeavor to interpret LM reasoning by decomposing it into primitive skills. In this work we identified the binding skill, which is used in several settings and has a simple and robust representation structure. An interesting direction for future work would be to identify other primitive skills that support general purpose reasoning and have similarly interpretable mechanisms.

Our work also suggests that ever-larger LMs may still have interpretable representations. A common intuition is that larger models are more complex, and hence more challenging to interpret. Our work provides a counterexample: as LMs become larger, their representations can become more structured and interpretable, since only the larger models exhibited binding IDs (Fig. 6).

Speculating further, the fact that large enough models in two unrelated LM families learn the same structured representation strategy points to a convergence in representations with scale. This raises a philosophical question: could there be an ultimate representation that these LMs are converging towards? Perhaps the properties of natural language corpora and LM inductive biases lead to the inevitable development of certain core representation strategies that are invariant to changes in model hyperparameters or exact dataset composition. This would encouragingly imply that interpretability results can transfer across models: studying the core representations of any sufficiently large model would yield insights into other similarly large models because of their convergent core structure.
A EVALUATION DETAILS
In all of our evaluations, we sample N = 100 instances of contexts from the binding task, obtaining {c_i}_{i=1}^N. For succinctness, we write E^{(i)}_k := E_k(c_i) and A^{(i)}_k := A_k(c_i). For the i-th context instance, we query E^{(i)}_0 and E^{(i)}_1, which return log probabilities Φ^{(i)}_{E_0}(t) and Φ^{(i)}_{E_1}(t) over tokens t in the vocabulary. However, we consider only the log probabilities for the relevant attributes, Φ^{(i)}_{E_k}(A^{(i)}_0) and Φ^{(i)}_{E_k}(A^{(i)}_1). We then compute the summary statistics (described below) over the entire population of samples, so that we get two scalars, σ_{E_0} and σ_{E_1}, describing the performance under each query entity.

• The mean log prob is given by σ_{E_k} = (1/N) ∑_{i=1}^N Φ^{(i)}_{E_k}(A^{(i)}_k).
• The top-1 accuracy is σ_{E_k} = (1/N) ∑_{i=1}^N 1[k = argmax_l Φ^{(i)}_{E_k}(A^{(i)}_l)].
• We adopt the median calibrated accuracy from Zhao et al. (2021). First, we obtain a baseline by computing medians for every attribute: m(A_l) := median_{i,k} {Φ^{(i)}_{E_k}(A^{(i)}_l)}. Then, we compute calibrated log probs Φ̃^{(i)}_{E_k}(A_l) := Φ^{(i)}_{E_k}(A_l) − m(A_l). The median calibrated accuracy is then σ_{E_k} = (1/N) ∑_{i=1}^N 1[k = argmax_l Φ̃^{(i)}_{E_k}(A^{(i)}_l)].

Zhao et al. (2021) discusses motivations for the median calibrated accuracy. In our case, the position dependent bias provides additional reasons, which we discuss in Appendix C.
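A compact sketch of these three statistics, assuming log_probs[i, k, l] holds Φ^{(i)}_{E_k}(A^{(i)}_l) (the array layout and names are our assumption):

```python
import numpy as np

def summary_stats(log_probs):
    """log_probs has shape (N, n_queries, n_attributes)."""
    n, n_queries, _ = log_probs.shape
    mean_log_prob = np.array([log_probs[:, k, k].mean() for k in range(n_queries)])
    top1 = np.array([(log_probs[:, k].argmax(axis=1) == k).mean()
                     for k in range(n_queries)])
    # Median calibration: subtract each attribute's median over contexts and queries.
    medians = np.median(log_probs.reshape(-1, log_probs.shape[-1]), axis=0)
    calibrated = log_probs - medians
    cal_acc = np.array([(calibrated[:, k].argmax(axis=1) == k).mean()
                        for k in range(n_queries)])
    return mean_log_prob, top1, cal_acc

stats = summary_stats(np.random.default_rng(0).normal(size=(100, 2, 2)))
```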
B NECESSITY OF BINDING ID MECHANISM
In this section, we provide one definition of the binding ID mechanism, and argue informally that under this definition, factorizability and position independence necessarily imply the binding ID mechanism.
First, let us define the binding ID mechanism. Fix n = 2 for simplicity. There are two claims:

1. Representation. There exists a binding function Γ_E such that for any context c, Z_{E_k} is represented by Γ_E(E_k, k), in the sense that for any e ∈ E, {Z_{E_k} → Γ_E(e, k)} leads to the belief c /. {E_k → e}. Likewise, there exists a binding function Γ_A such that for any context c, Z_{A_k} is represented by Γ_A(A_k, k), in the sense that for any a ∈ A, {Z_{A_k} → Γ_A(a, k)} leads to the belief c /. {A_k → a}. These substitutions should also compose appropriately.

2. Query. Further, the binding functions Γ_A and Γ_E satisfy the following property: choose any two permutations π_E(k) and π_A(k) over {0, 1}, and consider a Z_context containing [Γ_E(e_0, π_E(0)), Γ_A(a_0, π_A(0)), Γ_E(e_1, π_E(1)), Γ_A(a_1, π_A(1))]. The query system will then believe e_0 ↔ a_0, e_1 ↔ a_1 if π_E = π_A, and e_0 ↔ a_1, e_1 ↔ a_0 otherwise.

The first claim follows from factorizability. From factorizability, we can construct the candidate binding functions simply by picking an arbitrary context consistent with the parameters. For any e ∈ E and any binding ID k ∈ [0, n − 1], pick any context c such that E_k(c) = e. Then, let Γ_E(e, k) = Z_{E_k}(c). Γ_A can be constructed similarly. Our factorizability results show that the binding functions constructed this way satisfy the Representation claim.

The second claim follows from Representation and position independence. Pick an arbitrary context c to generate Z_context. Then, by factorizability we can make the substitutions Z_{E_k} → Γ_E(e_{π_E^{-1}(k)}, k) and Z_{A_k} → Γ_A(a_{π_A^{-1}(k)}, k), to obtain [Γ_E(e_{π_E^{-1}(0)}, 0), Γ_A(a_{π_A^{-1}(0)}, 0), Γ_E(e_{π_E^{-1}(1)}, 1), Γ_A(a_{π_A^{-1}(1)}, 1)]. Because of factorizability, the model believes e_0 ↔ a_0, e_1 ↔ a_1 if π_E = π_A, and e_0 ↔ a_1, e_1 ↔ a_0 otherwise. Now, position independence lets us freely permute {Z_{E_0}, Z_{E_1}} and {Z_{A_0}, Z_{A_1}} without changing beliefs, which achieves the desired [Γ_E(e_0, π_E(0)), Γ_A(a_0, π_A(0)), Γ_E(e_1, π_E(1)), Γ_A(a_1, π_A(1))].
C DETAILS FOR POSITION INDEPENDENCE
RoPE Intervention. In Fig. 2a, the context activations Z_context are drawn in a line, suggesting a linear form: Z_context = [..., Z_{E_0}, ..., Z_{A_0}, ..., Z_{E_1}, ..., Z_{A_1}, ...]. We can equivalently think of Z_context as a set of pairs: Z_context = {(p, Z_p) | p is an index for a context token}. LMs that use Rotary Position Embedding (RoPE) (Su et al., 2021), such as those in the LLaMA and Pythia families, have architectures that allow arbitrary interventions on the apparent position of an activation, (p, Z_p) → (p′, Z_p), even if this results in overall context activations that cannot be written down as a list of activations. This is because position information is applied at every layer, rather than injected into the residual stream as with absolute position embeddings. Specifically, equation 16 in Su et al. (2021) provides the definition of RoPE (recreated verbatim as follows):

q_m^T k_n = (R^d_{Θ,m} W_q x_m)^T (R^d_{Θ,n} W_k x_n)    (3)

Then, making the intervention R^d_{Θ,n} → R^d_{Θ,n*} changes the apparent position of the activations at position n to the position n*.
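A minimal sketch of this intervention, assuming the standard interleaved RoPE layout from Su et al. (2021); hooking this into an actual model is omitted, and names are ours.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Apply the block-diagonal rotation R_{Theta,pos} to a vector of even dim."""
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)
    angle = pos * theta
    cos, sin = np.cos(angle), np.sin(angle)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

def move_position(rotated_key, n, n_star):
    # Rotations compose additively per 2D block, so R_{Theta,n*} k equals
    # R_{Theta,n*-n} applied to the already-rotated key R_{Theta,n} k.
    return rope_rotate(rotated_key, n_star - n)

k = np.random.default_rng(0).normal(size=64)
k_at_9 = move_position(rope_rotate(k, 5), 5, 9)  # now appears to sit at position 9
assert np.allclose(k_at_9, rope_rotate(k, 9))
```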
Is the position dependent bias just a bias? For the purposes of determining whether position encodes binding, the fact that the LM does not substantially change its beliefs when we switch the positions of the attribute activations (or the entity activations) suggests that position can play only a limited role. However, calling the position dependency of attributes a "bias" implies that it is an artifact that we should correct for. To what extent is this true?

The case for regarding it as a bias is two-fold. First, as discussed by Su et al. (2021), RoPE exhibits long-term position decay, which systematically lowers the attention paid to activations that are further away from the query (i.e., earlier in the context). Plausibly, at some point when computing the query mechanism, the LM has to decide whether to pay attention to the first or the second attribute, and the presence of the long-term position decay can bias this decision, leading to the position dependent bias that we see in the final prediction.

The second reason is that there are systematic and unbiased ways of calibrating the LM to recover the correct answer, in spite of the position dependent bias. We discuss two strategies. Because the position dependent bias modifies the log probs for A_0 (or A_1) regardless of which entity is being queried, we can estimate this effect by averaging the log probs for A_0 (or A_1) over both queries E_0 and E_1. Then, when making a prediction, we can subtract this average from the log probs for A_0 (or A_1); this corresponds to the median calibrated accuracy metric discussed earlier. The second procedure is an intervention that sets all attribute activations to the same position, which limits the amount of bias that position dependency can introduce. Neither procedure requires foreknowledge of the ground truth predicates, and hence neither leaks knowledge into the prediction process: if the calibrated LM answers queries correctly, the information must have come from the context activations and not from the calibration process.

Nonetheless, there are features of the position dependent bias that could be interesting to study. For example, we might hope to predict the magnitudes of the position dependent bias based on RoPE's parameters. However, such an investigation would most likely involve a deeper mechanistic understanding of the query system, which we leave as future work.
D BINDING TASK DETAILS

D.1 CAPITALS

Construct a list of one-token names and a list of country-capital pairs that are also each one token wide. Then, apply the following template:

Answer the question based on the context below. Keep the answer short.
Context: {E_0} lives in the capital city of {A_0}. {E_1} lives in the capital city of {A_1}.
Question: Which city does {qn_subject} live in?
Answer: {qn_subject} lives in the city of

The LM is expected to answer with the capital of the country that is bound to the queried entity. Note that the LM is expected to simultaneously solve the factual recall task of looking up the capital city of a country.
D.2 PARALLEL
The PARALLEL task uses the same country capital setup, but with the prompt template:
Answer the question based on the context below. Keep the answer short.
Context: {E_0} and {E_1} live in the capital cities of {A_0} and {A_1} respectively.
Question: Which city does {qn_subject} live in?
Answer: {qn_subject} lives in the city of

This prompt format breaks the confounder in the CAPITALS task that entities always appear in the same sentence as attributes, suggesting that binding ID is not merely a syntactic property.
D.3 FRUITS
The FRUITS task uses the same set of names, but for attributes it uses a set of common fruits and foods that are one token wide. The prompt format is:

Answer the question based on the context below. Keep the answer short.
Context: {E_0} likes eating the {A_0}. {E_1} likes eating the {A_1} respectively.
Question: What food does {qn_subject} like?
Answer: {qn_subject} likes the
D.4 SHAPES
The SHAPES task has entities which are one-token-wide colors, and attributes which are one-token-wide shapes. The prompt looks like:

Answer the question based on the context below. Keep the answer short.
Context: The {A_0} is {E_0}. The {A_1} is {E_1}.
Question: Which shape is colored {qn_subject}?
Answer: The {qn_subject} shape is
This task inverts the assumption that entities have to be nouns, and attributes are adjectives.
D.5 BIOS
This task is adapted from the Bias in Bios dataset (De-Arteaga et al., 2019), with a prompt format following Hernandez et al. (2023a). The entities are the set of one-token names, and the attributes are a set of biography descriptions obtained using the procedure from Hernandez et al. (2023a). The LM is expected to infer the occupation from this description. This time, the attributes are typically one sentence long, and are no longer one token wide. We thus do not expect the mean interventions for attributes to work, although we may still expect entity interventions to work. Just inferring the correct occupation is also a much more challenging task than the other synthetic tasks.
The prompt format is:
Answer the question based on the context below. Keep the answer short.
Context: About {E_0}: {A_0}
About {E_1}: {A_1}
Question: What occupation does {qn_subject} have?
Answer: {qn_subject} has the occupation of
E MCQ TASK
Multiple choice questions (MCQs) can be formulated as a binding task if we put the options before the question. This is to force the LM to represent the binding between label and option text before it sees the question. We study the SST-2 task (Socher et al., 2013), which is a binary sentiment classification task on movie reviews (either positive or negative). The attributes are single-letter labels from A to E, and the entities are "Positive" and "Negative".
The prompt is as follows:
Classify the review using the following options:
{A_0}: {E_0}
{A_1}: {E_1}
Review: {question}
Answer:

Then, when prompted with a question with a certain sentiment, the LM is expected to retrieve its corresponding label.
E.1 EXPERIMENTS
It turns out that the reversed MCQ format is too far out of distribution for LLaMA-30b to solve. However, we find that the instruction-finetuned tulu-13b model (Wang et al., 2023) is able to solve this task.

We find that the activations for this task are not factorizable in the same way. Consider the target context: we denote the labels as L_0 and L_1, so that L_0 is A in the first context and B in the second context, and we denote the option texts as O_0 and O_1.

We perform an experiment where we intervene by copying over a suffix of every line from the source context into the target context, and plot the accuracy based on whether the intervention successfully changes the belief (Fig. 8). The rightmost point of the plot is the control condition where no interventions are made; the accuracy is near zero because the model still believes the original context. At the leftmost point, we intervene on the entire statement, which is a substitution of the entire Z_context, so we observe near-perfect accuracy.

Interestingly, copying over the activations for the tokens corresponding to "ative" and the whitespace following it suffices to almost completely change the belief, despite the surface token form being identical at those two tokens ("ative ⟨WS⟩" for both source and target contexts). This suggests that those activations capture binding information that contains both the label and the option text. This leads to the conclusion that binding information is bound directly at those activations, instead of indirectly via binding IDs.
In contrast, binding ID would have predicted that substituting these two tokens would not have made a difference, because the option activations Z O should contain only information about the option text and the binding ID, which is identical for our choice of source and target contexts.
Figure 2: a) Causal diagram for autoregressive LMs. From input context ctxt(e_0 ↔ a_0, e_1 ↔ a_1), the LM constructs internal representations Z_context. We will mainly study the components of Z_context boxed in blue. b) A secondary run of the LM on context ctxt(e_2 ↔ a_2, e_3 ↔ a_3) to produce Z′_context. c) An example intervention where Z_context is modified by replacing Z_{A_0} → Z′_{A_0} from Z′_context.

Fig. 2a shows that Z_context contains all the information about the context that the LM uses. We thus study the structure of Z_context using causal mediation analysis, a widely used tool for understanding neural networks (Vig et al., 2020a; Geiger et al., 2021; Meng et al., 2022). Causal mediation analysis involves substituting one set of activations in a network for another, and we adopt the /. notation (from Mathematica) to denote this. For example, for activations Z* ∈ R^{n_layers × d_model} and a token position p in the context, Z_context /. {Z_p → Z*} = [Z_0, ..., Z_{p−1}, Z*, Z_{p+1}, ...]. Similarly, for a context c = ctxt(e_0 ↔ a_0, ..., e_{n−1} ↔ a_{n−1}), we have c /. {E_k → e*} = ctxt(e_0 ↔ a_0, ..., e* ↔ a_k, ..., e_{n−1} ↔ a_{n−1}).
Figure 3: Factorizability results. Each row corresponds to querying for a particular entity. Plotted are the mean log probs for all four attributes. Highlighted squares are those predicted by factorizability.

Figure 4: Top: mean log probs for entity interventions. Bottom: mean log probs for attribute interventions. For brevity, let Z_k refer to Z_{E_k} or Z_{A_k}. The grey and green vertical lines indicate the original positions of Z_0 and Z_1 respectively. The x-axis marks x, Z_0's new position under the position interventions {X_0 → x, X_1 → X_1 − (x − X_0)}. The grey line is the control condition with no interventions, and the green line is the swapped condition where Z_0 and Z_1 have swapped positions.

Figure 5: The plots show the mean median-calibrated accuracy when one pair of binding ID vectors, v_0, is fixed at the green circle and the other, v_1, is varied across the grid. The binding IDs b(0), b(1), and b(2) are shown as the origin of the arrows, the end of the horizontal arrow, and the end of the diagonal arrow respectively. We use LLaMA-13b for computational reasons.

Figure 7: Direct binding in the MCQ task. O_k and L_k denote options and labels respectively. Under direct binding, Z_{O_0} and Z_{O_1} are represented by a binding function Λ_O that directly binds option and label together, whereas Z_{L_0} and Z_{L_1} are causally irrelevant.
Figure 8: Substitutions for MCQ option suffix.
Table 1: Left: mean calibrated accuracies for mean interventions on four test conditions. Columns are the test conditions, and rows are the queries. Right: mean interventions with random vectors.
Table 2: The mean median-calibrated accuracy and mean log prob for mean interventions on n = 3 CAPITALS using binding ID estimates from other tasks. Random chance has 0.33 mean accuracy.

Task          | CAPITALS | PARALLEL | SHAPES | FRUITS | BIOS  | Zeros | Random
Mean accuracy | 0.88     | 0.87     | 0.71   | 0.80   | 0.47  | 0.30  | 0.31
Mean log prob | -1.01    | -1.07    | -1.18  | -1.21  | -1.64 | -1.86 | -2.15

Figure 6: Left: models in the Pythia and LLaMA families on CAPITALS (LLaMA-65b not present for computational reasons). Right: LLaMA-30b on the binding tasks. Unlike the others, the BIOS task has attributes that are several tokens long.
These are the pre-transformer layer activations, sometimes referred to as the residual stream.
We do not claim or assume that LMs actually have beliefs in the sense that humans do. This is a purely notational choice to reduce verbosity.
In Fig. 1 we used shapes to denote abstract binding IDs. In the text, we will identify the abstract binding IDs with the integers {0, 1, ..., n − 1}, so that the k-th entity/attribute has the abstract binding ID k.
ACKNOWLEDGMENTS

We thank Danny Halawi, Fred Zhang, Erik Jenner, Cassidy Laidlaw, Shawn Im, Arthur Conmy, Shivam Singhal, and Olivia Watkins for their helpful feedback. JF was supported by the Long-Term Future Fund. JS was supported by the National Science Foundation under Grants No. 2031899 and 1804794. In addition, we thank Open Philanthropy for its support of both JS and the Center for Human-Compatible AI.

F GENERALITY DETAILS

Suppose π is a cyclic shift, say π(0) = 1, π(1) = 2, π(2) = 0. Then, we can perform mean interventions based on the cyclic shift on entities by moving each entity's binding ID from k to π(k), i.e., Z_{E_k} → Z_{E_k} + ∆_E(π(k)) − ∆_E(k), with ∆_E(0) := 0. We then expect the belief to follow the same shift, so that the LM believes E_k ↔ A_{π(k)}. Similarly, we can perform the analogous mean interventions on attributes, Z_{A_k} → Z_{A_k} + ∆_A(π(k)) − ∆_A(k). However, this time we expect the belief to follow the inverse shift, i.e., the LM believes E_k ↔ A_{π^{-1}(k)}. As usual, we sample ∆ using 500 samples. We perform the intervention using both cyclic shifts over 3 elements (i.e., π and π^{-1}), and report the mean results over these two shifts. A minimal sketch of the entity-side intervention appears after Appendix G.

G GEOMETRY DETAILS

An experimental challenge we face is that we do not have access to the binding ID vectors b_A, b_E themselves, only differences between them, ∆_A, ∆_E. For clarity of exposition, we first describe the procedure we would perform if we had access to the binding ID vectors, before describing the actual experiment. In the ideal case, we would obtain two pairs of binding ID vectors, and then construct two linear combinations of these binding ID vectors as candidate binding IDs.

The second problem is that we cannot arbitrarily set the binding ID vector of an activation to another binding ID vector. Instead, we can only add vectors to activations. We thus perform two sets of interventions. We first perform the mean interventions on the second binding ID pair to turn its binding ID into that of the first pair. At this point, the LM sees two entities with the same binding ID and two attributes with the same binding ID, and is confused. Then, we can add candidate binding vector ID offsets to these activations. More precisely, let η, ν be coefficients for the linear combinations of the basis vectors (∆_E(1), ∆_A(1)) and (∆_E(2), ∆_A(2)), which define the candidate binding vector ID offsets. We add these offsets to the respective two pairs of binding IDs, and evaluate whether the model has regained its beliefs. Concretely, the intervention we apply is parameterized by (η_0, ν_0, η_1, ν_1).

We are now interested in the question: if we have coefficients (η_0, ν_0) and (η_1, ν_1), are the binding vectors constructed from those coefficients valid binding IDs? In our experiments (Fig. 5), we fix the values of η_0 and ν_0 at varying positions (green circles), and vary η_1 and ν_1. We plot the mean median-calibrated accuracy. We find that near the green circle, the model is completely confused, responding with near-chance accuracy. This verifies that the erasure step works as intended. In addition, we find that there appears to be a binding metric subspace, in that as long as candidate binding IDs are sufficiently far apart, the LM recovers its ability to distinguish between the two, even when candidate binding IDs are outside of the convex hull of the three binding IDs used to generate the basis vectors.
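As referenced in Appendix F, below is a minimal sketch of the entity-side cyclic-shift intervention; the difference vectors are synthetic stand-ins for the estimated ∆_E(k), with ∆_E(0) = 0, and the formula follows from the additive decomposition in Eq. (1).

```python
import numpy as np

# Move each entity's binding ID from k to pi(k) by adding
# Delta_E(pi(k)) - Delta_E(k) to Z_{E_k}.
rng = np.random.default_rng(0)
n, d = 3, 64
deltas_e = np.vstack([np.zeros(d), rng.normal(size=(n - 1, d))])  # Delta_E(0..2)
z_e = rng.normal(size=(n, d))                                     # Z_{E_0..E_2}

pi = [1, 2, 0]  # cyclic shift; the expected belief becomes E_k <-> A_{pi(k)}
z_e_shifted = np.array([z_e[k] + deltas_e[pi[k]] - deltas_e[k] for k in range(n)])
```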
Analysis methods in neural language processing: A survey. Yonatan Belinkov, James Glass. Transactions of the Association for Computational Linguistics, 7, 2019. doi: 10.1162/tacl_a_00254.

Pythia: A suite for analyzing large language models across training and scaling. Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. International Conference on Machine Learning, PMLR, 2023.

Knowledge neurons in pretrained transformers. Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Furu Wei. arXiv, abs/2104.08696, 2021.
Bias in bios: A case study of semantic representation bias in a high-stakes setting. Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman, Kalai , proceedings of the Conference on Fairness, Accountability, and Transparency. the Conference on Fairness, Accountability, and Transparency2019
A mathematical framework for transformer circuits. Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova Dassarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam Mccandlish, Chris Olah, 2021Transformer Circuits Thread
Causal abstractions of neural networks. Atticus Geiger, Hanson Lu, Thomas Icard, Christopher Potts. Advances in Neural Information Processing Systems, 34, 2021.

Transformer feed-forward layers are key-value memories. Mor Geva, Roei Schuster, Jonathan Berant, Omer Levy. arXiv, abs/2012.14913, 2020.
Dissecting recall of factual associations in auto-regressive language models. Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson, arXiv:2304.147672023arXiv preprint
Unsolved problems in ml safety. Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt, arXiv:2109.139162021arXiv preprint
Natural language descriptions of deep visual features. Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas, International Conference on Learning Representations. 2021
Measuring and manipulating knowledge representations in language models. Evan Hernandez, Belinda Z Li, Jacob Andreas, arXiv:2304.007402023aarXiv preprint
Linearity of relation decoding in transformer language models. Evan Hernandez, Arnab Sen Sharma, Tal Haklay, Kevin Meng, Martin Wattenberg, Jacob Andreas, Yonatan Belinkov, David Bau. arXiv preprint arXiv:2308.09124, 2023b.
Designing and interpreting probes with control tasks. John Hewitt, Percy Liang, arXiv:1909.033682019arXiv preprint
Belinda Z Li, Maxwell Nye, Jacob Andreas, arXiv:2106.00737Implicit representations of meaning in neural language models. 2021arXiv preprint
Kenneth Li, Aspen K Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg, arXiv:2210.13382Emergent world representations: Exploring a sequence model trained on a synthetic task. 2022arXiv preprint
Locating and editing factual associations in gpt. Kevin Meng, David Bau, Alex Andonian, Yonatan Belinkov, Advances in Neural Information Processing Systems. 202235
Linguistic regularities in continuous space word representations. Tomáš Mikolov, Wen-Tau Yih, Geoffrey Zweig, Proceedings of the 2013 conference of the north american chapter of the association for computational linguistics: Human language technologies. the 2013 conference of the north american chapter of the association for computational linguistics: Human language technologies2013
Compositional explanations of neurons. Jesse Mu, Jacob Andreas, Advances in Neural Information Processing Systems. 202033
Progress measures for grokking via mechanistic interpretability. Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, Jacob Steinhardt, The Eleventh International Conference on Learning Representations. 2023
Mapping language models to grounded conceptual spaces. Roma Patel, Ellie Pavlick, International Conference on Learning Representations. 2021
Toward transparent ai: A survey on interpreting the inner structures of deep neural networks. Tilman Räuker, Anson Ho, Stephen Casper, Dylan Hadfield-Menell, 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE2023
A primer in bertology: What we know about how bert works. Anna Rogers, Olga Kovaleva, Anna Rumshisky, Transactions of the Association for Computational Linguistics. 82021
Recursive deep models for semantic compositionality over a sentiment treebank. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, Christopher Potts, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. the 2013 Conference on Empirical Methods in Natural Language ProcessingSeattle, Washington, USAOctober 2013Association for Computational Linguistics
Roformer: Enhanced transformer with rotary position embedding. Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu. arXiv preprint arXiv:2104.09864, 2021.
Bert rediscovers the classical nlp pipeline. Ian Tenney, Dipanjan Das, Ellie Pavlick, Association for Computational Linguistics. 2019
Llama: Open and efficient foundation language models. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. arXiv preprint arXiv:2302.13971, 2023.

The binding problem. Anne Treisman. Current Opinion in Neurobiology, 6(2), 1996.
Investigating gender bias in language models using causal mediation analysis. Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, Stuart Shieber, Advances in neural information processing systems. 2020a33
Investigating gender bias in language models using causal mediation analysis. Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, Stuart Shieber, Advances in Neural Information Processing Systems. 2020b33
Interpretability in the wild: A circuit for indirect object identification in gpt-2 small. Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, Jacob Steinhardt, arXiv:2211.005932022arXiv preprint
How far can camels go? exploring the state of instruction tuning on open resources. Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Raghavi Khyathi, David Chandu, Kelsey Wadden, Noah A Macmillan, Iz Smith, Beltagy, arXiv:2306.047512023arXiv preprint
Zhengxuan Wu, Atticus Geiger, Christopher Potts, Noah D Goodman, arXiv:2305.08809Interpretability at scale: Identifying causal mechanisms in alpaca. 2023arXiv preprint
Calibrate before use: Improving few-shot performance of language models. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh, International Conference on Machine Learning. PMLR2021 |
52,893,515 | Small nonlinearities in activation functions create bad local minima in neural networks | We investigate the loss surface of neural networks. We prove that even for one-hidden-layer networks with "slightest" nonlinearity, the empirical risks have spurious local minima in most cases. Our results thus indicate that in general "no spurious local minima" is a property limited to deep linear networks, and insights obtained from linear networks are not robust. Specifically, for ReLU(-like) networks we constructively prove that for almost all (in contrast to previous results) practical datasets there exist infinitely many local minima. We also present a counterexample for more general activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad local minimum. Our results make the least restrictive assumptions relative to existing results on local optimality in neural networks. We complete our discussion by presenting a comprehensive characterization of global optimality for deep linear networks, which unifies other results on this topic. | [
16138044,
4429876
] | Small nonlinearities in activation functions create bad local minima in neural networks
Chulhee Yun [email protected]
Massachusetts Institute of Technology
Cambridge, MA 02139
Suvrit Sra [email protected]
Massachusetts Institute of Technology
Cambridge, MA 02139
Ali Jadbabaie [email protected]
Massachusetts Institute of Technology
Cambridge, MA 02139
Small nonlinearities in activation functions create bad local minima in neural networks
We investigate the loss surface of neural networks. We prove that even for one-hidden-layer networks with "slightest" nonlinearity, the empirical risks have spurious local minima in most cases. Our results thus indicate that in general "no spurious local minima" is a property limited to deep linear networks, and insights obtained from linear networks are not robust. Specifically, for ReLU(-like) networks we constructively prove that for almost all (in contrast to previous results) practical datasets there exist infinitely many local minima. We also present a counterexample for more general activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad local minimum. Our results make the least restrictive assumptions relative to existing results on local optimality in neural networks. We complete our discussion by presenting a comprehensive characterization of global optimality for deep linear networks, which unifies other results on this topic.
Introduction
Neural network training reduces to solving nonconvex empirical risk minimization problems, a task that is in general intractable. But success stories of deep learning suggest that local minima of the empirical risk could be close to global minima. Choromanska et al. [4] use spherical spin-glass models from statistical physics to justify how the size of neural networks may result in local minima that are close to global. However, due to the complexities introduced by nonlinearity, a rigorous understanding of optimality in deep neural networks remains elusive.
Initial steps towards understanding optimality have focused on deep linear networks. This area has seen substantial recent progress. In deep linear networks there is no nonlinear activation; the output is simply a multilinear function of the input. Baldi and Hornik [1] prove that some shallow networks have no spurious local minima, and Kawaguchi [10] extends this result to squared error deep linear networks, showing that they only have global minima and saddle points. Several other works on linear nets have also appeared [8, 13-15, 27, 28].
The theory of nonlinear neural networks (which is the actual setting of interest), however, is still in its infancy. There have been attempts to extend the "local minima are global" property from linear to nonlinear networks, but recent results suggest that this property does not usually hold [28]. Although not unexpected, rigorously proving such results turns out to be non-trivial, forcing several authors (e.g., [6,18,24]) to make somewhat unrealistic assumptions (realizability and Gaussianity) on data.
In contrast, we prove existence of spurious local minima under the least restrictive (to our knowledge) assumptions. Since seemingly subtle changes to assumptions can greatly influence the analysis as well as the applicability of known results, let us first summarize what is known; this will also help provide a better intuitive perspective on our results (as the technical details are somewhat involved).
What is known so far?
There is a large and rapidly expanding literature on the optimization of neural networks. Some works focus on the loss surface [1, 10, 14, 16-21, 24-28], while others study the convergence of gradient-based methods for optimizing this loss [3, 6, 22]. In particular, our focus is on the loss surface itself, independent of any algorithmic concerns; this is reflected in the works summarized below.
For ReLU networks, the works [21,28] provide counterexample datasets that lead to spurious local minima, dashing hopes of "local implies global" properties. However, these works fail to provide statements about generic datasets, and one can argue that their setups are limited to isolated pathological examples. In comparison, our Theorem 1 shows existence of spurious local minima for almost all datasets, a much more general result. Zhou and Liang [28] also give characterization of critical points of shallow ReLU networks, but with more than one hidden node the characterization provided is limited to certain regions.
There are also results that study population risk of shallow ReLU networks under a restrictive assumption that input data is i.i.d. Gaussian distributed [6,18,24]. Moreover, these works also assume realizability, i.e., the output data is generated from a neural network with the same architecture as the model one trains, with unknown true parameters. These assumptions enable one to compute the population risk in a closed form, and ensure that one can always achieve zero loss at global minima. The authors of [18,24] study the population risk function of the form
E_x[(∑_{i=1}^k ReLU(w_i^T x) − ∑_{i=1}^k ReLU(v_i^T x))^2]
, where the true parameters v_i's are orthogonal unit vectors. Through extensive experiments and computer-assisted local minimality checks, Safran and Shamir [18] show existence of local minima for k ≥ 6. However, this result is empirical and does not have constructive proofs. Wu et al. [24] show that with k = 2, there are no spurious local minima on the manifold ‖w_1‖_2 = ‖w_2‖_2 = 1. Du et al. [6] study the population risk of a one-hidden-layer CNN. They show that there can be a spurious local minimum, but gradient descent converges to the global minimum with probability at least 1/4.
Our paper focuses on empirical risk instead of population risk, and does not assume either Gaussianity or realizability. Our assumption on the dataset is that it is not linearly fittable 1 , which is vastly more general and realistic than assuming that input data is Gaussian or that the output is generated from an unknown neural network. Our results also show that [24] fails to extend to empirical risk and non-unit parameter vectors (see the discussion after Theorem 2).
Laurent and von Brecht [14] study one-hidden-layer networks with hinge loss for classification. Under linear separability, the authors prove that Leaky-ReLU networks don't have bad local minima, while ReLU networks do. Our focus is on regression, and we only make mild assumptions on the data.

For deep linear networks, the result most relevant to ours is [13]. When all hidden layers are wider than the input or output layers, Laurent and von Brecht [13] prove that any local minimum of a deep linear network under a differentiable convex loss is global. 2 They prove this by showing a statement about the relationship between linear and multilinear parametrizations. Our result in Theorem 4 is strictly more general than theirs, and presents a comprehensive characterization.
A different body of literature [16,17,20,25,26] considers sufficient conditions for global optimality in nonlinear networks. These results make certain architectural assumptions (and some technical restrictions) that may not usually apply to realistic networks. There are also other works on global optimality conditions for specially designed architectures [7,9].
Contributions and Summary of Results
We summarize our key contributions more precisely below. Our work encompasses results for both nonlinear and linear neural networks. First, we study whether the "local minima are global" property holds for nonlinear networks. Unfortunately, our results here are negative. Specifically, we prove:

- For piecewise linear and nonnegative homogeneous activation functions (e.g., ReLU), we prove in Theorem 1 that if linear models cannot perfectly fit the data, one can construct infinitely many local minima that are not global. In practice, most datasets are not linearly fittable, hence this result gives a constructive proof of spurious local minima for generic datasets. In contrast, several existing results either provide only one counterexample [21, 28], or make restrictive assumptions of realizability [6, 18] or linear separability [14]. This result is presented in Section 2.

- In Theorem 2 we tackle more general nonlinear activation functions, and provide a simple architecture (with squared loss) and a dataset for which there exists a local minimum inferior to the global minimum, for a realizable dataset. Our analysis applies to a wide range of activations, including sigmoid, tanh, arctan, ELU [5], SELU [11], and ReLU. Considering that realizability of data simplifies the analysis and ensures zero loss at global optima, a counterexample that is realizable and yet has a spurious local minimum is surprising, suggesting that the situation is likely worse for non-realizable data. See Section 3 for details.

We complement our negative results by presenting the following positive result on linear networks:

- Assume that the hidden layers are as wide as either the input or the output, and that the empirical risk ℓ((W_j)_{j=1}^{H+1}) equals ℓ_0(W_{H+1} W_H ··· W_1), where ℓ_0 is a differentiable loss function and W_i is the weight matrix for layer i. Theorem 4 shows that if (Ŵ_j)_{j=1}^{H+1} is a critical point of ℓ, then its type of stationarity (local min/max, or saddle) is closely related to the behavior of ℓ_0 evaluated at the product Ŵ_{H+1} ··· Ŵ_1. If we additionally assume that any critical point of ℓ_0 is a global minimum, Corollary 5 shows that the empirical risk has only global minima and saddle points, and provides a simple condition to distinguish between them. To the best of our knowledge, this is the most general result on deep linear networks and it subsumes several previous results, e.g., [10, 13, 27, 28]. This result is in Section 4.
Notation. For an integer a ≥ 1, [a] denotes the set of integers from 1 to a (inclusive). For a vector v, we use [v]_i to denote its i-th component, while [v]_{[i]} denotes the vector comprised of the first i components of v. Let 1_{(·)} (resp. 0_{(·)}) be the all-ones (resp. all-zeros) column vector or matrix of size (·).
"ReLU-like" networks: bad local minima exist for most data
We study below whether nonlinear neural networks provably have spurious local minima. We show in §2 and §3 that even for extremely simple nonlinear networks, one encounters spurious local minima. We first consider ReLU and ReLU-like networks. Here, we prove that as long as linear models cannot perfectly fit the data, there exists a local minimum strictly inferior to the global one. Using nonnegative homogeneity, we can scale the parameters to get infinitely many local minima.
Consider a training dataset that consists of m data points. The inputs and the outputs are of dimension d_x and d_y, respectively. We aggregate these items, and write X ∈ R^{d_x×m} for the data matrix and Y ∈ R^{d_y×m} for the label matrix. Consider the one-hidden-layer neural network

Ŷ = W_2 h(W_1 X + b_1 1_m^T) + b_2 1_m^T,

where h is a nonlinear activation function, W_2 ∈ R^{d_y×d_1}, b_2 ∈ R^{d_y}, W_1 ∈ R^{d_1×d_x}, and b_1 ∈ R^{d_1}. We analyze the empirical risk with squared loss,

ℓ(W_1, W_2, b_1, b_2) = ½‖W_2 h(W_1X + b_1 1_m^T) + b_2 1_m^T − Y‖_F^2.
Next, define a class of piecewise linear nonnegative homogeneous functions
h̄_{s_+,s_-}(x) = max{s_+ x, 0} + min{s_- x, 0},    (1)

where s_+ > 0, s_- ≥ 0 and s_+ ≠ s_-. Note that ReLU and Leaky-ReLU are members of this class.
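For concreteness, here is a direct implementation of this activation class together with a check of its nonnegative homogeneity, a property used repeatedly below; the parameter values are illustrative choices of ours.

```python
import numpy as np

def h_bar(x, s_plus, s_minus):
    """Piecewise linear activation of Eq. (1): slope s_plus on x > 0, s_minus on x < 0."""
    return np.maximum(s_plus * x, 0.0) + np.minimum(s_minus * x, 0.0)

x = np.linspace(-2.0, 2.0, 9)
# ReLU and Leaky-ReLU as special cases.
relu, leaky = h_bar(x, 1.0, 0.0), h_bar(x, 1.0, 0.01)
# Nonnegative homogeneity: h_bar(c * x) = c * h_bar(x) for any c >= 0.
assert np.allclose(h_bar(3.0 * x, 1.0, 0.01), 3.0 * h_bar(x, 1.0, 0.01))
```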
Main results and discussion
We use the shorthand X̃ := [X^T 1_m]^T ∈ R^{(d_x+1)×m}.

Theorem 1. Suppose the activation is h̄_{s_+,s_-}, the output dimension is d_y = 1, the hidden layer has width d_1 ≥ 2, and Y cannot be perfectly fit by a linear model, i.e., Y ≠ RX̃ for every R. Then, there is a spurious local minimum whose risk is the same as that of the linear least squares model. Moreover, due to nonnegative homogeneity of h̄_{s_+,s_-}, there are infinitely many such local minima.
Noticing that most real-world datasets cannot be perfectly fit with linear models, Theorem 1 shows that when we use the activation h̄_{s_+,s_-}, the empirical risk has bad local minima for almost all datasets that one may encounter in practice. Although it is not very surprising that neural networks have spurious local minima, proving this rigorously is non-trivial. We provide a constructive and deterministic proof for this problem that holds for very general datasets, in contrast to the experimental results of [18]. We emphasize that Theorem 1 holds even for the "slightest" nonlinearities, e.g., when s_+ = 1 + ε and s_- = 1 where ε > 0 is small. This suggests that the "local min is global" property is limited to the trivial setting of linear neural networks.
Existing results on squared error loss either provide one counterexample [21,28], or assume realizability and Gaussian input [6,18]. Realizability is an assumption that the output is generated by a network with unknown parameters. In real datasets, neither input is Gaussian nor output is generated by neural networks; in contrast, our result holds for most realistic situations, and hence delivers useful insight.
There are several results proving sufficient conditions for global optimality of nonlinear neural networks [16, 20, 25]. But they rely on assumptions that the network width scales with the number of data points. For instance, applying Theorem 3.4 of [16] to our network proves that if X̃ has linearly independent columns and other assumptions hold, then any critical point with W_2 ≠ 0 is a global minimum. However, linearly independent columns already imply row(X̃) = R^m, so even linear models RX̃ can fit any Y; i.e., there is little merit in using a complex model to fit Y. Theorem 1 does not make any structural assumption other than d_1 ≥ 2, and addresses the case where it is impossible to fit Y with linear models, which is much more realistic.
It is worth comparing our result with [14], who use hinge loss based classification and assume linear separability to prove "no spurious local minima" for Leaky-ReLU networks. Their result does not contradict our theorem because the losses are different and we do not assume linear separability.
One might wonder whether our theorem still holds when d_1 ≥ m. Venturi et al. [23] showed that one-hidden-layer neural networks with d_1 ≥ m have no spurious valleys; however, their result shows the nonexistence of strict spurious local minima, whereas due to h̄_{s_+,s_-} we only have non-strict local minima. Based on [2], one might claim that with a wide enough hidden layer and random W_1 and b_1, one can fit any Y; however, this is not the case, by our assumption that linear models RX̃ cannot fit Y. Note that there is a non-trivial region in the parameter space where W_1X + b_1 1_m^T > 0 (entry-wise). In this region, the output of the neural network Ŷ is still a linear combination of the rows of X̃, so Ŷ cannot fit Y; in fact, it can only do as well as linear models.
Analysis of Theorem 1
The proof of the theorem is split into two steps. First, we prove that there exist local minima (Ŵ_j, b̂_j)_{j=1}^2 whose risk value is the same as that of the linear least squares solution, and that there are infinitely many such minima. Second, we construct a tuple of parameters (W̃_j, b̃_j)_{j=1}^2 that has strictly smaller empirical risk than (Ŵ_j, b̂_j)_{j=1}^2.

Step 1: A local minimum as good as the linear solution. The main idea here is to exploit the weights from the linear least squares solution, and to tune the parameters so that all inputs to the hidden nodes become positive. Doing so makes the hidden nodes "locally linear," so that the constructed (Ŵ_j, b̂_j)_{j=1}^2, which produce the linear least squares estimates at the output, become locally optimal.

Recall that X̃ = [X^T 1_m]^T ∈ R^{(d_x+1)×m}, and define the linear least squares loss ℓ_0(R) := ½‖RX̃ − Y‖_F^2, which is minimized at W̄, so that ∇ℓ_0(W̄) = (W̄X̃ − Y)X̃^T = 0. Since d_y = 1, the solution W̄ ∈ R^{d_y×(d_x+1)} is a row vector. For all i ∈ [m], let ȳ_i = W̄[x_i^T 1]^T be the output of the linear least squares model, and similarly Ȳ = W̄X̃.
Let η := min{−1, 2 min_i ȳ_i}, a negative constant making ȳ_i − η > 0 for all i. Define the parameters

Ŵ_1 = [α[W̄]_{[d_x]} ; 0_{(d_1−1)×d_x}],   b̂_1 = [α([W̄]_{d_x+1} − η) ; −αη 1_{d_1−1}],   Ŵ_2 = (1/(α s_+)) [1  0^T_{d_1−1}],   b̂_2 = η,

where α > 0 is an arbitrary constant. By the choice of η, all inputs to the hidden nodes are positive, so the network output is

Ŷ = (1/(α s_+)) · s_+(αȲ − αη 1_m^T) + η 1_m^T = Ȳ,

so that the loss ℓ((Ŵ_j, b̂_j)_{j=1}^2) = ½‖Ȳ − Y‖_F^2 = ℓ_0(W̄). So far, we checked that (Ŵ_j, b̂_j)_{j=1}^2 has the same empirical risk as the linear least squares solution. It now remains to show that this point is indeed a local minimum of ℓ. To that end, we consider the perturbed parameters (Ŵ_j + ∆_j, b̂_j + δ_j)_{j=1}^2 and check that their risk is always at least as large. A useful point is that since W̄ is a minimum of ℓ_0(R)
= ½‖RX̃ − Y‖_F^2, we have

(W̄X̃ − Y)X̃^T = (Ȳ − Y)[X^T 1_m] = 0,    (2)

so (Ȳ − Y)X^T = 0 and (Ȳ − Y)1_m = 0. For small enough perturbations, (Ŵ_1 + ∆_1)x_i + (b̂_1 + δ_1) > 0 still holds for all i. So, we can observe that

ℓ((Ŵ_j + ∆_j, b̂_j + δ_j)_{j=1}^2) = ½‖Ȳ − Y + ∆̃X + δ̃1_m^T‖_F^2 = ½‖Ȳ − Y‖_F^2 + ½‖∆̃X + δ̃1_m^T‖_F^2,    (3)

where ∆̃ := s_+(Ŵ_2∆_1 + ∆_2Ŵ_1 + ∆_2∆_1) and δ̃ := s_+(Ŵ_2δ_1 + ∆_2b̂_1 + ∆_2δ_1) + δ_2 are aggregated perturbation terms. We used (2) to obtain the last equality in (3). Thus, ℓ((Ŵ_j + ∆_j, b̂_j + δ_j)_{j=1}^2) ≥ ℓ((Ŵ_j, b̂_j)_{j=1}^2) for small perturbations, proving that (Ŵ_j, b̂_j)_{j=1}^2 is indeed a local minimum of ℓ. Since this holds for arbitrary α > 0, there are infinitely many such local minima. We can also construct similar local minima by permuting hidden nodes, etc.
Step 2: A point strictly better than the local minimum. The proof of this step is more involved. In the previous step, we "pushed" all the input to the hidden nodes to positive side, and took advantage of "local linearity" of the hidden nodes near (Ŵ j ,b j ) 2 j=1 . But to construct parameters (W j ,b j ) 2 j=1 that have strictly smaller risk than (Ŵ j ,b j ) 2 j=1 (to prove that (Ŵ j ,b j ) 2 j=1 is a spurious local minimum), we make the sign of inputs to the hidden nodes different depending on data.
To this end, we sort the indices of data points in increasing order ofȳ i ; i.e.,ȳ 1 ≤ȳ 2 ≤ · · · ≤ȳ m . Define the set J := {j ∈ [m − 1] | ∑ i≤j (ȳ i − y i ) = 0,ȳ j <ȳ j+1 }. The remaining construction is divided into two cases: J = ∅ and J = ∅, whose main ideas are essentially the same. We present the proof for J = ∅, and defer the other case to Appendix A2 as it is rarer, and its proof, while instructive for its perturbation argument, is technically too involved.
Case 1: J = ∅. Pick any j 0 ∈ J . We can observe that
∑ i≤j 0 (ȳ i − y i ) = − ∑ i>j 0 (ȳ i − y i ),
because of (2). Define β =ȳ j 0 +ȳ j 0 +1 2 , so thatȳ i − β < 0 for all i ≤ j 0 andȳ i − β > 0 for all i > j 0 .
Then, let γ be a constant satisfying 0 < |γ| ≤ȳ j 0 +1 −ȳ j 0 4
, whose value will be specified later. Since |γ| is small enough, sign(
ȳ i − β) = sign(ȳ i − β + γ) = sign(ȳ i − β − γ). Now select parameters W 1 = [W] [d x ] −[W] [d x ] 0 (d 1 −2)×d x ,b 1 = [W] d x +1 − β + γ −[W] d x +1 + β + γ 0 d 1 −2 ,W 2 = 1 s + +s − 1 −1 0 T d 1 −2 ,b 2 = β. Recall again that [W] [d x ] x i + [W] d x +1 =ȳ i . For i ≤ j 0 ,ȳ i − β + γ < 0 and −ȳ i + β + γ > 0, sô y i = s − (ȳ i − β + γ) s + + s − − s + (−ȳ i + β + γ) s + + s − + β =ȳ i − s + − s − s + + s − γ.
Similarly, for i > j 0 ,
ȳ i − β + γ > 0 and −ȳ i + β + γ < 0 results inŷ i =ȳ i + s + −s − s + +s − γ.
Here, we push the outputsŷ i of the network by s + −s − s + +s − γ fromȳ i , and the direction of the "push" varies depending on whether i ≤ j 0 or i > j 0 .
The empirical risk for this choice of parameters is
((W j ,b j ) 2 j=1 ) = 1 2 ∑ i≤j 0 ȳ i − s + − s − s + + s − γ − y i 2 + 1 2 ∑ i>j 0 ȳ i + s + − s − s + + s − γ − y i 2 = 0 (W) − 2 ∑ i≤j 0 (ȳ i − y i ) s + − s − s + + s − γ + O(γ 2 ). Since ∑ i≤j 0 (ȳ i − y i ) = 0 and s + = s − , we can choose sign(γ) = sign([∑ i≤j 0 (ȳ i − y i )](s + − s − )), and choose small |γ| so that ((W j ,b j ) 2 j=1 ) < 0 (W) = ((Ŵ j ,b j ) 2 j=1 ), proving that (Ŵ j ,b j ) 2 j=1
is a spurious local minimum.
Counterexample: bad local minima for many activations
The proof of Theorem 1 crucially exploits the piecewise linearity of the activation functions. Thus, one may wonder whether the spurious local minima seen there are an artifact of the specific nonlinearity. We show below that this is not the case. We provide a counterexample nonlinear network and a dataset for which a wide range of nonlinear activations result in a local minimum that is strictly inferior to the global minimum with exactly zero empirical risk. Examples of such activation functions include popular activation functions such as sigmoid, tanh, arctan, ELU, SELU, and ReLU.
We consider again the squared error empirical risk of a one-hidden-layer nonlinear neural network:
((W j , b j ) 2 j=1 ) := 1 2 W 2 h(W 1 X+b 1 1 T m )+b 2 1 T m −Y 2 F , where we fix d x = d 1 = 2 and d y = 1. Also, let h (k) (x) be the k-th derivative of h : R → R,
whenever it exists at x. For short, let h and h denote the first and second derivatives.
Main results and discussion
Theorem 2. Let the loss ((W j , b j ) 2 j=1 ) and network be as defined above. Consider the dataset
X = 1 0 1 2 0 1 1 2 , Y = 0 0 1 .
For this network and dataset the following results hold:
1. If there exist real numbers v 1 , v 2 , v 3 , v 4 ∈ R such that (C2.1) h(v 1 )h(v 4 ) = h(v 2 )h(v 3 ), and (C2.2) h(v 1 )h v 3 +v 4 2 = h(v 3 )h v 1 +v 2 2 ,
then there is a tuple (W j ,b j ) 2 j=1 at which equals 0. 2. If there exist real numbers v 1 , v 2 , u 1 , u 2 ∈ R such that the following conditions hold:
(C2.3) u 1 h(v 1 ) + u 2 h(v 2 ) = 1 3 , (C2.4) h is infinitely differentiable at v 1 and v 2 , (C2.5) there exists a constant c > 0 such that |h (n) (v 1 )| ≤ c n n! and |h (n) (v 2 )| ≤ c n n!. (C2.6) (u 1 h (v 1 )) 2 + u 1 h (v 1 ) 3 > 0, (C2.7) (u 1 h (v 1 )u 2 h (v 2 )) 2 < ((u 1 h (v 1 )) 2 + u 1 h (v 1 ) 3 )((u 2 h (v 2 )) 2 + u 2 h (v 2 ) 3 ), then there exists a tuple (Ŵ j ,b j ) 2
j=1 such that the output of the network is the same as the linear least squares model, the risk (
(Ŵ j ,b j ) 2 j=1 ) = 1 3 , and (Ŵ j ,b j ) 2 j=1 is a local minimum of .
Theorem 2 shows that for this architecture and dataset, activations that satisfy (C2.1)-(C2.7) introduce at least one spurious local minimum. Notice that the empirical risk is zero at the global minimum. This means that the data X and Y can actually be "generated" by the network, which satisfies the realizability assumption that others use [6,18,24]. Notice that our counterexample is "easy to fit," and yet, there exists a local minimum that is not global. This leads us to conjecture that with harder datasets, the problems with spurious local minima could be worse. The proof of Theorem 2 can be found in Appendix A3.
Discussion. Note that the conditions (C2.1)-(C2.7) only require existence of certain real numbers rather than some global properties of activation h, hence are not as restrictive as they look. Conditions (C2.1)-(C2.2) come from a choice of tuple (W j ,b j ) 2 j=1 that perfectly fits the data. Condition (C2.3) is necessary for constructing (Ŵ j ,b j ) 2 j=1 with the same output as the linear least squares model, and Conditions (C2.4)-(C2.7) are needed for showing local minimality of (Ŵ j ,b j ) 2 j=1 via Taylor expansions. The class of functions that satisfy conditions (C2.1)-(C2.7) is quite large, and includes the nonlinear activation functions used in practice. The next corollary highlights this observation (for a proof with explicit choices of the involved real numbers, please see Appendix A5). Admittedly, Theorem 2 and Corollary 3 give one counterexample instead of stating a claim about generic datasets. Nevertheless, this example shows that for many practical nonlinear activations, the desirable "local minimum is global" property cannot hold even for realizable datasets, suggesting that the situation could be worse for non-realizable ones.
Remark: "ReLU-like" activation functions. Recall the piecewise linear nonnegative homogeneous activation functionh s + ,s − . They do not satisfy condition (C2.7), so Theorem 2 cannot be directly applied. Also, if s − = 0 (i.e., ReLU), conditions (C2.1)-(C2.2) are also violated. However, the statements of Theorem 2 hold even forh s + ,s − , which is shown in Appendix A6. Recalling again s + = 1 + and s − = 1, this means that even with the "slightest" nonlinearity in activation function, the network has a global minimum with risk zero while there exists a bad local minimum that performs just as linear least squares models. In other words, "local minima are global" property is rather brittle and can only hold for linear neural networks. Another thing to note is that in Appendix A6, the bias parameters are all zero, for both (W j ,b j ) 2 j=1 and (Ŵ j ,b j ) 2 j=1 . For models without bias parameters, (Ŵ j ) 2 j=1 is still a spurious local minimum, thus showing that [24] fails to extend to empirical risks and non-unit weight vectors.
Global optimality in linear networks
In this section we present our results on deep linear neural networks. Assuming that the hidden layers are at least as wide as either the input or output, we show that critical points of the loss with a multilinear parameterization inherit the type of critical points of the loss with a linear parameterization. As a corollary, we show that for differentiable losses whose critical points are globally optimal, deep linear networks have only global minima or saddle points. Furthermore, we provide an efficiently checkable condition for global minimality.
Suppose the network has H hidden layers having widths d 1 , . . . , d H . To ease notation, we set d 0 = d x and d H+1 = d y . The weights between adjacent layers are kept in matrices W j ∈ R d j ×d j−1 (j ∈ [H + 1]), and the outputŶ of the network is given by the product of weight matrices with the data matrix:
Ŷ = W H+1 W H · · · W 1 X. Let (W j ) H+1
j=1 be the tuple of all weight matrices, and W i:j denote the product W i W i−1 · · · W j+1 W j for i ≥ j, and the identity for i = j − 1. We consider the empirical risk ((W j ) H+1 j=1 ), which, for linear networks assumes the form
((W j ) H+1 j=1 ) := 0 (W H+1:1 ),(4)
where 0 is a suitable differentiable loss. For example, when 0
(R) = 1 2 RX − Y 2 F , ((W j ) H+1 j=1 ) = 1 2 W H+1:1 X − Y 2 F = 0 (W H+1:1 ). Lastly, we write ∇ 0 (M) ≡ ∇ R 0 (R)| R=M .
Remark: bias terms. We omit the bias terms b 1 , . . . , b H+1 here. This choice is for simplicity; models with bias can be handled by the usual trick of augmenting data and weight matrices.
Main results and discussion
We are now ready to state our first main theorem, whose proof is deferred to Appendix A7.
Theorem 4. Suppose that for all j, d j ≥ min{d x , d y }, and that the loss is given by (4), where 0 is differentiable on R d y ×d x . For any critical point (Ŵ j ) H+1 j=1 of the loss , the following claims hold:
1. If ∇ 0 (Ŵ H+1:1 ) = 0, then (Ŵ j ) H+1 j=1 is a saddle of . 2. If ∇ 0 (Ŵ H+1:1 ) = 0, then (a) (Ŵ j ) H+1
j=1 is a local min (max) of ifŴ H+1:1 is a local min (max) of 0 ; moreover,
(b) (Ŵ j ) H+1
j=1 is a global min (max) of if and only ifŴ H+1:1 is a global min (max) of 0 .
3. If there exists j * ∈ [H + 1] such thatŴ H+1:j * +1 has full row rank andŴ j * −1:1 has full column rank, then ∇ 0 (Ŵ H+1:1 ) = 0, so 2(a) and 2(b) hold. Also,
(a)Ŵ H+1:1 is a local min (max) of 0 if (Ŵ j ) H+1 j=1 is a local min (max) of .
Let us paraphrase Theorem 4 in words. In particular, it states that if the hidden layers are "wide enough" so that the product W H+1:1 can attain full rank and if the loss assumes the form (4) for a differentiable loss 0 , then the type (optimal or saddle point) of a critical point (Ŵ j ) H+1 j=1 of is governed by the behavior of 0 at the productŴ H+1:1 .
Note that for any critical point (Ŵ j ) H+1 j=1 of the loss , either ∇ 0 (Ŵ H+1:1 ) = 0 or ∇ 0 (Ŵ H+1:1 ) = 0. Parts 1 and 2 handle these two cases. Also observe that the condition in Part 3 implies ∇ 0 = 0, so Part 3 is a refinement of Part 2. A notable fact is that a sufficient condition for Part 3 isŴ H+1:1 having full rank. For example, if d x ≥ d y , full-rankŴ H+1:1 implies rank(Ŵ H+1:2 ) = d y , whereby the condition in Part 3 holds with j * = 1.
IfŴ H+1:1 is not critical for 0 , then (Ŵ j ) H+1 j=1 must be a saddle point of . IfŴ H+1:1 is a local min/max of 0 , (Ŵ j ) H+1 j=1 is also a local min/max of . Notice, however, that Part 2(a) does not address the case of saddle points; whenŴ H+1:1 is a saddle point of 0 , the tuple (Ŵ j ) H+1 j=1 can behave arbitrarily. However, with the condition in Part 3, statements 2(a) and 3(a) hold at the same time, so thatŴ H+1:1 is a local min/max of 0 if and only if (Ŵ j ) H+1 j=1 is a local min/max of . Observe that the same "if and only if" statement holds for saddle points due to their definition; in summary, the types (min/max/saddle) of the critical points (Ŵ j ) H+1 j=1 andŴ H+1:1 match exactly. Although Theorem 4 itself is of interest, the following corollary highlights its key implication for deep linear networks.
(Ŵ j ) H+1 j=1 of , if ∇ 0 (Ŵ H+1:1 ) = 0, then (Ŵ j ) H+1 j=1 is a saddle of , while if ∇ 0 (Ŵ H+1:1 ) = 0, then (Ŵ j ) H+1 j=1 is a global min (max) of . Proof If ∇ 0 (Ŵ H+1:1 ) = 0, thenŴ H+1:1 is a saddle point by Theorem 4.1. If ∇ 0 (Ŵ H+1:1 ) = 0, thenŴ H+1:1 is a global min (max) of 0 by assumption. By Theorem 4.2(b), (Ŵ j ) H+1
j=1 must be a global min (max) of .
Corollary 5 shows that for any differentiable loss function 0 whose critical points are global minima, the loss has only global minima and saddle points, therefore satisfying the "local minima are global" property. In other words, for such an 0 , the multilinear re-parametrization introduced by deep linear networks does not introduce any spurious local minima/maxima; it only introduces saddle points. Importantly, Corollary 5 also provides a checkable condition that distinguishes global minima from saddle points. Since is nonconvex, it is remarkable that such a simple necessary and sufficient condition for global optimality is available.
Our result generalizes previous works on linear networks such as [10,27,28], because it provides conditions for global optimality for a broader range of loss functions without assumptions on datasets. Laurent and Brecht [13] proved that if (Ŵ j ) H+1 j=1 is a local min of , thenŴ H+1:1 is a critical point of 0 . First, observe that this result is implied by Theorem 4.1. So our result, which was proved in parallel and independently, is strictly more general. With additional assumption that critical points of 0 are global minima, Laurent and Brecht [13] showed that "local min is global" property holds for linear neural networks; our Corollay 5 gives a simple and efficient test condition as well as proving there are only global minima and saddles, which is clearly stronger.
Discussion and Future Work
We investigated the loss surface of deep linear and nonlinear neural networks. We proved two theorems showing existence of spurious local minima on nonlinear networks, which apply to almost all datasets (Theorem 1) and a wide class of activations (Theorem 2). We concluded by Theorem 4, showing a general result studying the behavior of critical points in multilinearly parametrized functions, which unifies other existing results on linear networks. Given that spurious local minima are common in neural networks, a valuable future research direction will be investigating how far local minima are from global minima in general, and how the size of the network affects this gap. Another direction would be to add regularizers and see how they affect the loss surface. Additionally, one can try to show algorithmic results in a similar flavor as [6]. We hope that our paper will be a stepping stone to such future research.
A1 Notation
We first list notation used throughout the appendix. For integers a ≤ b, [a, b] denotes the set of integers between them. We
write [b], if a = 1. For a vector v, we use [v] i to denote its i-th component, while [v] [i] denotes a vector comprised of the first i components of v. Let 1 d (or 0 d ) be the all ones (zeros) column vector in R d . For a subspace V ⊆ R d , we denote by V ⊥ its orthogonal complement.
For a matrix A, [A] i,j is the (i, j)-th entry and [A] ·,j its j-th column. Let σ max (A) and σ min (A) denote the largest and smallest singular values of A, respectively; row(A), col(A), rank(A), and A F denote respectively the row space, column space, rank, and Frobenius norm of matrix A. 1. There are someȳ j 's that are duplicate; i.e. for some i = j,ȳ i =ȳ j .
2. Ifȳ j is non-duplicate, meaning thatȳ j−1 <ȳ j <ȳ j+1 ,ȳ j = y j holds.
3. Ifȳ j is duplicate, ∑ i:ȳ i =ȳ j (ȳ i − y i ) = 0 holds.
4.
There exists at least one duplicateȳ j such that, for thatȳ j , there exist at least two different i's that satisfyȳ i =ȳ j andȳ i = y i .
Proof
We prove this by showing if any of these statements are not true, then we have J = ∅ or a contradiction.
1. If all theȳ j 's are distinct and J = ∅, by definition of J ,ȳ j = y j for all j. This violates our assumption that linear models cannot perfectly fit Y.
2. If we haveȳ j = y j for a non-duplicateȳ j , at least one of the following statements must hold:
∑ i≤j−1 (ȳ i − y i ) = 0 or ∑ i≤j (ȳ i − y i ) = 0, meaning that j − 1 ∈ J or j ∈ J .
3. Supposeȳ j is duplicate and ∑ i:ȳ i =ȳ j (ȳ i − y i ) = 0. Let k = min{i |ȳ i =ȳ j } and l = max{i | y i =ȳ j }. Then at least one of the following statements must hold:
∑ i≤k−1 (ȳ i − y i ) = 0 or ∑ i≤l (ȳ i − y i ) = 0. If ∑ i≤k−1 (ȳ i − y i ) = 0, we can also see thatȳ k−1 <ȳ k , so k − 1 ∈ J . Similarly, if ∑ i≤l (ȳ i − y i ) = 0, then l ∈ J .
4. Since ∑ i:ȳ i =ȳ j (ȳ i − y i ) = 0 holds for any duplicateȳ j , ifȳ i = y i holds for one i then there must be at least two of them that satisfiesȳ i = y i . If this doesn't hold for all duplicateȳ i , with Part 2 this means thatȳ j = y j holds for all j. This violates our assumption that linear models cannot perfectly fit Y.
From Lemma A.1.4, we saw that there is a duplicate value ofȳ j such that some of the data points i satisfyȳ i =ȳ j andȳ i = y i . The proof strategy in this case is essentially the same, but the difference is that we choose one of such duplicateȳ j , and then choose a vector v ∈ R d x to "perturb" the linear least squares solution [W] [d x ] in order to break the tie between i's that satisfiesȳ i =ȳ j andȳ i = y i . We start by defining the minimum among such duplicate valuesȳ * ofȳ j 's, and a set of indices j that satisfiesȳ j =ȳ * .ȳ * = min{ȳ j | ∃i = j such thatȳ i =ȳ j andȳ i = y i },
J * = {j ∈ [m] |ȳ j =ȳ * }.
Then, we define a subset of J * : J * = = {j ∈ J * |ȳ j = y j }. By Lemma A.1.4, cardinality of J * = is at least two. Then, we define a special index in J * = :
j 1 = argmax j∈J * = x j 2 ,
Index j 1 is the index of the "longest" x j among elements in J * = . Using the definition of j 1 , we can partition J * into two sets:
J * ≥ = {j ∈ J * | x j , x j 1 ≥ x j 1 2 2 }, J * < = {j ∈ J * | x j , x j 1 < x j 1 2 2 }.
For the indices in J * , we can always switch the indices without loss of generality. So we can assume that j ≤ j 1 = max J * ≥ for all j ∈ J * ≥ and j > j 1 for all j ∈ J * < . We now define a vector that will be used as the "perturbation"
to [W] [d x ] . Define a vector v ∈ R d x , which is a scaled version of x j 1 : v = g M x j 1 2 x j 1 ,
where the constants g and M are defined to be
g = 1 4 min |ȳ i −ȳ j | | i, j ∈ [m],ȳ i =ȳ j , M = max i∈[m]
x i 2 .
The constant M is the largest x i 2 among all the indices, and g is one fourth times the minimum gap between all distinct values ofȳ i . Now, consider perturbing [W] [d x ] by a vector −αv T . where α ∈ (0, 1] will be specified later. Observe that
W − αv T 0 x i 1 =W x i 1 − αv T x i =ȳ i − αv T x i .
Recall that j ≤ j 1 = max J * ≥ for all j ∈ J * ≥ and j > j 1 for all j ∈ J * < . We are now ready to present the following lemma:
Lemma A.2. Define j 2 = argmax j∈J * < x j , x j 1 , β =ȳ * − α 2 v T (x j 1 + x j 2 ). Then,ȳ i − αv T x i − β < 0 for all i ≤ j 1 , y i − αv T x i − β > 0 for all i > j 1 . Also, ∑ i>j 1 (ȳ i − y i ) − ∑ i≤j 1 (ȳ i − y i ) = −2(ȳ j 1 − y j 1 ) = 0. Proof First observe that, for any x i , |αv T x i | ≤ α v 2 x i 2 ≤ g M x i 2 ≤ g.
By definition of g, we have 2g <ȳ j −ȳ i for anyȳ i <ȳ j . Using this, we can see that
y i <ȳ j =⇒ȳ i − αv T x i ≤ȳ i + g <ȳ j − g ≤ȳ j − αv T x j . (A.1)
In words, ifȳ i andȳ j are distinct and there is an orderȳ i <ȳ j , perturbation of [W] [d x ] by −αv T does not change the order. Also, since v is only a scaled version of x j 1 , from the definitions of J * ≥ and J *
< , v T (x j − x j 1 ) ≥ 0 for j ∈ J * ≥ and v T (x j − x j 1 ) < 0 for j ∈ J * < . (A.2) By definition of j 2 , v T (x j 2 − x j 1 ) < 0 and v T (x j 2 − x j ) ≥ 0 for all j ∈ J * < . (A.3)
It is left to prove the statement of the lemma using case analysis, using the inequalities (A.1), (A.2), and (A.3). For all i's such thatȳ i <ȳ * =ȳ j 1 ,
y i − αv T x i − β =ȳ i − αv T x i −ȳ * + α 2 v T (x j 1 + x j 2 ) = (ȳ i − αv T x i ) − (ȳ * − αv T x j 1 ) + α 2 v T (x j 2 − x j 1 ) < 0.
Similarly, for all i such thatȳ i >ȳ * =ȳ j 2 ,
y i − αv T x i − β = (ȳ i − αv T x i ) − (ȳ * − αv T x j 2 ) + α 2 v T (x j 1 − x j 2 ) > 0.
For j ∈ J * ≥ (j ≤ j 1 ), we knowȳ j =ȳ * , sō
y j − αv T x j − β = ȳ * − αv T x j − ȳ * − α 2 v T (x j 1 + x j 2 ) = αv T [(x j 1 − x j )] + α 2 v T [(x j 2 − x j 1 )] < 0.
Also, for j ∈ J * < (j > j 1 ),
y j − αv T x j − β = ȳ * − αv T x j − ȳ * − α 2 v T (x j 1 + x j 2 ) = α 2 v T [(x j 1 − x j ) + (x j 2 − x j )] > 0.
This finishes the case analysis and proves the first statements of the lemma. One last thing to prove is that
∑ i>j 1 (ȳ i − y i ) − ∑ i≤j 1 (ȳ i − y i ) = −2(ȳ j 1 − y j 1 ) = 0.
Recall from Lemma A.1.2 that for non-duplicateȳ j , we haveȳ j = y j . Also by Lemma A.1.3 ifȳ j is duplicate,
∑ i:ȳ i =ȳ j (ȳ i − y i ) = 0. So, ∑ i>j 1 (ȳ i − y i ) − ∑ i≤j 1 (ȳ i − y i ) = ∑ i∈J * < (ȳ i − y i ) − ∑ i∈J * ≥ (ȳ i − y i ) .
Recall the definition of J * = = {j ∈ J * |ȳ j = y j }. For j ∈ J * \J * = ,ȳ j = y j . So,
∑ i∈J * < (ȳ i − y i ) − ∑ i∈J * ≥ (ȳ i − y i ) = ∑ i∈J * < ∩J * = (ȳ i − y i ) − ∑ i∈J * ≥ ∩J * = (ȳ i − y i ) .
Recall the definition of j 1 = argmax j∈J * = x j 2 . For any other j ∈ J * = \{j 1 },
x j 1 2 2 ≥ x j 2 x j 1 2 ≥ x j , x j 1 ,
where the first ≥ sign is due to definition of j 1 , and the second is from Cauchy-Schwarz inequality. Since x j 1 and x j are distinct by assumption, they must differ in either length or direction, or both.
So, we can check that at least one of "≥" must be strict inequality, so x j 1 2 2 > x j , x j 1 for all j ∈ J * = \{j 1 }. Thus, J * = \{j 1 } = J * < ∩ J * = and {j 1 } = J * ≥ ∩ J * = , proving that
∑ i>j 1 (ȳ i − y i ) − ∑ i≤j 1 (ȳ i − y i ) = ∑ j∈J * = \{j 1 } (ȳ i − y i ) − ȳ j 1 − y j 1 .
Also, by Lemma A.1.3,
0 = ∑ i∈J * (ȳ i − y i ) = ∑ i∈J * = (ȳ i − y i ) = (ȳ j 1 − y j 1 ) + ∑ j∈J * = \{j 1 } (ȳ i − y i ).
Wrapping up all the equalities, we can conclude that
∑ i>j 1 (ȳ i − y i ) − ∑ i≤j 1 (ȳ i − y i ) = −2 ȳ j 1 − y j 1 ,
finishing the proof of the last statement.
It is time to present the parameters (W j ,b j ) 2 j=1 , whose empirical risk is strictly smaller than the local minimum (Ŵ j ,b j ) 2 j=1 with a sufficiently small choice of α ∈ (0, 1]. Now, let γ be a constant such that
γ = sign((ȳ j 1 − y j 1 )(s + − s − )) αv T (x j 1 − x j 2 ) 4 . (A.4)
Its absolute value is proportional to α ∈ (0, 1], which is a undetermined number that will be specified at the end of the proof. Since |γ| is small enough, we can check that
sign(ȳ i − αv T x i − β) = sign(ȳ i − αv T x i − β + γ) = sign(ȳ i − αv T x i − β − γ).
Then, assign parameter values
W 1 = [W] [d x ] − αv T −[W] [d x ] + αv T 0 (d 1 −2)×d x ,b 1 = [W] d x +1 − β + γ −[W] d x +1 + β + γ 0 d 1 −2 , W 2 = 1 s + + s − 1 −1 0 T d 1 −2 ,b 2 = β.
With these parameter values,W
1 x i +b 1 = ȳ i − αv T x i − β + γ −ȳ i + αv T x i + β + γ 0 d 1 −2 .
As we saw in Lemma A.2, for i ≤ j 1 ,ȳ i − αv T x i − β + γ < 0 and −ȳ i + αv T x i + β + γ > 0. Sô
y i =W 2hs+,s− (W 1 x i +b 1 ) +b 2 = 1 s + + s − s − (ȳ i − αv T x i − β + γ) − 1 s + + s − s + (−ȳ i + αv T x i + β + γ) + β =ȳ i − αv T x i − s + − s − s + + s − γ.
Similarly, for i > j 1 ,ȳ i − αv T x i − β + γ > 0 and −ȳ i + αv T x i + β + γ < 0, sô
y i =W 2hs+,s− (W 1 x i +b 1 ) +b 2 =ȳ i − αv T x i + s + − s − s + + s − γ.
Now, the squared error loss of this point is
((W j ,b j ) 2 j=1 ) = 1 2 Ŷ − Y 2 F = 1 2 ∑ i≤j 1 ȳ i − αv T x i − s + − s − s + + s − γ − y i 2 + 1 2 ∑ i>j 1 ȳ i − αv T x i + s + − s − s + + s − γ − y i 2 = 1 2 m ∑ i=1 ȳ i − αv T x i − y i 2 + ∑ i>j 1 ȳ i − αv T x i − y i − ∑ i≤j 1 ȳ i − αv T x i − y i s + − s − s + + s − γ + O(γ 2 ) = 0 (W) − α m ∑ i=1 (ȳ i − y i ) x T i v + O(α 2 ) + ∑ i>j 1 (ȳ i − y i ) − ∑ i≤j 1 (ȳ i − y i ) s + − s − s + + s − γ + O(αγ) + O(γ 2 ). Recall that ∑ m i=1 (ȳ i − y i ) x T i = 0 for least squares estimatesȳ i . From Lemma A.2, we saw that ∑ i>j 1 (ȳ i − y i ) − ∑ i≤j 1 (ȳ i − y i ) = −2(ȳ j 1 − y j 1 )
. As seen in the definition of γ (A.4), the magnitude of γ is proportional to α. Substituting (A.4), we can express the loss as
((W j ,b j ) 2 j=1 ) = 0 (W) − α|(ȳ j 1 − y j 1 )(s + − s − )|v T (x j 1 − x j 2 ) 2(s + + s − ) + O(α 2 ).
Recall that v T (x j 1 − x j 2 ) > 0 from (A.3). Then, for sufficiently small α ∈ (0, 1],
((W j ,b j ) 2 j=1 ) < 0 (W) = ((Ŵ j ,b j ) 2 j=1 ),
therefore proving that (Ŵ j ,b j ) 2 j=1 is a spurious local minimum.
A3 Proof of Theorem 2
A3.1 Proof of Part 1
Given v 1 , v 2 , v 3 , v 4 ∈ R satisfying conditions (C2.1) and (C2.2), we can pick parameter values (W j ,b j ) 2 j=1 to perfectly fit the given dataset:
W 1 = v 1 v 2 v 3 v 4 ,b 1 = 0 0 ,W 2 = h(v 3 )h v 1 +v 2 2 −h(v 1 )h v 3 +v 4 2 −1 [h(v 3 ) −h(v 1 )],b 2 = 0.
With these values, we can check thatŶ = 0 0 1 , hence perfectly fitting Y, thus the loss ((W j ,b j ) 2 j=1 ) = 0.
A3.2 Proof of Part 2
Given conditions (C2.3)-(C2.7) on v 1 , v 2 , u 1 , u 2 ∈ R, we prove below that there exists a local minimum (Ŵ j ,b j ) 2 j=1 for which the output of the network is the same as linear least squares model, and its empirical risk is ((Ŵ j ,b j ) 2 j=1 ) = 1 3 . If the conditions of Part 1 also hold, this local minimum is strictly inferior to the global one.
First, compute the outputȲ of linear least squares model to obtainȲ = 1 3 1 3 1 3 . Now assign parameter valuesŴ
1 = v 1 v 1 v 2 v 2 ,b 1 = 0 0 ,Ŵ 2 = u 1 u 2 ,b 2 = 0.
With these values we can check thatŶ = 1
3 1 3 1 3 , under condition (C2.3): u 1 h(v 1 ) + u 2 h(v 2 ) = 1 3 .
The empirical risk is ((Ŵ j ,b j ) 2 j=1 ) = 1 2 ( 1 9 + 1 9 + 4 9 ) = 1 3 . It remains to show that this is indeed a local minimum of . To show this, we apply perturbations to the parameters to see if the risk after perturbation is greater than or equal to ((Ŵ j ,b j ) 2 j=1 ). Let the perturbed parameters bě
W 1 = v 1 + δ 11 v 1 + δ 12 v 2 + δ 21 v 2 + δ 22 ,b 1 = β 1 β 2 ,W 2 = u 1 + 1 u 2 + 2 ,b 2 = γ, (A.5)
where δ 11 , δ 12 , δ 21 , δ 22 , β 1 , β 2 , 1 , 2 , and γ are small real numbers. The next lemma rearranges the terms in ((W j ,b j ) 2 j=1 ) into a form that helps us prove local minimality of (Ŵ j ,b j ) 2 j=1 . Appendix A4 gives the proof of Lemma A.3, which includes as a byproduct some equalities on polynomials that may be of wider interest. (1), and o(1) contains terms that diminish to zero as perturbations vanish.
((W j ,b j ) 2 j=1 ) ≥ 1 3 + α 1 (δ 11 − δ 12 ) 2 + α 2 (δ 21 − δ 22 ) 2 + α 3 (δ 11 −δ 12 )(δ 21 −δ 22 ), (A.6) where α i = u i h (v i ) 12 + u 2 i (h (v i )) 2 4 + o(1), for i = 1, 2, and α 3 = u 1 u 2 h (v 1 )h (v 2 ) 2 + o
To make the the sum of the last three terms of (A.6) nonnegative, we need to satisfy α 1 ≥ 0 and α 2 3 − 4α 1 α 2 ≤ 0; these inequalities are satisfied for small enough perturbations because of conditions (C2.6)-(C2.7). Thus, we conclude that ((W j ,b j ) 2 j=1 ) ≥ 1 3 = ((Ŵ j ,b j ) 2 j=1 ) for small enough perturbations, proving that (Ŵ j ,b j ) 2 j=1 is a local minimum.
A4 Proof of Lemma A.3
The goal of this lemma is to prove that
((W j ,b j ) 2 j=1 ) = 1 3 + 3 2 (perturbations) 2 + u 1 h (v 1 ) 12 + u 2 1 (h (v 1 )) 2 4 + o(1) (δ 11 − δ 12 ) 2 + u 2 h (v 2 ) 12 + u 2 2 (h (v 2 )) 2 4 + o(1) (δ 21 − δ 22 ) 2 + u 1 u 2 h (v 1 )h (v 2 ) 2 + o(1) (δ 11 − δ 12 )(δ 21 − δ 22 ), (A.7)
where o(1) contains terms that diminish to zero as perturbations decrease.
Using the perturbed parameters,
W 1 X +b 1 1 T m = v 1 + δ 11 + β 1 v 1 + δ 12 + β 1 v 1 + δ 11 +δ 12 2 + β 1 v 2 + δ 21 + β 2 v 2 + δ 22 + β 2 v 2 + δ 21 +δ 22 2 + β 2 ,
so the empirical risk can be expressed as
((W j ,b j ) 2 j=1 ) = 1 2 W 2 h W 1 X +b 1 1 T m +b 2 1 T m − Y 2 F = 1 2 [(u 1 + 1 )h(v 1 + δ 11 + β 1 ) + (u 2 + 2 )h(v 2 + δ 21 + β 2 ) + γ] 2 + 1 2 [(u 1 + 1 )h(v 1 + δ 12 + β 1 ) + (u 2 + 2 )h(v 2 + δ 22 + β 2 ) + γ] 2 + 1 2 (u 1 + 1 )h v 1 + δ 11 + δ 12 2 + β 1 + (u 2 + 2 )h v 2 + δ 21 + δ 22 2 + β 2 + γ − 1 2 (A.8)
So, the empirical risk (A.8) consists of three terms, one for each training example. By expanding the activation function h using Taylor series expansion and doing algebraic manipulations, we will derive the equation (A.7) from (A.8).
Using the Taylor series expansion, we can express h(v 1 + δ 11 + β 1 ) as
h(v 1 + δ 11 + β 1 ) = h(v 1 ) + ∞ ∑ n=1 h (n) (v 1 ) n! (δ 11 + β 1 ) n .
Using a similar expansion for h(v 2 + δ 21 + β 2 ), the first term of (A.8) can be written as
1 2 [(u 1 + 1 )h(v 1 + δ 11 + β 1 ) + (u 2 + 2 )h(v 2 + δ 21 + β 2 ) + γ] 2 = 1 2 (u 1 + 1 ) h(v 1 ) + ∞ ∑ n=1 h (n) (v 1 ) n! (δ 11 + β 1 ) n + (u 2 + 2 ) h(v 2 ) + ∞ ∑ n=1 h (n) (v 2 ) n! (δ 21 + β 2 ) n + γ 2 = 1 2 1 3 + 1 h(v 1 ) + (u 1 + 1 ) ∞ ∑ n=1 h (n) (v 1 ) n! (δ 11 + β 1 ) n + 2 h(v 2 ) + (u 2 + 2 ) ∞ ∑ n=1 h (n) (v 2 ) n! (δ 21 + β 2 ) n + γ 2 ,
where we used u 1 h(v 1 ) + u 2 h(v 2 ) = 1 3 . To simplify notation, let us introduce the following function:
t(δ 1 , δ 2 ) = 1 h(v 1 ) + 2 h(v 2 ) + γ + (u 1 + 1 ) ∞ ∑ n=1 h (n) (v 1 ) n! (δ 1 + β 1 ) n + (u 2 + 2 ) ∞ ∑ n=1 h (n) (v 2 ) n! (δ 2 + β 2 ) n .
With this new notation t(δ 1 , δ 2 ), after doing similar expansions to the other terms of (A.8), we get
((W j ,b j ) 2 j=1 ) = 1 2 1 3 + t(δ 11 , δ 21 ) 2 + 1 2 1 3 + t(δ 12 , δ 22 ) 2 + 1 2 − 2 3 + t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 = 1 3 + 1 3 t(δ 11 , δ 21 ) + t(δ 12 , δ 22 ) − 2t δ 11 + δ 12 2 , δ 21 + δ 22 2 + 1 2 [t(δ 11 , δ 21 )] 2 + 1 2 [t(δ 12 , δ 22 )] 2 + 1 2 t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 (A.9)
Before we show the lower bounds, we first present the following lemmas that will prove useful shortly. These are simple yet interesting lemmas that might be of independent interest. Lemma A.4. For n ≥ 2,
a n + b n − 2 a + b 2 n = (a − b) 2 p n (a, b),
where p n is a polynomial in a and b. All terms in p n have degree exactly n − 2. When n = 2, p 2 (a, b) = 1 2 . Proof The exact formula for p n (a, b) is as the following:
p n (a, b) = n−2 ∑ k=0 k + 1 − 2 −n+1 k ∑ l=0 (k + 1 − l)
n l a n−k−2 b k .
Using this, we can check the lemma is correct just by expanding both sides of the equation. The rest of the proof is straightforward but involves some complicated algebra. So, we omit the details for simplicity.
Lemma A.5. For n 1 , n 2 ≥ 1,
a n 1 c n 2 + b n 1 d n 2 − 2 a + b 2 n 1 c + d 2 n 2 =(a − b) 2 q n 1 ,n 2 (a, b, d) + (c − d) 2 q n 2 ,n 1 (c, d, b) + (a − b)(c − d)r n 1 ,n 2 (a, b, c, d)
where q n 1 ,n 2 and r n 1 ,n 2 are polynomials in a, b, c and d. All terms in q n 1 ,n 2 and r n 1 ,n 2 have degree exactly n 1 + n 2 − 2. When n 1 = n 2 = 1, q 1,1 (a, b, d) = 0 and r 1,1 (a, b, c, d) = 1 2 . Proof The exact formulas for q n 1 ,n 2 (a, b, d), q n 2 ,n 1 (c, d, b), and r n 1 ,n 2 (a, b, c, d) are as the following:
q n 1 ,n 2 (a, b, d) = n 1 −2 ∑ k 1 =0 k 1 + 1 − 2 −n 1 +1 k 1 ∑ l 1 =0 (k 1 + 1 − l 1 ) n 1 l 1 a n 1 −k 1 −2 b k 1 d n 2 , q n 2 ,n 1 (c, d, b) = n 2 −2 ∑ k 2 =0 k 2 + 1 − 2 −n 2 +1 k 2 ∑ l 2 =0 (k 2 + 1 − l 2 ) n 2 l 2 b n 1 c n 2 −k 2 −2 d k 2 , r n 1 ,n 2 (a, b, c, d) = n 1 −1 ∑ k 1 =0 n 2 −1 ∑ k 2 =0 1 − 2 −n 1 −n 2 +1 k 1 ∑ l 1 =0 k 2 ∑ l 2 =0 n 1 l 1 n 2 l 2 a n 1 −k 1 −1 b k 1 c n 2 −k 2 −1 d k 2 .
Similarly, we can check the lemma is correct just by expanding both sides of the equation. The remaining part of the proof is straightforward, so we will omit the details.
Using Lemmas A.4 and A.5, we will expand and simplify the "cross terms" part and "squared terms" part of (A.9). For the "cross terms" in (A.9), let us split t(δ 1 , δ 2 ) into two functions t 1 and t 2 :
t 1 (δ 1 , δ 2 ) = 1 h(v 1 ) + 2 h(v 2 ) + γ + (u 1 + 1 )h (v 1 )(δ 1 + β 1 ) + (u 2 + 2 )h (v 2 )(δ 2 + β 2 ) t 2 (δ 1 , δ 2 ) =(u 1 + 1 ) ∞ ∑ n=2 h (n) (v 1 ) n! (δ 1 + β 1 ) n + (u 2 + 2 ) ∞ ∑ n=2 h (n) (v 2 ) n! (δ 2 + β 2 ) n ,
so that t(δ 1 , δ 2 ) = t 1 (δ 1 , δ 2 ) + t 2 (δ 1 , δ 2 ). It is easy to check that
t 1 (δ 11 , δ 21 ) + t 1 (δ 12 , δ 22 ) − 2t 1 δ 11 + δ 12 2 , δ 21 + δ 22 2 = 0.
Also, using Lemma A.4, we can see that (δ 11 + β 1 ) n + (δ 12 + β 1 ) n − 2 δ 11 + δ 12 2 + β 1 n = (δ 11 − δ 12 ) 2 p n (δ 11 + β 1 , δ 12 + β 1 ),
(δ 21 + β 2 ) n + (δ 22 + β 2 ) n − 2 δ 21 + δ 22 2 + β 2 n = (δ 21 − δ 22 ) 2 p n (δ 21 + β 2 , δ 22 + β 2 ), so t 2 (δ 11 , δ 21 ) + t 2 (δ 12 , δ 22 ) − 2t 2 δ 11 + δ 12 2 , δ 21 + δ 22 2 =(u 1 + 1 )(δ 11 − δ 12 ) 2 ∞ ∑ n=2 h (n) (v 1 ) n! p n (δ 11 + β 1 , δ 12 + β 1 ) + (u 2 + 2 )(δ 21 − δ 22 ) 2 ∞ ∑ n=2 h (n) (v 2 ) n! p n (δ 21 + β 2 , δ 22 + β 2 ).
Consider the summation
∞ ∑ n=2 h (n) (v 1 ) n! p n (δ 11 + β 1 , δ 12 + β 1 ).
We assumed that there exists a constant c > 0 such that |h (n) (v 1 )| ≤ c n n!. From this, for small enough perturbations δ 11 , δ 12 , and β 1 , we can see that the summation converges, and the summands converge to zero as n increases. Because all the terms in p n (n ≥ 3) are of degree at least one, we can thus write
∞ ∑ n=2 h (n) (v 1 ) n! p n (δ 11 + β 1 , δ 12 + β 1 ) = h (v 1 ) 4 + o(1).
So, for small enough δ 11 , δ 12 , and β 1 , the term
=t 2 (δ 11 , δ 21 ) + t 2 (δ 12 , δ 22 ) − 2t 2 δ 11 + δ 12 2 , δ 21 + δ 22 2 =(u 1 + o(1)) h (v 1 ) 4 + o(1) (δ 11 − δ 12 ) 2 + (u 2 + o(1)) h (v 2 ) 4 + o(1) (δ 21 − δ 22 ) 2 = u 1 h (v 1 ) 4 + o(1) (δ 11 − δ 12 ) 2 + u 2 h (v 2 ) 4 + o(1) (δ 21 − δ 22 ) 2 . (A.10)
Now, it is time to take care of the "squared terms." We will express the terms as 11) and expand and simplify the terms in
1 2 [t(δ 11 , δ 21 )] 2 + 1 2 [t(δ 12 , δ 22 )] 2 + 1 2 t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 = 3 2 t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 + 1 2 [t(δ 11 , δ 21 )] 2 + 1 2 [t(δ 12 , δ 22 )] 2 − t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 ,(A.1 2 [t(δ 11 , δ 21 )] 2 + 1 2 [t(δ 12 , δ 22 )] 2 − t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 .
This time, we split t(δ 1 , δ 2 ) in another way, this time into three parts:
t 3 = 1 h(v 1 ) + 2 h(v 2 ) + γ, t 4 (δ 1 ) = (u 1 + 1 ) ∞ ∑ n=1 h (n) (v 1 ) n! (δ 1 + β 1 ) n , t 5 (δ 2 ) = (u 2 + 2 ) ∞ ∑ n=1 h (n) (v 2 ) n! (δ 2 + β 2 ) n , so that t(δ 1 , δ 2 ) = t 3 + t 4 (δ 1 ) + t 5 (δ 2 ). With this, 1 2 [t(δ 11 , δ 21 )] 2 + 1 2 [t(δ 12 , δ 22 )] 2 − t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 =t 3 t 4 (δ 11 ) + t 4 (δ 12 ) − 2t 4 δ 11 + δ 12 2 + t 5 (δ 21 ) + t 5 (δ 22 ) − 2t 5 δ 21 + δ 22 2 + 1 2 (t 4 (δ 11 )) 2 + (t 4 (δ 12 )) 2 − 2 t 4 δ 11 + δ 12 2 2 + 1 2 (t 5 (δ 21 )) 2 + (t 5 (δ 22 )) 2 − 2 t 5 δ 21 + δ 22 2 2 + t 4 (δ 11 )t 5 (δ 21 ) + t 4 (δ 12 )t 5 (δ 22 ) − 2t 4 δ 11 + δ 12 2 t 5 δ 21 + δ 22 2 . (A.12)
We now have to simplify the equation term by term. We first note that t 4 (δ 11 ) + t 4 (δ 12 ) − 2t 4 δ 11 + δ 12 2 + t 5 (δ 21 ) + t 5 (δ 22 ) − 2t 5 δ 21 + δ 22 2 =t 2 (δ 11 , δ 21 ) + t 2 (δ 12 , δ 22 ) − 2t 2 δ 11 + δ 12 2 ,
δ 21 + δ 22 2 , so t 3 t 4 (δ 11 ) + t 4 (δ 12 ) − 2t 4 δ 11 + δ 12 2 + t 5 (δ 21 ) + t 5 (δ 22 ) − 2t 5 δ 21 + δ 22 2 =t 3 t 2 (δ 11 , δ 21 ) + t 2 (δ 12 , δ 22 ) − 2t 2 δ 11 + δ 12 2 , δ 21 + δ 22 2 =o(1) u 1 h (v 1 ) 4 + o(1) (δ 11 − δ 12 ) 2 + u 2 h (v 2 ) 4 + o(1) (δ 21 − δ 22 ) 2 , (A.13)
as seen in (A.10). Next, we have (t 4 (δ 11 )) 2 + (t 4 (δ 12 )) 2 − 2 t 4 δ 11 + δ 12 2
2 =(u 1 + 1 ) 2 ∞ ∑ n 1 ,n 2 =1
h (n 1 ) (v 1 )h (n 2 ) (v 1 ) n 1 !n 2 ! (δ 11 + β 1 ) n 1 +n 2 + (δ 12 + β 1 ) n 1 +n 2 − 2 δ 11 + δ 12 2 + β 1
n 1 +n 2 , =(u 1 + 1 ) 2 (δ 11 − δ 12 ) 2 ∞ ∑ n 1 ,n 2 =1
h (n 1 ) (v 1 )h (n 2 ) (v 1 ) n 1 !n 2 ! p n 1 +n 2 (δ 11 + β 1 , δ 12 + β 1 )
= u 2 1 (h (v 1 )) 2 2 + o(1) (δ 11 − δ 12 ) 2 , (A.14)
when perturbations are small enough. We again used Lemma A.4 in the second equality sign, and the facts that p n 1 +n 2 (·) = o(1) whenever n 1 + n 2 > 2 and that p 2 (·) = 1 2 . In a similar way,
(t 5 (δ 21 )) 2 + (t 5 (δ 22 )) 2 − 2 t 5 δ 21 + δ 22 2 2 = u 2 2 (h (v 2 )) 2 2 + o(1) (δ 21 − δ 22 ) 2 . (A.15)
Lastly,
t 4 (δ 11 )t 5 (δ 21 ) + t 4 (δ 12 )t 5 (δ 22 ) − 2t 4 δ 11 + δ 12 2 t 5 δ 21 + δ 22 2 =(u 1 + 1 )(u 2 + 2 ) ∞ ∑ n 1 ,n 2 =1 h (n 1 ) (v 1 )h (n 2 ) (v 2 ) n 1 !n 2 ! (δ 11 + β 1 ) n 1 (δ 21 + β 2 ) n 2 + (δ 12 + β 1 ) n 1 (δ 22 + β 2 ) n 2 − 2 δ 11 + δ 12 2 + β 1 n 1 δ 21 + δ 22 2 + β 2 n 2 , =(u 1 + 1 )(u 2 + 2 ) (δ 11 − δ 12 ) 2 ∞ ∑ n 1 ,n 2 =1 h (n 1 ) (v 1 )h (n 2 ) (v 2 ) n 1 !n 2 ! q n 1 ,n 2 (δ 11 + β 1 , δ 12 + β 1 , δ 22 + β 2 ) + (δ 21 − δ 22 ) 2 ∞ ∑ n 1 ,n 2 =1 h (n 1 ) (v 1 )h (n 2 ) (v 2 ) n 1 !n 2 ! q n 2 ,n 1 (δ 21 + β 2 , δ 22 + β 2 , δ 12 + β 1 ) + (δ 11 − δ 12 )(δ 21 − δ 22 ) ∞ ∑ n 1 ,n 2 =1
h (n 1 ) (v 1 )h (n 2 ) (v 2 ) n 1 !n 2 ! r n 1 ,n 2 (δ 11 + β 1 , δ 12 + β 1 , δ 21 + β 2 , δ 22 + β 2 )
=(u 1 u 2 + o(1)) (δ 11 − δ 12 ) 2 o(1) + (δ 21 − δ 22 ) 2 o(1) + (δ 11 − δ 12 )(δ 21 − δ 22 ) h (v 1 )h (v 2 ) 2 + o(1) , (A.16)
where the second equality sign used Lemma A.5 and the third equality sign used the facts that q n 1 ,n 2 (·) = o(1) and r n 1 ,n 2 (·) = o(1) whenever n 1 + n 2 > 2, and that q 1,1 (·) = 0 and r 1,1 (·) = 1 2 . If we substitute (A.13), (A.14), (A.15), and (A.16) into (A.12),
1 2 [t(δ 11 , δ 21 )] 2 + 1 2 [t(δ 12 , δ 22 )] 2 − t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 =o(1) u 1 h (v 1 ) 4 + o(1) (δ 11 − δ 12 ) 2 + u 2 h (v 2 ) 4 + o(1) (δ 21 − δ 22 ) 2 + 1 2 u 2 1 (h (v 1 )) 2 2 + o(1) (δ 11 − δ 12 ) 2 + 1 2 u 2 2 (h (v 2 )) 2 2 + o(1) (δ 21 − δ 22 ) 2 + (u 1 u 2 + o(1)) (δ 11 − δ 12 ) 2 o(1) + (δ 21 − δ 22 ) 2 o(1) + (δ 11 − δ 12 )(δ 21 − δ 22 ) h (v 1 )h (v 2 ) 2 + o(1) = u 2 1 (h (v 1 )) 2 4 + o(1) (δ 11 − δ 12 ) 2 + u 2 2 (h (v 2 )) 2 4 + o(1) (δ 21 − δ 22 ) 2 + u 1 u 2 h (v 1 )h (v 2 ) 2 + o(1) (δ 11 − δ 12 )(δ 21 − δ 22 ). (A.17)
We are almost done. If we substitute (A.10), (A.11), and (A.17) into (A.9), we can get
((W j ,b j ) 2 j=1 ) = 1 3 + 3 2 t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 + u 1 h (v 1 ) 12 + o(1) (δ 11 − δ 12 ) 2 + u 2 h (v 2 ) 12 + o(1) (δ 21 − δ 22 ) 2 + u 2 1 (h (v 1 )) 2 4 + o(1) (δ 11 − δ 12 ) 2 + u 2 2 (h (v 2 )) 2 4 + o(1) (δ 21 − δ 22 ) 2 + u 1 u 2 h (v 1 )h (v 2 ) 2 + o(1) (δ 11 − δ 12 )(δ 21 − δ 22 ) = 1 3 + 3 2 t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 + u 1 h (v 1 ) 12 + u 2 1 (h (v 1 )) 2 4 + o(1) (δ 11 − δ 12 ) 2 + u 2 h (v 2 ) 12 + u 2 2 (h (v 2 )) 2 4 + o(1) (δ 21 − δ 22 ) 2 + u 1 u 2 h (v 1 )h (v 2 ) 2 + o(1) (δ 11 − δ 12 )(δ 21 − δ 22 ),
which is the equation (A.7) that we were originally aiming to show.
A5 Proof of Corollary 3
For the proof of this corollary, we present the values of real numbers that satisfy assumptions (C2.1)-(C2.7), for each activation function listed in the corollary: sigmoid, tanh, arctan, exponential linear units (ELU, [5]), scaled exponential linear units (SELU, [11]). To remind the readers what the assumptions were, we list the assumptions again. For (C2.1)-(C2.2), there exist real numbers v 1 , v 2 , v 3 , v 4 ∈ R such that
(C2.1) h(v 1 )h(v 4 ) = h(v 2 )h(v 3 ), (C2.2) h(v 1 )h v 3 +v 4 2 = h(v 3 )h v 1 +v 2 2 .
For (C2.3)-(C2.7), there exist real numbers v 1 , v 2 , u 1 , u 2 ∈ R such that the following assumptions hold:
(C2.3) u 1 h(v 1 ) + u 2 h(v 2 ) = 1 3 , (C2.4) h is infinitely differentiable at v 1 and v 2 ,
(C2.5) There exists a constant c > 0 such that |h (n) (v 1 )| ≤ c n n! and |h (n) (v 2 )| ≤ c n n!.
(C2.6) (u 1 h (v 1 )) 2 + u 1 h (v 1 ) 3 > 0, (C2.7) (u 1 h (v 1 )u 2 h (v 2 )) 2 < ((u 1 h (v 1 )) 2 + u 1 h (v 1 )
where α > 0, and λ = 1 (ELU) or λ > 1 (SELU). In this case, assumptions (C2.1)-(C2.2) are satisfied by
(v 1 , v 2 , v 3 , v 4 ) = h −1 − λα 2 , h −1 − λα 4 , h −1 − λα 4 , h −1 − λα 8 .
Assumptions (C2.3)-(C2.7) are satisfied by
(v 1 , v 2 , u 1 , u 2 ) = 1 3 , log 2 3 , 2 λ , 1 λα ,
where (C2.4)-(C2.5) are satisfied because h(x) is real analytic at v 1 and v 2 .
A6 Proof of Theorem 2 for "ReLU-like" activation functions.
Recall the piecewise linear nonnegative homogeneous activation function
h s + ,s − (x) = s + x x ≥ 0 s − x x < 0,
where s + > 0, s − ≥ 0 and s + = s − , we will prove that the statements of Theorem 2 hold forh s + ,s − .
A6.1 Proof of Part 1
In the case of s − > 0, assumptions (C2.1)-(C2.2) are satisfied by
(v 1 , v 2 , v 3 , v 4 ) = 1 s + , − 1 s − , − 1 s − , 1 s + .
The rest of the proof can be done in exactly the same way as the proof of Theorem 2.1, provided in Appendix A3. For s − = 0, which corresponds to the case of ReLU, define parameters
W 1 = 0 2 −2 1 ,b 1 = 0 0 ,W 2 = 1 s + − 2 s + ,b 2 = 0.
We can check thath
s + ,s − (W 1 X +b 1 1 T 3 ) = s + 0 2 1 0 1 0 , soW 2hs+,s− (W 1 X +b 1 1 T 3 ) +b 2 1 T 3 = 0 0 1 .
A6.2 Proof of Part 2
Assumptions (C2.3)-(C2.6) are satisfied by
(v 1 , v 2 , u 1 , u 1 ) = 1 4s + , 1 4s + ,2 3 , 2 3 .
Assign parameter valueŝ
W 1 = v 1 v 1 v 2 v 2 ,b 1 = 0 0 ,Ŵ 2 = u 1 u 2 ,b 2 = 0.
It is easy to compute that the output of the neural network isŶ = 1 3 1 3 1 3 , so ((Ŵ j ,b j ) 2 j=1 ) = 1 3 . Now, it remains to show that this is indeed a local minimum of . To show this, we apply perturbations to the parameters to see if the risk after perturbation is greater than or equal to ((Ŵ j ,b j ) 2 j=1 ). Let the perturbed parameters bě
W 1 = v 1 + δ 11 v 1 + δ 12 v 2 + δ 21 v 2 + δ 22 ,b 1 = β 1 β 2 ,W 2 = u 1 + 1 u 2 + 2 ,b 2 = γ,
where δ 11 , δ 12 , δ 21 , δ 22 , β 1 , β 2 , 1 , 2 , and γ are small enough real numbers.
Using the perturbed parameters,
W 1 X +b 1 1 T m = v 1 + δ 11 + β 1 v 1 + δ 12 + β 1 v 1 + δ 11 +δ 12 2 + β 1 v 2 + δ 21 + β 2 v 2 + δ 22 + β 2 v 2 + δ 21 +δ 22 2 + β 2 ,
so the empirical risk can be expressed as
((W j ,b j ) 2 j=1 ) = 1 2 W 2hs+,s− W 1 X +b 1 1 T m +b 2 1 T m − Y 2 F = 1 2 [(u 1 + 1 )s + (v 1 + δ 11 + β 1 ) + (u 2 + 2 )s + (v 2 + δ 21 + β 2 ) + γ] 2 + 1 2 [(u 1 + 1 )s + (v 1 + δ 12 + β 1 ) + (u 2 + 2 )s + (v 2 + δ 22 + β 2 ) + γ] 2 + 1 2 (u 1 + 1 )s + v 1 + δ 11 + δ 12 2 + β 1 + (u 2 + 2 )s + v 2 + δ 21 + δ 22 2 + β 2 + γ − 1 2 .
To simplify notation, let us introduce the following function:
t(δ 1 , δ 2 ) = s + 1 v 1 + s + 2 v 2 + γ + s + (u 1 + 1 )(δ 1 + β 1 ) + s + (u 2 + 2 )(δ 2 + β 2 )
It is easy to check that t(δ 11 , δ 21 ) + t(δ 12 , δ 22 ) − 2t δ 11 + δ 12 2 , δ 21 + δ 22 2 = 0.
With this new notation t(δ 1 , δ 2 ), we get
((W j ,b j ) 2 j=1 ) = 1 2 1 3 + t(δ 11 , δ 21 ) 2 + 1 2 1 3 + t(δ 12 , δ 22 ) 2 + 1 2 − 2 3 + t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 = 1 3 + 1 3 t(δ 11 , δ 21 ) + t(δ 12 , δ 22 ) − 2t δ 11 + δ 12 2 , δ 21 + δ 22 2 + 1 2 [t(δ 11 , δ 21 )] 2 + 1 2 [t(δ 12 , δ 22 )] 2 + 1 2 t δ 11 + δ 12 2 , δ 21 + δ 22 2 2 ≥ 1 3 = ((Ŵ j ,b j ) 2 j=1 ).
A7 Proof of Theorem 4
Before we start, note the following partial derivatives, which can be computed using straightforward matrix calculus:
∂ ∂W j = (W H+1:j+1 ) T ∇ 0 (W H+1:1 )(W j−1:1 ) T ,
for all j ∈ [H + 1].
A7.1 Proof of Part 1, if d y ≥ d x
For Part 1, we must show that if ∇ 0 (Ŵ H+1:1 ) = 0 then (Ŵ j ) H+1 j=1 is a saddle point of . Thus, we show that (Ŵ j ) H+1 j=1 is neither a local minimum nor a local maximum. More precisely, for each j, let B (W j ) be an -Frobenius-norm-ball centered at W j , and ∏ H+1 j=1 B (W j ) their Cartesian product. We wish to show that for every > 0, there exist tuples
(P j ) H+1 j=1 , (Q j ) H+1 j=1 ∈ ∏ H+1 j=1 B (Ŵ j ) such that ((P j ) H+1 j=1 ) > ((Ŵ j ) H+1 j=1 ) > ((Q j ) H+1 j=1 ). (A.18)
To prove (A.18), we exploit ((Ŵ j ) H+1 j=1 ) = 0 (Ŵ H+1:1 ), and the assumption ∇ 0 (Ŵ H+1:1 ) = 0. The key idea is to perturb the tuple (Ŵ j ) H+1 j=1 so that the directional derivative of 0 along P H+1:1 −Ŵ H+1:1 is positive. Since 0 is differentiable, if P H+1:1 −Ŵ H+1:1 is small, then
((P j ) H+1 j=1 ) = 0 (P H+1:1 ) > 0 (Ŵ H+1:1 ) = ((Ŵ j ) H+1 j=1 ). Similarly, we can show ((Q j ) H+1 j=1 ) < ((Ŵ j ) H+1 j=1 ).
The key challenge lies in constructing these perturbations; we outline our approach below; this construction may be of independent interest too. For this section, we assume that d x ≥ d y for simplicity; the case d y ≥ d x is treated in Appendix A7.2.
Since ∇ 0 (Ŵ H+1:1 ) = 0, col(∇ 0 (Ŵ H+1:1 )) ⊥ must be a strict subspace of R d y . Consider ∂ /∂W 1 at a critical point to see that (Ŵ H+1:2 ) T ∇ 0 (Ŵ H+1:1 ) = 0, so col(Ŵ H+1:2 ) ⊆ col(∇ 0 (Ŵ H+1:1 )) ⊥ R d y . This strict inclusion implies rank(Ŵ H+1:2 ) < d y ≤ d 1 , so that null(Ŵ H+1:2 ) is not a trivial subspace. Moreover, null(Ŵ H+1:2 ) ⊇ null(Ŵ H:2 ) ⊇ · · · ⊇ null(Ŵ 2 ). We can split the proof into two cases: null(Ŵ H+1:2 ) = null(Ŵ H:2 ) and null(Ŵ H+1:2 ) = null(Ŵ H:2 ).
Let the SVD of ∇ 0 (Ŵ H+1:1 ) = U l ΣU T r . Recall [U l ] ·,1 and [U r ] ·,1 denote first columns of U l and U r , respectively.
Case 1: null(Ŵ H+1:2 ) = null(Ŵ H:2 ). In this case, null(Ŵ H+1:2 ) null(Ŵ H:2 ). We will perturb W 1 andŴ H+1 to obtain the tuples (P j ) H+1 j=1 and (Q j ) H+1 j=1 . To create our perturbation, we choose two unit vectors as follows:
v 0 = [U r ] ·,1 , v 1 ∈ null(Ŵ H+1:2 ) ∩ null(Ŵ H:2 ) ⊥ .
Then, define ∆ 1 := v 1 v T 0 ∈ R d 1 ×d x , and V 1 :=Ŵ 1 + ∆ 1 ∈ B (Ŵ 1 ). Since v 1 lies in null(Ŵ H+1:2 ), observe thatŴ H+1:2 V 1 =Ŵ H+1:1 + Ŵ H+1:2 v 1 v T 0 =Ŵ H+1:1 . With this definition of V 1 , we can also see that
∇ 0 (Ŵ H+1:1 )V T 1 (Ŵ H:2 ) T = ∇ 0 (Ŵ H+1:1 )(Ŵ H:1 ) T + ∇ 0 (Ŵ H+1:1 )v 0 v T 1 (Ŵ H:2 ) T .
Note that ∇ 0 (Ŵ H+1:1 )(Ŵ H:1 ) T is equal to ∂ /∂W H+1 at a critical point, hence is zero. Since v 0 = [U r ] ·,1 , we have ∇ 0 (Ŵ H+1:1 )v 0 = σ max (∇ 0 (Ŵ H+1:1 ))[U l ] ·,1 , which is a nonzero column vector, and since v 1 ∈ null(Ŵ H:2 ) ⊥ = row(Ŵ H:2 ), v T 1 (Ŵ H:2 ) T is a nonzero row vector. From this observation, ∇ 0 (Ŵ H+1:1 )v 0 v T 1 (Ŵ H:2 ) T is nonzero, and so is ∇ 0 (Ŵ H+1:1 )V T 1 (Ŵ H:2 ) T . We are now ready to define the perturbation onŴ H+1 :
∆ H+1 := ∇ 0 (Ŵ H+1:1 )V T 1 (Ŵ H:2 ) T ∇ 0 (Ŵ H+1:1 )V T 1 (Ŵ H:2 ) T F
, so thatŴ H+1 + ∆ H+1 ∈ B (Ŵ H+1 ). Then, observe that ∆ H+1ŴH:2 V 1 , ∇ 0 (Ŵ H+1:1 ) = ∆ H+1 , ∇ 0 (Ŵ H+1:1 )V T 1 (Ŵ H:2 ) T > 0, by definition of ∆ H+1 . In other words, ∆ H+1ŴH:2 V 1 is an ascent direction of 0 atŴ H+1:1 . Now choose the tuples
(P j ) H+1 j=1 = (V 1 ,Ŵ 2 , . . . ,Ŵ H ,Ŵ H+1 + η∆ H+1 ), (Q j ) H+1 j=1 = (V 1 ,Ŵ 2 , . . . ,Ŵ H ,Ŵ H+1 − η∆ H+1 ),
where η ∈ (0, 1] is chosen suitably. It is easy to verify that (P j ) H+1 j=1 , (Q j ) H+1 j=1 ∈ ∏ H+1 j=1 B (Ŵ j ), and that the products P H+1:1 =Ŵ H+1:1 + η∆ H+1ŴH:2 V 1 , Q H+1:1 =Ŵ H+1:1 − η∆ H+1ŴH:2 V 1 .
Since 0 is differentiable, for small enough η ∈ (0, 1], 0 (P H+1:1 ) > 0 (Ŵ H+1:1 ) > 0 (Q H+1:1 ), proving (A.18). This construction is valid for any > 0, so we are done.
Case 2: null(Ŵ H+1:2 ) = null(Ŵ H:2 ). By and large, the proof of this case goes the same, except that we need a little more care on what perturbations to make. Define
j * = max{j ∈ [2, H] | null(Ŵ j:2 ) null(Ŵ j−1:2 )}.
When you start from j = H down to j = 2 and compare null(Ŵ j:2 ) and null(Ŵ j−1:2 ), the first iterate j at which you have null(Ŵ j:2 ) = null(Ŵ j−1:2 ) is j * . If all null spaces of matrices fromŴ H:2 toŴ 2 are equal, j * = 2 which follows from the notational convention that null(Ŵ 1:2 ) = null(I d 1 ) = {0}. According to j * , in Case 2 we perturbŴ 1 ,Ŵ H+1 ,Ŵ H , . . . ,Ŵ j * to get (P j ) H+1 j=1 and (Q j ) H+1 j=1 . Recall the definition of left-null space of matrix A: leftnull(A) = {v | v T A = 0}. By definition of j * , note that null(Ŵ H+1:2 ) = null(Ŵ H:2 ) = · · · = null(Ŵ j * :2 ) ⇔ row(Ŵ H+1:2 ) = row(Ŵ H:2 ) = · · · = row(Ŵ j * :2 ) ⇔ rank(Ŵ H+1:2 ) = rank(Ŵ H:2 ) = · · · = rank(Ŵ j * :2 ), which means the products are all rank-deficient (recall rank(Ŵ H+1:2 ) < d y and all d j ≥ d y ), and hence they all have nontrivial left-null spaces leftnull(Ŵ H:2 ), . . . , leftnull(Ŵ j * :2 ) as well.
We choose some unit vectors as the following:
v 0 = [U r ] ·,1 , v 1 ∈ null(Ŵ j * :2 ) ∩ null(Ŵ j * −1:2 ) ⊥ , v H+1 = [U l ] ·,1 , v H ∈ leftnull(Ŵ H:2 ), · · · v j * ∈ leftnull(Ŵ j * :2 ).
Then, for a γ ∈ (0, ] whose value will be specified later, define
∆ 1 := γv 1 v T 0 ∈ R d 1 ×d x ,
where c j = [C j ] α,β . To show that the matrix product (A.23) is nonzero, it suffices to show that its (α, β)-th entry (A.25) is nonzero. If c 1 = · · · = c H−j * +1 = 0, then with the choice of γ = , (A.25) is trivially nonzero. If some of c 1 , . . . , c H−j * +1 are nonzero, we can scale γ ∈ (0, ] arbitrarily small, so that |c 1 γ + · · · + c H−j * +1 γ H−j * +1 | > |c H−j * +2 γ H−j * +2 |, and thus (A.25) can never be zero. From this, with sufficiently small γ, the matrix product (A.23) is nonzero. Now define the perturbation onŴ j * :
∆ j * := (V H+1:j * +1 ) T ∇ 0 (Ŵ H+1:1 )V T 1 (Ŵ j * −1:2 ) T (V H+1:j * +1 ) T ∇ 0 (Ŵ H+1:1 )V T 1 (Ŵ j * −1:2 ) T F , so thatŴ j * + ∆ j * ∈ B (Ŵ j * ). Then, observe that V H+1:j * +1 ∆ j * Ŵ j * −1:2 V 1 , ∇ 0 (Ŵ H+1:1 ) = tr((V H+1:j * +1 ∆ j * Ŵ j * −1:2 V 1 ) T ∇ 0 (Ŵ H+1:1 )) = tr(∆ T j * (V H+1:j * +1 ) T ∇ 0 (Ŵ H+1:1 )V T 1 (Ŵ j * −1:2 ) T ) = ∆ j * , (V H+1:j * +1 ) T ∇ 0 (Ŵ H+1:1 )V T 1 (Ŵ j * −1:2 ) T > 0.
This means that V H+1:j * +1 ∆ j * Ŵ j * −1:2 V 1 and −V H+1:j * +1 ∆ j * Ŵ j * −1:2 V 1 are ascent and descent directions, respectively, of 0 (R) atŴ H+1:1 . After that, the proof is very similar to the previous case. We can define
(P j ) H+1 j=1 = (V 1 ,Ŵ 2 , . . . ,Ŵ j * −1 ,Ŵ j * + η∆ j * , V j * +1 , . . . , V H+1 ) ∈ ∏ H+1 j=1 B (Ŵ j ) (Q j ) H+1 j=1 = (V 1 ,Ŵ 2 , . . . ,Ŵ j * −1 ,Ŵ j * − η∆ j * , V j * +1 , . . . , V H+1 ) ∈ ∏ H+1 j=1 B (Ŵ j ), where 0 < η ≤ 1 is small enough, to show that by differentiability of 0 (R), we get ((P j ) H+1 j=1 ) > ((Ŵ j ) H+1 j=1 ) > ((Q j ) H+1 j=1 ).
A7.2 Proof of Part 1, if d y ≥ d x
First, note that ∇ 0 (Ŵ H+1:1 )(Ŵ H:1 ) T = 0, because it is ∂ ∂W H+1 evaluated at a critical point (Ŵ j ) H+1 j=1 . This equation implies row(∇ 0 (Ŵ H+1:1 )) ⊥ ⊇ row(Ŵ H:1 ). Since ∇ 0 (Ŵ H+1:1 ) = 0, row(∇ 0 (Ŵ H+1:1 )) ⊥ cannot be the whole R d x , and it is a strict subspace of R d x . Observe thatŴ H:1 ∈ R d H ×d x and d x ≤ d H . Since row(Ŵ H:1 ) ⊆ row(∇ 0 (Ŵ H+1:1 )) ⊥ R d x , this means rank(Ŵ H:1 ) < d x , hence leftnull(Ŵ H:1 ) is not a trivial subspace. Now observe that
leftnull(Ŵ H:1 ) ⊇ leftnull(Ŵ H:2 ) ⊇ · · · ⊇ leftnull(Ŵ H ),
where some of left-null spaces in the right could be zero-dimensional. The procedure of choosing the perturbation depends on these left-null spaces. We can split the proof into two cases: leftnull(Ŵ H:1 ) = leftnull(Ŵ H:2 ) and leftnull(Ŵ H:1 ) = leftnull(Ŵ H:2 ). Because the former case is simpler, we prove the former case first. Before we dive in, again take SVD of ∇ 0 (Ŵ H+1:1 ) = U l ΣU T r . Since ∇ 0 (Ŵ H+1:1 ) = 0, there is at least one positive singular value, so σ max (∇ 0 (Ŵ H+1:1 )) > 0. Recall the notation that [U l ] ·,1 and [U r ] ·,1 are first column vectors of U l and U r , respectively. Case 2: leftnull(Ŵ H:1 ) = leftnull(Ŵ H:2 ). By and large, the proof of this case goes the same, except that we need a little more care on what perturbations to make. Define
j * = min{j ∈ [2, H] | leftnull(Ŵ H:j ) leftnull(Ŵ H:j+1 )}.
When you start from j = 2 up to j = H and compare leftnull(Ŵ H:j ) and leftnull(Ŵ H:j+1 ), the first iterate j at which you have leftnull(Ŵ H:j ) = leftnull(Ŵ H:j+1 ) is j * . If all left-null spaces of matrices fromŴ H:2 toŴ H are equal, j * = H which follows from the notational convention that leftnull(Ŵ H:H+1 ) = leftnull(I d H ) = {0}. According to j * , in Case 2 we perturbŴ H+1 ,Ŵ 1 ,Ŵ 2 , . . . , W j * to get (P j ) H+1 j=1 and (Q j ) H+1 j=1 . By definition of j * , note that leftnull(Ŵ H:1 ) = leftnull(Ŵ H:2 ) = · · · = leftnull(Ŵ H:j * ) ⇔ col(Ŵ H:1 ) = col(Ŵ H:2 ) = · · · = col(Ŵ H:j * ) ⇔ rank(Ŵ H:1 ) = rank(Ŵ H:2 ) = · · · = rank(Ŵ H:j * ) which means the products are all rank-deficient (recall rank(Ŵ H:1 ) < d x and all d j ≥ d x ), and hence they all have nontrivial null spaces null(Ŵ H:2 ), . . . , null(Ŵ H:j * ) as well.
We choose some unit vectors as the following:
v 0 = [U r ] ·,1 , v 1 ∈ null(Ŵ H:2 ), · · · v j * −1 ∈ null(Ŵ H:j * ) v H ∈ leftnull(Ŵ H:j * ) ∩ leftnull(Ŵ H:j * +1 ) ⊥ , v H+1 = [U l ] ·,1 .
Then, for a γ ∈ (0, ] whose value will be specified later, define
∆ 1 := γv 1 v T 0 ∈ R d 1 ×d x , · · · ∆ j * −1 := γv j * −1 v T j * −2 ∈ R d j * −1 ×d j * −2 , ∆ H+1 := γv H+1 v T H ∈ R d y ×d H ,
and V j :=Ŵ j + ∆ j accordingly for j = 1, . . . , j * − 1, H + 1. By definition of ∆ j 's, note that V H+1ŴH:j * V j * −1:1 =V H+1ŴH:j * −1 V j * −2:1 + V H+1ŴH:j * ∆ j * −1 V j * −2:1 = V H+1ŴH:j * −1 V j * −2:1 (A. 26) =V H+1ŴH:j * −2 V j * −3:1 + V H+1ŴH:j * −1 ∆ j * −2 V j * −3:1 = V H+1ŴH:j * −2 V j * −3:1 (A. 27) = · · · =V H+1ŴH:1 + V H+1ŴH:2 ∆ 1 = V H+1ŴH:1 (A.28) =Ŵ H+1:1 + ∆ H+1ŴH:1 =Ŵ H+1:1 , (A.29)
where in (A. 26) we used the definition that v j * −1 ∈ null(Ŵ H:j * ), in (A.27) that v j * −2 ∈ null(Ŵ H:j * −1 ), in (A.28) that v 1 ∈ null(Ŵ H:2 ), and in (A.29) that v H ∈ leftnull(Ŵ H:j * ).
This means that V H+1ŴH:j * +1 ∆ j * V j * −1:1 and −V H+1ŴH:j * +1 ∆ j * V j * −1:1 are ascent and descent directions, respectively, of 0 (R) atŴ H+1:1 . After that, the proof is very similar to the previous case. We can define (P j ) H+1 j=1 = (V 1 , . . . , V j * −1 ,Ŵ j * + η∆ j * ,Ŵ j * +1 , . . . ,Ŵ H , V H+1 ) ∈ ∏ H+1 j=1 B (Ŵ j ) (Q j ) H+1 j=1 = (V 1 , . . . , V j * −1 ,Ŵ j * − η∆ j * ,Ŵ j * +1 , . . . ,Ŵ H , V H+1 ) ∈ ∏ H+1 j=1 B (Ŵ j ), where 0 < η ≤ 1 is small enough, to show that by differentiability of 0 (R), we get ((P j ) H+1 j=1 ) > ((Ŵ j ) H+1 j=1 ) > ((Q j ) H+1 j=1 ).
A7.3 Proof of Part 2(a)
In this part, we show that if ∇ 0 (Ŵ H+1:1 ) = 0 andŴ H+1:1 is a local min of 0 , then (Ŵ j ) H+1 j=1 is a local min of . The proof for local max case can be done in a very similar way.
SinceŴ H+1:1 is a local minimum of 0 , there exists > 0 such that, for any R satisfying R −Ŵ H+1:1 F ≤ , we have 0 (R) ≥ 0 (Ŵ H+1:1 ). We prove that (Ŵ j ) H+1 j=1 is a local minimum of by showing that there exists a neighborhood of (Ŵ j ) H+1 j=1 in which any point (V j ) H+1 j=1 satisfies ((V j ) H+1 j=1 ) ≥ ((Ŵ j ) H+1 j=1 ). Now define 0 < j ≤ 2(H + 1) max Ŵ H+1:j+1 F Ŵ j−1:1 F , 1 .
Observe that a max{a,1} ≤ 1 for a ≥ 0. Then, for all j ∈ [H + 1], pick any V j such that V j −Ŵ j F ≤ j . Denote ∆ j = V j −Ŵ j for all j. Now, by triangle inequality and submultiplicativity of Frobenius norm, (Ŵ H+1 + ∆ H+1 ) · · · (Ŵ 1 +
≤ 2 + O(max j 2 j ) ≤ ,
for small enough j 's. Given this, for any (V j ) H+1 j=1 in the neighborhood of (Ŵ j ) H+1 j=1 defined by j 's, V H+1:1 −Ŵ H+1:1 F ≤ , so 0 (V H+1:1 ) ≥ 0 (Ŵ H+1:1 ), meaning ((V j ) H+1 j=1 ) ≥ ((Ŵ j ) H+1 j=1 ). Thus, (Ŵ j ) H+1 j=1 is a local minimum of .
A7.4 Proof of Part 2(b)
For this part, we want to show that if ∇ 0 (Ŵ H+1:1 ) = 0, then (Ŵ j ) H+1 j=1 is a global min (or max) of if and only ifŴ H+1:1 is a global min (or max) of 0 . We prove this by showing the following: if d j ≥ min{d x , d y } for all j ∈ [H], for any R ∈ R d y ×d x there exists a decomposition (W j ) H+1 j=1 such that R = W H+1:1 .
We divide the proof into two cases: d x ≥ d y and d y ≥ d x . ((W j ) H+1 j=1 ).
Thus, any (Ŵ j ) H+1 j=1 attaining a global min of must have inf R 0 (R) = 0 (Ŵ H+1:1 ), soŴ H+1:1 is also a global min of 0 (R). Conversely, if 0 (Ŵ H+1:1 ) = inf 0 (R), then ((Ŵ j ) H+1 j=1 ) = inf ((W j ) H+1 j=1 ), so (Ŵ j ) H+1 j=1 is a global min of . We can prove the global max case similarly.
A7.5 Proof of Part 3 and 3(a)
Suppose there exists j * ∈ [H + 1] such thatŴ H+1:j * +1 has full row rank andŴ j * −1:1 has full column rank. For simplicity, define A :=Ŵ H+1:j * +1 and B :=Ŵ j * −1:1 . Since A T has linearly independent columns, B T has linearly independent rows, and ∂ /∂W j * = 0 at (Ŵ j ) H+1 j=1 , A T ∇ 0 (Ŵ H+1:1 )B T = 0 =⇒ ∇ 0 (Ŵ H+1:1 ) = 0, hence Parts 2(a) and 2(b) are implied.
For Part 3(a), we want to prove that if (Ŵ j ) H+1 j=1 is a local min of , thenŴ H+1:1 is a local min of 0 . By definition of local min, ∃ > 0 such that, for any (V j ) H+1 j=1 for which V j −Ŵ j F ≤ (for j ∈ [H + 1]), we have ((V j ) H+1 j=1 ) ≥ ((Ŵ j ) H+1 j=1 ). To show thatŴ H+1:1 is a local min of 0 , we have to show there exists a neighborhood ofŴ H+1:1 such that, any point R in that neighborhood satisfies 0 (R) ≥ 0 (Ŵ H+1:1 ). To prove this, we state the following lemma: Lemma A.6. Suppose A :=Ŵ H+1:j * +1 has full row rank and B :=Ŵ j * −1:1 has full column rank. Then, any R satisfying R −Ŵ H+1:1 F ≤ σ min (A)σ min (B) can be decomposed into R = V H+1:1 , where V j * =Ŵ j * + A T (AA T ) −1 (R −Ŵ H+1:1 )(B T B) −1 B T , and V j =Ŵ j for j = j * . Also, V j −Ŵ j F ≤ for all j.
Proof Since A :=Ŵ H+1:j * +1 has full row rank and B :=Ŵ j * −1:1 has full column rank, σ min (A) > 0, σ min (B) > 0, and AA T and B T B are invertible. Consider any R satisfying R −Ŵ H+1:1 F ≤ σ min (A)σ min (B) . Given the definitions of V j 's in the statement of the lemma, we can check the identity that R = V H+1:1 by V H+1:1 = AV j B = AŴ j B + (R −Ŵ H+1:1 ) =Ŵ H+1:1 + (R −Ŵ H+1:1 ) = R. Now It is left to show that V j * −Ŵ j * F ≤ , so that (V j ) H+1 j=1 indeed satisfies V j −Ŵ j F ≤ for all j. We can show that The lemma shows that for any R = V H+1:1 satisfying R −Ŵ H+1:1 F ≤ σ min (A)σ min (B) , we have 0 (R) = 0 (V H+1:1 ) = ((V j ) H+1 j=1 ) ≥ ((Ŵ j ) H+1 j=1 ) = 0 (Ŵ H+1:1 ). We can prove the local maximum part by a similar argument.
Corollary 3 .
3For the counterexample in Theorem 2, the set of activation functions satisfying conditions (C2.1)-(C2.7) include sigmoid, tanh, arctan, ELU, and SELU.
Corollary 5 .
5In addition to the assumptions in Theorem 4, assume that any critical point of 0 is a global min (max). For any critical point
Let null(A) := {v | Av = 0} and leftnull(A) := {v | v T A = 0} be the null space and the left-null space of A, respectively. When A is a square matrix, let tr(A) be the trace of A. For matrices A and B of the same size, A, B = tr(A T B) denotes the usual trace inner product of A and B. Equivalently, A, B = tr(A T B) = tr(AB T ). Let 0 d×m be the all zeros matrix in R d×m . A2 Proof of Theorem 1, Step 2, Case 2 Case 2. J = ∅. We start with a lemma discussing what J = ∅ implies. Lemma A.1. If J = ∅, the following statements hold:
Lemma A.3. Assume there exist real numbers $v_1, v_2, u_1, u_2$ such that conditions (C2.3)-(C2.5) hold. Then, for the perturbed parameters $(W_j, b_j)_{j=1}^{2}$ defined in (A.5),
The main result of this section, Theorem 1, considers the case where linear models cannot fit $Y$, i.e., $Y \ne RX$ for all matrices $R$.¹ With ReLU-like activation (1) and a few mild assumptions, Theorem 1 shows that there exist spurious local minima.

Theorem 1. Suppose that the following conditions hold:

(C1.1) The output dimension is $d_y = 1$, and linear models $RX$ cannot perfectly fit $Y$.
(C1.2) All the data points $x_i$'s are distinct.
(C1.3) The activation function $h$ is $\bar{h}_{s_+, s_-}$.
(C1.4) The hidden layer has width at least 2: $d_1 \ge 2$.
… is any arbitrary fixed positive constant, $[\bar{W}]_{[d_x]}$ gives the first $d_x$ components of $\bar{W}$, and $[\bar{W}]_{d_x+1}$ the last component. Since $\bar{y}_i = [\bar{W}]_{[d_x]} x_i + [\bar{W}]_{d_x+1}$ for any $i$, $\hat{W}_1 x_i + \hat{b}_1 > 0_{d_1}$ (component-wise), given our choice of $\eta$. Thus, all hidden node inputs are positive. Moreover,
¹ That is, given input data matrices $X$ and $Y$, there is no matrix $R$ such that $Y = RX$.
² Although their result overlaps with a subset of Theorem 4, our theorem was obtained independently.
$\ldots)((u_2 h(v_2))^2 + u_2 h(v_2)^3)$. For each function, we now present the appropriate real numbers that satisfy the assumptions.
A5.1 Sigmoid

When $h$ is sigmoid, assumptions (C2.1)-(C2.2) are satisfied by … and assumptions (C2.3)-(C2.7) are satisfied by … Among them, (C2.4)-(C2.5) follow because the sigmoid function is a real analytic function [12].

A5.2 tanh

When $h$ is the hyperbolic tangent, assumptions (C2.1)-(C2.2) are satisfied by …

A5.3 arctan

When $h$ is the inverse tangent, assumptions (C2.1)-(C2.2) are satisfied by … Assumptions (C2.4)-(C2.5) hold because the inverse tangent function is real analytic.

A5.4 ELU and SELU

When $h$ is ELU or SELU, …

… and $V_j := \hat{W}_j + \Delta_j$ accordingly for $j = 1, j^*+1, \dots, H+1$. By the definition of the $\Delta_j$'s, note that
$$V_{H+1:1} = \cdots = \hat{W}_{H+1:1} + \hat{W}_{H+1:2}\Delta_1 = \hat{W}_{H+1:1}, \qquad \text{(A.22)}$$
where in (A.19) we used the definition that $v_{j^*} \in \mathrm{leftnull}(\hat{W}_{j^*:2})$, in (A.20) that $v_{j^*+1} \in \mathrm{leftnull}(\hat{W}_{j^*+1:2})$, in (A.21) that $v_H \in \mathrm{leftnull}(\hat{W}_{H:2})$, and in (A.22) that $v_1 \in \mathrm{null}(\hat{W}_{j^*:2})$.

Now consider the following matrix product: … (A.23). We are going to show that for small enough $\gamma \in (0, \epsilon]$, this product is nonzero. If we expand (A.23), there are many terms in the summation. However, note that the expansion can be arranged in the following form: … (A.24), where $C_j \in \mathbb{R}^{d_{j^*} \times d_{j^*-1}}$ for all $j$ and $C_j$ does not depend on $\gamma$; specifically, … Because $C_0$ is exactly equal to $\partial \ell / \partial W_{j^*}$ evaluated at a critical point $((\hat{W}_j)_{j=1}^{H+1})$, $C_0 = 0$. Also, due to the definitions of the $\Delta_j$'s, … $)^T$ will be a nonzero row vector. Thus, the product $C_{H-j^*+2}$ will be nonzero.

Since $C_{H-j^*+2} \ne 0$, we can pick any index $(\alpha, \beta)$ such that the $(\alpha, \beta)$-th entry of $C_{H-j^*+2}$, denoted $[C_{H-j^*+2}]_{\alpha,\beta}$, is nonzero. Then, the $(\alpha, \beta)$-th entry of (A.24) can be written as …

Note that $(\hat{W}_{H+1:2})^T \nabla \ell_0(\hat{W}_{H+1:1})$ is exactly equal to $\partial \ell / \partial W_1$ evaluated at $(\hat{W}_j)_{j=1}^{H+1}$, hence is zero by the assumption that $(\hat{W}_j)_{j=1}^{H+1}$ is a critical point. Since $v_H \in \mathrm{leftnull}(\hat{W}_{H:2})^{\perp} = \mathrm{col}(\hat{W}_{H:2})$, $(\hat{W}_{H:2})^T v_H$ is a nonzero column vector, and since $v_{H+1} = [U_l]_{\cdot,1}$, $v_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1}) = \sigma_{\max}(\nabla \ell_0(\hat{W}_{H+1:1}))([U_r]_{\cdot,1})^T$, which is a nonzero row vector. From this observation, we can see that $(\hat{W}_{H:2})^T v_H v_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1})$ is nonzero, and so is $(\hat{W}_{H:2})^T V_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1})$. Now define the perturbation on $\hat{W}_1$: … Then, observe that
$$\langle V_{H+1}\hat{W}_{H:2}\Delta_1, \nabla \ell_0(\hat{W}_{H+1:1}) \rangle = \mathrm{tr}((V_{H+1}\hat{W}_{H:2}\Delta_1)^T \nabla \ell_0(\hat{W}_{H+1:1})) = \mathrm{tr}(\Delta_1^T (\hat{W}_{H:2})^T V_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1})) = \langle \Delta_1, (\hat{W}_{H:2})^T V_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1}) \rangle > 0,$$
by the definition of $\Delta_1$. This means that $V_{H+1}\hat{W}_{H:2}\Delta_1$ and $-V_{H+1}\hat{W}_{H:2}\Delta_1$ are ascent and descent directions, respectively, of $\ell_0(R)$ at $\hat{W}_{H+1:1}$. Since $\ell_0$ is a differentiable function, there exists a small enough $0 < \eta \le 1$ that satisfies
$$\ell_0(\hat{W}_{H+1:1} + \eta V_{H+1}\hat{W}_{H:2}\Delta_1) > \ell_0(\hat{W}_{H+1:1}), \qquad \ell_0(\hat{W}_{H+1:1} - \eta V_{H+1}\hat{W}_{H:2}\Delta_1) < \ell_0(\hat{W}_{H+1:1}).$$
Now define … We can check that $(P_j)_{j=1}^{H+1}, (Q_j)_{j=1}^{H+1} \in \prod_{j=1}^{H+1} B_\epsilon(\hat{W}_j)$, and
$$P_{H+1:1} = \hat{W}_{H+1:1} + \eta V_{H+1}\hat{W}_{H:2}\Delta_1, \qquad Q_{H+1:1} = \hat{W}_{H+1:1} - \eta V_{H+1}\hat{W}_{H:2}\Delta_1.$$
By the definition of $\ell((W_j)_{j=1}^{H+1})$, this shows that $\ell((P_j)_{j=1}^{H+1}) > \ell((\hat{W}_j)_{j=1}^{H+1}) > \ell((Q_j)_{j=1}^{H+1})$. This construction holds for any $\epsilon > 0$, proving that $(\hat{W}_j)_{j=1}^{H+1}$ can be neither a local maximum nor a local minimum.

Now consider the following matrix product: … (A.30). We are going to show that for small enough $\gamma \in (0, \epsilon]$, this product is nonzero. If we expand (A.30), there are many terms in the summation. However, note that the expansion can be arranged in the following form: … (A.31), where $C_j \in \mathbb{R}^{d_{j^*} \times d_{j^*-1}}$ for all $j$ and $C_j$ does not depend on $\gamma$; specifically, … Because $C_0$ is exactly equal to $\partial \ell / \partial W_{j^*}$ evaluated at a critical point $((\hat{W}_j)_{j=1}^{H+1})$, $C_0 = 0$.
Also, due to the definitions of the $\Delta_j$'s, … and $v_0 = [U_r]_{\cdot,1}$, the product $v_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1}) v_0 = \sigma_{\max}(\nabla \ell_0(\hat{W}_{H+1:1})) > 0$. Finally, $v_{j^*-1}^T$ is a nonzero row vector. Thus, the product $C_{j^*}$ will be nonzero.

Since $C_{j^*} \ne 0$, we can pick any index $(\alpha, \beta)$ such that the $(\alpha, \beta)$-th entry of $C_{j^*}$, denoted $[C_{j^*}]_{\alpha,\beta}$, is nonzero. Then, the $(\alpha, \beta)$-th entry of (A.31) can be written as … (A.32), where $c_j = [C_j]_{\alpha,\beta}$. To show that the matrix product (A.30) is nonzero, it suffices to show that its $(\alpha, \beta)$-th entry (A.32) is nonzero. If $c_1 = \cdots = c_{j^*-1} = 0$, then with the choice of $\gamma = \epsilon$, (A.32) is trivially nonzero. If some of $c_1, \dots, c_{j^*-1}$ are nonzero, we can scale $\gamma \in (0, \epsilon]$ arbitrarily small, so that $|c_1\gamma + \cdots + c_{j^*-1}\gamma^{j^*-1}| > |c_{j^*}\gamma^{j^*}|$, and thus (A.32) can never be zero. From this, with sufficiently small $\gamma$, the matrix product (A.30) is nonzero.

Now define the perturbation on $\hat{W}_{j^*}$:
$$\Delta_{j^*} := \epsilon \, \frac{(\hat{W}_{H:j^*+1})^T V_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1})(V_{j^*-1:1})^T}{\|(\hat{W}_{H:j^*+1})^T V_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1})(V_{j^*-1:1})^T\|_F},$$
so that $\hat{W}_{j^*} + \Delta_{j^*} \in B_\epsilon(\hat{W}_{j^*})$. Then, observe that
$$\langle V_{H+1}\hat{W}_{H:j^*+1}\Delta_{j^*}V_{j^*-1:1}, \nabla \ell_0(\hat{W}_{H+1:1}) \rangle = \mathrm{tr}((V_{H+1}\hat{W}_{H:j^*+1}\Delta_{j^*}V_{j^*-1:1})^T \nabla \ell_0(\hat{W}_{H+1:1})) = \mathrm{tr}(\Delta_{j^*}^T (\hat{W}_{H:j^*+1})^T V_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1})(V_{j^*-1:1})^T) = \langle \Delta_{j^*}, (\hat{W}_{H:j^*+1})^T V_{H+1}^T \nabla \ell_0(\hat{W}_{H+1:1})(V_{j^*-1:1})^T \rangle > 0.$$
[1] P. Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53-58, 1989.
[2] Y. Bengio, N. L. Roux, P. Vincent, O. Delalleau, and P. Marcotte. Convex neural networks. In Advances in Neural Information Processing Systems, pages 123-130, 2006.
[3] A. Brutzkus and A. Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. In International Conference on Machine Learning, pages 605-614, 2017.
[4] A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In Artificial Intelligence and Statistics, pages 192-204, 2015.
[5] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
[6] S. S. Du, J. D. Lee, Y. Tian, B. Poczos, and A. Singh. Gradient descent learns one-hidden-layer CNN: Don't be afraid of spurious local minima. arXiv preprint arXiv:1712.00779, 2017.
[7] S. Feizi, H. Javadi, J. Zhang, and D. Tse. Porcupine neural networks: (almost) all local optima are global. arXiv preprint arXiv:1710.02196, 2017.
[8] C. D. Freeman and J. Bruna. Topology and geometry of half-rectified network optimization. In International Conference on Learning Representations, 2017.
[9] B. D. Haeffele and R. Vidal. Global optimality in neural network training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7331-7339, 2017.
[10] K. Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, pages 586-594, 2016.
[11] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter. Self-normalizing neural networks. In Advances in Neural Information Processing Systems, pages 972-981, 2017.
[12] S. G. Krantz and H. R. Parks. A Primer of Real Analytic Functions. Springer Science & Business Media, 2002.
[13] T. Laurent and J. von Brecht. Deep linear networks with arbitrary loss: All local minima are global. In International Conference on Machine Learning, pages 2908-2913, 2018.
[14] T. Laurent and J. von Brecht. The multilinear structure of ReLU networks. arXiv preprint arXiv:1712.10132, 2017.
[15] H. Lu and K. Kawaguchi. Depth creates no bad local minima. arXiv preprint arXiv:1702.08580, 2017.
[16] Q. Nguyen and M. Hein. The loss surface of deep and wide neural networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 2603-2612, 2017.
[17] Q. Nguyen and M. Hein. Optimization landscape and expressivity of deep CNNs. arXiv preprint arXiv:1710.10928, 2017.
[18] I. Safran and O. Shamir. Spurious local minima are common in two-layer ReLU neural networks. arXiv preprint arXiv:1712.08968, 2017.
[19] O. Shamir. Are ResNets provably better than linear predictors? arXiv preprint arXiv:1804.06739, 2018.
[20] D. Soudry and Y. Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.
[21] G. Swirszcz, W. M. Czarnecki, and R. Pascanu. Local minima in training of neural networks. arXiv preprint arXiv:1611.06310, 2016.
[22] Y. Tian. An analytical formula of population gradient for two-layered ReLU network and its applications in convergence and critical point analysis. In International Conference on Machine Learning, pages 3404-3413, 2017.
[23] L. Venturi, A. Bandeira, and J. Bruna. Neural networks with finite intrinsic dimension have no spurious valleys. arXiv preprint arXiv:1802.06384, 2018.
[24] C. Wu, J. Luo, and J. D. Lee. No spurious local minima in a two hidden unit ReLU network. In International Conference on Learning Representations Workshop, 2018.
[25] B. Xie, Y. Liang, and L. Song. Diverse neural network learns true target functions. arXiv preprint arXiv:1611.03131, 2016.
[26] X.-H. Yu and G.-A. Chen. On the local minima free condition of backpropagation learning. IEEE Transactions on Neural Networks, 6(5):1300-1303, 1995.
[27] C. Yun, S. Sra, and A. Jadbabaie. Global optimality conditions for deep neural networks. In International Conference on Learning Representations, 2018.
[28] Y. Zhou and Y. Liang. Critical points of neural networks: Analytical forms and landscape properties. In International Conference on Learning Representations, 2018. |
51,926,976 | CODE2SEQ: GENERATING SEQUENCES FROM STRUCTURED REPRESENTATIONS OF CODE | The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present CODE2SEQ: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding. We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to 16M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as state-of-the-art NMT models. | [
3495200,
12718048,
17355,
1998416,
8820379,
1918428
] | CODE2SEQ: GENERATING SEQUENCES FROM STRUCTURED REPRESENTATIONS OF CODE
Uri Alon [email protected]
Facebook AI Research
Technion, Technion
Omer Levy [email protected]
Facebook AI Research
Technion, Technion
Eran Yahav [email protected]
Facebook AI Research
Technion, Technion
CODE2SEQ: GENERATING SEQUENCES FROM STRUCTURED REPRESENTATIONS OF CODE
The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present CODE2SEQ: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding. We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to 16M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as state-of-the-art NMT models.
INTRODUCTION
Modeling the relation between source code and natural language can be used for automatic code summarization (Allamanis et al., 2016), documentation (Iyer et al., 2016), retrieval (Allamanis et al., 2015b), and even generation (Balog et al., 2016;Rabinovich et al., 2017;Yin and Neubig, 2017;Devlin et al., 2017;Murali et al., 2017). In this work, we consider the general problem of generating a natural language sequence from a given snippet of source code.
A direct approach is to frame the problem as a machine translation problem, where the source sentence is the sequence of tokens in the code and the target sentence is a corresponding natural language sequence. This approach allows one to apply state-of-the-art neural machine translation (NMT) models from the sequence-to-sequence (seq2seq) paradigm (Luong et al., 2015; Vaswani et al., 2017), yielding state-of-the-art performance on various code captioning and documentation benchmarks (Iyer et al., 2016; Allamanis et al., 2016; Loyola et al., 2017) despite having extremely long source sequences.
We present an alternative approach for encoding source code that leverages the syntactic structure of programming languages: CODE2SEQ. Specifically, we represent a given code snippet as a set of compositional paths over its abstract syntax tree (AST), where each path is compressed to a fixed-length vector using LSTMs (Hochreiter and Schmidhuber, 1997). During decoding, CODE2SEQ attends over a different weighted sum of the path-vectors to produce each output token, much like NMT models attend over token representations in the source sentence.
We show the effectiveness of our code2seq model on two tasks: (1) code summarization (Figure 1a), where we predict a Java method's name given its body, and (2) code captioning (Figure 1b), where we predict a natural language sentence that describes a given C# snippet. On both tasks, our CODE2SEQ model outperforms models that were explicitly designed for code, such as the model of Allamanis et al. (2016) and CodeNN (Iyer et al., 2016), as well as state-of-the-art NMT models (Luong et al., 2015; Vaswani et al., 2017). To examine the importance of each component of the
Figure 1: Example of (a) code summarization of a Java code snippet, and (b) code captioning of a C# code snippet, along with the predictions produced by our models. The highlighted paths in each example are the top-attended paths in each decoding step. Because of space limitations we included only the top-attended path for each decoding step, but hundreds of paths are attended at each step. Additional examples are presented in Appendix A and Appendix B.
model, we conduct a thorough ablation study. In particular, we show the importance of structural encoding of code, by showing how our model yields a significant improvement over an ablation that uses only token-level information without syntactic paths. To the best of our knowledge, this is the first work to leverage the syntactic structure of code for end-to-end generation of sequences.
REPRESENTING CODE AS AST PATHS
An Abstract Syntax Tree (AST) is a tree which uniquely represents a source code snippet in a given language and grammar. The leaves of the tree are called terminals, and usually refer to user-defined values which represent identifiers and names from the code. The non-leaf nodes are called nonterminals and represent a restricted set of structures in the language, e.g., loops, expressions, and variable declarations. For example, Figure 2c shows a partial AST for the code snippet of Figure 2a. Names (such as num) and types (such as int) are represented as values of terminals; syntactic structures such as variable declaration (VarDec) and a do-while loop (DoStmt) are represented as nonterminals.
Given the AST of a code snippet, we consider all pairwise paths between terminals, and represent them as sequences of terminal and nonterminal nodes. We then use these paths with their terminals' values to represent the code snippet itself. For example, consider the two Java methods of Figure 2. Both of these methods count occurrences of a character in a string. They have exactly the same functionality, although a different implementation, and therefore different surface forms. Encoding these snippets of code as sequences of tokens might overlook the recurring patterns that suggest the common method name. However, a structural observation reveals syntactic paths that are common to both methods, and differ only in a single node of a Do-while statement versus a For statement. This example shows the effectiveness of a syntactic encoding of code. Such an encoder can generalize much better to unseen examples because the AST normalizes a lot of the surface form variance. Since our encoding is compositional, the encoder can generalize even if the paths are not identical (e.g., a For node in one path and a While in the other).
Since a code snippet can contain an arbitrary number of such paths, we sample k paths as the representation of the code snippet. To avoid bias, k new paths are sampled afresh in every training iteration. In Section 5 we show that this runtime-sampling provides regularization and improves results compared to sampling the same k paths for each example in advance.
Formally, we use $\mathcal{C}$ to denote a given snippet of code. Every training iteration, $k$ pairs of terminals are uniformly sampled from within the AST of $\mathcal{C}$. Each pair of terminals $(v^i_1, v^i_{l_i})$ implies a single path between them: $v^i_1 v^i_2 \ldots v^i_{l_i}$. Finally, the input code example is represented as a set of these $k$ random AST paths: $\{v^1_1 v^1_2 \ldots v^1_{l_1}, \ldots, v^k_1 v^k_2 \ldots v^k_{l_k}\}$, where $l_j$ is the length of the $j$th path.

Figure 2: An example of two Java methods that have exactly the same functionality. Although having a different sequential (token-based) representation, considering syntactic patterns reveals recurring paths, which might differ only in a single node (a ForStmt node instead of a Do-while node).
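To illustrate this sampling scheme, the following is a minimal Python sketch (the toy `Node` class and the tiny AST below are hypothetical illustrations, not the paper's actual extraction pipeline). It enumerates terminal pairs, builds the path between two terminals through their lowest common ancestor, and re-samples k paths on every call, mirroring the per-iteration re-sampling described above.

```python
import itertools
import random

class Node:
    """A toy AST node; terminals (leaves) carry a token 'value'."""
    def __init__(self, label, children=(), value=None):
        self.label, self.children, self.value = label, list(children), value
        self.parent = None
        for child in self.children:
            child.parent = self

def terminals(node):
    """All leaves of the subtree rooted at `node`."""
    return [node] if not node.children else [t for c in node.children for t in terminals(c)]

def ast_path(u, v):
    """Node labels along the path u -> lowest common ancestor -> v."""
    u_up = []
    n = u
    while n is not None:
        u_up.append(n)
        n = n.parent
    ancestors = {id(x): i for i, x in enumerate(u_up)}
    v_up = []
    n = v
    while id(n) not in ancestors:   # climb from v until we hit an ancestor of u
        v_up.append(n)
        n = n.parent
    nodes = u_up[:ancestors[id(n)] + 1] + list(reversed(v_up))
    return [x.label for x in nodes]

def sample_paths(root, k, rng=random):
    """Uniformly re-sample k terminal pairs (with replacement), afresh per call."""
    pairs = list(itertools.combinations(terminals(root), 2))
    return [(u.value, ast_path(u, v), v.value)
            for u, v in (rng.choice(pairs) for _ in range(k))]

# A toy tree standing in for the AST of "x = y + 1":
root = Node("Assign", [
    Node("NameExpr", value="x"),
    Node("Plus", [Node("NameExpr", value="y"), Node("IntLit", value="1")]),
])
for value1, path, value2 in sample_paths(root, k=3):
    print(value1, "->", " ".join(path), "->", value2)
```

Calling `sample_paths` once per training iteration reproduces the runtime re-sampling that Section 5 later shows acts as a regularizer.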
MODEL ARCHITECTURE
Our model follows the standard encoder-decoder architecture for NMT (Section 3.1), with the significant difference that the encoder does not read the input as a flat sequence of tokens. Instead, the encoder creates a vector representation for each AST path separately (Section 3.2). The decoder then attends over the encoded AST paths (rather than the encoded tokens) while generating the target sequence. An illustration of our model is shown in Figure 3.
ENCODER-DECODER FRAMEWORK
Contemporary NMT models are largely based on an encoder-decoder architecture (Luong et al., 2015), where the encoder maps an input sequence of tokens $x = (x_1, \ldots, x_n)$ to a sequence of continuous representations $z = (z_1, \ldots, z_n)$. Given $z$, the decoder then generates a sequence of output tokens $y = (y_1, \ldots, y_m)$ one token at a time, hence modeling the conditional probability $p(y_1, \ldots, y_m \mid x_1, \ldots, x_n)$.
At each decoding step, the probability of the next target token depends on the previously generated tokens, and can therefore be factorized as:
$$p(y_1, \ldots, y_m \mid x_1, \ldots, x_n) = \prod_{j=1}^{m} p(y_j \mid y_{<j}, z_1, \ldots, z_n)$$
In attention-based models, at each time step $t$ in the decoding phase, a context vector $c_t$ is computed by attending over the elements in $z$ using the decoding state $h_t$, typically computed by an LSTM:
$$\alpha^t = \mathrm{softmax}(h_t W_a z), \qquad c_t = \sum_{i=1}^{n} \alpha^t_i z_i \qquad (2)$$
The context vector $c_t$ and the decoding state $h_t$ are then combined to predict the current target token $y_t$. Previous work differs in the way the context vector is computed and in the way it is combined with the current decoding state. A standard approach (Luong et al., 2015) is to pass $c_t$ and $h_t$ through a multi-layer perceptron (MLP) and then predict the probability of the next token using softmax:
$$p(y_t \mid y_{<t}, z_1, \ldots, z_n) = \mathrm{softmax}(W_s \tanh(W_c [c_t ; h_t])) \qquad (3)$$
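To ground equations (2)-(3), here is a minimal NumPy sketch of one decoding step of this attention mechanism (the dimensions and variable names are hypothetical; the paper's actual implementation may differ):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(h_t, z, W_a, W_c, W_s):
    """One decoding step: eq. (2) for the context, eq. (3) for the output.
    h_t: (d,) decoder state; z: (n, d) encoded inputs;
    W_a: (d, d); W_c: (2d, d); W_s: (d, vocab)."""
    scores = z @ (W_a @ h_t)              # unnormalized attention scores, shape (n,)
    alpha = softmax(scores)               # attention weights over the n inputs
    c_t = alpha @ z                       # context vector: weighted sum of z_i, shape (d,)
    combined = np.tanh(np.concatenate([c_t, h_t]) @ W_c)   # MLP over [c_t; h_t]
    return softmax(combined @ W_s), alpha                  # p(y_t | ...), weights

d, n, vocab = 4, 5, 7
rng = np.random.default_rng(0)
p, alpha = attention_step(rng.normal(size=d), rng.normal(size=(n, d)),
                          rng.normal(size=(d, d)), rng.normal(size=(2 * d, d)),
                          rng.normal(size=(d, vocab)))
assert np.isclose(p.sum(), 1.0) and np.isclose(alpha.sum(), 1.0)
```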
AST ENCODER
Given a set of AST paths $(x_1, \ldots, x_k)$, our goal is to create a vector representation $z_i$ for each path $x_i = v^i_1 v^i_2 \ldots v^i_{l_i}$. We represent each path separately using a bi-directional LSTM to encode the path, and sub-token embeddings to capture the compositional nature of the terminals' values (the tokens).
Path Representation. Each AST path is composed of nodes and their child indices from a limited vocabulary of up to 364 symbols. We represent each node using a learned embedding matrix $E^{nodes}$ and then encode the entire sequence using the final states of a bi-directional LSTM:
$$h_1, \ldots, h_l = \mathrm{LSTM}(E^{nodes}_{v_1}, \ldots, E^{nodes}_{v_l}) \qquad (4)$$
$$\mathrm{encode\_path}(v_1 \ldots v_l) = [h^{\rightarrow}_l ; h^{\leftarrow}_1] \qquad (5)$$
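The following PyTorch sketch realizes equations (4)-(5) under hypothetical names and sizes (a sketch rather than the released implementation): embed the node sequence, run a bidirectional LSTM, and concatenate the two final states.

```python
import torch
import torch.nn as nn

node_vocab, d_nodes, d_path = 364, 128, 128

embed = nn.Embedding(node_vocab, d_nodes)                        # E^nodes
bilstm = nn.LSTM(d_nodes, d_path // 2, bidirectional=True, batch_first=True)

def encode_path(node_ids):
    """node_ids: LongTensor of shape (batch, path_len). Returns (batch, d_path)."""
    _, (h_n, _) = bilstm(embed(node_ids))    # h_n: (2, batch, d_path // 2)
    # h_n[0] is the forward final state, h_n[1] the backward one: [h_l->; h_1<-]
    return torch.cat([h_n[0], h_n[1]], dim=-1)

paths = torch.randint(0, node_vocab, (3, 9))   # 3 paths of 9 nodes each
print(encode_path(paths).shape)                # torch.Size([3, 128])
```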
Token Representation. The first and last node of an AST path are terminals whose values are tokens in the code. Following Allamanis et al. (2015a), we split code tokens into subtokens; for example, a token with the value ArrayList will be decomposed into Array and List. This is somewhat analogous to byte-pair encoding in NMT (Sennrich et al., 2016), although in the case of programming languages, coding conventions such as camel notation provide us with an explicit partition of each token. Specifically, we use a learned embedding matrix $E^{subtokens}$ to represent each subtoken, and then sum the subtoken vectors to represent the full token:
$$\mathrm{encode\_token}(w) = \sum_{s \in \mathrm{split}(w)} E^{subtokens}_s \qquad (6)$$
The LSTM decoder may also predict subtokens at each step (e.g. when generating method names), although the decoder's subtoken embedding matrix will be different.
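Equation (6) amounts to a camel-case split followed by an embedding sum. A small Python sketch of this step (the regex and the tiny vocabulary are illustrative assumptions, not the paper's exact tokenizer):

```python
import re
import numpy as np

def split_subtokens(token):
    """'ArrayList' -> ['array', 'list']; also handles snake_case and acronyms."""
    parts = re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", token)
    return [p.lower() for p in parts] or [token.lower()]

d = 4
vocab = {"array": 0, "list": 1, "set": 2, "hash": 3}
E_subtokens = np.random.default_rng(0).normal(size=(len(vocab), d))

def encode_token(token):
    """Eq. (6): sum the embeddings of the token's subtokens (unknowns skipped here)."""
    rows = [vocab[s] for s in split_subtokens(token) if s in vocab]
    return E_subtokens[rows].sum(axis=0) if rows else np.zeros(d)

print(split_subtokens("HashSet"))   # ['hash', 'set']
print(encode_token("ArrayList"))    # E[array] + E[list]
```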
Combined Representation. To represent the entire path $x = v_1 \ldots v_l$, we concatenate the path's representation with the token representations of both terminal nodes, and apply a fully-connected layer:
$$z = \tanh(W_{in} [\mathrm{encode\_path}(v_1 \ldots v_l) ; \mathrm{encode\_token}(\mathrm{value}(v_1)) ; \mathrm{encode\_token}(\mathrm{value}(v_l))]) \qquad (7)$$
where $\mathrm{value}$ is the mapping of a terminal node to its associated value, and $W_{in}$ is a $(2d_{path} + 2d_{token}) \times d_{hidden}$ matrix.
Decoder Start State. To provide the decoder with an initial state, we average the combined representations of all the paths:
$$h_0 = \frac{1}{k} \sum_{i=1}^{k} z_i \qquad (8)$$
Unlike typical encoder-decoder models, the order of the input random paths is not taken into account. Each path is encoded separately and the combined representations are aggregated with mean pooling to initialize the decoder's state. This represents the given source code as a set of random paths.
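Putting equations (7)-(8) together, a minimal NumPy sketch (hypothetical dimensions and random stand-ins for the encoders sketched above):

```python
import numpy as np

d_path, d_token, d_hidden, k = 128, 128, 128, 200
rng = np.random.default_rng(0)
W_in = 0.01 * rng.normal(size=(2 * d_path + 2 * d_token, d_hidden))

def combine(path_vec, tok_first, tok_last):
    """Eq. (7): z = tanh(W_in [encode_path; encode_token(v_1); encode_token(v_l)]).
    path_vec has size 2*d_path (the two BiLSTM final states); each token vec has d_token."""
    return np.tanh(np.concatenate([path_vec, tok_first, tok_last]) @ W_in)

# k sampled paths -> k combined vectors -> mean-pooled decoder start state, eq. (8)
zs = np.stack([combine(rng.normal(size=2 * d_path),
                       rng.normal(size=d_token),
                       rng.normal(size=d_token)) for _ in range(k)])
h0 = zs.mean(axis=0)          # order-invariant: the paths form a set, not a sequence
print(zs.shape, h0.shape)     # (200, 128) (128,)
```

Mean pooling is what makes the representation order-invariant: permuting the k paths leaves $h_0$ unchanged.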
Attention. Finally, the decoder generates the output sequence while attending over the combined representations $z_1, \ldots, z_k$, similarly to the way that seq2seq models attend over the source symbols.
EXPERIMENTS
We evaluate our model on two code-to-sequence tasks: summarization (Section 4.1), in which we predict Java methods' names from their bodies, and captioning (Section 4.2), where we generate natural language descriptions of C# code snippets. We thus demonstrate that our approach can produce both method names and natural language outputs, and can encode a code snippet in any language for which an AST can be constructed (i.e., a parser exists).
Setup. The values of all of the parameters are initialized using the initialization heuristic of Glorot and Bengio (2010). We optimize the cross-entropy loss (Rubinstein, 1999; 2001) with a Nesterov momentum (Nesterov, 1983) of 0.95 and an initial learning rate of 0.01, decayed by a factor of 0.95 every epoch. We apply dropout (Srivastava et al., 2014) of 0.25 on the input vectors $x_j$, and a recurrent dropout of 0.5 on the LSTM that encodes the AST paths. We used $d_{tokens} = d_{nodes} = d_{hidden} = d_{target} = 128$. Each LSTM that encodes the AST paths had 128 cells and the decoder LSTM had 320 cells. We used $k = 200$ as the number of random paths for each example.
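For concreteness, this optimizer schedule maps onto standard PyTorch utilities roughly as follows (a sketch with a placeholder model, not the authors' released training script):

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 128)   # placeholder standing in for the full code2seq network

# Cross-entropy loss, SGD with Nesterov momentum 0.95, initial LR 0.01,
# decayed by a factor of 0.95 every epoch, as described in the setup.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.95, nesterov=True)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

for epoch in range(2):        # toy loop over hypothetical batches
    logits = model(torch.randn(8, 128))
    loss = criterion(logits, torch.randint(0, 128, (8,)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()          # one decay step per epoch
```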
CODE SUMMARIZATION
In this task, we predict a Java method's name given its body. As was previously observed (Allamanis et al., 2016; Alon et al., 2018a), this is a good benchmark because a method name in open-source Java projects tends to be succinct and precise, and a method body is often a complete logical unit. We predict the target method name as a sequence of sub-tokens, e.g., setMaxConnectionsPerServer is predicted as the sequence "set max connections per server". The target sequence length is about 3 on average. We adopt the measure used by Allamanis et al. (2016) and Alon et al. (2018a), who measured precision, recall, and F1 score over the target sequence, case-insensitive.
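This metric can be sketched as follows: compare predicted and gold subtoken multisets, case-insensitively (a simple illustrative implementation; the cited papers use their own evaluation scripts):

```python
from collections import Counter

def subtoken_f1(predicted, gold):
    """Precision/recall/F1 over subtokens, case-insensitive.
    predicted, gold: lists of subtokens, e.g. ['set', 'max', 'connections']."""
    pred = Counter(s.lower() for s in predicted)
    ref = Counter(s.lower() for s in gold)
    tp = sum((pred & ref).values())            # multiset intersection
    if tp == 0:
        return 0.0, 0.0, 0.0
    precision = tp / sum(pred.values())
    recall = tp / sum(ref.values())
    return precision, recall, 2 * precision * recall / (precision + recall)

# Predicting 'setMaxConn' for gold 'setMaxConnectionsPerServer':
print(subtoken_f1(["set", "max", "conn"],
                  ["set", "max", "connections", "per", "server"]))
# -> precision 0.667, recall 0.4, F1 0.5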
Data. We experiment with this task across three datasets:
Java-small -Contains 11 relatively large Java projects, which were originally used for 11 distinct models for training and predicting within the scope of the same project (Allamanis et al., 2016). We use the same data, but train and predict across projects: we took 9 projects for training, 1 project for validation and 1 project as our test set. This dataset contains about 700K examples.
Java-med -A new dataset of the 1000 top-starred Java projects from GitHub. We randomly select 800 projects for training, 100 for validation and 100 for testing. This dataset contains about 4M examples and we make it publicly available.
Java-large -A new dataset of the 9500 top-starred Java projects from GitHub that were created since January 2007. We randomly select 9000 projects for training, 200 for validation and 300 for testing. This dataset contains about 16M examples and we make it publicly available.
Baselines. We re-trained all of the baselines on all of the datasets of this task using the original implementations of the authors. We compare CODE2SEQ to the following baselines: Allamanis et al. (2016), who used a convolutional attention network to predict method names; syntactic paths with Conditional Random Fields (CRFs) (Alon et al., 2018b); and code2vec (Alon et al., 2018a). In addition, we compared to three NMT baselines that read the input source code as a stream of tokens: 2-layer bidirectional encoder-decoder LSTMs (split tokens and full tokens) with global attention (Luong et al., 2015), and the Transformer (Vaswani et al., 2017), which achieved state-of-the-art results for translation tasks.
Our model is incomparable to the model of Allamanis et al. (2018) because they targeted a different task of predicting variable names, and are unable to generate target sequences. We could not compare to the work of Liang and Zhu (2018) due to replicability issues. 1
We put significant effort into strengthening the NMT baselines in order to provide a fair comparison: (1) we split tokens into subtokens, as in our model (e.g., HashSet becomes Hash Set); this was shown to improve the results by about 10 F1 points (Table 1); (2) we deliberately kept the original casing of the source tokens since we found it to improve their results; and (3) during inference, we replaced generated UNK tokens with the source tokens that were given the highest attention. For the 2-layer BiLSTM we used embeddings of size 512, each of the encoder and decoder had 512 cells, and the default hyperparameters of OpenNMT (Klein et al., 2017). For the Transformer, we used their original hyperparameters (Vaswani et al., 2017). This resulted in a Transformer model with 169M parameters and a BiLSTM model with 134M parameters, while our code2seq model had only 37M.²

Performance. Table 1 shows the results for the code summarization task. Our model significantly outperforms the baselines in both precision and recall across all three datasets, demonstrating that there is added value in leveraging ASTs to encode source code. Our model improves over the best baseline, BiLSTM with split tokens, by between 4 and 8 F1 points on all benchmarks. The BiLSTM with split tokens consistently achieved about 10 F1 points more than BiLSTM with full tokens, and for this reason we included only a split-token Transformer baseline. Our model outperforms ConvAttention (Allamanis et al., 2016), which was designed specifically for this task, and outperforms Paths+CRFs (Alon et al., 2018b), which used syntactic features. Examples of predictions made by our model and each of the baselines can be found in Appendix B.
Data Efficiency. ConvAttention (Allamanis et al., 2016) performed even better than the Transformer on the Java-small dataset, but could not scale to and leverage the larger datasets. Paths+CRFs showed very low results on the Java-small dataset, which is expected due to the sparse nature of their paths and the CRF model. When comparing our model with the best of the baselines (BiLSTM with split tokens), we see that our model achieves a relative improvement of 7.3% on Java-large, but the smaller the dataset, the larger the relative difference becomes: 13% on Java-med and 22% on Java-small. Compared to the Transformer, the relative improvement is 23% on Java-large and 37% on Java-small. These results show the data efficiency of our architecture: while the data-hungry NMT baselines require large datasets, our model can leverage both small and large datasets.
CODE CAPTIONING
For this task we consider predicting a full natural language sentence given a short C# code snippet. We used the dataset of CodeNN (Iyer et al., 2016), which consists of 66,015 pairs of questions and answers from StackOverflow. They used a semi-supervised classifier to filter irrelevant examples and asked human annotators to provide two additional titles for the examples in the test set, making a total of three reference titles for each code snippet. The target sequence length in this task is about 10 on average. This dataset is especially challenging as it is orders of magnitude smaller than the code summarization datasets. Additionally, StackOverflow code snippets are typically short, incomplete at times, and aim to provide an answer to a very specific question. We evaluated using BLEU score with smoothing, using the same evaluation scripts as Iyer et al. (2016).
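A rough stand-in for this evaluation uses NLTK's smoothed sentence-level BLEU rather than the exact scripts of Iyer et al. (2016), so absolute numbers may differ; the reference titles below are made up for illustration:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [                        # three illustrative human reference titles
    "how to add a child node to a treeview".split(),
    "adding child nodes to a treeview in c #".split(),
    "insert a node into a treeview".split(),
]
hypothesis = "add a child node to a treeview in c #".split()

smooth = SmoothingFunction().method4  # one of NLTK's standard smoothing variants
print(sentence_bleu(references, hypothesis, smoothing_function=smooth))
```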
Baselines. We present results compared to CodeNN, 2-layer bidirectional LSTMs with attention, and the Transformer. As before, we provide a fair comparison by splitting tokens into subtokens and replacing UNK during inference. We also include numbers from baselines used by Iyer et al. (2016).

Table 2: Our model outperforms previous work in the code captioning task. † Results previously reported by Iyer et al. (2016) and verified by us. Another visualization can be found in Appendix C.

| Model | BLEU |
|---|---|
| MOSES† (Koehn et al., 2007) | 11.57 |
| IR† | 13.66 |
| SUM-NN† (Rush et al., 2015) | 19.31 |
| 2-layer BiLSTM | 19.78 |
| Transformer (Vaswani et al., 2017) | 19.68 |
| CodeNN† (Iyer et al., 2016) | 20.53 |
| code2seq | 23.04 |
Results. Table 2 summarizes the results for the code captioning task. Our model achieves a BLEU score of 23.04, which improves by 2.51 points (12.2% relative) over CodeNN, which introduced this dataset, and over all the other baselines, including BiLSTMs and the Transformer, which achieved slightly lower results than CodeNN. Examples of predictions made by our model and each of the baselines can be found in Appendix E. These results show that when the training examples are short and incomplete code snippets, our model generalizes better to unseen examples than a shallow textual token-level approach, thanks to its syntactic observation of the data.
ABLATION STUDY
To better understand the importance of different components of our model, we conducted an extensive ablation study. We vary our model in different ways and measure the change in performance. These experiments were performed for the code summarization task, on the validation set of the Java-med dataset. We examine several alternative designs:
1. No AST nodes: instead of encoding an AST path using an LSTM, take only the first and last terminal values to construct an input vector.
2. No decoder: no sequential decoding; instead, predict the target sequence as a single symbol using a single softmax layer.
3. No token splitting: no subtoken encoding; instead, embed the full token.
4. No tokens: use only the AST nodes without using the values associated with the terminals.
5. No attention: decode the target sequence given the initial decoder state, without attention.
6. No random: no re-sampling of k paths in each iteration; instead, sample in advance and use the same k paths for each example throughout the whole training process.

Table 3 shows the results of these alternatives. As seen, not encoding AST nodes resulted in a degradation especially in precision: a decrease of 5.42 points, compared to 2.66 for recall. Using a single prediction with no decoder reduces recall by more than a third. This shows that the method name prediction task should be addressed as sequential prediction, despite the methods' relatively short names. Using no token splitting or no tokens at all drastically hurt the results, showing the significance of encoding both subtokens and syntactic paths. Despite the low results of no tokens, it is still surprising that the model can achieve around half the score of the full model, as using no tokens is equivalent to reasoning about code that has no identifier names, types, APIs, or constant values, which can be very difficult even for a human. The no attention experiment shows the contribution of attention in our model, which is very close in its relative value to the contribution of attention in seq2seq models (Luong et al., 2015). The no random experiment shows the positive contribution of sampling k different paths afresh on every training iteration, instead of using the same sample of paths for each example during the whole training. This approach provides data-level regularization that further improves a model that is already very powerful. Another visualization can be found in Appendix C.
RELATED WORK
The growing availability of open source repositories creates new opportunities for using machine learning to process source code en masse. Several papers model code as a sequence of tokens (Iyer et al., 2016;Allamanis et al., 2016;Loyola et al., 2017), characters (Bielik et al., 2017), and API calls (Raychev et al., 2014). While sometimes obtaining satisfying results, these models treat code as a sequence rather than a tree. This forces these techniques to implicitly re-learn the (predefined) syntax of the programming language, wasting resources and reducing accuracy.
Code representation models that use syntactic information have usually been evaluated on relatively easier tasks, which mainly focus on "filling the blanks" in a given program (Alon et al., 2018b; Allamanis et al., 2018) or semantic classification of code snippets (Alon et al., 2018a). Moreover, none of the models that use syntactic relations are compositional, and therefore the number of possible syntactic relations is fixed either before or after training, and often consumes a lot of memory. The syntactic paths of Alon et al. (2018b;a) are represented monolithically, and are therefore limited to only a subset of the paths that were observed enough times during training. As a result, they cannot represent unseen relations. In contrast, by representing AST paths node-by-node using LSTMs, our model can represent and use any syntactic path in any unseen example. Further, our model decodes the output sequence step-by-step while attending over the input paths, and can thus generate unseen sequences, compared to code2vec (Alon et al., 2018a), which has a closed vocabulary. Allamanis et al. (2018) represent code with Gated Graph Neural Networks. Nodes in the graph represent identifiers, and edges represent syntactic and semantic relations in the code such as "ComputedFrom" and "LastWrite". The kinds of edges (features) are designed for the semantics of a specific programming language, for a specific task, and require an expert to think of and implement.
In contrast, our model makes minimal assumptions about the input language and is general enough to require neither expert semantic knowledge nor the manual design of features. Our model can therefore be easily implemented for various input languages. Liang and Zhu (2018) presented a Tree-RNN model for learning code, but its evaluation contains many irregularities (for example, the seq2seq baselines were deprived of non-alphanumeric tokens, which are critical for capturing assignments, method calls, and other basic operations).
CONCLUSION
We presented a novel code-to-sequence model which considers the unique syntactic structure of source code with a sequential modeling of natural language. The core idea is to sample paths in the Abstract Syntax Tree of a code snippet, encode those paths with an LSTM and attend to them while generating the target sequence.
We demonstrate our approach by using it to predict method names across three datasets of varying sizes, and to predict natural language captions given partial and short code snippets, in two programming languages. Our model performs significantly better than previous programming-language-oriented works and state-of-the-art NMT models applied in our setting.
We believe that the principles presented in this paper can serve as a basis for a wide range of tasks which involve source code and natural language, and can be extended to other kinds of generated outputs. To this end, we make all our code, datasets, and trained models publicly available.
A CODE CAPTIONING EXAMPLES

Figure 5 contains examples from our test set for the code captioning task in C#, along with the prediction of our model and each of the baselines. Figure 4 shows a timestep-by-timestep example, with the symbol decoded at each timestep and the top-attended path at that step. The width of the path is proportional to the attention it was given by the model (because of space limitations we included only the top-attended path for each decoding step, but hundreds of paths are attended at each step).

B CODE SUMMARIZATION

Figure 7 contains examples from our test set for the code summarization task in Java, along with the prediction of our model and each of the baselines. The presented predictions are made by models that were trained on the same Java-large dataset.
C CODE CAPTIONING RESULTS

Figure 8 shows a bar chart of the BLEU score of our model and the baselines in the code captioning task (predicting natural language descriptions for C# code snippets). The numbers are the same as in Table 2.

D CODE SUMMARIZATION RESULTS

Figure 9 shows a bar chart of the F1 score of our model and the baselines in the code summarization task (predicting method names in Java). The numbers are the F1 columns from Table 1.

E ABLATION STUDY RESULTS

Figure 10 shows a bar chart of the relative decrease in precision and recall for each of the ablations described in Section 5 and presented in Table 3.
Sensitivity to k. We experimented with different values of k, the number of sampled paths from each example (which we set to 200 in the final model). Values lower than k = 100 showed worse results; decreasing to k = 100 showed a minor degradation, and increasing to k = 300 did not seem to consistently improve.
Figure 3: Our model encodes each AST path as a vector, and uses their average as the decoder's start state. The decoder generates an output sequence while attending over the encoded paths.
Figure 4: Example of code captioning for a C# code snippet from our test set. The text boxes at the bottom of each figure are the predictions produced by our model at each decoding step. The highlighted paths in each figure are the top-attended paths in each decoding step, and their widths are proportional to their attention weight (because of space limitations we included only the top-attended path for each decoding step, but hundreds of paths are attended at each step).

Figure 8: Visualization of the BLEU score of our model compared to the baselines, for the code captioning task. The values are the same as in Table 2. Our model achieves significantly higher results than the baselines.
Figure 10: The relative decrease in precision and recall for each of the ablations, compared to the full model.
Table 1: Our model significantly outperforms previous PL-oriented and NMT models (values are precision / recall / F1). Another visualization can be found in Appendix D.

| Model | Java-small | Java-med | Java-large |
|---|---|---|---|
| ConvAttention (Allamanis et al., 2016) | 50.25 / 24.62 / 33.05 | 60.82 / 26.75 / 37.16 | 60.71 / 27.60 / 37.95 |
| Paths+CRFs (Alon et al., 2018b) | 8.39 / 5.63 / 6.74 | 32.56 / 20.37 / 25.06 | 32.56 / 20.37 / 25.06 |
| code2vec (Alon et al., 2018a) | 18.51 / 18.74 / 18.62 | 38.12 / 28.31 / 32.49 | 48.15 / 38.40 / 42.73 |
| 2-layer BiLSTM (no token splitting) | 32.40 / 20.40 / 25.03 | 48.37 / 30.29 / 37.25 | 58.02 / 37.73 / 45.73 |
| 2-layer BiLSTM | 42.63 / 29.97 / 35.20 | 55.15 / 41.75 / 47.52 | 63.53 / 48.77 / 55.18 |
| Transformer (Vaswani et al., 2017) | 38.13 / 26.70 / 31.41 | 50.11 / 35.01 / 41.22 | 59.13 / 40.58 / 48.13 |
| code2seq | 50.64 / 37.40 / 43.02 | 61.24 / 47.07 / 53.23 | 64.03 / 55.02 / 59.19 |
| Absolute gain over BiLSTM | +8.01 / +7.43 / +7.82 | +6.09 / +5.32 / +5.71 | +0.50 / +6.25 / +4.01 |
Table 3: Variations on the code2seq model, performed on the dev set of Java-med.

| Model | Precision | Recall | F1 | ΔF1 |
|---|---|---|---|---|
| code2seq (original model) | 60.93 | 45.77 | 52.27 | |
| No AST nodes (only tokens) | 55.51 | 43.11 | 48.53 | -3.74 |
| No decoder | 47.99 | 28.96 | 36.12 | -16.15 |
| No token splitting | 48.53 | 34.80 | 40.53 | -11.74 |
| No tokens (only AST nodes) | 33.78 | 21.23 | 26.07 | -26.20 |
| No attention | 57.00 | 41.89 | 48.29 | -3.98 |
| No random (sample k paths in advance) | 59.08 | 44.07 | 50.49 | -1.78 |
Figure 9: Visualization of the F1 score of our model compared to the baselines, for the code summarization task, across datasets. The values are the F1 columns from Table 1. Our model achieves significantly higher results than the baselines.
While the code of Liang and Zhu (2018) is technically available, it lacks running instructions. We also tried running our model on their benchmarks, but could not obtain the same preprocessed and train/test-partitioned data. The authors could not provide instructions or data by the time of this submission.
We also trained versions of the NMT baselines in which we down-matched the sizes and number of parameters to our model. These baselines seemed to benefit from more parameters, so the results reported here are for the versions that had many more parameters than our model.
Examples from Figure 5 (code captioning, C#):

| Model | Prediction |
|---|---|
| MOSES† (Koehn et al., 2007) | How can TreeView TreeView a TreeView nodes from XML parentText string to a treeview node from a TreeView parentText of a tree treeNodeDivisions from to child childText XML node of MDI child childText created in a tree nodes in |
| IR† | How to set the name of a tabPage progragmatically |
| SUM-NN† (Rush et al., 2015) | how to get data from xml file in c# |
| 2-layer BiLSTM (split tokens) | how to add child nodes to treeview |
| Transformer (split tokens) | how to add child node in treeview in c # |
| CodeNN† (Iyer et al., 2016) | How to get all child nodes in TreeView ? |
| code2seq (this work) | add a child node to a treeview in c # |

var excel = new ExcelQueryFactory("excelFileName");
var firstRow = excel.Worksheet().First();
var companyName = firstRow["CompanyName"];

| Model | Prediction |
|---|---|
| MOSES† (Koehn et al., 2007) | How into string based on an firstRow a companyName firstRow ? How to |
| IR† | Facebook C# SDK Get Current User |
| SUM-NN† (Rush et al., 2015) | how can i get the value of a string? |
| 2-layer BiLSTM (split tokens) | how to get the value of a cell in excel using c # |
| Transformer (split tokens) | getting the first row in excel |
| CodeNN† (Iyer et al., 2016) | how do I get the value of an xml file in c # ? |
| code2seq (this work) | get the value of a column in excel using c # |

| Model | Prediction |
|---|---|
| CodeNN† (Iyer et al., 2016) | How to get the value of an array in C # ? |
| code2seq (this work) | get the image from a pdf file in c # |

void Main() {
    string text = File.ReadAllText(@"T:\File1.txt");
    int num = 0;
    text = (Regex.Replace(text, "map", delegate(Match m) { return "map" + num++; }));
    File.WriteAllText(@"T:\File1.txt", text);
}

| Model | Prediction |
|---|---|
| MOSES† (Koehn et al., 2007) | How to File then How to HTML ? C # How to Write to |
| IR† | C# remove extra carriage returns from Stream |
| SUM-NN† (Rush et al., 2015) | how do i create a text file in c# |
| 2-layer BiLSTM (split tokens) | how to read a text file from a text file |
| Transformer (split tokens) | how to write a . txt file in c # |
| CodeNN† (Iyer et al., 2016) | how to read a text file in c # ? |
| code2seq (this work) | replace a string in a text file |

Examples from Figure 7 (code summarization, Java):

| Model | Prediction |
|---|---|
| ConvAttention (Allamanis et al., 2016) | add |
| Paths+CRFs (Alon et al., 2018b) | call |
| code2vec (Alon et al., 2018a) | log response |
| 2-layer BiLSTM (full tokens) | handle request |
| 2-layer BiLSTM (split tokens) | report child request |
| Transformer (split tokens) | add child |
| Gold | add child request |
| code2seq (this work) | add child request |

public static int ______(int value) {
    return value <= 0 ? 1 : value >= 0x40000000 ? 0x40000000
        : 1 << (32 - Integer.numberOfLeadingZeros(value - 1));
}

| Model | Prediction |
|---|---|
| ConvAttention (Allamanis et al., 2016) | get |
| Paths+CRFs (Alon et al., 2018b) | test bit inolz |
| code2vec (Alon et al., 2018a) | multiply |
| 2-layer BiLSTM (full tokens) | next power of two |
| 2-layer BiLSTM (split tokens) | { (replaced UNK) |
| Transformer (split tokens) | get bit length |
| Gold | find next positive power of two |
| code2seq (this work) | get power of two |

| Model | Prediction |
|---|---|
| ConvAttention (Allamanis et al., 2016) | is |
| Paths+CRFs (Alon et al., 2018b) | equals |
| code2vec (Alon et al., 2018a) | contains ignore case |
| 2-layer BiLSTM (full tokens) | contains ignore case |
| 2-layer BiLSTM (split tokens) | contains |
| Transformer (split tokens) | contains |
| Gold | contains ignore case |
| code2seq (this work) | contains ignore case |

Figure 7: Java examples from our test set for the code summarization task, along with the prediction of our model and each of the baselines.
Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. Suggesting accurate method and class names. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2015, pages 38-49, New York, NY, USA, 2015a. ACM.
Miltiadis Allamanis, Daniel Tarlow, Andrew D. Gordon, and Yi Wei. Bimodal modelling of source code and natural language. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of JMLR Proceedings, pages 2123-2132. JMLR.org, 2015b.
Miltiadis Allamanis, Hao Peng, and Charles A. Sutton. A convolutional attention network for extreme summarization of source code. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 2091-2100, 2016.
Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. Learning to represent programs with graphs. In ICLR, 2018.
Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. code2vec: Learning distributed representations of code. arXiv preprint arXiv:1803.09473, 2018a.
Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. A general path-based representation for predicting program properties. In Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2018, pages 404-419, New York, NY, USA, 2018b. ACM.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. DeepCoder: Learning to write programs. arXiv preprint arXiv:1611.01989, 2016.
Pavol Bielik, Veselin Raychev, and Martin T. Vechev. PHOG: Probabilistic model for code. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 2933-2942, 2016.
Pavol Bielik, Veselin Raychev, and Martin Vechev. Program synthesis for character level language modeling. In ICLR, 2017.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Pushmeet Kohli. RobustFill: Neural program learning under noisy I/O. In International Conference on Machine Learning, pages 990-998, 2017.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249-256, 2010.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, November 1997.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers, 2016.
G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. OpenNMT: Open-source toolkit for neural machine translation. ArXiv e-prints, 2017.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07, pages 177-180, Stroudsburg, PA, USA, 2007. Association for Computational Linguistics.
Yuding Liang and Kenny Qili Zhu. Automatic generation of text descriptive comments for code blocks. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018, 2018.
Pablo Loyola, Edison Marrese-Taylor, and Yutaka Matsuo. A neural architecture for generating natural language descriptions from source code changes. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 287-292. Association for Computational Linguistics, 2017.
A neural architecture for generating natural language descriptions from source code changes. Pablo Loyola, Edison Marrese-Taylor, Yutaka Matsuo, 10.18653/v1/P17-2045Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational Linguistics2Short Papers)Pablo Loyola, Edison Marrese-Taylor, and Yutaka Matsuo. A neural architecture for generating natural language descriptions from source code changes. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 287- 292. Association for Computational Linguistics, 2017. doi:10.18653/v1/P17-2045. URL http: //www.aclweb.org/anthology/P17-2045.
Effective approaches to attention-based neural machine translation. Thang Luong, Hieu Pham, Christopher D Manning, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalThang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1412-1421, 2015. URL http://aclweb.org/anthology/D/D15/D15-1166.pdf.
Bayesian sketch learning for program synthesis. CoRR, abs/1703.05698. Vijayaraghavan Murali, Swarat Chaudhuri, Chris Jermaine, Vijayaraghavan Murali, Swarat Chaudhuri, and Chris Jermaine. Bayesian sketch learning for pro- gram synthesis. CoRR, abs/1703.05698, 2017. URL http://arxiv.org/abs/1703.
A method for solving the convex programming problem with convergence rate o (1/kˆ2). E Yurii, Nesterov, In Dokl. Akad. Nauk SSSR. 269Yurii E Nesterov. A method for solving the convex programming problem with convergence rate o (1/kˆ2). In Dokl. Akad. Nauk SSSR, volume 269, pages 543-547, 1983.
Abstract syntax networks for code generation and semantic parsing. Maxim Rabinovich, Mitchell Stern, Dan Klein, 10.18653/v1/P17-1105Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational Linguistics1Association for Computational LinguisticsMaxim Rabinovich, Mitchell Stern, and Dan Klein. Abstract syntax networks for code genera- tion and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1139-1149. Association for Com- putational Linguistics, 2017. doi:10.18653/v1/P17-1105. URL http://www.aclweb.org/ anthology/P17-1105.
Code completion with statistical language models. Veselin Raychev, Martin Vechev, Eran Yahav, http:/doi.acm.org/10.1145/2666356.2594321SIGPLAN Not. 496Veselin Raychev, Martin Vechev, and Eran Yahav. Code completion with statistical language models. SIGPLAN Not., 49(6):419-428, June 2014. ISSN 0362-1340. doi:10.1145/2666356.2594321. URL http://doi.acm.org/10.1145/2666356.2594321.
Predicting program properties from "big code. Veselin Raychev, Martin Vechev, Andreas Krause, http:/doi.acm.org/10.1145/2676726.2677009Proceedings of the 42Nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '15. the 42Nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '15New York, NY, USAACMVeselin Raychev, Martin Vechev, and Andreas Krause. Predicting program properties from "big code". In Proceedings of the 42Nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '15, pages 111-124, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3300-9. doi:10.1145/2676726.2677009. URL http://doi.acm.org/ 10.1145/2676726.2677009.
Probabilistic model for code with decision trees. Veselin Raychev, Pavol Bielik, Martin Vechev, http:/doi.acm.org/10.1145/2983990.2984041Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2016. the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2016New York, NY, USAACMVeselin Raychev, Pavol Bielik, and Martin Vechev. Probabilistic model for code with decision trees. In Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2016, pages 731-747, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4444-9. doi:10.1145/2983990.2984041. URL http://doi.acm.org/10.1145/2983990.2984041.
The cross-entropy method for combinatorial and continuous optimization. Reuven Rubinstein, Methodology and computing in applied probability. 12Reuven Rubinstein. The cross-entropy method for combinatorial and continuous optimization. Methodology and computing in applied probability, 1(2):127-190, 1999.
Combinatorial optimization, cross-entropy, ants and rare events. Y Reuven, Rubinstein, 54Stochastic optimization: algorithms and applicationsReuven Y Rubinstein. Combinatorial optimization, cross-entropy, ants and rare events. Stochastic optimization: algorithms and applications, 54:303-363, 2001.
A neural attention model for abstractive sentence summarization. Alexander M Rush, Sumit Chopra, Jason Weston, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAlexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 379-389, 2015. URL http://aclweb.org/anthology/D/D15/D15-1044.pdf.
Neural machine translation of rare words with subword units. Rico Sennrich, Barry Haddow, Alexandra Birch, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsBerlin, Germany1Association for Computational LinguisticsRico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany, August 2016. As- sociation for Computational Linguistics. URL http://www.aclweb.org/anthology/ P16-1162.
Dropout: a simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, Journal of machine learning research. 151Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 15(1):1929-1958, 2014.
Sequence to sequence learning with neural networks. Ilya Sutskever, Oriol Vinyals, Quoc V Le, Advances in neural information processing systems. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112, 2014.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Infor- mation Processing Systems, pages 6000-6010, 2017.
A syntactic neural model for general-purpose code generation. Pengcheng Yin, Graham Neubig, 10.18653/v1/P17-1041Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational Linguistics1Long Papers)Pengcheng Yin and Graham Neubig. A syntactic neural model for general-purpose code genera- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 440-450. Association for Computational Linguistics, 2017. doi:10.18653/v1/P17-1041. URL http://www.aclweb.org/anthology/P17-1041. |
209,977,508 | META-Q-LEARNING | This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL. | [] | META-Q-LEARNING
Rasool Fakoor
Amazon Web Services
Pratik Chaudhari
University of Pennsylvania
Stefano Soatto
Amazon Web Services
Alexander Smola
Amazon Web Services
META-Q-LEARNING
Published as a conference paper at ICLR 2020
This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL.
1 INTRODUCTION
Figure 1: How well does meta-RL work? Average returns on validation tasks compared for two prototypical meta-RL algorithms, MAML (Finn et al., 2017) and PEARL (Rakelly et al., 2019), with those of a vanilla Q-learning algorithm named TD3 (Fujimoto et al., 2018b) that was modified to incorporate a context variable that is a representation of the trajectory from a task (TD3-context). Even without any meta-training and adaptation on a new task, TD3-context is competitive with these sophisticated algorithms.
Reinforcement Learning (RL) algorithms have demonstrated good performance on simulated data. There are however two main challenges in translating this performance to real robots: (i) robots are complex and fragile which precludes extensive data collection, and (ii) a real robot may face an environment that is different than the simulated environment it was trained in. This has fueled research into Meta-Reinforcement Learning (meta-RL) which develops algorithms that "meta-train" on a large number of different environments, e.g., simulated ones, and aim to adapt to a new environment with few data.
How well does meta-RL work today? Fig. 1 shows the performance of two prototypical meta-RL algorithms on four standard continuous-control benchmarks.¹ We compared them to the following simple baseline: an off-policy RL algorithm (TD3 by Fujimoto et al. (2018b)) which was trained to maximize the average reward over all training tasks and modified to use a "context variable" that represents the trajectory. All algorithms in this figure use the same evaluation protocol. It is surprising that this simple non-meta-learning-based method is competitive with state-of-the-art meta-RL algorithms. This is the first contribution of our paper: we demonstrate that it is not necessary to meta-train policies to do well on existing benchmarks.
Our second contribution is an off-policy meta-RL algorithm named Meta-Q-Learning (MQL) that builds upon the above result. MQL uses a simple meta-training procedure: it maximizes the average rewards across all meta-training tasks using off-policy updates to obtain
$$\theta_{\text{meta}} = \arg\max_{\theta} \; \frac{1}{n} \sum_{k=1}^{n} \mathbb{E}_{\tau \sim \mathcal{D}^{k}} \big[ \ell^{k}(\theta) \big] \tag{1}$$
where $\ell^{k}(\theta)$ is the objective evaluated on the transition $\tau$ obtained from the task $\mathcal{D}^{k}(\theta)$, e.g., the 1-step temporal-difference (TD) error would set $\ell^{k}(\theta) = -\mathrm{TD}^{2}(\theta; \tau)$. This objective, which we call the multi-task objective, is the simplest form of meta-training.
For adapting the policy to a new task, MQL samples transitions from the meta-training replay buffer that are similar to those from the new task. This amplifies the amount of data available for adaptation but it is difficult to do because of the large potential bias. We use techniques from the propensity estimation literature for performing this adaptation and the off-policy updates of MQL are crucial to doing so. The adaptation phase of MQL solves
$$\arg\max_{\theta} \; \mathbb{E}_{\tau \sim \mathcal{D}^{\text{new}}} \big[ \ell^{\text{new}}(\theta) \big] + \mathbb{E}_{\tau \sim \mathcal{D}_{\text{meta}}} \big[ \beta(\tau; \mathcal{D}^{\text{new}}, \mathcal{D}_{\text{meta}}) \, \ell^{\text{new}}(\theta) \big] - \big( 1 - \widehat{\mathrm{ESS}} \big) \, \lVert \theta - \theta_{\text{meta}} \rVert_{2}^{2} \tag{2}$$

where $\mathcal{D}_{\text{meta}}$ is the meta-training replay buffer, the propensity score $\beta(\tau; \mathcal{D}^{\text{new}}, \mathcal{D}_{\text{meta}})$ is the odds of a transition $\tau$ belonging to $\mathcal{D}^{\text{new}}$ versus $\mathcal{D}_{\text{meta}}$, and $\widehat{\mathrm{ESS}}$ is the Effective Sample Size between $\mathcal{D}^{\text{new}}$ and $\mathcal{D}_{\text{meta}}$ that is a measure of the similarity of the new task with the meta-training tasks. The first term computes off-policy updates on the new task, the second term performs $\beta(\cdot)$-weighted off-policy updates on old data, while the third term is an automatically adapting proximal term that prevents degradation of the policy during adaptation.
We perform extensive experiments in Sec. 4.2 including ablation studies using standard meta-RL benchmarks that demonstrate that MQL policies obtain higher average returns on new tasks even if they are meta-trained for fewer time-steps than state-of-the-art algorithms.
2 BACKGROUND

This section introduces notation and formalizes the meta-RL problem. We discuss techniques for estimating the importance ratio between two probability distributions in Sec. 2.2.
Consider a Markov Decision Process (MDP) denoted by

$$x_{t+1} = f^{k}(x_t, u_t, \xi_t), \qquad x_0 \sim p_0^{k}, \tag{3}$$
where $x_t \in X \subset \mathbb{R}^{d}$ are the states and $u_t \in U \subset \mathbb{R}^{p}$ are the actions. The dynamics $f^{k}$ is parameterized by $k \in \{1, \ldots, n\}$ where each $k$ corresponds to a different task. The domain of all these tasks, $X$ for the states and $U$ for the actions, is the same. The distribution $p_0^{k}$ denotes the initial state distribution and $\xi_t$ is the noise in the dynamics. Given a deterministic policy $u_\theta(x_t)$, the action-value function for $\gamma$-discounted future rewards $r_t^{k} := r^{k}(x_t, u_\theta(x_t))$ over an infinite time-horizon is
$$q^{k}(x, u) = \mathbb{E}_{\xi_{(\cdot)}} \left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t}^{k} \;\middle|\; x_0 = x, \; u_0 = u, \; u_t = u_\theta(x_t) \right]. \tag{4}$$
Note that we have assumed that different tasks have the same state and action space and may only differ in their dynamics $f^{k}$ and reward function $r^{k}$. Given one task $k \in \{1, \ldots, n\}$, the standard Reinforcement Learning (RL) formalism solves for

$$\theta^{k} = \arg\max_{\theta} \; \ell^{k}(\theta) \qquad \text{where} \qquad \ell^{k}(\theta) = \mathbb{E}_{x \sim p_0} \big[ q^{k}(x, u_\theta(x)) \big]. \tag{5}$$
Let us denote the dataset of all states, actions and rewards pertaining to a task $k$ and policy $u_\theta(x)$ by

$$\mathcal{D}^{k}(\theta) = \Big\{ \big( x_t, \; u_\theta(x_t), \; r^{k}, \; x_{t+1} = f^{k}(x_t, u_\theta(x_t), \xi_t) \big) \Big\}_{t \ge 0, \; x_0 \sim p_0^{k}, \; \xi_{(\cdot)}};$$

we will often refer to $\mathcal{D}^{k}$ as the "task" itself. The Deterministic Policy Gradient (DPG) algorithm (Silver et al., 2014) for solving (5) learns a $\varphi$-parameterized approximation $q_\varphi$ to the optimal value function.

The importance ratio between two densities $q(x)$ and $p(x)$ can be estimated by fitting a logistic classifier that distinguishes samples of the two: labeling $m$ samples from one distribution $z = 1$ and $m$ samples from the other $z = -1$, the classifier is fitted on the combined $2m$ samples by solving
$$w^{*} = \min_{w} \; \frac{1}{2m} \sum_{(x, z)} \log \big( 1 + e^{-z \, w^{\top} x} \big) + c \, \lVert w \rVert^{2}. \tag{11}$$
This gives

$$\beta(x) = \frac{P(z = -1 \mid x)}{P(z = 1 \mid x)} = e^{-w^{* \top} x}. \tag{12}$$
Normalized Effective Sample Size ($\widehat{\mathrm{ESS}}$): A quantity related to $\beta(x)$ is the normalized Effective Sample Size ($\widehat{\mathrm{ESS}}$), which we define as the relative number of samples from the target distribution $p(x)$ required to obtain an estimator with performance (say, variance) equal to that of the importance sampling estimator (10). It is not possible to compute the $\widehat{\mathrm{ESS}}$ without knowing both densities $q(x)$ and $p(x)$ but there are many heuristics for estimating it. A popular one in the Monte Carlo literature (Kong, 1992; Smith, 2013; Elvira et al., 2018) is
$$\widehat{\mathrm{ESS}} = \frac{1}{m} \, \frac{\big( \sum_{k=1}^{m} \beta(x_k) \big)^{2}}{\sum_{k=1}^{m} \beta(x_k)^{2}} \;\in\; [0, 1] \tag{13}$$
where $X = \{x_1, \ldots, x_m\}$ is some finite batch of data. Observe that if two distributions $q$ and $p$ are close then the $\widehat{\mathrm{ESS}}$ is close to one; if they are far apart, the $\widehat{\mathrm{ESS}}$ is close to zero.
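As an illustration, (11)-(13) can be estimated in a few lines of Python. The sketch below uses scikit-learn's LogisticRegression as the classifier; the feature construction and the regularization constant are placeholder assumptions, not the exact implementation used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_and_ess(x_target, x_proposal, reg_c=1.0):
    """Fit a logistic classifier to distinguish target samples (z = 1)
    from proposal samples (z = -1); return beta(x) on the proposal
    samples and the normalized effective sample size of eq. (13)."""
    X = np.vstack([x_target, x_proposal])
    z = np.concatenate([np.ones(len(x_target)), -np.ones(len(x_proposal))])
    clf = LogisticRegression(C=reg_c).fit(X, z)  # regularized logistic fit, cf. (11)

    # beta(x) = P(z = -1 | x) / P(z = 1 | x), evaluated on proposal samples, cf. (12)
    proba = clf.predict_proba(x_proposal)          # columns follow clf.classes_
    p_neg = proba[:, list(clf.classes_).index(-1.0)]
    beta = p_neg / (1.0 - p_neg)

    # eq. (13): (sum beta)^2 / (m * sum beta^2), lies in [0, 1]
    ess = (beta.sum() ** 2) / (len(beta) * (beta ** 2).sum())
    return beta, ess

# Example: two nearly overlapping Gaussians give an ESS close to 1.
rng = np.random.RandomState(0)
beta, ess = propensity_and_ess(rng.randn(200, 8), rng.randn(200, 8) + 0.1)
print(round(ess, 3))
```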
3 MQL
This section describes the MQL algorithm. We begin by describing the meta-training procedure of MQL including a discussion of multi-task training in Sec. 3.1. The adaptation procedure is described in Sec. 3.2.
3.1 META-TRAINING

MQL performs meta-training using the multi-task objective. Note that if one sets
$$\ell^{k}_{\text{meta}}(\theta) \equiv \ell^{k}(\theta) = \mathbb{E}_{x \sim p_0^{k}} \big[ q^{k}(x, u_\theta(x)) \big] \tag{14}$$
in (8), then the parameters $\theta_{\text{meta}}$ are such that they maximize the average returns over all tasks from the meta-training set. We use an off-policy algorithm named TD3 (Fujimoto et al., 2018b) as the building block and solve for
$$\theta_{\text{meta}} = \arg\min_{\theta} \; \frac{1}{n} \sum_{k=1}^{n} \mathbb{E}_{\tau \sim \mathcal{D}^{k}} \big[ \mathrm{TD}^{2}(\theta) \big], \tag{15}$$
where $\mathrm{TD}(\cdot)$ is defined in (7). As is standard in TD3, we use two action-value functions parameterized by $\varphi_1$ and $\varphi_2$ and take their minimum to compute the target in (7). This trick, known as "double-Q-learning", reduces the over-estimation bias. Let us emphasize that (14) is a special case of the procedure outlined in (8). The following remark explains why MQL uses the multi-task objective as opposed to the meta-training objective used, for instance, in existing gradient-based meta-RL algorithms.
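For concreteness, a minimal sketch of the multi-task loss (15) with the clipped double-Q target is shown below. The per-task replay buffers and the critic/policy modules (q1, q2, q1_tgt, q2_tgt, pi_tgt) are assumed interfaces, not the authors' code, and the shapes of the sampled tensors are assumed to be mutually consistent.

```python
import torch

def multi_task_td_loss(buffers, q1, q2, q1_tgt, q2_tgt, pi_tgt, gamma=0.99):
    """Average 1-step TD^2 loss over tasks, eq. (15), with the
    clipped double-Q target used by TD3 to reduce over-estimation."""
    losses = []
    for buf in buffers:                       # one replay buffer per task k
        x, u, r, x_next = buf.sample()        # mini-batch of transitions
        with torch.no_grad():
            u_next = pi_tgt(x_next)
            # double-Q: take the minimum of the two target critics
            q_next = torch.min(q1_tgt(x_next, u_next), q2_tgt(x_next, u_next))
            target = r + gamma * q_next
        td1 = q1(x, u) - target
        td2 = q2(x, u) - target
        losses.append((td1 ** 2 + td2 ** 2).mean())
    return torch.stack(losses).mean()         # (1/n) sum over the n tasks
```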
Remark 1. Let us compare the critical points of the $m$-step MAML objective (9) to those of the multi-task objective which uses (14). As is done by the authors in Nichol et al. (2018), we can perform a Taylor series expansion around the parameters $\theta$ to obtain

$$\nabla \ell^{k}_{\text{meta}}(\theta) = \nabla \ell^{k}(\theta) + 2\alpha(m-1) \big( \nabla^{2} \ell^{k}(\theta) \big) \, \nabla \ell^{k}(\theta) + O(\alpha^{2}). \tag{16}$$
Further, note that $\nabla \ell^{k}_{\text{meta}}$ in (16) is also the gradient of the loss

$$\ell^{k}(\theta) + \alpha(m-1) \, \big\lVert \nabla \ell^{k}(\theta) \big\rVert_{2}^{2} \tag{17}$$
up to first order. This lends a new interpretation: MAML is attracted towards regions in the loss landscape that under-fit on individual tasks, since parameters with a large $\lVert \nabla \ell^{k} \rVert_{2}$ will be far from the local maxima of $\ell^{k}(\theta)$. The parameters $\alpha$ and $m$ control this under-fitting; the larger the number of gradient steps, the larger the under-fitting effect. This remark suggests that the adaptation speed of gradient-based meta-learning comes at the cost of under-fitting on the tasks.
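As a quick check of this remark, differentiating (17) directly recovers (16) up to the $O(\alpha^2)$ remainder:

$$\nabla \Big[ \ell^{k}(\theta) + \alpha(m-1) \big\lVert \nabla \ell^{k}(\theta) \big\rVert_{2}^{2} \Big] = \nabla \ell^{k}(\theta) + 2\alpha(m-1) \big( \nabla^{2} \ell^{k}(\theta) \big) \nabla \ell^{k}(\theta),$$

since $\nabla \lVert g(\theta) \rVert_{2}^{2} = 2 \big( \nabla g(\theta) \big)^{\top} g(\theta)$ with $g = \nabla \ell^{k}$ and the Hessian $\nabla^{2} \ell^{k}$ is symmetric.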
3.1.1 DESIGNING CONTEXT

As discussed in Sec. 1 and 4.4, the identity of the task in meta-RL can be thought of as the hidden variable of an underlying partially-observable MDP. The optimal policy therefore depends on the entire trajectory of states, actions and rewards. We design a recurrent context variable $z_t$ that depends on $\{(x_i, u_i, r_i)\}_{i \le t}$. We set $z_t$ to the hidden state at time $t$ of a Gated Recurrent Unit (GRU by Cho et al. (2014)) model. All the policies $u_\theta(x)$ and value functions $q_\varphi(x, u)$ in MQL are conditioned on the context and implemented as $u_\theta(x, z)$ and $q_\varphi(x, u, z)$. Any other recurrent model can be used to design the context; we used a GRU because it offers a good trade-off between a rich representation and computational complexity.
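A minimal sketch of this context design in PyTorch follows; the dimensions, the (x, u, r) input packing and the two-layer policy head are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class ContextPolicy(nn.Module):
    """Deterministic policy u_theta(x, z) whose context z_t is the hidden
    state of a GRU run over the (state, action, reward) sequence so far."""
    def __init__(self, x_dim, u_dim, z_dim=20, hidden=256):
        super().__init__()
        self.gru = nn.GRU(input_size=x_dim + u_dim + 1,  # (x, u, r) tuples
                          hidden_size=z_dim, batch_first=True)
        self.pi = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, u_dim), nn.Tanh())

    def forward(self, x_t, history):
        # history: (batch, t, x_dim + u_dim + 1); z_t is the last GRU state
        _, z_t = self.gru(history)            # z_t: (1, batch, z_dim)
        z_t = z_t.squeeze(0)
        return self.pi(torch.cat([x_t, z_t], dim=-1))
```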
Remark 2 (MQL uses a deterministic context that is not permutation invariant). We have aimed for simplicity while designing the context. The context in MQL is built using an off-the-shelf model like a GRU and is not permutation invariant. Indeed, the direction of time affords crucial information about the dynamics of a task to the agent, e.g., a Half-Cheetah running forward versus backward has arguably the same state trajectory but in a different order. Further, the context in MQL is a deterministic function of the trajectory. Both these aspects are different from the context used by Rakelly et al. (2019), who design an inference network and sample a probabilistic context conditioned on a moving window. RL algorithms are quite complex and challenging to reproduce. Current meta-RL techniques, which build upon them, further exacerbate this complexity. Our demonstration that a simple context variable is enough is an important contribution.
3.2 ADAPTATION TO A NEW TASK
We next discuss the adaptation procedure which adapts the meta-trained policy $\theta_{\text{meta}}$ to a new task $\mathcal{D}^{\text{new}}$ with few data. MQL optimizes the adaptation objective introduced in (2) in two steps.
1. Vanilla off-policy adaptation: The first step is to update the policy using the new data as

$$\arg\max_{\theta} \; \mathbb{E}_{\tau \sim \mathcal{D}^{\text{new}}} \big[ \ell^{\text{new}}(\theta) \big] - \frac{\lambda}{2} \, \lVert \theta - \theta_{\text{meta}} \rVert_{2}^{2}. \tag{18}$$
The quadratic penalty $\lVert \theta - \theta_{\text{meta}} \rVert^{2}$ keeps the parameters close to $\theta_{\text{meta}}$. This is crucial to reducing the variance of the model that is adapted using few data from the new task (Reddi et al., 2015). Off-policy learning is critical in this step because of its sample efficiency. We initialize $\theta$ to $\theta_{\text{meta}}$ while solving (18).
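A hedged sketch of this first step is below; loss_new stands in for whatever off-policy objective implements $\ell^{\text{new}}(\theta)$ and theta_meta is a list of frozen copies of the meta-trained parameters, both assumptions for illustration.

```python
import torch

def proximal_adaptation_step(policy, theta_meta, batch_new, loss_new,
                             optimizer, lam=0.5):
    """One update of eq. (18): maximize the off-policy objective on
    new-task data while staying close to theta_meta."""
    # quadratic proximal term ||theta - theta_meta||^2 over all parameters
    prox = sum(((p - p0) ** 2).sum()
               for p, p0 in zip(policy.parameters(), theta_meta))
    # maximize the objective == minimize its negative, plus the penalty
    loss = -loss_new(policy, batch_new) + 0.5 * lam * prox
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```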
2. Importance-ratio corrected off-policy updates: The second step of MQL exploits the meta-training replay buffer. Meta-training tasks $\mathcal{D}_{\text{meta}}$ are disjoint from $\mathcal{D}^{\text{new}}$ but because they are expected to come from the same task distribution, transitions collected during meta-training can potentially be exploited to adapt the policy. This is difficult to do on two counts. First, the meta-training transitions do not come from $\mathcal{D}^{\text{new}}$. Second, even for transitions from the same task, it is non-trivial to update the policy because of extrapolation error (Fujimoto et al., 2018a): the value function has high error on states it has not seen before. Our use of the propensity score to reweigh transitions is a simpler version of the conditional generative model used by Fujimoto et al. (2018a) in this context.
MQL fits a logistic classifier on a mini-batch of transitions from the meta-training replay buffer and the transitions collected from the new task in step 1. The context variable $z_t$ is the feature for this classifier. The logistic classifier estimates the importance ratio $\beta(\tau; \mathcal{D}^{\text{new}}, \mathcal{D}_{\text{meta}})$ and can be used to reweigh data from the meta-training replay buffer for taking updates as

$$\arg\max_{\theta} \; \mathbb{E}_{\tau \sim \mathcal{D}_{\text{meta}}} \big[ \beta(\tau; \mathcal{D}^{\text{new}}, \mathcal{D}_{\text{meta}}) \, \ell^{\text{new}}(\theta) \big] - \frac{\lambda}{2} \, \lVert \theta - \theta_{\text{meta}} \rVert_{2}^{2}. \tag{19}$$

Figure 2: Average undiscounted return of TD3 and TD3-context compared with PEARL for validation tasks from four meta-RL environments. The agent fails to learn if the policy is conditioned only on the state. In contrast, everything else remaining the same, if TD3 is provided access to context, the rewards are much higher. In spite of not adapting on the validation tasks, TD3-context is comparable to PEARL.
We have again included a quadratic penalty $\lVert \theta - \theta_{\text{meta}} \rVert^{2}$ that keeps the new parameters close to $\theta_{\text{meta}}$. Estimating the importance ratio involves solving a convex optimization problem on few samples (typically, 200 from the new task and 200-400 from the meta-training tasks). This classifier allows MQL to exploit the large amount of past data. In practice, we perform as many as 100× more weight updates using (19) than (18).

Remark 3 (Picking the coefficient λ). Following Fakoor et al. (2019), we pick $\lambda = 1 - \widehat{\mathrm{ESS}}$ for both the steps (18-19). This relaxes the quadratic penalty if the new task is similar to the meta-training tasks ($\widehat{\mathrm{ESS}}$ is large) and vice-versa. While $\lambda$ could be tuned as a hyper-parameter, our empirical results show that adapting it using $\widehat{\mathrm{ESS}}$ is a simple and effective heuristic.
Remark 4 (Details of estimating the importance ratio). It is crucial to ensure that the logistic classifier for estimating $\beta$ generalizes well if we are to reweigh transitions in the meta-training replay buffer that are different than the ones the classifier was fitted upon. We do so in two ways: (i) the regularization coefficient in (11) is chosen to be relatively large, so that we prefer false negatives over risking false positives; (ii) transitions with very high $\beta$ are valuable for updating (19) but cause a large variance in stochastic gradient descent-based updates, so we clip $\beta$ before taking the update in (19). The clipping constant is a hyper-parameter and is given in Sec. 4. MQL requires having access to the meta-training replay buffer during adaptation. This is not a debilitating requirement and there are a number of clustering techniques that can pick important transitions from the replay buffer if a robotic agent is limited by available hard-disk space. The meta-training replay buffer is at most 3 GB for the experiments in Sec. 4.
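Combining Remarks 3 and 4, the second step can be sketched as β-weighted off-policy updates with a clipped propensity score and λ set to $1 - \widehat{\mathrm{ESS}}$. The helper names (meta_buffer.sample, beta_fn, and a per-sample reduce=False mode of loss_new) are assumptions for illustration:

```python
import torch

def reweighted_adaptation(policy, theta_meta, meta_buffer, beta_fn, ess,
                          loss_new, optimizer, n_updates=100, beta_clip=1.2):
    """Updates of eq. (19): beta-weighted off-policy updates on old data
    with the proximal coefficient set to lam = 1 - ESS (Remark 3)."""
    lam = 1.0 - ess
    for _ in range(n_updates):
        batch = meta_buffer.sample()
        # clip beta to control the variance of the update (Remark 4)
        beta = beta_fn(batch).clamp(max=beta_clip)
        prox = sum(((p - p0) ** 2).sum()
                   for p, p0 in zip(policy.parameters(), theta_meta))
        # per-sample objectives weighted by beta, then averaged
        loss = (-(beta * loss_new(policy, batch, reduce=False)).mean()
                + 0.5 * lam * prox)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```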
4 EXPERIMENTS
This section presents the experimental results of MQL. We first discuss the setup and provide details of the benchmark in Sec. 4.1. This is followed by empirical results and ablation experiments in Sec. 4.2.
4.1 SETUP
Tasks and algorithms: We use the MuJoCo (Todorov et al., 2012) simulator with OpenAI Gym (Brockman et al., 2016) on continuous-control meta-RL benchmark tasks. These tasks have different rewards and randomized system parameters (Walker-2D-Params) and have been used in previous papers such as Finn et al. (2017); Rothfuss et al. (2018); Rakelly et al. (2019). We compare against standard baseline algorithms, namely MAML (TRPO (Schulman et al., 2015) variant) (Finn et al., 2017), RL2 (Duan et al., 2016), ProMP (Rothfuss et al., 2018) and PEARL (Rakelly et al., 2019). We obtained the training curves and hyper-parameters for all of these algorithms from the published code by Rakelly et al. (2019).
We will compare the above algorithms against: (i) vanilla TD3 (Fujimoto et al., 2018a) without any adaptation on new tasks, (ii) TD3-context: TD3 with the GRU-based context of Sec. 3.1.1 without any adaptation, and (iii) MQL: TD3 with context and adaptation on the new task using the procedure in Sec. 3.2. All three variants use the multi-task objective for meta-training (15). We use Adam (Kingma & Ba, 2014) for optimizing all the loss functions in this paper.
Evaluation: Current meta-RL benchmarks lack a systematic evaluation procedure.² For each environment, Rakelly et al. (2019) constructed a fixed set of meta-training tasks ($\mathcal{D}_{\text{meta}}$) and a validation set of tasks $\mathcal{D}^{\text{new}}$ that are disjoint from the meta-training set. To enable direct comparison with published empirical results, we closely followed the evaluation code of Rakelly et al. (2019) to create these tasks. We also use the exact same evaluation protocol as that of these authors, e.g., 200 time-steps of data from the new task, or the number of evaluation episodes. We report the undiscounted return on the validation tasks with statistics computed across 5 random seeds.

Figure 3: Comparison of the average undiscounted return of MQL (orange) against existing meta-RL algorithms on continuous-control environments. We compare against four existing algorithms, namely MAML (green), RL2 (red), ProMP (purple) and PEARL (blue). In all environments except Walker-2D-Params and Ant-Goal-2D, MQL is better than or comparable to existing algorithms in terms of both sample complexity and final returns.
² For instance, training and validation tasks are not explicitly disjoint in Finn et al. (2017) and Rothfuss et al. (2018), and these algorithms may benefit during adaptation from having seen the same task before. The OpenAI Gym environments used in Finn et al. (2017); Rothfuss et al. (2018); Rakelly et al. (2019) provide different rewards for the same task. The evaluation protocol in existing papers, e.g., the length of the episode for a new task or the amount of data available for adaptation from the new task, is not consistent. This makes reproducing experiments and comparing numerical results extremely difficult.

4.2 RESULTS
Our first result, in Fig. 2, is to show that vanilla off-policy learning with context, without any adaptation is competitive with state of the art meta-RL algorithms. We used a standard implementation of TD3 and train on the meta-training tasks using the multi-task objective (15). Hyperparameters for these tasks are provided in Appendix D. This result is surprising and had gone unnoticed in the current literature. Policies that have access to the context can easily generalize to the validation tasks and achieve performance that is comparable to more sophisticated meta-RL algorithms.
We next evaluate MQL against existing meta-RL benchmarks on all environments. The results are shown in Fig. 3. We see that for all environments except Walker-2D-Params and Ant-Goal-2D, MQL obtains comparable or better returns on the validation tasks. In most cases, in particular for the challenging Humanoid-Direc-2D environment, MQL converges faster than existing algorithms. MAML and ProMP require about 100M time-steps to converge to returns that are significantly worse than the returns of off-policy algorithms like MQL and PEARL. Compare the training curve for TD3-context for the Ant-Goal-2D environment in Fig. 2 with that of the same environment in Fig. 3: the former shows a prominent dip in performance as meta-training progresses; this dip is absent in Fig. 3 and can be attributed to the adaptation phase of MQL.
4.3 ABLATION EXPERIMENTS
We conduct a series of ablation studies to analyze the different components of the MQL algorithm. We use two environments for this purpose, namely Half-Cheetah-Fwd-Back and Ant-Fwd-Back.

Figure 4: Ablation studies to examine various components of MQL.

Fig. 4a shows that the adaptation in MQL in (18) and (19) improves performance. Also observe that MQL has a smaller standard deviation in the returns as compared to TD3-context, which does not perform any adaptation; this can be seen as the adaptation phase making up for the lost performance of the meta-trained policy on a difficult task. Next, we evaluate the importance of the additional data from the replay buffer in MQL. Fig. 4b compares the performance of MQL with and without updates in (19). We see that the old data, even if it comes from different tasks, is useful to improve the performance on top of (18). Fig. 4c shows the effectiveness of setting $\lambda = 1 - \widehat{\mathrm{ESS}}$ as compared to a fixed value of $\lambda = 0.5$. We see that modulating the quadratic penalty with $\widehat{\mathrm{ESS}}$ helps; the effect is minor for the tasks in Sec. 4.3. The ideal value of $\lambda$ depends on a given task and using $1 - \widehat{\mathrm{ESS}}$ can help to adjust to different tasks without the need to do a hyper-parameter search per task. Finally, Fig. 5 shows the evolution of $\lambda$ and $\beta(z)$ during meta-training. The coefficient $\lambda$ is about 0.55 and $\beta(z)$ is 0.8 for a large fraction of the time. The latter indicates that propensity score estimation is successful in sampling transitions from the meta-training replay buffer that are similar to the validation tasks. The value of $\lambda$ remains relatively unchanged during training. This value indicates the fraction of transitions in the old data that are similar to those from the new tasks; since there are two distinct tasks in Ant-Fwd-Back, the value $\lambda = 0.55$ is appropriate.

Figure 5: Evolution of λ and β(z) during meta-training.
5 RELATED WORK
Learning to learn: The idea of building an inductive bias for learning a new task by training on a large number of related tasks was established in a series of works (Utgoff, 1986; Schmidhuber, 1987; Baxter, 1995; Thrun, 1996; Thrun & Pratt, 2012). These papers propose building a base learner that fits on each task and a meta-learner that learns properties of the base learners to output a new base learner for a new task. The recent literature instantiates this idea in two forms: (i) the meta-learner directly predicts the base-learner (Wang et al., 2016; Snell et al., 2017) and (ii) the meta-learner learns the updates of the base-learner (Bengio et al., 1992; Hochreiter et al., 2001; Finn et al., 2017).
Meta-training versus multi-task training: Meta-training aims to train a policy that can be adapted efficiently on a new task. Conceptually, the improved efficiency of a meta-learner comes from two things: (i) building a better inductive bias to initialize the learning (Schmidhuber et al., 1997; Baxter, 1995; 2000; Mitchell, 1980), or (ii) learning a better learning procedure (Bengio et al., 1997; Lee et al., 2019). The two notions of meta-learning above are complementary to each other and, in fact, most of the recent literature using deep neural networks, e.g., MAML (Finn et al., 2017) and Prototypical Networks (Snell et al., 2017), conforms to the first notion of building a better inductive bias.
The multi-task training objective in MQL is the simplest possible instantiation of this idea: it maximizes the average reward on all tasks and learns a better prior without explicitly training for improving adaptation. This aspect of MQL coincides with a recent trend in meta-learning for image classification where it has been observed that modifications to episodic meta-training (Snell et al., 2017; Gidaris & Komodakis, 2018; Chen et al., 2018), or even foregoing meta-training completely (Dhillon et al., 2019), perform better. We speculate two reasons for this phenomenon: (i) meta-training methods are complex to implement and tune, and (ii) powerful function classes such as deep neural networks may have leftover capacity to adapt to a new task even if they are not explicitly trained for adaptation.
Context-based approaches: Both forms of meta-learning above have been employed relatively successfully for image classification (Snell et al., 2017; Ravi & Larochelle, 2016; Finn et al., 2017). It has however been difficult to replicate that empirical performance in RL: sensitivity to hyper-parameters (Henderson et al., 2018) precludes directly predicting the base-learner while long-range temporal dependencies make it difficult to learn the updates of the base learner (Nichol et al., 2018). Recent methods for meta-RL instead leverage context and learn a policy that depends not just on the current state $x_t$ but on the previous history. This may be done in a recurrent fashion (Heess et al., 2015; Hausknecht & Stone, 2015) or by learning a latent representation of the task (Rakelly et al., 2019). Context is a powerful construct: as Fig. 1 shows, even a simple vanilla RL algorithm (TD3) when combined with context performs comparably to state-of-the-art meta-RL algorithms. However, context is a meta-training technique; it does not suggest a way to adapt a policy to a new task. For instance, Rakelly et al. (2019) do not update parameters of the policy on a new task. They rely on the latent representation of the context variable generalizing to new tasks. This is difficult if the new task is different from the training tasks; we discuss this further in Sec. 3.1.1.
Policy-gradient-based algorithms versus off-policy methods: Policy-gradient-based methods have high sample complexity (Ilyas et al., 2018). This is particularly limiting for meta-RL (Finn et al., 2017; Rothfuss et al., 2018; Houthooft et al., 2018) where one (i) trains on a large number of tasks and (ii) aims to adapt to a new task with few data. Off-policy methods offer substantial gains in sample complexity. This motivates our use of off-policy updates for both meta-training and adaptation. Off-policy updates allow using past data from other policies. MQL exploits this substantially; it takes up to 100× more updates using old data than new data during adaptation. Off-policy algorithms are typically very sensitive to hyper-parameters (Fujimoto et al., 2018a) but we show that MQL is robust to such sensitivity because it adapts automatically to the distribution shift using the Effective Sample Size (ESS).
Propensity score estimation has been extensively studied in both statistics (Robert & Casella, 2013; Quionero-Candela et al., 2009) and RL (Dudík et al., 2011; Jiang & Li, 2015; Kang et al., 2007; Bang & Robins, 2005). It is typically used to reweigh data from the proposal distribution to compute estimators on the target distribution. MQL uses propensity scores in a novel way: we fit a propensity score estimator on a subset of the meta-training replay buffer and use this model to sample transitions from the replay buffer that are similar to the new task. The off-policy updates in MQL are essential to exploiting this data. Setting the coefficient of the proximal term in the adaptation-phase objective (18-19) using the effective sample size (ESS) is inspired by the recent work of Fakoor et al. (2019).
6 DISCUSSION
The algorithm proposed in this paper, namely MQL, builds upon three simple ideas. First, Q-learning with context is sufficient to be competitive on current meta-RL benchmarks. Second, maximizing the average reward of training tasks is an effective meta-learning technique. The meta-training phase of MQL is significantly simpler than that of existing algorithms and yet it achieves comparable performance to the state of the art. This suggests that we need to re-think meta-learning in the context of rich function approximators such as deep networks. Third, if one is to adapt to new tasks with few data, it is essential to exploit every available avenue. MQL recycles data from the meta-training replay buffer using propensity estimation techniques. This data is essentially free and is completely neglected by other algorithms. This idea can potentially be used in problems outside RL such as few-shot and zero-shot image classification.
Finally, this paper sheds light on the nature of benchmark environments in meta-RL. The fact that even vanilla Q-learning with a context variable, without meta-training and without any adaptation, is competitive with state-of-the-art algorithms indicates that (i) training and validation tasks in the current meta-RL benchmarks are quite similar to each other and (ii) current benchmarks may be insufficient to evaluate meta-RL algorithms. Both of these are a call to action and point to the need to invest resources towards creating better benchmark problems for meta-RL that drive the innovation of new algorithms.
A PSEUDO-CODE
The pseudo-code for MQL during training and adaptation is given in Algorithm 1 and Algorithm 2. After MQL is trained for a given environment as described in Algorithm 1, it returns the meta-trained policy and a replay buffer containing the training tasks.

Algorithm 1: MQL – Meta-training
Input: Set of training tasks D_meta
1 Initialize the replay buffer
2 Initialize parameters θ of an off-policy method, e.g., TD3
3 while meta-training is not done do
4   Gather data from a training task using policy π_θ while feeding transitions through the context GRU
5   Add the trajectory to the replay buffer
6   b ← Sample mini-batch from buffer
7   Update parameters θ using mini-batch b and Eqn. (15)
8 θ_meta ← θ
9 return θ_meta, replay buffer
Next, Algorithm 2 runs the adaptation procedure which adapts the meta-trained policy to a test task D with few data. To do so, MQL optimizes the adaptation objective in two steps. After gathering data from a test task D, MQL first updates the policy using the new data (line 4). MQL then fits a logistic classifier on a mini-batch of transitions from the meta-training replay buffer and the transitions collected from the test task, and then estimates ESS (lines 5-6). Finally, the adaptation step runs for n iterations (lines 7-10) in which MQL can exploit past data, using the propensity score to decide whether or not a given sample is related to the current test task.

Algorithm 2: MQL – Adaptation
Input: Test task D, meta-training replay buffer, meta-trained policy θ_meta
1 Initialize temporary buffer buf
2 θ ← θ_meta
3 buf ← Gather data from D using π_θ_meta
4 Update θ with Eqn. (18) using buf
5 Fit β(D) using buf and the meta-training replay buffer using Eqn. (12)
6 Estimate ESS using β(D) using Eqn. (13)
7 for i ≤ n do
8   b ← Sample mini-batch from meta-training replay buffer
9   Calculate β for b
10  Update θ using Eqn. (19)
11 Evaluate θ on a new rollout from task D
12 return θ

B OUT-OF-DISTRIBUTION TASKS

MQL is designed for explicitly using data from the new task along with off-policy data from old, possibly very different, tasks. This is on account of two things: (i) the loss function of MQL does not use the old data if it is very different from the new task, i.e., β is close to zero for all samples, and (ii) the first term in (18) makes multiple updates using data from the new task. To explore this aspect, we create an out-of-distribution task using the "Half-Cheetah-Vel" environment wherein we use disjoint sets of velocities for meta-training and testing. The setup is as follows:
• Half-Cheetah-Vel-OOD-Medium: the target velocity for a training task is sampled uniformly randomly from [0, 2.5] while that for a test task is sampled uniformly randomly from [2.5, 3.0]. We call this a "medium" hardness task because although the distributions of train and test velocities are disjoint, they are close to each other.
• Half-Cheetah-Vel-OOD-Hard: the target velocity for a training task is sampled uniformly randomly from [0, 1.5] while that for a test task is sampled uniformly randomly from [2.5, 3.0]. This is a "hard" task because the distributions of train and test velocities are far away from each other.

Fig. 6a shows that MQL significantly outperforms PEARL when the train and test target velocities come from disjoint sets. We used the published code of PEARL (Rakelly et al., 2019) for this experiment. This shows that the adaptation in MQL is crucial to generalizing to new situations which are not a part of the meta-training process. Fig. 6b shows the evolution of the proximal penalty coefficient λ and the propensity score β(z) during meta-training for the medium-hard task. We see that λ ≈ 0.8 while β(z) ≈ 0.2 throughout training. This indicates that MQL automatically adjusts its test-time adaptation to use only a few samples in (19) if the test task provides transitions quite different than those in the replay buffer.
We next discuss results on the harder task Half-Cheetah-Vel-OOD-Hard. There is a very large gap between training and test target velocities in this case. Fig. 7a shows the comparison with the same test protocol as the other experiments in this paper. In particular, we collect 200 time-steps from the new task and use them for adaptation in both MQL and TD3-context. Since this task is particularly hard, we also ran an experiment where 1200 time-steps (6 episodes) are given to the two algorithms for adaptation. The results are shown in Fig. 7b. In both cases, we see that MQL is better than TD3-context by a large margin (the standard deviation on these plots is high because the environment is hard). Note that since we re-initialize the hidden state of the context network at the beginning of each episode, TD3-context cannot take advantage of the extra time-steps. MQL, on the other hand, updates the policy explicitly and can take advantage of this extra data.
For the sake of being thorough, we also collected 800 time-steps from the new task within the same episode; the results are shown in Fig. 8a. We again notice that MQL results in slightly higher rewards than TD3-context, in spite of the fact that both algorithms suffer a large degradation in performance as compared to Figs. 7a and 7b.
Figs. 7c, 7d and 8b show that the proximal penalty coefficient λ ≈ 1 and the propensity score β(z) ≈ 0 for a large fraction of training. This shows that MQL is able to automatically discard samples unrelated to the new task during the adaptation phase.
Figure 6: Comparison of the average return of MQL (orange) against PEARL (blue). (a) MQL significantly outperforms PEARL when the train and test target velocities come from disjoint sets. (b) Evolution of the proximal penalty coefficient λ and the propensity score β(z); β(z) is always small, which demonstrates that MQL automatically adjusts the adaptation in (19) if the test task is different from the training tasks.

Figure 7: (a, b) Comparison of the average return of MQL (orange) against TD3-context (blue). Fig. 7a uses the same test protocol as the other experiments in this paper, i.e., 200 time-steps collected from the new task for adaptation; Fig. 7b uses 1200 time-steps (6 episodes). In both cases, MQL is better than TD3-context by a large margin (the standard deviation on these plots is high because the environment is hard). (c, d) Evolution of λ and β(z) during meta-training; Figs. 7c and 7d show that β(z) is always small, which demonstrates that MQL automatically adjusts the adaptation if the test task is different from the training tasks.

Figure 8: (a) Comparison of the average return of MQL (orange) against TD3-context (blue) when 800 time-steps are collected from the new task in the same episode; MQL obtains slightly higher rewards although both algorithms degrade compared to Figs. 7a and 7b. (b) Evolution of λ and β(z) during meta-training.
Figure 9: Ablation studies to examine various components of MQL. Fig. 9a shows that the adaptation in MQL in (18) and (19) improves performance; one reason is that test and training tasks in Walker-2D-Params are very similar, as shown in Fig. 10b. Next, we evaluate the importance of the additional data from the replay buffer in MQL. Fig. 9b compares the performance of MQL with and without the updates in (19). We see that the old data, even if it comes from different tasks, is useful to improve the performance on top of (18). Fig. 9c and Fig. 9f show the effectiveness of setting λ = 1 − ESS as compared to a fixed value of λ = 0.5. We see that modulating the quadratic penalty with ESS helps; the effect is minor for the tasks in Sec. 4.3. The ideal value of λ depends on a given task and using 1 − ESS can help to adjust to different tasks without the need to do a hyper-parameter search per task.
Figure 10: Evolution of the proximal penalty coefficient λ and the propensity score β(z) during meta-training. We see in Fig. 10a that β(z) stays around 0.4, which demonstrates that MQL automatically adjusts the adaptation if the test task is different from the training tasks. Fig. 10b shows that test and training tasks are very similar, as β(z) is around 0.6.
C MORE ABLATION STUDIES
We conduct a series of additional ablation studies to analyze the different components of the MQL algorithm. We use two environments for this purpose, namely Half-Cheetah-Vel and Walker-2D-Params. Fig. 9 and Fig. 10 show the results of these experiments. They show that the adaptation phase is more useful for Half-Cheetah-Vel than for Walker-2D-Params: test and training tasks are very similar in Walker-2D-Params, which helps TD3-context achieve strong performance and leaves little room for improvement through adaptation.
D HYPER-PARAMETERS AND MORE DETAILS OF THE EMPIRICAL RESULTS
Table 1: Hyper-parameters for MQL and TD3 for continuous-control meta-RL benchmark tasks. We use a network with two fully-connected layers for all environments. The batch-size in Adam is fixed to 256 for all environments. The abbreviation HC stands for Half-Cheetah. These hyper-parameters were tuned by grid-search.

| Hyper-parameter | Humanoid | HC-Vel | Ant-FB | Ant-Goal | Walker | HC-FB |
|---|---|---|---|---|---|---|
| β clipping | 1 | 1.1 | 1 | 1.2 | 2 | 0.8 |
| TD3 exploration noise | 0.3 | 0.3 | 0.3 | 0.2 | 0.3 | 0.2 |
| TD3 policy noise | 0.2 | 0.3 | 0.3 | 0.4 | 0.3 | 0.2 |
| TD3 policy update frequency | 2 | 2 | 2 | 4 | 4 | 3 |
| Parameter updates per iteration (meta-training) | 500 | 1000 | 200 | 1000 | 200 | 200 |
| Adaptation parameter updates per episode (eq. 18) | 10 | 5 | 10 | 5 | 10 | 10 |
| Adaptation parameter updates per episode (eq. 19) | 200 | 400 | 100 | 400 | 400 | 300 |
| GRU sequence length | 20 | 20 | 20 | 25 | 10 | 10 |
| Context dimension | 20 | 20 | 15 | 30 | 30 | 30 |
| Adam learning rate | 0.0003 | 0.001 | 0.0003 | 0.0004 | 0.0008 | 0.0003 |
Deepak Agarwal, Lihong Li, and Alexander Smola. Linear-time estimators for propensity scores. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 93-100, 2011.
Heejung Bang and James M. Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962-973, 2005.
Jonathan Baxter. Learning internal representations. Flinders University of S. Aust., 1995.
Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149-198, 2000.
Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, pp. 6-8. Univ. of Texas, 1992.
Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule, 1997.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016.
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. 2018.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.
Guneet S. Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. arXiv:1909.02729, 2019.
Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv:1611.02779, 2016.
Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. arXiv:1103.4601, 2011.
Víctor Elvira, Luca Martino, and Christian P. Robert. Rethinking the effective sample size. arXiv:1809.04129, 2018.
Rasool Fakoor, Pratik Chaudhari, and Alexander J. Smola. P3O: Policy-on policy-off policy optimization. arXiv:1905.01756, 2019.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126-1135. JMLR.org, 2017.
Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. arXiv:1812.02900, 2018a.
Scott Fujimoto, Herke van Hoof, and Dave Meger. Addressing function approximation error in actor-critic methods. arXiv:1802.09477, 2018b.
Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367-4375, 2018.
Matthew Hausknecht and Peter Stone. Deep recurrent Q-learning for partially observable MDPs. In 2015 AAAI Fall Symposium Series, 2015.
Nicolas Heess, Jonathan J. Hunt, Timothy P. Lillicrap, and David Silver. Memory-based control with recurrent neural networks. arXiv:1512.04455, 2015.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87-94. Springer, 2001.
Rein Houthooft, Yuhua Chen, Phillip Isola, Bradly Stadie, Filip Wolski, OpenAI Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. In Advances in Neural Information Processing Systems, pp. 5400-5409, 2018.
Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Are deep policy gradient algorithms truly policy gradient algorithms? arXiv:1811.02553, 2018.
Nan Jiang and Lihong Li. Doubly robust off-policy value evaluation for reinforcement learning. arXiv:1511.03722, 2015.
Joseph D. Y. Kang, Joseph L. Schafer, et al. Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical Science, 22(4):523-539, 2007.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
Augustine Kong. A note on importance sampling using standardized weights. University of Chicago, Dept. of Statistics, Tech. Rep. 348, 1992.
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto, arXiv:1904.03758Meta-learning with differentiable convex optimization. Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. arXiv:1904.03758, 2019.
P Timothy, Jonathan J Lillicrap, Alexander Hunt, Nicolas Pritzel, Tom Heess, Yuval Erez, David Tassa, Daan Silver, Wierstra, arXiv:1509.02971Continuous control with deep reinforcement learning. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
The need for biases in learning generalizations. M Tom, Mitchell, Department of Computer Science, Laboratory for Computer Science Research . . .Tom M Mitchell. The need for biases in learning generalizations. Department of Computer Science, Laboratory for Computer Science Research . . . , 1980.
On first-order meta-learning algorithms. Alex Nichol, Joshua Achiam, John Schulman, arXiv:1803.02999Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv:1803.02999, 2018.
Dataset shift in machine learning. Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, Neil D Lawrence, The MIT PressJoaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
Efficient off-policy metareinforcement learning via probabilistic context variables. Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, Sergey Levine, arXiv:1903.08254Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. Efficient off-policy meta- reinforcement learning via probabilistic context variables. arXiv:1903.08254, 2019.
Optimization as a model for few-shot learning. Sachin Ravi, Hugo Larochelle, Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
Doubly robust covariate shift correction. J Sashank, Barnabás Reddi, Alexander J Póczos, Smola, AAAI. Sashank J. Reddi, Barnabás Póczos, and Alexander J. Smola. Doubly robust covariate shift correction. In AAAI, 2015.
A probability path. I Sidney, Resnick, Springer Science & Business MediaSidney I Resnick. A probability path. Springer Science & Business Media, 2013.
Monte Carlo statistical methods. Christian Robert, George Casella, Springer Science & Business MediaChristian Robert and George Casella. Monte Carlo statistical methods. Springer Science & Business Media, 2013.
Jonas Rothfuss, Dennis Lee, Ignasi Clavera, arXiv:1810.06784Tamim Asfour, and Pieter Abbeel. Promp: Proximal meta-policy search. Jonas Rothfuss, Dennis Lee, Ignasi Clavera, Tamim Asfour, and Pieter Abbeel. Promp: Proximal meta-policy search. arXiv:1810.06784, 2018.
Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-... hook.) Diploma thesis. Jurgen Schmidhuber, Institut f. Informatik, Tech. Univ. MunichJurgen Schmidhuber. Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Jürgen Schmidhuber, Jieyu Zhao, Marco Wiering, 1573-0565Machine Learning. 28Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 28(1):105-130, Jul 1997. ISSN 1573-0565.
Trust region policy optimization. John Schulman, Sergey Levine, Pieter Abbeel, Philipp Michael I Jordan, Moritz, International Conference on Machine Learning. 37John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, volume 37, pp. 1889-1897, 2015.
Deterministic policy gradient algorithms. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, Martin Riedmiller, International Conference on Machine Learning. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, 2014.
Sequential Monte Carlo methods in practice. Adrian Smith, Springer Science & Business MediaAdrian Smith. Sequential Monte Carlo methods in practice. Springer Science & Business Media, 2013.
Prototypical networks for few-shot learning. Jake Snell, Kevin Swersky, Richard Zemel, Advances in Neural Information Processing Systems. Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077-4087, 2017.
Is learning the n-th thing any easier than learning the first?. Sebastian Thrun, Advances in neural information processing systems. Sebastian Thrun. Is learning the n-th thing any easier than learning the first? In Advances in neural information processing systems, pp. 640-646, 1996.
Learning to learn. Sebastian Thrun, Lorien Pratt, Springer Science & Business MediaSebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
Mujoco: A physics engine for model-based control. Emanuel Todorov, Tom Erez, Yuval Tassa, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEEEmanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012.
Shift of bias for inductive concept learning. Machine learning: An artificial intelligence approach. E Paul, Utgoff, 2Paul E Utgoff. Shift of bias for inductive concept learning. Machine learning: An artificial intelligence approach, 2:107-148, 1986.
Learning to reinforcement learn. Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Rémi Munos, Charles Blundell, Dharshan Kumaran, Matthew Botvinick, abs/1611.05763CoRRJane X. Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Rémi Munos, Charles Blun- dell, Dharshan Kumaran, and Matthew Botvinick. Learning to reinforcement learn. CoRR, abs/1611.05763, 2016. URL http://arxiv.org/abs/1611.05763. |
65,455,367 | ON THE CONVERGENCE OF ADAM AND BEYOND | Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSPROP, ADAM, ADADELTA, NADAM are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where ADAM does not converge to the optimal solution, and describe the precise problems with the previous analysis of ADAM algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with "long-term memory" of past gradients, and propose new variants of the ADAM algorithm which not only fix the convergence issues but often also lead to improved empirical performance. | [
6628106
] | ON THE CONVERGENCE OF ADAM AND BEYOND
Sashank J. Reddi [email protected]
Google New York, New York, NY 10011, USA
Satyen Kale [email protected]
Google New York, New York, NY 10011, USA
Sanjiv Kumar [email protected]
Google New York, New York, NY 10011, USA
ON THE CONVERGENCE OF ADAM AND BEYOND
Published as a conference paper at ICLR 2018
Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSPROP, ADAM, ADADELTA, NADAM are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where ADAM does not converge to the optimal solution, and describe the precise problems with the previous analysis of ADAM algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with "long-term memory" of past gradients, and propose new variants of the ADAM algorithm which not only fix the convergence issues but often also lead to improved empirical performance.
INTRODUCTION
Stochastic gradient descent (SGD) is the dominant method to train deep networks today. This method iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the loss evaluated on a minibatch. In particular, variants of SGD that scale coordinates of the gradient by square roots of some form of averaging of the squared coordinates in the past gradients have been particularly successful, because they automatically adjust the learning rate on a per-feature basis. The first popular algorithm in this line of research is ADAGRAD (Duchi et al., 2011; McMahan & Streeter, 2010), which can achieve significantly better performance compared to vanilla SGD when the gradients are sparse, or in general small.
Although ADAGRAD works well for sparse settings, its performance has been observed to deteriorate in settings where the loss functions are nonconvex and gradients are dense due to rapid decay of the learning rate in these settings since it uses all the past gradients in the update. This problem is especially exacerbated in high dimensional problems arising in deep learning. To tackle this issue, several variants of ADAGRAD, such as RMSPROP (Tieleman & Hinton, 2012), ADAM (Kingma & Ba, 2015), ADADELTA (Zeiler, 2012), NADAM (Dozat, 2016), etc, have been proposed which mitigate the rapid decay of the learning rate using the exponential moving averages of squared past gradients, essentially limiting the reliance of the update to only the past few gradients. While these algorithms have been successfully employed in several practical applications, they have also been observed to not converge in some other settings. It has been typically observed that in these settings some minibatches provide large gradients but only quite rarely, and while these large gradients are quite informative, their influence dies out rather quickly due to the exponential averaging, thus leading to poor convergence.
In this paper, we analyze this situation in detail. We rigorously prove that the intuition conveyed in the above paragraph is indeed correct; that limiting the reliance of the update on essentially only the past few gradients can indeed cause significant convergence issues. In particular, we make the following key contributions:
• We elucidate how the exponential moving average in the RMSPROP and ADAM algorithms can cause non-convergence by providing an example of a simple convex optimization problem where RMSPROP and ADAM provably do not converge to an optimal solution. Our analysis easily extends to other algorithms using exponential moving averages such as ADADELTA and NADAM as well, but we omit this for the sake of clarity. In fact, the analysis is flexible enough to extend to other algorithms that employ averaging of squared gradients over essentially a fixed size window (for exponential moving averages, the influence of gradients beyond a fixed window size becomes negligibly small) in the immediate past. We omit the general analysis in this paper for the sake of clarity.
• The above result indicates that in order to have guaranteed convergence the optimization algorithm must have "long-term memory" of past gradients. Specifically, we point out a problem with the proof of convergence of the ADAM algorithm given by Kingma & Ba (2015). To resolve this issue, we propose new variants of ADAM which rely on long-term memory of past gradients, but can be implemented in the same time and space requirements as the original ADAM algorithm. We provide a convergence analysis for the new variants in the convex setting, based on the analysis of Kingma & Ba (2015), and show a data-dependent regret bound similar to the one in ADAGRAD.
• We provide a preliminary empirical study of one of the variants we proposed and show that it either performs similarly, or better, on some commonly used problems in machine learning.
PRELIMINARIES
Notation. We use $\mathcal{S}^d_+$ to denote the set of all positive definite $d \times d$ matrices. With slight abuse of notation, for a vector $a \in \mathbb{R}^d$ and a positive definite matrix $M \in \mathbb{R}^{d \times d}$, we use $a/M$ to denote $M^{-1}a$, $\|M_i\|_2$ to denote the $\ell_2$-norm of the $i$th row of $M$, and $\sqrt{M}$ to represent $M^{1/2}$. Furthermore, for any vectors $a, b \in \mathbb{R}^d$, we use $\sqrt{a}$ for element-wise square root, $a^2$ for element-wise square, $a/b$ to denote element-wise division, and $\max(a, b)$ to denote element-wise maximum. For any vector $\theta_i \in \mathbb{R}^d$, $\theta_{i,j}$ denotes its $j$th coordinate where $j \in [d]$. The projection operation $\Pi_{\mathcal{F},A}(y)$ for $A \in \mathcal{S}^d_+$ is defined as $\arg\min_{x \in \mathcal{F}} \|A^{1/2}(x - y)\|$ for $y \in \mathbb{R}^d$. Finally, we say $\mathcal{F}$ has bounded diameter $D_\infty$ if $\|x - y\|_\infty \le D_\infty$ for all $x, y \in \mathcal{F}$.
Optimization setup. A flexible framework to analyze iterative optimization methods is the online optimization problem in the full information feedback setting. In this online setup, at each time step $t$, the optimization algorithm picks a point (i.e. the parameters of the model to be learned) $x_t \in \mathcal{F}$, where $\mathcal{F} \subseteq \mathbb{R}^d$ is the feasible set of points. A loss function $f_t$ (to be interpreted as the loss of the model with the chosen parameters in the next minibatch) is then revealed, and the algorithm incurs loss $f_t(x_t)$. The algorithm's regret at the end of $T$ rounds of this process is given by
$$R_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in \mathcal{F}} \sum_{t=1}^{T} f_t(x).$$
Throughout this paper, we assume that the feasible set $\mathcal{F}$ has bounded diameter and $\|\nabla f_t(x)\|_\infty$ is bounded for all $t \in [T]$ and $x \in \mathcal{F}$.
Our aim is to devise an algorithm that ensures $R_T = o(T)$, which implies that on average, the model's performance converges to the optimal one. The simplest algorithm for this setting is the standard online gradient descent algorithm (Zinkevich, 2003), which moves the point $x_t$ in the opposite direction of the gradient $g_t = \nabla f_t(x_t)$ while maintaining feasibility by projecting onto the set $\mathcal{F}$ via the update rule
$$x_{t+1} = \Pi_{\mathcal{F}}(x_t - \alpha_t g_t),$$
where $\Pi_{\mathcal{F}}(y)$ denotes the projection of $y \in \mathbb{R}^d$ onto the set $\mathcal{F}$, i.e., $\Pi_{\mathcal{F}}(y) = \arg\min_{x \in \mathcal{F}} \|x - y\|$, and $\alpha_t$ is typically set to $\alpha/\sqrt{t}$ for some constant $\alpha$. The aforementioned online learning problem is closely related to the stochastic optimization problem $\min_{x \in \mathcal{F}} \mathbb{E}_z[f(x, z)]$, popularly referred to as empirical risk minimization (ERM), where $z$ is a training example drawn from the training sample over which a model with parameters $x$ is to be learned, and $f(x, z)$ is the loss of the model with parameters $x$ on the sample $z$. In particular, an online optimization algorithm with vanishing average regret yields a stochastic optimization algorithm for the ERM problem (Cesa-Bianchi et al., 2004). Thus, we use online gradient descent and stochastic gradient descent (SGD) synonymously.
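For concreteness, here is a minimal Python sketch (not from the paper) of one projected online gradient descent step. It assumes a box feasible set $\mathcal{F} = [\text{lo}, \text{hi}]^d$ so that the Euclidean projection reduces to coordinate-wise clipping; all names are illustrative:

```python
import numpy as np

def ogd_step(x, grad, t, alpha=0.1, lo=-1.0, hi=1.0):
    """One projected online gradient descent step with alpha_t = alpha / sqrt(t).

    For a box feasible set F = [lo, hi]^d, the Euclidean projection Pi_F
    is simply coordinate-wise clipping.
    """
    x_new = x - (alpha / np.sqrt(t)) * grad
    return np.clip(x_new, lo, hi)
```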
Generic adaptive methods setup. We now provide a framework of adaptive methods that gives us insights into the differences between different adaptive methods and is useful for understanding the flaws in a few popular adaptive methods. Algorithm 1 provides a generic adaptive framework that encapsulates many popular adaptive methods. Note that the algorithm is still abstract because the "averaging" functions $\phi_t$ and $\psi_t$ have not been specified. Here $\phi_t : \mathcal{F}^t \to \mathbb{R}^d$ and $\psi_t : \mathcal{F}^t \to \mathcal{S}^d_+$.

Algorithm 1 Generic Adaptive Method Setup
Input: $x_1 \in \mathcal{F}$, step sizes $\{\alpha_t > 0\}_{t=1}^{T}$, sequence of functions $\{\phi_t, \psi_t\}_{t=1}^{T}$
for $t = 1$ to $T$ do
  $g_t = \nabla f_t(x_t)$
  $m_t = \phi_t(g_1, \dots, g_t)$ and $V_t = \psi_t(g_1, \dots, g_t)$
  $\hat{x}_{t+1} = x_t - \alpha_t m_t / \sqrt{V_t}$
  $x_{t+1} = \Pi_{\mathcal{F}, \sqrt{V_t}}(\hat{x}_{t+1})$
end for

For ease of exposition, we refer to $\alpha_t$ as the step size and $\alpha_t V_t^{-1/2}$ as the learning rate of the algorithm, and furthermore restrict ourselves to diagonal variants of adaptive methods encapsulated by Algorithm 1 where $V_t = \mathrm{diag}(v_t)$. We first observe that the standard stochastic gradient algorithm falls in this framework by using
$$\phi_t(g_1, \dots, g_t) = g_t \quad \text{and} \quad \psi_t(g_1, \dots, g_t) = \mathbb{I}, \tag{SGD}$$
and $\alpha_t = \alpha/\sqrt{t}$ for all $t \in [T]$.
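As a rough sketch of Algorithm 1 (illustrative code, not the authors'), the loop below takes the averaging functions as arguments. It again assumes a box feasible set, for which the weighted projection $\Pi_{\mathcal{F},\sqrt{V_t}}$ is separable across coordinates and therefore also reduces to clipping:

```python
import numpy as np

def generic_adaptive_method(grad_fn, x0, T, alpha, phi, psi, lo=-1.0, hi=1.0):
    """Sketch of Algorithm 1 for a box feasible set F = [lo, hi]^d.

    phi(grads) returns m_t; psi(grads) returns the diagonal v_t of V_t.
    alpha is a function t -> alpha_t. For a box and a diagonal V_t, the
    weighted projection Pi_{F, sqrt(V_t)} minimizes a separable quadratic
    and so reduces to coordinate-wise clipping.
    """
    x = np.asarray(x0, dtype=float)
    grads = []
    for t in range(1, T + 1):
        g = grad_fn(x, t)
        grads.append(g)
        m_t = phi(grads)
        v_t = psi(grads)
        x = np.clip(x - alpha(t) * m_t / np.sqrt(v_t), lo, hi)
    return x
```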
While the decreasing step size is required for convergence, such an aggressive decay of learning rate typically translates into poor empirical performance. The key idea of adaptive methods is to choose averaging functions appropriately so as to entail good convergence. For instance, the first adaptive method ADAGRAD (Duchi et al., 2011), which propelled the research on adaptive methods, uses the following averaging functions:
$$\phi_t(g_1, \dots, g_t) = g_t \quad \text{and} \quad \psi_t(g_1, \dots, g_t) = \frac{\mathrm{diag}\left(\sum_{i=1}^{t} g_i^2\right)}{t}, \tag{ADAGRAD}$$
and step size $\alpha_t = \alpha/\sqrt{t}$ for all $t \in [T]$. In contrast to a learning rate of $\alpha/\sqrt{t}$ in SGD, such a setting effectively implies a modest learning rate decay of $\alpha/\sqrt{\sum_i g_{i,j}^2}$ for $j \in [d]$. When the gradients are sparse, this can potentially lead to huge gains in terms of convergence (see Duchi et al. (2011)). These gains have also been observed in practice even for a few non-sparse settings.
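For instance, the SGD and ADAGRAD choices of $\phi_t$ and $\psi_t$ can be plugged into the sketch above as follows (illustrative; a tiny eps guards against division by zero, which the analysis itself does not need):

```python
import numpy as np

eps = 1e-12  # numerical guard only; not part of the analysis

# SGD: m_t = g_t and V_t = I, with alpha_t = alpha / sqrt(t).
sgd_phi = lambda grads: grads[-1]
sgd_psi = lambda grads: np.ones_like(grads[-1])

# ADAGRAD: m_t = g_t and V_t = diag(sum_i g_i^2) / t, so with
# alpha_t = alpha / sqrt(t) the effective per-coordinate rate decays
# like alpha / sqrt(sum_i g_{i,j}^2).
adagrad_phi = lambda grads: grads[-1]
adagrad_psi = lambda grads: sum(g * g for g in grads) / len(grads) + eps

# Example usage with the generic loop above:
# x = generic_adaptive_method(grad_fn, x0, T, lambda t: 0.1 / np.sqrt(t),
#                             adagrad_phi, adagrad_psi)
```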
Adaptive methods based on Exponential Moving Averages. Exponential moving average variants of ADAGRAD are popular in the deep learning community. RMSPROP, ADAM, NADAM, and ADADELTA are some prominent algorithms that fall in this category. The key difference is to use an exponential moving average as the function $\psi_t$ instead of the simple average function used in ADAGRAD. ADAM, a particularly popular variant, uses the following averaging functions:
$$\phi_t(g_1, \dots, g_t) = (1 - \beta_1) \sum_{i=1}^{t} \beta_1^{t-i} g_i \quad \text{and} \quad \psi_t(g_1, \dots, g_t) = (1 - \beta_2)\,\mathrm{diag}\left(\sum_{i=1}^{t} \beta_2^{t-i} g_i^2\right), \tag{ADAM}$$
for some $\beta_1, \beta_2 \in [0, 1)$. This update can alternatively be stated by the following simple recursion:
$$m_{t,i} = \beta_1 m_{t-1,i} + (1 - \beta_1) g_{t,i} \quad \text{and} \quad v_{t,i} = \beta_2 v_{t-1,i} + (1 - \beta_2) g_{t,i}^2, \tag{1}$$
with $m_{0,i} = 0$ and $v_{0,i} = 0$ for all $i \in [d]$ and $t \in [T]$. A value of $\beta_1 = 0.9$ and $\beta_2 = 0.999$ is typically recommended in practice. We note the additional projection operation in Algorithm 1 in comparison to ADAM. When $\mathcal{F} = \mathbb{R}^d$, the projection operation is an identity operation, and this corresponds to the algorithm in (Kingma & Ba, 2015). For theoretical analysis, one requires $\alpha_t = 1/\sqrt{t}$ for $t \in [T]$, although a more aggressive choice of constant step size seems to work well in practice. RMSPROP, which appeared in an earlier unpublished work (Tieleman & Hinton, 2012), is essentially a variant of ADAM with $\beta_1 = 0$. In practice, especially in deep learning applications, the momentum term arising due to non-zero $\beta_1$ appears to significantly boost the performance. We will mainly focus on the ADAM algorithm due to this generality, but our arguments also apply to RMSPROP and other algorithms such as ADADELTA and NADAM.
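In incremental form, the recursion in Equation (1) gives the following one-step sketch (omitting the debiasing step, as the discussion here does; a tiny guard avoids dividing by zero when all gradients so far are zero):

```python
import numpy as np

def adam_step(x, g, m, v, t, alpha=0.1, beta1=0.9, beta2=0.999,
              lo=-1.0, hi=1.0):
    """One ADAM step per Equation (1), with alpha_t = alpha / sqrt(t) and
    projection onto the box [lo, hi]^d by clipping (no bias correction)."""
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g * g
    x = np.clip(x - (alpha / np.sqrt(t)) * m / (np.sqrt(v) + 1e-12), lo, hi)
    return x, m, v
```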
THE NON-CONVERGENCE OF ADAM
With the problem setup in the previous section, we discuss a fundamental flaw in current exponential moving average methods like ADAM. We show that ADAM can fail to converge to an optimal solution even in simple one-dimensional convex settings. These examples of non-convergence contradict the claim of convergence in (Kingma & Ba, 2015), and the main issue lies in the following quantity of interest:
$$\Gamma_{t+1} = \frac{\sqrt{V_{t+1}}}{\alpha_{t+1}} - \frac{\sqrt{V_t}}{\alpha_t}. \tag{2}$$
This quantity essentially measures the change in the inverse of the learning rate of the adaptive method with respect to time. One key observation is that for SGD and ADAGRAD, $\Gamma_t \succeq 0$ for all $t \in [T]$. This simply follows from the update rules of SGD and ADAGRAD in the previous section. In particular, update rules for these algorithms lead to "non-increasing" learning rates. However, this is not necessarily the case for exponential moving average variants like ADAM and RMSPROP, i.e., $\Gamma_t$ can potentially be indefinite for $t \in [T]$. We show that this violation of positive semidefiniteness can lead to undesirable convergence behavior for ADAM and RMSPROP. Consider the following simple sequence of linear functions for $\mathcal{F} = [-1, 1]$:
$$f_t(x) = \begin{cases} Cx, & \text{for } t \bmod 3 = 1 \\ -x, & \text{otherwise,} \end{cases}$$
where $C > 2$. For this function sequence, it is easy to see that the point $x = -1$ provides the minimum regret. Suppose $\beta_1 = 0$ and $\beta_2 = 1/(1 + C^2)$. We show that ADAM converges to the highly suboptimal solution $x = +1$ in this setting. Intuitively, the reasoning is as follows. The algorithm obtains the large gradient $C$ once every 3 steps, while for the other 2 steps it observes the gradient $-1$, which moves the algorithm in the wrong direction. The large gradient $C$ is unable to counteract this effect since it is scaled down by a factor of almost $C$ for the given value of $\beta_2$, and hence the algorithm converges to $1$ rather than $-1$. We formalize this intuition in the result below.
Theorem 1. There is an online convex optimization problem where ADAM has non-zero average regret, i.e., $R_T/T \nrightarrow 0$ as $T \to \infty$.
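This behavior is easy to verify numerically. The following sketch (not the paper's code) runs the recursion above with $\beta_1 = 0$ and $\beta_2 = 1/(1 + C^2)$ on this function sequence; the iterate stays at the worst point $+1$, and $\Gamma_t < 0$ occurs repeatedly:

```python
import numpy as np

C, alpha = 3.0, 0.3            # any C > 2 and alpha < sqrt(1 - beta2)
beta2 = 1.0 / (1.0 + C * C)    # beta1 = 0, as in the construction
x, v, prev, neg_gamma = 1.0, 0.0, None, 0
for t in range(1, 10000):
    g = C if t % 3 == 1 else -1.0               # gradient of f_t
    v = beta2 * v + (1.0 - beta2) * g * g
    inv_rate = np.sqrt(v) * np.sqrt(t) / alpha  # sqrt(V_t) / alpha_t
    if prev is not None and inv_rate < prev:
        neg_gamma += 1                          # Gamma_t < 0: rate increased
    prev = inv_rate
    x = float(np.clip(x - (alpha / np.sqrt(t)) * g / np.sqrt(v), -1.0, 1.0))
print(x, neg_gamma)  # x remains at +1, the worst point of F = [-1, 1]
```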
We relegate all proofs to the appendix. A few remarks are in order. One might wonder if adding a small constant $\epsilon$ in the denominator of the update helps in circumventing this problem, i.e., the update for ADAM in Algorithm 1 of $\hat{x}_{t+1}$ is modified as follows:
$$\hat{x}_{t+1} = x_t - \alpha_t m_t / \sqrt{V_t + \epsilon \mathbb{I}}. \tag{3}$$
The algorithm in (Kingma & Ba, 2015) uses such an update in practice, although their analysis does not. In practice, selection of the $\epsilon$ parameter appears to be critical for the performance of the algorithm. However, we show that for any constant $\epsilon > 0$, there exists an online optimization setting where, again, ADAM has non-zero average regret asymptotically (see Theorem 6 in Section F of the appendix).
The above examples of non-convergence are catastrophic insofar as ADAM and RMSPROP converge to a point that is the worst amongst all points in the set $[-1, 1]$. Note that the above example also holds for constant step size $\alpha_t = \alpha$. Also note that classic SGD and ADAGRAD do not suffer from this problem, and for these algorithms the average regret asymptotically goes to 0. This problem is especially aggravated in high dimensional settings and when the variance of the gradients with respect to time is large. This example also provides intuition for why a large $\beta_2$ is advisable while using the ADAM algorithm, and indeed in practice using large $\beta_2$ helps. However, the following result shows that for any constants $\beta_1$ and $\beta_2$ with $\beta_1 < \sqrt{\beta_2}$, we can design an example where ADAM has non-zero average regret asymptotically.
Theorem 2. For any constant $\beta_1, \beta_2 \in [0, 1)$ such that $\beta_1 < \sqrt{\beta_2}$, there is an online convex optimization problem where ADAM has non-zero average regret, i.e., $R_T/T \nrightarrow 0$ as $T \to \infty$.
The above results show that with constant $\beta_1$ and $\beta_2$, momentum or regularization via $\epsilon$ will not help in convergence of the algorithm to the optimal solution. Note that the condition $\beta_1 < \sqrt{\beta_2}$ is benign and is typically satisfied in the parameter settings used in practice. Furthermore, such a condition is assumed in the convergence proof of Kingma & Ba (2015). We can strengthen this result by providing a similar example of non-convergence even in the easier stochastic optimization setting:
Algorithm 2 AMSGRAD
Input: $x_1 \in \mathcal{F}$, step sizes $\{\alpha_t\}_{t=1}^{T}$, $\{\beta_{1t}\}_{t=1}^{T}$, $\beta_2$
Set $m_0 = 0$, $v_0 = 0$ and $\hat{v}_0 = 0$
for $t = 1$ to $T$ do
  $g_t = \nabla f_t(x_t)$
  $m_t = \beta_{1t} m_{t-1} + (1 - \beta_{1t}) g_t$
  $v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$
  $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$ and $\hat{V}_t = \mathrm{diag}(\hat{v}_t)$
  $x_{t+1} = \Pi_{\mathcal{F}, \sqrt{\hat{V}_t}}(x_t - \alpha_t m_t / \sqrt{\hat{v}_t})$
end for
Theorem 3. For any constant $\beta_1, \beta_2 \in [0, 1)$ such that $\beta_1 < \sqrt{\beta_2}$, there is a stochastic convex optimization problem for which ADAM does not converge to the optimal solution.
These results have important consequences insofar as one has to use "problem-dependent" $\epsilon$, $\beta_1$ and $\beta_2$ in order to avoid bad convergence behavior. In high-dimensional problems, this typically amounts to using, unlike the update in Equation (3), a different $\epsilon$, $\beta_1$ and $\beta_2$ for each dimension. However, this defeats the purpose of adaptive methods since it requires tuning a large set of parameters. We would also like to emphasize that while the example of non-convergence is carefully constructed to demonstrate the problems in ADAM, it is not unrealistic to imagine scenarios where such an issue can at the very least slow down convergence.
We end this section with the following important remark. While the results stated above use constant $\beta_1$ and $\beta_2$, the analysis of ADAM in (Kingma & Ba, 2015) actually relies on decreasing $\beta_1$ over time. It is quite easy to extend our examples to the case where $\beta_1$ is decreased over time, since the critical parameter is $\beta_2$ rather than $\beta_1$, and as long as $\beta_2$ is bounded away from 1, our analysis goes through. Thus, for the sake of clarity, in this paper we only prove non-convergence of ADAM in the setting where $\beta_1$ is held constant.
A NEW EXPONENTIAL MOVING AVERAGE VARIANT: AMSGRAD
In this section, we develop a new principled exponential moving average variant and provide its convergence analysis. Our aim is to devise a new strategy with guaranteed convergence while preserving the practical benefits of ADAM and RMSPROP. To understand the design of our algorithm, let us revisit the quantity $\Gamma_t$ in (2). For ADAM and RMSPROP, this quantity can potentially be negative. The proof in the original paper of ADAM erroneously assumes that $\Gamma_t$ is positive semidefinite and is hence incorrect (refer to Appendix D for more details). For the first part, we modify these algorithms to satisfy this additional constraint. Later on, we also explore an alternative approach where $\Gamma_t$ can be made positive semidefinite by using values of $\beta_1$ and $\beta_2$ that change with $t$.
AMSGRAD uses a smaller learning rate in comparison to ADAM and yet incorporates the intuition of slowly decaying the effect of past gradients on the learning rate as long as $\Gamma_t$ is positive semidefinite. Algorithm 2 presents the pseudocode for the algorithm. The key difference of AMSGRAD from ADAM is that it maintains the maximum of all $v_t$ until the present time step and uses this maximum value for normalizing the running average of the gradient instead of $v_t$ in ADAM. By doing this, AMSGRAD results in a non-increasing step size and avoids the pitfalls of ADAM and RMSPROP, i.e., $\Gamma_t \succeq 0$ for all $t \in [T]$ even with constant $\beta_2$. Also, in Algorithm 2, one typically uses a constant $\beta_{1t}$ in practice (although the proof requires a decreasing schedule for proving convergence of the algorithm).
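A minimal sketch of one AMSGRAD step (again for a box feasible set, with illustrative names; the theory uses a schedule $\beta_{1t}$, here taken constant as in practice):

```python
import numpy as np

def amsgrad_step(x, g, m, v, v_hat, t, alpha=0.1, beta1=0.9, beta2=0.999,
                 lo=-1.0, hi=1.0):
    """One AMSGRAD step (Algorithm 2): identical to ADAM except that the
    running maximum v_hat = max(v_hat, v_t) normalizes the update, making
    the effective learning rate alpha_t / sqrt(v_hat_t) non-increasing."""
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g * g
    v_hat = np.maximum(v_hat, v)
    x = np.clip(x - (alpha / np.sqrt(t)) * m / (np.sqrt(v_hat) + 1e-12), lo, hi)
    return x, m, v, v_hat
```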
To gain more intuition for the updates of AMSGRAD, it is instructive to compare its update with those of ADAM and ADAGRAD. Suppose at a particular time step $t$ and coordinate $i \in [d]$ we have $v_{t-1,i} > g_{t,i}^2 > 0$; then ADAM aggressively increases the learning rate. However, as we have seen in the previous section, this can be detrimental to the overall performance of the algorithm. On the other hand, ADAGRAD slightly decreases the learning rate, which often leads to poor performance in practice, since such an accumulation of gradients over a large time period can significantly decrease the learning rate. In contrast, AMSGRAD neither increases nor decreases the learning rate and furthermore decreases $v_t$, which can potentially lead to a non-decreasing learning rate even if the gradient is large in future iterations. For the rest of the paper, we use $g_{1:t} = [g_1 \dots g_t]$ to denote the matrix obtained by concatenating the gradient sequence. We prove the following key result for AMSGRAD.
Theorem 4. Let $\{x_t\}$ and $\{v_t\}$ be the sequences obtained from Algorithm 2, $\alpha_t = \alpha/\sqrt{t}$, $\beta_1 = \beta_{11}$, $\beta_{1t} \le \beta_1$ for all $t \in [T]$ and $\gamma = \beta_1/\sqrt{\beta_2} < 1$. Assume that $\mathcal{F}$ has bounded diameter $D_\infty$ and $\|\nabla f_t(x)\|_\infty \le G_\infty$ for all $t \in [T]$ and $x \in \mathcal{F}$. For $x_t$ generated using the AMSGRAD algorithm (Algorithm 2), we have the following bound on the regret:
$$R_T \le \frac{D_\infty^2 \sqrt{T}}{\alpha(1 - \beta_1)} \sum_{i=1}^{d} \hat{v}_{T,i}^{1/2} + \frac{D_\infty^2}{(1 - \beta_1)^2} \sum_{t=1}^{T} \sum_{i=1}^{d} \frac{\beta_{1t} \hat{v}_{t,i}^{1/2}}{\alpha_t} + \frac{\alpha \sqrt{1 + \log T}}{(1 - \beta_1)^2 (1 - \gamma) \sqrt{1 - \beta_2}} \sum_{i=1}^{d} \|g_{1:T,i}\|_2.$$
The following result follows as an immediate corollary of the above result.
Corollary 1. Suppose $\beta_{1t} = \beta_1 \lambda^{t-1}$ in Theorem 4, then we have
$$R_T \le \frac{D_\infty^2 \sqrt{T}}{\alpha(1 - \beta_1)} \sum_{i=1}^{d} \hat{v}_{T,i}^{1/2} + \frac{\beta_1 D_\infty^2 G_\infty}{(1 - \beta_1)^2 (1 - \lambda)^2} + \frac{\alpha \sqrt{1 + \log T}}{(1 - \beta_1)^2 (1 - \gamma) \sqrt{1 - \beta_2}} \sum_{i=1}^{d} \|g_{1:T,i}\|_2.$$
The above bound can be considerably better than the $O(\sqrt{dT})$ regret of SGD when $\sum_{i=1}^{d} \hat{v}_{T,i}^{1/2} \ll \sqrt{d}$ and $\sum_{i=1}^{d} \|g_{1:T,i}\|_2 \ll \sqrt{dT}$ (Duchi et al., 2011). Furthermore, in Theorem 4, one can use a much more modest momentum decay of $\beta_{1t} = \beta_1/t$ and still ensure a regret of $O(\sqrt{T})$. We would also like to point out that one could consider taking a simple average of all the previous values of $v_t$ instead of their maximum. The resulting algorithm is very similar to ADAGRAD, except for normalization with smoothed gradients rather than actual gradients, and can be shown to have similar convergence as ADAGRAD.
EXPERIMENTS
In this section, we present empirical results on both synthetic and real-world datasets. For our experiments, we study the problem of multiclass classification using logistic regression and neural networks, representing convex and nonconvex settings, respectively.
Synthetic Experiments: To demonstrate the convergence issue of ADAM, we first consider the following simple convex setting inspired by our examples of non-convergence:
$$f_t(x) = \begin{cases} 1010x, & \text{for } t \bmod 101 = 1 \\ -10x, & \text{otherwise,} \end{cases}$$
with the constraint set $\mathcal{F} = [-1, 1]$. We first observe that, similar to the examples of non-convergence we have considered, the optimal solution is $x = -1$; thus, for convergence, we expect the algorithms to converge to $x = -1$. For this sequence of functions, we investigate the regret and the value of the iterate $x_t$ for ADAM and AMSGRAD. To enable fair comparison, we set $\beta_1 = 0.9$ and $\beta_2 = 0.99$ for both the ADAM and AMSGRAD algorithms, which are the parameter settings typically used for ADAM in practice. Figure 1 shows the average regret ($R_t/t$) and the value of the iterate ($x_t$) for this problem. We first note that the average regret of ADAM does not converge to 0 with increasing $t$. Furthermore, its iterates $x_t$ converge to $x = 1$, which unfortunately has the largest regret amongst all points in the domain. On the other hand, the average regret of AMSGRAD converges to 0 and its iterate converges to the optimal solution. Figure 1 also shows the stochastic optimization setting:
$$f_t(x) = \begin{cases} 1010x, & \text{with probability } 0.01 \\ -10x, & \text{otherwise.} \end{cases}$$
Similar to the aforementioned online setting, the optimal solution for this problem is $x = -1$. Again, we see that the iterate $x_t$ of ADAM converges to the highly suboptimal solution $x = 1$.
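This stochastic experiment can be reproduced in a few lines (a sketch with the settings described above; exact trajectories depend on the random seed, but the qualitative behavior matches Figure 1):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta1, beta2, T = 0.1, 0.9, 0.99, 100000
x_adam = x_ams = 0.0
m1 = v1 = m2 = v2 = vhat = 0.0
for t in range(1, T + 1):
    g = 1010.0 if rng.random() < 0.01 else -10.0   # gradient of f_t
    at = alpha / np.sqrt(t)
    # ADAM
    m1 = beta1 * m1 + (1 - beta1) * g
    v1 = beta2 * v1 + (1 - beta2) * g * g
    x_adam = float(np.clip(x_adam - at * m1 / np.sqrt(v1), -1.0, 1.0))
    # AMSGRAD
    m2 = beta1 * m2 + (1 - beta1) * g
    v2 = beta2 * v2 + (1 - beta2) * g * g
    vhat = max(vhat, v2)
    x_ams = float(np.clip(x_ams - at * m2 / np.sqrt(vhat), -1.0, 1.0))
print(x_adam, x_ams)  # ADAM drifts toward +1; AMSGRAD heads to the optimum -1
```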
Logistic Regression: To investigate the performance of the algorithms on convex problems, we compare AMSGRAD with ADAM on a logistic regression problem. We use the MNIST dataset for this experiment; the task is to classify a 784-dimensional image vector into one of 10 class labels. The step size parameter $\alpha_t$ is set to $\alpha/\sqrt{t}$ for both ADAM and AMSGRAD in our experiments, consistent with the theory. We use a minibatch version of these algorithms with the minibatch size set to 128. We set $\beta_1 = 0.9$, and $\beta_2$ is chosen from the set $\{0.99, 0.999\}$; both are fixed throughout the experiment. The parameters $\alpha$ and $\beta_2$ are chosen by grid search. We report the train and test loss with respect to iterations in Figure 2. We can see that AMSGRAD performs better than ADAM with respect to both train and test loss. We also observed that AMSGRAD is relatively more robust to parameter changes in comparison to ADAM.
Neural Networks: For our first experiment, we trained a simple neural network with one fully connected hidden layer for the multiclass classification problem on MNIST. Similar to the previous experiment, we use $\beta_1 = 0.9$, and $\beta_2$ is chosen from $\{0.99, 0.999\}$. The hidden layer consists of 100 rectified linear units (ReLUs). Furthermore, we use a constant $\alpha_t = \alpha$ throughout all our experiments on neural networks. Such a parameter setting choice for ADAM is consistent with the ones typically used in the deep learning community for training neural networks. A grid search is used to determine the parameters that provide the best performance for the algorithm.
Finally, we consider the multiclass classification problem on the standard CIFAR-10 dataset, which consists of 60,000 labeled examples of $32 \times 32$ images. We use CIFARNET, a convolutional neural network (CNN) with several layers of convolution, pooling and non-linear units, for training a multiclass classifier for this problem. In particular, this architecture has 2 convolutional layers with 64 channels and a kernel size of $6 \times 6$, followed by 2 fully connected layers of size 384 and 192. The network uses $2 \times 2$ max pooling and layer response normalization between the convolutional layers (Krizhevsky et al., 2012). A dropout layer with keep probability 0.5 is applied between the fully connected layers (Srivastava et al., 2014). The minibatch size is also set to 128, similar to the previous experiments. The results for this problem are reported in Figure 2. The parameters for ADAM and AMSGRAD are selected in a way similar to the previous experiments. We can see that AMSGRAD performs considerably better than ADAM on train loss and accuracy. Furthermore, this performance gain also translates into good performance on test loss.
EXTENSION: ADAMNC ALGORITHM
An alternative approach is to use an increasing schedule of $\beta_2$ in ADAM. This approach, unlike Algorithm 2, does not require changing the structure of ADAM but rather uses non-constant $\beta_1$ and $\beta_2$. The pseudocode for the algorithm, ADAMNC, is provided in the appendix (Algorithm 3). We show that by appropriate selection of $\beta_{1t}$ and $\beta_{2t}$, we can achieve good convergence rates.
Theorem 5. Let $\{x_t\}$ and $\{v_t\}$ be the sequences obtained from Algorithm 3, $\alpha_t = \alpha/\sqrt{t}$, $\beta_1 = \beta_{11}$ and $\beta_{1t} \le \beta_1$ for all $t \in [T]$. Assume that $\mathcal{F}$ has bounded diameter $D_\infty$ and $\|\nabla f_t(x)\|_\infty \le G_\infty$ for all $t \in [T]$ and $x \in \mathcal{F}$. Furthermore, let $\{\beta_{2t}\}$ be such that the following conditions are satisfied:
1. $\frac{1}{\alpha_t} \sqrt{\sum_{j=1}^{t} \prod_{k=1}^{t-j} \beta_{2(t-k+1)} (1 - \beta_{2j}) g_{j,i}^2} \ge \frac{1}{\zeta} \sqrt{\sum_{j=1}^{t} g_{j,i}^2}$ for some $\zeta > 0$ and all $t \in [T]$, $j \in [d]$.
2. $\frac{v_{t,i}^{1/2}}{\alpha_t} \ge \frac{v_{t-1,i}^{1/2}}{\alpha_{t-1}}$ for all $t \in \{2, \dots, T\}$ and $i \in [d]$.
Then for $x_t$ generated using the ADAMNC algorithm (Algorithm 3), we have the following bound on the regret:
$$R_T \le \frac{D_\infty^2}{2\alpha(1 - \beta_1)} \sum_{i=1}^{d} \sqrt{T}\, v_{T,i}^{1/2} + \frac{D_\infty^2}{(1 - \beta_1)^2} \sum_{t=1}^{T} \sum_{i=1}^{d} \frac{\beta_{1t} v_{t,i}^{1/2}}{\alpha_t} + \frac{2\zeta}{(1 - \beta_1)^3} \sum_{i=1}^{d} \|g_{1:T,i}\|_2.$$
The above result assumes selection of $\{(\alpha_t, \beta_{2t})\}$ such that $\Gamma_t \succeq 0$ for all $t \in \{2, \dots, T\}$. However, one can generalize the result to deal with the case where this constraint is violated, as long as the violation is not too large or frequent. The following is an immediate consequence of the above result.
Corollary 2. Suppose $\beta_{1t} = \beta_1 \lambda^{t-1}$ and $\beta_{2t} = 1 - 1/t$ in Theorem 5, then we have
$$R_T \le \frac{D_\infty^2}{2\alpha(1 - \beta_1)} \sum_{i=1}^{d} \|g_{1:T,i}\|_2 + \frac{\beta_1 D_\infty^2 G_\infty}{(1 - \beta_1)^2 (1 - \lambda)^2} + \frac{2\zeta}{(1 - \beta_1)^3} \sum_{i=1}^{d} \|g_{1:T,i}\|_2.$$
The above corollary follows from the trivial fact that $v_{t,i} = \sum_{j=1}^{t} g_{j,i}^2 / t$ for all $i \in [d]$ when $\beta_{2t} = 1 - 1/t$. This corollary is interesting insofar as such a parameter setting effectively yields a momentum-based variant of ADAGRAD. Similar to ADAGRAD, the regret is data-dependent and can be considerably better than the $O(\sqrt{dT})$ regret of SGD when $\sum_{i=1}^{d} \|g_{1:T,i}\|_2 \ll \sqrt{dT}$ (Duchi et al., 2011). It is easy to generalize this result to similar settings of $\beta_{2t}$. Similar to Corollary 1, one can use a more modest decay of $\beta_{1t} = \beta_1/t$ and still ensure a data-dependent regret of $O(\sqrt{T})$.
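A sketch of the ADAMNC update with the schedule $\beta_{2t} = 1 - 1/t$ from Corollary 2 (constant $\beta_1$ here for simplicity; the corollary uses $\beta_{1t} = \beta_1 \lambda^{t-1}$). With this schedule, $v_t$ is exactly the running average of squared gradients, which is what makes the method a momentum variant of ADAGRAD:

```python
import numpy as np

def adamnc_step(x, g, m, v, t, alpha=0.1, beta1=0.9, lo=-1.0, hi=1.0):
    """One ADAMNC step with beta_{2t} = 1 - 1/t, so that
    v_t = (sum_{j<=t} g_j^2) / t and, with alpha_t = alpha / sqrt(t),
    the effective rate is alpha / sqrt(sum_{j<=t} g_j^2), as in ADAGRAD."""
    beta2t = 1.0 - 1.0 / t
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2t * v + (1.0 - beta2t) * g * g
    x = np.clip(x - (alpha / np.sqrt(t)) * m / (np.sqrt(v) + 1e-12), lo, hi)
    return x, m, v
```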
DISCUSSION
In this paper, we study exponential moving average variants of ADAGRAD and identify an important flaw in these algorithms which can lead to undesirable convergence behavior. We demonstrate these problems through carefully constructed examples where RMSPROP and ADAM converge to highly suboptimal solutions. In general, any algorithm that relies on an essentially fixed-sized window of past gradients to scale the gradient updates will suffer from this problem.
We proposed fixes to this problem by slightly modifying the algorithms, essentially endowing the algorithms with a long-term memory of past gradients. These fixes retain the good practical performance of the original algorithms, and in some cases actually show improvements.
The primary goal of this paper is to highlight the problems with popular exponential moving average variants of ADAGRAD from a theoretical perspective. RMSPROP and ADAM have been immensely successful in development of several state-of-the-art solutions for a wide range of problems. Thus, it is important to understand their behavior in a rigorous manner and be aware of potential pitfalls while using them in practice. We believe this paper is a first step in this direction and suggests good design principles for faster and better stochastic optimization.
APPENDIX
A PROOF OF THEOREM 1
Proof. We consider the setting where the $f_t$ are linear functions and $\mathcal{F} = [-1, 1]$. In particular, we define the following function sequence:
$$f_t(x) = \begin{cases} Cx, & \text{for } t \bmod 3 = 1 \\ -x, & \text{otherwise,} \end{cases}$$
where $C \ge 2$. For this function sequence, it is easy to see that the point $x = -1$ provides the minimum regret. Without loss of generality, assume that the initial point is $x_1 = 1$. This can be assumed without any loss of generality because for any choice of initial point, we can always translate the coordinate system such that the initial point is $x_1 = 1$ in the new coordinate system and then choose the sequence of functions as above in the new coordinate system. Also, since the problem is one-dimensional, we drop indices representing coordinates from all quantities in Algorithm 1. Consider the execution of the ADAM algorithm for this sequence of functions with
$$\beta_1 = 0, \quad \beta_2 = \frac{1}{1 + C^2} \quad \text{and} \quad \alpha_t = \frac{\alpha}{\sqrt{t}}, \quad \text{where } \alpha < \sqrt{1 - \beta_2}.$$
Note that since the gradients of these functions are bounded, $\mathcal{F}$ has bounded $L_\infty$ diameter and $\beta_1^2/\sqrt{\beta_2} < 1$. Hence, the conditions on the parameters required for ADAM are satisfied (refer to (Kingma & Ba, 2015) for more details).
Our main claim is that for the iterates $\{x_t\}_{t=1}^{\infty}$ arising from the updates of ADAM, we have $x_t > 0$ for all $t \in \mathbb{N}$ and furthermore, $x_{3t+1} = 1$ for all $t \in \mathbb{N} \cup \{0\}$. For proving this, we resort to the principle of mathematical induction. Since $x_1 = 1$, both the aforementioned conditions hold for the base case. Suppose for some $t \in \mathbb{N} \cup \{0\}$, we have $x_i > 0$ for all $i \in [3t + 1]$ and $x_{3t+1} = 1$. Our aim is to prove that $x_{3t+2}$ and $x_{3t+3}$ are positive and $x_{3t+4} = 1$. We first observe that the gradients have the following form:
$$\nabla f_i(x) = \begin{cases} C, & \text{for } i \bmod 3 = 1 \\ -1, & \text{otherwise.} \end{cases}$$
From the $(3t + 1)$th update of ADAM in Equation (1), we obtain
$$\hat{x}_{3t+2} = x_{3t+1} - \frac{\alpha C}{\sqrt{(3t + 1)(\beta_2 v_{3t} + (1 - \beta_2)C^2)}} = 1 - \frac{\alpha C}{\sqrt{(3t + 1)(\beta_2 v_{3t} + (1 - \beta_2)C^2)}}.$$
The equality follows from the induction hypothesis. We observe the following:
$$\frac{\alpha C}{\sqrt{(3t + 1)(\beta_2 v_{3t} + (1 - \beta_2)C^2)}} \le \frac{\alpha C}{\sqrt{(3t + 1)(1 - \beta_2)C^2}} = \frac{\alpha}{\sqrt{(3t + 1)(1 - \beta_2)}} < 1. \tag{4}$$
The second inequality follows from the step size choice that $\alpha < \sqrt{1 - \beta_2}$. Therefore, we have $0 < \hat{x}_{3t+2} < 1$ and hence $x_{3t+2} = \hat{x}_{3t+2} > 0$. Furthermore, after the $(3t + 2)$th and $(3t + 3)$th updates of ADAM in Equation (1), we have the following:
$$\hat{x}_{3t+3} = x_{3t+2} + \frac{\alpha}{\sqrt{(3t + 2)(\beta_2 v_{3t+1} + (1 - \beta_2))}}, \qquad \hat{x}_{3t+4} = x_{3t+3} + \frac{\alpha}{\sqrt{(3t + 3)(\beta_2 v_{3t+2} + (1 - \beta_2))}}.$$
Since $x_{3t+2} > 0$, it is easy to see that $x_{3t+3} > 0$. To complete the proof, we need to show that $x_{3t+4} = 1$. In order to prove this claim, we show that $\hat{x}_{3t+4} \ge 1$, which readily translates to $x_{3t+4} = 1$ because $x_{3t+4} = \Pi_{\mathcal{F}}(\hat{x}_{3t+4})$ and $\mathcal{F} = [-1, 1]$; here $\Pi_{\mathcal{F}}$ is the simple Euclidean projection (note that in one dimension, $\Pi_{\mathcal{F}, \sqrt{V_t}} = \Pi_{\mathcal{F}}$). We observe the following:
$$\hat{x}_{3t+4} = \min(\hat{x}_{3t+3}, 1) + \frac{\alpha}{\sqrt{(3t + 3)(\beta_2 v_{3t+2} + (1 - \beta_2))}}.$$
The above equality is due to the fact that $\hat{x}_{3t+3} > 0$ and the property of the projection operation onto the set $\mathcal{F} = [-1, 1]$. We consider the following two cases:
1. Suppose $\hat{x}_{3t+3} \ge 1$; then it is easy to see from the above equality that $\hat{x}_{3t+4} > 1$.
2. Suppose $\hat{x}_{3t+3} < 1$; then we have the following:
$$\hat{x}_{3t+4} = \hat{x}_{3t+3} + \frac{\alpha}{\sqrt{(3t + 3)(\beta_2 v_{3t+2} + (1 - \beta_2))}} = x_{3t+2} + \frac{\alpha}{\sqrt{(3t + 2)(\beta_2 v_{3t+1} + (1 - \beta_2))}} + \frac{\alpha}{\sqrt{(3t + 3)(\beta_2 v_{3t+2} + (1 - \beta_2))}}$$
$$= 1 - \frac{\alpha C}{\sqrt{(3t + 1)(\beta_2 v_{3t} + (1 - \beta_2)C^2)}} + \frac{\alpha}{\sqrt{(3t + 2)(\beta_2 v_{3t+1} + (1 - \beta_2))}} + \frac{\alpha}{\sqrt{(3t + 3)(\beta_2 v_{3t+2} + (1 - \beta_2))}}.$$
The third equality is due to the fact that $x_{3t+2} = \hat{x}_{3t+2}$. Thus, to prove $\hat{x}_{3t+4} > 1$, it is enough to prove:
$$\underbrace{\frac{\alpha C}{\sqrt{(3t + 1)(\beta_2 v_{3t} + (1 - \beta_2)C^2)}}}_{T_1} \le \underbrace{\frac{\alpha}{\sqrt{(3t + 2)(\beta_2 v_{3t+1} + (1 - \beta_2))}} + \frac{\alpha}{\sqrt{(3t + 3)(\beta_2 v_{3t+2} + (1 - \beta_2))}}}_{T_2}.$$
We have the following bound on the term $T_1$ from Equation (4):
$$T_1 \le \frac{\alpha}{\sqrt{(3t + 1)(1 - \beta_2)}}. \tag{5}$$
Furthermore, we lower bound $T_2$ in the following manner:
$$T_2 = \frac{\alpha}{\sqrt{(3t + 2)(\beta_2 v_{3t+1} + (1 - \beta_2))}} + \frac{\alpha}{\sqrt{(3t + 3)(\beta_2 v_{3t+2} + (1 - \beta_2))}} \ge \frac{\alpha}{\sqrt{\beta_2 C^2 + (1 - \beta_2)}} \left[ \frac{1}{\sqrt{3t + 2}} + \frac{1}{\sqrt{3t + 3}} \right]$$
$$\ge \frac{\alpha}{\sqrt{\beta_2 C^2 + (1 - \beta_2)}} \left[ \frac{1}{\sqrt{2(3t + 1)}} + \frac{1}{\sqrt{2(3t + 1)}} \right] = \frac{\sqrt{2}\,\alpha}{\sqrt{(3t + 1)(\beta_2 C^2 + (1 - \beta_2))}} = \frac{\alpha}{\sqrt{(3t + 1)(1 - \beta_2)}} \ge T_1. \tag{6}$$
The first inequality is due to the fact that $v_t \le C^2$ for all $t \in \mathbb{N}$. The last inequality follows from the inequality in Equation (5). The last equality is due to the following fact:
$$\frac{\beta_2 C^2 + (1 - \beta_2)}{2} = 1 - \beta_2$$
for the choice of $\beta_2 = 1/(1 + C^2)$ (indeed, for this choice $\beta_2 C^2 = 1 - \beta_2 = C^2/(1 + C^2)$, so the left-hand side equals $1 - \beta_2$). Therefore, we have $T_2 \ge T_1$ and hence $\hat{x}_{3t+4} \ge 1$.
Therefore, from both the cases, we see that $x_{3t+4} = 1$. Therefore, by the principle of mathematical induction the claim holds for all $t \in \mathbb{N} \cup \{0\}$. Thus, we have
$$f_{3t+1}(x_{3t+1}) + f_{3t+2}(x_{3t+2}) + f_{3t+3}(x_{3t+3}) - f_{3t+1}(-1) - f_{3t+2}(-1) - f_{3t+3}(-1) \ge 2C - 4.$$
Therefore, for every 3 steps, ADAM suffers a regret of at least $2C - 4$. More specifically, $R_T \ge (2C - 4)T/3$. Since $C \ge 2$, this regret can be very large and furthermore, $R_T/T \nrightarrow 0$ as $T \to \infty$, which completes the proof.
B PROOF OF THEOREM 2
Proof. The proof generalizes the optimization setting used in Theorem 1. Throughout the proof, we assume $\beta_1 < \sqrt{\beta_2}$, which is also a condition that (Kingma & Ba, 2015) assume in their paper. In this proof, we consider the setting where the $f_t$ are linear functions and $\mathcal{F} = [-1, 1]$. In particular, we define the following function sequence:
$$f_t(x) = \begin{cases} Cx, & \text{for } t \bmod C = 1 \\ -x, & \text{otherwise,} \end{cases}$$
where $C \in \mathbb{N}$ with $C \bmod 2 = 0$ satisfies the following:
$$(1 - \beta_1)\beta_1^{C-1} C \le 1 - \beta_1^{C-1}, \qquad \beta_2^{(C-2)/2} C^2 \le 1, \qquad \frac{3(1 - \beta_1)}{2\sqrt{1 - \beta_2}} \left( 1 + \frac{\gamma(1 - \gamma^{C-1})}{1 - \gamma} \right) + \frac{\beta_1^{C/2-1}}{1 - \beta_1} < \frac{C}{3}, \tag{7}$$
where $\gamma = \beta_1/\sqrt{\beta_2} < 1$. It is not hard to see that these conditions hold for a large constant $C$ that depends on $\beta_1$ and $\beta_2$. Since the problem is one-dimensional, we drop indices representing coordinates from all quantities in Algorithm 1. For this function sequence, it is easy to see that the point $x = -1$ provides the minimum regret since $C \ge 2$. Furthermore, the gradients have the following form:
$$\nabla f_t(x) = \begin{cases} C, & \text{for } t \bmod C = 1 \\ -1, & \text{otherwise.} \end{cases}$$
Our first observation is that $m_{kC} \le 0$ for all $k \in \mathbb{N} \cup \{0\}$. For $k = 0$, this holds trivially due to our initialization. For the general case, observe the following:
$$m_{kC+C} = -(1 - \beta_1) - (1 - \beta_1)\beta_1 - \dots - (1 - \beta_1)\beta_1^{C-2} + (1 - \beta_1)\beta_1^{C-1} C + \beta_1^C m_{kC} \tag{8}$$
$$= -(1 - \beta_1^{C-1}) + (1 - \beta_1)\beta_1^{C-1} C + \beta_1^C m_{kC}. \tag{9}$$
If $m_{kC} \le 0$, it can be easily shown that $m_{kC+C} \le 0$ for our selection of $C$ in Equation (7), by using the principle of mathematical induction. With this observation we continue to the main part of the proof. Let $T'$ be such that $t + C \le \tau^2 t$ for all $t \ge T'$, where $\tau \le 3/2$. All our analysis focuses on iterations $t \ge T'$. Note that any regret before $T'$ is just a constant because $T'$ is independent of $T$, and thus the average regret due to these iterations is negligible as $T \to \infty$. Consider an iterate at a time step $t$ of the form $kC$ after $T'$. Our claim is that
$$x_{t+C} \ge \min\{x_t + c_t, 1\} \tag{10}$$
for some $c_t > 0$. To see this, consider the updates of ADAM for the particular sequence of functions we are considering:
$$x_{t+1} = \Pi_{\mathcal{F}} \left( x_t - \frac{\alpha}{\sqrt{t}} \cdot \frac{(1 - \beta_1)C + \beta_1 m_t}{\sqrt{(1 - \beta_2)C^2 + \beta_2 v_t}} \right), \qquad x_{t+i} = \Pi_{\mathcal{F}} \left( x_{t+i-1} - \frac{\alpha}{\sqrt{t + i - 1}} \cdot \frac{-(1 - \beta_1) + \beta_1 m_{t+i-1}}{\sqrt{(1 - \beta_2) + \beta_2 v_{t+i-1}}} \right) \text{ for } i \in \{2, \dots, C\}.$$
For $i \in \{2, \dots, C\}$, we use the following notation:
$$\delta_t = -\frac{\alpha}{\sqrt{t}} \cdot \frac{(1 - \beta_1)C + \beta_1 m_t}{\sqrt{(1 - \beta_2)C^2 + \beta_2 v_t}}, \qquad \delta_{t+i} = -\frac{\alpha}{\sqrt{t + i}} \cdot \frac{-(1 - \beta_1) + \beta_1 m_{t+i}}{\sqrt{(1 - \beta_2) + \beta_2 v_{t+i}}} \text{ for } i \in \{1, \dots, C - 1\}.$$
Note that if $\delta_{t+j} \ge 0$ for some $j \in \{1, \dots, C - 1\}$, then $\delta_{t+l} \ge 0$ for all $l \in \{j, \dots, C - 1\}$. This follows from the fact that the gradient is negative for all time steps $i \in \{2, \dots, C\}$. Using Lemma 6 for $\{x_{t+1}, \dots, x_{t+C}\}$ and $\{\delta_t, \dots, \delta_{t+C-1}\}$, we have the following:
$$x_{t+C} \ge \min\left\{ 1,\ x_t + \sum_{i=t}^{t+C-1} \delta_i \right\}.$$
Let $i' = C/2$. In order to prove our claim in Equation (10), we need to prove the following:
$$\delta = \sum_{i=t}^{t+C-1} \delta_i > 0.$$
To this end, we observe the following:
$$\sum_{i=t+1}^{t+C-1} \delta_i = \sum_{i=1}^{C-1} -\frac{\alpha}{\sqrt{t + i}} \cdot \frac{-(1 - \beta_1) + \beta_1 m_{t+i}}{\sqrt{(1 - \beta_2) + \beta_2 v_{t+i}}}$$
$$= \sum_{i=2}^{C} -\frac{\alpha}{\sqrt{t + i - 1}} \cdot \frac{-(1 - \beta_1) + (1 - \beta_1)\sum_{j=1}^{i-2} \beta_1^j (-1) + (1 - \beta_1)\beta_1^{i-1} C + \beta_1^i m_t}{\sqrt{(1 - \beta_2) + \beta_2 v_{t+i-1}}}$$
$$\ge \sum_{i=2}^{C} \frac{\alpha}{\sqrt{t + i - 1}} \cdot \frac{(1 - \beta_1) + (1 - \beta_1)\sum_{j=1}^{i-2} \beta_1^j - (1 - \beta_1)\beta_1^{i-1} C}{\sqrt{(1 - \beta_2) + \beta_2 v_{t+i-1}}}$$
$$\ge \sum_{i=2}^{C} \frac{\alpha}{\tau\sqrt{t}} \cdot \frac{(1 - \beta_1) + (1 - \beta_1)\sum_{j=1}^{i-2} \beta_1^j}{\sqrt{(1 - \beta_2) + \beta_2 v_{t+i-1}}} - \sum_{i=2}^{C} \frac{\alpha}{\sqrt{t}} \cdot \frac{(1 - \beta_1)\beta_1^{i-1} C}{\sqrt{(1 - \beta_2) + \beta_2 v_{t+i-1}}}$$
$$\ge \sum_{i=2}^{C} \frac{\alpha}{\tau\sqrt{t}} \cdot \frac{(1 - \beta_1) + (1 - \beta_1)\sum_{j=1}^{i-2} \beta_1^j}{\sqrt{(1 - \beta_2) + \beta_2 v_{t+i-1}}} - \sum_{i=2}^{C} \frac{\alpha}{\sqrt{t}} \cdot \frac{(1 - \beta_1)\beta_1^{i-1} C}{\sqrt{(1 - \beta_2) + \beta_2^{i-1}(1 - \beta_2)C^2}}$$
$$\ge \frac{\alpha}{\tau\sqrt{t}} \sum_{i=i'}^{C} \frac{1 - \beta_1^{i-1}}{\sqrt{(1 - \beta_2) + 2\beta_2}} - \frac{\alpha}{\sqrt{t}} \cdot \frac{\gamma(1 - \beta_1)(1 - \gamma^{C-1})}{(1 - \gamma)\sqrt{1 - \beta_2}}$$
$$\ge \frac{\alpha}{\tau\sqrt{t}\sqrt{1 + \beta_2}} \left( C - i' - \frac{\beta_1^{i'-1}}{1 - \beta_1} \right) - \frac{\alpha}{\sqrt{t}} \cdot \frac{\gamma(1 - \beta_1)(1 - \gamma^{C-1})}{(1 - \gamma)\sqrt{1 - \beta_2}} \ge 0.$$
The first equality follows from the definition of $m_{t+i-1}$. The first inequality follows from the fact that $m_t \le 0$ when $t \bmod C = 0$ (see Equation (9) and the arguments based on it). The second inequality follows from the definition of $\tau$, namely that $t + C \le \tau^2 t$ for all $t \ge T'$. The third inequality is due to the fact that $v_{t+i-1} \ge (1 - \beta_2)\beta_2^{i-2} C^2$. The last inequality follows from our choice of $C$. The fourth inequality is due to the following upper bound, which applies for all $i' \le i \le C$:
$$v_{t+i-1} = (1 - \beta_2) \sum_{j=1}^{t+i-1} \beta_2^{t+i-1-j} g_j^2 \le (1 - \beta_2) \left( \sum_{h=1}^{k} \beta_2^{t+i-1-hC} C^2 + \sum_{j=1}^{t+i-1} \beta_2^{t+i-1-j} \right)$$
$$\le (1 - \beta_2) \left( \beta_2^{i'-1} C^2 \sum_{h=0}^{k-1} \beta_2^{hC} + \frac{1}{1 - \beta_2} \right) \le (1 - \beta_2) \left( \frac{\beta_2^{i'-1} C^2}{1 - \beta_2^C} + \frac{1}{1 - \beta_2} \right) \le 2.$$
The first inequality follows from the online problem setting for the counter-example, i.e., the gradient is $C$ once every $C$ iterations and $-1$ for the rest. The last inequality follows from the facts that $\beta_2^{i'-1} C^2 \le 1$ and $\beta_2^C \le \beta_2$. Furthermore, from the above inequality, we have
$$\sum_{i=t}^{t+C-1} \delta_i \ge \delta_t + \frac{\alpha}{\tau\sqrt{t}\sqrt{1 + \beta_2}} \left( C - i' - \frac{\beta_1^{i'-1}}{1 - \beta_1} \right) - \frac{\alpha}{\sqrt{t}} \cdot \frac{\gamma(1 - \beta_1)(1 - \gamma^{C-1})}{(1 - \gamma)\sqrt{1 - \beta_2}}$$
$$= -\frac{\alpha}{\sqrt{t}} \cdot \frac{(1 - \beta_1)C + \beta_1 m_t}{\sqrt{(1 - \beta_2)C^2 + \beta_2 v_t}} + \frac{\alpha}{\tau\sqrt{t}\sqrt{1 + \beta_2}} \left( C - i' - \frac{\beta_1^{i'-1}}{1 - \beta_1} \right) - \frac{\alpha}{\sqrt{t}} \cdot \frac{\gamma(1 - \beta_1)(1 - \gamma^{C-1})}{(1 - \gamma)\sqrt{1 - \beta_2}}$$
$$\ge -\frac{\alpha}{\sqrt{t}} \cdot \frac{(1 - \beta_1)C}{\sqrt{(1 - \beta_2)C^2}} + \frac{\alpha}{\tau\sqrt{t}\sqrt{1 + \beta_2}} \left( C - i' - \frac{\beta_1^{i'-1}}{1 - \beta_1} \right) - \frac{\alpha}{\sqrt{t}} \cdot \frac{\gamma(1 - \beta_1)(1 - \gamma^{C-1})}{(1 - \gamma)\sqrt{1 - \beta_2}}$$
$$\ge \frac{\alpha}{\tau\sqrt{t}} \left( \frac{C}{3} - \frac{\beta_1^{C/2-1}}{1 - \beta_1} - \frac{3(1 - \beta_1)}{2\sqrt{1 - \beta_2}} \left( 1 + \frac{\gamma(1 - \gamma^{C-1})}{1 - \gamma} \right) \right) =: \frac{\alpha}{\sqrt{t}} \lambda.$$
Note that from our choice of $C$, it is easy to see that $\lambda \ge 0$. Also, observe that $\lambda$ is independent of $t$. Thus, $x_{t+C} \ge \min\{1, x_t + \lambda/\sqrt{t}\}$. From this fact, we also see the following:
1. If $x_t = 1$, then $x_{t+C} = 1$ for all $t \ge T'$ such that $t \bmod C = 0$.
2. There exists a constant $T_1 \ge T'$ such that $x_{T_1} = 1$, where $T_1 \bmod C = 0$.
The first point simply follows from the relation $x_{t+C} \ge \min\{1, x_t + \lambda/\sqrt{t}\}$. The second point is due to the divergent nature of the sum $\sum_{t=t'}^{\infty} 1/\sqrt{t}$. Therefore, we have
$$\sum_{i=1}^{C} f_{kC+i}(x_{kC+i}) - \sum_{i=1}^{C} f_{kC+i}(-1) \ge 2C - 2(C - 1) = 2,$$
where $kC \ge T_1$. Thus, when $t \ge T_1$, for every $C$ steps, ADAM suffers a regret of at least 2. More specifically, $R_T \ge 2(T - T_1)/C$. Thus, $R_T/T \nrightarrow 0$ as $T \to \infty$, which completes the proof.
C PROOF OF THEOREM 3
Proof. Let $\delta$ be an arbitrarily small positive constant, and let $C$ be a large enough constant chosen as a function of $\beta_1, \beta_2, \delta$ that will be determined in the proof.
Consider the following one-dimensional stochastic optimization setting over the domain $[-1, 1]$. At each time step $t$, the function $f_t(x)$ is chosen i.i.d. as follows:
$$f_t(x) = \begin{cases} Cx & \text{with probability } p := \frac{1 + \delta}{C + 1} \\ -x & \text{with probability } 1 - p. \end{cases}$$
The expected function is $F(x) = \delta x$; thus the optimum point over $[-1, 1]$ is $x = -1$. At each time step $t$, the gradient $g_t$ equals $C$ with probability $p$ and $-1$ with probability $1 - p$. Thus, the step taken by ADAM is
$$\Delta_t = \frac{-\alpha_t (\beta_1 m_{t-1} + (1 - \beta_1) g_t)}{\sqrt{\beta_2 v_{t-1} + (1 - \beta_2) g_t^2}}.$$
We now show that for a large enough constant $C$, $\mathbb{E}[\Delta_t] \ge 0$, which implies that ADAM's steps keep drifting away from the optimal solution $x = -1$.
Lemma 1. For a large enough constant $C$ (as a function of $\beta_1, \beta_2, \delta$), we have $\mathbb{E}[\Delta_t] \ge 0$.
Proof. Let $\mathbb{E}_t[\cdot]$ denote expectation conditioned on all randomness up to and including time $t - 1$. Taking the conditional expectation of the step, we have
$$\frac{1}{\alpha_t} \mathbb{E}_t[\Delta_t] = p \cdot \frac{-(\beta_1 m_{t-1} + (1 - \beta_1)C)}{\sqrt{\beta_2 v_{t-1} + (1 - \beta_2)C^2}} + (1 - p) \cdot \frac{-(\beta_1 m_{t-1} - (1 - \beta_1))}{\sqrt{\beta_2 v_{t-1} + (1 - \beta_2)}}$$
$$= p \cdot \underbrace{\frac{-(\beta_1 m_{t-1} + (1 - \beta_1)C)}{\sqrt{\beta_2 v_{t-1} + (1 - \beta_2)C^2}}}_{T_1} + (1 - p) \cdot \underbrace{\frac{-\beta_1 m_{t-1}}{\sqrt{\beta_2 v_{t-1} + (1 - \beta_2)}}}_{T_2} + (1 - p) \cdot \underbrace{\frac{1 - \beta_1}{\sqrt{\beta_2 v_{t-1} + (1 - \beta_2)}}}_{T_3}. \tag{11}$$
We will bound the expectations of the terms $T_1$, $T_2$ and $T_3$ above separately.
First, for $T_1$, we have
$$T_1 \ge \frac{-(\beta_1 C + (1 - \beta_1)C)}{\sqrt{(1 - \beta_2)C^2}} \ge -\frac{1}{\sqrt{1 - \beta_2}}. \tag{12}$$
Next, we bound $\mathbb{E}[T_2]$. Define $k = \frac{\log(C + 1)}{\log(1/\beta_1)}$. This choice of $k$ ensures that $\beta_1^k C \le 1 - \beta_1^k$. Now, note that
$$m_{t-1} = (1 - \beta_1) \sum_{i=1}^{t-1} \beta_1^{t-1-i} g_i.$$
Let $\mathcal{E}$ denote the event that for every $i = t - 1, t - 2, \dots, \max\{t - k, 1\}$, we have $g_i = -1$. Note that $\Pr[\mathcal{E}] \ge 1 - kp$.
Assuming $\mathcal{E}$ happens, we can bound $m_{t-1}$ as follows:
$$m_{t-1} \le (1 - \beta_1) \sum_{i=\max\{t-k,1\}}^{t-1} \beta_1^{t-1-i} \cdot (-1) + (1 - \beta_1) \sum_{i=1}^{\max\{t-k,1\}-1} \beta_1^{t-1-i} \cdot C \le -(1 - \beta_1^k) + \beta_1^k C \le 0,$$
and so $T_2 \ge 0$.
With probability at most $kp$, the event $\mathcal{E}$ doesn't happen. In this case, we bound $T_2$ as follows. We first bound $m_{t-1}$ in terms of $v_{t-1}$ using the Cauchy-Schwarz inequality:
$$m_{t-1} = (1 - \beta_1) \sum_{i=1}^{t-1} \beta_1^{t-1-i} g_i \le (1 - \beta_1) \sqrt{\sum_{i=1}^{t-1} \beta_2^{t-1-i} g_i^2} \cdot \sqrt{\sum_{i=1}^{t-1} \left( \frac{\beta_1^2}{\beta_2} \right)^{t-1-i}} \le \underbrace{(1 - \beta_1) \sqrt{\frac{\beta_2}{(1 - \beta_2)(\beta_2 - \beta_1^2)}}}_{A} \cdot \sqrt{v_{t-1}}.$$
Thus, $v_{t-1} \ge m_{t-1}^2 / A^2$, and we have
$$T_2 = \frac{-\beta_1 m_{t-1}}{\sqrt{\beta_2 v_{t-1} + (1 - \beta_2)}} \ge \frac{-\beta_1 |m_{t-1}|}{\sqrt{\beta_2 (m_{t-1}^2 / A^2)}} = \frac{-\beta_1 (1 - \beta_1)}{\sqrt{(1 - \beta_2)(\beta_2 - \beta_1^2)}}.$$
Hence, we have
$$\mathbb{E}[T_2] \ge 0 \cdot (1 - kp) + \frac{-\beta_1 (1 - \beta_1)}{\sqrt{(1 - \beta_2)(\beta_2 - \beta_1^2)}} \cdot kp = \frac{-\beta_1 (1 - \beta_1) kp}{\sqrt{(1 - \beta_2)(\beta_2 - \beta_1^2)}}. \tag{13}$$
Finally, we lower bound $\mathbb{E}[T_3]$ using Jensen's inequality applied to the convex function $1/\sqrt{x}$:
$$\mathbb{E}[T_3] \ge \frac{1 - \beta_1}{\sqrt{\beta_2 \mathbb{E}[v_{t-1}] + (1 - \beta_2)}} \ge \frac{1 - \beta_1}{\sqrt{\beta_2 (1 + \delta)C + (1 - \beta_2)}}.$$
The last inequality follows by using the facts that $v_{t-1} = (1 - \beta_2) \sum_{i=1}^{t-1} \beta_2^{t-1-i} g_i^2$ and that the random variables $g_1^2, g_2^2, \dots, g_{t-1}^2$ are i.i.d., and so
$$\mathbb{E}[v_{t-1}] = (1 - \beta_2^{t-1}) \mathbb{E}[g_1^2] = (1 - \beta_2^{t-1})(C^2 p + (1 - p)) = (1 - \beta_2^{t-1})((1 + \delta)C - \delta) \le (1 + \delta)C. \tag{14}$$
Combining the bounds in (12), (13), and (14) in the expression (11) for ADAM's step, and plugging in the values of the parameters $k$ and $p$, we get the following lower bound on $\mathbb{E}[\Delta_t]$ (up to the positive factor $\alpha_t$):
$$-\frac{1 + \delta}{C + 1} \cdot \frac{1}{\sqrt{1 - \beta_2}} - \frac{\beta_1 (1 - \beta_1) \frac{\log(C + 1)}{\log(1/\beta_1)} \cdot \frac{1 + \delta}{C + 1}}{\sqrt{(1 - \beta_2)(\beta_2 - \beta_1^2)}} + \left( 1 - \frac{1 + \delta}{C + 1} \right) \cdot \frac{1 - \beta_1}{\sqrt{\beta_2 (1 + \delta)C + (1 - \beta_2)}}.$$
It is evident that for $C$ large enough (as a function of $\delta$, $\beta_1$, $\beta_2$), the above expression can be made non-negative.
For the sake of simplicity, let us assume, as is routinely done in practice, that we are using a version of ADAM that doesn't perform any projection steps. Then the lemma implies that $\mathbb{E}[x_{t+1}] \ge \mathbb{E}[x_t]$. Via a simple induction, we conclude that $\mathbb{E}[x_t] \ge x_1$ for all $t$. Thus, if we assume that the starting point $x_1 \ge 0$, then $\mathbb{E}[x_t] \ge 0$. Since $F$ is a monotonically increasing function, we have $\mathbb{E}[F(x_t)] \ge F(0) = 0$, whereas $F(-1) = -\delta$. Thus the expected suboptimality gap is always $\delta > 0$, which implies that ADAM doesn't converge to the optimal solution.
D PROOF OF THEOREM 4
The proof of Theorem 4 presented below is along the lines of Theorem 4.1 in (Kingma & Ba, 2015), which provides a claim of convergence for ADAM. As our examples showing non-convergence of ADAM indicate, the proof in (Kingma & Ba, 2015) has problems. The main issue in their proof is the incorrect assumption that $\Gamma_t$ defined in their equation (3) is positive semidefinite, and we also identified problems in Lemmas 10.3 and 10.4 in their paper. The following proof fixes these issues and provides a proof of convergence for AMSGRAD.
Proof. We begin with the following observation:
$$x_{t+1} = \Pi_{\mathcal{F}, \sqrt{\hat{V}_t}}(x_t - \alpha_t \hat{V}_t^{-1/2} m_t) = \arg\min_{x \in \mathcal{F}} \left\| \hat{V}_t^{1/4} \left( x - (x_t - \alpha_t \hat{V}_t^{-1/2} m_t) \right) \right\|.$$
Furthermore, $\Pi_{\mathcal{F}, \sqrt{\hat{V}_t}}(x^*) = x^*$ for all $x^* \in \mathcal{F}$. In this proof, we will use $x_i^*$ to denote the $i$th coordinate of $x^*$. Using Lemma 4 with $u_1 = x_{t+1}$ and $u_2 = x^*$, we have the following:
$$\|\hat{V}_t^{1/4}(x_{t+1} - x^*)\|^2 \le \|\hat{V}_t^{1/4}(x_t - \alpha_t \hat{V}_t^{-1/2} m_t - x^*)\|^2$$
$$= \|\hat{V}_t^{1/4}(x_t - x^*)\|^2 + \alpha_t^2 \|\hat{V}_t^{-1/4} m_t\|^2 - 2\alpha_t \langle m_t, x_t - x^* \rangle$$
$$= \|\hat{V}_t^{1/4}(x_t - x^*)\|^2 + \alpha_t^2 \|\hat{V}_t^{-1/4} m_t\|^2 - 2\alpha_t \langle \beta_{1t} m_{t-1} + (1 - \beta_{1t}) g_t, x_t - x^* \rangle.$$
Rearranging the above inequality, we have
$$\langle g_t, x_t - x^* \rangle \le \frac{1}{2\alpha_t (1 - \beta_{1t})} \left[ \|\hat{V}_t^{1/4}(x_t - x^*)\|^2 - \|\hat{V}_t^{1/4}(x_{t+1} - x^*)\|^2 \right] + \frac{\alpha_t}{2(1 - \beta_{1t})} \|\hat{V}_t^{-1/4} m_t\|^2 + \frac{\beta_{1t}}{1 - \beta_{1t}} \langle m_{t-1}, x_t - x^* \rangle$$
$$\le \frac{1}{2\alpha_t (1 - \beta_{1t})} \left[ \|\hat{V}_t^{1/4}(x_t - x^*)\|^2 - \|\hat{V}_t^{1/4}(x_{t+1} - x^*)\|^2 \right] + \frac{\alpha_t}{2(1 - \beta_{1t})} \|\hat{V}_t^{-1/4} m_t\|^2 + \frac{\beta_{1t}}{2(1 - \beta_{1t})} \alpha_t \|\hat{V}_t^{-1/4} m_{t-1}\|^2 + \frac{\beta_{1t}}{2\alpha_t (1 - \beta_{1t})} \|\hat{V}_t^{1/4}(x_t - x^*)\|^2. \tag{15}$$
The second inequality follows from simple application of Cauchy-Schwarz and Young's inequality. We now use the standard approach of bounding the regret at each step using convexity of the function f t in the following manner:
T t=1 f t (x t ) − f t (x * ) ≤ T t=1 g t , x t − x * ≤ T t=1 1 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 − V 1/4 t (x t+1 − x * ) 2 + α t 2(1 − β 1t ) V −1/4 t m t 2 + β 1t 2(1 − β 1t ) α t−1 V −1/4 t−1 m t−1 2 + β 1t 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 ≤ T t=1 1 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 − V 1/4 t (x t+1 − x * ) 2 + α t (1 − β 1 ) V −1/4 t m t 2 + β 1t 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 .(16)
The first inequality is due to convexity of function f t . The second inequality follows from the bound in Equation (15) and the fact thatv t−1,i ≤v t,i . For further bounding this inequality, we need the following intermediate result.
Lemma 2. For the parameter settings and conditions assumed in Theorem 4, we have T t=1
α t V −1/4 t m t 2 ≤ α √ 1 + log T (1 − β 1 )(1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2
Proof. We start with the following:
T t=1 α t V −1/4 t m t 2 = T −1 t=1 α t V −1/4 t m t 2 + α T d i=1 m 2 T,i v T,i ≤ T −1 t=1 α t V −1/4 t m t 2 + α T d i=1 m 2 T,i √ v T,i ≤ T −1 t=1 α t V −1/4 t m t 2 + α d i=1 ( T j=1 (1 − β 1j )Π T −j k=1 β 1(T −k+1) g j,i ) 2 T ((1 − β 2 ) T j=1 β T −j 2 g 2 j,i )
The first inequality follows from the definition ofv T,i , which is maximum of all v T,i until the current time step. The second inequality follows from the update rule of Algorithm 2. We further bound the above inequality in the following manner:
T t=1 α t V −1/4 t m t 2 ≤ T −1 t=1 α t V −1/4 t m t 2 + α d i=1 ( T j=1 Π T −j k=1 β 1(T −k+1) )( T j=1 Π T −j k=1 β 1(T −k+1) g 2 j,i ) T ((1 − β 2 ) T j=1 β T −j 2 g 2 j,i ) ≤ T −1 t=1 α t V −1/4 t m t 2 + α d i=1 ( T j=1 β T −j 1 )( T j=1 β T −j 1 g 2 j,i ) T ((1 − β 2 ) T j=1 β T −j 2 g 2 j,i ) ≤ T −1 t=1 α t V −1/4 t m t 2 + α 1 − β 1 d i=1 T j=1 β T −j 1 g 2 j,i T ((1 − β 2 ) T j=1 β T −j 2 g 2 j,i ) ≤ T −1 t=1 α t V −1/4 t m t 2 + α (1 − β 1 ) T (1 − β 2 ) d i=1 T j=1 β T −j 1 g 2 j,i β T −j 2 g 2 j,i ≤ T −1 t=1 α t V −1/4 t m t 2 + α (1 − β 1 ) T (1 − β 2 ) d i=1 T j=1 γ T −j |g j,i |(17)
The first inequality follows from Cauchy-Schwarz inequality. The second inequality is due to the fact that β 1k ≤ β 1 for all k ∈ [T ]. The third inequality follows from the inequality T j=1 β T −j 1 ≤ 1/(1 − β 1 ). By using similar upper bounds for all time steps, the quantity in Equation (17) can further be bounded as follows:
T t=1 α t V −1/4 t m t 2 ≤ T t=1 α (1 − β 1 ) t(1 − β 2 ) d i=1 t j=1 γ t−j |g j,i | = α (1 − β 1 ) (1 − β 2 ) d i=1 T t=1 1 √ t t j=1 γ t−j |g j,i | = α (1 − β 1 ) (1 − β 2 ) d i=1 T t=1 |g t,i | T j=t γ j−t √ j ≤ α (1 − β 1 ) (1 − β 2 ) d i=1 T t=1 |g t,i | T j=t γ j−t √ t ≤ α (1 − β 1 ) (1 − β 2 ) d i=1 T t=1 |g t,i | 1 (1 − γ) √ t ≤ α (1 − β 1 )(1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2 T t=1 1 t ≤ α √ 1 + log T (1 − β 1 )(1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2
The third inequality follows from the fact that T j=t γ j−t ≤ 1/(1 − γ). The fourth inequality is due to simple application of Cauchy-Schwarz inequality. The final inequality is due to the following bound on harmonic sum: T t=1 1/t ≤ (1 + log T ). This completes the proof of the lemma.
We now return to the proof of Theorem 4. Using the above lemma in Equation (16) 3 , we have:
T t=1 f t (x t ) − f t (x * ) ≤ T t=1 1 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 − V 1/4 t (x t+1 − x * ) 2 + β 1t 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 + α √ 1 + log T (1 − β 1 ) 2 (1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2 = 1 2α 1 (1 − β 1 ) V 1/4 1 (x 1 − x * ) 2 + 1 2 T t=2 V 1/4 t (x t − x * ) 2 α t (1 − β 1t ) − V 1/4 t−1 (x t − x * ) 2 α t−1 (1 − β 1(t−1) ) + T t=1 β 1t 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 + α √ 1 + log T (1 − β 1 ) 2 (1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2 = 1 2α 1 (1 − β 1 ) V 1/4 1 (x 1 − x * ) 2 + 1 2 T t=2 V 1/4 t (x t − x * ) 2 α t (1 − β 1(t−1) ) − V 1/4 t (x t − x * ) 2 α t (1 − β 1(t−1) ) + V 1/4 t (x t − x * ) 2 α t (1 − β 1t ) − V 1/4 t−1 (x t − x * ) 2 α t−1 (1 − β 1(t−1) ) + T t=1 β 1t 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 + α √ 1 + log T (1 − β 1 ) 2 (1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2 ≤ 1 2α 1 (1 − β 1 ) V 1/4 1 (x 1 − x * ) 2 + 1 2(1 − β 1 ) T t=2 V 1/4 t (x t − x * ) 2 α t − V 1/4 t−1 (x t − x * ) 2 α t−1 + T t=1 β 1t α t (1 − β 1 ) 2 V 1/4 t (x t − x * ) 2 + α √ 1 + log T (1 − β 1 ) 2 (1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2 = 1 2α 1 (1 − β 1 ) d i=1v 1/2 1,i (x 1,i − x * i ) 2 + 1 2(1 − β 1 ) T t=2 d i=1 (x t,i − x * i ) 2 v 1/2 t,i α t −v 1/2 t−1,i α t−1 + 1 (1 − β 1 ) 2 T t=1 d i=1 β 1t (x t,i − x * i ) 2v 1/2 t,i α t + α √ 1 + log T (1 − β 1 ) 2 (1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2 .(18)
3 For the sake of clarity, we provide more details of the proof here compared to the original version. Also, the original proof had a missing constant factor of 2 1−β 1 in the regret bound, which has been addressed here.
Algorithm 3 ADAMNC
Input: x1 ∈ F, step size {αt > 0} T t=1 , {(β1t, β2t)} T t=1
Set m0 = 0 and v0 = 0 for t = 1 to T do gt = ∇ft(xt)
mt = β1tmt−1 + (1 − β1t)gt vt = β2tvt−1 + (1 − β2t)g 2 t and Vt = diag(vt) xt+1 = Π F , √ V t (xt − αtmt/ √ vt) end for
The second inequality uses the fact that β 1t ≤ β 1 , and the observations thatv
1/2 t,i αt ≥v 1/2 t−1,i αt−1 and V 1/4 t (x t − x * ) 2 α t (1 − β 1t ) − V 1/4 t (x t − x * ) 2 α t (1 − β 1(t−1) ) ≤ β 1t α t (1 − β 1 ) 2 V 1/4 t (x t − x * ) 2 .
In order to further simplify the bound in Equation (18), we need to use telescopic sum. Using the above fact and L ∞ bound on the feasible region and making use of the above property in Equation (18), we have:
T t=1 f t (x t ) − f t (x * ) ≤ 1 2α 1 (1 − β 1 ) d i=1v 1/2 1,i D 2 ∞ + 1 2(1 − β 1 ) T t=2 d i=1 D 2 ∞ v 1/2 t,i α t −v 1/2 t−1,i α t−1 + 1 (1 − β 1 ) 2 T t=1 d i=1 D 2 ∞ β 1tv 1/2 t,i α t + α √ 1 + log T (1 − β 1 ) 2 (1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2 = D 2 ∞ 2α T (1 − β 1 ) d i=1v 1/2 T,i + D 2 ∞ (1 − β 1 ) 2 T t=1 d i=1 β 1tv 1/2 t,i α t + α √ 1 + log T (1 − β 1 ) 2 (1 − γ) (1 − β 2 ) d i=1 g 1:T,i 2 .
The equality follows from simple telescopic sum, which yields the desired result. One important point to note here is that the regret of AMSGRAD can be bounded by O(G ∞ √ T ). This can be easily seen from the proof of the aforementioned lemma where in the analysis the term
E PROOF OF THEOREM 5
Proof. Using similar argument to proof of Theorem 4 until Equation (15), we have the following
g t , x t − x * ≤ 1 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 − V 1/4 t (x t+1 − x * ) 2 + α t 2(1 − β 1t ) V −1/4 t m t 2 + β 1t 2(1 − β 1t ) α t V −1/4 t m t−1 2 + β 1t 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 .(19)
The second inequality follows from simple application of Cauchy-Schwarz and Young's inequality. We now use the standard approach of bounding the regret at each step using convexity of the function 1/(1 − β 1 ). Using similar argument for all time steps, the quantity in Equation (21) can be bounded as follows:
T t=1 α t V −1/4 t m t 2 ≤ ζ 1 − β 1 d i=1 T j=1 T −j l=0 β l 1 g 2 j,i j k=1 g 2 k,i ≤ ζ (1 − β 1 ) 2 d i=1 T j=1 g 2 j,i j k=1 g 2 k,i ≤ 2ζ (1 − β 1 ) 2 d i=1 g 1:T,i 2 .
The final inequality is due to Lemma 5. This completes the proof of the lemma.
Using the above lemma in Equation (20) , we have:
T t=1 f t (x t ) − f t (x * ) ≤ T t=1 1 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 − V 1/4 t (x t+1 − x * ) 2 + β 1t 2α t (1 − β 1t ) V 1/4 t (x t − x * ) 2 + 2ζ (1 − β 1 ) 3 d i=1 g 1:T,i 2 ≤ 1 2α 1 (1 − β 1 ) V 1/4 1 (x 1 − x * ) 2 + 1 2(1 − β 1 ) T t=2 V 1/4 t (x t − x * ) 2 α t − V 1/4 t−1 (x t−1 − x * ) 2 α t + T t=1 β 1t α t (1 − β 1 ) 2 V 1/4 t (x t − x * ) 2 + 2ζ (1 − β 1 ) 3 d i=1 g 1:T,i 2 = 1 2α 1 (1 − β 1 ) d i=1 v 1/2 1,i (x 1,i − x * i ) 2 + 1 2(1 − β 1 ) T t=2 d i=1 (x t,i − x * i ) 2 v 1/2 t,i α t − v 1/2 t−1,i α t−1 + 1 (1 − β 1 ) 2 T t=1 d i=1 β 1t (x t,i − x * i ) 2 v 1/2 t,i α t + 2ζ (1 − β 1 ) 3 d i=1 g 1:T,i 2 .(22)
The first inequality and second inequality use the fact that β 1t ≤ β 1 and argument similar to that in Theorem 4. Furthermore, from the theorem statement, we know that that {(α t .β 2t )} are selected such that the following holds: v 1/2 t,i
α t ≥ v 1/2 t−1,i α t−1 .
Using the L ∞ bound on the feasible region and making use of the above property in Equation (22), we have:
T t=1 f t (x t ) − f t (x * ) ≤ 1 2α 1 (1 − β 1 ) d i=1 v 1/2 1,i D 2 ∞ + 1 2(1 − β 1 ) T t=2 d i=1 D 2 ∞ v 1/2 t,i α t − v 1/2 t−1,i α t−1 + 1 (1 − β 1 ) 2 T t=1 d i=1 D 2 ∞ β 1t v 1/2 t,i α t + 2ζ (1 − β 1 ) 3 d i=1 g 1:T,i 2 = D 2 ∞ 2α T (1 − β 1 ) d i=1 v 1/2 T,i + D 2 ∞ (1 − β 1 ) 2 T t=1 d i=1 β 1t v 1/2 t,i α t + 2ζ (1 − β 1 ) 3 d i=1 g 1:T,i 2 .
The equality follows from simple telescopic sum, which yields the desired result.
F PROOF OF THEOREM 6
Theorem 6. For any > 0, ADAM with the modified update in Equation (3) and with parameter setting such that all the conditions in (Kingma & Ba, 2015) are satisfied can have non-zero average regret i.e., R T /T 0 as T → ∞ for convex {f i } ∞ i=1 with bounded gradients on a feasible set F having bounded D ∞ diameter. Therefore, for every 3 steps, ADAM suffers a regret of at least 2C − 4. More specifically, R T ≥ (2C − 4)T /3. Since C ≥ 2, this regret can be very large and furthermore, R T /T 0 as T → ∞, which completes the proof of the case where = 1. For the general case, we consider the following sequence of functions:
f t (x) = C √ x, for t mod 3 = 1 − √ x, otherwise,
The functions are essentially rescaled in a manner so that the resultant updates of ADAM correspond to the one in the optimization setting described above. Using essentially the same argument as above, it is easy to show that the regret R T ≥ (2C − 4) √ T /3 and thus, the average regret is non-zero asymptotically, which completes the proof. G AUXILIARY LEMMA Lemma 4 ((McMahan & Streeter, 2010)). For any Q ∈ S d + and convex feasible set F ⊂ R d , suppose u 1 = min x∈F Q 1/2 (x−z 1 ) and u 2 = min x∈F Q 1/2 (x−z 2 ) then we have Q 1/2 (u 1 − u 2 ) ≤ Q 1/2 (z 1 − z 2 ) .
Proof. We provide the proof here for completeness. Since u 1 = min x∈F Q 1/2 (x − z 1 ) and u 2 = min x∈F Q 1/2 (x − z 2 ) and from the property of projection operator we have the following: z 1 − u 1 , Q(z 2 − z 1 ) ≥ 0 and z 2 − u 2 , Q(z 1 − z 2 ) ≥ 0.
Combining the above inequalities, we have
u 2 − u 1 , Q(z 2 − z 1 ) ≥ z 2 − z 1 , Q(z 2 − z 1 ) .(24)
Also, observe the following:
u 2 − u 1 , Q(z 2 − z 1 ) ≤ 1 2 [ u 2 − u 1 , Q(u 2 − u 1 ) + z 2 − z 1 , Q(z 2 − z 1 ) ]
The above inequality can be obtained from the fact that (u 2 − u 1 ) − (z 2 − z 1 ), Q((u 2 − u 1 ) − (z 2 − z 1 )) ≥ 0 as Q ∈ S d + and rearranging the terms. Combining the above inequality with Equation (24), we have the required result.
Lemma 5 ( (Auer & Gentile, 2000)). For any non-negative real numbers y 1 , · · · , y t , the following holds:
t i=1 y i i j=1 y j ≤ 2 t i=1 y i .
Lemma 6. Suppose F = [a, b] for a, b ∈ R and y t+1 = Π F (y t + δ t ) for all the t ∈ [T ], y 1 ∈ F and furthermore, there exists i ∈ [T ] such that δ j ≤ 0 for all j ≤ i and δ j > 0 for all j > i. Then we have, y T +1 ≥ min{b, y 1 + T j=1 δ j }.
Proof. It is first easy to see that y i+1 ≥ y 1 + i j=1 δ j since δ j ≤ 0 for all j ≤ i. Furthermore, also observe that y T +1 ≥ min{b, y i+1 + T j=i+1 δ j } since δ j ≥ 0 for all j > i. Combining the above two inequalities gives us the desired result.
Figure 1 :
1Performance comparison of ADAM and AMSGRAD on synthetic example on a simple one dimensional convex problem inspired by our examples of non-convergence. The first two plots (left and center) are for the online setting and the the last one (right) is for the stochastic setting.
Figure 2 :
2Performance comparison of ADAM and AMSGRAD for logistic regression, feedforward neural network and CIFARNET. The top row shows performance of ADAM and AMSGRAD on logistic regression (left and center) and 1-hidden layer feedforward neural network (right) on MNIST. In the bottom row, the two plots compare the training and test loss of ADAM and AMSGRAD with respect to the iterations for CIFARNET.
also be bounded by O(G ∞ √ T ). Thus, the regret of AMSGRAD is upper bounded by minimum of O(G ∞ √ T ) and the bound in the Theorem 4 and therefore, the worst case dependence of regret on T in our case is O( √ T ).
Here, for simplicity, we remove the debiasing step used in the version of ADAM used in the original paper byKingma & Ba (2015). However, our arguments also apply to the debiased version as well.
Projections can be easily handled with a little bit of work but the analysis becomes more messy.
g 2 j,i j k=1 g 2 k,i(21)The first inequality follows from Cauchy-Schwarz inequality. The second inequality is due to the fact that β 1k ≤ β 1 for all k ∈ [T ]. The third inequality follows from the inequality T j=1 β T −j 1 ≤
f t in the following manner:The inequalities follow due to convexity of function f t and the fact thatFor further bounding this inequality, we need the following intermediate result.Lemma 3. For the parameter settings and conditions assumed in Theorem 5, we have T t=1Proof. We start with the following:The first inequality follows from the update rule of Algorithm 2. We further bound the above inequality in the following manner:Proof. Let us first consider the case where = 1 (in fact, the same setting works for any ≤ 1). The general case can be proved by simply rescaling the sequence of functions by a factor of √ . We show that the same optimization setting in Theorem 1 where f t are linear functions and F = [−1, 1], hence, we only discuss the details that differ from the proof of Theorem 1. In particular, we define the following function sequence:where C ≥ 2. Similar to the proof of Theorem 1, we assume that the initial point is x 1 = 1 and the parameters are:The proof essentially follows along the lines of that of Theorem 1 and is through principle of mathematical induction. Our aim is to prove that x 3t+2 and x 3t+3 are positive and x 3t+4 = 1. The base case holds trivially. Suppose for some t ∈ N ∪ {0}, we have x i > 0 for all i ∈ [3t + 1] and x 3t+1 = 1. For (3t + 1) th update, the only change from the update of in Equation(1)is the additional in the denominator i.e., we havêThe last inequality follows by simply dropping v 3t term and using the relation that α < √ 1 − β 2 . Therefore, we have 0 <x 3t+2 < 1 and hence x 3t+2 =x 3t+2 > 0. Furthermore, after the (3t + 2) th and (3t + 3) th updates of ADAM in Equation(1), we have the following:Since x 3t+2 > 0, it is easy to see that x 3t+3 > 0. To complete the proof, we need to show that x 3t+4 = 1. The only change here from the proof of Theorem 1 is that we need to show the following: αThe first inequality is due to the fact that v t ≤ C 2 for all t ∈ N. The last equality is due to following fact:for the choice of β 2 = 2/[(1 + C 2 )C 2 ] and = 1. Therefore, we see that x 3t+4 = 1. Therefore, by the principle of mathematical induction it holds for all t ∈ N ∪ {0}. Thus, we have
On the generalization ability of on-line learning algorithms. Peter Auer, Claudio Gentile, Proceedings of the 13th Annual Conference on Learning Theory. the 13th Annual Conference on Learning TheoryNicolò Cesa-Bianchi, Alex Conconi50Adaptive and self-confident on-line learning algorithmsPeter Auer and Claudio Gentile. Adaptive and self-confident on-line learning algorithms. In Pro- ceedings of the 13th Annual Conference on Learning Theory, pp. 107-117, 2000. Nicolò Cesa-Bianchi, Alex Conconi, and Claudio Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50:2050-2057, 2004.
Incorporating Nesterov Momentum into Adam. Timothy Dozat, Proceedings of 4th International Conference on Learning Representations, Workshop Track. 4th International Conference on Learning Representations, Workshop TrackTimothy Dozat. Incorporating Nesterov Momentum into Adam. In Proceedings of 4th International Conference on Learning Representations, Workshop Track, 2016.
Adaptive subgradient methods for online learning and stochastic optimization. John C Duchi, Elad Hazan, Yoram Singer, Journal of Machine Learning Research. 12John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, Proceedings of 3rd International Conference on Learning Representations. 3rd International Conference on Learning RepresentationsDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of 3rd International Conference on Learning Representations, 2015.
Imagenet classification with deep convolutional neural networks. Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, Advances in Neural Information Processing Systems. 25Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo- lutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1097- 1105, 2012.
Adaptive bound optimization for online convex optimization. Brendan Mcmahan, Matthew J Streeter, Proceedings of the 23rd Annual Conference On Learning Theory. the 23rd Annual Conference On Learning TheoryBrendan McMahan and Matthew J. Streeter. Adaptive bound optimization for online convex optimization. In Proceedings of the 23rd Annual Conference On Learning Theory, pp. 244-256, 2010.
Dropout: A simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, Journal of Machine Learning Research. 15Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.
RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning. T Tieleman, G Hinton, T. Tieleman and G. Hinton. RmsProp: Divide the gradient by a running average of its recent mag- nitude. COURSERA: Neural Networks for Machine Learning, 2012.
Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. Matthew D Zeiler, Proceedings of the 20th International Conference on Machine Learning. the 20th International Conference on Machine Learning5701ADADELTA: An Adaptive Learning Rate Method. CoRR, abs/1212Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. CoRR, abs/1212.5701, 2012. Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, pp. 928-936, 2003. |
3,300,406 | EIGENOPTION DISCOVERY THROUGH THE DEEP SUCCESSOR REPRESENTATION | Options in reinforcement learning allow agents to hierarchically decompose a task into subtasks, having the potential to speed up learning and planning. However, autonomously learning effective sets of options is still a major challenge in the field. In this paper we focus on the recently introduced idea of using representation learning methods to guide the option discovery process. Specifically, we look at eigenoptions, options obtained from representations that encode diffusive information flow in the environment. We extend the existing algorithms for eigenoption discovery to settings with stochastic transitions and in which handcrafted features are not available. We propose an algorithm that discovers eigenoptions while learning non-linear state representations from raw pixels. It exploits recent successes in the deep reinforcement learning literature and the equivalence between proto-value functions and the successor representation. We use traditional tabular domains to provide intuition about our approach and Atari 2600 games to demonstrate its potential. | [
14717992,
7774489
] | EIGENOPTION DISCOVERY THROUGH THE DEEP SUCCESSOR REPRESENTATION
Marlos C Machado
University of Alberta
EdmontonABCanada
Clemens Rosenbaum
University of Massachusetts
AmherstMAUSA
Xiaoxiao Guo
IBM Research
Yorktown Heights
NYUSA
Miao Liu
IBM Research
Yorktown Heights
NYUSA
Gerald Tesauro
IBM Research
Yorktown Heights
NYUSA
Murray Campbell
IBM Research
Yorktown Heights
NYUSA
EIGENOPTION DISCOVERY THROUGH THE DEEP SUCCESSOR REPRESENTATION
Under review as a conference paper at ICLR 2018
Options in reinforcement learning allow agents to hierarchically decompose a task into subtasks, having the potential to speed up learning and planning. However, autonomously learning effective sets of options is still a major challenge in the field. In this paper we focus on the recently introduced idea of using representation learning methods to guide the option discovery process. Specifically, we look at eigenoptions, options obtained from representations that encode diffusive information flow in the environment. We extend the existing algorithms for eigenoption discovery to settings with stochastic transitions and in which handcrafted features are not available. We propose an algorithm that discovers eigenoptions while learning non-linear state representations from raw pixels. It exploits recent successes in the deep reinforcement learning literature and the equivalence between proto-value functions and the successor representation. We use traditional tabular domains to provide intuition about our approach and Atari 2600 games to demonstrate its potential.
INTRODUCTION
Sequential decision making usually involves planning, acting, and learning about temporally extended courses of actions over different time scales. In the reinforcement learning framework, options are a well-known formalization of the notion of actions extended in time; and they have been shown to speed up learning and planning when appropriately defined (e.g., Brunskill & Li, 2014;Guo et al., 2017;Solway et al., 2014). In spite of that, autonomously identifying good options is still an open problem. This problem is known as the problem of option discovery.
Option discovery has received ample attention over many years, with varied solutions being proposed (e.g., Bacon et al., 2017;Ş imsek & Barto, 2004;Daniel et al., 2016;Florensa et al., 2017;Konidaris & Barto, 2009;Mankowitz et al., 2016;McGovern & Barto, 2001). Recently, Machado et al. (2017) and Vezhnevets et al. (2017) proposed the idea of learning options that traverse directions of a latent representation of the environment. In this paper we further explore this idea.
More specifically, we focus on the concept of eigenoptions (Machado et al., 2017), options learned using a model of diffusive information flow in the environment. They have been shown to improve agents' performance by reducing the expected number of time steps a uniform random policy needs in order to traverse the state space. Eigenoptions are defined in terms of proto-value functions (PVFs; Mahadevan, 2005), basis functions learned from the environment's underlying state-transition graph. PVFs and eigenoptions have been defined and thoroughly evaluated in the tabular case. Currently, eigenoptions can be used in environments where it is infeasible to enumerate states only when a linear representation of these states is known beforehand.
In this paper we extend the notion of eigenoptions to stochastic environments with non-enumerated states, which are commonly approximated by feature representations. Despite methods that learn representations generally being more flexible, more scalable, and often leading to better performance, current algorithms for eigenoption discovery cannot be combined with representation learn-Under review as a conference paper at ICLR 2018 ing. We introduce an algorithm that is capable of discovering eigenoptions while learning representations. The learned representations implicitly approximate the model of diffusive information flow (hereafter abbreviated as the DIF model) in the environment. We do so by exploiting the equivalence between PVFs and the successor representation (SR; Dayan, 1993). Notably, by using the SR we also start to be able to deal with stochastic transitions naturally, a limitation of previous algorithms.
We evaluate our algorithm in a tabular domain as well as on Atari 2600 games. We use the tabular domain to provide intuition about our algorithm and to compare it to the algorithms in the literature. Our evaluation in Atari 2600 games provides promising evidence of the applicability of our algorithm in a setting in which a representation of the agent's observation is learned from raw pixels.
BACKGROUND
In this section we discuss the reinforcement learning setting, the options framework, and the set of options known as eigenoptions. We also discuss the successor representation, which is the main concept used in the proposed algorithm.
REINFORCEMENT LEARNING AND OPTIONS
We consider the reinforcement learning (RL) problem in which a learning agent interacts with an unknown environment in order to maximize a reward signal. RL is often formalized as a Markov decision process (MDP), described as a 5-tuple: S, A, p, r, γ . At time t the agent is in state s t ∈ S where it takes action a t ∈ A that leads to the next state s t+1 ∈ S according to the transition probability kernel p(s |s, a). The agent also observes a reward R t+1 generated by the function r : S × A → R. The agent's goal is to learn a policy π :
S × A → [0, 1] that maximizes the expected discounted return G t . = E π,p ∞ k=0 γ k R t+k+1 |s t , where γ ∈ [0, 1]
is the discount factor. In this paper we are interested in the class of algorithms that determine the agent's policy by being greedy with respect to estimates of value functions; either w.r.t. the state value v π (s), or w.r.t. the state-action value function q π (s, a). Formally, v π (s) = E π,p [G t |s] = a π(a|s)q π (s, a). Notice that in large problems these estimates have to be approximated because it is infeasible to learn a value for each state-action pair. This is generally done by parameterizing q π (s, a) with a set of weights θ such that q(s, a, θ) ≈ q π (s, a). Currently, neural networks are the most successful parametrization approach in the field (e.g., Mnih et al., 2015;Tesauro, 1995). One of the better known instantiations of this idea is the algorithm called Deep Q-network (DQN; Mnih et al., 2015), which uses a neural network to estimate state-action value functions from raw pixels.
Options (Sutton et al., 1999) are our main topic of study. They are temporally extended actions that allow us to represent courses of actions. An option ω ∈ Ω is a 3-tuple ω = I ω , π ω , T ω where I ω ⊆ S denotes the option's initiation set, π ω : S × A → [0, 1] denotes the option's policy, and T ω ⊆ S denotes the option's termination set. We consider the call-and-return option execution model in which a meta-policy µ : S → Ω dictates the agent's behavior (notice A ⊆ Ω). After the agent decides to follow option ω from a state in I ω , actions are selected according to π ω until the agent reaches a state in T ω . We are interested in learning I ω , π ω , and T ω from scratch.
PROTO-VALUE FUNCTIONS AND EIGENOPTIONS
Eigenoptions are options that maximize eigenpurposes r e i , intrinsic reward functions obtained from the DIF model (Machado et al., 2017). Formally,
r e i (s, s ) = e φ(s ) − φ(s) ,(1)
where φ(·) denotes a feature representation of a given state (e.g., one-hot encoding in the tabular case) and e denotes an eigenvector encoding the DIF model at a specific timescale. Each intrinsic reward function, defined by the eigenvector being used, incentivizes the agent to traverse a different latent dimension of the state space.
In the tabular case, the algorithms capable of learning eigenoptions encode the DIF model through the combinatorial graph Laplacian L = D −1/2 (D − W )D −1/2 , where W is the graph's weight matrix and D is the diagonal matrix whose entries are the row sums of W . The weight matrix Figure 1: Successor representation, with respect to the uniform random policy, of state A (left). This example is similar to Dayan's (1993). The red color represents larger values while the blue color represents smaller values (states that are temporally further away).
is a square matrix where the ij-th entry represents the connection between states i and j. Notice that this approach does not naturally deal with stochastic or unidirectional transitions because W is generally defined as a symmetric adjacency matrix. Importantly, the eigenvectors of L are also known as proto-value functions (PVFs; Mahadevan, 2005;Mahadevan & Maggioni, 2007).
In settings in which states cannot be enumerated, the DIF model is represented through a matrix of transitions T , with row i encoding the transition vector φ(s t ) − φ(s t−1 ), where φ(·) denotes a fixed linear feature representation known beforehand (i can be different from t if transitions are observed more than once). Machado et al. (2017) justifies this sampling strategy with the fact that, in the tabular case, if every transition is sampled once, the right eigenvectors of matrix T converge to PVFs. Because transitions are added only once, regardless of their frequency, this algorithm is not well suited to stochastic environments. In this paper we introduce an algorithm that naturally deals with stochasticity and that does not require φ(·) to be known beforehand. Our algorithm learns the environment's DIF model while learning a representation of the environment from raw pixels.
THE SUCCESSOR REPRESENTATION
The successor representation (SR; Dayan, 1993) determines state generalization by how similar its successor states are. It is defined to be the expected future occupancy of state s given the agent's policy is π and its starting state is s. It can be seen as defining state similarity in terms of time. See Figure 1 for an example. The Euclidean distance between state A and state C is smaller than the Euclidean distance between state A and state B. However, if one considers the gray tiles to be walls, an agent in state A can reach state B much quicker than state C. The SR captures this distinction, ensuring that state A is more similar to state B than it is to state C.
Let 1 {·} denote the indicator function, the SR, Ψ π (s, s ), is formally defined, for γ < 1, as :
Ψ π (s, s ) = E π,p ∞ t=0 γ t 1 {St=s } S 0 = s .
This expectation can be estimated from samples with temporal-difference error (Sutton, 1988):
Ψ(s, j) ←Ψ(s, j) + η 1 {s=j} + γΨ(s , j) −Ψ(s, j) ,(2)
where η is the step-size. In the limit, the SR converges to Ψ π = (I −γT π ) −1 . This lets us decompose the value function into the product between the SR and the immediate reward (Dayan, 1993):
v π (s) = s ∈S Ψ π (s, s )r(s ).
The SR is directly related to several other ideas in the field. It can be seen as the dual approach to dynamic programming and to value-function based methods in reinforcement learning (Wang et al., 2007). Moreover, the eigenvectors generated from its eigendecomposition are equivalent to proto-value functions (Stachenfeld et al., 2014; and to slow feature analysis (Sprekeler, 2011).
Alg. 1 Eigenoption discovery through the SR
Ψ ← LEARNREPRESENTATION() E ← EXTRACTEIGENPURPOSES(Ψ) for each eigepurpose ei ∈ E do Ie i , πe i , Te i ← LEARNEIGENOPTION(ei) end for
Alg. 2 LEARNREPRESENTATION() with the SR for a given number of steps n do Observe s ∈ S, take action a ∈ A selected according to π(s), and observe a next state s ∈ S for each state j ∈ S dô Ψ(s, j) ←Ψ(s, j)+
η 1 {s=j} + γΨ(s , j) −Ψ(s, j)
end for end for returnΨ Such equivalences play a central role in the algorithm we describe in the next section. The SR may also have an important role in neuroscience. Stachenfeld et al. (2014; recently suggested that the successor representation is encoded by the hippocampus, and that a low-dimensional basis set representing it is encoded by the enthorhinal cortex. Interestingly, both hippocampus and entorhinal cortex are believed to be part of the brain system responsible for spatial memory and navigation.
EIGENOPTION DISCOVERY
In order to discover eigenoptions, we first need to obtain the eigenpurposes through the eigenvectors encoding the DIF model in the environment. This is currently done through PVFs, which the agent obtains by either explicitly building the environment's adjacency matrix or by enumerating all of the environment's transitions (c.f. Section 2.2). Such an approach is fairly effective in deterministic settings in which states can be enumerated and uniquely identified, i.e., the tabular case. However, there is no obvious extension of this approach to stochastic settings. It may be hard for the agent to explicitly model the environment dynamics in a weight matrix. The existent alternative, to enumerate the environment's transitions, may have a large cost. These issues become worse when states cannot be enumerated, i.e., the function approximation case. The existing algorithm that is applicable to the function approximation setting requires a fixed representation as input, not being able to learn a representation while estimating the DIF model.
In this paper we introduce an algorithm that addresses the aforementioned issues by estimating the DIF model through the SR. Also, we introduce a new neural network that is capable of approximating the SR from raw pixels by learning a latent representation of game screens. The learned SR is then used to discover eigenoptions, replacing the need for knowing the combinatorial Laplacian. In this section we discuss the proposed algorithm in the tabular case, the equivalence between PVFs and the SR, and the algorithm capable of estimating the SR, and eigenoptions, from raw pixels.
THE TABULAR CASE
The general structure of the algorithms capable of discovering eigenoptions is fairly straightforward, as shown in Alg. 1. The agent learns (or is given) a representation that captures the DIF model (e.g., the combinatorial Laplacian). It then uses the eigenvectors of this representation to define eigenpurposes (EXTRACTEIGENPURPOSES), the intrinsic reward functions described by Equation 1 that it will learn how to maximize. The option's policy is the one that maximizes this new reward function, while a state s is defined to be terminal with respect to the eigenpurpose e i if q ei * (s, a) ≤ 0 for all a ∈ A. The initiation set of an option e i is defined to be S \ T ei .
In the tabular case, our proposed algorithm is also fairly simple. Instead of assuming the matrixΨ is given in the form of the graph Laplacian, or trying to estimate the graph Laplacian from samples by stacking the row vectors corresponding to the different observed transitions, we estimate the DIF model through the successor representation (c.f. Alg. 2). This idea is supported by the fact that, for our purposes, the eigenvectors of the normalized Laplacian and the eigenvectors of the SR are equivalent. Below we formalize this concept and discuss its implications. We show that the eigenvectors of the normalized Laplacian are equal to the eigenvectors of the SR scaled by γ −1 D 1/2 . The aforementioned equivalence ensures that the eigenpurposes extraction and the eigenoption learning steps remain unchanged. That is, we still obtain the eigenpurposes from the eigendecomposition 1 of matrixΨ, and we still use each eigenvector e i ∈ E to define the new learning problem in which the agent wants to maximize the eigenpurpose, defined in Equation 1.
Importantly, the use of the SR addresses some other limitations of previous work: 1) it deals with stochasticity in the environment and in the agent's policy naturally; 2) its memory cost is independent on the number of samples drawn by the agent; and 3) it does not assume that for every action there is another action the agent can take to return to the state it was before, i.e., W is symmetric.
RELATIONSHIP BETWEEN PVFS AND THE SR
As aforementioned, PVFs (the eigenvectors of the normalized Laplacian) are equal to the eigenvectors of the successor representation scaled by γ −1 D 1/2 . To the best of our knowledge, this equivalence was first explicitly discussed by Stachenfeld et al. (2014). We provide below a more formal statement of such an equivalence, for the eingevalues and the eigenvectors of both approaches. We use the proof to further discuss the extent of this interchangeability. Theorem. Stachenfeld et al. (2014): Let 0 < γ < 1 s.t. Ψ = (I − γT ) −1 denotes the matrix encoding the SR, and let L = D −1/2 (D − W )D −1/2 denote the matrix corresponding to the normalized Laplacian, both obtained under a uniform random policy. The i-th eigenvalue (λ SR,i ) of the SR and the j-th eigenvalue (λ PVF,j ) of the normalized Laplacian are related as follows:
λ PVF,j = 1 − (1 − λ SR,i −1 )γ −1
The i-th eigenvector (e SR,i ) of the SR and the j-th eigenvector (e PVF,j ) of the normalized Laplacian are related as follows:
e PVF,j = (γ −1 D 1/2 )e SR,i
Proof. Let λ i , e i denote the i-th eigenvalue and eigenvector of the SR, respectively. Using the fact that the SR is known to converge, in the limit, to (I −γT ) −1 (through the Neumann series), we have:
(I − γT ) −1 e i = λ i e i (I − γT )e i = λ −1 i e i (I − T )γ −1 e i = [1 − (1 − λ −1 i )γ −1 ]γ −1 e i (I − T )γ −1 e i = λ j γ −1 e i (3) (I − D −1 W )γ −1 e i = λ j γ −1 e i D −1/2 (D − W )D −1/2 D 1/2 γ −1 e i = λ j γ −1 D 1/2 e i
Importantly, when using PVFs we are first interested in the eigenvectors with the corresponding smallest eigenvalues, as they are the "smoothest" ones. However, when using the SR we are interested in the eigenvectors with the largest eigenvalues. The change of variables in Eq. 3 highlights this fact i.e.,
λ j = [1 − (1 − λ −1 i )γ −1 ].
The indices j are sorted in the reverse order of the indices i. This distinction can be very important when trying to estimate the relevant eigenvectors. Finding the largest eigenvalues/eigenvectors is statistically more robust to noise in estimation and does not depend on the lowest spectrum of the matrix. Moreover, notice that the scaling by D 1/2 does not change the direction of the eigenvectors when the size of the action set is constant across all states. This is often the case in the RL problems being studied.
THE FUNCTION APPROXIMATION CASE: THE SR THROUGH DEEP NEURAL NETWORKS
The tabular case is interesting to study because it provides intuition about the problem and it is easier to analyze, both empirically and theoretically. However, the tabular case is only realizable in toy domains. In real-world situations the number of states is often very large and the ability to and ø denote elementwise multiplication and the fact that gradients are not propagated further back, respectively. generalize and to recognize similar states is essential. In this section, inspired by Kulkarni et al.'s (2016b) and Oh et al.'s (2015) work, we propose replacing Alg. 2 by a neural network that is able to estimate the successor representation from raw pixels. Such an approach circumvents the limitations of previous work that required a linear feature representation to be provided beforehand.
The SR with non-enumerated states: Originally, the SR was not defined in the function approximation setting, where states are described in terms of feature vectors. Successor features are the natural extension of the SR to this setting. We use Barreto et al.'s (2017) definition of successor features, where ψ π,i (s) denotes the successor feature i of state s ∈ S when following a policy π:
ψ π,i (s) = E π,p ∞ t=0 γ t φ i (S t ) S 0 = s .
In words, ψ π,i (s) encodes the discounted expected value of the i-th feature in the vector φ(·) when the agent starts in state s and follows the policy π. The update rule presented in Eq. 2 can be naturally extended to this definition. The temporal-difference error in the update rule can be used as a differentiable loss function, allowing us to estimate the successor features with a neural network.
Neural network architecture: The architecture we used is depicted in Fig 2. The reconstruction module is the same as the one introduced by Oh et al. (2015), but augmented by the SR estimator (the three layers depicted at the bottom). The SR estimator uses the learned latent representation as input i.e., the output of the representation learning module.
The proposed neural network receives raw pixels as input and learns to estimate the successor features of a lower-dimension representation learned by the neural network. The loss function L SR we use to learn the successor features is:
L SR (s, s ) = E φ − (s) + γψ − φ − (s ) − ψ φ(s) 2 ,
where φ(s) denotes the feature vector encoding the learned representation of state s and ψ(·) denotes the estimated successor features. In practice, φ(·) is the output of the representation learning module and ψ(·) is the output of the SR estimator, as shown in Fig. 2. The loss function above also highlights the fact that we have two neural networks. We use − to represent a target network (Mnih et al., 2015), which is updated at a slower rate for stability purposes.
We cannot directly estimate the successor features from raw pixels using only L SR because zero is one of its fixed points. This is the reason we added Oh et al.'s (2015) reconstruction module in the proposed network. It behaves as an auxiliary task that predicts the next state to be observed given the current state and action. By predicting the next state we increase the likelihood the agent will learn a representation that takes into consideration the pixels that are under its control, which has been shown to be a good bias in RL problems (Bellemare et al., 2012). Such an auxiliary task is defined through the network's reconstruction error L RE :
L RE (s, a, s ) = ζ φ(s), a − s 2 ,
where ζ(·) denotes the output of the reconstruction module, as shown in Fig. 2. The final loss being optimized is L(s, a, s ) = L RE (s, a, s ) + L SR (s, s ). Finally, to ensure that the SR will not interfere with the learned features, we zero the gradients coming from the SR estimator (represented with the symbol ø in Fig. 2). We trained our model with RMSProp and we followed the same protocol Oh et al. (2015) used to initialize the network.
Eigenoption learning: In Alg. 1, the function EXTRACTEIGENPURPOSES returns the eigenpurposes described by Eq. 1. Eigenpurposes are defined in terms of a feature representation φ(s t ) of the environment and of the eigenvectors e i of the DIF model (the SR in our case). We use the trained network to generate both. It is trivial to obtain φ(s t ) as we just use the output of the appropriate layer in the network as our feature representation. To obtain e i we first need to generate a meaningful matrix since our network outputs a vector of successor features instead of a matrix. We do so by having the agent follow the uniform random policy while we store the network outputs ψ(s t ), which correspond to the network estimate of the successor features of state s t . We then create a matrix T where row t corresponds to ψ(s t ) and we define e i to be its right eigenvectors.
Once we have created the eigenpurposes, the option discovery problem is reduced to a regular RL problem where the agent aims to maximize the cumulative sum of rewards. Any learning algorithm can be used for that. We provide details about our approach in the next section.
EXPERIMENTS
We evaluate the discovered eigenoptions quantitatively and qualitatively in this section. We use the traditional rooms domain to evaluate the impact, on the eigenvectors and on the discovered options, of approximating the DIF model through the SR. We then use Atari 2600 games to demonstrate how the proposed network does discover purposeful options from raw pixels.
TABULAR CASE
Our first experiment evaluates the impact of estimating the SR from samples instead of assuming the DIF model was given in the form of the normalized Laplacian. We use the rooms domain ( Fig. 3a; Sutton et al., 1999) to evaluate our method. Fig. 4b depicts the first eigenvector obtained from the SR while Fig. 4c depicts the corresponding eigenoption. We followed the uniform random policy for 1,000 episodes to learn the SR. Episodes were 100 time steps long. We used a stepsize of 0.1, and we set γ = 0.9. The estimated eigenvector is fairly close to the true one and, as expected, the obtained eigenvector is fairly similar to the PVFs that are obtained for this domain.
In the Appendix we provide the plots for the true SR and the PVF, as well as plots for different eigenvectors, comparing them to those obtained from (I − γT ) −1 .
Eigenoptions are known for improving the agent's ability to explore the environment. We use the metric diffusion time to validate whether such an ability is preserved with our method. The diffusion time can be seen as a proxy for how hard it is for an agent to reach the goal state when following a uniform random policy. It is defined as the expected number of decisions (action selection steps) an agent needs to take, when following the uniform random policy, to navigate between two randomly chosen states. We compared the agent's diffusion time when using eigenoptions obtained with PVFs to the diffusion time when using eigenoptions obtained with estimates of the SR. As we can see in gap between the diffusion time when using PVFs and when using the SR is likely due to different ways of dealing with corners. The SR implicitly models self-loops in the states adjacent to walls, since the agent takes an action and it observes it did not move.
We also evaluated how the estimates of the SR evolve as more episodes are used during learning, and its impact in the diffusion time (Fig 3d). In the Appendix we present more results, showing that the local structure of the graph is generally preserved. Naturally, more episodes allow us to learn more accurate estimates of the SR as a more global facet of the environment is seen, since the agent has more chances to further explore the state space. However, it seems that even the SR learned from few episodes allow us to discover useful eigenoptions, as depicted in Fig. 3d. The eigenoptions obtained from the SR learned using only 100 episodes are already capable of reducing the agent's diffusion time considerably. Finally, it is important to stress that the discovered options do more than randomly selecting subgoal states. "Random options" only reduce the agent's diffusion time when hundreds of them are added to the agent's action set (Machado et al., 2017).
Finally, we evaluated the use of the discovered eigenoptions to maximize reward. In our experiments the agent learned, off-policy, the greedy policy over primitive actions (target policy) while following the uniform random policy over actions and eigenoptions (behavior policy). We used Qlearning (Watkins & Dayan, 1992) in our experiments -parameters λ = 0, α = 0.1, and γ = 0.9. As before, episodes were 100 time steps long. Figure 4 summarizes the obtained results comparing the performance of our approach to regular Q-learning over primitive actions. The eigenoptions were extracted from estimates of the SR obtained after 100 episodes. The reported results are the average over 24 independent runs when learning the SR, with each one of these runs encoding 100 runs evaluating Q-Learning. The options were added following the sorting provided by the eigenvalues. For example, 4 options denotes an agent with the action set used in the behavior policy being composed of the four primitive actions and the four eigenoptions generated by the top 2 eigenvalues (both directions are being used). Notice that these results do not try to take the sample efficiency of our approach into consideration, they are only meant to showcase how eigenoptions, once discovered, can speed up learning. The sample complexity of learning options is generally justified in lifelong learning settings where they are re-used over multiple tasks (e.g., Brunskill & Li, 2014). This is beyond the scope of this paper.
The obtained results clearly show that eigenoptions are not only capable of reducing the diffusion time in the environment but of also improving the agent's control performance. They do so by increasing the likelihood that the agent will cover a larger part of the state space given the same amount of time. Moreover, as before, it seems that a very accurate estimate of the successor representation is not necessary for the eigenoptions to be useful. Similar results can be obtained for different locations of the start and goal states, and when the estimates of the SR are more accurate. These results can be seen in the Appendix.
ATARI 2600
This second set of experiments evaluates the eigenoptions discovered when the SR is obtained from raw pixels. We obtained the SR through the neural network described in Section 3. We used four We followed the protocol described in the previous section to create eigenpurposes. We trained the network in Fig. 2 to estimate the SR under the uniform random policy. Since the network does not impact the policy being followed, we built a dataset of 500, 000 samples for each game and we used this dataset to optimize the network weights. We passed through the shuffled dataset 10 times, using RMSProp with a step size of 10 −4 . Once we were done with the training, we let the agent follow a uniform random policy for 50, 000 steps while we stored the SR output by the network for each observed state as a row of matrix T . We define e, in the eigenpurposes we maximize (c.f., Eq. 1), to be the right eigenvectors of the matrix T , while φ(·) is extracted at each time step from the network in Fig. 2. Due to computational constraints, we approximated the final eigenoptions. We did so by using the ALE's internal emulator to do a one-step lookahead and act greedily with respect to each eigenpurpose (in practice, this is equivalent to learning with γ = 0). This is not ideal because the options we obtain are quite limited, since they do not deal with delayed rewards. However, even in such limiting setting we were able to obtain promising results, as we discuss below.
Following Machado et al. (2017), we evaluate the discovered eigenoptions qualitatively. We execute all options following the procedure described above (greedy one-step lookahead) while tracking the avatar's position on the screen. Figure 5 summarizes the behavior of some of the meaningful options discovered. The trajectories generated by different options are represented by different colors and the color's intensity at a given location represents how often the agent was at that location. Eigenoptions were introduced as options that generate purposeful behavior and that help agents explore the environment. We can clearly see that the discovered eigenoptions are indeed purposeful. They aim to reach a specific location and stay there. If this was not the case the agent's trajectory would be much more visible. Instead, what we actually observe is that the mass of visitation is concentrated on one location on the screen, dominating (color intensity) all the others. The location the agent is spending most of its time on can in fact be seen as the option's terminal state. Constantly being in a state suggests the agent has arrived to a myopic local maximum for that eigenpurpose.
In three out of four games (BANK HEIST, MONTEZUMA'S REVENGE, MS. PACMAN) our algorithm discovers options that clearly push the agent to corners and to other relevant parts of the state space, corroborating the intuition that eigenoptions also improve exploration. In MONTEZUMA'S REVENGE, the terminal state of the highlighted options even correspond to what are considered good subgoals for the game (Kulkarni et al., 2016a). It is likely that additional subgoals, such as the key, were not found due to our myopic greedy approach. This approach may also explain why our algorithm was ineffective in FREEWAY. Avoiding cars may be impossible without longer-term planning. A plot depicting the two meaningful options discovered in this game is in the Appendix. Importantly, the fact that myopic policies are able to navigate to specific locations and stay there also suggests that, as in the tabular case, the proposed approach gives rise to dense intrinsic rewards that are very informative. This is another important constrast between randomly assigned subgoals and our approach. Randomly assigned subgoals do not give rise to such dense rewards. Thus, one can argue that our approach does not only generate useful options but it also gives rise to dense eigenpurposes, making it easier to build the policies associated with them.
It is important to stress that our algorithm was able to discover eigenoptions, from raw pixels, similar to those obtained by algorithms that use the RAM state of the game as a feature representation. The RAM state of the game often uses specific bytes to encode important information of the game, such as the position of the player's avatar in the game. Our algorithm had to implicitly learn what were the meaningful parts of the screen. Also, different from previous algorithms, our approach is not constrained by the dimensionality of the state representation nor to binary features. Based on this discussion, we consider our results to be very promising, even though we only depict options that have effect on the initial state of the games. We believe that in a more general setting (e.g., using DQN to learn policies) our algorithm has the potential to discover even better options.
RELATED WORK
Our work was directly inspired by Kulkarni et al. (2016b), the first to propose approximating the SR using a neural network. We use their loss function in a novel architecture. Because we are not directly using the SR for control, we define the SR in terms of states, instead of state-action pairs. Different from Kulkarni et al. (2016b), our network does not learn a reward model and it does not use an autoencoder to learn a representation of the world. It tries to predict the next state the agent will observe. The prediction module we used was introduced by Oh et al. (2015). Because it predicts the next state, it implicitly learns representations that take into consideration the parts of the screen that are under the agent's control. The ability to recognize such features is known as contingency awareness, and it is known to have the potential to improve agents' performance (Bellemare et al., 2012). Kulkarni et al. (2016b) did suggest the deep SR could be used to find bottleneck states, which are commonly used as subgoals for options, but such an idea was not further explored. Importantly, Jong et al. (2008) and Machado et al. (2017) have shown that options that look for bottleneck states can be quite harmful in the learning process.
The idea of explicitly building hierarchies based on the learned latent representation of the state space is due to Machado et al. (2017) and Vezhnevets et al. (2017). Machado et al. (2017) proposed the concept of eigenoptions, but limited to the linear function approximation case. Vezhnevets et al. (2017) do not explicitly build options with initiation and termination sets. Instead, they learn a hierarchy through an end-to-end learning system that does not allow us to easily retrieve options from it. Finally, Kompella et al. (2017) has proposed the use of slow feature analysis (SFA; Wiskott & Sejnowski, 2002) to discover options. Sprekeler (2011) has shown that, given a specific choice of adjacency function, PVFs (and consequently the SR) are equivalent to SFA. However, their work is limited to linear function approximation. Our method also differs in how we define the initiation and termination sets. The options they discover look for bottleneck states, which is not our case.
CONCLUSION
In this paper we introduced a new algorithm for eigenoption discovery in RL. Our algorithm uses the successor representation (SR) to estimate the model of diffusive information flow in the environment, leveraging the equivalence between proto-value functions (PVFs) and the SR. This approach circumvents several limitations from previous work: (i) it builds increasingly accurate estimates using a constant-cost update-rule; (ii) it naturally deals with stochastic MDPs; (iii) it does not depend on the assumption that the transition matrix is symmetric; and (iv) it does not depend on handcrafted feature representations. The first three items were achieved by simply using the SR instead of the PVFs, while the latter was achieved by using a neural network to estimate the SR.
The proposed framework opens up multiple possibilities for investigation in the future. It would be interesting to evaluate the compositionality of eigenoptions, or how transferable they are between similar environments, such as the different modes of Atari 2600 games (Machado et al., 2018). Finally, now that the fundamental algorithms have been introduced, it would be interesting to investigate whether one can use eigenoptions to accumulate rewards instead of using them for exploration.
APPENDIX: SUPPLEMENTARY MATERIAL
This supplementary material contains details omitted from the main text due to space constraints. The list of contents is below:
• A more detailed proof of the theorem in the paper;
• Empirical results evaluating how the number of episodes used to learn the successor representation impacts the obtained eigenvectors and their corresponding eigenoptions;
• Evaluation of the reconstruction module (auxiliary task) that learns the latent representation that is used to estimate the successor representation.
A MORE DETAILED PROOF OF THE THEOREM IN THE MAIN PAPER

Theorem (Stachenfeld et al., 2014). Let 0 < γ < 1 s.t. $\Psi = (I - \gamma T)^{-1}$ denotes the matrix encoding the SR, and let $L = D^{-1/2}(D - W)D^{-1/2}$ denote the matrix corresponding to the normalized Laplacian, both obtained under a uniform random policy. The i-th eigenvalue ($\lambda_{SR,i}$) of the SR and the j-th eigenvalue ($\lambda_{PVF,j}$) of the normalized Laplacian are related as follows:

$$\lambda_{PVF,j} = 1 - \left(1 - \lambda_{SR,i}^{-1}\right)\gamma^{-1}.$$

The i-th eigenvector ($e_{SR,i}$) of the SR and the j-th eigenvector ($e_{PVF,j}$) of the normalized Laplacian are related as follows:

$$e_{PVF,j} = \left(\gamma^{-1} D^{1/2}\right) e_{SR,i}.$$

Proof. This proof is more detailed than the one presented in the main paper. Let $\lambda_i, e_i$ denote the i-th eigenvalue and eigenvector of the SR. Using the fact that the SR is known to converge, in the limit, to $(I - \gamma T)^{-1}$ (through the Neumann series), we have $\Psi e_i = \lambda_i e_i$, so $(I - \gamma T)e_i = \lambda_i^{-1} e_i$ and thus $T e_i = \gamma^{-1}(1 - \lambda_i^{-1}) e_i$. Under the uniform random policy $T = D^{-1}W$, so $L = I - D^{1/2} T D^{-1/2}$, and therefore $L\,(D^{1/2} e_i) = \big(1 - \gamma^{-1}(1 - \lambda_i^{-1})\big)\,(D^{1/2} e_i)$. This yields the stated eigenvalue relation, with corresponding eigenvector $D^{1/2} e_i$ (up to scaling, e.g., $\gamma^{-1} D^{1/2} e_i$).
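The eigenvalue relation can be sanity-checked numerically; below is a minimal Python sketch, where the small chain graph is our own illustrative stand-in for the rooms domain (any connected undirected graph works):

    import numpy as np

    n, gamma = 6, 0.9
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0   # undirected chain graph
    d = W.sum(axis=1)
    T = W / d[:, None]                          # uniform random policy
    Psi = np.linalg.inv(np.eye(n) - gamma * T)  # SR in the limit
    L = np.eye(n) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]

    lam_sr = np.sort(np.linalg.eigvals(Psi).real)        # ascending
    lam_pvf = np.sort(np.linalg.eigvals(L).real)[::-1]   # descending
    # lambda_PVF = 1 - (1 - 1/lambda_SR) / gamma, with reversed ordering
    assert np.allclose(lam_pvf, 1 - (1 - 1.0 / lam_sr) / gamma)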
THE IMPACT THE NUMBER OF EPISODES HAS IN LEARNING THE SR AND THE EIGENOPTIONS
In Section 4.1 we briefly discussed the impact of estimating the successor representation from samples instead of assuming the agent has access to the normalized Laplacian. It makes much more sense to use the successor representation as the DIF model in the environment if we can estimate it quickly. The diffusion time was the main evidence we used in Section 4.1 to support our claim that early estimates of the successor representation are useful for eigenoption discovery. In order to be concise, we did not actually plot the eigenvectors of the estimates of the successor representation at different moments, nor did we explicitly compare them to proto-value functions or to the eigenvectors of the matrix (I − γT)^{-1}. We do so in this section.
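The estimates discussed in this section follow the standard temporal-difference rule for the SR; a minimal tabular sketch in Python, where sample_episode is a hypothetical helper standing in for the rooms simulator:

    import numpy as np

    def estimate_sr(sample_episode, n_states, gamma=0.9, eta=0.1, n_episodes=500):
        """Tabular TD(0) estimate of the successor representation.
        sample_episode() is assumed to return the list of states visited
        in one 100-step episode."""
        Psi = np.zeros((n_states, n_states))
        I = np.eye(n_states)
        for _ in range(n_episodes):
            states = sample_episode()
            for s, s_next in zip(states[:-1], states[1:]):
                Psi[s] += eta * (I[s] + gamma * Psi[s_next] - Psi[s])
        return Psi

    # Eigenpurposes are obtained from the (right) eigenvectors of the
    # estimate, since Psi is not guaranteed to be symmetric:
    # eigenvalues, eigenvectors = np.linalg.eig(estimate_sr(...))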
Figures 7-10 depict the first four eigenvectors of the successor representation in the rooms domain, after being learned for different numbers of episodes (episodes were 100 time steps long, η = 0.1, γ = 0.9). We also depict the corresponding eigenvectors of the (I − γT)^{-1} matrix², and of the normalized Laplacian (Machado et al., 2017). Because the eigenvectors' orientation (sign) is often arbitrary in an eigendecomposition, we matched their orientations to ease visualization.
Overall, after 500 episodes we already have an almost perfect estimate of the first eigenvectors in the environment, while 100 episodes do not seem to be enough to accurately learn the DIF model in all rooms. However, learning the successor representation for 100 episodes seems to be enough to generate eigenoptions that reduce the agent's diffusion time, as we show in Figure 3d. We can better discuss this behavior by looking at Figures 11-14, which depict the options generated by the obtained eigenvectors.
With the exception of the options generated after learning the successor representation for 100 episodes, all the eigenoptions obtained from estimates of the successor representation already move the agent towards the "correct" room(s). Naturally, they do not always hit the corners, but the general structure of the policies can be clearly seen. We also observe that the eigenoptions obtained from proto-value functions are shifted one tile from the corners. As discussed in the main paper, this is a consequence of how Machado et al. (2017) dealt with corners. They did not model self-loops in the MDP, despite the fact that the agent can be in the same state for two consecutive steps. The successor representation captures this naturally. Finally, we use Figure 11a to speculate about why the options learned after 100 episodes are capable of reducing the agent's diffusion time. The first eigenoption learned by the agent moves it to the parts of the state space it has never been to; this may be the reason the combination of these options is so effective. It also suggests that incremental methods for option discovery and exploration are a promising path for future work.
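For completeness, here is a sketch of how an eigenvector e of the DIF model induces an option's intrinsic reward, the eigenpurpose of Machado et al. (2017); phi is the learned latent feature map (or a one-hot state encoding in the tabular case), and both names are placeholders:

    import numpy as np

    def eigenpurpose_reward(e, phi, s, s_next):
        """Intrinsic reward used to learn one eigenoption: the projection
        of the feature change onto the eigenvector e."""
        return float(e @ (phi(s_next) - phi(s)))

    # The option's policy maximizes this reward; following Machado et al.
    # (2017), the option terminates in states where the learned value of
    # the intrinsic reward is no longer positive, and both e and -e are
    # used, yielding one option per direction.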
USING EIGENOPTIONS TO ACCUMULATE REWARD IN THE ENVIRONMENT
In Section 4.1 we also evaluated the agent's ability to accumulate reward after the eigenoptions have been learned. We further analyze this topic here. As in Section 4.1, the agent learned, off-policy, the greedy policy over primitive actions (target policy) while following the uniform random policy over actions and eigenoptions (behavior policy). We used Q-learning (Watkins & Dayan, 1992) in our experiments, with parameters λ = 0, α = 0.1, and γ = 0.9. Episodes were 100 time steps long. Figures 16-19 summarize the obtained results, comparing the performance of our approach to regular Q-learning over primitive actions in four different environments (c.f. Figure 15). We evaluate the agent's performance when using eigenoptions extracted from estimates of the SR obtained after 100, 500, and 1000 episodes, as well as eigenoptions obtained from the true SR, i.e., (I − γT)^{-1}. The reported results are the average over 24 independent runs when learning the SR, with each one of these runs encoding 100 runs evaluating Q-learning. The options were added following the sorting provided by the eigenvalues. For example, 4 options denotes an agent whose action set in the behavior policy is composed of the four primitive actions and the four eigenoptions generated by the top 2 eigenvalues (both directions are being used).
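A minimal sketch of this evaluation protocol, assuming hypothetical helpers env.reset()/env.step() for the environment and option.rollout() for the pre-learned eigenoption policies:

    import numpy as np

    def q_learning_with_options(env, options, n_actions, n_states,
                                alpha=0.1, gamma=0.9, n_episodes=250):
        """Off-policy Q-learning (Watkins & Dayan, 1992) over primitive
        actions, behaving uniformly at random over primitive actions and
        eigenoptions."""
        Q = np.zeros((n_states, n_actions))
        for _ in range(n_episodes):
            s, t = env.reset(), 0
            while t < 100:  # episodes are 100 time steps long
                c = np.random.randint(n_actions + len(options))
                # A primitive action, or the actions an option executes:
                acts = [c] if c < n_actions else options[c - n_actions].rollout(s)
                for a in acts:
                    s_next, r = env.step(a)
                    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
                    s, t = s_next, t + 1
                    if t >= 100:
                        break
        return Q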
We can see that eigenoptions are not only capable of reducing the diffusion time in the environment but also of improving the agent's control performance. They do so by increasing the likelihood that the agent will cover a larger part of the state space given the same amount of time. Interestingly, few eigenoptions seem to be enough for the agent. Moreover, although rough estimates of the SR seem to be enough to improve the agent's performance (e.g., estimates obtained after only 100 episodes), more accurate predictions of the SR are able to further improve it, mainly when dozens of eigenoptions are being used. The first eigenoptions to be accurately estimated are those with larger eigenvalues, which are the ones we add first.
EVALUATION OF THE RECONSTRUCTION TASK
In Section 4.2 we analyzed the eigenoptions we are able to discover in four games of the Arcade Learning Environment. We did not discuss the performance of the proposed network in the auxiliary tasks we defined. We do so here. Figures 20-23 depict a comparison between the target screen that should be predicted and the network's actual prediction for ten time steps in each game. We can see that the network accurately predicts the general structure of the environment and is able to keep track of most moving sprites on the screen. The prediction is quite noisy, different from Oh et al.'s (2015) result. Still, it is interesting to see how even an underperforming network is able to learn useful representations for our algorithm. It is likely that better representations would result in better options.

EIGENOPTIONS DISCOVERED IN FREEWAY

Figure 6 depicts the two meaningful eigenoptions we were able to discover in the game FREEWAY. As in Figure 5, each option is represented by the normalized count of the avatar's position on the screen in a trajectory. The trajectories generated by different options are represented by different colors, and the color's intensity at a given location represents how often the agent was at that location.
Figure 2: Neural network architecture used to learn the SR.
Figure 3: Results in the rooms domain. The rightmost figure depicts the diffusion time as eigenoptions are added to the agent's action set (sorted by eigenvalues corresponding to the eigenpurposes). As shown in Fig. 3d, the eigenoptions obtained with the SR do help the agent to explore the environment.
Figure 4: Different environments (varying start and goal locations) used in our evaluation (a), as well as the learning curves obtained in each one of these environments (b, c) for different numbers of options obtained from the SR when estimated after 100 episodes. See text for more details.
Figure 5: Plots of density of state visitation of eigenoptions discovered in three Atari 2600 games. States visited more frequently show darker images of the avatar. Note that an eigenoption's overwhelming mass of visitations corresponds to its terminal state, and that disparate options have different terminal states. We use Atari 2600 games from the Arcade Learning Environment (Bellemare et al., 2013) as testbed: BANK HEIST, FREEWAY, MONTEZUMA'S REVENGE, and MS. PAC-MAN.
Figure 6: Eigenoptions discovered in the game FREEWAY.
Figure 7: Evolution of the first eigenvector being estimated by the SR and baselines.
Figure 8: Evolution of the second eigenvector being estimated by the SR and baselines.
Figure 9: Evolution of the third eigenvector being estimated by the SR and baselines.
Figure 10: Evolution of the fourth eigenvector being estimated by the SR and baselines.
Figure 11: Evolution of the first eigenoption being estimated by the SR and baselines.
Figure 12: Evolution of the second eigenoption being estimated by the SR and baselines.
Figure 13: Evolution of the third eigenoption being estimated by the SR and baselines.
Figure 14: Evolution of the fourth eigenoption being estimated by the SR and baselines.
Panels in Figures 7-14: (a) 100 episodes, (b) 500 episodes, (c) 1,000 episodes, (d) PVF, (e) (I − γT)^{-1}.
Figure 22: Final 1-step predictions in the game MONTEZUMA'S REVENGE. We use the task of predicting the next game screen as an auxiliary task when estimating the successor representation.
¹ Notice the matrix Ψ̂ is not guaranteed to be symmetric. In that case one can define the eigenpurposes to be Ψ̂'s right eigenvectors, as we do in Section 3.3.
² Recall (I − γT)^{-1} is the matrix to which the successor representation converges in the limit.
Figure 20: Final 1-step predictions in the game BANK HEIST (t = 1, ..., 10). We use the task of predicting the next game screen as an auxiliary task when estimating the successor representation.
Figure 21: Final 1-step predictions in the game FREEWAY (t = 25, ..., 34). We use the task of predicting the next game screen as an auxiliary task when estimating the successor representation.
Figure 23: Final 1-step predictions in the game MS. PAC-MAN (t = 60, ..., 69). We use the task of predicting the next game screen as an auxiliary task when estimating the successor representation.
ACKNOWLEDGMENTS

The authors would like to thank Craig Sherstan and Martha White for feedback on an earlier draft, Kamyar Azizzadenesheli, Marc G. Bellemare and Michael Bowling for useful discussions, and the anonymous reviewers for their feedback and suggestions.

Figure 19: Plot depicting the agent's performance when following options obtained through estimates of the SR (100, 500, and 1,000 episodes), as well as through the true SR, in environment 4.
Pierre-Luc Bacon, Jean Harb, and Doina Precup. The Option-Critic Architecture. In Proc. of the AAAI Conference on Artificial Intelligence (AAAI), pp. 1726-1734, 2017.
André Barreto, Will Dabney, Rémi Munos, Jonathan Hunt, Tom Schaul, David Silver, and Hado van Hasselt. Successor Features for Transfer in Reinforcement Learning. In Advances in Neural Information Processing Systems (NIPS), pp. 4058-4068, 2017.
Marc G. Bellemare, Joel Veness, and Michael Bowling. Investigating Contingency Awareness Using Atari 2600 Games. In Proc. of the AAAI Conference on Artificial Intelligence (AAAI), 2012.
Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment: An Evaluation Platform for General Agents. Journal of Artificial Intelligence Research, 47:253-279, 2013.
Emma Brunskill and Lihong Li. PAC-inspired Option Discovery in Lifelong Reinforcement Learning. In Proc. of the International Conference on Machine Learning (ICML), pp. 316-324, 2014.
Özgür Şimşek and Andrew G. Barto. Using Relative Novelty to Identify Useful Temporal Abstractions in Reinforcement Learning. In Proc. of the International Conference on Machine Learning (ICML), 2004.
Christian Daniel, Herke van Hoof, Jan Peters, and Gerhard Neumann. Probabilistic Inference for Determining Options in Reinforcement Learning. Machine Learning, 104(2-3):337-357, 2016.
Peter Dayan. Improving Generalization for Temporal Difference Learning: The Successor Representation. Neural Computation, 5(4):613-624, 1993.
Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic Neural Networks for Hierarchical Reinforcement Learning. In Proc. of the International Conference on Learning Representations (ICLR), 2017.
Zhaohan Daniel Guo, Philip S. Thomas, and Emma Brunskill. Using Options and Covariance Testing for Long Horizon Off-Policy Policy Evaluation. In Advances in Neural Information Processing Systems (NIPS), pp. 2489-2498, 2017.
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement Learning with Unsupervised Auxiliary Tasks. In Proc. of the International Conference on Learning Representations (ICLR), 2017.
Nicholas K. Jong, Todd Hester, and Peter Stone. The Utility of Temporal Abstraction in Reinforcement Learning. In Proc. of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 299-306, 2008.
Varun Raj Kompella, Marijn F. Stollenga, Matthew D. Luciw, and Jürgen Schmidhuber. Continual Curiosity-driven Skill Acquisition from High-Dimensional Video Inputs for Humanoid Robots. Artificial Intelligence, 247:313-335, 2017.
George Konidaris and Andrew G. Barto. Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining. In Advances in Neural Information Processing Systems (NIPS), pp. 1015-1023, 2009.
Tejas D. Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. In Advances in Neural Information Processing Systems (NIPS), pp. 3675-3683, 2016a.
Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep Successor Reinforcement Learning. CoRR, abs/1606.02396, 2016b.
Marlos C. Machado, Marc G. Bellemare, and Michael Bowling. A Laplacian Framework for Option Discovery in Reinforcement Learning. In Proc. of the International Conference on Machine Learning (ICML), pp. 2295-2304, 2017.
Marlos C. Machado, Marc G. Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents. Journal of Artificial Intelligence Research (JAIR), In press, 2018.
Sridhar Mahadevan. Proto-value Functions: Developmental Reinforcement Learning. In Proc. of the International Conference on Machine Learning (ICML), pp. 553-560, 2005.
Sridhar Mahadevan and Mauro Maggioni. Proto-value Functions: A Laplacian Framework for Learning Representation and Control in Markov Decision Processes. Journal of Machine Learning Research (JMLR), 8:2169-2231, 2007.
Daniel J. Mankowitz, Timothy Arthur Mann, and Shie Mannor. Adaptive Skills Adaptive Partitions (ASAP). In Advances in Neural Information Processing Systems (NIPS), pp. 1588-1596, 2016.
Amy McGovern and Andrew G. Barto. Automatic Discovery of Subgoals in Reinforcement Learning using Diverse Density. In Proc. of the International Conference on Machine Learning (ICML), pp. 361-368, 2001.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level Control through Deep Reinforcement Learning. Nature, 518(7540):529-533, 2015.
Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, and Satinder P. Singh. Action-Conditional Video Prediction using Deep Networks in Atari Games. In Advances in Neural Information Processing Systems (NIPS), pp. 2863-2871, 2015.
Alec Solway, Carlos Diuk, Natalia Córdova, Debbie Yee, Andrew G. Barto, Yael Niv, and Matthew M. Botvinick. Optimal Behavioral Hierarchy. PLOS Computational Biology, 10(8):1-10, 2014.
Henning Sprekeler. On the Relation of Slow Feature Analysis and Laplacian Eigenmaps. Neural Computation, 23(12):3287-3302, 2011.
Kimberly L. Stachenfeld, Matthew Botvinick, and Samuel J. Gershman. Design Principles of the Hippocampal Cognitive Map. In Advances in Neural Information Processing Systems (NIPS), pp. 2528-2536, 2014.
Kimberly L. Stachenfeld, Matthew M. Botvinick, and Samuel J. Gershman. The Hippocampus as a Predictive Map. Nature Neuroscience, 20:1643-1653, 2017.
Richard S. Sutton. Learning to Predict by the Methods of Temporal Differences. Machine Learning, 3:9-44, 1988.
Richard S. Sutton, Doina Precup, and Satinder P. Singh. Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning. Artificial Intelligence, 112(1-2):181-211, 1999.
Gerald Tesauro. Temporal Difference Learning and TD-Gammon. Communications of the ACM, 38(3):58-68, 1995.
Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. FeUdal Networks for Hierarchical Reinforcement Learning. In Proc. of the International Conference on Machine Learning (ICML), pp. 3540-3549, 2017.
T. Wang, M. Bowling, and D. Schuurmans. Dual Representations for Dynamic Programming and Reinforcement Learning. In Proc. of the IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL), pp. 44-51, 2007.
Christopher J. C. H. Watkins and Peter Dayan. Technical Note: Q-Learning. Machine Learning, 8(3-4), May 1992.
Laurenz Wiskott and Terrence J. Sejnowski. Slow Feature Analysis: Unsupervised Learning of Invariances. Neural Computation, 14(4):715-770, 2002. |
219,956,317 | A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning | Modern large-scale machine learning applications require stochastic optimization algorithms to be implemented on distributed compute systems. A key bottleneck of such systems is the communication overhead for exchanging information across the workers, such as stochastic gradients. Among the many techniques proposed to remedy this issue, one of the most successful is the framework of compressed communication with error feedback (EF). EF remains the only known technique that can deal with the error induced by contractive compressors which are not unbiased, such as Top-K. In this paper, we propose a new and theoretically and practically better alternative to EF for dealing with contractive compressors. In particular, we propose a construction which can transform any contractive compressor into an induced unbiased compressor. Following this transformation, existing methods able to work with unbiased compressors can be applied. We show that our approach leads to vast improvements over EF, including reduced memory requirements, better communication complexity guarantees and fewer assumptions. We further extend our results to federated learning with partial participation following an arbitrary distribution over the nodes, and demonstrate the benefits thereof. We perform several numerical experiments which validate our theoretical findings. | [
38796293,
65455367,
6628106,
14124313,
43964415
] | A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
Samuel Horváth [email protected]
Visual Computing Center, KAUST, Thuwal, Saudi Arabia
Peter Richtárik [email protected]
Visual Computing Center, KAUST, Thuwal, Saudi Arabia
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
Modern large-scale machine learning applications require stochastic optimization algorithms to be implemented on distributed compute systems. A key bottleneck of such systems is the communication overhead for exchanging information across the workers, such as stochastic gradients. Among the many techniques proposed to remedy this issue, one of the most successful is the framework of compressed communication with error feedback (EF). EF remains the only known technique that can deal with the error induced by contractive compressors which are not unbiased, such as Top-K. In this paper, we propose a new and theoretically and practically better alternative to EF for dealing with contractive compressors. In particular, we propose a construction which can transform any contractive compressor into an induced unbiased compressor. Following this transformation, existing methods able to work with unbiased compressors can be applied. We show that our approach leads to vast improvements over EF, including reduced memory requirements, better communication complexity guarantees and fewer assumptions. We further extend our results to federated learning with partial participation following an arbitrary distribution over the nodes, and demonstrate the benefits thereof. We perform several numerical experiments which validate our theoretical findings.
Introduction
We consider distributed optimization problems of the form
$$\min_{x \in \mathbb{R}^d} \; f(x) := \frac{1}{n} \sum_{i=1}^n f_i(x), \qquad (1)$$
where x ∈ R^d represents the weights of a statistical model we wish to train, n is the number of nodes, and f_i : R^d → R is a smooth differentiable loss function composed of data stored on worker i. In a classical distributed machine learning scenario, $f_i(x) := \mathbb{E}_{\zeta \sim D_i}[f_\zeta(x)]$ is the expected loss of model x with respect to the local data distribution D_i, and f_ζ : R^d → R is the loss on the single data point ζ. This definition allows for different distributions D_1, . . . , D_n on each node, which means that the functions f_1, . . . , f_n can have different minimizers. This framework covers
• Stochastic Optimization when either n = 1 or all D i are identical,
• Empirical Risk Minimization (ERM), when f_i(x) can be expressed as a finite average, i.e., $f_i(x) = \frac{1}{m_i}\sum_{j=1}^{m_i} f_{ij}(x)$ for some f_ij : R^d → R,
• Federated Learning (FL) [Kairouz et al., 2019], where each node represents a client.
Communication as the Bottleneck
In distributed training, model updates (or gradient vectors) have to be exchanged in each iteration. Due to the size of the communicated messages for commonly considered deep models [Alistarh et al., 2016], this represents a significant bottleneck of the whole optimization procedure. To reduce the amount of data that has to be transmitted, several strategies were proposed.
One of the most popular strategies is to incorporate local steps and communicate updates every few iterations only [Lin et al., 2018a, Stich and Karimireddy, 2020, Karimireddy et al., 2019a, Khaled et al., 2020]. Unfortunately, despite their practical success, local methods are poorly understood and their theoretical foundations are currently lacking. Almost all existing error guarantees are dominated by a simple baseline, minibatch SGD [Woodworth et al., 2020].
In this work, we focus on another popular approach: gradient compression. In this approach, instead of transmitting the full-dimensional (gradient) vector g ∈ R^d, one transmits a compressed vector C(g), where C : R^d → R^d is a (possibly random) operator chosen such that C(g) can be represented using fewer bits, for instance by using a limited-bit representation (quantization) or by enforcing sparsity. A particularly popular class of quantization operators is based on random dithering [Goodall, 1951, Roberts, 1962]; see [Alistarh et al., 2016, Wen et al., 2017, Zhang et al., 2017, Horváth et al., 2019a, Ramezani-Kebrya et al., 2019]. Much sparser vectors can be obtained by random sparsification techniques that randomly mask the input vectors and only preserve a constant number of coordinates [Wangni et al., 2018, Konečný and Richtárik, 2018, Stich et al., 2018, Vogels et al., 2019]. There is also a line of work [Horváth et al., 2019a, Basu et al., 2019] in which a combination of sparsification and quantization was proposed to obtain a more aggressive effect. We will not further distinguish between sparsification and quantization approaches, and refer to all of them as compression operators hereafter.
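As an illustration of the quantization family, here is a sketch of QSGD-style random dithering with s levels in Python/PyTorch (a simplified variant; the exact encoding and norm choice differ across the cited papers):

    import torch

    def random_dithering(x: torch.Tensor, s: int = 4) -> torch.Tensor:
        """Unbiased random dithering: quantize |x|/||x|| onto a uniform
        grid with s levels, rounding up or down at random so that
        E[C(x)] = x."""
        norm = x.norm(p=2)
        if norm == 0:
            return x.clone()
        scaled = x.abs() / norm * s
        lower = scaled.floor()
        quantized = lower + torch.bernoulli(scaled - lower)  # round up w.p. frac
        return norm * x.sign() * quantized / s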
Considering both practice and theory, compression operators can be split into two groups: biased and unbiased. For the unbiased compressors, C(g) is required to be an unbiased estimator of the update g. Once this requirement is lifted, extra tricks are necessary for Distributed Compressed Stochastic Gradient Descent (DCSGD) utilizing such a compressor to work, even if the full gradient is computed by each node. Indeed, a naive approach can lead to divergence [Beznosikov et al., 2020], and Error Feedback (EF) [Seide et al., 2014, Karimireddy et al., 2019b] is the only known mechanism able to remedy the situation and lead to a convergent method.
Contributions
Our contributions can be summarized as follows:
• We provide a theoretical analysis of Compressed SGD under weak and general assumptions.
If f is µ-quasi convex (not necessarily convex) and local functions f i are (L, σ 2 )-smooth (weaker version of L-smoothness), we obtain the rate
$$\mathcal{O}\!\left(\delta L r_0 \exp\left(-\frac{\mu T}{4\delta L}\right) + \frac{\sigma^2}{\mu T}\right),$$
where δ ≥ 1 is the parameter which bounds the second moment of the compression operator, and T is the number of iterations. This rate is strictly better than the best-known rate for Compressed SGD with EF. Moreover, the latter requires extra assumptions. In addition, our theory guarantees convergence in both the iterates and the functional value. For EF, the best known rates [Karimireddy et al., 2019b, Beznosikov et al., 2020] are expressed in terms of functional values only. Another practical implication of our findings is the reduction of the memory requirements by half; this is because in Compressed SGD, one does not need to store the error vector.
• We propose a construction that can transform any biased compressor into an unbiased one (Section 3). We argue that using such an induced compressor within Compressed SGD is superior, both in theory and practice, to using the original biased compressor in conjunction with EF.
• We further extend our results to the multi-node scenario and show that the resulting method, Distributed Compressed SGD (DCSGD), improves linearly with the number of nodes, which is not the case for EF. Moreover, we obtain the first convergence guarantee for partial participation with arbitrary distributions over nodes, which plays a key role in Federated Learning.
• Finally, we provide experimental evaluation on an array of classification tasks with MNIST and CIFAR10 datasets corroborating our theoretical findings.

Algorithm 2: DCSGD with Error Feedback
1: Input: x^0, stepsizes {η^k}, initial errors e_i^0 = 0
2: for k = 0, 1, 2, . . . do
3:   Parallel: Worker side
4:   for i = 1, . . . , n do
5:     compute stochastic gradient g_i^k
6:     send Δ_i^k = C(η^k g_i^k + e_i^k) to master
7:     e_i^{k+1} = η^k g_i^k + e_i^k − Δ_i^k
8:   end for
9:   Master side
10:  aggregate Δ^k = (1/n) Σ_{i=1}^n Δ_i^k
11:  broadcast Δ^k to each worker
12:  Parallel: Worker side
13:  for i = 1, . . . , n do
14:    x^{k+1} = x^k − Δ^k
15:  end for
16: end for
(Algorithm 1, DCSGD without Error Feedback, is identical except that workers send Δ_i^k = C(g_i^k) and update x^{k+1} = x^k − η^k Δ^k.)
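For concreteness, the two per-round update rules can be sketched in Python (a single-process simulation; C and the gradient list are placeholders for a compressor and the workers' stochastic gradients):

    def dcsgd_round(x, grads, C, lr):
        """Algorithm 1: workers send C(g_i); the master averages and
        broadcasts the update."""
        delta = sum(C(g) for g in grads) / len(grads)
        return x - lr * delta

    def dcsgd_ef_round(x, grads, errors, C, lr):
        """Algorithm 2: workers compress lr * g_i plus the accumulated
        error and keep the compression residual for the next round."""
        deltas = []
        for i, g in enumerate(grads):
            d = C(lr * g + errors[i])
            errors[i] = lr * g + errors[i] - d
            deltas.append(d)
        return x - sum(deltas) / len(deltas)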
Error Feedback is not a Good Idea when Using Unbiased Compressors
In this section we first introduce the notions of unbiased and general compression operators, and then compare Distributed Compressed SGD (DCSGD) without (Algorithm 1) and with (Algorithm 2) Error Feedback.
Unbiased vs General Compression Operators
We start with the definitions of unbiased and general (contractive) compression operators [Cordonnier, 2018, Stich et al., 2018, Koloskova et al., 2019]. Definition 1 (Unbiased Compression Operator). A randomized mapping C : R^d → R^d is an unbiased compression operator (unbiased compressor) if there exists δ ≥ 1 such that

$$\mathbb{E}[C(x)] = x, \qquad \mathbb{E}\|C(x)\|^2 \le \delta\|x\|^2, \quad \forall x \in \mathbb{R}^d. \qquad (2)$$
If this holds, we will for simplicity write C ∈ U(δ).
Definition 2 (General Compression Operator). A (possibly) randomized mapping C : R d → R d is a general compression operator (general compressor) if there exists λ > 0 and δ ≥ 1 such that
$$\mathbb{E}\left\|\lambda C(x) - x\right\|^2 \le \left(1 - \frac{1}{\delta}\right)\|x\|^2, \quad \forall x \in \mathbb{R}^d. \qquad (3)$$
If this holds, we will for simplicity write C ∈ C(δ).
To link these two definitions, we include the following simple lemma (see, e.g., Beznosikov et al. [2020]). Lemma 1. If C ∈ U(δ), then (3) holds with λ = 1/δ, i.e., C ∈ C(δ). That is, U(δ) ⊂ C(δ).
Note that the opposite inclusion to that established in the above lemma does not hold. For instance, the Top-K operator belongs to C(δ), but does not belong to U(δ). In the next section we develop a procedure for transforming any mapping C : R d → R d (and in particular, any general compressor) into a closely related induced unbiased compressor.
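To make the contrast concrete, here are minimal Python/PyTorch sketches of Top-K (general, with λ = 1 and δ = d/K) and Rand-K (unbiased, with δ = d/K); both operate on flattened vectors:

    import torch

    def top_k(x: torch.Tensor, k: int) -> torch.Tensor:
        """Greedy contractive compressor: keep the k largest-magnitude
        entries. Satisfies (3) with lambda = 1 and delta = d/k, but is
        biased."""
        out = torch.zeros_like(x)
        idx = x.abs().topk(k).indices
        out[idx] = x[idx]
        return out

    def rand_k(x: torch.Tensor, k: int) -> torch.Tensor:
        """Unbiased compressor: keep k uniformly random entries rescaled
        by d/k, so E[C(x)] = x and (2) holds with delta = d/k."""
        d = x.numel()
        idx = torch.randperm(d)[:k]
        out = torch.zeros_like(x)
        out[idx] = x[idx] * (d / k)
        return out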
Distributed SGD with vs without Error Feedback
In the rest of this section, we compare the convergence rates for Distributed Compressed SGD (Algorithm 1) and Distributed Compressed SGD with Error Feedback (Algorithm 2). We do this comparison under standard assumptions [Karimi et al., 2016, Bottou et al., 2018, Necoara et al., 2019, Gower et al., 2019, Stich and Karimireddy, 2020], listed next.
First, we assume throughout that f has a unique minimizer x^⋆, and let f^⋆ = f(x^⋆) > −∞.
Assumption 1 (µ-quasi convexity). f is µ-quasi convex, i.e.,

$$f^\star \ge f(x) + \langle \nabla f(x), x^\star - x \rangle + \frac{\mu}{2}\left\|x^\star - x\right\|^2, \quad \forall x \in \mathbb{R}^d. \qquad (4)$$
Assumption 2 (unbiased gradient oracle). The stochastic gradient used in Algorithms 1 and 2 satisfies
$$\mathbb{E}\left[g_i^k \mid x^k\right] = \nabla f_i(x^k), \quad \forall i, k. \qquad (5)$$

Note that this assumption implies $\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^n g_i^k \,\middle|\, x^k\right] = \nabla f(x^k)$.

Assumption 3 ((L, σ²)-expected smoothness). Each function f_i is (L, σ²)-smooth, i.e., there exist constants L > 0 and σ² ≥ 0 such that

$$\mathbb{E}\left\|g_i^k\right\|^2 \le 2L\left(f_i(x^k) - f_i^\star\right) + \sigma^2, \qquad (6)$$

where f_i^⋆ is the minimum functional value of f_i.
This assumption can be seen as a generalization of standard L-smoothness. For more details and discussion, see e.g. [Gower et al., 2019]. Equipped with these assumptions, we are ready to proceed with the convergence theory.
Theorem 2 (Convergence of DCSGD in the n = 1 case). Consider the DCSGD algorithm in the single node (n = 1) case. Let Assumptions 1-3 hold and let C ∈ U(δ). Then there exist stepsizes η^k ≤ 1/(2δL) and weights w^k ≥ 0 such that for all T ≥ 1 we have

$$\mathbb{E}\left[f(\bar{x}^T)\right] - f^\star + \mu\,\mathbb{E}\left\|x^T - x^\star\right\|^2 \le 64\,\delta L r_0 \exp\left(-\frac{\mu T}{4\delta L}\right) + \frac{36\sigma^2}{\mu T}, \qquad (7)$$

where $r_0 = \|x^0 - x^\star\|^2$, $W_T = \sum_{k=0}^T w^k$, and $\mathrm{Prob}(\bar{x}^T = x^k) = w^k / W_T$.
Note that the statistical term 36σ²/(µT) does not depend on the compression and matches the optimal rate for SGD, including the constant. The other important aspect to consider is the first term. It guarantees linear convergence if σ² = 0, which holds for commonly used over-parameterized networks [Vaswani et al., 2019], as one can reach zero training loss. Comparing our results to the best-known result for Error Feedback [Stich and Karimireddy, 2020] used with C ∈ U(δ) ⊂ C(δ), our theory allows for 10× larger stepsizes. Moreover, our convergence guarantee (7) for unbiased compressors implies convergence for both the functional values and the last iterate, rather than for functional values only. In addition, while the rate of DCSGD as captured by Theorem 2 and the rate of DCSGD with Error Feedback [Stich and Karimireddy, 2020] are the same in Õ notation, our rate has at least 10 times better constants and does not contain any hidden polylogarithmic factors. Another practical advantage is that there is no need to store an extra vector for the error, which reduces the storage costs by a factor of two, making Algorithm 1 a viable choice for deep learning models with millions of parameters. Finally, one does not need to assume standard L-smoothness in order to prove convergence, while, on the other hand, L-smoothness is an important building block for proving convergence for general compressors due to the presence of the error [Stich and Karimireddy, 2020, Beznosikov et al., 2020]. Putting it all together, this suggests that standard DCSGD (Algorithm 1) is preferable, in theory, to DCSGD with Error Feedback (Algorithm 2) for C ∈ U(δ).
Fixing Bias with Error-Compression
In the previous section, we showed that compressed DCSGD is theoretically preferable to DCSGD with Error Feedback for C ∈ U(δ). Unfortunately, C(δ) ⊄ U(δ), an example being the Top-K compressor [Alistarh et al., 2018, Stich et al., 2018], which operates by keeping only the top K coordinates in magnitude and setting the rest to zero. This compressor belongs to C(d/K), but does not belong to U(δ) for any δ. On the other hand, multiple unbiased alternatives to Top-K have been proposed in the literature, including gradient sparsification [Wangni et al., 2018] and adaptive random sparsification [Beznosikov et al., 2020].
Induced Compressor
We now propose a new way of constructing an unbiased compressor from any compressor C ∈ C. We shall argue that using this induced compressor within DCSGD is preferable, in both theory and practice, to using the original compressor within DCSGD + Error Feedback.
Theorem 3. For C_1 ∈ C(δ_1) with λ = 1, choose C_2 ∈ U(δ_2) and define the induced compressor via

$$C(x) := C_1(x) + C_2\big(x - C_1(x)\big).$$

The induced operator satisfies C ∈ U(δ) with $\delta = \delta_2\left(1 - \frac{1}{\delta_1}\right) + \frac{1}{\delta_1}$.
To get some intuition about this procedure, first recall the structure used in Error Feedback. The gradient estimator is first compressed with C_1(g), and the error e = g − C_1(g) is computed and stored in memory. In our proposed approach, instead of storing the error e, we compress it with an unbiased compressor C_2 and communicate both of these compressed vectors. Note that this procedure results in extra variance, as we do not work with the exact error but only with its unbiased estimate. On the other hand, there is no bias. In addition, due to our construction, at least the same amount of information is sent as for plain C_1(g). The only drawback is the necessity to send two compressed vectors instead of one. Theorem 3 provides freedom in generating the induced compressor through the choice of the unbiased compressor C_2. In practice, it makes sense to choose C_2 with a similar (or smaller) compression factor to the compressor C_1 we are transforming, as this way the total communication complexity per iteration is preserved, up to a factor of two.
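A minimal Python sketch of the induced compressor; c1 and c2 are any general and unbiased compressors (e.g., the top_k and rand_k sketches above):

    def induced_compressor(x, c1, c2):
        """Theorem 3: compress with the (possibly biased) c1 and add an
        unbiased compression of the residual, so the sum is unbiased.
        In a distributed setting, c1(x) and c2(x - c1(x)) are the two
        messages actually transmitted; the master sums them."""
        y = c1(x)
        return y + c2(x - y)

    # Example of an unbiased Top-K-like operator with the same total budget:
    # compress = lambda x: induced_compressor(
    #     x, lambda v: top_k(v, K // 2), lambda v: rand_k(v, K - K // 2))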
Benefits of Induced Compressor
In the light of the results in Section 2, we argue that one should always prefer unbiased compressors to biased ones as long as their variances δ and communication complexities are the same, e.g., Rand-K over Top-K. Contrary to the theory, greedy compressors are often observed to perform better due to their lower empirical variance [Beznosikov et al., 2020].
These considerations give a practical significance to Theorem 3, as we demonstrate on the following example. Let us consider two compressors, one biased C_1 ∈ C(δ_1) and one unbiased C_2 ∈ U(δ_2), such that δ_1 = δ_2 = δ, having identical communication complexity, e.g., Top-K and Rand-K. The induced compressor $C_3(x) := C_1(x) + C_2(x - C_1(x))$ belongs to U(δ_3), where

$$\delta_3 = \delta - \left(1 - \frac{1}{\delta}\right) < \delta.$$

While the size of the transmitted message is doubled, one can use Algorithm 1 since C_3 is unbiased, which provides at least 10× better convergence guarantees than Algorithm 2.
Based on the construction of the induced compressor, one might expect that we need extra memory, as "the error" e = g − C_1(g) needs to be stored; however, this happens only during computation. This is not an issue, as compressors for DNNs are always applied layer-wise [Dutta et al., 2019], and hence the size of the extra memory is negligible. The same argument does not help EF, as the error needs to be stored at all times for each layer.
Extensions
We now develop several extensions of Algorithm 1 relevant to distributed optimization in general, and to Federated Learning in particular. This is all possible due to the simplicity of our approach. Note that in the case of Error Feedback, these extensions have either not been obtained yet, or similarly to Section 2, the results are worse when compared to our derived bounds for unbiased compressors.
Multi-node scenario
We begin with the case of general n ≥ 1. The following theorem provides the convergence rate of Algorithm 1. Theorem 4 (Convergence of DCSGD in the n ≥ 1 case). Consider the DCSGD algorithm in the multiple-node (n ≥ 1) case. Let Assumptions 1-3 hold and let C ∈ U(δ). Then there exist stepsizes η^k ≤ 1/(2δ_n L) and weights w^k ≥ 0 such that for all T ≥ 1 we have

$$\mathbb{E}\left[f(\bar{x}^T)\right] - f^\star + \mu\,\mathbb{E}\left\|x^T - x^\star\right\|^2 \le 64\,\delta_n L r_0 \exp\left(-\frac{\mu T}{4\delta_n L}\right) + \frac{36(\sigma^2 + D)}{\mu T},$$

where r_0, W_T, x̄^T are defined in Theorem 2, $D = \frac{2L}{n}\sum_{i=1}^n \left(f_i(x^\star) - f_i^\star\right)$ and $\delta_n = \frac{\delta - 1}{n} + 1$. Inspecting the convergence rate, observe that Theorem 2 arises as a special case of Theorem 4 for n = 1. Similar arguments and comments can be made as those we made in the discussion after Theorem 2. However, now we need to make a comparison with the complexity results of Beznosikov et al. [2020], who analyzed Algorithm 2 in the n > 1 case. In addition, the multi-node scenario reduces the effect of the variance constant δ by a factor of 1/n, which is not the case for EF.
Partial Participation with Arbitrary Distribution over Nodes
In this section, we extend our results to a variant of DCSGD utilizing partial participation, which is of key relevance to Federated Learning. In this framework, only a subset of all nodes communicates to the master node in each communication round. In this work, we consider a very general partial participation framework: we assume that the subset of participating clients is determined by a fixed but otherwise arbitrary random set-valued mapping S (a "sampling") with values in 2 [n] , where [n] = {1, 2, . . . , n}. To the best of our knowledge, this is the first partial participation result where an arbitrary distribution over the nodes is considered.
On the other hand, this is not the first work which makes use of the arbitrary sampling paradigm; this was used before in other contexts, e.g., for obtaining importance sampling guarantees for coordinate descent [Qu et al., 2015], primal-dual methods [Chambolle et al., 2018], and variance reduction [Horváth and Richtárik, 2019].
Note that the sampling S is uniquely defined by assigning probabilities to all 2 n subsets of [n]. With each sampling S we associate a probability matrix P ∈ R n×n defined by P ij := Prob({i, j} ⊆ S). The probability vector associated with S is the vector composed of the diagonal entries of P: p = (p 1 , . . . , p n ) ∈ R n , where p i := Prob(i ∈ S). We say that S is proper if p i > 0 for all i. It is easy to show that b := E [|S|] = Trace (P) = n i=1 p i , and hence b can be seen as the expected number of clients participating in each communication round.
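As one concrete proper sampling, each worker can participate independently with its own probability; a short Python sketch (the independent sampling and the probability values are our own illustrative choices):

    import numpy as np

    def independent_sampling(p, rng):
        """Sample S by including worker i independently with probability
        p_i (one proper sampling; the framework allows arbitrary
        distributions over subsets)."""
        return [i for i, p_i in enumerate(p) if rng.random() < p_i]

    p = np.array([0.9, 0.5, 0.5, 0.1])
    # For this sampling, P_ij = p_i * p_j for i != j and P_ii = p_i:
    P = np.outer(p, p) + np.diag(p - p**2)
    b = np.trace(P)   # expected number of participating workers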
There are two algorithmic changes due to this extension: line 4 of Algorithm 1 does not iterate over every node, only over nodes i ∈ S^k, where S^k ∼ S, and the aggregation step in line 9 is adjusted to lead to an unbiased estimator of the gradient, which gives $\Delta^k = \sum_{i \in S^k} \frac{1}{n p_i} \Delta_i^k$. To prove convergence, we exploit the following lemma.
Lemma 5 (Lemma 1, Horváth and Richtárik [2019]). Let ζ_1, ζ_2, . . . , ζ_n be vectors in R^d and let $\bar{\zeta} := \frac{1}{n}\sum_{i=1}^n \zeta_i$ be their average. Let S be a proper sampling. Then there exists v ∈ R^n such that

$$\mathbf{P} - pp^\top \preceq \mathrm{Diag}\left(p_1 v_1, p_2 v_2, \ldots, p_n v_n\right). \qquad (8)$$

Moreover,

$$\mathbb{E}\left\|\sum_{i \in S} \frac{\zeta_i}{n p_i} - \bar{\zeta}\right\|^2 \le \frac{1}{n^2} \sum_{i=1}^n \frac{v_i}{p_i} \left\|\zeta_i\right\|^2, \qquad (9)$$

where S ∼ S and the expectation is taken over the sampling S.
The following theorem establishes the convergence rate for Algorithm 1 with partial participation.

Theorem 6. Let Assumptions 1-3 hold and C ∈ U(δ). Then there exist stepsizes η^k ≤ 1/(2δ_S L) and weights w^k ≥ 0 such that

$$\mathbb{E}\left[f(\bar{x}^T)\right] - f^\star + \mu\,\mathbb{E}\left\|x^T - x^\star\right\|^2 \le 64\,\delta_S L r_0 \exp\left(-\frac{\mu T}{4\delta_S L}\right) + \frac{36(\sigma^2 + D)}{\mu T},$$

where r_0, W_T, x̄^T are defined in Theorem 2, D in Theorem 4, and $\delta_S = \delta \max_{i \in [n]} \frac{v_i}{n p_i} + \frac{\delta - 1}{n} + 1$.

For the case S = [n] with probability 1, one can show that Lemma 5 holds with v = 0, and hence we exactly recover the results of Theorem 4. In addition, we can quantify the slowdown factor with respect to the full participation regime (Theorem 4), which is max_{i∈[n]} v_i/p_i. While in our framework we assume the distribution S to be fixed, using the results of Eichner et al. [2019], one could extend this result to a block-cyclic structure with each block having an arbitrary distribution S_j.
Note that in all the previous theorems we can only guarantee a sublinear O(1/T) convergence rate. A linear rate is obtained in the special case when σ² = 0 (in which case D = 0). This is satisfied if there is no noise at the optimum, which is the case for over-parameterized models. Furthermore, a linear rate can be obtained using compression of gradient differences, as pioneered in the DIANA algorithm [Mishchenko et al., 2019a]. Both of these scenarios were already considered in Horváth et al. [2019b] for the framework of Theorem 4 and full participation. These results can be easily extended to partial participation using our proof technique for Theorem 6. Note that this reduction is not possible for Error Feedback, as the analysis of the DIANA algorithm is heavily dependent on the unbiasedness property. This points to another advantage of the induced compressor framework introduced in Section 3.
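For reference, a sketch of the DIANA-style shift-and-compress worker step mentioned above (simplified; see Mishchenko et al. [2019a] for the precise method and parameter choices):

    def diana_worker_step(g, h, C, alpha):
        """Compress the gradient difference g - h and update the local
        shift h; the master adds its copy of h to the received message to
        form the gradient estimate."""
        m = C(g - h)
        h_new = h + alpha * m
        return m, h_new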
Acceleration
As the last comparison, we discuss the combination of compression and acceleration/momentum. This setting is very important to consider, as essentially all state-of-the-art methods for training deep learning models, including Adam [Kingma and Ba, 2015, Reddi et al., 2018], rely on the use of momentum in one form or another. One can treat the unbiased compressed gradient as a stochastic gradient [Gorbunov et al., 2020], and the theory for momentum SGD [Yang et al., 2016, Gadat et al., 2018, Loizou and Richtárik, 2017] would be applicable with an extra smoothness assumption. Moreover, it is possible to remove the variance caused by stochasticity and obtain linear convergence with an accelerated rate [Li et al., 2020]. Similarly to our previous discussion, both of these techniques are heavily dependent on the unbiasedness property. It is an intriguing question, although out of the scope of this paper, to investigate the combined effect of momentum and Error Feedback and see whether these techniques are compatible theoretically.
Experiments
In this section, we compare Algorithms 1 and 2 for several compression operators. If a method's name contains "+ EF", Error Feedback is applied, i.e., Algorithm 2 is used; otherwise, Algorithm 1 is used. To be fair, we always compare methods with the same communication complexity per iteration. We report the number of epochs (passes over the dataset) with respect to training loss, testing loss, and testing accuracy. These are obtained by evaluating the best model in terms of the validation error on the test dataset. The validation error is computed on 10% of the training data, selected at random. Similarly, we tune the step size using the same validation set. For every experiment, we randomly distributed the training dataset among 8 workers; each worker computes its local gradient based on its own dataset. We used a batch size of 32. All the provided figures display the mean performance with one standard error over 5 independent runs. For a fair comparison, we use the same random seed for the compared methods. Our experimental results are based on a Python implementation of all the methods running in PyTorch. All reported quantities are independent of the system architecture and network bandwidth. Our implementation is freely available on GitHub: https://github.com/SamuelHorvath/Compressed_SGD_PyTorch.
Dataset and Models
We evaluate on two datasets: MNIST and CIFAR10. For MNIST, we consider a small neural network model with two fully connected (FC) layers and 512 neurons in the second layer. The step size is tuned over the values 1, 0.5, and 0.1. For CIFAR10, we consider VGG11 [Simonyan and Zisserman, 2015] and ResNet18 [He et al., 2016] models and step sizes 0.1, 0.05, and 0.01. Some of the plots are displayed in the supplementary materials, Appendix A.
Error Feedback for Unbiased Compression Operators
In our first experiment, we compare the effect of Error Feedback in the case when an unbiased compressor is used. Note that unbiased compressors are theoretically guaranteed to work with both Algorithm 1 and Algorithm 2. We can see from Figure 1 that adding Error Feedback can hurt the performance; we use natural compression [Horváth et al., 2019a] and TernGrad [Wen et al., 2017] (which coincides with QSGD [Alistarh et al., 2016] and natural dithering [Horváth et al., 2019a] with the infinity norm and one level) as compressors. This agrees with our theoretical findings. In addition, for sparsification techniques such as Random Sparsification or Gradient Sparsification [Wangni et al., 2018], we observed that when sparsity is set to 10%, Algorithm 1 converges for all the selected step sizes, but Algorithm 2 diverges and a smaller step size needs to be used. This is an important observation, as many practical works [Wei et al., 2015, Aji and Heafield, 2017, Hsieh et al., 2017, Lin et al., 2018b, Lim et al., 2018] use the sparsification techniques mentioned in this section but propose to use EF, while our work shows that exploiting unbiasedness leads not only to better convergence but also to memory savings.
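A sketch of the natural compression operator used here (simplified: each nonzero entry is randomly rounded to one of the two nearest powers of two so the operator stays unbiased; encoding details are omitted):

    import torch

    def natural_compression(x: torch.Tensor) -> torch.Tensor:
        """Round |x_i| to 2^floor(log2|x_i|) or twice that value, with
        probabilities chosen so that E[C(x)] = x; zeros pass through."""
        out = torch.zeros_like(x)
        nz = x != 0
        mag = x[nz].abs()
        low = torch.pow(2.0, torch.floor(torch.log2(mag)))
        round_up = torch.bernoulli((mag - low) / low)  # P[up] = (|x|-low)/low
        out[nz] = x[nz].sign() * low * (1 + round_up)
        return out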
Unbiased Alternatives to Biased Compression
In this section, we investigate candidates for unbiased compressors that can compete with Top-K, one of the most frequently used compressors. Theoretically, Top-K is not guaranteed to work by itself and might lead to divergence [Beznosikov et al., 2020] unless Error Feedback is applied. One would usually compare the performance of Top-K with EF to Rand-K, which keeps K randomly selected coordinates and then scales the output by d/K to preserve unbiasedness. Rather than naively comparing to Rand-K, we propose to use different unbiased approaches, which are more closely related to the Top-K compressor. The first one is Gradient Sparsification proposed by Wangni et al. [2018], which we refer to as Rand-K (Wangni et al.), where the probability of keeping each coordinate scales with its magnitude and the communication budget. As the second alternative, we propose to use our induced compressor, where C_1 is Top-a and the unbiased part is Rand-(K − a) (Wangni et al.) with communication budget K − a. It should be noted that a can be considered a hyperparameter to tune. For our experiment, we chose it to be K/2 for simplicity. Figure 2 suggests that both of the proposed techniques can outperform Top-K with EF, as can be seen for CIFAR10 with VGG11. Moreover, they do not require extra memory to store the error vector. In addition, our unbiased induced compressor further improves over Rand-K (Wangni et al.). Finally, Top-K without EF suffers a significant decrease in performance, which stresses the necessity of error correction.
Effect of Acceleration/Momentum
As the next experiment, we look at the effect of momentum, set to 0.9, on DCSGD with and without EF. We consider the same setup as in the previous subsections. Based on our discussion of acceleration, we know that unbiased compressors are compatible with momentum and one can obtain theoretical guarantees, while for biased compressors with EF this is not clear. Figure 3 shows that in terms of the training loss, Top-K with EF performs slightly worse than its unbiased alternatives. Similarly to the previous experiment, the performance of Top-K is significantly degraded without EF. As observed in the first experiment, adding EF has a negative impact on the convergence of TernGrad.
Failure of DCSGD with biased Top-1
In this experiment, we revisit the example considered in Beznosikov et al. [2020], which was used as a counterexample to show that some form of error correction is needed in order for biased compressors to work/provably converge. We run experiments on their construction and show that while Error Feedback fixes the divergence, it is still significantly dominated by unbiased non-uniform sparsification (NU Rand-1), which keeps only one non-zero coordinate i, sampled with probability $|x_i| / \sum_{j=1}^d |x_j|$ (where |x| denotes the element-wise absolute value), as can be seen in Figure 4. Example from Beznosikov et al. [2020]. Consider n = d = 3 and define the following smooth and strongly convex quadratic functions
$$f_1(x) = \langle a, x\rangle^2 + \frac{1}{4}\|x\|^2, \quad f_2(x) = \langle b, x\rangle^2 + \frac{1}{4}\|x\|^2, \quad f_3(x) = \langle c, x\rangle^2 + \frac{1}{4}\|x\|^2,$$

where a = (−3, 2, 2), b = (2, −3, 2), c = (2, 2, −3). Then, with the initial point x^0 = (t, t, t), t > 0,

$$\nabla f_1(x^0) = \tfrac{t}{2}(-11, 9, 9), \quad \nabla f_2(x^0) = \tfrac{t}{2}(9, -11, 9), \quad \nabla f_3(x^0) = \tfrac{t}{2}(9, 9, -11).$$

Using the Top-1 compressor, we get

$$C(\nabla f_1(x^0)) = \tfrac{t}{2}(-11, 0, 0), \quad C(\nabla f_2(x^0)) = \tfrac{t}{2}(0, -11, 0), \quad C(\nabla f_3(x^0)) = \tfrac{t}{2}(0, 0, -11).$$

The next iterate of DCGD is

$$x^1 = x^0 - \frac{\eta}{3}\sum_{i=1}^3 C\left(\nabla f_i(x^0)\right) = \left(1 + \frac{11\eta}{6}\right) x^0.$$

Repeated application gives $x^k = \left(1 + \frac{11\eta}{6}\right)^k x^0$, which diverges exponentially fast to +∞ since η > 0.
Initialization. In our experiments, we use the starting point x^0 = (1, 1, 1) and choose step size 1/L, where L is the smoothness parameter of f = (1/3)(f_1 + f_2 + f_3). Note that the zero vector x^⋆ = (0, 0, 0) is the unique minimizer of f.
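The divergence factor (1 + 11η/6)^k can be verified directly; a short numerical check in Python, contrasting Top-1 with the unbiased NU Rand-1 (the step size 0.01 is an illustrative choice, not the 1/L used in the figures):

    import numpy as np

    a = np.array([-3., 2., 2.])
    b = np.array([2., -3., 2.])
    c = np.array([2., 2., -3.])

    def grad(v, x):                       # gradient of <v,x>^2 + ||x||^2/4
        return 2.0 * (v @ x) * v + 0.5 * x

    def top1(g):                          # biased greedy compressor
        out = np.zeros_like(g)
        i = np.argmax(np.abs(g))
        out[i] = g[i]
        return out

    def nu_rand1(g, rng):                 # unbiased non-uniform sparsification
        prob = np.abs(g) / np.abs(g).sum()
        i = rng.choice(len(g), p=prob)
        out = np.zeros_like(g)
        out[i] = g[i] / prob[i]           # rescale to keep E[C(g)] = g
        return out

    rng, eta = np.random.default_rng(0), 0.01
    x_b = x_u = np.ones(3)
    for _ in range(300):
        x_b = x_b - eta * np.mean([top1(grad(v, x_b)) for v in (a, b, c)], axis=0)
        x_u = x_u - eta * np.mean([nu_rand1(grad(v, x_u), rng) for v in (a, b, c)], axis=0)
    print(np.linalg.norm(x_b), np.linalg.norm(x_u))  # grows vs. decays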
Conclusion
In this paper, we argue that if compressed communication is required for distributed training due to communication overhead, it is better to use unbiased compressors. We show that this leads to strictly better convergence guarantees with fewer assumptions. In addition, we propose a new construction for transforming any compressor into an unbiased one using a compressed EF-like approach. Besides theoretical superiority, usage of unbiased compressors enjoys lower memory requirements. Our theoretical findings are corroborated with empirical evaluation.
A Extra Experiments
In this section, we include extra experiments which complement the figures in the main paper. Figure 5 corresponds to the same settings as Figure 1. Analogously, Figure 6 corresponds to Figure 2 and Figure 7 to Figure 3. Essentially, the same conclusions can be drawn as argued in the main paper.

We follow (2), which holds for C \in \mathbb{U}(\delta):
\mathbb{E}\left\|\tfrac{1}{\delta} C(x) - x\right\|^2 = \tfrac{1}{\delta^2}\,\mathbb{E}\|C(x)\|^2 - \tfrac{2}{\delta}\,\langle \mathbb{E}[C(x)], x \rangle + \|x\|^2 \le \left(\tfrac{1}{\delta} - \tfrac{2}{\delta} + 1\right)\|x\|^2 = \left(1 - \tfrac{1}{\delta}\right)\|x\|^2,
which concludes the proof.
B.2 Proof of Theorem 2
For the case n = 1, Algorithm 1 reduces to the setting f_1 = f; thus the update is

x^{k+1} = x^k - \eta^k\, C(g^k).
We start with
\mathbb{E}\left[\|x^{k+1} - x^*\|^2 \mid x^k\right] = \|x^k - x^*\|^2 - \eta^k\, \mathbb{E}\left[\langle C(g^k), x^k - x^* \rangle \mid x^k\right] + (\eta^k)^2\, \mathbb{E}\left[\|C(g^k)\|^2 \mid x^k\right]
\overset{(2)+(5)}{\le} \|x^k - x^*\|^2 - \eta^k \langle \nabla f(x^k), x^k - x^* \rangle + (\eta^k)^2 \delta\, \mathbb{E}\left[\|g^k\|^2 \mid x^k\right]
\overset{(5)+(6)}{\le} \|x^k - x^*\|^2 - \eta^k \langle \nabla f(x^k), x^k - x^* \rangle + (\eta^k)^2 \delta \left(2L(f(x^k) - f^*) + \sigma^2\right)
\overset{(4)}{\le} (1 - \mu\eta^k)\|x^k - x^*\|^2 - 2\eta^k\left(1 - \eta^k \delta L\right)(f(x^k) - f^*) + (\eta^k)^2 \delta \sigma^2.
Taking full expectation and using \eta^k \le \frac{1}{2\delta L}, we obtain

\mathbb{E}\|x^{k+1} - x^*\|^2 \le (1 - \mu\eta^k)\, \mathbb{E}\|x^k - x^*\|^2 - \eta^k\, \mathbb{E}\left[f(x^k) - f^*\right] + (\eta^k)^2 \delta \sigma^2.
Let A = [a_1, \ldots, a_n] \in \mathbb{R}^{d \times n}, where a_i = \frac{\zeta_i}{p_i}, and let e be the vector of all ones in \mathbb{R}^n. We now write the variance of X in a form which will be convenient for establishing a bound:
\mathbb{E}\|X - \mathbb{E}[X]\|^2 = \mathbb{E}\|X\|^2 - \|\mathbb{E}[X]\|^2 = \mathbb{E}\left\|\sum_{i \in S} \frac{\zeta_i}{n p_i}\right\|^2 - \|\bar{\zeta}\|^2 = \mathbb{E}\left[\sum_{i,j} \left\langle \frac{\zeta_i}{n p_i}, \frac{\zeta_j}{n p_j} \right\rangle 1_{i,j \in S}\right] - \|\bar{\zeta}\|^2 = \sum_{i,j} p_{ij} \left\langle \frac{\zeta_i}{n p_i}, \frac{\zeta_j}{n p_j} \right\rangle - \sum_{i,j} \left\langle \frac{\zeta_i}{n}, \frac{\zeta_j}{n} \right\rangle = \frac{1}{n^2} \sum_{i,j} (p_{ij} - p_i p_j)\, a_i^\top a_j = \frac{1}{n^2}\, e^\top \left( (P - p p^\top) \circ A^\top A \right) e. \tag{13}
Since by assumption we have P - p p^\top \preceq \mathrm{Diag}(p \circ v), we can further bound

e^\top \left( (P - p p^\top) \circ A^\top A \right) e \le e^\top \left( \mathrm{Diag}(p \circ v) \circ A^\top A \right) e = \sum_{i=1}^n p_i v_i \|a_i\|^2.
To obtain (9), it remains to combine this with (13).
B.6 Proof of Theorem 6
Similarly to the proof of Theorem 2, we use the update of Algorithm 1 to bound the following quantity
\mathbb{E}\left[\|x^{k+1} - x^*\|^2 \mid x^k\right] = \|x^k - x^*\|^2 - \eta^k\, \mathbb{E}\left[\Big\langle \sum_{i \in S^k} \tfrac{1}{n p_i} C(g_i^k),\, x^k - x^* \Big\rangle \,\Big|\, x^k\right] + (\eta^k)^2\, \mathbb{E}\left[\Big\| \sum_{i \in S^k} \tfrac{1}{n p_i} C(g_i^k) \Big\|^2 \,\Big|\, x^k\right]
\overset{(2)+(5)}{\le} \|x^k - x^*\|^2 - \eta^k \langle \nabla f(x^k), x^k - x^* \rangle + (\eta^k)^2 \left( \mathbb{E}\left[\Big\| \sum_{i \in S^k} \tfrac{1}{n p_i} C(g_i^k) - \tfrac{1}{n} \sum_{i=1}^n C(g_i^k) \Big\|^2 \,\Big|\, x^k\right] + \mathbb{E}\left[\Big\| \tfrac{1}{n} \sum_{i=1}^n C(g_i^k) \Big\|^2 \,\Big|\, x^k\right] \right)
\overset{(5)+(9)+(2)}{\le} \|x^k - x^*\|^2 - \eta^k \langle \nabla f(x^k), x^k - x^* \rangle + \frac{(\eta^k)^2}{n} \sum_{i=1}^n \left( \frac{\delta v_i}{n p_i} + \frac{\delta - 1}{n} + 1 \right) \mathbb{E}\left[\|g_i^k\|^2 \mid x^k\right]
\overset{(4)+(6)}{\le} (1 - \mu \eta^k)\|x^k - x^*\|^2 - 2\eta^k \left(1 - \eta^k \delta_S L\right)(f(x^k) - f^*) + (\eta^k)^2 \delta_S (\sigma^2 + D).
Taking full expectation and using \eta^k \le \frac{1}{2 \delta_S L}, we obtain

\mathbb{E}\|x^{k+1} - x^*\|^2 \le (1 - \mu \eta^k)\, \mathbb{E}\|x^k - x^*\|^2 - \eta^k\, \mathbb{E}\left[f(x^k) - f^*\right] + (\eta^k)^2 \delta_S (\sigma^2 + D).
The rest of the analysis is identical to the proof of Theorem 2, with the only difference being c = \sigma^2 + D and \delta_S in place of \delta.
Lemma 5 (Lemma 1, Horváth and Richtárik [2019]). Let \zeta_1, \zeta_2, \ldots, \zeta_n be vectors in \mathbb{R}^d and let \bar{\zeta} := \frac{1}{n} \sum_{i=1}^n \zeta_i be their average. Let S be a proper sampling. Then there exists v \in \mathbb{R}^n_+ such that

P - p p^\top \preceq \mathrm{Diag}(p \circ v).
Figure 1: Algorithm 1 vs. Algorithm 2 on MNIST with a 2-FC-layer network and natural compression (top), and on CIFAR10 with ResNet18 and TernGrad as the compression (bottom).

where r^0, W_T, \bar{x}^T are defined in Theorem 2, D in Theorem 4, and \delta_S = \delta \max_{i \in [n]} ...

Figure 2: Comparison of different sparsification techniques with and without usage of Error Feedback on MNIST with 2 FC layers (top) and CIFAR10 with VGG11 (bottom). K = 5% · d; for the Induced compressor, C_1 is Top-K/2 and C_2 is Rand-K/2 (Wangni et al.).

Figure 3: Comparison of different sparsification techniques with momentum, with and without usage of Error Feedback, on MNIST with 2 FC layers (top) and CIFAR10 with ResNet18 (bottom). K = 5% · d; for the Induced compressor, C_1 is Top-K/2 and C_2 is Rand-K/2 (Wangni et al.).

Figure 4: Comparison of Top-1 (+ EF) and NU Rand-1 on Example 1 from Beznosikov et al. [2020].

Figure 5: Algorithm 1 vs. Algorithm 2.

Figure 6: Comparison of different sparsification techniques w/ and w/o usage of Error Feedback on MNIST with 2 FC layers. K = 5% · d; for the Induced compressor, C_1 is Top-K/2 and C_2 is Rand-K/2 (Wangni et al.).

Figure 7: Comparison of different sparsification techniques with momentum and w/ and w/o usage of Error Feedback on CIFAR10 with VGG11.
Algorithm 1 DCSGD
1: Input: {\eta^k}_{k=0}^T > 0, x^0
2: for k = 0, 1, \ldots, T do
3:     Parallel: Worker side
4:     for i = 1, \ldots, n do
5:         obtain g_i^k
6:         send \Delta_i^k = C(g_i^k) to master
7:         ▷ no need to keep track of errors
8:     end for
9:     Master side
10:    aggregate \Delta^k = \frac{1}{n} \sum_{i=1}^n \Delta_i^k
11:    broadcast \Delta^k to each worker
12:    Parallel: Worker side
13:    for i = 1, \ldots, n do
14:        x^{k+1} = x^k - \eta^k \Delta^k
15:    end for
16: end for
Algorithm 2 DCSGD with Error Feedback
1: Input: {\eta^k}_{k=0}^T > 0, x^0, e_i^0 = 0 \; \forall i \in [n]
2: for k = 0, 1, \ldots, T do
3:     Parallel: Worker side
4:     for i = 1, \ldots, n do
5:         obtain g_i^k
6:
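For readers who want to trace the communication pattern, here is a single-process sketch of Algorithm 1, with an optional error-feedback branch. The EF branch follows the standard error-feedback update (compress the error-corrected gradient and keep the residual locally); since Algorithm 2's listing is truncated above, that branch is our assumption rather than a verbatim transcription.

```python
import numpy as np

def dcsgd(grad_fns, x0, eta, T, compressor, error_feedback=False):
    """Simulate n workers running distributed compressed SGD.
    grad_fns: list of n callables g_i(x) returning stochastic gradients.
    With error_feedback=True, worker i compresses e_i + g_i and keeps the
    residual e_i locally (standard EF; assumed, not the paper's listing)."""
    n = len(grad_fns)
    x = np.array(x0, dtype=float)
    e = [np.zeros_like(x) for _ in range(n)]   # only used by the EF branch
    for k in range(T):
        deltas = []
        for i in range(n):                     # worker side
            g = grad_fns[i](x)
            if error_feedback:
                msg = compressor(e[i] + g)
                e[i] = e[i] + g - msg          # residual stays on the worker
            else:
                msg = compressor(g)            # no error state to maintain
            deltas.append(msg)
        delta = np.mean(deltas, axis=0)        # master aggregates ...
        x = x - eta * delta                    # ... broadcasts; workers step
    return x
```

Running dcsgd with compressor=top1 on the counterexample above reproduces the divergence; switching to nu_rand1, or enabling error_feedback, restores convergence.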
The rest of the analysis is closely related to that of Stich [2019b], with extra adjustments so that the analysis can accommodate compression represented by the parameter \delta. We would like to point out that results similar to Stich [2019b] were also present in [Lacoste-Julien et al., 2012, Stich et al., 2018, Grimmer, 2019].

We first rewrite the previous inequality in the form

r^{k+1} \le (1 - a\eta^k)\, r^k - b\eta^k s^k + c\alpha (\eta^k)^2, \tag{10}

where r^k = \mathbb{E}\|x^k - x^*\|^2 and s^k = \mathbb{E}[f(x^k) - f^*]. We proceed with lemmas that establish a convergence guarantee for every recursion of type (10).

Lemma 7. Let \{r^k\}_{k \ge 0}, \{s^k\}_{k \ge 0} be as in (10) for a > 0 and for constant stepsizes \eta^k \equiv \eta := \frac{1}{\alpha d}, \forall k \ge 0. Then it holds for all T \ge 0:

r^T \le r^0 \exp\left(-\frac{aT}{\alpha d}\right) + \frac{c}{ad}.

Proof. This follows by relaxing (10) using \mathbb{E}[f(x^k) - f^*] \ge 0 and unrolling the recursion.

Lemma 8. Let \{r^k\}_{k \ge 0}, \{s^k\}_{k \ge 0} be as in (10) for a > 0 and for decreasing stepsizes \eta^k := \frac{2}{\alpha a (\kappa + k)}, \forall k \ge 0, with parameter \kappa := \frac{2d}{a} and weights w_k := \kappa + k.

Proof. We start by re-arranging (10) and multiplying both sides by w_k, where the equality follows from the definition of \eta^k and w_k and the inequality from

W_T = \frac{(2\kappa + T)(T + 1)}{2} \le \frac{2(\kappa + T)(1 + T)}{2} \le (\kappa + T)^2 \quad \text{for } \kappa = \frac{2d}{a} \ge 1.

By applying these two estimates we conclude the proof.

The convergence guarantee can be obtained as the combination of these two lemmas.

Lemma 9. Let \{r^k\}_{k \ge 0}, \{s^k\}_{k \ge 0} be as in (10), a > 0. Then there exist stepsizes \eta^k \le \frac{1}{\alpha d} and weights w_k \ge 0 such that ... \frac{1}{aT}.

Proof of Lemma 9. For integer T \ge 0, we choose stepsizes and weights as follows, for \kappa = \frac{2d}{a} and t_0 = T/2. We will now show that these choices imply the claimed result. We start with the case T \le \frac{d}{a}; for this case, the choice \eta = \frac{1}{\alpha d} gives ...

If T > \frac{d}{a}, then we obtain from Lemma 7 that

r^{t_0} \le r^0 \exp\left(-\frac{aT}{2\alpha d}\right) + \frac{c}{ad}.

From Lemma 8 we have for the second half of the iterates: ... \frac{1}{aT}.

Now we observe that the restart condition on r^{t_0} is satisfied because T > \frac{d}{a}. This concludes the proof.

Having these general convergence lemmas for recursions of the form (10), the proof of the theorem follows directly from Lemmas 7 and 9 with a = \mu, b = 1, c = \sigma^2, d = 2L and \alpha = \delta. It is easy to check that the condition \eta^k \le \frac{1}{\alpha d} = \frac{1}{2\delta L} is satisfied.
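To make the one-line proof of Lemma 7 concrete, the following sketch spells out the unrolling; it assumes the recursion (10) has the form reconstructed above, and the expansion itself is ours rather than text from the source:

```latex
\begin{align*}
r^{T} &\le (1 - a\eta)\, r^{T-1} + c\alpha\eta^2
       \le (1 - a\eta)^T r^0 + c\alpha\eta^2 \sum_{j=0}^{T-1} (1 - a\eta)^j
       && \text{drop } -b\eta s^k \le 0 \text{ and unroll (10)} \\
      &\le \exp(-a\eta T)\, r^0 + c\alpha\eta^2 \cdot \frac{1}{a\eta}
       && \text{geometric series and } 1 - a\eta \le e^{-a\eta} \\
      &= \exp\!\left(-\frac{aT}{\alpha d}\right) r^0 + \frac{c}{ad}
       && \text{with } \eta = \tfrac{1}{\alpha d}.
\end{align*}
```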
Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017.
Dan Alistarh, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Randomized quantization for communication-optimal stochastic gradient descent. arXiv preprint arXiv:1610.02132, 2016.
Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and Cédric Renggli. The convergence of sparsified gradient methods. In Advances in Neural Information Processing Systems, pages 5973-5983, 2018.
Debraj Basu, Deepesh Data, Can Karakus, and Suhas Diggavi. Qsparse-local-SGD: Distributed SGD with quantization, sparsification and local computations. In Advances in Neural Information Processing Systems, pages 14668-14679, 2019.
Aleksandr Beznosikov, Samuel Horvath, Peter Richtárik, and Mher Safaryan. On biased compression for distributed learning. arXiv preprint arXiv:2002.12410, 2020.
Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223-311, 2018.
Antonin Chambolle, Matthias J Ehrhardt, Peter Richtárik, and Carola-Bibiane Schonlieb. Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging applications. SIAM Journal on Optimization, 28(4):2783-2808, 2018.
Jean-Baptiste Cordonnier. Convex optimization using sparsified stochastic gradient descent with memory. Technical report, 2018.
Aritra Dutta, El Houcine Bergou, Ahmed M Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, and Panos Kalnis. On the discrepancy between the theoretical analysis and practical implementations of compressed communication for distributed deep learning. arXiv preprint arXiv:1911.08250, 2019.
Hubert Eichner, Tomer Koren, H Brendan McMahan, Nathan Srebro, and Kunal Talwar. Semi-cyclic stochastic gradient descent. arXiv preprint arXiv:1904.10120, 2019.
Sébastien Gadat, Fabien Panloup, Sofiane Saadane, et al. Stochastic heavy ball. Electronic Journal of Statistics, 12(1):461-529, 2018.
WM Goodall. Television by pulse code modulation. Bell System Technical Journal, 30(1):33-49, 1951.
Eduard Gorbunov, Filip Hanzely, and Peter Richtárik. A unified theory of SGD: Variance reduction, sampling, quantization and coordinate descent. In The 23rd International Conference on Artificial Intelligence and Statistics, 2020.
Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, and Peter Richtárik. SGD: General analysis and improved rates. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, 2019.
Benjamin Grimmer. Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity. SIAM Journal on Optimization, 29(2):1350-1365, 2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
Samuel Horváth and Peter Richtárik. Nonconvex variance reduced optimization with arbitrary sampling. In Proceedings of the 36th International Conference on Machine Learning, 2019.
Samuel Horváth, Chen-Yu Ho, L'udovit Horváth, Atal Narayan Sahu, Marco Canini, and Peter Richtárik. Natural compression for distributed deep learning. arXiv preprint arXiv:1905.10988, 2019a.
Samuel Horváth, Dmitry Kovalev, Konstantin Mishchenko, Sebastian Stich, and Peter Richtárik. Stochastic distributed learning with gradient quantization and variance reduction. arXiv preprint arXiv:1904.05115, 2019b.
Kevin Hsieh, Aaron Harlap, Nandita Vijaykumar, Dimitris Konomis, Gregory R Ganger, Phillip B Gibbons, and Onur Mutlu. Gaia: Geo-distributed machine learning approaching LAN speeds. In 14th Symposium on Networked Systems Design and Implementation, pages 629-647, 2017.
Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977, 2019.
Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 795-811. Springer, 2016.
Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for on-device federated learning. arXiv preprint arXiv:1910.06378, 2019a.
Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian U Stich, and Martin Jaggi. Error feedback fixes signSGD and other gradient compression schemes. arXiv preprint arXiv:1901.09847, 2019b.
Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. Tighter theory for local SGD on identical and heterogeneous data. In The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), 2020.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015.
Anastasia Koloskova, Sebastian U Stich, and Martin Jaggi. Decentralized stochastic optimization and gossip algorithms with compressed communication. arXiv preprint arXiv:1902.00340, 2019.
Jakub Konečný and Peter Richtárik. Randomized distributed mean estimation: Accuracy vs. communication. Frontiers in Applied Mathematics and Statistics, 4:62, 2018.
Simon Lacoste-Julien, Mark Schmidt, and Francis Bach. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method. arXiv preprint arXiv:1212.2002, 2012.
Mu Li, David G Andersen, Jun Woo Park, Alexander J Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J Shekita, and Bor-Yiing Su. Scaling distributed machine learning with the parameter server. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 583-598, 2014.
Zhize Li, Dmitry Kovalev, Xun Qian, and Peter Richtárik. Acceleration for compressed gradient descent in distributed and federated optimization. arXiv preprint arXiv:2002.11364, 2020.
Hyeontaek Lim, David G Andersen, and Michael Kaminsky. 3LC: Lightweight and effective traffic compression for distributed machine learning. arXiv preprint arXiv:1802.07389, 2018.
Tao Lin, Sebastian U Stich, Kumar Kshitij Patel, and Martin Jaggi. Don't use large mini-batches, use local SGD. arXiv preprint arXiv:1808.07217, 2018a.
Yujun Lin, Song Han, Huizi Mao, Yu Wang, and William J Dally. Deep gradient compression: Reducing the communication bandwidth for distributed training. ICLR 2018 - International Conference on Learning Representations, 2018b.
Nicolas Loizou and Peter Richtárik. Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods. arXiv preprint arXiv:1712.09677, 2017.
Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč, and Peter Richtárik. Distributed learning with compressed gradient differences. arXiv preprint arXiv:1901.09269, 2019a.
Konstantin Mishchenko, Filip Hanzely, and Peter Richtárik. 99% of parallel optimization is inevitably a waste of time. arXiv preprint arXiv:1901.09437, 2019b.
Ion Necoara, Yu Nesterov, and Francois Glineur. Linear convergence of first order methods for non-strongly convex optimization. Mathematical Programming, 175(1-2):69-107, 2019.
Zheng Qu, Peter Richtárik, and Tong Zhang. Quartz: Randomized dual coordinate ascent with arbitrary sampling. In Advances in Neural Information Processing Systems, pages 865-873, 2015.
Ali Ramezani-Kebrya, Fartash Faghri, and Daniel M Roy. NUQSGD: Improved communication efficiency for data-parallel SGD via nonuniform quantization. arXiv preprint arXiv:1908.06077, 2019.
Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. ICLR 2018 - International Conference on Learning Representations, 2018.
Peter Richtárik and Martin Takáč. Parallel coordinate descent methods for big data optimization. Mathematical Programming, 156(1-2):433-484, 2016.
Lawrence Roberts. Picture coding using pseudo-random noise. IRE Transactions on Information Theory, 8(2):145-154, 1962.
Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Fifteenth Annual Conference of the International Speech Communication Association, 2014.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR 2015 - International Conference on Learning Representations, 2015.
Sebastian U Stich. Local SGD converges fast and communicates little. ICLR 2019 - International Conference on Learning Representations, 2019a.
Sebastian U Stich. Unified optimal analysis of the (stochastic) gradient method. arXiv preprint arXiv:1907.04232, 2019b.
Sebastian U Stich and Sai Praneeth Karimireddy. The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication. ICLR 2020 - International Conference on Learning Representations, 2020.
Sebastian U Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. Sparsified SGD with memory. In Advances in Neural Information Processing Systems, pages 4447-4458, 2018.
Sharan Vaswani, Francis Bach, and Mark Schmidt. Fast and faster convergence of SGD for over-parameterized models and an accelerated perceptron. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) 2019, Naha, Okinawa, Japan, 2019.
Thijs Vogels, Sai Praneeth Karimireddy, and Martin Jaggi. PowerSGD: Practical low-rank gradient compression for distributed optimization. In Advances in Neural Information Processing Systems, pages 14236-14245, 2019.
Jianqiao Wangni, Jialei Wang, Ji Liu, and Tong Zhang. Gradient sparsification for communication-efficient distributed optimization. In Advances in Neural Information Processing Systems, pages 1299-1309, 2018.
Jinliang Wei, Wei Dai, Aurick Qiao, Qirong Ho, Henggang Cui, Gregory R Ganger, Phillip B Gibbons, Garth A Gibson, and Eric P Xing. Managed communication and consistency for fast data-parallel iterative analytics. In Proceedings of the Sixth ACM Symposium on Cloud Computing, pages 381-394, 2015.
Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. TernGrad: Ternary gradients to reduce communication in distributed deep learning. In Advances in Neural Information Processing Systems, pages 1509-1519, 2017.
Blake Woodworth, Kumar Kshitij Patel, Sebastian U Stich, Zhen Dai, Brian Bullins, H Brendan McMahan, Ohad Shamir, and Nathan Srebro. Is local SGD better than minibatch SGD? arXiv preprint arXiv:2002.07839, 2020.
Tianbao Yang, Qihang Lin, and Zhe Li. Unified convergence analysis of stochastic momentum methods for convex and non-convex optimization. arXiv preprint arXiv:1604.03257, 2016.
Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, and Ce Zhang. ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 4035-4043. JMLR.org, 2017. |
3,720,457 | UNDERSTANDING SHORT-HORIZON BIAS IN STOCHASTIC META-OPTIMIZATION | Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes. * Equal contribution. | [
6610705,
6628106
] | UNDERSTANDING SHORT-HORIZON BIAS IN STOCHASTIC META-OPTIMIZATION
Yuhuai Wu
University of Toronto and Vector Institute
Mengye Ren [email protected]
University of Toronto and Vector Institute
Renjie Liao [email protected]
University of Toronto and Vector Institute
Roger B Grosse [email protected]
University of Toronto and Vector Institute
UNDERSTANDING SHORT-HORIZON BIAS IN STOCHASTIC META-OPTIMIZATION
Published as a conference paper at ICLR 2018
Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes. * Equal contribution.
INTRODUCTION
The learning rate is one of the most important and frustrating hyperparameters to tune in deep learning. Too small a value causes slow progress, while too large a value causes fluctuations or even divergence. While a fixed learning rate often works well for simpler problems, good performance on the ImageNet (Russakovsky et al., 2015) benchmark requires a carefully tuned schedule. A variety of decay schedules have been proposed for different architectures, including polynomial, exponential, staircase, etc. Learning rate decay is also required to achieve convergence guarantees for stochastic gradient methods under certain conditions (Bottou, 1998). Clever learning rate heuristics have resulted in large improvements in training efficiency (Goyal et al., 2017; Smith, 2017). A related hyperparameter is momentum; while it is typically fixed to a reasonable value such as 0.9, careful tuning can also give significant performance gains (Sutskever et al., 2013). While optimizers such as Adam (Kingma & Ba, 2015) are often described as adapting coordinate-specific learning rates, in fact they also have global learning rate and momentum hyperparameters analogous to those of SGD, and tuning at least the learning rate can be important for good performance.
In light of this, it is not surprising that there have been many attempts to adapt learning rates, either online during optimization (Schraudolph, 1999; Schaul et al., 2013), or offline by fitting a learning rate schedule (Maclaurin et al., 2015). More ambitiously, others have attempted to learn an optimizer (Andrychowicz et al., 2016; Li & Malik, 2017; Finn et al., 2017; Lv et al., 2017; Wichrowska et al., 2017; Metz et al., 2017). All of these approaches are forms of meta-optimization, where one defines a meta-objective (typically the expected loss after some number of optimization steps) and tunes the hyperparameters to minimize this meta-objective. But because gradient-based meta-optimization can require thousands of updates, each of which unrolls the entire base-level optimization procedure, the meta-optimization is thousands of times more expensive than the base-level optimization. Therefore, the meta-objective must be defined with a much smaller time horizon (e.g. hundreds of updates) than we are ordinarily interested in for large-scale optimization. The hope is that the learned hyperparameters or optimizer will generalize well to much longer time horizons. Unfortunately, as we show in this paper, this is not achieved. This is because of a strong tradeoff between short-term and long-term performance, which we refer to as short-horizon bias.

Figure 1: Aggressive learning rate (red) followed by a decay schedule (yellow) wins over a conservative learning rate (blue) by making more progress along the low-curvature direction (the x direction). The loss surface is L(x, y) = \frac{1}{2}(x^2 + 100y^2) + \sigma^2.
In this work, we investigate short-horizon bias both mathematically and empirically. First, we analyze a quadratic cost function with noisy gradients based on Schaul et al. (2013). We consider this a good proxy for neural net training because second-order optimization algorithms have been shown to train neural networks in orders-of-magnitude fewer iterations (Martens, 2010), suggesting that much of the difficulty of SGD training can be explained by quadratic approximations to the cost. In our noisy quadratic problem, the dynamics of SGD with momentum can be analyzed exactly, allowing us to derive the greedy-optimal (i.e. 1-step horizon) learning rate and momentum in closed form, as well as to (locally) minimize the long-horizon loss using gradient descent. We analyze the differences between the short-horizon and long-horizon schedules.
Interestingly, when the noisy quadratic problem is either deterministic or spherical, greedy schedules are optimal. However, when the problem is both stochastic and badly conditioned (as is most neural net training), the greedy schedules decay the learning rate far too quickly, leading to slow convergence towards the optimum. This is because reducing the learning rate dampens the fluctuations along high curvature directions, yielding a large immediate reduction in loss. But this comes at the expense of long-run performance, because the optimizer fails to make progress along low curvature directions. This phenomenon is illustrated in Figure 1, a noisy quadratic problem in 2 dimensions, in which two learning rate schedules are compared: a small fixed learning rate (blue), versus a larger fixed learning rate (red) followed by exponential decay (yellow). The latter schedule initially has higher loss, but it makes more progress towards the optimum, such that it achieves an even smaller loss once the learning rate is decayed. Figure 2 shows this effect quantitatively for a noisy quadratic problem in 1000 dimensions (defined in Section 2.3). The solid lines show the loss after various numbers of steps of lookahead with a fixed learning rate; if this is used as the meta-objective, it favors small learning rates. The dashed curves show the loss if the same trajectories are followed by 50 steps with an exponentially decayed learning rate; these curves favor higher learning rates, and bear little obvious relationship to the solid ones. This illustrates the difficulty of selecting learning rates based on short-horizon information.
Figure 2: Short-horizon meta-objectives for the noisy quadratic problem (meta-training loss vs. learning rate α, for horizons of 10, 30, and 100 steps). Solid: loss after k updates with a fixed learning rate. Dashed: loss after k updates with a fixed learning rate, followed by exponential decay.
The second part of our paper empirically investigates gradient-based meta-optimization for neural net training. We consider two idealized meta-optimization algorithms: an offline algorithm which fits a learning rate decay schedule by running optimization many times from scratch, and an online algorithm which adapts the learning rate during training. Since our interest is in studying the effect of the meta-objective itself rather than failures of meta-optimization, we give the meta-optimizers sufficient time to optimize their meta-objectives well. We show that short-horizon meta-optimizers, both online and offline, dramatically underperform a hand-tuned fixed learning rate, and sometimes cause the base-level optimization progress to slow to a crawl, even with moderately long time horizons (e.g. 100 or 1000 steps) similar to those used in prior work on gradient-based meta-optimization.
In short, we expect that any meta-objective which does not correct for short-horizon bias will probably fail when run for a much longer time horizon than it was trained on. There are applications where short-horizon meta-optimization is directly useful, such as few-shot learning (Santoro et al., 2016;Ravi & Larochelle, 2017). In those settings, short-horizon bias is by definition not an issue. But much of the appeal of meta-optimization comes from the possibility of using it to speed up or simplify the training of large neural networks. In such settings, short-horizon bias is a fundamental obstacle that must be addressed for meta-optimization to be practically useful.
NOISY QUADRATIC PROBLEM
In this section, we consider a toy problem which demonstrates the short-horizon bias and can be analyzed analytically. In particular, we borrow the noisy quadratic model of Schaul et al. (2013); the true function being optimized is a quadratic, but in each iteration we observe a noisy version with the correct curvature but a perturbed minimum. This can be equivalently viewed as noisy observations of the gradient, which are intended to capture the stochasticity of a mini-batch-based optimizer. We analyze the dynamics of SGD with momentum on this example, and compare the long-horizon-optimized and greedy-optimal learning rate schedules.
BACKGROUND
Approximating the cost surface of a neural network with a quadratic function has led to powerful insights and algorithms. Second-order optimization methods such as Newton-Raphson and natural gradient (Amari, 1998) iteratively minimize a quadratic approximation to the cost function. Hessian-free (H-F) optimization (Martens, 2010) is an approximate natural gradient method which tries to minimize a quadratic approximation using conjugate gradient. It can often fit deep neural networks in orders-of-magnitude fewer updates than SGD, suggesting that much of the difficulty of neural net optimization is captured by quadratic models. In the setting of Bayesian neural networks, quadratic approximations to the log-likelihood motivated the Laplace approximation (MacKay, 1992) and variational inference (Graves, 2011; Zhang et al., 2017). Koh & Liang (2017) used quadratic approximations to analyze the sensitivity of a neural network's predictions to particular training labels, thereby yielding insight into adversarial examples.
Such quadratic approximations to the cost function have also provided insights into learning rate and momentum adaptation. In a deterministic setting, under certain conditions, second-order optimization algorithms can be run with a learning rate of 1; for this reason, H-F was able to eliminate the need to tune learning rate or momentum hyperparameters. Martens & Grosse (2015) observed that for a deterministic quadratic cost function, greedily choosing the learning rate and momentum to minimize the error on the next step is equivalent to conjugate gradient (CG). Since CG achieves the minimum possible loss of any gradient-based optimizer on each iteration, the greedily chosen learning rates and momenta are optimal, in the sense that the greedy sequence achieves the minimum possible loss value of any sequence of learning rates and momenta. This property fails to hold in the stochastic setting, however, and as we show in this section, the greedy choice of learning rate and momentum can do considerably worse than optimal.
Our primary interest in this work is to adapt scalar learning rate and momentum hyperparameters shared across all dimensions. Some optimizers based on diagonal curvature approximations (Kingma & Ba, 2015) have been motivated in terms of adapting dimension-specific learning rates, but in practice, one still needs to tune scalar learning rate and momentum hyperparameters. Even K-FAC (Martens & Grosse, 2015), which is based on more powerful curvature approximations, has scalar learning rate and momentum hyperparameters. Our analysis applies to all of these methods since they can be viewed as performing SGD in a preconditioned space.
ANALYSIS
NOTATIONS
We will primarily focus on the SGD with momentum algorithm in this paper. The update is written as follows:
v^{(t+1)} = \mu^{(t)} v^{(t)} - \alpha^{(t)} \nabla_{\theta^{(t)}} L, \tag{1}
\theta^{(t+1)} = \theta^{(t)} + v^{(t+1)}, \tag{2}
where L is the loss function, t is the training step, and \alpha^{(t)} is the learning rate. We call the gradient trace v^{(t)} the "velocity", and its decay constant \mu^{(t)} the "momentum". We denote the i-th coordinate of a vector v as v_i. When we focus on a single dimension, we sometimes drop the dimension subscripts. We also denote \mathcal{A}(\cdot) = \mathbb{E}[\cdot]^2 + \mathbb{V}[\cdot], where \mathbb{E} and \mathbb{V} denote expectation and variance, respectively.
PROBLEM FORMULATION
We now define the noisy quadratic model, where in each iteration, the optimizer is given the gradient for a noisy version of a quadratic cost function, where the curvature is correct but the minimum is sampled stochastically from a Gaussian distribution. We assume WLOG that the Hessian is diagonal, because SGD is a rotation-invariant algorithm, and therefore the dynamics can be analyzed in a coordinate system corresponding to the eigenvectors of the Hessian. We make the further (nontrivial) assumption that the noise covariance is also diagonal.¹ Mathematically, the stochastic cost function is written as:

\hat{L}(\theta) = \frac{1}{2} \sum_i h_i (\theta_i - c_i)^2, \tag{3}
where c is the stochastic minimum, and each c_i follows a Gaussian distribution with mean \theta_i^* and variance \sigma_i^2. The expected loss is given by:

L(\theta) = \mathbb{E}[\hat{L}(\theta)] = \frac{1}{2} \sum_i h_i \left[ (\theta_i - \theta_i^*)^2 + \sigma_i^2 \right]. \tag{4}
The optimum of L is given by \theta^* = \mathbb{E}[c]; we assume WLOG that \theta^* = 0. The stochastic gradient is given by \frac{\partial \hat{L}}{\partial \theta_i} = h_i (\theta_i - c_i). Since the deterministic gradient is given by \frac{\partial L}{\partial \theta_i} = h_i \theta_i, the stochastic gradient can be viewed as a noisy Gaussian observation of the deterministic gradient with variance h_i^2 \sigma_i^2. This interpretation motivates the use of this noisy quadratic problem as a model of SGD dynamics.
We treat the iterate \theta^{(t)} as a random variable (where the randomness comes from the sampled c's); the expected loss in each iteration is given by

\mathbb{E}[L(\theta^{(t)})] = \mathbb{E}\left[ \frac{1}{2} \sum_i h_i \left( (\theta_i^{(t)})^2 + \sigma_i^2 \right) \right] \tag{5}
= \frac{1}{2} \sum_i h_i \left( \mathbb{E}[\theta_i^{(t)}]^2 + \mathbb{V}[\theta_i^{(t)}] + \sigma_i^2 \right). \tag{6}
OPTIMIZED AND GREEDY-OPTIMAL SCHEDULES
We are interested in adapting a global learning rate \alpha^{(t)} and a global momentum decay parameter \mu^{(t)} for each time step t. We first derive a recursive formula for the mean and variance of the iterates at each step, and then analyze the greedy-optimal schedule for \alpha^{(t)} and \mu^{(t)}.
Several observations allow us to compactly model the dynamics of SGD with momentum on the noisy quadratic model. First, \mathbb{E}[L(\theta^{(t)})] can be expressed in terms of \mathbb{E}[\theta_i] and \mathbb{V}[\theta_i] using Eqn. 5. Second, due to the diagonality of the Hessian and the noise covariance matrix, each coordinate evolves independently of the others. Third, the means and variances of the parameters \theta_i are functions of those statistics at the previous step. Because each dimension evolves independently, we now drop the dimension subscripts. Combining these observations, we model the dynamics of SGD with momentum as a deterministic recurrence relation with sufficient statistics \mathbb{E}[\theta^{(t)}], \mathbb{E}[v^{(t)}], \mathbb{V}[\theta^{(t)}], \mathbb{V}[v^{(t)}], and \Sigma_{\theta,v}^{(t)} = \mathrm{Cov}(\theta^{(t)}, v^{(t)}). The dynamics are as follows:
Theorem 1 (Mean and variance dynamics). The expectations of the parameter θ and the velocity v are updated as,
\mathbb{E}[v^{(t+1)}] = \mu^{(t)}\, \mathbb{E}[v^{(t)}] - \alpha^{(t)} h\, \mathbb{E}[\theta^{(t)}], \qquad \mathbb{E}[\theta^{(t+1)}] = \mathbb{E}[\theta^{(t)}] + \mathbb{E}[v^{(t+1)}].
The variances of the parameter θ and the velocity v are updated as
\mathbb{V}[v^{(t+1)}] = (\mu^{(t)})^2\, \mathbb{V}[v^{(t)}] + (\alpha^{(t)} h)^2\, \mathbb{V}[\theta^{(t)}] - 2\mu^{(t)} \alpha^{(t)} h\, \Sigma_{\theta,v}^{(t)} + (\alpha^{(t)} h \sigma)^2,
\mathbb{V}[\theta^{(t+1)}] = (1 - 2\alpha^{(t)} h)\, \mathbb{V}[\theta^{(t)}] + \mathbb{V}[v^{(t+1)}] + 2\mu^{(t)} \Sigma_{\theta,v}^{(t)},
\Sigma_{\theta,v}^{(t+1)} = \mu^{(t)} \Sigma_{\theta,v}^{(t)} - \alpha^{(t)} h\, \mathbb{V}[\theta^{(t)}] + \mathbb{V}[v^{(t+1)}].
By applying Theorem 1 recursively, we can obtain \mathbb{E}[\theta^{(t)}] and \mathbb{V}[\theta^{(t)}], and hence \mathbb{E}[L(\theta^{(t)})], for every t. Therefore, using gradient-based optimization, we can fit a locally optimal learning rate and momentum schedule, i.e. a sequence of values \{(\alpha^{(t)}, \mu^{(t)})\}_{t=1}^T which locally minimizes \mathbb{E}[L(\theta^{(T)})] at some particular time T. We refer to this as the optimized schedule.
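Because Theorem 1 is a deterministic recurrence, it is straightforward to turn into code. The sketch below rolls out the per-dimension moments and returns that dimension's contribution to \mathbb{E}[L(\theta^{(T)})]; the function and argument names are ours, the initial velocity moments are assumed to be zero, and the full loss is obtained by summing over dimensions.

```python
def expected_loss_after(alphas, mus, h, sigma2, E_theta0, V_theta0):
    """Roll out the moment recursion of Theorem 1 for one dimension and
    return its contribution (h/2) * (E[theta_T]^2 + V[theta_T] + sigma^2)
    to the expected loss. alphas and mus are length-T schedules."""
    E_th, E_v = E_theta0, 0.0
    V_th, V_v, Cov = V_theta0, 0.0, 0.0
    for a, m in zip(alphas, mus):
        E_v = m * E_v - a * h * E_th
        E_th = E_th + E_v
        V_v_new = m**2 * V_v + (a * h)**2 * V_th - 2 * m * a * h * Cov \
                  + (a * h)**2 * sigma2
        V_th_new = (1 - 2 * a * h) * V_th + V_v_new + 2 * m * Cov
        Cov = m * Cov - a * h * V_th + V_v_new   # uses the old V[theta]
        V_th, V_v = V_th_new, V_v_new
    return 0.5 * h * (E_th**2 + V_th + sigma2)
```

Since every operation above is differentiable, wrapping this roll-out in an automatic-differentiation framework and running gradient descent on the schedules \{(\alpha^{(t)}, \mu^{(t)})\} is one way to obtain the optimized schedule described above.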
Furthermore, there is a closed-form solution for one-step lookahead, i.e., we can solve for the optimal learning rate \alpha^{(t)*} and momentum \mu^{(t)*} that minimize \mathbb{E}[L(\theta^{(t+1)})] given the statistics at time t. We call this the greedy-optimal schedule.

Theorem 2 (Greedy-optimal learning rate and momentum). The greedy-optimal learning rate and momentum schedule is given by
\alpha^{(t)*} = \frac{ \sum_i h_i^2 \mathcal{A}(\theta_i^{(t)}) \sum_j h_j \mathcal{A}(v_j^{(t)}) - \sum_j h_j \mathbb{E}[\theta_j^{(t)} v_j^{(t)}] \sum_i h_i^2 \mathbb{E}[\theta_i^{(t)} v_i^{(t)}] }{ \sum_i h_i^3 \left( \mathcal{A}(\theta_i^{(t)}) + \sigma_i^2 \right) \sum_j h_j \mathcal{A}(v_j^{(t)}) - \sum_j h_j^2 \mathbb{E}[\theta_j^{(t)} v_j^{(t)}] \sum_i h_i^2 \mathbb{E}[\theta_i^{(t)} v_i^{(t)}] },

\mu^{(t)*} = -\frac{ \sum_i h_i \left( 1 - \alpha^{(t)*} h_i \right) \mathbb{E}[\theta_i^{(t)} v_i^{(t)}] }{ \sum_i h_i \mathcal{A}(v_i^{(t)}) }.
Note that Schaul et al. (2013) derived the greedy-optimal learning rate for SGD, and Theorem 2 extends it to the greedy-optimal learning rate and momentum for SGD with momentum.
UNIVARIATE AND SPHERICAL CASES
As noted in Section 2.1, Martens & Grosse (2015) found the greedy choice of \alpha and \mu to be optimal for gradient descent on deterministic quadratic objectives. We now show that the greedy schedule is also optimal for SGD without momentum in the case of univariate noisy quadratics, and hence also for multivariate ones with spherical Hessians and gradient covariances. In particular, the following holds for SGD without momentum on a univariate noisy quadratic:

Theorem 3 (Optimal learning rate, univariate). For all T \in \mathbb{N}, the sequence of learning rates
\{\alpha^{(t)*}\}_{t=1}^{T-1} that minimizes \mathbb{E}[L(\theta^{(T)})] is given by

\alpha^{(t)*} = \frac{ \mathcal{A}(\theta^{(t)}) }{ h \left( \mathcal{A}(\theta^{(t)}) + \sigma^2 \right) }. \tag{7}
Moreover, this agrees with the greedy-optimal learning rate schedule as derived by Schaul et al. (2013).
If the Hessian and the gradient covariance are both spherical, then each dimension evolves identically and independently according to the univariate dynamics. Of course, one is unlikely to encounter an optimization problem where both are exactly spherical. But some approximate second-order optimizers, such as K-FAC, can be viewed as preconditioned SGD, i.e. SGD in a transformed space where the Hessian and the gradient covariance are better conditioned (Martens & Grosse, 2015). In principle, with a good enough preconditioner, the Hessian and the gradient covariance would be close enough to spherical that a greedy choice of \alpha and \mu would perform well. It will be interesting to investigate whether any practical optimization algorithms demonstrate this behavior.

Figure 3: Comparisons of the optimized learning rates and momenta trained by gradient descent (red), greedy learning rates and momenta (blue), and the optimized fixed learning rate and momentum (green) in both noisy (a) and deterministic (b) quadratic settings. In the deterministic case, our optimized schedule matched the greedy one, just as the theory predicts.
EXPERIMENTS
In this section, we compare the optimized and greedy-optimal schedules on a noisy quadratic problem. We chose a 1000-dimensional quadratic cost function with the curvature distribution from Li (2005), on which CG achieves its worst-case convergence rate. We assume that h_i = \mathbb{V}[\frac{\partial \hat{L}}{\partial \theta_i}], and hence \sigma_i^2 = \frac{1}{h_i}; this choice is motivated by the observation that, under certain assumptions, the Fisher information matrix is a good approximation to the Hessian matrix while also reflecting the covariance structure of the gradient noise (Martens, 2014). We computed the greedy-optimal schedules using Theorem 3. For the optimized schedules, we minimized the expected loss at time T = 250 using Adam (Kingma & Ba, 2015), with a learning rate of 0.003 and 500 steps. We set an upper bound for the learning rate which prevented the loss component for any dimension from becoming larger than its initial value; this was needed because otherwise the optimized schedule allowed the loss to temporarily grow very large, a pathological solution which would be unstable on realistic problems. We also considered a fixed learning rate and momentum, with the two hyperparameters fit using Adam. The training curves and the corresponding learning rates and momenta are shown in Figure 3(a). The optimized schedule achieved a much lower final expected loss value (4.25) than was obtained by the greedy-optimal schedule (63.86) or the fixed schedule (42.19).
We also show the sums of the losses along the 50 highest curvature directions and the 50 lowest curvature directions. We find that under the optimized schedule, the losses along the high curvature directions hardly decrease initially. However, because it maintains a high learning rate, the losses along the low curvature directions decrease significantly. After 50 iterations, it begins decaying the learning rate, at which point it achieves a large drop in both the high-curvature and total losses. On the other hand, under the greedy-optimal schedule, the learning rates and momenta become small very early on, which immediately reduces the losses on the high curvature directions, and hence also the total loss. However, in the long term, since the learning rates are too small to make substantial progress along the low curvature directions, the total loss converged to a much higher value in the end. This gives valuable insight into the nature of the short-horizon bias in meta-optimization: short-horizon objectives will often encourage the learning rate and momentum to decay quickly, so as to achieve the largest gain in the short term, but at the expense of long-run performance.
It is interesting to compare this behavior with the deterministic case. We repeated the above experiment for a deterministic quadratic cost function (i.e. \sigma_i^2 = 0) with the same Hessian; results are shown in Figure 3(b). The greedy schedule matches the optimized one, as predicted by the analysis of Martens & Grosse (2015). This result illustrates that stochasticity is necessary for short-horizon bias to manifest. Interestingly, the learning rate and momentum schedules in the deterministic case are nearly flat, while the optimized schedules for the stochastic case are much more complex, suggesting that stochastic optimization raises a different set of issues for hyperparameter adaptation.
GRADIENT-BASED META-OPTIMIZATION
We now turn our attention to gradient-based hyperparameter optimization. A variety of approaches have been proposed which tune hyperparameters by doing gradient descent on a meta-objective (Schraudolph, 1999; Maclaurin et al., 2015; Andrychowicz et al., 2016). We empirically analyze an idealized version of a gradient-based meta-optimization algorithm called stochastic meta-descent (SMD) (Schraudolph, 1999). Our version of SMD is idealized in two ways: first, we drop the algorithmic tricks used in prior work, and instead allow the meta-optimizer more memory and computation than would be economical in practice. Second, we limit the representational power of our meta-model: whereas Andrychowicz et al. (2016) aimed to learn a full optimization algorithm, we focus on the much simpler problem of adapting learning rate and momentum hyperparameters, or schedules thereof. The aim of these two simplifications is that we would like to do a good enough job of optimizing the meta-objective that any base-level optimization failures can be attributed to deficiencies in the meta-objective itself (such as short-horizon bias) rather than incomplete meta-optimization.
Despite these simplifications, we believe our experiments are relevant to practical meta-optimization algorithms which optimize the meta-objective less thoroughly. Since the goal of the metaoptimizer is to adapt two hyperparameters, it's possible that poor meta-optimization could cause the hyperparameters to get stuck in regions that happen to perform well; indeed, we observed this phenomenon in some of our early explorations. But it would be dangerous to rely on poor meta-optimization, since improved meta-optimization methods would then lead to worse base-level performance, and tuning the meta-optimizer could become a roundabout way of tuning learning rates and momenta.
We also believe our experiments are relevant to meta-optimization methods which aim to learn entire algorithms. Even if the learned algorithms don't have explicit learning rate parameters, it's possible for a learning rate schedule to be encoded into an algorithm itself; for instance, Adagrad (Duchi et al., 2011) implicitly uses a polynomial decay schedule because it sums rather than averages the squared derivatives in the denominator. Hence, one would need to worry about whether the meta-optimizer is implicitly fitting a learning rate schedule that's optimized for short-term performance.
BACKGROUND: STOCHASTIC META-DESCENT
The high-level idea of stochastic meta-descent (SMD) (Schraudolph, 1999) is to perform gradient descent on the learning rate, or any other differentiable hyperparameters. This is feasible since any gradient-based optimization algorithm can be unrolled as a computation graph (see Figure 4), and automatic differentiation is readily available in most deep learning libraries.
There are two basic types of automatic differentiation (autodiff) methods: forward mode and reverse mode. In forward mode autodiff, directional derivatives are computed alongside the forward computation. In contrast, reverse mode autodiff (a.k.a. backpropagation) computes the gradients moving backwards through the computation graph. Meta-optimization using reverse mode can be computationally demanding due to memory constraints, since the parameters need to be stored at every step. Maclaurin et al. (2015) got around this by cleverly exploiting approximate reversibility to minimize the memory cost of activations. Since we are optimizing only two hyperparameters, however, forward mode autodiff can be done cheaply. Here, we provide the forward differentiation equations for obtaining the gradient of the vanilla SGD learning rate. Let \frac{d\theta_t}{d\alpha} be u_t, let \frac{dL_t}{d\alpha} be \nabla_\alpha, and let the Hessian at step t be H_t. By the chain rule, we get

\nabla_\alpha = g_t \cdot u_{t-1}, \tag{8}
u_t = u_{t-1} - g_t - \alpha H_t u_{t-1}. \tag{9}
While the Hessian is infeasible to construct explicitly, the Hessian-vector product in Equation 9 can be computed efficiently using reverse-on-reverse (Werbos, 1988) or forward-on-reverse automatic differentiation (Pearlmutter, 1994), in time linear in the cost of the forward pass. See Schraudolph (2002) for more details.
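For concreteness, here is a minimal PyTorch sketch of Eqs. 8-9 for a single SGD trajectory, using double backward (reverse-on-reverse) for the Hessian-vector product. The function name and interface are ours; theta must be a leaf tensor with requires_grad=True, and loss_fn is assumed to resample a mini-batch on each call.

```python
import torch

def smd_lr_gradient(loss_fn, theta, alpha, T):
    """Run T vanilla SGD steps while carrying u_t = d(theta_t)/d(alpha)
    forward via Eq. 9, then return dL(theta_T)/d(alpha) via Eq. 8."""
    u = torch.zeros_like(theta)
    for _ in range(T):
        g = torch.autograd.grad(loss_fn(theta), theta, create_graph=True)[0]
        hvp = torch.autograd.grad(g, theta, grad_outputs=u)[0]  # H_t u_{t-1}
        u = u - g.detach() - alpha * hvp.detach()               # Eq. 9
        theta = (theta - alpha * g).detach().requires_grad_(True)
    g_T = torch.autograd.grad(loss_fn(theta), theta)[0]
    return torch.dot(g_T.flatten(), u.flatten())  # Eq. 8 at the final iterate
```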
Algorithm 1: Stochastic Meta-Descent
Input: \alpha_0, \eta, \theta, T, M
Output: \alpha
\theta_0 \leftarrow \theta; \alpha \leftarrow \alpha_0;
for m \leftarrow 1 \ldots M do
    u \leftarrow 0;
    for t \leftarrow 1 \ldots T do
        X, y \leftarrow GetMiniBatch();
        g \leftarrow BGrad(L(X, y, \theta), \theta, 1);
        \theta_{new} \leftarrow Step(\theta, g, \alpha);
        \nabla_\alpha \leftarrow g \cdot u;
        u \leftarrow FGrad(\theta_{new}, [\alpha, \theta], [1, u]);
        \theta \leftarrow \theta_{new};
    \alpha \leftarrow MetaStep(\alpha, \nabla_\alpha, \eta);
    \theta \leftarrow \theta_0;
return \alpha
Using the gradients with respect to hyperparameters, as given in Eq. 8, we can apply gradient-based meta-optimization, just like optimizing regular parameters. It is worth noting that, although SMD was originally proposed for optimizing vanilla SGD, in practice it can be applied to other optimization algorithms such as SGD with momentum or Adam (Kingma & Ba, 2015). Moreover, gradient-based optimizers other than SGD can be used for the meta-optimization as well.
The basic SMD algorithm is given as Algorithm 1. Here, α is a set of hyperparameters (e.g., learning rate), and α_0 are the initial hyperparameter values; θ is a set of optimization intermediate variables, such as weights and velocities; η is a set of meta-optimizer hyperparameters (e.g., the meta learning rate). BGrad(y, x, dy) is the backward gradient function that computes the gradients of the loss function w.r.t. θ, and FGrad(y, x, dx) is the forward gradient function that accumulates the gradients of θ with respect to α.
Step and MetaStep optimize regular parameters and hyperparameters, respectively, for one step using gradient-based methods. Additionally, T is the lookahead window size, and M is the number of meta updates.
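For concreteness, here is a minimal runnable sketch of the idealized algorithm on the 1-D noisy quadratic, where the Hessian is the known constant h, so Eqs. (8)-(9) are exact. The variable names and the plain-SGD MetaStep are illustrative choices, not the paper's implementation.

```python
import random

# Minimal sketch of Algorithm 1 on L(theta) = (1/2) h (theta - c)^2, c ~ N(0, sigma^2).
def smd(alpha0, eta, theta0, T, M, h=1.0, sigma=1.0, seed=0):
    rng = random.Random(seed)
    alpha = alpha0
    for _ in range(M):                    # M meta-updates, each on a fresh trajectory
        theta, u, meta_grad = theta0, 0.0, 0.0
        for _ in range(T):                # T-step lookahead from the fixed theta0
            g = h * theta + h * sigma * rng.gauss(0.0, 1.0)  # stochastic gradient
            meta_grad = g * u             # Eq. (8), evaluated at the end of the horizon
            u = u - g - alpha * h * u     # Eq. (9), with H_t = h
            theta = theta - alpha * g     # Step: vanilla SGD
        alpha = max(alpha - eta * meta_grad, 1e-8)           # MetaStep
    return alpha
```

Note how, matching the idealized variant described below, θ and u are reset after each T-step lookahead rather than accumulated across training.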
Simplifications from the original SMD algorithm. The original SMD algorithm (Schraudolph, 1999) fit coordinate-wise adaptive learning rates with intermediate gradients (u_t) accumulated throughout the process of training. Since computing separate directional derivatives for each coordinate using forward mode autodiff is computationally prohibitive, the algorithm used approximate updates. Both features introduced bias into the meta-gradients. We make several changes to the original algorithm. First, we tune only a global learning rate parameter. Second, we use exact forward mode accumulation because this is feasible for a single learning rate. Third, rather than accumulate directional derivatives during training, we compute the meta-updates on separate SGD trajectories simulated using fixed network parameters. Finally, we compute multiple meta-updates in order to ensure that the meta-objective is optimized sufficiently well. Together, these changes ensure unbiased meta-gradients, as well as careful optimization of the meta-objective, at the cost of high computational overhead. We do not recommend this approach as a practical SMD implementation, but rather as a way of understanding the biases in the meta-objective itself.

Figure 5: Meta-objective surfaces and SMD trajectories (red) optimizing initial effective learning rate and decay exponent with horizons of {100, 1k, 5k, 20k} steps. 2.5k random samples with Gaussian interpolation are used to illustrate the meta-objective surface.
OFFLINE META-OPTIMIZATION
To understand the sensitivity of the optimized hyperparameters to the horizon, we first carried out an offline experiment on a multi-layered perceptron (MLP) on MNIST (LeCun et al., 1998). Specifically, we fit learning rate decay schedules offline by repeatedly training the network, and a single meta-gradient was obtained from each training run.

Learnable decay schedule. We used a parametric learning rate decay schedule known as inverse time decay (Welling & Teh, 2011):

α_t = α_0 / (1 + t/K)^β,
where α_0 is the initial learning rate, t is the number of training steps, β is the learning rate decay exponent, and K is the time constant. We jointly optimized α_0 and β, and fixed µ = 0.9 and K = 5000 for simplicity.

Experimental details. The network had two layers of 100 hidden units, with ReLU activations. Weights were initialized with a zero-mean Gaussian with standard deviation 0.1. We used a warm start from a network trained for 50 SGD-with-momentum steps, using α = 0.1, µ = 0.9. (We used a warm start because the dynamics are generally different at the very start of training.) For SMD optimization, we trained all hyperparameters in log space using the Adam optimizer, with 5k meta steps. Figure 5 shows SMD optimization trajectories on the meta-objective surfaces, initialized with multiple random hyperparameter settings. The SMD trajectories appear to have converged to the global optimum.
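For reference, the schedule above is the following one-liner (a minimal sketch; the function name is ours, and K = 5000 matches the fixed value used here):

```python
def inverse_time_decay(alpha0, beta, t, K=5000.0):
    # Learning rate after t steps: alpha_t = alpha0 / (1 + t/K)^beta,
    # where alpha0 and beta are the meta-learned quantities.
    return alpha0 / (1.0 + t / K) ** beta
```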
Importantly, the meta-objectives with longer horizons favored a much smaller learning rate decay exponent β, leading to a more gradual decay schedule. The meta-objective surfaces were very different depending on the time horizon, and the final β value differed by over two orders of magnitude between 100 and 20k step horizons.
We picked the best learning rate schedules from the meta-objective surfaces (in Figure 5), and obtained the training curves of a network shown in Figure 6. The resulting training loss at 20k steps with the 100 step horizon was over three orders of magnitude larger than with the 20k step horizon. In general, short horizons gave better performance initially, but were surpassed by longer horizons. The differences in error were less drastic, but we see that the 100 step network was severely undertrained, and the 1k step network achieved noticeably worse test error than the longer-horizon ones.

Figure 7: Training curves and learning rates from online SMD with lookahead of 5 steps (blue), and hand-tuned fixed learning rate (red). Each blue curve corresponds to a different initial learning rate.
ONLINE META-OPTIMIZATION
In this section, we study whether online adaptation also suffers from short-horizon bias. Specifically, we used Algorithm 1 to adapt the learning rate and momentum hyperparameters online while a network is trained. We experimented with an MLP on MNIST and a CNN on CIFAR-10 (Krizhevsky, 2009).
Experimental details. For the MNIST experiments, we used an MLP network with two hidden layers of 100 units, with ReLU activations. Weights were initialized with a zero-mean Gaussian with standard deviation 0.1. For the CIFAR-10 experiments, we used a CNN network adapted from Caffe (Jia et al., 2014), with 3 convolutional layers of filter size 3 × 3 and depth [32, 32, 64], 2 × 2 max pooling with stride 2 after every convolution layer, followed by a fully connected hidden layer of 100 units. Meta-optimization was done with 100 steps of Adam for every 10 steps of regular training. We adapted the learning rate α and momentum µ. After 25k steps, adaptation was stopped, and we trained for another 25k steps with an exponentially decaying learning rate such that it reached 1e-4 on the last time step. We re-parameterized the learning rate with the effective learning rate α_eff = α / (1 − µ), and the momentum with 1 − µ, so that they can be optimized more smoothly in log space. Figure 7 shows training curves both with online SMD and with hand-tuned fixed learning rate and momentum hyperparameters. We show several SMD runs initialized from widely varying hyperparameters; all the SMD runs behaved similarly, suggesting that SMD optimized the meta-objective efficiently enough. Under SMD, learning rates were quickly decreased to very small values, leading to slow progress in the long term, consistent with the noisy quadratic and offline adaptation experiments.
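The reparameterization above amounts to the following small conversion between the meta-optimizer's log-space variables and the raw hyperparameters (a sketch with illustrative names):

```python
import math

# Recover raw (alpha, mu) from the log-space variables that the meta-optimizer
# actually adapts: log(alpha_eff) and log(1 - mu), with alpha = alpha_eff * (1 - mu).
def to_raw(log_alpha_eff, log_one_minus_mu):
    mu = 1.0 - math.exp(log_one_minus_mu)
    alpha = math.exp(log_alpha_eff) * (1.0 - mu)
    return alpha, mu
```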
As online SMD can be too conservative in the choice of learning rate, it is natural to ask whether removing the stochasticity in the lookahead sequence can fix the problem. We therefore considered online SMD where the entire lookahead trajectory used a single mini-batch, hence removing the stochasticity. As shown in Figure 8, this deterministic lookahead scheme led to the opposite problem: the adapted learning rates were very large, leading to instability. We conclude that the stochasticity of mini-batch training cannot be simply ignored in meta-optimization.
CONCLUSION
In this paper, we analyzed the problem of short-horizon bias in meta-optimization. We presented a noisy quadratic toy problem which we analyzed mathematically, and observed that the optimal learning rate schedule differs greatly from a greedy schedule that minimizes training loss one step ahead. While the greedy schedule tends to decay the learning rate drastically to reduce the loss on high curvature directions, the optimal schedule keeps a high learning rate in order to make steady progress on low curvature directions, and eventually achieves far lower loss. We showed that this bias stems from the combination of stochasticity and ill-conditioning: when the problem is either deterministic or spherical, the greedy learning rate schedule is globally optimal; however, when the problem is both stochastic and ill-conditioned (as is most neural net training), the greedy schedule performs poorly. We empirically verified the short-horizon bias in the context of neural net training by applying gradient-based meta-optimization, both offline and online. We found the same pathological behaviors as in the noisy quadratic problem: a fast learning rate decay and poor long-run performance.
While our results suggest that meta-optimization should not be applied blindly, our noisy quadratic analysis also provides grounds for optimism: by removing ill-conditioning (by using a good preconditioner) and/or stochasticity (with large batch sizes or variance reduction techniques), it may be possible to enter the regime where short-horizon meta-optimization works well. It remains to be seen whether this is achievable with existing optimization algorithms.
A PROOFS OF THEOREMS
The proofs are organized as follows: we provide a proof of Theorem 1 in A.1, a proof of Theorem 2 in A.2, and a proof of Theorem 3 in A.3.
A.1 MODEL DYNAMICS
Recall that stochastic gradient descent with momentum is defined as follows:

v^{(t+1)} = µ^{(t)} v^{(t)} − α^{(t)} (hθ^{(t)} + hσξ),  ξ ∼ N(0, 1)
θ^{(t+1)} = θ^{(t)} + v^{(t+1)} = θ^{(t)} + µ^{(t)} v^{(t)} − α^{(t)} (hθ^{(t)} + hσξ) = (1 − α^{(t)} h) θ^{(t)} + µ^{(t)} v^{(t)} − α^{(t)} hσξ.
A.1.1 DYNAMICS OF THE EXPECTATION
We calculate the mean of the velocity v^{(t+1)}:

E[v^{(t+1)}] = E[µ^{(t)} v^{(t)} − α^{(t)} hθ^{(t)}] = µ^{(t)} E[v^{(t)}] − α^{(t)} h E[θ^{(t)}].   (10)
We calculate the mean of the parameter θ^{(t+1)}:

E[θ^{(t+1)}] = E[θ^{(t)}] + E[v^{(t+1)}].   (11)
Let us assume the following initial conditions:

E[v^{(0)}] = 0,  E[θ^{(0)}] = E_0.
Then Eqs. (10) and (11) describe how E[θ^{(t)}] and E[v^{(t)}] change over time t.
A.1.2 DYNAMICS OF THE VARIANCE
We calculate the variance of the velocity v^{(t+1)}:

V[v^{(t+1)}] = V[µ^{(t)} v^{(t)} − α^{(t)} hθ^{(t)}] + (α^{(t)} hσ)²
            = (µ^{(t)})² V[v^{(t)}] + (α^{(t)} h)² V[θ^{(t)}] − 2µ^{(t)} α^{(t)} h · Cov[θ^{(t)}, v^{(t)}] + (α^{(t)} hσ)².   (12)
The variance of the parameter θ^{(t+1)} is given by

V[θ^{(t+1)}] = V[θ^{(t)}] + V[v^{(t+1)}] + 2 (µ^{(t)} Cov[θ^{(t)}, v^{(t)}] − α^{(t)} h V[θ^{(t)}]).   (13)
We also need to derive how the covariance of θ and v changes over time:

Cov[θ^{(t+1)}, v^{(t+1)}] = Cov[θ^{(t)} + v^{(t+1)}, v^{(t+1)}]
                         = Cov[θ^{(t)}, v^{(t+1)}] + V[v^{(t+1)}]
                         = µ^{(t)} Cov[θ^{(t)}, v^{(t)}] − α^{(t)} h V[θ^{(t)}] + V[v^{(t+1)}].   (14)
Let us assume the following initial conditions:

V[v^{(0)}] = 0,  V[θ^{(0)}] = V_0,  Cov[θ^{(0)}, v^{(0)}] = 0.
Combining Eqs. (12)-(14), we obtain the following dynamics (for t = 0, . . . , T − 1):

V[v^{(t+1)}] = (µ^{(t)})² V[v^{(t)}] + (α^{(t)} h)² V[θ^{(t)}] − 2µ^{(t)} α^{(t)} h · Cov[θ^{(t)}, v^{(t)}] + (α^{(t)} hσ)²
V[θ^{(t+1)}] = V[θ^{(t)}] + V[v^{(t+1)}] + 2 (µ^{(t)} Cov[θ^{(t)}, v^{(t)}] − α^{(t)} h V[θ^{(t)}])
Cov[θ^{(t+1)}, v^{(t+1)}] = µ^{(t)} Cov[θ^{(t)}, v^{(t)}] − α^{(t)} h V[θ^{(t)}] + V[v^{(t+1)}].
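Because these recursions are closed-form in the first and second moments, the expected risk of any schedule can be evaluated exactly, without sampling trajectories. A minimal sketch (illustrative names) iterating Eqs. (10)-(14):

```python
# Iterate the moment dynamics of SGD with momentum on the 1-D noisy quadratic.
# Returns (1/2) h (E[theta]^2 + V[theta]); add (1/2) h sigma^2 for the full expected loss.
def expected_risk(alphas, mus, h, sigma, E0, V0):
    E_th, E_v = E0, 0.0            # E[theta], E[v]
    V_th, V_v, C = V0, 0.0, 0.0    # V[theta], V[v], Cov[theta, v]
    for a, m in zip(alphas, mus):
        E_v = m * E_v - a * h * E_th                                       # Eq. (10)
        V_v = m**2 * V_v + (a*h)**2 * V_th - 2*m*a*h*C + (a*h*sigma)**2    # Eq. (12)
        C_new = m * C - a * h * V_th + V_v                                 # Eq. (14)
        V_th = V_th + V_v + 2.0 * (m * C - a * h * V_th)                   # Eq. (13)
        E_th, C = E_th + E_v, C_new                                        # Eq. (11)
    return 0.5 * h * (E_th**2 + V_th)
```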
A.2 GREEDY OPTIMALITY
A.2.1 UNIVARIATE CASE

The loss at time step t + 1 is

L^{(t+1)} = (1/2) h ( E[θ^{(t+1)}]² + V[θ^{(t+1)}] )
= (1/2) h [ (E[θ^{(t)}] + µ^{(t)} E[v^{(t)}] − α^{(t)} h E[θ^{(t)}])² + V[θ^{(t)}] + (µ^{(t)})² V[v^{(t)}] + (α^{(t)} h)² V[θ^{(t)}] − 2µ^{(t)} α^{(t)} h · Cov[θ^{(t)}, v^{(t)}] + (α^{(t)} hσ)² + 2 (µ^{(t)} Cov[θ^{(t)}, v^{(t)}] − α^{(t)} h V[θ^{(t)}]) ]
= (1/2) h [ ((1 − α^{(t)} h) E[θ^{(t)}] + µ^{(t)} E[v^{(t)}])² + (1 − α^{(t)} h)² V[θ^{(t)}] + (µ^{(t)})² V[v^{(t)}] + 2µ^{(t)} (1 − α^{(t)} h) Cov[θ^{(t)}, v^{(t)}] + (α^{(t)} hσ)² ]
= (1/2) h [ (1 − α^{(t)} h)² ( E[θ^{(t)}]² + V[θ^{(t)}] ) + (µ^{(t)})² ( E[v^{(t)}]² + V[v^{(t)}] ) + 2µ^{(t)} (1 − α^{(t)} h) ( E[θ^{(t)}] E[v^{(t)}] + Cov[θ^{(t)}, v^{(t)}] ) + (α^{(t)} hσ)² ].
For simplicity, we denote A(·) = E[·]² + V[·], and notice that E[θ^{(t)} v^{(t)}] = E[θ^{(t)}] E[v^{(t)}] + Cov[θ^{(t)}, v^{(t)}]; hence,

L^{(t+1)} = (1/2) h [ (1 − α^{(t)} h)² A(θ^{(t)}) + (µ^{(t)})² A(v^{(t)}) + 2µ^{(t)} (1 − α^{(t)} h) E[θ^{(t)} v^{(t)}] + (α^{(t)} hσ)² ].   (15)

In order to find the optimal learning rate and momentum for minimizing L^{(t+1)}, we take the derivatives with respect to α^{(t)} and µ^{(t)}, and set them to 0:
∇_{α^{(t)}} L^{(t+1)} = (1 − α^{(t)} h) A(θ^{(t)}) · (−h) − µ^{(t)} h E[θ^{(t)} v^{(t)}] + α^{(t)} (hσ)² = 0
⇒ α^{(t)} h ( A(θ^{(t)}) + σ² ) = A(θ^{(t)}) + µ^{(t)} E[θ^{(t)} v^{(t)}]

∇_{µ^{(t)}} L^{(t+1)} = µ^{(t)} A(v^{(t)}) + (1 − α^{(t)} h) E[θ^{(t)} v^{(t)}] = 0
⇒ µ^{(t)}_* = − (1 − α^{(t)} h) E[θ^{(t)} v^{(t)}] / A(v^{(t)}).

Substituting µ^{(t)}_* into the first condition gives

α^{(t)} h ( A(θ^{(t)}) + σ² ) = A(θ^{(t)}) − (1 − α^{(t)} h) E[θ^{(t)} v^{(t)}]² / A(v^{(t)})

⇒ α^{(t)}_* = ( A(θ^{(t)}) A(v^{(t)}) − E[θ^{(t)} v^{(t)}]² ) / ( h ( A(v^{(t)}) ( A(θ^{(t)}) + σ² ) − E[θ^{(t)} v^{(t)}]² ) ).
A.2.2 HIGH DIMENSION CASE
The loss is the sum of losses along all directions:

L^{(t+1)} = Σ_i (1/2) h_i [ (1 − α^{(t)} h_i)² A(θ_i^{(t)}) + (µ^{(t)})² A(v_i^{(t)}) + 2µ^{(t)} (1 − α^{(t)} h_i) E[θ_i^{(t)} v_i^{(t)}] + (α^{(t)} h_i σ_i)² ].
Now we obtain the optimal learning rate and momentum by setting the derivatives to 0:

∇_{α^{(t)}} L^{(t+1)} = Σ_i h_i [ (1 − α^{(t)} h_i) A(θ_i^{(t)}) · (−h_i) − µ^{(t)} h_i E[θ_i^{(t)} v_i^{(t)}] + α^{(t)} (h_i σ_i)² ] = 0
⇒ α^{(t)} Σ_i h_i³ ( A(θ_i^{(t)}) + σ_i² ) = Σ_i ( h_i² A(θ_i^{(t)}) + µ^{(t)} h_i² E[θ_i^{(t)} v_i^{(t)}] )

∇_{µ^{(t)}} L^{(t+1)} = Σ_i h_i [ µ^{(t)} A(v_i^{(t)}) + (1 − α^{(t)} h_i) E[θ_i^{(t)} v_i^{(t)}] ] = 0
⇒ µ^{(t)}_* = − Σ_i h_i (1 − α^{(t)} h_i) E[θ_i^{(t)} v_i^{(t)}] / Σ_i h_i A(v_i^{(t)}).

Substituting µ^{(t)}_* into the first condition gives

α^{(t)} [ Σ_i h_i³ ( A(θ_i^{(t)}) + σ_i² ) Σ_j h_j A(v_j^{(t)}) − Σ_j h_j² E[θ_j^{(t)} v_j^{(t)}] Σ_i h_i² E[θ_i^{(t)} v_i^{(t)}] ]
  = Σ_i h_i² A(θ_i^{(t)}) Σ_j h_j A(v_j^{(t)}) − Σ_j h_j E[θ_j^{(t)} v_j^{(t)}] Σ_i h_i² E[θ_i^{(t)} v_i^{(t)}]

⇒ α^{(t)}_* = ( Σ_i h_i² A(θ_i^{(t)}) Σ_j h_j A(v_j^{(t)}) − Σ_j h_j E[θ_j^{(t)} v_j^{(t)}] Σ_i h_i² E[θ_i^{(t)} v_i^{(t)}] ) / ( Σ_i h_i³ ( A(θ_i^{(t)}) + σ_i² ) Σ_j h_j A(v_j^{(t)}) − Σ_j h_j² E[θ_j^{(t)} v_j^{(t)}] Σ_i h_i² E[θ_i^{(t)} v_i^{(t)}] ).
A.3 UNIVARIATE OPTIMALITY IN SGD
We now consider a dynamic programming approach to solve the problem. We formalize the optimization problem over the learning rates {α^{(t)}} as follows. We first denote L_min as the minimum expected loss at the last time step T (i.e., under the optimal learning rate schedule):
L_min = min_{α^{(t)}, α^{(t+1)}, . . . , α^{(T−1)}} E_{ξ^{(t)}, ξ^{(t+1)}, . . . , ξ^{(T−1)}} [ L(θ^{(T)}) ].
Recall that the loss can be expressed in terms of the expectation and variance of θ. Denote A^{(t)} = (E[θ^{(t)}])² + V[θ^{(t)}]. The final loss can be expressed in terms of A^{(T)}_min, obtained by using the optimal learning rate schedule:

L_min = (1/2) h ( A^{(T)}_min + σ² ).
As in Theorem 1, we derive the dynamics for SGD without momentum:
θ^{(t)} = (1 − α^{(t−1)} h) θ^{(t−1)} + α^{(t−1)} hσ ξ^{(t−1)}
⇒ E[θ^{(t)}] = (1 − α^{(t−1)} h) E[θ^{(t−1)}],  V[θ^{(t)}] = (1 − α^{(t−1)} h)² V[θ^{(t−1)}] + (α^{(t−1)} hσ)².
Thus, we can find a recurrence relation for the sequence A^{(t)}:

A^{(t)} = (1 − α^{(t−1)} h)² [ (E[θ^{(t−1)}])² + V[θ^{(t−1)}] ] + (α^{(t−1)} hσ)² = (1 − α^{(t−1)} h)² A^{(t−1)} + (α^{(t−1)} hσ)².

In particular, A^{(T)} = (1 − α^{(T−1)} h)² A^{(T−1)}_min + (α^{(T−1)} hσ)². Since A^{(T)}_min is a function of α^{(T−1)}, we can obtain the optimal learning rate α^{(T−1)}_* by taking the derivative of L_min w.r.t. α^{(T−1)} and setting it to zero:

dL_min / dα^{(T−1)} = 0  ⇒  α^{(T−1)}_* = A^{(T−1)}_min / ( h ( A^{(T−1)}_min + σ² ) ).

Figure 4: Regular SGD in the form of a computation graph. The learning rate parameter α is part of the differentiable computations.

Figure 6: Training curves with best learning rate schedules from meta-objective surfaces with {100, 1k, 5k, 20k} step horizons.

Figure 8: Online SMD with deterministic lookahead of 5 steps (blue), compared with a manually tuned fixed learning rate (red). Other settings are the same as Figure 7.
This amounts to assuming that the Hessian and the noise covariance are codiagonalizable. One heuristic justification for this assumption in the context of neural net optimization is that under certain assumptions, the covariance of the gradients is proportional to the Fisher information matrix, which is close to the Hessian (Martens, 2014).
We encountered some optimization difficulties for SMD with a horizon of 20k steps. Since those are not the focus of this paper, we left out the trajectories of 20k steps to avoid confusion.
Substituting α^{(T−1)}_* back into the recurrence, we can express A^{(T)}_min in terms of A^{(T−1)}_min and the optimal α^{(T−1)}_*:

A^{(T)}_min = A^{(T−1)}_min σ² / ( A^{(T−1)}_min + σ² ).

Therefore,

L_min = (1/2) h ( A^{(T−1)}_min σ² / ( A^{(T−1)}_min + σ² ) + σ² ).

We now generalize the above derivation. First rewrite L_min in terms of A^{(T−k)}_min and calculate the optimal learning rate at time step T − k.

Theorem 4. For all T ∈ N and k ∈ N, 1 ≤ k ≤ T, we have

L_min = (1/2) h ( A^{(T−k)}_min σ² / ( σ² + k A^{(T−k)}_min ) + σ² ).   (16)

Therefore, the optimal learning rate α^{(t)} at time step t is given by

α^{(t)}_* = A^{(t)} / ( h ( A^{(t)} + σ² ) ).   (17)

Proof. The form of L_min can be proven by induction on k, using the identity

A^{(t+1)}_min = A^{(t)} σ² / ( A^{(t)} + σ² ),  i.e.,  1 / A^{(t+1)}_min = 1 / A^{(t)} + 1 / σ².

The optimal learning rate then follows immediately by taking the derivative of L_min w.r.t. α^{(T−k−1)} and setting it to zero. Note that the subscript min is omitted from A^{(t)} in Eq. (17), as we assume all A^{(t)} are obtained using the optimal α_*, and hence are minimal.

Acknowledgements. YW is supported by a Google PhD Fellowship. RL is supported by Connaught International Scholarships.
REFERENCES

Shun-ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
Marcin Andrychowicz, Misha Denil, Sergio Gómez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems 29, pp. 3981-3989, 2016.
Léon Bottou. On-line learning and stochastic approximations. In On-line Learning in Neural Networks, pp. 9-42. Cambridge University Press, 1998.
John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. Technical report, FAIR, 2017.
Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems 24, pp. 2348-2356, 2011.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675-678, 2014.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In ICML, 2017.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Ke Li and Jitendra Malik. Learning to optimize. In ICLR, 2017.
Ren-Cang Li. Sharpness in rates of convergence for CG and symmetric Lanczos methods. Technical report, 2005.
Kaifeng Lv, Shunhua Jiang, and Jian Li. Learning gradient descent: Better generalization and longer horizons. In ICML, pp. 2247-2255, 2017.
David J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448-472, 1992.
Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimization through reversible learning. In ICML, 2015.
James Martens. Deep learning via Hessian-free optimization. In ICML, 2010.
James Martens. New insights and perspectives on the natural gradient method. arXiv:1412.1193, 2014.
James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In ICML, 2015.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In ICLR, 2017.
Barak A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147-160, 1994.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, pp. 1842-1850, 2016.
Tom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. In ICML, pp. 343-351, 2013.
Nicol N. Schraudolph. Local gain adaptation in stochastic gradient descent. In Ninth International Conference on Artificial Neural Networks (ICANN), vol. 2, pp. 569-574, 1999.
Nicol N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7):1723-1738, 2002.
Leslie N. Smith. Cyclical learning rates for training neural networks. In IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 464-472, 2017.
Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.
Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, pp. 681-688, 2011.
Paul J. Werbos. Backpropagation: past and future. In IEEE International Conference on Neural Networks, vol. 1, pp. 343-353, 1988.
Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Nando de Freitas, and Jascha Sohl-Dickstein. Learned optimizers that scale and generalize. In ICML, pp. 3751-3760, 2017.
Guodong Zhang, Shengyang Sun, David K. Duvenaud, and Roger B. Grosse. Noisy natural gradient as variational inference. arXiv:1712.02390, 2017. |
263,831,046 | AMORTIZED NETWORK INTERVENTION TO STEER THE EXCITATORY POINT PROCESSES | We tackle the challenge of large-scale network intervention for guiding excitatory point processes, such as infectious disease spread or traffic congestion control. Our model-based reinforcement learning utilizes neural ODEs to capture how the networked excitatory point processes will evolve subject to the time-varying changes in network topology. Our approach incorporates Gradient-Descent based Model Predictive Control (GD-MPC), offering policy flexibility to accommodate prior knowledge and constraints. To address the intricacies of planning and overcome the high dimensionality inherent to such decision-making problems, we design an Amortize Network Interventions (ANI) framework, allowing for the pooling of optimal policies from history and other contexts, while ensuring a permutation equivalent property. This property enables efficient knowledge transfer and sharing across diverse contexts. Our approach has broad applications, from curbing infectious disease spread to reducing carbon emissions through traffic light optimization, and thus has the potential to address critical societal and environmental challenges. | [
53802740
] | AMORTIZED NETWORK INTERVENTION TO STEER THE EXCITATORY POINT PROCESSES
AMORTIZED NETWORK INTERVENTION TO STEER THE EXCITATORY POINT PROCESSES

A Preprint, 6 Oct 2023. arXiv:2310.04159v1 [cs.LG]

Zitao Song ([email protected]), Wendi Ren ([email protected]), Shuang Li ([email protected])
School of Data Science, The Chinese University of Hong Kong, Shenzhen, Shenzhen 518172, China
We tackle the challenge of large-scale network intervention for guiding excitatory point processes, such as infectious disease spread or traffic congestion control. Our model-based reinforcement learning utilizes neural ODEs to capture how the networked excitatory point processes will evolve subject to the time-varying changes in network topology. Our approach incorporates Gradient-Descent based Model Predictive Control (GD-MPC), offering policy flexibility to accommodate prior knowledge and constraints. To address the intricacies of planning and overcome the high dimensionality inherent to such decision-making problems, we design an Amortize Network Interventions (ANI) framework, allowing for the pooling of optimal policies from history and other contexts, while ensuring a permutation equivalent property. This property enables efficient knowledge transfer and sharing across diverse contexts. Our approach has broad applications, from curbing infectious disease spread to reducing carbon emissions through traffic light optimization, and thus has the potential to address critical societal and environmental challenges.
Introduction
In the face of widespread epidemic outbreaks, governments must act swiftly and wisely to control the spread of diseases, often through measures like temporary city lockdowns or travel restrictions (Salathé & Jones, 2010; Sambaturu et al., 2020). Similarly, optimizing traffic light schedules in densely populated urban areas is essential to alleviate traffic congestion. These real-world scenarios highlight the necessity of guiding event processes across networks by modifying network structures as needed. The dynamics of these networked events are complex, involving vast volumes of data across multiple dimensions. Decision-making must be reliable and adaptable to rapidly changing circumstances. However, altering dynamic network structures presents a computational challenge, especially in scenarios like city traffic control, where real-world constraints and various factors must be considered. For instance, when regulating the coronavirus, government interventions must balance health concerns with economic implications and public sentiment. Thus, this network intervention problem requires innovative solutions.

We model events, such as infectious disease spread or traffic congestion, as multivariate excitatory temporal point processes. Our goal is to solve a model-based reinforcement learning (MBRL) problem: guiding large-scale excitatory processes across dynamic networks by modifying network structures to minimize costs. This presents challenges in both modeling and computation.

First, modeling networked excitatory point processes with complex excitation patterns is challenging. Traditional disease models, such as SIR models (Weiss, 2013), use ordinary differential equations (ODEs). These models divide the population into compartments like susceptible, infectious, and recovered, utilizing ODEs to capture changes over time. Similarly, classic traffic flow models rely on ODEs or PDEs. For example, the Lighthill-Whitham-Richards (LWR) model (Lighthill & Whitham, 1955; Richards, 1956) employs PDEs to describe traffic density evolution along roads. These models offer simplified yet insightful representations of disease dynamics and traffic patterns. To address the complex dynamics of high-dimensional event sequences, we turn to the Neural ODE model (Chen et al., 2018), a data-driven approach for modeling ODE dynamics. Importantly, our model-based RL framework can adapt to various event process models beyond Neural ODEs, allowing for efficient computational choices while maintaining high prediction accuracy.
The second challenge is to design intervention policies that accommodate domain constraints, incorporate feedback rapidly, and adapt to changing circumstances.Gradient-Descent-based Model Predictive Control (GD-MPC) with medium-sized neural network models (Nagabandi et al., 2018;Bharadhwaj et al., 2020) is a valuable approach among Model-based Reinforcement Learning (MBRL) algorithms.MPC solves a finite-horizon optimization problem at each time step using a sliding window approach, which improves decision-making.MPC advantages include explicit consideration of system dynamics and constraints, continuous adaptation based on feedback, and flexibility in incorporating various objectives and constraints.These features make MPC a powerful tool for designing adaptive intervention policies for complex, high-frequency event sequences.The third challenge is scaling the MBRL algorithm for high-dimensional problems, like controlling an entire city's traffic network.We've developed the Amortize Network Interventions (ANI) framework to tackle this issue.ANI enables us to extract optimal policies from historical data and similar tasks while preserving a crucial permutation equivalent property.We introduce a novel metric to aid in learning permutation equivalent representations, ensuring efficient parameter transfer and sharing across tasks, thereby enhancing our approach's adaptability and scalability.
Our proposed method is strategically crafted to meet the above three challenges.To assess its efficacy and efficiency, we have conducted comprehensive experiments using synthetic traffic congestion data and real-world COVID datasets.
The experimental results demonstrate the effectiveness of our approach in adeptly steering excitatory point processes through the control of network dynamics.
2 Problem Formulation: Model-based RL
We begin by modeling spatial-temporal event sequences as temporal graph networks. For infectious diseases, we divide the geographical map into regions, each corresponding to a graph node. Each time step records new confirmed cases in these regions, creating a discrete-time dynamic graph. In the case of traffic congestion incidents, we use a lane-based approach. Each lane on a road becomes a network node, and at each time step, we track the congestion count for each lane within the specified time interval. Formally, we define a temporal graph network G_t = (V_t, E_t) indexed by t = 0, 1, . . ., with V_t and E_t representing the node and edge sets at time t. The network maintains a fixed set of N nodes at each time step. For each node, representing either a region or a traffic lane, we observe a sequence of event spike counts at each time step. This results in a spike count matrix observed up to time t, denoted as X^t ∈ N^{t×N}. Here, X^t contains N time series of event counts: X^t = {x^t_1, . . . , x^t_N}. We focus on the problem of managing the flow of the event counts to achieve specific levels at minimal cost, through sequential adjustment of the edges {E_t}_{t≥0}. Adding or removing certain edges will alter the connections between corresponding nodes, influencing the generative patterns of events. This formulation has broad applications, including containing epidemic outbreaks through lockdown policies or regulating traffic congestion by strategically designing traffic lights. We consider a finite-time horizon control framework, where an agent aims to find an edge intervention policy π(h_t) : S → A, given the current state h_t, such that the cumulative expected reward within a fixed time horizon is maximized:
π* = arg max_{π∈Π} E [ Σ_{t=0}^{T} r_t(h_t, a_t) ],   (1)
where h_0 ∼ p_0(·), h_{t+1} ∼ P(·|h_t, a_t), and a_t ∼ π(h_t). Several key aspects are as follows:
1. Environment: high-dimensional event sequences {X^t}_{t≥0} with stationary dynamics, occurring over a temporal graph network {G_t = (V_t, E_t)}_{t≥0}.
2. State: all the historical observations up to the current time t, including event sequence and intervention histories. We assume the state information is completely encoded into a graph embedding vector h_t, where h_t ∈ R^{N×D} and D is the embedded dimension. We will explain how to perform the state embedding when we describe our predictive model for the event sequences.
3. Action: the action space is defined as A := {a ∈ {0, 1}^{N×N} : aᵀc ≤ B, Σ_{m,n} a_{mn} ≤ K}, where c is the intervention cost of the edges, B ∈ R_+ is the total budget at each stage, and K is the maximum number of edges to be intervened at each stage. Here, we put budget constraints on the action space to enable a safe policy (a projection sketch is given at the end of this section).
4. State Transition: although the dynamics of the event sequences are unknown, we will build a predictive model P_θ(·|h_t, a_t) and estimate the model parameters θ using observational data.
5. Reward Function: the reward function is tailored to suit particular applications. It is influenced by cumulative event counts and can be augmented by incorporating other societal or environmental considerations. Note that our time-dependent reward function r_t can entail a discount factor γ^t.

Since the state transition model P_θ(·|h, a) is unknown, we need to learn it from data. The optimal policy π* in Eq. (1) can be estimated by repeatedly querying the model. In the next section, we will explain how to build the predictive model for the environment, e.g., the event sequence model. It is worth noting that solving a large-scale problem requires solving the above problem (Eq. (1)) repeatedly: from one region to multiple regions, from one fixed time window to multiple time windows. How can we leverage the optimal policies of previous subproblems to ease the optimization of a new one? In this paper, we have devised the Amortized Network Interventions (ANI) framework.
As demonstrated in Fig. 2, ANI enables us to aggregate optimal policies from historical data and similar tasks while preserving a critical permutation equivalent property. We will elaborate on ANI in Section 5.
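The budget-constrained projection promised in item 3 above can be sketched as follows (a minimal illustration in Python; the greedy rule, the function name, and the treatment of the cost c as a per-edge matrix are our assumptions, not the paper's specification):

```python
import numpy as np

# Map raw edge scores onto the action space A: at most K edges, total cost <= B.
def project_action(scores, costs, B, K):
    N = scores.shape[0]
    a = np.zeros((N, N), dtype=int)
    spent = 0.0
    for idx in np.argsort(-scores, axis=None):      # edges in descending score order
        if a.sum() >= K:
            break
        m, n = divmod(int(idx), N)
        if m == n:
            continue                                 # no self-connections
        if spent + costs[m, n] <= B:                 # respect the budget constraint
            a[m, n] = 1
            spent += costs[m, n]
    return a
```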
3 Modeling the Environment: Networked Jump ODE Model
Inspired by traditional ODE- and PDE-based models in infectious disease and traffic flow studies, we propose a data-driven approach to model event sequence dynamics. We introduce a Networked Jump ODE (NJODE) model to replicate the evolution of excitatory point processes, drawing from concepts in Neural Spatio-Temporal Point Processes (Chen et al., 2020a) and Neural Jump SDEs (Jia & Benson, 2019), which have been used for fine-grained spatio-temporal event process modeling. We modify these models to handle large event counts in discrete-time and high-frequency scenarios. We model the excitatory point processes based on two assumptions. (1) Processes within the same network share triggering kernel model parameters but have distinct parameters for emission probability distributions. This scalability helps accommodate more nodes in local regions without significantly increasing model parameters.
(2) Different local regions share a similar underlying dynamic structure. This enables fine-tuning or reusing pre-trained local region dynamics for unseen local region dynamics.

Evolution of latent states. We formalize the state transition model by an ODE system with jumps, where the latent state h^t at each time t evolves according to
h^{t_0}_n = h^0_n,   (2)
dh^t_n / dt = f_h(t, h^t_n),  ∀t ∈ R_+ \ ∪_i {t_i},   (3)
lim_{ϵ↓0} h^{t_i+ϵ}_n = Σ_{m∈N_n} w_{m→n} · ϕ_h(h^{t_i}_m, x^{t_i}_m).   (4)
Here, N_n is the set of neighbors of node n and h^{t_i}_n ∈ R^D is the latent state for node n, where n ∈ {1, 2, . . . , N}. t_i represents the time stamp recording discrete jumps. Rather than treating the event arrival time as a random variable (Chen et al., 2020a), we regard the total number of discrete events within the interval [t_{i−1}, t_i) as a random variable x^{t_i}, allowing us to process high-frequency temporal data like traffic flow. The use of ϵ is to portray h^t_n as a left-continuous function with right limits at any fixed t_i. f_h is used to model the continuous change, and ϕ_h is used to model the instantaneous jump based on neighbors' events x^{t_i}_m. f_h and ϕ_h are shared across different event processes in the same local region. w_{m→n} indicates the influence strength from node m to n. We denote W = [w_{m→n}] as the influence matrix. This architecture is similar to a recurrent neural network with a continuous-time latent state modeled by a neural ODE. Under this formulation, the latent state h^t_n incorporates both historical information from itself and abrupt changes triggered by neighboring nodes. This mechanism for preserving abrupt changes and recording memory is important for modeling excitatory point processes and generalizing to other unseen dynamics.

Conditional emission probability distribution. At each time t_i, we parameterize the event count distribution as a function of the latent state h^t. Specifically, in the rest of the paper, we assume the spike count x^t_n follows a Poisson distribution, whose intensity λ^t_n is a function of h^t_n:
λ^t_n = exp( b_{ψ_n} + g_ψ(h^t_n) ).   (5)
Here, we assume g_ψ is the distribution parameter neural network shared among different nodes, while b_{ψ_n} is a distinct baseline variable for each node. Given this model, we see that the final emission probability of x^t_n conditioned on historical observations x^{<t}_n is given by

log p_θ(x^t_n | x^{<t}_n) = −λ^t_n + x^t_n log λ^t_n − log(x^t_n!),   (6)

where θ refers to all model parameters. Finally, given a spike count matrix X ∈ N^{N×T}, we assume different nodes at different times are conditionally independent given the latent state h^t; thereby we estimate the parameter θ by maximum log-likelihood, and the total log-likelihood is expressed as
L_LLH(X; θ) = Σ_{t=0}^{T−1} Σ_{n=1}^{N} log p_θ(x^t_n | x^{<t}_n).   (7)
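For concreteness, the emission likelihood of Eqs. (5)-(7) can be computed as in the following PyTorch sketch (the tensor shapes and names are illustrative assumptions, not the authors' code):

```python
import torch

# Shared network g_psi maps latent states to Poisson rates, with per-node baselines b.
def poisson_loglik(h, x, g_psi, b):
    # h: (T, N, D) latent states, x: (T, N) spike counts, b: (N,) baselines
    lam = torch.exp(b + g_psi(h).squeeze(-1))                        # Eq. (5)
    return (-lam + x * torch.log(lam) - torch.lgamma(x + 1.0)).sum() # Eqs. (6)-(7)
```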
Mean field approximation for reward modeling. In our MBRL formulation, we use the estimated event process model as our environment simulator to perform online planning. The reward is usually a function of the generated future events. For example, it can be the negative value of the total number of newly infected people at the next stage, i.e., r_t := − Σ_{n=1}^{N} x̂^{t+1}_n, where x̂^t_n denotes an estimator of x^t_n. In the planning phase, accurately approximating the expected cumulative reward demands a considerable number of rollouts from the conditional emission probability distributions, which can be time-consuming. Instead, we construct a reward model r^t_MFA based on the mean field approximation (MFA) for x^t_n by averaging over the high-dimensional degrees of freedom in the conditional term (detailed in Appendix E). As a result, during planning, we have a deterministic reward model after removing the stochasticity in Eq. (4) and replacing x^{t_i}_m with its mean. This mean field approximation enables efficient online planning.
4 Gradient-Descent-based Model Predictive Control
Given the estimated environment model in Section 3, we design control algorithms to obtain an optimal event flow steering policy by performing interventions on the graph's edges. Specifically, for an N-node influence graph, each action involves selecting a subset of k (k ≤ K) edges from the N(N − 1) directed edges (excluding self-connections). Hence, we can represent action a_t as a k-hot matrix, resulting in the intervened influence graph given by W ⊙ (1 − a_t). Our approach draws inspiration from Adaptive MPC (Garcia et al., 1989), which dynamically adjusts and enhances a model in real time to account for time-varying dynamic characteristics. We construct a policy-gradient-based control algorithm and incorporate flexible constraints on the action space.
Receding Horizon Control. We construct our cumulative objective function from a rolling-horizon perspective. At each time t, we optimize the policy π_φ by looking T steps ahead, i.e.,

π*_φ = arg max_{π_φ} Σ_{i=0}^{T−1} r^{t+i}_MFA( h^{t+i}, π_φ(h^{t+i}), f_h ∘ ϕ_h(h^{t+i}, π_φ(h^{t+i})) ),   (8)
where the expected reward is replaced by the MFA, and the function composition f_h ∘ ϕ_h gives the next state. In online planning, the learned model is used to explore state trajectories that start from the current latent state h^t. After finding a reward-maximizing policy from time t to t + T, only the first action is employed. At time t + 1, when new data arrive, a new latent state h^{t+1} is queried again from our model, and the calculations repeat, yielding a new policy and prediction trajectory.

Gradient-Descent-based Optimization. Instead of exhaustively searching the discrete combinatorial action space to optimize our objective, we approximate this space using a continuous relaxation technique (Xie & Ermon, 2019). We replace W ⊙ (1 − a_t) with W ⊙ (1 − p_t), where p_t represents edge selection probabilities. With this reparametrization, the objective becomes a fully deterministic function of the policy and dynamics, enabling end-to-end differentiable policy learning.
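The following minimal PyTorch sketch illustrates how Eq. (8) and the continuous relaxation fit together: a policy network outputs edge-selection logits, the sigmoid gives the relaxed action p_t, the learned deterministic (mean-field) model rolls the latent state forward for T steps, and gradient ascent on the cumulative reward updates the policy, after which only the first action is executed. All names here are illustrative assumptions, not the authors' code.

```python
import torch

# One receding-horizon planning step under the learned mean-field model.
def plan(h, policy, model, reward, T=5, n_grad_steps=20, lr=1e-2):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(n_grad_steps):
        ht, total = h, torch.zeros(())
        for _ in range(T):
            p = torch.sigmoid(policy(ht))   # relaxed edge-selection probabilities p_t
            total = total + reward(ht, p)   # deterministic MFA reward
            ht = model(ht, p)               # f_h composed with phi_h, intervened by p
        loss = -total
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.sigmoid(policy(h))     # only the first action is employed
```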
Incorporating Fairness Constraints and More. We can incorporate flexible constraints, including fairness, into the decision-making process. We distinguish between hard and soft constraints in our approach. For hard constraints, such as limitations on consecutive lockdown days for a county (e.g., not locking down a county for more than a certain number of consecutive days), we can employ a dynamic mask to explicitly exclude actions that fall outside the feasible space. As for soft constraints, like ensuring overall fairness of the policy, we can design an additional reward term, denoted as r^t_aug, and scale it by λ. This augmented reward term is jointly updated with the policy to enforce fairness within the optimization objective.
5 Making Large-Scale Problems Tractable: Amortized Policy
In practice, managing a city's extensive traffic network is a challenging large-scale problem due to its sheer size. To tackle this, we employ a divide-and-conquer approach, breaking the problem down into manageable subproblems. For instance, we segment the vast network into smaller, more manageable subgraphs, each representing a tractable subproblem. While this strategy makes the overall problem more manageable, it raises a crucial question: How can we utilize optimal policies from previous subproblems to streamline the optimization of new ones? To address this, we introduce the Amortized Network Interventions (ANI) framework.

Amortized Intervention. In the previous section, our assumption was that each agent operates solely with local information, without utilizing global data. In this section, our objective is to learn a shared amortized policy (Gordon et al., 2019) that can be applied across different regions with distinct dynamics. We hypothesize the existence of collective behavior among these various local temporal dynamic systems. Given a sequence of local policies {π_i}_{i=1}^{M} addressing M distinct sub-problems, our goal is to create an amortized policy π_amo. This policy should extract invariant representations and enable the adoption of similar policy structures among similar temporal dynamic systems.

Permutation Equivalent Property. Inspired by policy similarity embeddings (PSM) (Agarwal et al., 2021) and the policy permutation invariant property in SensoryNeuron (Tang & Ha, 2021), we devise an agent that can extract permutation equivalent embeddings and is policy permutation equivalent to the latent state space h^t. Since each dimension of h^t corresponds to one node in the excitatory point process, the permutation equivalent property along the node dimension characterizes the collective behavior within complex dynamic systems. We present the definition of the permutation equivalent property in Definition 1, based on which we design a permutation equivalent metric in Definition 2 that defines the distance between states, similar to π-bisimulation (Castro, 2020).

Definition 1 (Permutation Equivalent Policy). Given a state h^t = (h^t_1; . . . ; h^t_N) and an action parameterized by a k-hot adjacency matrix in R^{N×N}, we say a policy is permutation equivalent (PE) to the state h^t if, when we reshuffle the order of the N latent states, the order of the corresponding rows in the adjacency matrix is also permuted accordingly. Mathematically, a permutation equivalent policy can be described by a function π : R^{N×D} → R^{N×N} such that π(Ph^t) = Pπ(h^t)Pᵀ, where P ∈ R^{N×N} is any permutation matrix.

Definition 2 (Permutation Equivalent Metric, PEM). For any x, y ∈ S, where y is a permuted state of x, i.e., y = Px for some permutation matrix P, the PEM under a distance d and policy π is described by d_π : S × S → R, satisfying the recursive equation
d_π(x, y) = d( π(x), Pᵀπ(y)P ) + γ d_π( x′, Pᵀy′ ),   (9)

where x′ and y′ are the transition states of x and y, given the deterministic dynamics f and policy π. The distance term d in Definition 2 captures the difference in local permutation equivalent behavior, while the recursive term captures long-term behavioral difference. The exact weights assigned to the two are given by the discount factor γ. The proposed distance can be efficiently computed by approximate dynamic programming algorithms.

Bi-Contrastive Metric Embeddings. We use a representation mapping ψ to project the high-dimensional latent graph embeddings h^t into two low-dimensional graph embeddings p^t and m^t, where p^t only contains the internal positional information of the N nodes {h^t_n}_{n=1}^{N} while m^t contains the individual magnitude information for different nodes h^t_n. We illustrate the architecture in Figure 2. Intuitively, the graph magnitude embedding m^t is invariant under row permutations of h^t, while the graph positional embedding p^t is invariant when we only change the magnitude of the row features in h^t. During training, we perturb the anchor graph embedding h^t into two groups, G_perm(h^t) and G_mage(h^t).
To jointly learn the positional and magnitude embeddings with the PEM, we adapt SimCLR (Chen et al., 2020b) and design a bi-contrastive learning scheme, under which the graph positional embeddings and graph magnitude embeddings can either be a positive pair under permutation transformation or a negative pair under magnitude adjustment. For any anchor embedding h_0, we take the augmentations h_1 ∈ G_perm(h_i) and h_k ∈ G_mage(h_i), k ≠ 0, 1. Then, the bi-contrastive metric embedding loss is given by a state-similarity-weighted SimCLR contrastive loss:
L_BCME(h_0, h_1, {h_k}; ψ) = − log [ Γ(h_0, h_1) exp(s(m_0, m_1)) / ( Γ(h_0, h_1) exp(s(m_0, m_1)) + Σ_{k≠0,1} (1 − Γ(h_0, h_k)) exp(s(m_0, m_k)) ) ]   (10)

+ log [ ( exp(s(p_0, p_1)) / Γ(h_0, h_1) ) / ( exp(s(p_0, p_1)) / Γ(h_0, h_1) + Σ_{k≠0,1} exp(s(p_0, p_k)) / (1 − Γ(h_0, h_k)) ) ],   (11)
where Γ(h_0, h_1) = exp(−d_π(h_0, h_1)/β) is the weight given by the PEM, β controls the sensitivity of the similarity measure to the PEM d_π, and s(u, v) := uᵀv / (||u|| ||v||) denotes the cosine similarity function.
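The following sketch ties Eqs. (9) and (10) together (all names, shapes, and the truncated-horizon approximation are illustrative assumptions, not the authors' code): with deterministic dynamics f and policy pi, the PEM recursion unrolls into a discounted sum along paired rollouts of x and y = Px, and the resulting distances give the weights Γ = exp(−d_π/β) used in the contrastive loss.

```python
import torch
import torch.nn.functional as F

# Truncated unroll of the PEM recursion in Eq. (9); pi, f, and the base distance d
# are placeholders for the learned components.
def pem(x, y, P, pi, f, d, gamma=0.9, horizon=20):
    total = 0.0
    for t in range(horizon):
        total = total + gamma**t * d(pi(x), P.T @ pi(y) @ P)  # local PE behavioral gap
        x, y = f(x, pi(x)), P.T @ f(y, pi(y))                 # next pair (x', P^T y')
    return total

# First term of Eq. (10) for the magnitude embeddings; the positional term in
# Eq. (11) repeats the pattern with the Gamma weights inverted.
def magnitude_term(m, Gamma):
    # m: (K+1, D); m[0] anchor, m[1] permutation positive, m[2:] magnitude negatives.
    sim = F.cosine_similarity(m[0:1], m, dim=-1)   # s(m_0, m_k) for all k
    pos = Gamma[1] * torch.exp(sim[1])
    neg = ((1.0 - Gamma[2:]) * torch.exp(sim[2:])).sum()
    return -torch.log(pos / (pos + neg))
```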
Experimental Evaluation
We assess the effectiveness of our approach, Amortized Network Intervention (ANI), in managing networked temporal dynamics through simulated and real-world experiments. Our results demonstrate that ANI successfully reduces the mutual influence effects on both synthetic and two real datasets. We measure this improvement by calculating reduced intensities.
Network Intervention on Synthetic Data
In our synthetic data experiments, we performed intervention analysis. Specifically, we used our Networked Jump ODE model on low-dimensional synthetic Multivariate Hawkes Processes (MHP) without applying network amortization.
To assess the performance of our model-based reinforcement learning algorithm for dynamic network intervention, we conducted a comparative analysis against two model-free RL baselines, SAC (Haarnoja et al., 2018) and PPO (Schulman et al., 2017), as well as one model-based RL baseline called Neural Hawkes Process Intervention (NHPI) (Qu et al., 2023). We also adapted model-free RL techniques to Temporal Point Processes (TPP) (Upadhyay et al., 2018) for event intervention and maintained the event intervention settings for NHPI to explore and compare the effectiveness of event intervention versus action intervention with high-frequency event data. Details on data generation for the synthetic dataset can be found in Appendix G.1.
Our study results are depicted in Figure 3. Remarkably, our approach achieves comparable levels of intensity reduction to SAC and PPO on both datasets, all without direct interaction with the environment. NHPI, which focuses on event intervention, faces difficulties in reducing activity intensity, especially with high-frequency event sequences. For additional generalization results on unseen MHPs with synthetic data, please refer to Appendix G.2.

Evaluating Generalization on Covid Data

Here, our goal is to design an amortized city lockdown strategy that shares a similar policy structure across distinct city regimes to curb the epidemic by intervening in the influence matrix between cities. Concretely, we trained an amortized policy on five different county corpora and tested the amortized interventions on multiple unseen county dynamics. To generalize to an unseen split, the agent needs to be invariant to the order of different counties and to the amplitude or phase of the spikes of the underlying excitatory point processes. Thus, we evaluated the generalization ability on the unseen county corpora in two parts, local community transformation and cross-community adaption, where local community transformation captures the agent's ability to generalize to a permuted or intensity-adjusted community, and cross-region adaption characterizes the ability to generalize to an intensity-peak-shifted community. We illustrate the two types of transformation in Figure 5.
Generalization Over Local Community Transformation
We show the generalization ability to a permuted or intensity-adjusted community by permuting and changing the intensity magnitude on the same community region and applying the amortized policy to them. Figure 4 demonstrates the intensity costs with and without the amortized policy when learning policies on the two types of communities after transformation. For the cost curve on the local community under magnitude transformation, the amortized policy starts to converge at around 30 episodes while the cost of the non-amortized policy is still decreasing. Importantly, we observe that the amortized policy also displays a more stable learning curve compared with the non-amortized policy.
Generalization Over Cross-Community Adaptation
We investigate how well the proposed approach generalizes over unseen intensity dynamics from different counties. We evaluate generalization performance on county corpora with or without a dynamic structure similar to the training environment. Specifically, we call the testing environment "in-distribution" (generalization via interpolation) when it shares a similar intensity peak with the training environment, and "out-of-distribution" (generalization via extrapolation) when it has a peak shift or delay effect relative to the original training environment. Table 1 summarizes the average reduced intensity for different methods under the different region settings.
Notably, Table 1 (1st row) indicates that non-adaptive and non-amortized policies struggle to control the intensities in both in-distribution and out-of-distribution environments. Importantly, when we use an adaptive but non-amortized policy, the reduced intensities are substantial (Table 1, 3rd row). This is not surprising, since adaptively learning a policy (i.e., repeatedly updating the model with new policies) allows the agent to explore more possibilities in the environment and thus obtain an optimal trajectory more easily. Furthermore, using an amortized policy gives a significant jump for both adaptive and non-adaptive policies across all four environments. It is also interesting to note that in-distribution environments are easier to generalize to than out-of-distribution environments, which contain a peak shift or other complex transformations relative to the training environment. These findings are also consistent with the intensity cost curves illustrated in Figure 7.
Evaluating Generalization on Traffic Data
We enacted network interventions aimed at alleviating traffic congestion within an urban road network, particularly at road intersections. Event data were collected through SUMO (Lopez et al., 2018) simulations, where a car was categorized as contributing to congestion if its velocity dropped below 0.5 m/s. The network topology was derived from real-world cartography, as illustrated in Figure 6, and subsequently processed by SUMO to create four distinct crossroad types (detailed information available in Appendix I.1). After training on these crossroads, we assessed the generalization capability of the proposed amortized network intervention method on an additional set of four previously unseen road intersections. As depicted in Figure 8, the learned meta-policy adapts rapidly to unfamiliar road systems after only a few gradient steps, mitigating traffic congestion better than a model trained from scratch. Furthermore, we include a visual illustration of the learned network intervention in Appendix I.3.
Understanding Gains from PEM: Ablations and Visualizations
We show the efficacy of the proposed Policy Equivalent Embeddings (PEE), i.e., bi-contrastive metric embeddings (Bi-CMEs) learned with Policy Equivalent Metrics (PEM) on the latent states, by comparing them to Policy Similar Embeddings (PSE) (Agarwal et al., 2021), another common generalization approach that is effective on pixel-based RL tasks. Specifically, we investigate the gains from Bi-CMEs and PEM by ablating them. Instead of learning Bi-CMEs jointly through the position and magnitude embeddings, we learn a separate CME (Chen et al., 2020b) for the position and magnitude embeddings and use these separately learned embeddings to generate the policies.
Visualizing learned representations. We visualize the metric embeddings in the ablation above by projecting them to two dimensions with t-SNE. Figure 9 shows that PEEs partition the latent embeddings into four parts: (1) original position embeddings (red) and position embeddings with adjusted magnitude (green); (2) original magnitude embeddings (yellow) and magnitude embeddings with position permuted randomly (blue); (3) position embeddings with position permuted randomly (blue), which are orthogonal to the original position embeddings (red); and (4) magnitude embeddings with adjusted magnitude (purple), which are orthogonal to the original magnitude embeddings (yellow). In contrast, the projection of embeddings learned with PSM (right in Figure 9) shows a clear collapsing effect on position embeddings with position permuted randomly (blue) and magnitude embeddings with adjusted magnitude (purple). This finding is consistent with the results in Table 2: Bi-CMEs weighted by PSM fail to extract permutation-invariant and magnitude-invariant information from the latent dynamic system.
Conclusions
This paper presents Amortized Network Intervention, a versatile framework for steering excitatory point processes. Our approach handles partial observability, fairness constraints, and large-scale network interventions over a combinatorial action space, and achieves promising performance on challenging tasks with large, real-world datasets. Furthermore, the framework discussed here holds potential for addressing significant problems such as traffic light scheduling in urban areas.
A Related Work
As previously highlighted in the introduction, several critical bottlenecks restrict the universal application of prompt control algorithms to disease outbreaks. For quickly acquiring new skills, meta-learning emerges as an ideal paradigm for achieving rapid mastery of specific scenarios. In terms of data efficiency, both model-based reinforcement learning (MBRL) and meta-learning have the potential to significantly reduce sample complexity.
Neural Temporal Point Processes. In modeling real-world data, the use of constrained models such as Multivariate Hawkes Processes (Hawkes, 1971) can often lead to unsatisfactory results due to model misspecification.
In recent studies, researchers have explored neural network parameterizations of Temporal Point Processes (TPPs) to mitigate these limitations. Common approaches (Du et al., 2016; Mei & Eisner, 2017) employ recurrent neural networks to evolve a latent state from which the intensity value is derived. However, this approach falls short in capturing clustered and bursty event sequences, which are prevalent, as it overlooks vital temporal dependencies or necessitates an excessively high sampling rate (Nickel & Le, 2020).
To surmount these challenges, Neural Jump SDEs (Jia & Benson, 2019) and the Neural Spatio-Temporal Point Process (NSTPP) (Chen et al., 2020a) extend the Neural Ordinary Differential Equation (ODE) framework, facilitating the computation of exact likelihoods for neural TPPs while addressing the limitations of prior methodologies. These two advancements closely align with our dynamic model, and we draw upon their concepts to develop neural excitatory point processes (EPPs) governed by an influence matrix.
Manipulation of Dynamic Processes. The manipulation and control of dynamic processes is an active area of research. Control policies for manipulating temporal processes can typically be divided into two categories: (1) gradually introducing exogenous event interventions into the existing historical events, and (2) promptly enforcing network interventions on the influence matrix between different types of events. Most current research centers on the first type of intervention, primarily focusing on low-frequency and low-dimensional event interventions within social media datasets. For example, techniques like dynamic programming (Farajtabar et al., 2014, 2017) and stochastic control on SDEs (Wang et al., 2018) with a closed feedback loop have been used to steer user activities on social media platforms. However, the number of event types in this line of work is limited so as to derive closed-form solutions. Modern reinforcement learning approaches, both model-free (Upadhyay et al., 2018) and model-based (Qu et al., 2023), have been proposed to mitigate fake news events on social media. Notably, event-intervention-based approaches fail to generalize to high-frequency and uncontrollable event data such as newly infected disease cases and incoming traffic.
On the other hand, the problem of network intervention, particularly node manipulation (e.g., vaccination) to control epidemic processes on graphs, has received extensive attention (Hoffmann et al., 2020; Medlock & Galvani, 2009).
Most previous studies adopt a static setup and make a single decision. In recent work (Meirom et al., 2021), the agent performs sequential decision-making to progressively control graph dynamics through node interventions, demonstrating effectiveness in slowing the spread of infections among individuals. While existing network-intervention-based approaches offer a promising solution for individual-level quarantine in pandemic control, they do not inherently adapt to county- or state-level control, which necessitates more complex node status considerations than those assumed in (Meirom et al., 2021), as well as a larger search space incorporating edge interventions. Moreover, it is worth noting that our approach is related to, but more comprehensive than, epidemic control problems, as it accommodates various data distributions within excitatory point processes, including Poisson.
Model-based Reinforcement Learning. The key to applications within a reinforcement learning (RL) framework lies in enhancing sample efficiency, with model-based reinforcement learning (MBRL) serving to approximate a target environment for the agent's interaction. In environments with unknown dynamics, MBRL can learn either a deterministic mapping or a distribution of state transitions, denoted $p(\Delta s \mid [s, a])$. Typically, modeling uncertainty in dynamic systems involves evolving a hidden unit that represents a dynamic world model. Several auto-regressive neural network structures are prevalent in this domain, including well-known models such as World Models (Ha & Schmidhuber, 2018), Decision Transformer (Chen et al., 2021), and the Dreamer family (Hafner et al., 2019, 2023). On the other hand, the integration of neural networks with Model Predictive Control (MPC) (Nagabandi et al., 2018) has achieved excellent sample complexity within model-based reinforcement learning algorithms. Our approach closely aligns with MPC and MBRL techniques, wherein MBRL methods such as gradient descent can be leveraged to refine or adapt the model utilized by MPC. This adaptation can enhance performance, especially in scenarios where system dynamics are non-linear, partially unknown, or subject to change.
Meta Reinforcement Learning. The majority of meta reinforcement learning (RL) algorithms adhere to a model-free approach and introduce task-specific variational parameters to facilitate learning across various simple locomotion control tasks; examples include MAESN (Gupta et al., 2018) and PEARL (Rakelly et al., 2019). Simultaneously, there has been notable recent success in incorporating the inherent sequential structure of off-policy control into representation learning, as demonstrated in works like CURL (Laskin et al., 2020), Sensory (Tang & Ha, 2021), and PSM (Agarwal et al., 2021), particularly for more complex tasks such as those in the Distracting DM Control Suite (Stone et al., 2021).
In scenarios where data is limited, such as disease control, researchers are increasingly focusing on Model-Based Meta Reinforcement Learning (MBMRL), with a specific emphasis on fast adaptation of dynamics models. Approaches like AdMRL (Lin et al., 2020) and Amortized Meta Model-based Policy Search (AMBPS) (Wang & Van Hoof, 2022) optimize and infer task-specific policies within a parameterized family of tasks, often with parameters related to positions and velocities. Importantly, a key distinction between our method and AMBPS lies in our utilization of network embeddings and the incorporation of inductive bias to facilitate learning a meta-policy without parameterizing individual tasks.
B Limitations
It should be noted that our approach assumes that decisions made by considering a receding horizon of size T for each node provide a reasonably good approximation of the optimal policy. If long-range correlations exist, however, this approximation may degrade performance. Consequently, an intriguing open question is whether our approach can effectively tackle network interventions in long-range-correlated data distributions under the proposed training protocol.
C A Comparison Between Neural ODE and Neural Process
In this section, we conduct a comparative analysis of model performance and efficiency between Neural Ordinary Differential Equations (Neural ODE, or NODE) (Chen et al., 2018) and Neural Processes (NP) (Garnelo et al., 2018).
To evaluate these models, we utilized a synthetic dataset consisting of 5-dimensional Multivariate Hawkes Process (MHP) data, with each dimension containing 100 sample points in the training set. The results of this comparison are presented in Figure 10.
Our observations reveal notable distinctions between the learned behaviors of NODE and the Neural Process model. Specifically, the curve learned by NODE exhibits greater diversity, as depicted in the middle panel of Figure 10. An important metric to consider is the log-likelihood: Neural ODE achieves a log-likelihood of -33, significantly outperforming the Neural Process at -180. It is also essential to consider the computational cost of each model. For Neural ODE, a significant portion of the computational burden arises from the numerical integration itself. The runtime complexity of this integration is O(NFE), where NFE is the number of function evaluations. The worst-case NFE depends on two factors: the minimum step size δ required by the ODE solver and the maximum integration time of interest, denoted ∆t_max.
In contrast, the runtime complexity of a Neural Process with n context points and m target points is O(n + m). In our experiment, the total number of function evaluations (NFE) for NODE was approximately 5000, while for the Neural Process we selected 50 context points and 100 target points.
It is worth noting that the number of integration steps, quantified by ∆t_max/δ, can potentially contribute a large constant factor hidden within the big-O notation. It is reassuring, however, that modern ODE solvers, such as dopri5, are designed with adaptive step size mechanisms that adjust dynamically to the supplied data. This adaptive behavior mitigates concerns about the scalability of the integration process with respect to dataset size and complexity.
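To make the NFE accounting above concrete, here is a minimal sketch assuming the torchdiffeq package (Chen et al., 2018): a counter wrapped around an arbitrary drift network records how many times the adaptive dopri5 solver evaluates it. The network shape and tolerances are illustrative placeholders, not the paper's configuration.

```python
import torch
from torchdiffeq import odeint


class CountedDrift(torch.nn.Module):
    """Wraps a drift network and counts solver function evaluations (NFE)."""

    def __init__(self):
        super().__init__()
        self.nfe = 0
        self.net = torch.nn.Sequential(
            torch.nn.Linear(5, 64), torch.nn.Softplus(), torch.nn.Linear(64, 5)
        )

    def forward(self, t, h):
        self.nfe += 1  # every call is one function evaluation by the solver
        return self.net(h)


func = CountedDrift()
h0 = torch.zeros(1, 5)
ts = torch.linspace(0.0, 10.0, 100)

# dopri5 adapts its step size, so the NFE (and hence the O(NFE) cost) depends
# on the stiffness of the dynamics rather than on the number of output times.
out = odeint(func, h0, ts, method="dopri5", rtol=1e-4, atol=1e-6)
print(f"solution shape: {tuple(out.shape)}, NFE: {func.nfe}")
```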
D Technical Details of a Probabilistic Networked Model
Table 3: Conditional spike count distributions, their parameterizations, and their properties.

Distribution | $p(x \mid \psi, \upsilon)$ | $\mathbb{E}(x)$ | $\mathrm{Var}(x)$
$\mathrm{Poi}(\exp(\psi))$ | $\exp(-\exp(\psi)) \exp(\psi)^x / x!$ | $\exp(\psi)$ | $\exp(\psi)$
$\mathrm{Bern}(\sigma(\psi))$ | $\sigma(\psi)^x \sigma(-\psi)^{1-x}$ | $\sigma(\psi)$ | $\sigma(\psi)\sigma(-\psi)$
$\mathrm{Bin}(\upsilon, \sigma(\psi))$ | $\binom{\upsilon}{x} \sigma(\psi)^x \sigma(-\psi)^{\upsilon-x}$ | $\upsilon\sigma(\psi)$ | $\upsilon\sigma(\psi)\sigma(-\psi)$
$\mathrm{NB}(\upsilon, \sigma(\psi))$ | $\binom{\upsilon+x-1}{x} \sigma(\psi)^x \sigma(-\psi)^{\upsilon}$ | $\upsilon \cdot \exp(\psi)$ | $\upsilon\exp(\psi)/\sigma(-\psi)$
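The following small helper, a sketch rather than the authors' code, reproduces the mean and variance columns of Table 3 directly from the natural parameter ψ (and the dispersion υ where applicable).

```python
import math


def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))


def spike_count_moments(dist: str, psi: float, upsilon: float = 1.0):
    """Return (mean, variance) of the conditional spike-count distribution."""
    if dist == "poisson":        # Poi(exp(psi))
        lam = math.exp(psi)
        return lam, lam
    if dist == "bernoulli":      # Bern(sigma(psi))
        p = sigmoid(psi)
        return p, p * (1.0 - p)
    if dist == "binomial":       # Bin(upsilon, sigma(psi))
        p = sigmoid(psi)
        return upsilon * p, upsilon * p * (1.0 - p)
    if dist == "neg_binomial":   # NB(upsilon, sigma(psi))
        return upsilon * math.exp(psi), upsilon * math.exp(psi) / sigmoid(-psi)
    raise ValueError(f"unknown distribution: {dist}")


for d in ("poisson", "bernoulli", "binomial", "neg_binomial"):
    print(d, spike_count_moments(d, psi=0.3, upsilon=5.0))
```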
E Technical Details of Mean Field Approximation for Reward Model
To sample multi-stage observations $\{\hat{x}_n^t\}$ from the conditional emission probability distributions $\{\phi_t\}$, we have

$\hat{x}_n^1 \sim \phi_1$ (12)
$\hat{x}_n^2 \sim \phi_2 \mid \hat{x}_n^1$ (13)
$\hat{x}_n^3 \sim \phi_3 \mid \hat{x}_n^1, \hat{x}_n^2$ (14)
$\cdots$ (15)

Thus, we construct a reward model based on the Mean Field Approximation (MFA) for $\hat{x}_n^t$ by averaging over the high-dimensional degrees of freedom in the conditioning terms, i.e.,

$\hat{x}_{n,\mathrm{MFA}}^1 \sim \phi_1$ (16)
$\hat{x}_{n,\mathrm{MFA}}^2 \sim \phi_2 \mid \mathbb{E}_{\phi_1}[\hat{x}_{n,\mathrm{MFA}}^1]$ (17)
$\hat{x}_{n,\mathrm{MFA}}^3 \sim \phi_3 \mid \mathbb{E}_{\phi_1}[\hat{x}_{n,\mathrm{MFA}}^1],\, \mathbb{E}_{\phi_2 \mid \mathbb{E}_{\phi_1}}[\hat{x}_{n,\mathrm{MFA}}^2]$ (18)
$\cdots$ (19)

The stochasticity is further ruled out in the reward model by taking the expectation of $\hat{x}_{n,\mathrm{MFA}}^t$, i.e.,

$r_{n,\mathrm{MFA}}^t := \mathbb{E}[\hat{x}_{n,\mathrm{MFA}}^{t+1}] = \mathbb{E}[\mathbb{E}[\hat{x}_{n,\mathrm{MFA}}^{t+1} \mid h_{t+1}]] = \mathbb{E}[g_{\lambda_n}(h_{t+1})]$ (20)

where $h_{t+1}$ is the latent state of the dynamic system and $g_{\lambda_n}$ is the intensity function that generates the mean value of a count distribution. The second equality follows from the law of total expectation, and the outer expectation is over the distribution of the latent state.
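Below is a minimal sketch of the mean-field rollout in Eqs. (16)-(20). The latent transition `step` and emission `g_lambda` are illustrative stand-ins for the NJODE components; only the replace-samples-by-means averaging logic follows the text.

```python
import torch


def mean_field_rewards(h0, step, g_lambda, horizon):
    """Roll the latent state forward, replacing sampled counts by their means.

    h0:        initial latent state, shape (n_nodes, d)
    step:      callable (h, x_mean) -> next latent state h'
    g_lambda:  callable h -> per-node intensity (mean of the count law)
    """
    h, rewards = h0, []
    for _ in range(horizon):
        x_mean = g_lambda(h)          # condition on expectations, Eqs. (16)-(19)
        h = step(h, x_mean)           # propagate the deterministic surrogate
        rewards.append(g_lambda(h))   # r_t = E[g_lambda(h_{t+1})], Eq. (20)
    return torch.stack(rewards)


# Toy usage with linear stand-ins for the latent dynamics and emission.
d, n = 4, 3
A = 0.9 * torch.eye(d)
h0 = torch.randn(n, d)
r = mean_field_rewards(
    h0,
    step=lambda h, x: h @ A + 0.1 * x.unsqueeze(-1),
    g_lambda=lambda h: torch.nn.functional.softplus(h.sum(-1)),
    horizon=5,
)
print(r.shape)  # (5, n)
```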
F Architecture and Hyperparameters
For the dynamic model, we parameterize the ODE forward function $f_h$ in Equation 3 as a time-dependent multilayer perceptron (MLP) with dimensions [64-64-64] and softplus activations. We first attempted to parameterize the instantaneous jump function $\phi_h$ in Equation 4 with an MLP; however, this led to unstable results on long sequences, so we switched to a GRU parameterization, which takes an input and the latent state $h_{t_i}$ and outputs a new latent state. Importantly, we found that letting $f_h$ and $\phi_h$ share all parameters across different node trajectories also leads to a failure to capture mode diversity. We therefore add an independent bias term between the MLP layers in $f_h$ to compensate for the diversity loss. Lastly, we use an MLP to parameterize the emission function $g_\lambda$. We initialize all Neural ODEs (for the hidden state) with zero drift by initializing the weights and biases of the final layer to zero. All integrals are solved with the solver of (Chen et al., 2018) to within a relative and absolute tolerance of 1E-4 or 1E-6, chosen based on preliminary testing for convergence and stability. We also use seminorms (Kidger et al., 2021) to accelerate neural ODE learning and apply temporal regularization (Ghosh et al., 2020) to mitigate the effect of stiff ODE systems.
For the policy model, we parameterize the policy network with Transformer layers using 4 heads and a model dimension of 128. For the synthetic dataset, we use a 2-layer transformer for representation learning and another 2-layer transformer for policy generation; for the real-world datasets, we use 4-layer transformers for both. For the Policy Equivalent Metric presented in Definition 2, we learn a value function parameterized by a 2-layer transformer by optimizing Least Squares Temporal Difference (LSTD). We train the dynamic, policy, and PEM-value networks with the Adam optimizer with a 1E-2 decay rate across 4 RTX 3090 GPUs. The initial learning rates for dynamic learning, policy learning, and PEM-value function learning are 1E-3, 1E-4, and 1E-4, respectively.
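A hedged PyTorch sketch of the parameterization just described: a time-dependent MLP drift with an independent per-node bias between layers, a GRU cell for the instantaneous jump, an MLP emission with non-negative output, and a zero-initialized final drift layer. Sizes and names are assumptions, not the released implementation.

```python
import torch
import torch.nn as nn


class TimeDependentDrift(nn.Module):
    def __init__(self, dim=64, n_nodes=10):
        super().__init__()
        self.fc1 = nn.Linear(dim + 1, 64)   # +1 for the time input
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, dim)
        # Independent per-node bias between layers to preserve mode diversity.
        self.node_bias = nn.Parameter(torch.zeros(n_nodes, 64))
        nn.init.zeros_(self.fc3.weight)     # zero drift at initialization
        nn.init.zeros_(self.fc3.bias)
        self.act = nn.Softplus()

    def forward(self, t, h):                # h: (n_nodes, dim), t: scalar tensor
        t_col = t * torch.ones(h.size(0), 1)
        z = self.act(self.fc1(torch.cat([h, t_col], dim=-1)))
        z = self.act(self.fc2(z) + self.node_bias)
        return self.fc3(z)


dim, n_nodes = 64, 10
drift = TimeDependentDrift(dim, n_nodes)
jump = nn.GRUCell(input_size=dim, hidden_size=dim)          # jump function
emission = nn.Sequential(nn.Linear(dim, 1), nn.Softplus())  # non-negative rate

h = torch.zeros(n_nodes, dim)
print(drift(torch.tensor(0.5), h).abs().max())  # ~0 thanks to zero init
h = jump(torch.randn(n_nodes, dim), h)          # latent update at an event
print(emission(h).shape)                        # per-node intensity (n_nodes, 1)
```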
G Additional Details of Synthetic Data Experiment
G.1 Dataset Setup
We generated synthetic networked point process data by simulating multivariate Hawkes processes (MHPs), doubly stochastic point processes with self-excitation (Hawkes, 1971). Specifically, the underlying ground-truth influence matrix W was generated with n = 10 nodes and weights w_ij ∼ U[0, 0.5]. We set the graph sparsity to 0.1, i.e., each edge is kept with probability 0.1. The generated influence matrix was rescaled so that its maximum spectral radius is smaller than one, ensuring stability of the process. We set the Hawkes kernel to an exponential basis kernel with parameter β = 4, meaning roughly 98% of the influence is lost after one unit of time. The MHPs were simulated with a thinning algorithm (Ogata, 1981) on a horizon of T = 100.
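For concreteness, here is a compact NumPy rendering of the setup above: a sparse influence matrix rescaled to spectral radius below one, an exponential kernel with β = 4, and Ogata-style thinning on a horizon of T = 100. The base rates μ are an assumption, since they are not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, T = 10, 4.0, 100.0
W = rng.uniform(0.0, 0.5, size=(n, n)) * (rng.random((n, n)) < 0.1)
rho = np.abs(np.linalg.eigvals(W)).max()
if rho >= 1.0:
    W *= 0.9 / rho                            # ensure spectral radius < 1
mu = np.full(n, 0.05)                         # assumed exogenous base rates


def intensity(t, events):
    """Total per-node intensity: base rate plus decayed past excitations."""
    lam = mu.copy()
    for (ti, di) in events:
        lam += W[:, di] * beta * np.exp(-beta * (t - ti))
    return lam


events, t = [], 0.0
while t < T:
    # For exponential kernels the intensity decays between events, so the
    # current total intensity is a valid upper bound for thinning.
    lam_bar = intensity(t, events).sum()
    t += rng.exponential(1.0 / lam_bar)       # candidate next event time
    if t >= T:
        break
    lam = intensity(t, events)
    if rng.random() < lam.sum() / lam_bar:    # accept (thinning step)
        events.append((t, rng.choice(n, p=lam / lam.sum())))

print(f"simulated {len(events)} events on [0, {T}]")
```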
G.2 Additional Results and Figures
We additionally trained the amortized policy on a synthetic star graph and a circular graph with different ground-truth weight matrices, and tested performance on a new star or cycle graph with random weights. The results are shown in Figure 11. Surprisingly, we find the amortized policy only slightly outperforms the non-amortized one in both environments. We conjecture this is because the amortized policy is adapted to a small local region (only 10 nodes) in this experiment, so the non-amortized policy can already achieve relatively good results.
H Additional Details of Covid Data Experiment
H.1 Dataset Setup
We used data released publicly by (NYTimes, 2020) on daily COVID-19 cases to learn excitatory point processes of the pandemic outbreak. The data contains cumulative counts of coronavirus cases in the United States, at the state and county level, over time. Specifically, we separated the U.S. COVID-19 data into state-wise records and further split each state-wise record into county corpora, where each split (called "a local region" or "a split") contains distinct intensity trajectories from no more than 25 counties.
H.2 Additional Results and Figures
We also present results for different policies under various fairness constraints. Specifically, we use λ1 to weight an intervention cost and λ2 to weight a smoothing cost. The intervention cost is defined by the distance between two counties, and the smoothing cost by the distance between two consecutive policies over time. We use the average reduced intensity and the average lockdown probability per edge to measure fairness; the results are shown in Table 4. Interestingly, we find that imposing more constraints on the policy (i.e., λ1 = 0.1, λ2 = 0.1) does not lead to the lowest lockdown probability. Instead, enforcing only the smoothing constraint (i.e., λ1 = 0.0, λ2 = 0.1) gives the fairest policy. We conjecture that adding the extra intervention cost constraint discourages the agent from exploring and thus underperforms the policy smoothing constraint alone. We illustrate the detailed lockdowns for the different constraints in Figure 12.
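A small sketch of one way to assemble the soft-constrained objective behind Table 4, with λ1 weighting a distance-based intervention cost and λ2 a temporal smoothing cost; tensor shapes and names are assumptions, not the paper's exact implementation.

```python
import torch


def constrained_cost(intensity, policy_t, policy_prev, dist, lam1, lam2):
    """intensity: (n,) node intensities; policy_*: (n, n) edge lockdown
    probabilities at consecutive steps; dist: (n, n) pairwise county distances."""
    activity = intensity.sum()
    intervention = lam1 * (dist * policy_t).sum()             # penalize distant lockdowns
    smoothing = lam2 * (policy_t - policy_prev).abs().sum()   # discourage flip-flops
    return activity + intervention + smoothing


n = 5
cost = constrained_cost(
    torch.rand(n), torch.rand(n, n), torch.rand(n, n), torch.rand(n, n),
    lam1=0.1, lam2=0.1,
)
print(cost.item())
```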
I Additional Details of Traffic Data Experiment
I.1 Dataset Setup
To simulate real-world traffic based on the road types shown in Figure 6, we design a road network with four types: intersections with one or two lanes, and T-junctions with one or two lanes. Specifically, we randomly set the speed limit of each road to 8 m/s or 11 m/s. Then we generate car trips with the random trip generation tool from the SUMO package and simulate for 1000 s per run. From this run we obtain simulation results including emissions (e.g., CO2, CO), positions, speeds, and the lane id of each car at each time step (i.e., which lane the car occupies on roads with more than one lane). Given the generated summary data from SUMO, we then count the congestion events (i.e., car speed below 0.5 m/s) for the following analysis.
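As an illustration of this post-processing step, the sketch below counts congestion events (speed below 0.5 m/s) from a hypothetical CSV export of SUMO floating-car data; the file name and column names are assumptions about the export format, not a SUMO API.

```python
import pandas as pd

df = pd.read_csv("fcd_export.csv")   # hypothetical per-car, per-step export
CONGESTION_SPEED = 0.5               # m/s threshold from the text

# A car counts as congested at a step if its speed falls below the threshold.
congested = df[df["speed"] < CONGESTION_SPEED]
events_per_step = (
    congested.groupby(["timestep", "lane_id"]).size().rename("n_congested")
)
print(events_per_step.head())
```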
I.2 Additional Results and Figures
We show the intensity cost of the learning process for the four road types in our simulation setup in Figure 13. For both our model (meta) and our model (trained from scratch), the cost converges after several time steps, demonstrating the learning and generalization ability of our model.
I.3 Case Study
We further provide a case study to illustrate the interpretability of our model. In the T-junction with a single lane, there are 2 discrete actions, corresponding to the green phase configurations in Figure 14. The traffic light is marked as node 10, while the other three lanes are marked as nodes 8, 11, and 12 in our SUMO simulation setup. Real-world traffic can be represented as an NJODE model that changes with the traffic lights. In our simulation, when node 10 (i.e., the traffic light) turns green, the lane controlled by this signal connects, represented as 1 in Figure 15; otherwise, the connection between the two lanes is broken and represented as 0.
For instance, in the first sub-graph of Figure 15, a car can move from node 11 to node 12 under the control of its traffic signal. The learned policy of our model is consistent with real traffic trips, demonstrating that our model adapts well in uncovering real-world network interventions.
Figure 2: Overview of the Method. The proposed Amortized Network Intervention contains three modules. The first module generates a latent node embedding h_n^i and evolves the latent states through the NJODE model. The second module learns a Permutation Equivalent Embedding (PEE) over the latent space h̄_n via a bi-contrastive loss function, prepared for the downstream adaptation. The third module accesses the learned PEE from the second module and generates a permutation-equivalent policy via Model Predictive Control (MPC).
Algorithm 1: ANI (Meta-Training Phase)
Input: Task pool B and a pretrained model pool Θ learned based on Eq. (7)
Result: Policy parameters φ and representation parameters ψ
Initialize parameters φ and ψ;
while meta-training not complete do
    Sample a network M_i ∼ B and the corresponding model θ_i ∈ Θ;
    // Policy & representation learning
    Optimize {φ, ψ} jointly based on Eqs. (8)-(11);
    Obtain the intervened network W′ based on policy π_φ;
    // Planning ahead
    Collect new data D_i by NJODESolver(W′, θ_i) via Eqs. (2)-(5);
    // Adaptive model update
    Optimize θ_i on D_i based on Eq. (7);
end
Figure 3: Cumulative intensity cost on synthetic datasets.
Figure 4: Generalization results of local community transformations on Covid data.
Figure 5: Illustration of the two types of community transformation: local community transformation and cross-community adaptation.
Figure 7: Generalization results for steering Covid data on cross-community dynamics.
Figure 6: Top: satellite map extracted from Google Earth (Goo, 2022). Middle: road network in SUMO (Lopez et al., 2018). Bottom: extracted networks where red nodes are junction points.
Figure 8: Generalization results of mitigated traffic flow on two unseen intersections from SUMO.
Figure 10: Temporal point process model learned by different methods. Left: ground-truth intensity function. Middle: intensity function learned by Neural Jump ODE (log-likelihood: -33). Right: intensity function learned by Neural Process (log-likelihood: -183).
Figure 11: Generalization results over synthetic data.
Table 4: Average lockdown probabilities and reduced intensity under different soft constraints.
Figure 12: Amortized network interventions, in probabilities, under different constraints.
Figure 13: Intensity cost of the four road types.
Figure 14: Discrete actions of the T-junction (single lane).
Figure 15: Learned lane connectivity under the traffic-signal phases (1 = connected, 0 = disconnected).
Table 2: Ablation studies. Reduced intensity after network interventions on West Virginia (Split 0) when we ablate the similarity metric and learning procedure for metric embeddings in different data augmentation settings. Each ablation entry is repeated for 100 trials for a fair comparison.

Metric / Embedding | CMEs (Perm.) | CMEs (Magn.) | Bi-CMEs
PSM | 0.02(0.04) | 0.04(0.02) | 0.05(0.02)
PEM | 0.01(0.06) | 0.05(0.02) | 0.54(0.27)
Table 1: Reduced amount of intensities after network interventions for each node per unit time on four unseen communities by different methods. We report average performance across 100 runs for three different seeds, with a standard deviation between parentheses.

Adaptive | Amortized | Georgia-0 (in-dist.) | Alabama-0 (in-dist.) | Georgia-1 (out-of-dist.) | West Virginia-0 (out-of-dist.)
False | False | -0.05(0.18) | -0.07(0.11) | 0.18(0.58) | -0.02(0.05)
False | True | 0.08(0.06) | 0.21(0.43) | 0.06(0.24) | 0.02(0.02)
True | False | 0.18(0.19) | 0.14(0.22) | 0.02(0.13) | 0.15(0.10)
True | True | 0.47(0.14) | 0.71(0.42) | 0.39(0.27) | 0.54(0.27)

Figure 9: Left: t-SNE of latent embeddings learned with PEM. Middle: original dynamics of 25 selected counties in Kansas (top), and perturbed dynamics from two transformations, random permutation and magnitude adjustment by FFT on the latent space h_t (middle, bottom). Right: t-SNE of latent embeddings learned with PSM. PEM successfully disentangles permutation-sensitive position embeddings (blue) and value-sensitive magnitude embeddings (purple).
References
Google Earth, 2022. Accessed 10 September 2023.
Rishabh Agarwal, Marlos C. Machado, Pablo Samuel Castro, and Marc G. Bellemare. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. In International Conference on Learning Representations, 2021.
Homanga Bharadhwaj, Kevin Xie, and Florian Shkurti. Model-predictive control via cross-entropy and gradient-based optimization. In Learning for Dynamics and Control. PMLR, 2020.
Pablo Samuel Castro. Scalable methods for computing state similarity in deterministic Markov decision processes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 2020.
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In Advances in Neural Information Processing Systems, volume 34, 2021.
Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, volume 31, 2018.
Ricky T. Q. Chen, Brandon Amos, and Maximilian Nickel. Neural spatio-temporal point processes. In International Conference on Learning Representations, 2020a.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning. PMLR, 2020b.
Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
Mehrdad Farajtabar, Nan Du, Manuel Gomez Rodriguez, Isabel Valera, Hongyuan Zha, and Le Song. Shaping social activity by incentivizing users. In Advances in Neural Information Processing Systems, volume 27, 2014.
Mehrdad Farajtabar, Jiachen Yang, Xiaojing Ye, Huan Xu, Rakshit Trivedi, Elias Khalil, Shuang Li, Le Song, and Hongyuan Zha. Fake news mitigation via point process based intervention. In International Conference on Machine Learning. PMLR, 2017.
Carlos E. Garcia, David M. Prett, and Manfred Morari. Model predictive control: Theory and practice - a survey. Automatica, 25(3), 1989.
Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, and Yee Whye Teh. Neural processes. arXiv preprint arXiv:1807.01622, 2018.
Arnab Ghosh, Harkirat Behl, Emilien Dupont, Philip Torr, and Vinay Namboodiri. STEER: Simple temporal regularization for neural ODEs. In Advances in Neural Information Processing Systems, volume 33, 2020.
Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard E. Turner. Meta-learning probabilistic inference for prediction. In International Conference on Learning Representations, 2019.
Abhishek Gupta, Russell Mendonca, YuXuan Liu, Pieter Abbeel, and Sergey Levine. Meta-reinforcement learning of structured exploration strategies. In Advances in Neural Information Processing Systems, volume 31, 2018.
David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint, 2018.
Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019.
Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104, 2023.
Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58, 1971.
Jessica Hoffmann, Matt Jordan, and Constantine Caramanis. Quarantines as a targeted immunization strategy. arXiv preprint arXiv:2008.08262, 2020.
Junteng Jia and Austin R. Benson. Neural jump stochastic differential equations. In Advances in Neural Information Processing Systems, volume 32, 2019.
Patrick Kidger, Ricky T. Q. Chen, and Terry J. Lyons. "Hey, that's not an ODE": Faster ODE adjoints via seminorms. In ICML, 2021.
Michael Laskin, Aravind Srinivas, and Pieter Abbeel. CURL: Contrastive unsupervised representations for reinforcement learning. In International Conference on Machine Learning. PMLR, 2020.
Michael James Lighthill and Gerald Beresford Whitham. On kinematic waves II. A theory of traffic flow on long crowded roads. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, 229(1178), 1955.
Zichuan Lin, Garrett Thomas, Guangwen Yang, and Tengyu Ma. Model-based adversarial meta-reinforcement learning. In Advances in Neural Information Processing Systems, volume 33, 2020.
Pablo Alvarez Lopez, Michael Behrisch, Laura Bieker-Walz, Jakob Erdmann, Yun-Pang Flötteröd, Robert Hilbrich, Leonhard Lücken, Johannes Rummel, Peter Wagner, and Evamarie Wießner. Microscopic traffic simulation using SUMO. In The 21st IEEE International Conference on Intelligent Transportation Systems. IEEE, 2018.
Jan Medlock and Alison P. Galvani. Optimizing influenza vaccine distribution. Science, 325(5948), 2009.
Hongyuan Mei and Jason M. Eisner. The neural Hawkes process: A neurally self-modulating multivariate point process. In Advances in Neural Information Processing Systems, 2017.
Eli Meirom, Haggai Maron, Shie Mannor, and Gal Chechik. Controlling graph dynamics with reinforcement learning and graph neural networks. In International Conference on Machine Learning. PMLR, 2021.
Anusha Nagabandi, Gregory Kahn, Ronald S. Fearing, and Sergey Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation. IEEE, 2018.
Maximilian Nickel and Matthew Le. Learning multivariate Hawkes processes at scale. arXiv preprint arXiv:2002.12501, 2020.
NYTimes. Coronavirus (Covid-19) data in the United States, 2020.
Yosihiko Ogata. On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1), 1981.
Chao Qu, Xiaoyu Tan, Siqiao Xue, Xiaoming Shi, James Zhang, and Hongyuan Mei. Bellman meets Hawkes: Model-based reinforcement learning via temporal point processes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2023.
Kate Rakelly, Aurick Zhou, Chelsea Finn, Sergey Levine, and Deirdre Quillen. Efficient off-policy meta-reinforcement learning via probabilistic context variables. In International Conference on Machine Learning. PMLR, 2019.
Paul I. Richards. Shock waves on the highway. Operations Research, 1956.
Marcel Salathé and James H. Jones. Dynamics and control of diseases in networks with community structure. PLoS Computational Biology, e1000736, 2010.
Prathyush Sambaturu, Bijaya Adhikari, Aditya Prakash, Srinivasan Venkatramanan, and Anil Vullikanti. Designing effective and practical interventions to contain epidemics. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, 2020.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Austin Stone, Oscar Ramirez, Kurt Konolige, and Rico Jonschkowski. The distracting control suite - a challenging benchmark for reinforcement learning from pixels. arXiv preprint arXiv:2101.02722, 2021.
Yujin Tang and David Ha. The sensory neuron as a transformer: Permutation-invariant neural networks for reinforcement learning. In Advances in Neural Information Processing Systems, volume 34, 2021.
Utkarsh Upadhyay, Abir De, and Manuel Gomez Rodriguez. Deep reinforcement learning of marked temporal point processes. In Advances in Neural Information Processing Systems, volume 31, 2018.
Qi Wang and Herke van Hoof. Model-based meta reinforcement learning using graph structured surrogate models and amortized policy search. In International Conference on Machine Learning. PMLR, 2022.
Yichen Wang, Evangelos Theodorou, Apurv Verma, and Le Song. A stochastic differential equation framework for guiding online user activities in closed loop. In International Conference on Artificial Intelligence and Statistics. PMLR, 2018.
Howard Howie Weiss. The SIR model and the foundations of public health. Materials Matemàtics, 2013.
Sang Michael Xie and Stefano Ermon. Reparameterizable subset sampling via continuous relaxations. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI'19. AAAI Press, 2019.
257,405,190 | Image Inpainting via Iteratively Decoupled Probabilistic Modeling | Generative adversarial networks (GANs) have achieved great success in image inpainting yet still have difficulty tackling large missing regions. In contrast, iterative probabilistic algorithms, such as autoregressive and denoising diffusion models, have to be deployed with massive computing resources for decent results. To achieve high-quality results with low computational cost, we present a novel pixel spread model (PSM) that iteratively employs decoupled probabilistic modeling, combining the optimization efficiency of GANs with the prediction tractability of probabilistic models. As a result, our model selectively spreads informative pixels throughout the image in a few iterations, largely enhancing the completion quality and efficiency. On multiple benchmarks, we achieve new state-of-the-art performance. Code is released at https://github.com/fenglinglwb/PSM . | [
232269984,
3568073,
6628106
] | Image Inpainting via Iteratively Decoupled Probabilistic Modeling
Wenbo Li ([email protected]), Xin Yu, Kun Zhou, Yibing Song, Zhe Lin ([email protected]), Jiaya Jia
Affiliations: CUHK; The University of Hong Kong; CUHK (Shenzhen); Tencent AI Lab; Adobe Inc.
Figure 1. Our model supports photo-realistic large-hole inpainting for various scenarios. The first example for object removal is a high-resolution image captured in the wild, while the other inpainting examples (512 × 512) come from the Places2 [82] and CelebA-HQ [23] datasets.
Abstract
Generative adversarial networks (GANs) have achieved great success in image inpainting yet still have difficulty tackling large missing regions. In contrast, iterative probabilistic algorithms, such as autoregressive and denoising diffusion models, have to be deployed with massive computing resources for decent results. To achieve high-quality results with low computational cost, we present a novel pixel spread model (PSM) that iteratively employs decoupled probabilistic modeling, combining the optimization efficiency of GANs with the prediction tractability of probabilistic models. As a result, our model selectively spreads informative pixels throughout the image in a few iterations, largely enhancing the completion quality and efficiency. On multiple benchmarks, we achieve new state-of-the-art performance. Code is released at https://github.com/fenglinglwb/PSM .
Introduction
Image inpainting, a fundamental computer vision task, aims to fill the missing regions in an image with visually pleasing and semantically appropriate content. It has been extensively employed in graphics and imaging applications, such as photo restoration [60,61], image editing [3,21], compositing [32], re-targeting [8], and object removal [10]. This task, especially filling large holes, is more ill-posed than other restoration problems, necessitating models of stronger generation abilities.
In past years, generative adversarial networks (GANs) have made great progress in image inpainting [43,69,71,37,62,34]. By implicitly modeling a target distribution through a min-max game, GANs-based methods significantly outperform traditional exemplar-based techniques [16,55,10,9] in terms of visual quality. However, the one-shot generation of GANs sometimes leads to unstable training [51,14,28] and makes it challenging to learn a complex distribution, particularly when inpainting large holes in high-resolution images.
Conversely, autoregressive models [57,58,42] and denoising diffusion models [53,19,11] have recently demonstrated remarkable power in content generation [44,49,73,52]. These models utilize tractable probabilistic modeling techniques to iteratively refine the image based on prior estimations, resulting in more stable training and improved coverage. However, it is widely known that autoregressive models process images pixel by pixel, which makes it cumbersome to handle high-resolution data. Denoising diffusion models, on the other hand, typically require thousands of iterations to achieve accurate estimations. Using these methods directly in image inpainting thus incurs their respective drawbacks: strategies for high-quality, large-hole, high-resolution image inpainting still fall short.
To complete this picture, in this paper we develop a new pixel spread model (PSM) tailored for the large-hole scenario. PSM operates in an iterative manner, where all pixels are predicted in parallel during each iteration, and only qualified predictions are retained for subsequent iterations. It acts as a process that gradually spreads trustful pixels to unknown locations. Our core design lies in a simple yet highly effective decoupled probabilistic modeling (see Sec. 3.1.1), which enjoys the merits of GANs' efficient optimization and the tractability of probabilistic models. In detail, our model simultaneously predicts an inpainted result (i.e., the mean term) and an uncertainty map (i.e., the variance term). The mean term is optimized using implicit adversarial training, yielding more accurate predictions with fewer iterations. The variance term, contrarily, is modeled explicitly using Gaussian regularization.
The adoption of our decoupled strategy offers numerous advantages. First, the use of adversarial optimization leads to a significant reduction in the number of iterative steps required to achieve promising results, as shown in Fig. 2, much faster than autoregressive and denoising diffusion models. Second, the Gaussian regularization produces a variance term that naturally acts as an uncertainty measure (see Sec. 3.1.2). This allows for the selection of reliable estimates for iterative refinement, largely reducing GANs' artifacts. Furthermore, the explicit modeling of the distribution facilitates continuous sampling, thereby producing predictions with enhanced quality and diversity, as demonstrated in Sec. 4. Finally, the uncertainty measure is instrumental in constructing an uncertainty-guided attention mechanism (see Sec. A), which encourages the network to leverage more informative pixels for efficient reasoning. As a result, our PSM completes large missing regions with photo-realistic content, as illustrated in Fig. 1.
Related Work
Traditional Methods
Image inpainting is a classical computer vision problem. Early methods make use of image priors, such as self-similarity and sparsity. Diffusion-based methods [5,2], for instance, convey information to the holes from nearby undamaged neighbors. Another line of exemplar-based approaches [16,55,29,9,12,31] looks for highly similar patches to complete missing regions using human-defined distance metrics. The most representative work is PatchMatch [3], which employs heuristic searching in a multi-scale image space to speed up inpainting greatly. However, due to a lack of context understanding, these methods do not guarantee visually appealing and semantically consistent results.
Deep Learning Based Methods
Using a great amount of training data to considerably increase the capability of high-level understanding, deep-neural-network-based methods [43,69,75,36,64] achieve success. Pathak et al. [43] introduce the adversarial loss [13] to inpainting, yielding visually realistic results. Several approaches along this line continually push the performance to new heights. For example, to obtain locally fine-grained details and globally consistent structures, Iizuka et al. [20] adopt two discriminators for adversarial training. Additionally, partial [35] and gated [72] convolution layers are proposed to reduce artifacts, e.g., color discrepancy and blurriness, for irregular masks. Moreover, intermediate cues, including foreground contours [68], object structures [40,45], and segmentation maps [54], are used in multi-stage generation. Despite nice inpainting content for small masks, these methods still do not guarantee large-hole inpainting quality.

Figure 3. Our pixel spread model for high-quality large-hole image inpainting. The left illustration shows the pixel spread pipeline with the proposed decoupled probabilistic modeling, and the right images are visual examples. We simplify the input of the t-th iteration to x_{t-1}, and denote the estimated mean and variance as μ_t and σ_t^2. The σ_t map on the right is normalized for better visualization. We observe gradual uncertainty reduction in missing regions during the pixel spread process.
Large Hole Image Inpainting
To deal with large missing regions, a surge of effort was made to improve model capability. Attention techniques [71,37,67,70] and transformer architectures [62,81,34] take advantage of context information and work well when an image contains repeating patterns. Besides, Zhao et al. [80] propose a novel architecture bridging the gap between image-conditional and unconditional generation, improving free-form large-scale image completion. There are also attempts at progressive generation, which selects only high-quality pixels at each step and gradually fills holes. We note that these methods heavily rely on specially designed update algorithms [78,15,33,41], consume additional model capacity to separately assess prediction accuracy [77], or need more training stages [7] when processing images.
Recently, benefiting from exact likelihood computation and iterative sampling, autoregressive models [62,74,66] and denoising diffusion models [48,46,38,1] have shown great potential in producing diversified and realistic content. However, they inevitably incur high inference costs with thousands of steps and require massive computation resources. In this work, we present decoupled probabilistic modeling that obtains predictions and uncertainty measures simultaneously. Our model identifies reliable predicted pixels and sends them to subsequent iterations, thereby mitigating GAN-generated artifacts. The proposed approach can also be viewed as a diffusion model that learns pixel spreading rather than denoising and requires far fewer iterations.
Our Method
Our objective is to use photo-realistic material to complete a masked image with substantial missing areas. In this section, we first formulate our pixel spread model (PSM) along with a comprehensive analysis. It is followed by the details of model design and loss functions.
Pixel Spread Model
Although GANs-based methods achieve significantly better results than traditional ones, they still face great difficulties handling large missing regions. We attribute this partly to the one-shot nature of GANs, and instead propose iterative inpainting.
In each pass, since there are inevitably some good predictions, we use these pixels as clues to assist generation in the next pass. In this way, our pixel spread model gradually propagates valuable information to the entire image. In the following, we first discuss the single-pass modeling before moving on to the pixel spread process.
Decoupled Probabilistic Modeling
For iterative inpainting, it is essential to find a mechanism to evaluate the accuracy of predictions. One intuitive solution is introducing a tractable probabilistic model so that uncertainty information can be analytically calculated. However, this requirement often leads to the assumption that the approximated target distribution is Gaussian, which is far too simple to explain truly complicated distributions. Although several iterative models, such as denoising diffusion models [19], enrich the expression of the marginal distribution by including a number of hidden variables and optimizing the variational evidence lower bound, these methods typically incur a high inference cost.
To address these key issues, we propose a decoupled probabilistic modeling tailored for efficient iterative inpainting. The essential insight is that we leverage the advantages of implicit GANs-based optimization and explicit Gaussian regularization in a decoupled way. Thus we can simultaneously obtain accurate predictions and explicit uncertainty measures.
As shown in Fig. 3, given an input image x_t at time t with large holes, our model (see architecture details in Sec. A) predicts the inpainting result μ_t as well as an uncertainty map σ_t^2. We use the adversarial loss (along with other losses of Sec. 3.3) to supervise the image prediction μ_t, while jointly treating (μ_t, σ_t^2) as the mean and diagonal covariance of a Gaussian distribution. GANs' implicit optimization allows the model to approximate the true distribution closely, greatly reducing the number of iterations. It also supplies an explicit uncertainty measure for the mean term, allowing us to select reliable pixels. The Gaussian regularization is mainly applied to the variance term using the negative log likelihood (NLL) L_nll as
$$\mathcal{L}_{\mathrm{nll}} = -\sum_{i=1}^{D} \log \int_{\delta_-(y_i)}^{\delta_+(y_i)} \mathcal{N}\!\left(y;\, \mathrm{sg}[\mu_\theta^i(x)],\, \sigma_\theta^i(x)^2\right) dy \,, \tag{1}$$
where D is the data dimension and i is the pixel index, θ denotes model parameters, and input x and output y are scaled to [−1, 1]. δ + (y) and δ − (y) are defined as
$$\delta_+(y) = \begin{cases} \infty & \text{if } y = 1, \\ y + \tfrac{1}{255} & \text{if } y < 1, \end{cases} \tag{2}$$
$$\delta_-(y) = \begin{cases} -\infty & \text{if } y = -1, \\ y - \tfrac{1}{255} & \text{if } y > -1. \end{cases} \tag{3}$$
Specifically, we include a stop-gradient operation (i.e., sg[·]), which encourages the Gaussian constraint to optimize only the variance term and allows the mean term to be estimated from the more accurate implicit modeling.
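A PyTorch sketch of Eq. (1), assuming inputs scaled to [-1, 1]: the Gaussian density is integrated over each pixel bin defined by δ± in Eqs. (2)-(3), and the mean is detached so that this term trains only the variance branch, as described above.

```python
import torch


def decoupled_nll(mu, log_sigma, y):
    """mu, log_sigma, y: tensors of shape (B, C, H, W), values in [-1, 1]."""
    mu = mu.detach()                     # sg[mu]: no gradient to the mean branch
    sigma = log_sigma.exp()
    normal = torch.distributions.Normal(mu, sigma)
    # Integrate the Gaussian over each 2/255-wide pixel bin; open-ended at +-1.
    upper = torch.where(y >= 1.0, torch.full_like(y, float("inf")), y + 1.0 / 255)
    lower = torch.where(y <= -1.0, torch.full_like(y, float("-inf")), y - 1.0 / 255)
    prob = normal.cdf(upper) - normal.cdf(lower)
    return -torch.log(prob.clamp_min(1e-12)).sum(dim=(1, 2, 3)).mean()


b = torch.rand(2, 3, 8, 8) * 2 - 1
loss = decoupled_nll(mu=b, log_sigma=torch.zeros_like(b), y=b)
print(loss.item())
```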
Discussion. We use the estimated mean and variance for sampling during the diffusion process, while taking the deterministic mean term as the output in the final iteration. The feasibility of this design is demonstrated by the experiments in Sec. 4. Additionally, the probabilistic modeling enables continuous sampling during pixel spread, yielding higher quality and more diverse estimations. Finally, we find that the uncertainty measure also enables a more effective attention mechanism, described in Sec. A.
Pixel Spread Scheme
We use a feed-forward network, denoted as f θ (·), to gradually spread informative pixels to the entire image, starting from known regions as
$$(x_t, m_t, u_t) = f_\theta(x_{t-1}, m_{t-1}, u_{t-1})\,, \tag{4}$$
where t is the time step, x t refers to the masked image, m t stands for a binary mask, and u t is the uncertainty map. The output includes the updated image, mask, and uncertainty map. Network parameters are shared across all iterations. We use several iterations for both training and testing to improve performance. Specifically, as shown in Fig. 3 and Eq. (4), our method runs as follows at the t-th iteration.
1. Predict. Given the masked image x_{t-1}, mask m_{t-1}, and uncertainty map u_{t-1}, our method estimates the mean μ_t and variance σ_t^2 for all pixels. Then a preliminary uncertainty map ũ_t scaled to [0, 1] is generated by converting the variance map.
2. Pick. We first sort the uncertainty scores of the unknown pixels. According to the pre-defined mask schedule, we calculate the number of pixels to be added in this iteration, and insert those with the lowest uncertainty into the known category, updating the mask to m_t. Based on the preliminary uncertainty map ũ_t, by marking locations that are still missing as 1 and the initially known pixels as 0, we obtain the final uncertainty map u_t.
3. Sample. We consider two situations. First, for the initially known locations m 0 , we always use the original input pixels x 0 . Second, we apply continuous sampling in accordance with µ t and σ t for the inpainting areas. The result is formulated as
$$x_t = x_0 + (m_t - m_0) \odot (\mu_t + \alpha \cdot \sigma_t z)\,, \tag{5}$$
where α is an adjustable ratio, z ∼ N(0, I), and ⊙ denotes the Hadamard product. Note that we do not use the σ_t z term in the final iteration.
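The three steps above can be summarized in a short sketch of one spread iteration; the `model` interface and the mask-schedule count `n_keep` are assumptions about one possible implementation, not the released code.

```python
import torch


def spread_step(model, x, m, u, x0, m0, n_keep, alpha=0.01, last=False):
    """One iteration of Eq. (4): predict, pick, sample.

    x, x0: (B, C, H, W) current/original images; m, m0: (B, 1, H, W) masks
    (1 = known); u: (B, 1, H, W) uncertainty map; n_keep: pixels to accept.
    """
    mu, sigma, u_tilde = model(x, m, u)        # predict mean, std, uncertainty
    # Pick: accept the n_keep most certain still-missing pixels.
    scores = u_tilde.masked_fill(m.bool(), float("inf")).flatten(1)
    idx = scores.topk(n_keep, dim=1, largest=False).indices
    m_new = m.flatten(1).scatter(1, idx, 1.0).view_as(m)
    # Sample: keep original pixels where initially known (Eq. (5)).
    noise = 0.0 if last else alpha * sigma * torch.randn_like(sigma)
    x_new = x0 + (m_new - m0) * (mu + noise)
    # Final uncertainty map: initially known -> 0, still missing -> 1.
    u_new = u_tilde.masked_fill(m0.bool(), 0.0).masked_fill(~m_new.bool(), 1.0)
    return x_new, m_new, u_new


# Toy usage with a stub predictor.
B, C, H, W = 1, 3, 8, 8
stub = lambda x, m, u: (torch.zeros(B, C, H, W),
                        0.1 * torch.ones(B, 1, H, W),
                        torch.rand(B, 1, H, W))
m0 = torch.zeros(B, 1, H, W); m0[..., :4, :] = 1.0   # top half known
x0 = torch.rand(B, C, H, W) * m0
x, m, u = spread_step(stub, x0, m0.clone(), 1.0 - m0, x0, m0, n_keep=8)
print(int(m.sum()))  # 32 initially known + 8 newly accepted = 40
```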
Model Architecture
We use a deep U-Net [47] architecture with a StyleGAN [25,26] decoder, reaching large receptive fields with stacked convolutions to take advantage of context information in images [6,39,4,63]. In addition, we introduce multiple attention blocks at various resolutions, in light of the discovery that global interaction significantly improves reconstruction quality on much larger and more diverse datasets at higher resolutions [71,70,11].
Based only on feature similarity, the conventional attention mechanism [59] offers equal opportunity for pixels to exchange information. For the inpainting task, however, missing pixels are initialized with the same specified values, making them close to one another. As a result, the attention is usually unable to effectively leverage useful information from visible regions. Even worse, the valid pixels are compromised, resulting in blurry content and unpleasing artifacts. In this situation, as shown in Fig. 4, we take the pixels' uncertainty scores into account to adjust the aggregating weights in attention, which properly resolves the problem mentioned above. The attention output is computed by
$$\mathrm{Attention}(q, k, v, u) = \mathrm{Softmax}\!\left(\frac{q k^T}{\sqrt{d_k}} + \mathcal{F}(u)\right) v\,, \tag{6}$$
where {q, k, v} are the query, key, value matrices, d k denotes the scaling factor, and F represents learnable functions that predict biased pixel weights based on the uncertainty map u and also include a reshape operation.
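A minimal sketch of Eq. (6) for flattened pixel tokens; parameterizing F as a per-key linear bias is an assumption about one simple choice, not the paper's exact design.

```python
import torch
import torch.nn as nn


class UncertaintyGuidedAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.bias_net = nn.Linear(1, 1)   # F: per-key bias from uncertainty

    def forward(self, x, u):
        # x: (B, N, dim) flattened pixel tokens, u: (B, N, 1) uncertainty scores
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5   # (B, N, N)
        logits = logits + self.bias_net(u).transpose(-2, -1)   # bias each key
        return torch.softmax(logits, dim=-1) @ v


attn = UncertaintyGuidedAttention(dim=32)
out = attn(torch.randn(2, 64, 32), torch.rand(2, 64, 1))
print(out.shape)  # (2, 64, 32)
```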
Loss Functions
In each iteration, our model outputs the mean and variance estimates, as shown in Fig. 3. The mean term is optimized using the adversarial loss [13] L_adv and the perceptual loss [56,22] L_pcp, which aims to produce natural-looking images. The losses are described as follows.
Adversarial loss. We formulate the adversarial loss as
$$\mathcal{L}_{ag} = -\mathbb{E}_{\hat{x}}\left[\log\left(D(\hat{x})\right)\right]\,, \tag{7}$$
$$\mathcal{L}_{ad} = -\mathbb{E}_{x}\left[\log\left(D(x)\right)\right] - \mathbb{E}_{\hat{x}}\left[\log\left(1 - D(\hat{x})\right)\right]\,, \tag{8}$$
where D is the discriminator implemented as in [25], and x and x̂ are the real and predicted images.
Perceptual loss. We adopt a high receptive field perceptual loss [56], which is formulated as
$$\mathcal{L}_{pcp} = \sum_i \left\|\varphi_i(\hat{x}) - \varphi_i(x)\right\|_2^2\,, \tag{9}$$
where φ i is the layer output of a pre-trained ResNet50 [17]. As discussed in Sec. 3.1.1, we apply the negative log likelihood L nll to constrain the variance for uncertainty modeling. Thus the final loss function for the generator is
$$\mathcal{L} = \sum_j \left(\lambda_1 \mathcal{L}_{ag}^j + \lambda_2 \mathcal{L}_{pcp}^j + \lambda_3 \mathcal{L}_{nll}^j\right)\,, \tag{10}$$
where j indexes the spread iterations. We empirically set λ1 = 1, λ2 = 2, and λ3 = 1 × 10−4.
Experiments
Datasets and Metrics
We train our models at 512 × 512 resolution on Places2 [82] and CelebA-HQ [23] in order to adequately assess the proposed method. Places2 is a large-scale dataset with nearly 8 million training images in various scene categories. Additionally, 36,500 images make up the validation split. During training, images undergo random flipping, cropping, and padding, while testing images are centrally cropped to the 512 × 512 size. For CelebA-HQ, we employ 24,183 and 2,993 images, respectively, to train and test our models. Following [72,80,56,34], we use on-the-fly generated masks during training, where the detailed setup is from MAT [34]. We evaluate all models using identical masks provided by [34] for fair comparisons. Besides, for evaluating model robustness, we use the same model to inpaint both small and large masks.
Despite being adopted in early inpainting work, L1 distance, PSNR, and SSIM [65] are found to correlate poorly with human perception when assessing image quality [30,50]. In this work, in light of [80,34], we use FID [18], P-IDS [80], and U-IDS [79], which robustly measure the perceptual fidelity of inpainted images, as more suitable metrics.
Implementation Details
We use an encoder-decoder architecture. The encoder is made up of convolution blocks, while the decoder is adopted from StyleGAN2 [26]. The encoder's channel size starts at 64 and doubles after each downsampling until a maximum of 512. The decoder has a symmetric configuration. We adopt attention blocks at 32 × 32 and 16 × 16 resolutions in both the encoder and decoder. The uncertainty map is initialized as "1 - mask" at the first iteration. Given an H × W input, we first downsample the feature size to H/32 × W/32 before returning to H × W. We train our models for 20M images on Places2 and CelebA-HQ using 8 NVIDIA A100 GPUs. We utilize exponential moving average (EMA), adaptive discriminator augmentation (ADA), and weight modulation training strategies [24,34]. The batch size is 32, and the learning rate is 1 × 10−3. We employ an Adam [27] optimizer with β2 = 0.99. We empirically set α = 0.01 in Eq. (5) based on experimental results. To generate 512 × 512 images, we iterate twice for training and four times for testing.

Table 1. Ablation results. Model "A" refers to the full model. Models "B" and "C" use fewer iterations. We tease apart the decoupled probabilistic modeling (DPM), continuous sampling (CS), and uncertainty-guided attention (UGA) in models "D", "E", and "F". Model "G" only adopts attention blocks at 16 × 16 resolution.

Table 2. Quantitative ablation study of the number of testing iterations. As the number of iterations increases, the FID↓ results improve and then saturate.
Previous work [80,34] trains models on Places2 with 50M or more images, far more data than ours, which evidences the efficiency of our method. We find that additional training can further improve our approach, yet 20M images already deliver cutting-edge performance.
Ablation Study
In this section, we conduct a comprehensive investigation of the proposed designs in our method. For quick evaluation, we train our models for 6M images at 256×256 resolution using Places365-Standard, a subset of Places2 [82]. We start with model "A", which employs our full designs and adopts three iterations during training.
Iterative number. Our core idea is to employ iterative optimization to enhance generation quality. We vary the iteration number while keeping the same setup during training and testing. As illustrated in Table 1, models with one and two iterations, dubbed "B" and "C", are 0.59 and 0.19 FID worse than model "A", respectively. Additionally, as shown in Fig. 2, adopting more iterations produces more aesthetically pleasing content: the first and third cases exhibit noticeably fewer artifacts, and the arch in the second example is successfully restored after three iterations. Both the quantitative and qualitative results manifest the importance of iterative generation. Note that we can test the system with a different number of iterations from the training stage. Using more testing iterations yields better FID, as demonstrated in Table 2, yet at the expense of longer inference time; thus, there is a trade-off between inference speed and generation quality. Additionally, comparing models "A" and "B" shows that introducing more iterations during training is beneficial, but the number of iterations in the inference stage matters more.
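The loop below sketches this iterative inference with a uniform reveal schedule. The model signature is a hypothetical placeholder (the text does not specify it); the sketch only illustrates the pixel-spread idea of keeping the most certain missing pixels as hints for the next iteration.

```python
import torch

@torch.no_grad()
def iterative_inpaint(model, image, mask, num_iters=4):
    """Pixel-spread inference sketch. Assumed interface:
    model(x, mask, known, uncertainty, t) -> (mean, log_var), NCHW tensors;
    mask/known are 1 where pixels are treated as valid."""
    known = mask.clone()           # pixels currently treated as reliable
    uncertainty = 1.0 - mask       # initialized as "1 - mask"
    out = image * mask
    for t in range(num_iters):
        mean, log_var = model(out, mask, known, uncertainty, t)
        uncertainty = log_var.exp().mean(dim=1, keepdim=True)
        missing = known < 0.5
        n_missing = int(missing.sum())
        if n_missing == 0:
            break
        # uniform schedule: reveal an equal share of the hole per iteration
        k = max(1, n_missing // (num_iters - t))
        # batch-global threshold for brevity; per-image picking is more faithful
        thresh = uncertainty[missing].kthvalue(k).values
        known = known + (missing & (uncertainty <= thresh)).float()
        out = image * mask + mean * (1.0 - mask)
    return out
```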
Decoupled probabilistic modeling. To deliver accurate predictions while supporting an uncertainty measure for iterative inpainting, we propose decoupled probabilistic modeling. When all supervision is placed on the sampled result, we observe that training collapses the variance term (close to 0 for all pixels). This is because, unlike denoising diffusion models that precisely quantify the noise level at each step, our GAN-based method provides no specific optimization targets for the mean and variance terms; the variance is therefore underestimated toward a trivial optimum, which renders the pixel-picking process less effective.
As illustrated in Table 1, model "D" obtains an inferior FID result compared with the full model "A". Besides, from the visual comparison in Fig. 5, it is observed that model "D" tends to generate blurry content, while model "A" produces sharper structures and fine-grained details. All these observations prove the effectiveness of this design.
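A minimal sketch of the decoupling follows: the adversarial and perceptual losses supervise the predicted mean directly, while a Gaussian negative log-likelihood, one standard choice for $\mathcal{L}_{nll}$ (the exact form in Sec. 3.1.1 may differ), constrains the variance so it stays informative.

```python
import torch

def gaussian_nll(mean, log_var, target, mask):
    """Per-pixel Gaussian NLL (constant dropped) over the hole region.
    Supervising (mean, variance) rather than a sampled image avoids the
    variance collapsing toward zero."""
    nll = 0.5 * (log_var + (target - mean) ** 2 * torch.exp(-log_var))
    hole = (1.0 - mask).expand_as(nll)  # 1 inside the missing region
    return (nll * hole).sum() / hole.sum().clamp_min(1.0)
```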
Table 3. Quantitative comparisons on Places [82] and CelebA-HQ [23]. "†": the officially released Stable Diffusion inpainting model trained on the large-scale high-quality dataset LAION-Aesthetics V2 5+. Our method achieves the best performance under both large and small mask settings. The best and second best results are in red and blue.

Figure 6. Visual examples of diverse generation for our method (Input, Ours 1-3). The differences mainly lie in the fine-grained details.

Continuous sampling. Our approach can use the estimated variance to perform continuous sampling. Table 1 indicates that FID worsens by nearly 0.1 when continuous sampling is removed (model "E"). We also observe that our full model leads to more visually consistent content; for example, box structures are well restored according to the visible pixels in Fig. 5. Thus, continuous sampling brings higher fidelity to our results. As shown in Fig. 6, our model also supports pluralistic generation, particularly in the hole's center. However, when the mean term is estimated with low uncertainty or the iteration number is constrained, the differences among results are not always immediately obvious. A detailed analysis of the fidelity-diversity trade-off is provided in the supplementary file.
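Continuous sampling can be written as a reparameterized Gaussian draw; the sketch below adds a temperature knob of our own (not from the paper) to make the diversity-fidelity trade explicit.

```python
import torch

def continuous_sample(mean, log_var, temperature=1.0):
    """x_hat = mu + sigma * eps. Different noise draws give pluralistic outputs;
    temperature scales sigma and trades diversity against fidelity."""
    return mean + temperature * torch.exp(0.5 * log_var) * torch.randn_like(mean)
```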
Uncertainty-guided attention. To fully exploit distant context, we add attention blocks to our framework. We first compare using attention at both 32 × 32 and 16 × 16 (model "A") against only 16 × 16 (model "G"), and observe a 0.28 FID degradation for model "G" in the quantitative comparison in Table 1. Moreover, a vanilla attention mechanism may result in color inconsistency and blurriness. To support this claim, we remove the uncertainty guidance (model "F") and notice a minor performance drop in Table 1. The visual comparison in Fig. 5 likewise shows that model "A" produces more visually appealing window details than model "F".

Table 4. Ablation study of mask schedule functions.
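One plausible realization of the guidance, offered purely as an illustrative guess rather than the paper's exact formulation, is to bias the attention logits so that uncertain key positions contribute less:

```python
import torch

def uncertainty_guided_attention(q, k, v, uncertainty, eps=1e-6):
    """Scaled dot-product attention whose logits are down-weighted for
    uncertain key positions. q, k, v: (B, N, C); uncertainty: (B, N) in [0, 1]."""
    logits = torch.einsum("bnc,bmc->bnm", q, k) * q.shape[-1] ** -0.5
    logits = logits + torch.log(1.0 - uncertainty + eps).unsqueeze(1)  # certainty prior
    return torch.einsum("bnm,bmc->bnc", logits.softmax(dim=-1), v)
```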
Mask schedule. As illustrated in Table 4 and Fig. 7, we analyze various mask schedule strategies and find that the uniform strategy achieves the best FID. We attribute this to the fact that the mask ratios of input images vary widely, and a uniform schedule results in more stable training across iterations.
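The sketch below contrasts a uniform schedule with two illustrative alternatives; the specific non-uniform functions compared in Fig. 7 are not named in the text, so these candidates are assumptions.

```python
import math

def reveal_fraction(t, num_iters, schedule="uniform"):
    """Cumulative fraction of hole pixels treated as known after iteration t
    (0-indexed); 'uniform' is the best-performing choice in the ablation."""
    r = (t + 1) / num_iters
    if schedule == "uniform":
        return r
    if schedule == "cosine":      # reveals slowly at first
        return 1.0 - math.cos(0.5 * math.pi * r)
    if schedule == "square":      # also back-loaded
        return r ** 2
    raise ValueError(f"unknown schedule: {schedule}")
```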
Comparisons to State-of-the-Art Methods
We thoroughly compare the proposed pixel spread model (PSM) with GAN-based models [34,80,56,83,76,70,72], autoregressive models [62], and denoising diffusion models [46] in Table 3. Although the latter two families have recently made notable progress, even for commercial use, the majority of off-the-shelf techniques can only handle images up to 256 × 256 resolution. For a fair comparison, we use publicly available 512 × 512 models and test them on the same masks.
Figure 8. Qualitative side-by-side comparisons of state-of-the-art methods (Big-LaMa, MAT, LDM, and our PSM) on 512 × 512 Places2. Our PSM produces structures and details that are more realistic and reasonable.
As shown in Table 3, our PSM achieves state-of-the-art performance on the Places2 and CelebA-HQ benchmarks under both large and small mask settings, significantly outperforming existing GAN-based models. Moreover, even with only 20% of the parameters of the strong denoising diffusion model LDM [46], our method delivers superior results on all metrics. For example, on the Places2 benchmark under the large mask setup, our PSM brings about a 1.1 improvement in FID and larger gains in P-IDS and U-IDS. As for inference speed, our PSM takes nearly 250ms to produce a 512 × 512 image, which is 10× faster than LDM (∼3s). Notably, our model is trained with far fewer samples (our 20M images vs. CoModGAN's [80] 50M images), and we observe that longer training can further boost performance. All these results demonstrate the effectiveness of our method.
We also provide visual comparisons in Fig. 8. In a variety of scenes, our method generates more aesthetically pleasing textures with fewer artifacts when compared to existing methods. For instance, room layouts and building structures are better inpainted by our approach. More examples are provided in the supplementary materials.
Conclusion
We have proposed a new pixel spread model for large-hole image inpainting. Utilizing the proposed iteratively decoupled probabilistic modeling, our method can assess prediction accuracy and retain the pixels with the lowest uncertainty as hints for subsequent processing, yielding high-quality completion. Furthermore, our approach exhibits favorable inference efficiency, significantly surpassing that of prevalent denoising diffusion models. The state-of-the-art performance on multiple benchmarks demonstrates the effectiveness of our method. Additionally, our model has the potential to extend to other tasks, such as text-to-image generation, which we will explore in the future.
Limitation analysis. Our method shows a tendency to make more changes in small details rather than in large structures. We aim to improve the diversity of our generation in this regard. Additionally, our method sometimes struggles to understand objects when only a few hints are given, as illustrated by a few failure cases presented in the supplemental materials.
B. Generalization to 1024×1024 Resolution
To evaluate the generalization ability of models, we compare our pixel spread model (PSM), MAT [34], and LaMa [56], all trained on 512 × 512 Places2 [82], at the 1024 × 1024 resolution. As illustrated in Table B.1, our PSM performs significantly better than MAT and LaMa on all metrics, despite using fewer training samples. Remarkably, our approach yields an approximately 1.9 FID improvement. We do not include denoising diffusion models (e.g., LDM) or other GAN-based models (e.g., CoModGAN) in this comparison because scaling them up to the 1024 × 1024 resolution is impractical.
C. 512×512 LPIPS Results
LPIPS [79] is also a widely used perceptual metric in image inpainting. For a comprehensive comparison with state-of-the-art methods, we provide LPIPS results in Table C.2. We argue, however, that LPIPS may not be well suited to large-hole image inpainting because it is computed pixel-by-pixel; the measure is reported for reference only.
D. 256×256 CelebA-HQ Results
We also conduct quantitative comparisons on 256 × 256 CelebA-HQ [23] dataset. As shown in Table D.3, our method achieves the best performance among all methods.
E. Comparison to RePaint
Considering that the RePaint [38] results are at 256 × 256 on Places2 and CelebA-HQ while ours are at 512 × 512, we do not compare with it in the main body of the paper. Here we compare our PSM to RePaint at 256 × 256 resolution on Places2 and CelebA-HQ in Table E.4, where PSM achieves better performance and is 1000× faster than RePaint (0.25s vs. 250s per image). To save time, we use only the first 10K Places2 validation images for evaluation.
F. Fidelity-Diversity Trade-Off
Apart from FID (which depends on both diversity and fidelity), we follow previous work in using Improved Precision and Recall as fidelity (precision) and diversity (recall) measures. As shown in Table F.5, our model yields better FID, higher precision, yet slightly lower recall than LDM on Places2, while outperforming MAT on all metrics. Improving diversity will be our future work.
G. Additional Pluralistic Generation
As shown in Fig. 6 and Sec. 5, our method supports pluralistic generation. From Fig. G.1, we observe that the differences mainly lie in the fine-grained details. We will work on improving the generation diversity.
H. Additional Qualitative Comparisons
We provide more visual examples on 512 × 512 Places2 [82] and CelebA-HQ [23] in Fig. H.2, Fig. H.3, Fig. H.4, Fig. H.5, and Fig. H.6. Due to the space limit, we additionally add comparisons with CoModGAN [80] in Fig. H.7. Compared to other methods, ours generates more photo-realistic and semantically consistent content; for example, it successfully recovers human legs, airplane structures, and more realistic indoor and outdoor scenes. All these results demonstrate the effectiveness of our method.
I. Failure Cases
As discussed in Sec. 5, our model sometimes fails to recover damaged objects when limited clues are provided. We show some failure cases in Fig. I.8. For instance, the missing part of the notebook is filled with background, and the recovered bus structure is incomplete. We attribute this partly to a lack of high-level semantic understanding, and we will further improve the generative capability of our model.
Figure 2. Inpainting results of our PSM at different iterations. One-shot generation usually results in blurry content with unpleasing artifacts, while more iterations yield better results.

Figure 4. U-Net architecture with the proposed uncertainty-guided attention. We omit the mask update for clarity.

Figure 5. Qualitative ablation study. Model "A" is our full model. The proposed decoupled probabilistic modeling, continuous sampling, and uncertainty-guided attention designs are not used in models "D", "E", and "F".

Figure 7. Visualization of mask schedule functions.

Figure G.1. Visual examples of diverse generation for our method.

Figures H.2-H.4. Qualitative side-by-side comparisons of state-of-the-art methods on the 512 × 512 Places2 dataset. Please zoom in for a better view. Our PSM produces structures and details that are more realistic and reasonable.

Figures H.5-H.6. Qualitative side-by-side comparisons of state-of-the-art methods on the 512 × 512 CelebA-HQ dataset. Please zoom in for a better view. Our PSM produces face outlines and details that are more realistic and reasonable.

Figure H.7. Qualitative comparisons between CoModGAN and our PSM on the 512 × 512 Places2 and CelebA-HQ datasets.

Figure I.8. Failure cases of our PSM. It is difficult to recover the objects when large-scale regions are missing.
Table 1. Quantitative ablation study of the number of training iterations, modeling and sampling strategies, and architecture designs. Model "A" refers to the full model. Models "B" and "C" use fewer iterations. We tease apart the decoupled probabilistic modeling (DPM), continuous sampling (CS), and uncertainty-guided attention (UGA) in models "D", "E", and "F". Model "G" only adopts attention blocks at 16 × 16 resolution.

| Model | Iter. | DPM | CS | UGA | Att. Res. | FID↓ |
|-------|-------|-----|----|-----|-----------|------|
| A | 3 | ✓ | ✓ | ✓ | 32, 16 | 2.36 |
| B | 1 | – | – | ✓ | 32, 16 | 2.95 |
| C | 2 | ✓ | ✓ | ✓ | 32, 16 | 2.55 |
| D | 3 | ✗ | ✓ | ✓ | 32, 16 | 2.49 |
| E | 3 | ✓ | ✗ | ✓ | 32, 16 | 2.45 |
| F | 3 | ✓ | ✓ | ✗ | 32, 16 | 2.44 |
| G | 3 | ✓ | ✓ | ✓ | 16 | 2.64 |
Table B.1. Quantitative comparisons on the 1024 × 1024 Places2 [82] dataset under the large mask setup, obtained by transferring models trained at the 512 × 512 resolution. The best and second best results are in red and blue. Our PSM generalizes well to higher resolutions.

| Method | FID↓ | P-IDS (%)↑ | U-IDS (%)↑ |
|--------|------|------------|------------|
| PSM (Ours) | 3.95 | 14.40 | 32.23 |
| MAT [34] | 5.83 | 9.51 | 28.02 |
| LaMa [56] | 6.31 | 4.98 | 23.24 |

Table C.2. LPIPS↓ results on the 512 × 512 Places2 [82] and CelebA-HQ [23] datasets. "†": our models are trained with 20M samples, much less than other methods (e.g., MAT uses 50M samples on Places2 and 25M samples on CelebA-HQ); we use a single model for both the small and large mask setups. "‡": the official Stable Diffusion inpainting model is trained on the large-scale high-quality dataset LAION-Aesthetics V2 5+. The best and second best results are in red and blue.

| Method | #Param. (×10⁶) | Places2 Small | Places2 Large | CelebA-HQ Small | CelebA-HQ Large |
|--------|----------------|---------------|---------------|-----------------|-----------------|
| PSM (Ours)† | 74 | 0.084 | 0.161 | 0.052 | 0.099 |
| Stable Diffusion‡ | 860 | 0.148 | 0.220 | – | – |
| LDM [46] | 387 | 0.100 | 0.190 | – | – |
| MAT [34] | 62 | 0.099 | 0.189 | 0.065 | 0.125 |
| CoModGAN [80] | 109 | 0.101 | 0.192 | 0.073 | 0.140 |
| LaMa [56] | 51/27 | 0.086 | 0.166 | 0.075 | 0.143 |
| ICT [62] | 150 | – | – | 0.105 | 0.195 |
| MADF [83] | 85 | 0.095 | 0.181 | 0.068 | 0.130 |
| AOT GAN [76] | 15 | 0.101 | 0.195 | 0.074 | 0.145 |
| HFill [70] | 3 | 0.148 | 0.284 | – | – |
| DeepFill v2 [72] | 4 | 0.113 | 0.213 | 0.117 | 0.221 |
| EdgeConnect [40] | 22 | 0.114 | 0.275 | 0.101 | 0.208 |

Table D.3. Quantitative comparisons on the 256 × 256 CelebA-HQ [23] dataset. The P-IDS and U-IDS results are shown in percentage (%). "†": our model is trained with 12M samples, far less than other methods (e.g., MAT uses 25M samples); we use a single model for both the small and large mask setups. The best and second best results are in red and blue.

| Method | Small FID↓ | Small P-IDS↑ | Small U-IDS↑ | Large FID↓ | Large P-IDS↑ | Large U-IDS↑ |
|--------|------------|--------------|--------------|------------|--------------|--------------|
| PSM (Ours)† | 2.58 | 21.35 | 33.70 | 4.57 | 14.07 | 25.28 |
| MAT [34] | 2.94 | 20.88 | 32.01 | 5.16 | 13.90 | 25.13 |
| LaMa [56] | 3.98 | 8.82 | 22.57 | 8.75 | 2.34 | 8.77 |
| ICT [62] | 5.24 | 4.51 | 17.39 | 10.92 | 0.90 | 5.23 |
| MADF [83] | 10.43 | 6.25 | 14.62 | 23.59 | 0.50 | 1.44 |
| AOT GAN [76] | 9.64 | 5.61 | 14.62 | 22.91 | 0.47 | 1.65 |
| DeepFill v2 [72] | 5.69 | 6.62 | 16.82 | 13.23 | 0.84 | 2.62 |
| EdgeConnect [40] | 5.24 | 5.61 | 15.65 | 12.16 | 0.84 | 2.31 |

Table E.4. Quantitative comparisons with RePaint [38] on the 256 × 256 Places2 [82] and CelebA-HQ [23] datasets (FID↓, P-IDS (%)↑, U-IDS (%)↑).

| Method | Places2-10K FID↓ | Places2-10K P-IDS↑ | Places2-10K U-IDS↑ | CelebA-HQ FID↓ | CelebA-HQ P-IDS↑ | CelebA-HQ U-IDS↑ |
|--------|------------------|--------------------|--------------------|----------------|------------------|------------------|
| Ours | 3.47 | 18.32 | 34.52 | 4.57 | 14.07 | 25.28 |
| RePaint | 6.15 | 11.11 | 27.16 | 10.55 | 0.07 | 1.47 |

Table F.5. FID, precision, and recall comparisons for evaluating the fidelity-diversity trade-off on the 512 × 512 Places2 [82] dataset.

| Method | #Param. | FID↓ | Precision↑ | Recall↑ |
|--------|---------|------|------------|---------|
| PSM (Ours) | 74M | 1.68 | 0.983 | 0.971 |
| LDM | 387M | 2.76 | 0.962 | 0.975 |
| MAT | 62M | 2.90 | 0.965 | 0.939 |
An early convolutional block is also introduced at these scales. Different attention blocks use adaptive mapping functions (Fig. 4), each of which is composed of 4 convolutional layers with a kernel size of 3 × 3. The input consists of 7 channels: 3 for color images, 1 for the initial mask, 1 for the updated mask, 1 for the uncertainty map, and 1 for the time step. The number of channels is initially converted to 64, then doubled after each downsampling up to a maximum of 512, and the decoder employs a symmetrical setup. The output contains 6 channels: 3 for the mean term and 3 for the log variance term. We use a weight modulation technique, where the style representation is derived from a global image feature and a random latent code. For the global feature, we employ convolutional layers to further downsample the feature size from H/32 × W/32 to H/256 × W/256 and a global pooling layer to obtain a 1-d representation. The random latent code is mapped from Gaussian noise by 8 fully connected layers.
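A minimal sketch of this modulation pathway is given below. The paper specifies the 8-layer mapping network and the pooled global feature; the feature dimensions and the concatenation-based fusion are our assumptions.

```python
import torch

class StyleMapper(torch.nn.Module):
    """Sketch: Gaussian noise is mapped by 8 fully connected layers, then
    combined with the pooled global image feature into a style vector."""
    def __init__(self, feat_dim=512, z_dim=512, w_dim=512):
        super().__init__()
        blocks, dim = [], z_dim
        for _ in range(8):
            blocks += [torch.nn.Linear(dim, w_dim), torch.nn.LeakyReLU(0.2)]
            dim = w_dim
        self.mapping = torch.nn.Sequential(*blocks)
        self.fuse = torch.nn.Linear(feat_dim + w_dim, w_dim)  # assumed fusion

    def forward(self, global_feat, z):
        return self.fuse(torch.cat([global_feat, self.mapping(z)], dim=1))
```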
Blended diffusion for text-driven editing of natural images. Omri Avrahami, Dani Lischinski, Ohad Fried, CVPR. Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In CVPR, pages 18208-18218, 2022.
Filling-in by joint interpolation of vector fields and gray levels. Coloma Ballester, Marcelo Bertalmio, Vicent Caselles, Guillermo Sapiro, Joan Verdera, TIP. 108Coloma Ballester, Marcelo Bertalmio, Vicent Caselles, Guillermo Sapiro, and Joan Verdera. Filling-in by joint in- terpolation of vector fields and gray levels. TIP, 10(8):1200- 1211, 2001.
Patchmatch: A randomized correspondence algorithm for structural image editing. Connelly Barnes, Eli Shechtman, Adam Finkelstein, Dan B Goldman, TOG. 28324Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman. Patchmatch: A randomized correspon- dence algorithm for structural image editing. TOG, 28(3):24, 2009.
Non-local image dehazing. Dana Berman, Shai Avidan, CVPR. Dana Berman, Shai Avidan, et al. Non-local image dehazing. In CVPR, pages 1674-1682, 2016.
Image inpainting. Marcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, Coloma Ballester, Proceedings of the 27th annual conference on Computer graphics and interactive techniques. the 27th annual conference on Computer graphics and interactive techniquesMarcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, and Coloma Ballester. Image inpainting. In Proceedings of the 27th annual conference on Computer graphics and interac- tive techniques, pages 417-424, 2000.
A non-local algorithm for image denoising. Antoni Buades, Bartomeu Coll, J-M Morel, CVPR. IEEE2Antoni Buades, Bartomeu Coll, and J-M Morel. A non-local algorithm for image denoising. In CVPR, volume 2, pages 60-65. IEEE, 2005.
Masked generative image transformer. Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, William T Freeman, Maskgit, CVPR. Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In CVPR, pages 11315-11325, 2022.
Weakly-and self-supervised learning for content-aware deep image retargeting. Donghyeon Cho, Jinsun Park, Tae-Hyun Oh, Yu-Wing Tai, In So Kweon, ICCV. Donghyeon Cho, Jinsun Park, Tae-Hyun Oh, Yu-Wing Tai, and In So Kweon. Weakly-and self-supervised learning for content-aware deep image retargeting. In ICCV, pages 4558- 4567, 2017.
Object removal by exemplar-based inpainting. Antonio Criminisi, Patrick Perez, Kentaro Toyama, CVPR. II-IIIEEEAntonio Criminisi, Patrick Perez, and Kentaro Toyama. Ob- ject removal by exemplar-based inpainting. In CVPR, vol- ume 2, pages II-II. IEEE, 2003.
Region filling and object removal by exemplar-based image inpainting. Antonio Criminisi, Patrick Pérez, Kentaro Toyama, TIP. 139Antonio Criminisi, Patrick Pérez, and Kentaro Toyama. Re- gion filling and object removal by exemplar-based image in- painting. TIP, 13(9):1200-1212, 2004.
Diffusion models beat gans on image synthesis. Prafulla Dhariwal, Alexander Nichol, NIPS. 34Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. NIPS, 34:8780-8794, 2021.
Image inpainting using nonlocal texture matching and nonlinear filtering. Ding Ding, Sundaresh Ram, Jeffrey J Rodríguez, TIP. 284Ding Ding, Sundaresh Ram, and Jeffrey J Rodríguez. Image inpainting using nonlocal texture matching and nonlinear fil- tering. TIP, 28(4):1705-1719, 2018.
. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Generative adversarial nets. NIPS. 27Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. NIPS, 27, 2014.
. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron C Courville, Improved training of wasserstein gans. NIPS, 30Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. NIPS, 30, 2017.
Progressive image inpainting with full-resolution residual network. Zongyu Guo, Zhibo Chen, Tao Yu, Jiale Chen, Sen Liu, ACMMM. Zongyu Guo, Zhibo Chen, Tao Yu, Jiale Chen, and Sen Liu. Progressive image inpainting with full-resolution resid- ual network. In ACMMM, pages 2496-2504, 2019.
Scene completion using millions of photographs. James Hays, Alexei A Efros, ToG. 2634James Hays and Alexei A Efros. Scene completion using millions of photographs. ToG, 26(3):4-es, 2007.
Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, CVPR. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.
Gans trained by a two time-scale update rule converge to a local nash equilibrium. NIPS, 30. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Sepp Hochreiter, Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilib- rium. NIPS, 30, 2017.
Denoising diffusion probabilistic models. Jonathan Ho, Ajay Jain, Pieter Abbeel, NIPS. 33Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffu- sion probabilistic models. NIPS, 33:6840-6851, 2020.
Globally and locally consistent image completion. Satoshi Iizuka, Edgar Simo-Serra, Hiroshi Ishikawa, ToG. 364Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ToG, 36(4):1-14, 2017.
Sc-fegan: Face editing generative adversarial network with user's sketch and color. Youngjoo Jo, Jongyoul Park, ICCV. Youngjoo Jo and Jongyoul Park. Sc-fegan: Face editing gen- erative adversarial network with user's sketch and color. In ICCV, pages 1745-1753, 2019.
Perceptual losses for real-time style transfer and super-resolution. Justin Johnson, Alexandre Alahi, Li Fei-Fei, ECCV. SpringerJustin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, pages 694-711. Springer, 2016.
Progressive growing of gans for improved quality, stability, and variation. Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen, ICLR. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In ICLR, 2018.
Training generative adversarial networks with limited data. Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila, NIPS. 33Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adver- sarial networks with limited data. NIPS, 33:12104-12114, 2020.
A style-based generator architecture for generative adversarial networks. Tero Karras, Samuli Laine, Timo Aila, CVPR. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, pages 4401-4410, 2019.
Analyzing and improving the image quality of stylegan. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, Timo Aila, CVPR. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In CVPR, pages 8110-8119, 2020.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, ICLR. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Naveen Kodali, Jacob Abernethy, James Hays, Zsolt Kira, arXiv:1705.07215On convergence and stability of gans. arXiv preprintNaveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of gans. arXiv preprint arXiv:1705.07215, 2017.
Examplar-based inpainting based on local geometry. Josselin Olivier Le Meur, Christine Gautier, Guillemot, ICIP. IEEEOlivier Le Meur, Josselin Gautier, and Christine Guillemot. Examplar-based inpainting based on local geometry. In ICIP, pages 3401-3404. IEEE, 2011.
Photorealistic single image super-resolution using a generative adversarial network. Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, CVPR. Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo- realistic single image super-resolution using a generative ad- versarial network. In CVPR, pages 4681-4690, 2017.
Laplacian patchbased image synthesis. Joo Ho Lee, Inchang Choi, Min H Kim, CVPR. Joo Ho Lee, Inchang Choi, and Min H Kim. Laplacian patch- based image synthesis. In CVPR, pages 2727-2735, 2016.
Seamless image stitching in the gradient domain. Anat Levin, Assaf Zomet, Shmuel Peleg, Yair Weiss, ECCV. SpringerAnat Levin, Assaf Zomet, Shmuel Peleg, and Yair Weiss. Seamless image stitching in the gradient domain. In ECCV, pages 377-389. Springer, 2004.
Recurrent feature reasoning for image inpainting. Jingyuan Li, Ning Wang, Lefei Zhang, Bo Du, Dacheng Tao, CVPR. Jingyuan Li, Ning Wang, Lefei Zhang, Bo Du, and Dacheng Tao. Recurrent feature reasoning for image inpainting. In CVPR, pages 7760-7768, 2020.
Mat: Mask-aware transformer for large hole image inpainting. Wenbo Li, Zhe Lin, Kun Zhou, Lu Qi, Yi Wang, Jiaya Jia, CVPR. Wenbo Li, Zhe Lin, Kun Zhou, Lu Qi, Yi Wang, and Jiaya Jia. Mat: Mask-aware transformer for large hole image in- painting. In CVPR, pages 10758-10768, 2022.
Image inpainting for irregular holes using partial convolutions. Guilin Liu, A Fitsum, Kevin J Reda, Ting-Chun Shih, Andrew Wang, Bryan Tao, Catanzaro, ECCV. Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for ir- regular holes using partial convolutions. In ECCV, pages 85-100, 2018.
Rethinking image inpainting via a mutual encoderdecoder with feature equalizations. Hongyu Liu, Bin Jiang, Yibing Song, Wei Huang, Chao Yang, ECCV. SpringerHongyu Liu, Bin Jiang, Yibing Song, Wei Huang, and Chao Yang. Rethinking image inpainting via a mutual encoder- decoder with feature equalizations. In ECCV, pages 725- 741. Springer, 2020.
Coherent semantic attention for image inpainting. Hongyu Liu, Bin Jiang, Yi Xiao, Chao Yang, ICCV. Hongyu Liu, Bin Jiang, Yi Xiao, and Chao Yang. Coher- ent semantic attention for image inpainting. In ICCV, pages 4170-4179, 2019.
Repaint: Inpainting using denoising diffusion probabilistic models. Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool, CVPR. Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. In CVPR, pages 11461-11471, 2022.
Non-local sparse models for image restoration. Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, Andrew Zisserman, ICCV. IEEEJulien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Non-local sparse models for image restoration. In ICCV, pages 2272-2279. IEEE, 2009.
Kamyar Nazeri, Eric Ng, Tony Joseph, Z Faisal, Mehran Qureshi, Ebrahimi, arXiv:1901.00212Generative image inpainting with adversarial edge learning. arXiv preprintKamyar Nazeri, Eric Ng, Tony Joseph, Faisal Z Qureshi, and Mehran Ebrahimi. Edgeconnect: Generative image inpainting with adversarial edge learning. arXiv preprint arXiv:1901.00212, 2019.
Onion-peel networks for deep video completion. Sungho Seoung Wug Oh, Joon-Young Lee, Seon Joo Lee, Kim, ICCV. Seoung Wug Oh, Sungho Lee, Joon-Young Lee, and Seon Joo Kim. Onion-peel networks for deep video com- pletion. In ICCV, pages 4403-4412, 2019.
Image transformer. Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, Dustin Tran, ICML. PMLRNiki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Im- age transformer. In ICML, pages 4055-4064. PMLR, 2018.
Context encoders: Feature learning by inpainting. Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A Efros, CVPR. Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR, pages 2536-2544, 2016.
Hierarchical text-conditional image generation with clip latents. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen, arXiv:2204.06125arXiv preprintAditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image gen- eration with clip latents. arXiv preprint arXiv:2204.06125, 2022.
Structureflow: Image inpainting via structure-aware appearance flow. Yurui Ren, Xiaoming Yu, Ruonan Zhang, H Thomas, Shan Li, Ge Liu, Li, ICCV. Yurui Ren, Xiaoming Yu, Ruonan Zhang, Thomas H Li, Shan Liu, and Ge Li. Structureflow: Image inpainting via structure-aware appearance flow. In ICCV, pages 181-190, 2019.
High-resolution image synthesis with latent diffusion models. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer, CVPR. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image syn- thesis with latent diffusion models. In CVPR, pages 10684- 10695, 2022.
Unet: Convolutional networks for biomedical image segmentation. Olaf Ronneberger, Philipp Fischer, Thomas Brox, International Conference on Medical image computing and computer-assisted intervention. SpringerOlaf Ronneberger, Philipp Fischer, and Thomas Brox. U- net: Convolutional networks for biomedical image segmen- tation. In International Conference on Medical image com- puting and computer-assisted intervention, pages 234-241. Springer, 2015.
Palette: Image-to-image diffusion models. Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, Mohammad Norouzi, ACM SIGGRAPH 2022 Conference Proceedings. Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 Conference Proceedings, pages 1- 10, 2022.
Photorealistic text-to-image diffusion models with deep language understanding. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, ; S Sara Mahdavi, Rapha Gontijo Lopes, arXiv:2205.11487Burcu Karagol Ayan. arXiv preprintSeyed Kamyar Seyed GhasemipourChitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.
Enhancenet: Single image super-resolution through automated texture synthesis. S M Mehdi, Bernhard Sajjadi, Michael Scholkopf, Hirsch, ICCV. Mehdi SM Sajjadi, Bernhard Scholkopf, and Michael Hirsch. Enhancenet: Single image super-resolution through automated texture synthesis. In ICCV, pages 4491-4500, 2017.
Improved techniques for training gans. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, NIPS. 29Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. NIPS, 29, 2016.
Make-a-video: Text-to-video generation without text-video data. Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, arXiv:2209.14792arXiv preprintUriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. arXiv preprint arXiv:2209.14792, 2022.
Generative modeling by estimating gradients of the data distribution. Yang Song, Stefano Ermon, NIPS. 32Yang Song and Stefano Ermon. Generative modeling by es- timating gradients of the data distribution. NIPS, 32, 2019.
Yuhang Song, Chao Yang, Yeji Shen, Peng Wang, Qin Huang, C-C Jay Kuo, arXiv:1805.03356Spg-net: Segmentation prediction and guidance network for image inpainting. arXiv preprintYuhang Song, Chao Yang, Yeji Shen, Peng Wang, Qin Huang, and C-C Jay Kuo. Spg-net: Segmentation prediction and guidance network for image inpainting. arXiv preprint arXiv:1805.03356, 2018.
Image completion with structure propagation. Jian Sun, Lu Yuan, Jiaya Jia, Heung-Yeung Shum, ACM SIGGRAPH 2005 Papers. Jian Sun, Lu Yuan, Jiaya Jia, and Heung-Yeung Shum. Image completion with structure propagation. In ACM SIGGRAPH 2005 Papers, pages 861-868. 2005.
Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, arXiv:2109.07161Harshith Goka, Kiwoong Park, and Victor Lempitsky. Resolution-robust large mask inpainting with fourier convolutions. arXiv preprintRoman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, and Victor Lempitsky. Resolution-robust large mask inpainting with fourier convolutions. arXiv preprint arXiv:2109.07161, 2021.
Conditional image generation with pixelcnn decoders. Aaron Van Den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, NIPS. 29Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image gen- eration with pixelcnn decoders. NIPS, 29, 2016.
Pixel recurrent neural networks. Aäron Van Den, Nal Oord, Koray Kalchbrenner, Kavukcuoglu, ICML. PMLRAäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, pages 1747-1756. PMLR, 2016.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, NIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko- reit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, pages 5998- 6008, 2017.
Bringing old photos back to life. Ziyu Wan, Bo Zhang, Dongdong Chen, Pan Zhang, Dong Chen, Jing Liao, Fang Wen, CVPR. Ziyu Wan, Bo Zhang, Dongdong Chen, Pan Zhang, Dong Chen, Jing Liao, and Fang Wen. Bringing old photos back to life. In CVPR, pages 2747-2757, 2020.
Old photo restoration via deep latent space translation. Ziyu Wan, Bo Zhang, Dongdong Chen, Pan Zhang, Dong Chen, Fang Wen, Jing Liao, TPAMIZiyu Wan, Bo Zhang, Dongdong Chen, Pan Zhang, Dong Chen, Fang Wen, and Jing Liao. Old photo restoration via deep latent space translation. TPAMI, 2022.
High-fidelity pluralistic image completion with transformers. Ziyu Wan, Jingbo Zhang, Dongdong Chen, Jing Liao, arXiv:2103.14031arXiv preprintZiyu Wan, Jingbo Zhang, Dongdong Chen, and Jing Liao. High-fidelity pluralistic image completion with transform- ers. arXiv preprint arXiv:2103.14031, 2021.
Xiaolong Wang, Ross Girshick, Abhinav Gupta, Kaiming He, Non-local neural networks. In CVPR. Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaim- ing He. Non-local neural networks. In CVPR, pages 7794- 7803, 2018.
Image inpainting via generative multi-column convolutional neural networks. Yi Wang, Xin Tao, Xiaojuan Qi, Xiaoyong Shen, Jiaya Jia, NIPS. Yi Wang, Xin Tao, Xiaojuan Qi, Xiaoyong Shen, and Jiaya Jia. Image inpainting via generative multi-column convolu- tional neural networks. NIPS, 2018.
Image quality assessment: from error visibility to structural similarity. Zhou Wang, Alan C Bovik, R Hamid, Eero P Sheikh, Simoncelli, TIP. 134Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 13(4):600-612, 2004.
Yuejian Fang, and Nan Duan. Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis. Chenfei Wu, Jian Liang, Xiaowei Hu, Zhe Gan, Jianfeng Wang, Lijuan Wang, Zicheng Liu, 2022Chenfei Wu, Jian Liang, Xiaowei Hu, Zhe Gan, Jianfeng Wang, Lijuan Wang, Zicheng Liu, Yuejian Fang, and Nan Duan. Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis. NIPS, 2022.
Image inpainting with learnable bidirectional attention maps. Chaohao Xie, Shaohui Liu, Chao Li, Ming-Ming Cheng, Wangmeng Zuo, Xiao Liu, Shilei Wen, Errui Ding, ICCV. Chaohao Xie, Shaohui Liu, Chao Li, Ming-Ming Cheng, Wangmeng Zuo, Xiao Liu, Shilei Wen, and Errui Ding. Im- age inpainting with learnable bidirectional attention maps. In ICCV, pages 8858-8867, 2019.
Foreground-aware image inpainting. Wei Xiong, Jiahui Yu, Zhe Lin, Jimei Yang, Xin Lu, Connelly Barnes, Jiebo Luo, CVPR. Wei Xiong, Jiahui Yu, Zhe Lin, Jimei Yang, Xin Lu, Con- nelly Barnes, and Jiebo Luo. Foreground-aware image in- painting. In CVPR, pages 5840-5848, 2019.
Shift-net: Image inpainting via deep feature rearrangement. Zhaoyi Yan, Xiaoming Li, Mu Li, Wangmeng Zuo, Shiguang Shan, ECCV. Zhaoyi Yan, Xiaoming Li, Mu Li, Wangmeng Zuo, and Shiguang Shan. Shift-net: Image inpainting via deep feature rearrangement. In ECCV, pages 1-17, 2018.
Contextual residual aggregation for ultra high-resolution image inpainting. Zili Yi, Qiang Tang, Shekoofeh Azizi, Daesik Jang, Zhan Xu, CVPR. Zili Yi, Qiang Tang, Shekoofeh Azizi, Daesik Jang, and Zhan Xu. Contextual residual aggregation for ultra high-resolution image inpainting. In CVPR, pages 7508-7517, 2020.
Generative image inpainting with contextual attention. Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, Thomas S Huang, CVPR. Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contex- tual attention. In CVPR, pages 5505-5514, 2018.
Free-form image inpainting with gated convolution. Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, Thomas S Huang, ICCV. Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Free-form image inpainting with gated convolution. In ICCV, pages 4471-4480, 2019.
Scaling autoregressive models for content-rich text-to-image generation. Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, arXiv:2206.10789arXiv preprintJiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gun- jan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yin- fei Yang, Burcu Karagol Ayan, et al. Scaling autoregres- sive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2022.
Yingchen Yu, Fangneng Zhan, Rongliang Wu, Jianxiong Pan, Kaiwen Cui, Shijian Lu, Feiying Ma, Xuansong Xie, Chunyan Miao, arXiv:2104.12335Diverse image inpainting with bidirectional and autoregressive transformers. arXiv preprintYingchen Yu, Fangneng Zhan, Rongliang Wu, Jianxiong Pan, Kaiwen Cui, Shijian Lu, Feiying Ma, Xuansong Xie, and Chunyan Miao. Diverse image inpainting with bidi- rectional and autoregressive transformers. arXiv preprint arXiv:2104.12335, 2021.
Learning pyramid-context encoder network for highquality image inpainting. Yanhong Zeng, Jianlong Fu, Hongyang Chao, Baining Guo, CVPR. Yanhong Zeng, Jianlong Fu, Hongyang Chao, and Baining Guo. Learning pyramid-context encoder network for high- quality image inpainting. In CVPR, pages 1486-1494, 2019.
Aggregated contextual transformations for high-resolution image inpainting. Yanhong Zeng, Jianlong Fu, Hongyang Chao, Baining Guo, arXiv:2104.01431arXiv preprintYanhong Zeng, Jianlong Fu, Hongyang Chao, and Baining Guo. Aggregated contextual transformations for high-resolution image inpainting. arXiv preprint arXiv:2104.01431, 2021.
High-resolution image inpainting with iterative confidence feedback and guided upsampling. Yu Zeng, Zhe Lin, Jimei Yang, Jianming Zhang, Eli Shechtman, Huchuan Lu, ECCV. SpringerYu Zeng, Zhe Lin, Jimei Yang, Jianming Zhang, Eli Shecht- man, and Huchuan Lu. High-resolution image inpainting with iterative confidence feedback and guided upsampling. In ECCV, pages 1-17. Springer, 2020.
Semantic image inpainting with progressive generative networks. Haoran Zhang, Zhenzhen Hu, Changzhi Luo, Wangmeng Zuo, Meng Wang, ACMMM. Haoran Zhang, Zhenzhen Hu, Changzhi Luo, Wangmeng Zuo, and Meng Wang. Semantic image inpainting with pro- gressive generative networks. In ACMMM, pages 1939- 1947, 2018.
The unreasonable effectiveness of deep features as a perceptual metric. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, Oliver Wang, CVPR. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, pages 586-595, 2018.
Large scale image completion via co-modulated generative adversarial networks. Shengyu Zhao, Jonathan Cui, Yilun Sheng, Yue Dong, Xiao Liang, Chao Eric, Yan Chang, Xu, ICLR. Shengyu Zhao, Jonathan Cui, Yilun Sheng, Yue Dong, Xiao Liang, I Eric, Chao Chang, and Yan Xu. Large scale im- age completion via co-modulated generative adversarial net- works. In ICLR, 2020.
Tfill: Image completion via a transformer-based architecture. Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai, arXiv:2104.00845arXiv preprintChuanxia Zheng, Tat-Jen Cham, and Jianfei Cai. Tfill: Im- age completion via a transformer-based architecture. arXiv preprint arXiv:2104.00845, 2021.
Places: A 10 million image database for scene recognition. Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, Antonio Torralba, PAMI. 406Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. PAMI, 40(6):1452-1464, 2017.
Image inpainting by end-to-end cascaded refinement with mask awareness. Manyu Zhu, Dongliang He, Xin Li, Chao Li, Fu Li, Xiao Liu, Errui Ding, Zhaoxiang Zhang, TIP. 30Manyu Zhu, Dongliang He, Xin Li, Chao Li, Fu Li, Xiao Liu, Errui Ding, and Zhaoxiang Zhang. Image inpainting by end-to-end cascaded refinement with mask awareness. TIP, 30:4855-4866, 2021. |
3,502,468 | FEARNET: BRAIN-INSPIRED MODEL FOR INCREMENTAL LEARNING | Incremental class learning involves sequentially learning classes in bursts of examples from the same class. This violates the assumptions that underlie methods for training standard deep neural networks, and will cause them to suffer from catastrophic forgetting. Arguably, the best method for incremental class learning is iCaRL, but it requires storing training examples for each class, making it challenging to scale. Here, we propose FearNet for incremental class learning. Fear-Net is a generative model that does not store previous examples, making it memory efficient. FearNet uses a brain-inspired dual-memory system in which new memories are consolidated from a network for recent memories inspired by the mammalian hippocampal complex to a network for long-term storage inspired by medial prefrontal cortex. Memory consolidation is inspired by mechanisms that occur during sleep. FearNet also uses a module inspired by the basolateral amygdala for determining which memory system to use for recall. FearNet achieves state-of-the-art performance at incremental class learning on image (CIFAR-100, CUB-200) and audio classification (AudioSet) benchmarks. | [
5273326
] | FEARNET: BRAIN-INSPIRED MODEL FOR INCREMENTAL LEARNING
Ronald Kemker
Carlson Center for Imaging Science
Rochester Institute of Technology Rochester
14623NYUSA
Christopher Kanan [email protected]
Carlson Center for Imaging Science
Rochester Institute of Technology Rochester
14623NYUSA
Incremental class learning involves sequentially learning classes in bursts of examples from the same class. This violates the assumptions that underlie methods for training standard deep neural networks, and will cause them to suffer from catastrophic forgetting. Arguably, the best method for incremental class learning is iCaRL, but it requires storing training examples for each class, making it challenging to scale. Here, we propose FearNet for incremental class learning. FearNet is a generative model that does not store previous examples, making it memory efficient. FearNet uses a brain-inspired dual-memory system in which new memories are consolidated from a network for recent memories inspired by the mammalian hippocampal complex to a network for long-term storage inspired by medial prefrontal cortex. Memory consolidation is inspired by mechanisms that occur during sleep. FearNet also uses a module inspired by the basolateral amygdala for determining which memory system to use for recall. FearNet achieves state-of-the-art performance at incremental class learning on image (CIFAR-100, CUB-200) and audio classification (AudioSet) benchmarks.
INTRODUCTION
In incremental classification, an agent must sequentially learn to classify training examples, without necessarily having the ability to re-study previously seen examples. While deep neural networks (DNNs) have revolutionized machine perception (Krizhevsky et al., 2012), off-the-shelf DNNs cannot incrementally learn classes due to catastrophic forgetting. Catastrophic forgetting is a phenomenon in which a DNN completely fails to learn new data without forgetting much of its previously learned knowledge (McCloskey & Cohen, 1989). While methods have been developed to try and mitigate catastrophic forgetting, as shown in Kemker et al. (2017), these methods are not sufficient and perform poorly on larger datasets. In this paper, we propose FearNet, a brain-inspired system for incrementally learning categories that significantly outperforms previous methods.
The standard way for dealing with catastrophic forgetting in DNNs is to avoid it altogether by mixing new training examples with old ones and completely re-training the model offline. For large datasets, this may require weeks of time, and it is not a scalable solution. An ideal incremental learning system would be able to assimilate new information without the need to store the entire training dataset. A major application for incremental learning includes real-time operation on-board embedded platforms that have limited computing power, storage, and memory, e.g., smart toys, smartphone applications, and robots. For example, a toy robot may need to learn to recognize objects within its local environment and of interest to its owner. Using cloud computing to overcome these resource limitations may pose privacy risks and may not be scalable to a large number of embedded devices. A better solution is on-device incremental learning, which requires the model to use less storage and computational power.
In this paper, we propose an incremental learning framework called FearNet (see Fig. 1). FearNet has three brain-inspired sub-systems: 1) a recent memory system for quick recall, 2) a memory system for long-term storage, and 3) a sub-system that determines which memory system to use for a particular example. FearNet mitigates catastrophic forgetting by consolidating recent memories into long-term storage using pseudorehearsal (Robins, 1995). Pseudorehearsal allows the network to revisit previous memories during incremental training without the need to store previous training examples, which is more memory efficient.

Figure 1: FearNet consists of three brain-inspired modules based on 1) mPFC (long-term storage), 2) HC (recent storage), and 3) BLA for determining whether to use mPFC or HC for recall.
Problem Formulation: Here, incremental class learning consists of $T$ study-sessions. At time $t$, the learner receives a batch of data $B_t$, which contains $N_t$ labeled training samples, i.e., $B_t = \{(\mathbf{x}_j, y_j)\}_{j=1}^{N_t}$, where $\mathbf{x}_j \in \mathbb{R}^d$ is the input feature vector to be classified and $y_j$ is its corresponding label. The number of training samples $N_t$ may vary between sessions, and the data inside a study-session is not assumed to be independent and identically distributed (iid). During a study session, the learner only has access to its current batch, but it may use its own memory to store information from prior study sessions. We refer to the first session as the model's "base-knowledge," which contains exemplars from $M \geq 1$ classes. The batches learned in all subsequent sessions contain only one class, i.e., all $y_j$ will be identical within those sessions.
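For illustration, this protocol can be written as a short loop; the learner object and its three methods below are hypothetical placeholders standing in for the FearNet operations described later, not an API defined in the paper.

```python
from typing import Iterable, Tuple
import numpy as np

def run_study_sessions(learner, sessions: Iterable[Tuple[np.ndarray, np.ndarray]]):
    """Study-session protocol sketch: only the current batch B_t is visible at time t."""
    for t, (X_t, y_t) in enumerate(sessions):
        learner.store_in_hc(X_t, y_t)   # fast one-shot storage (HC)
        if learner.time_to_sleep(t):    # periodic "sleep" phase
            learner.consolidate()       # pseudorehearsal transfer: HC -> mPFC
```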
Novel Contributions: Our contributions include:
1. FearNet's architecture includes three neural networks: one inspired by the hippocampal complex (HC) for recent memories, one inspired by the medial prefrontal cortex (mPFC) for long-term storage, and one inspired by the basolateral amygdala (BLA) that determines whether to use HC or mPFC for recall.
2. Motivated by rapid eye movement (REM) sleep, FearNet employs a generative autoencoder for pseudorehearsal, which mitigates catastrophic forgetting by generating previously learned examples that are replayed alongside novel information during consolidation. This process does not involve storing previous training data.
3. FearNet achieves state-of-the-art results on large image and audio datasets with a relatively small memory footprint, demonstrating how dual-memory models can be scaled.
RELATED WORK
Catastrophic forgetting in DNNs occurs due to the plasticity-stability dilemma (Abraham & Robins, 2005). If the network is too plastic, older memories will quickly be overwritten; however, if the network is too stable, it is unable to learn new data. This problem was recognized almost 30 years ago. In French (1999), methods developed in the 1980s and 1990s are extensively discussed, and French argued that mitigating catastrophic forgetting would require having two separate memory centers: one for the long-term storage of older memories and another to quickly process new information as it comes in. He also theorized that this type of dual-memory system would be capable of consolidating memories from the fast learning memory center to long-term storage.
One of the earliest methods for reducing catastrophic forgetting is rehearsal (Hetherington & Seidenberg, 1989). In rehearsal, the new training examples in a study session are mixed with a random selection of old training examples from previous study sessions, which is a kind of memory replay. Rehearsal reduces forgetting, but performance is still worse than offline models. Moreover, rehearsal requires storing all of the training data. Robins (1995) argued that storing of training examples was inefficient and of "little interest," so he introduced pseudorehearsal. Rather than replaying past training data, in pseudorehearsal, the algorithm generates new examples for a given class. In Robins (1995), this was done by creating random input vectors, having the network assign them a label, and then mixing them into the new training data. This idea was revived in Draelos et al. (2017), where a generative autoencoder was used to create pseudo-examples for unsupervised incremental learning. This method was the inspiration for memory consolidation in FearNet.
Recently there has been renewed interest in solving catastrophic forgetting in supervised learning.
Many new methods are designed to mitigate catastrophic forgetting when each study session contains a permuted version of the entire training dataset (see Goodfellow et al. (2013)). Unlike incremental class learning, all labels are contained in each study session. PathNet uses an evolutionary algorithm to find the optimal path through a large DNN, and then freezes the weights along that path (Fernando et al., 2017). It assumes all classes are seen in each study session, and it is not capable of incremental class learning. Elastic Weight Consolidation (EWC) employs a regularization scheme that redirects plasticity to the weights that are least important to previously learned study sessions (Kirkpatrick et al., 2017). After EWC learns a study session, it uses the training data to build a Fisher matrix that determines the importance of each feature to the classification task it just learned. EWC was shown to work poorly at incremental class learning in Kemker et al. (2017).
The Fixed Expansion Layer (FEL) model mitigates catastrophic forgetting by using sparse updates (Coop et al., 2013). FEL uses two hidden layers, where the second hidden layer (i.e., the FEL layer) has connectivity constraints. The FEL layer is much larger than the first hidden layer, is sparsely populated with excitatory and inhibitory weights, and is not updated during training. This limits learning of dense shared representations, which reduces the risk of new learning interfering with old memories. FEL requires a large number of units to work well (Kemker et al., 2017).

Figure 2: Reducing the number of stored exemplars per class (EPC) from 20 (blue) to 1 (red) severely impairs iCaRL's ability to recall older information.
Gepperth & Karaoguz (2016) introduced a new approach for incremental learning, which we call GeppNet. GeppNet uses a self-organizing map (SOM) to reorganize the input onto a two-dimensional lattice, which serves as a long-term memory that is fed into a simple linear layer for classification. After the SOM is initialized, it can only be updated if the input is sufficiently novel, which prevents the model from forgetting older data too quickly. GeppNet also uses rehearsal with all previous training data. A variant, GeppNet+STM, uses a fixed-size memory buffer to store novel examples; when the buffer is full, the oldest example is replaced. During pre-defined intervals, the buffer is used to train the model. GeppNet+STM is better at retaining base-knowledge since it only trains during its consolidation phase, whereas the STM-free version learns new data better because it updates the model on every novel labeled input.
iCaRL (Rebuffi et al., 2017) is an incremental class learning framework. Rather than directly using a DNN for classification, iCaRL uses it for supervised representation learning. During a study session, iCaRL updates a DNN using the study session's data and a set of J stored examples from each class observed in earlier sessions (J = 20 in their paper), which is a kind of rehearsal. After a study session, the J examples retained are carefully chosen using herding. The DNN in iCaRL is then used to compute an embedding for each stored example, and then the mean embedding for each class seen is computed. To classify a new instance, the DNN is used to compute an embedding for it, and then the class with the nearest mean embedding is assigned. iCaRL's performance is heavily influenced by the number of examples it stores, as shown in Fig. 2.
MAMMALIAN MEMORY: NEUROSCIENCE AND MODELS
FearNet is heavily inspired by the dual-memory model of mammalian memory (McClelland et al., 1995), which has considerable experimental support from neuroscience (Frankland et al., 2004;Takashima et al., 2006;Kitamura et al., 2017;Bontempi et al., 1999;Taupin & Gage, 2002;Gais et al., 2007). This theory proposes that HC and mPFC operate as complementary memory systems, where HC is responsible for recalling recent memories and mPFC is responsible for recalling remote (mature) memories. GeppNet is the most recent DNN to be based on this theory, but it was also independently explored in the 1990s in French (1997) and Ans & Rousset (1997). In this section, we review some of the evidence for the dual-memory model.
One of the major reasons why HC is thought to be responsible for recent memories is that if HC is bilaterally destroyed, anterograde amnesia occurs while old semantic memories are preserved. One mechanism HC may use to facilitate creating new memories is adult neurogenesis, which occurs in HC's dentate gyrus (Altman, 1963; Eriksson et al., 1998). The new neurons have higher initial plasticity, but this plasticity diminishes as time progresses (Deng et al., 2010).
In contrast, mPFC is responsible for the recall of remote (long-term) memories (Bontempi et al., 1999). Taupin & Gage (2002) and Gais et al. (2007) showed that mPFC plays a strong role in memory consolidation during REM sleep. McClelland et al. (1995) and Euston et al. (2012) theorized that, during sleep, HC reactivates recent memories to prevent forgetting, which causes these recent memories to replay in mPFC as well, with dreams possibly being caused by this process. After memories are transferred from HC to mPFC, evidence suggests that the corresponding memory in HC is erased (Poe, 2017).
Recently, Kitamura et al. (2017) performed contextual fear conditioning (CFC) experiments in mice to trace the formation and consolidation of recent memories to long-term storage. CFC experiments involve shocking mice while subjecting them to various visual stimuli (i.e., colored lights). They found that BLA, which is responsible for regulating the brain's fear response, would shift where it retrieved the corresponding memory from (HC or mPFC) as that memory was consolidated over time. FearNet follows the memory consolidation theory proposed by Kitamura et al. (2017).
THE FEARNET MODEL
FearNet has two complementary memory centers, 1) a short-term memory system that immediately learns new information for recent recall (HC) and 2) a DNN for the storage of remote memories (mPFC). FearNet also has a separate BLA network that determines which memory center contains the associated memory required for prediction. During sleep phases, FearNet uses a generative model to consolidate data from HC to mPFC through pseudorehearsal.
DUAL-MEMORY STORAGE
FearNet's HC model is a variant of a probabilistic neural network (Specht, 1990). HC computes class conditional probabilities using stored training examples. Formally, HC estimates the probability that an input feature vector x belongs to class k as
$$P_{HC}(C = k \mid \mathbf{x}) = \frac{\beta_k}{\sum_{k'} \beta_{k'}} \quad (1)$$

$$\beta_k = \begin{cases} \left( \epsilon + \min_j \lVert \mathbf{x} - \mathbf{u}_{k,j} \rVert_2 \right)^{-1} & \text{if HC contains instances of class } k \\ 0 & \text{otherwise} \end{cases} \quad (2)$$
where $\epsilon > 0$ is a regularization parameter and $\mathbf{u}_{k,j}$ is the $j$-th stored exemplar in HC for class $k$. All exemplars are removed from HC after they are consolidated into mPFC.
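As a concrete illustration, a minimal numpy sketch of this exemplar-based estimate follows; the function name and the value of epsilon are our own choices, not taken from the paper's released code.

import numpy as np

def hc_predict_proba(x, exemplars, eps=1e-3):
    # Estimate P_HC(C = k | x) per Eqs. (1)-(2).
    # exemplars: dict mapping class k -> array of stored vectors u_{k,j}
    # with shape [n_k, d]; eps is the regularization parameter (its
    # value here is an assumption).
    betas = {}
    for k, U in exemplars.items():
        # Distance to the nearest stored exemplar of class k (Eq. 2).
        d = np.min(np.linalg.norm(U - x, axis=1))
        betas[k] = 1.0 / (eps + d)
    total = sum(betas.values())
    return {k: b / total for k, b in betas.items()}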
FearNet's mPFC is implemented using a DNN trained both to reconstruct its input using a symmetric encoder-decoder (autoencoder) and to compute $P_{mPFC}(C = k \mid \mathbf{x})$. The autoencoder enables us to use pseudorehearsal, which is described in more detail in Sec. 4.2. The loss function for mPFC is
$$\mathcal{L}_{mPFC} = \mathcal{L}_{class} + \mathcal{L}_{recon}, \quad (3)$$
where $\mathcal{L}_{class}$ is the supervised classification loss and $\mathcal{L}_{recon}$ is the unsupervised reconstruction loss, as illustrated in Fig. 3(a). For $\mathcal{L}_{class}$, we use the standard softmax loss. $\mathcal{L}_{recon}$ is the weighted sum of mean squared error (MSE) reconstruction losses from each layer, which is given by
$$\mathcal{L}_{recon} = \sum_{j=1}^{M} \lambda_j \mathcal{L}_{recon,j}, \quad (4)$$
where $M$ is the number of layers in the encoder/decoder, $\mathcal{L}_{recon,j}$ is the reconstruction loss for layer $j$, and $\lambda_j$ is the reconstruction weight for that layer. mPFC is similar to a Ladder Network architecture (Rasmus et al., 2015), which combines classification and reconstruction to improve regularization, especially during low-shot learning. The $\lambda_j$ hyperparameters were found empirically, with $\lambda_0$ being largest and decreasing for deeper layers (see supplementary material). This prioritizes the reconstruction task so that the generated pseudo-examples are more realistic.
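A minimal TensorFlow sketch of this combined objective is shown below; the function name is ours, and the per-layer weights are placeholders since the paper only states that they decrease with depth.

import tensorflow as tf

def mpfc_loss(logits, labels, layer_acts, layer_recons, lambdas):
    # L_mPFC = L_class + sum_j lambda_j * L_recon,j (Eqs. 3-4).
    # layer_acts / layer_recons: per-layer encoder activations and the
    # decoder's reconstructions of them; lambdas: per-layer weights
    # (dataset-specific in the paper, assumed given here).
    l_class = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits))
    l_recon = tf.add_n([
        lam * tf.reduce_mean(tf.square(a - r))
        for lam, a, r in zip(lambdas, layer_acts, layer_recons)])
    return l_class + l_recon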
After training mPFC using the data stored in HC, all of the data in HC is pushed through the encoder to extract a dense feature representation of the original data. Using this data, we compute a mean feature vector $\mu_c$ and covariance matrix $\Sigma_c$ for each class $c$. These values are stored and used to generate pseudo-examples during consolidation (see Sec. 4.2).

PSEUDOREHEARSAL FOR MEMORY CONSOLIDATION

During FearNet's sleep phase, the original inputs stored in HC are transferred to mPFC using pseudo-examples created with an autoencoder. This process is known as intrinsic replay, and it was used by Draelos et al. (2017) for unsupervised learning. Using the class statistics from the encoder, pseudo-examples for a class $c$ are generated by sampling a Gaussian with mean $\mu_c$ and covariance matrix $\Sigma_c$ to obtain $\mathbf{x}_{rand}$. Then, $\mathbf{x}_{rand}$ is passed through the decoder to generate a pseudo-example. To create a balanced training set, for each class that mPFC has learned, we generate $\lceil m \rceil$ pseudo-examples, where $m$ is the average number of examples per class stored in HC. The pseudo-examples are mixed with the data in HC, and the mixture is used to fine-tune mPFC using backpropagation. After consolidation, all units in HC are deleted.
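A sketch of this intrinsic-replay step follows; the function name is ours, and decoder stands in for the trained mPFC decoder, which the paper does not expose as an API.

import numpy as np

def generate_pseudo_examples(decoder, class_stats, m):
    # For each consolidated class c, sample ceil(m) encoder-space
    # vectors from N(mu_c, Sigma_c) and decode them to the input space.
    # class_stats maps integer class label c -> (mu_c, Sigma_c).
    n = int(np.ceil(m))
    xs, ys = [], []
    for c, (mu, sigma) in class_stats.items():
        x_rand = np.random.multivariate_normal(mu, sigma, size=n)
        xs.append(decoder(x_rand))  # decode feature samples to inputs
        ys.append(np.full(n, c))
    return np.concatenate(xs), np.concatenate(ys)

The returned pseudo-examples would then be mixed with the data in HC to fine-tune mPFC, as described above.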
NETWORK SELECTION USING BLA
During prediction, FearNet uses the BLA network (Fig. 3(b)) to determine whether to classify an input $\mathbf{x}$ using HC or mPFC. This can be challenging because if HC has only been trained on one class, it will put all of its probability mass on that class, whereas mPFC will likely be less confident. The output of BLA is given by $A(\mathbf{x})$ and will be a value between 0 and 1, with 1 indicating that mPFC should be used. BLA is trained after each study session using only the data in HC and pseudo-examples generated with mPFC, using the same procedure described in Sec. 4.2. Instead of using solely BLA to determine which network to use, we found that combining its output with those of mPFC and HC improved results. The predicted class $\hat{y}$ is computed as
$$\hat{y} = \begin{cases} \arg\max_{k'} P_{HC}(C = k' \mid \mathbf{x}) & \text{if } \psi > \max_k P_{mPFC}(C = k \mid \mathbf{x}) \\ \arg\max_{k'} P_{mPFC}(C = k' \mid \mathbf{x}) & \text{otherwise} \end{cases} \quad (5)$$

where $\psi = (1 - A(\mathbf{x}))^{-1} \max_k P_{HC}(C = k \mid \mathbf{x}) \, A(\mathbf{x})$.
ψ is the probability of the class prediction made by HC weighted by the confidence that the associated memory is actually stored in HC. BLA has the same number of layers/units as the mPFC encoder, and uses a logistic output unit.
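The decision rule in Eq. (5) can be written as a short function; this sketch follows our reconstruction of the garbled equation above, and the epsilon guard on the division is our addition.

import numpy as np

def fearnet_predict(x, p_hc, p_mpfc, bla, tiny=1e-12):
    # p_hc and p_mpfc are stand-ins returning class-probability vectors;
    # bla returns A(x) in [0, 1], with 1 indicating mPFC should be used.
    hc, mpfc = np.asarray(p_hc(x)), np.asarray(p_mpfc(x))
    a = bla(x)
    psi = hc.max() * a / (1.0 - a + tiny)  # (1 - A(x))^-1 max_k P_HC A(x)
    return int(hc.argmax()) if psi > mpfc.max() else int(mpfc.argmax())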
EXPERIMENTAL SETUP
Evaluating Incremental Learning Performance. To evaluate how well the incrementally trained models perform compared to an offline model, we use the three metrics proposed in Kemker et al. (2017). After each study session $t$ in which a model learned a new class $k$, we compute the model's test accuracy on the new class ($\alpha_{new,t}$), the accuracy on the base-knowledge ($\alpha_{base,t}$), and the accuracy on all of the test data seen to this point ($\alpha_{all,t}$). After all $T$ study sessions are complete, a model's ability to retain the base-knowledge is given by

$$\Omega_{base} = \frac{1}{T-1} \sum_{t=2}^{T} \frac{\alpha_{base,t}}{\alpha_{offline}},$$

where $\alpha_{offline}$ is the accuracy of a multi-layer perceptron (MLP) trained offline (i.e., it is given all of the training data at once). The model's ability to immediately recall new information is measured by

$$\Omega_{new} = \frac{1}{T-1} \sum_{t=2}^{T} \alpha_{new,t}.$$

Finally, we measure how well the model does on all available test data with

$$\Omega_{all} = \frac{1}{T-1} \sum_{t=2}^{T} \frac{\alpha_{all,t}}{\alpha_{offline}}.$$

The $\Omega_{all}$ metric shows how well new memories are integrated into the model over time. For all of the metrics, higher values indicate superior performance. Both $\Omega_{base}$ and $\Omega_{all}$ are relative to an offline MLP model, so a value of 1 indicates that a model has similar performance to the offline baseline. This allows results across datasets to be better compared. Note that $\Omega_{base} > 1$ and $\Omega_{all} > 1$ only if the incremental learning algorithm is more accurate than the offline model, which can occur due to better regularization strategies employed by different models.

Datasets. We evaluate all of the models on three benchmark datasets (Table 1): CIFAR-100, CUB-200, and AudioSet. CIFAR-100 is a popular image classification dataset containing 100 mutually-exclusive object categories, and it was used in Rebuffi et al. (2017) to evaluate iCaRL. All images are 32 × 32 pixels. CUB-200 is a fine-grained image classification dataset containing high-resolution images of 200 different bird species (Welinder et al., 2010). We use the 2011 version of the dataset. AudioSet is an audio classification dataset (Gemmeke et al., 2017). We use the variant of AudioSet used by Kemker et al. (2017), which contains a 100-class subset such that none of the classes were super- or sub-classes of one another. Also, since the AudioSet data samples can have more than one class, the chosen samples had only one of the 100 classes in this subset.
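As a concrete reference, the three Ω metrics above can be computed as follows; this sketch is ours and assumes per-session accuracy arrays indexed so that position 0 holds the base-knowledge session.

import numpy as np

def incremental_metrics(alpha_base, alpha_new, alpha_all, alpha_offline):
    # The sums in the Omega definitions run over t = 2..T, so we drop
    # the first (base-knowledge) session before averaging.
    base = np.asarray(alpha_base, dtype=float)[1:]
    new = np.asarray(alpha_new, dtype=float)[1:]
    all_ = np.asarray(alpha_all, dtype=float)[1:]
    omega_base = float(np.mean(base / alpha_offline))
    omega_new = float(np.mean(new))
    omega_all = float(np.mean(all_ / alpha_offline))
    return omega_base, omega_new, omega_all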
For CIFAR-100 and CUB-200, we extract ResNet-50 image embeddings as the input to each of the models, where ResNet-50 was pre-trained on ImageNet (He et al., 2016). We use the output after the mean pooling layer and normalize the features to unit length. For AudioSet, we use the audio CNN embeddings produced by pre-training the model on the YouTube-8M dataset (Abu-El-Haija et al., 2016). We use the pre-extracted AudioSet feature embeddings, which represent ten-second sound clips (i.e., ten 128-dimensional vectors concatenated in order).
Comparison Models. We compare FearNet to FEL, GeppNet, GeppNet+STM, and iCaRL. FEL, GeppNet, and GeppNet+STM were chosen due to their previously reported efficacy at incremental class learning in Kemker et al. (2017). We also compare against iCaRL, which was explicitly designed for this task.
In each of our experiments, all models take the same feature embedding as input for a given dataset. This required modifying iCaRL by turning its CNN into a fully connected network. We performed a hyperparameter search for each model/dataset combination to tune the number of units and layers (see Supplemental Materials).
Training Parameters. FearNet was implemented in Tensorflow. For mPFC and BLA, each fully connected layer uses an exponential linear unit activation function (Clevert et al., 2016). The output of the encoder also connects to a softmax output layer. Xavier initialization is used to initialize all weight layers (Glorot & Bengio, 2010), and all of the biases are initialized to one. BLA's architecture is identical to mPFC's encoder, except it has a logistic output unit, instead of a softmax layer.
mPFC and BLA were trained using the NAdam optimizer. We train mPFC on the base-knowledge set for 1,000 epochs, consolidate HC over to mPFC for 60 epochs, and train BLA for 20 epochs. Because mPFC's decoder is vital to preserving memories, its learning rate is set to 1/100 of the encoder's. We performed a hyperparameter search for each dataset and model, varying the model shape (64-1,024 units), depth (2-4 layers), and how often to sleep (see Sec. 6.2). Across datasets, mPFC and BLA performed best with two hidden layers, but the number of units per layer varied across datasets. The specific values used for each dataset are given in the supplemental material. In preliminary experiments, we found no benefit to adding weight decay to mPFC, likely because the reconstruction task helps regularize the model.
EXPERIMENTAL RESULTS
Unless otherwise noted, each class is only seen in one unique study session, and the first base-knowledge study session contains half the classes in the dataset. We perform additional experiments to study how changing the number of base-knowledge classes affects performance in Sec. 6.2. Unless otherwise noted, FearNet sleeps every 10 study sessions across datasets.
STATE-OF-THE-ART COMPARISON

Table 2 shows incremental class learning summary results for all five methods. FearNet achieves the best $\Omega_{base}$ and $\Omega_{all}$ on all three datasets. Fig. 4 shows that FearNet more closely resembles the offline MLP baseline than other methods. $\Omega_{new}$ measures test accuracy on the most recently trained class. For FearNet, this measures the performance of HC and BLA. $\Omega_{new}$ does not account for how well the class was consolidated into mPFC, which happens later during a sleep phase; however, $\Omega_{all}$ does account for this. FEL achieves a high $\Omega_{new}$ score because it is able to achieve nearly perfect test accuracy on every new class it learns, but this results in forgetting more quickly than FearNet. The final mean-class test accuracy for the offline MLP used to normalize the metrics is 69.9% for CIFAR-100, 59.8% for CUB-200, and 45.8% for AudioSet.
ADDITIONAL EXPERIMENTS
Novelty Detection with BLA. We evaluated the performance of BLA by comparing it to an oracle version of FearNet, i.e., a version that knew whether the relevant memory was stored in mPFC or HC. Table 3 shows that FearNet's BLA does a good job at predicting which network to use; however, the decrease in $\Omega_{new}$ suggests BLA sometimes uses mPFC when it should be using HC.

Table 3: FearNet performance when the location of the associated memory is known using an oracle versus using BLA.

When should the model sleep? To study how the frequency of memory consolidation affects FearNet's performance, we trained FearNet on CUB-200 and varied the sleep frequency from 1 to 15 study sessions. As shown in Fig. 5, when FearNet increases the number of classes it learns before sleeping, it is better able to retain its base-knowledge, but this reduces its ability to recall new information. In humans, it is known that sleep deprivation impairs new learning (Yoo et al., 2007), and that forgetting occurs during sleep (Poe, 2017).

Table 4: Multi-modal incremental learning experiment. FearNet was trained with various base-knowledge sets (column headers) and then incrementally trained on all remaining data.
Multi-Modal Incremental Learning. As shown in Sec. 6.1, FearNet can incrementally learn and retain information from a single dataset, but how does it perform if new inputs differ greatly from previously learned ones? This scenario is one of the first shown to cause catastrophic forgetting in MLPs. To study this, we trained FearNet to incrementally learn CIFAR-100 and AudioSet, which after training is a 200-way classification problem. To do this, AudioSet's features are zero-padded to make them the same length as CIFAR-100's. Table 4 shows the performance of FearNet for three separate training paradigms: 1) FearNet learns CIFAR-100 as the base-knowledge and then incrementally learns AudioSet; 2) FearNet learns AudioSet as the base-knowledge and then incrementally learns CIFAR-100; and 3) the base-knowledge contains a 50/50 split from both datasets, with FearNet incrementally learning the remaining classes. Our results suggest FearNet is capable of incrementally learning multi-modal information if the model has a good starting point (high base-knowledge performance); however, if the model starts with lower base-knowledge performance (e.g., AudioSet), it struggles to learn new information incrementally (see Supplemental Material for detailed plots).

Base-Knowledge Effect on Performance. In this section, we show how the size of the base-knowledge (i.e., number of classes) affects FearNet's performance on CUB-200. To do this, we varied the size of the base-knowledge from 10 to 150 classes, with the remaining classes learned incrementally. Results are given in Fig. 6. As the base-knowledge size increases, there is a noticeable increase in overall model performance because 1) mPFC has a better learned representation from a larger quantity of data and 2) there are not as many incremental learning steps remaining for the dataset, so the base-knowledge performance is less perturbed.

Table 5: Memory requirements to train CIFAR-100 and the amount of memory that would be required if these models were trained up to 1,000 classes.

Table 5 shows the memory requirements for each model in Sec. 6.1 for learning CIFAR-100 and a hypothetical extrapolation for learning 1,000 classes. This accounts for a fixed model capacity and storage of any data or class statistics. FearNet's memory footprint is comparatively small because it only stores class statistics rather than some or all of the raw training data, which makes it better suited for deployment.
DISCUSSION
An open question is how to deal with the storage and updating of class statistics if classes are seen in more than one study session. One possibility is to use a running update for the class means and covariances, but it may be better to favor the data from the most recent study session, since the autoencoder continues to learn and older statistics may become stale.
FearNet assumed that the output of the mPFC encoder was normally distributed for each class, which may not be the case. It would be interesting to consider modeling the classes with a more complex model, e.g., a Gaussian mixture model. Robins (1995) showed that pseudorehearsal worked reasonably well with randomly generated vectors because they were associated with the weights of a given class.

The largest impact on model size is the stored covariance matrix $\Sigma_c$ for each class. We tested a variant of FearNet that used a diagonal $\Sigma_c$ instead of a full covariance matrix. Table 6 shows that performance degrades, but FearNet still works.
FearNet can be adapted to other paradigms, such as unsupervised learning and regression. For unsupervised learning, FearNet's mPFC already does a form of it implicitly. For regression, this would require changing mPFC's loss function and may require grouping input feature vectors into similar collections. FearNet could also be adapted to the supervised data-permutation experiment of Goodfellow et al. (2013) and Kirkpatrick et al. (2017). This would likely require storing statistics from previous permutations and classes. FearNet would sleep between learning different permutations; however, if the number of classes was high, recent recall may suffer.
FearNet does have room for improvement. Future work will focus on integrating BLA directly into the model, rather than training it independently. Additionally, we will explore replacing FearNet's pseudorehearsal mechanism with a generative model that does not require the storage of class statistics, which would be more memory efficient.
CONCLUSION
In this paper, we proposed a brain-inspired framework capable of incrementally learning data with different modalities and object classes. FearNet outperforms existing methods for incremental class learning on large image and audio classification benchmarks, demonstrating that FearNet is capable of recalling and consolidating recently learned information while also retaining old information. In addition, we showed that FearNet is more memory efficient, making it ideal for platforms where size, weight, and power requirements are limited. Future work will focus on incorporating the entire framework into a single end-to-end model.
A SUPPLEMENTAL MATERIAL
A.1 MODEL HYPERPARAMETERS

Table S1 shows the training parameters for the FearNet model for each dataset. We also experimented with various dropout rates, weight decay, and various activation functions; however, weight decay did not work well with FearNet's mPFC.

Table S2 shows the training parameters for the iCaRL framework used in this paper. We adapted the code from the authors' GitHub page for our own experiments. The ResNet-18 convolutional neural network was replaced with a fully-connected neural network. We experimented with various regularization strategies to increase the initial base-knowledge accuracy, with weight decay working the best. The values that are given as a range of values are the hyperparameter search spaces.
Figure 2: iCaRL's performance depends heavily on the number of exemplars per class (EPC) that it stores. Reducing EPC from 20 (blue) to 1 (red) severely impairs its ability to recall older information.

Figure 3: The mPFC and BLA sub-systems in FearNet. mPFC is responsible for the long-term storage of remote memories. BLA is used during prediction time to determine if the memory should be recalled from short- or long-term memory.

Figure 4: Mean-class test accuracy of all classes seen so far.

Figure 5: FearNet performance as the sleep frequency decreases.

Figure 6: FearNet performance as a function of base-knowledge size.
Table 1: Dataset Specifications.
Table 2: State-of-the-art comparison on CIFAR-100, CUB-200, and AudioSet datasets. The best $\Omega_{all}$ for each dataset is in bold. $\Omega_{base}$ and $\Omega_{all}$ are normalized by the offline MLP baseline.
Table 6: Using a diagonal covariance matrix for FearNet's class statistics instead of a full covariance matrix on CIFAR-100.
Hyperparameter      Values
Learning Rate       2 · 10^-3

Table S1: FearNet Training Parameters
Table S2: iCaRL Training Parameters

Table S3 shows the training parameters for GeppNet and GeppNet+STM. Parameters not listed here are the default parameters defined by Gepperth & Karaoguz (2016). The values that are given as a range of values are the hyperparameter search spaces.

Hyperparameter                                                Values
SOM Lattice Shape (N)                                         20-36
Non-Linearity Suppression Threshold (θ)                       0.1-0.75
Incremental Class Learning Iterations (T_inc2 - T_inc1)       [2,000, 20,000]

Table S3: GeppNet Training Parameters

Table S4 shows the training parameters for the Fixed Expansion Layer (FEL) model. The number of units in the FEL layer is given by

$$\text{FEL Units} = \frac{H^2 + HK}{K}, \quad (6)$$

where $H$ is the number of units in the first hidden layer and $K$ is the maximum number of classes in the dataset. The values that are given as a range of values are the hyperparameter search spaces.
Hyperparameter              Values
Hidden Layer Size (H)       64-1800
FEL Layer Size              See Equation 6
Number of Hidden Layers     2
Mini-Batch Size             8
Initial Learning Rate       10^-2

Table S4: FEL Training Parameters

A.2 ICARL PERFORMANCE WITH MORE EXEMPLARS

Table S5 provides additional experimental results for when more exemplars per class (EPC) are stored by the iCaRL framework. Rebuffi et al. (2017) used 20 EPC in their original paper; however, we increased the number to 100 EPC to see if storing more training data helped iCaRL. Although a higher EPC does increase iCaRL performance, it still does not outperform FearNet. Note that CUB-200 only has about 30 training samples per class, so at 100 EPC iCaRL is storing the entire training set. Our main results use the default value of 20.

Table S5: iCaRL's performance when the stored EPC is increased from 20 to 100.

A.3 MULTI-MODAL LEARNING EXPERIMENT

Fig. S1 shows the plots for the multi-modal experiments in Sec. 6.2. The three base-knowledge experiments were: 1) CIFAR-100 is the base-knowledge and AudioSet is trained incrementally; 2) AudioSet is the base-knowledge and CIFAR-100 is trained incrementally; and 3) the base-knowledge is a 50/50 mix of the two datasets and the remaining classes are trained incrementally. For all three base-knowledge experiments, we show the mean-class accuracy on the base-knowledge and on the entire test set. FearNet works well when it adequately learns the base-knowledge (Experiments #1 and #3); however, when FearNet learns it poorly, incremental learning deteriorates.

Figure S1: Detailed plots for the multi-modal experiment. The top row is when the base-knowledge was CIFAR-100, the middle row is when the base-knowledge was AudioSet, and the bottom row is when the base-knowledge was a 50/50 mix from the two datasets. The left column shows the mean-class accuracy on the base-knowledge test set and the right column shows the mean-class accuracy on the entire test set.
Wickliffe C. Abraham and Anthony Robins. Memory retention: the synaptic stability versus plasticity dilemma. Trends in Neurosciences, 28(2):73-78, 2005.
Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, et al. YouTube-8M: A large-scale video classification benchmark. arXiv:1609.08675, 2016.
Joseph Altman. Autoradiographic investigation of cell proliferation in the brains of rats and cats. The Anatomical Record, 145(4):573-591, 1963.
Bernard Ans and Stéphane Rousset. Avoiding catastrophic forgetting by coupling two reverberating neural networks. Comptes Rendus de l'Académie des Sciences, Series III, Sciences de la Vie, 320(12):989-997, 1997.
Bruno Bontempi, Catherine Laurent-Demir, Claude Destrade, and Robert Jaffard. Time-dependent reorganization of brain circuitry underlying long-term memory storage. Nature, 400(6745):671-675, 1999.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In ICLR, 2016.
Robert Coop, Aaron Mishtal, and Itamar Arel. Ensemble learning in fixed expansion layer networks for mitigating catastrophic forgetting. IEEE Transactions on Neural Networks and Learning Systems, 24(10):1623-1634, 2013.
Wei Deng, James B. Aimone, and Fred H. Gage. New neurons and new memories: how does adult hippocampal neurogenesis affect learning and memory? Nature Reviews Neuroscience, 11(5):339-350, 2010.
Timothy J. Draelos, Nadine E. Miner, Christopher C. Lamb, Jonathan A. Cox, Craig M. Vineyard, Kristofor D. Carlson, William M. Severa, Conrad D. James, and James B. Aimone. Neurogenesis deep learning: Extending deep networks to accommodate new classes. In International Joint Conference on Neural Networks, pp. 526-533. IEEE, 2017.
Peter S. Eriksson, Ekaterina Perfilieva, Thomas Björk-Eriksson, Ann-Marie Alborn, Claes Nordborg, Daniel A. Peterson, and Fred H. Gage. Neurogenesis in the adult human hippocampus. Nature Medicine, 4(11):1313-1317, 1998.
David R. Euston, Aaron J. Gruber, and Bruce L. McNaughton. The role of medial prefrontal cortex in memory and decision making. Neuron, 76(6):1057-1070, 2012.
Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A. Rusu, Alexander Pritzel, and Daan Wierstra. PathNet: Evolution channels gradient descent in super neural networks. arXiv:1701.08734, 2017.
Paul W. Frankland, Bruno Bontempi, Lynn E. Talton, Leszek Kaczmarek, and Alcino J. Silva. The involvement of the anterior cingulate cortex in remote contextual fear memory. Science, 304(5672):881-883, 2004.
Robert M. French. Pseudo-recurrent connectionist networks: An approach to the 'sensitivity-stability' dilemma. Connection Science, 9(4):353-380, 1997.
Robert M. French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128-135, 1999.
Steffen Gais, Geneviève Albouy, Mélanie Boly, Thien Thanh Dang-Vu, Annabelle Darsaud, Martin Desseilles, Géraldine Rauchs, Manuel Schabus, Virginie Sterpenich, Gilles Vandewalle, et al. Sleep transforms the cerebral trace of declarative memories. Proceedings of the National Academy of Sciences, 104(47):18778-18783, 2007.
Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. Audio Set: An ontology and human-labeled dataset for audio events. In ICASSP, New Orleans, LA, 2017.
Alexander Gepperth and Cem Karaoguz. A bio-inspired incremental learning architecture for applied perceptual problems. Cognitive Computation, 8(5):924-934, 2016.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249-256, 2010.
Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv:1312.6211, 2013.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pp. 770-778, 2016.
P. Hetherington and Mark S. Seidenberg. Is there catastrophic interference in connectionist networks? In Proceedings of the 11th Annual Conference of the Cognitive Science Society, volume 26, pp. 33. Erlbaum, Hillsdale, NJ, 1989.
Ronald Kemker, Marc McClure, Angelina Abitino, Tyler Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. arXiv:1708.02072, 2017.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, pp. 201611835, 2017.
Takashi Kitamura, Sachie K. Ogawa, Dheeraj S. Roy, Teruhiro Okuyama, Mark D. Morrissey, Lillian M. Smith, Roger L. Redondo, and Susumu Tonegawa. Engrams and circuits crucial for systems consolidation of a memory. Science, 356(6333):73-78, 2017.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1097-1105, 2012.
James L. McClelland, Bruce L. McNaughton, and Randall C. O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3):419, 1995.
Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109-165, 1989.
Gina R. Poe. Sleep is for forgetting. Journal of Neuroscience, 37(3):464-473, 2017.
Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In NIPS, pp. 3546-3554, 2015.
Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, and Christoph H. Lampert. iCaRL: Incremental classifier and representation learning. 2017.
Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):123-146, 1995.
Donald F. Specht. Probabilistic neural networks. Neural Networks, 3(1):109-118, 1990.
A. Takashima, Karl Magnus Petersson, F. Rutters, I. Tendolkar, O. Jensen, M. J. Zwarts, B. L. McNaughton, and G. Fernandez. Declarative memory consolidation in humans: a prospective functional magnetic resonance imaging study. Proceedings of the National Academy of Sciences of the United States of America, 103(3):756-761, 2006.
Philippe Taupin and Fred H. Gage. Adult neurogenesis and neural stem cells of the central nervous system in mammals. Journal of Neuroscience Research, 69(6):745-749, 2002.
P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
Seung-Schik Yoo, Peter T. Hu, Ninad Gujar, Ferenc A. Jolesz, and Matthew P. Walker. A deficit in the ability to form new human memories without sleep. Nature Neuroscience, 10(3):385-392, 2007.
235,899,205 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-taskadjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling <title> tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at autoprompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research. | [
9586240,
165163607,
5034059,
3432876,
221845203,
28193461,
225067799,
102353817,
964287
] | HTLM: Hyper-Text Pre-Training and Prompting of Language Models
14 Jul 2021
Armen Aghajanyan [email protected]
Facebook AI
Dmytro Okhonko
Facebook AI
Mike Lewis [email protected]
Facebook AI
Mandar Joshi [email protected]
Facebook AI
University of Washington
Hu Xu [email protected]
Facebook AI
Gargi Ghosh [email protected]
Facebook AI
Luke Zettlemoyer
Facebook AI
University of Washington
HTLM: Hyper-Text Pre-Training and Prompting of Language Models
14 Jul 2021
We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-taskadjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling <title> tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at autoprompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research.
Introduction
The vast majority of text used to pretrain language models is extracted from web pages, while discarding any markup they contain (Brown et al., 2020; Raffel et al., 2019). We argue that this HTML should not be ignored; it enables new forms of highly effective language model pretraining and prompting with structured document-level supervision.

* Equal Contribution

<!DOCTYPE html>
<html>
<title> <mask>12 </title>
<body>
south korea on monday announced sweeping tax reforms, including income and corporate tax cuts to boost growth by stimulating sluggish private consumption and business investment.
</body>
</html>
↓
<!DOCTYPE html>
<html>
<title> South Korea Announces Tax Reforms To Boost Economic Growth </title>
<body>
south korea on monday announced sweeping tax reforms...
</body>
</html>

Figure 1: An example structured prompt for a simple summarization task, where we ask a generative masked language model to generate a mask representing the title with an average token size of 12.
Hyper-text, such as the HTML found in the Common Crawl, has a number of advantages for pretraining over plain text. It often encodes high-level properties of different parts of the documents, which are difficult to infer from the text alone. For example, <title> elements can be excellent summaries of the <body> of a document, while element class and id attributes can encode categorical properties of documents. Such supervision is highly diverse, depending on what the website authors choose to present, and provides close proxies for many NLP tasks we aim to later solve.
Modeling hyper-text allows us to introduce structured prompting of language models. We design prompts that incorporate the established semantics of HTML to better control for the desired model output. This includes, for example, performing zero-shot summarization by asking the model to infill <title> tags in a web page. And, the fact that we jointly model text and hyper-text formatting also allows for effective auto-prompting. If we have even a few examples for a new task, we can directly ask the model to format them in HTML, and templatize the result to define the new prompt.
Our HyperText Language Model (HTLM) is trained on 23TB of simplified HTML which we automatically extract from common crawl dumps (see Section §2.1). We use a modified BART denoising objective ) that randomly masks spans of hyper-text and aims to reconstruct the original input. We extend the original masking with a new size hint scheme, where each mask is associated with an integer that provides a noisy hint for the size of the masked text, to allow for more fine grained task-specific length priors when prompting the final model (see Section §2.3). Figure 1 shows an example mask that should be reconstructed with a phrase that contains roughly 12 tokens.
Through extensive experiments, we show that our HTLM achieves highly effective transfer for a wide range of end tasks and supervision levels. It matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and full fine-tuning on GLUE, while also setting new state-of-the-art performance levels for zero-shot summarization with a gain of up to 8 ROUGE-1 points. It also allows few-shot learning for problems that are less easily reduced to text-only inputs, such as table-to-text generation. Following methodology introduced by Le Scao and Rush (2021), we further find that hyper-text prompts provide more data efficiency to the HTLM model than plain text prompts do for existing LMs, being effectively equivalent to having up to a thousand extra training examples. Finally, we see that the HTLM model is highly effective at auto-prompting itself, in some cases rivaling the performance of manually engineered prompts.
In summary, our contributions include:
• We present the first hyper-text language model (HTLM), trained on 23TB of simplified HTML data from the common crawl.
• Our new hyper-text prompting scheme uses both the well-established semantics of HTML and new size hints on prompt masks to provide more fine-grained control of new task specifications.
• We demonstrate consistently strong transfer from HTLM to a range of tasks at differing supervision levels, including improving the best-known zero-shot summarization numbers by up to 8 ROUGE-1 points.
• Following Le Scao and Rush (2021), our data efficiency analysis shows that hyper-text prompts are worth more to the HTLM model than plain text prompts are for existing LMs, being effectively equivalent to having up to a thousand extra training examples.
• We demonstrate the HTLM directly supports auto prompting for new tasks, by simply asking it to format any available examples in HTML, often rivaling or surpassing previous manually engineered prompts.
• We release all code and models to support future HTLM research.
HyperText Language Model (HTLM)
HTLM is trained on a large corpus of simplified HTML, which is automatically extracted from the common crawl (Section §2.1). We use a BART-style denoising autoencoder with span masking (Section §2.2), extended to allow size hints during reconstruction of the original text (Section §2.3).
Minimal HTML
Although HTML contains supervision signals for natural language, the majority of the HTML in a modern web page does not provide any significant form of supervision for pretraining. For example, a large portion of a webpage is JavaScript code or CSS, which contributes to the aesthetics of the page rather than to document-level information. Coupling this with the challenges of training transformers on very long sequence lengths (Choromanski et al., 2020; Beltagy et al., 2020), it was important to automatically convert web pages to a simplified form, which we call Minimal-HTML (MHTML), as defined below.

We remove all sub-trees of the HTML DOM which do not contain textual elements of a certain character size (128 for standard textual elements, 64 for lists/tables/spans). We also filter out all headers, footers, copyrights, forms, and iFrames. We fold consecutive <div> elements into a singular <div> element with merged attributes. We also remove all attributes which are not class or id attributes. Lastly, we skip all MHTML documents whose ratio of text to HTML is not greater than 0.46; in particular, we noticed that for MHTML documents with a low ratio of text to HTML, the average quality of the document tends to be lower as well. We found these thresholds by visually inspecting a set of Common Crawl (CC) documents after applying the aforementioned transforms, ensuring a high quality of kept documents while also not filtering out too much data. Furthermore, we filter out all documents whose lang attribute is not set to en.
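A rough sketch of this kind of DOM pruning is shown below, using BeautifulSoup; the paper does not name its tooling, and the rules here (which omit the <div>-folding and language filtering steps) are our approximation of the description above, not the released pipeline.

from bs4 import BeautifulSoup

DROP = ["script", "style", "header", "footer", "form", "iframe"]
SMALL = {"li", "td", "th", "span"}  # 64-character threshold per the text

def to_minimal_html(raw_html):
    soup = BeautifulSoup(raw_html, "html.parser")
    for tag in soup(DROP):
        tag.decompose()  # drop non-textual / boilerplate sub-trees
    for tag in list(soup.find_all(True)):
        if tag.decomposed:  # needs bs4 >= 4.9; already removed above
            continue
        if not tag.find_all(True):  # leaf element: check its text size
            limit = 64 if tag.name in SMALL else 128
            if len(tag.get_text(strip=True)) < limit:
                tag.decompose()
                continue
        # keep only class and id attributes
        tag.attrs = {k: v for k, v in tag.attrs.items()
                     if k in ("class", "id")}
    text, html = soup.get_text(" ", strip=True), str(soup)
    if not html or len(text) / len(html) <= 0.46:
        return None  # low text-to-HTML ratio: skip the document
    return html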
Applying these deterministic transformations removes on average 94% of characters from a raw webpage while maintaining the general markup of the document. Furthermore, it allowed close to 85% of MHTML documents to fit into 1024 BPE tokens; the maximum token length for BART and many other existing language models.
One by-product of this type of filtering is that it also produced high-quality documents by default 3 ; thus, we opted out of model-based filtering of documents such as CC-100 (Conneau et al., 2019). We used the January 2021 snapshot of Common Crawl, which provided us with 23 Terabytes of MHTML text after filtering.
Model
We adopt a BART-style denoising autoencoder for several reasons. We want to predict arbitrary substrings within the MHTML, conditioned on the rest of the document. This allows us to equally easily (1) use masks during prompting to mark where to generate text associated with model outputs within a web page, and (2) automatically generate prompts by wrapping plain text training examples in masks that allow the model to mark them up by generating MHTML formatting. We also do not know in advance exactly how much text needs to be generated in each case, thereby ruling out the use of more traditional masked language models. For all of our experiments, we adopt the same architecture as BART-Large and initialize our models with the BART-Large checkpoint. This model has roughly 400 million parameters.
We trained our augmented BART model for a total of 330,000 steps on 256 GPUs with an effective batch size of 8192. We initialize our model with the original BART-Large model. We train using the Adam optimizer (Kingma and Ba, 2014) and a polynomial decay learning rate scheduler with a peak learning rate of 4e−5 and 10, 000 warm-up steps.
We do not use the sentence shuffling from the original BART objective, and we select a Poisson λ of 3.5 for sampling span lengths for masking. We set dropout in the attention to 0.1 for the first 170k steps, reducing it to 0.0 thereafter. We also filter the data to English (en) only after 170k steps using FastText (Joulin et al., 2016). We noticed that the perplexity plateaued around 170k steps, which is why we simplified the learning process by removing dropout and applying stronger filtering for the English language.
Size Hints
BART allows each mask to be replaced with multiple tokens during the reconstruction. During pretraining, BART masks a span with the length sampled from a Poisson distribution; thus, the model must learn to implicitly predict the length of the masked text. A fundamental problem we encountered when trying to use standard BART for zero-shot generative prompting is the inability to control the length of the generated text for each mask, even when using various decoding strategies like length penalties.
To allow for more control, we augment BART's masking scheme by introducing size hints. Specifically, we tokenize the noisy estimate of the length of a span directly and insert it right after the span mask token. For example, given the correct mask length m, we insert n mask tokens, where n = max(1, ⌊N(m, m · ε)⌋) and ε is a hyperparameter representing how noisy we want these size hints to be. By optionally injecting size hints, we can prompt the model to generate text of roughly some specific length, or, by not injecting size hints, we allow the model to model the mask size implicitly. We give size hints to 80% of masks, with the noisiness of size hints set to ε = 0.1.
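A small sketch of this sampling rule follows; we read the second argument of N as a standard deviation, which the paper does not spell out, and the function name is ours.

import numpy as np

def num_hint_tokens(m, eps=0.1, hint_prob=0.8, rng=np.random):
    # n = max(1, floor(N(m, m * eps))) for a span of true length m.
    # 20% of masks receive no size hint during pretraining.
    if rng.random() > hint_prob:
        return None  # no hint: the model models the span length implicitly
    n = int(np.floor(rng.normal(loc=m, scale=m * eps)))
    return max(1, n)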
We provide an example of the benefits of size hints in generation in Table 1.
HTML-based Prompting
We use the HTML-based prompting scheme for a range of generation and classification tasks. Broadly, we use HTML templates, either selected manually or generated by the model itself via auto-prompting, to specify the HTML structure of the task. The template is then instantiated with the task input and placeholder mask tokens for the output. The model uses this instantiated template as a prompt. Because BART models reconstruct the full input, we rely on simple heuristics to match the prefix/suffix around any masks and extract the final output.
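The instantiation and extraction steps can be sketched as follows; the template shown in the comment is a hypothetical example, and both helpers are our own stand-ins rather than a released API.

def instantiate(template, **fields):
    # Fill the HTML structure of the task; e.g. a (hypothetical)
    # summarization template might be
    # "<html><title><mask></title><body>{body}</body></html>".
    return template.format(**fields)

def extract_infill(generated, prefix, suffix):
    # Heuristic from the text: recover the span the model generated
    # between the template text surrounding the mask.
    start = generated.find(prefix)
    if start == -1:
        return None
    start += len(prefix)
    end = generated.find(suffix, start)
    return generated[start:end] if end != -1 else None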
Generation Prompting Policies
Given that we have optional size hints for masks, a single prompt can generate a wide variety of text; therefore, we discuss multiple policies for selecting the prompted results. We can decide not to utilize size hints at all and thus remove the need for any policies, but this comes at the cost of template robustness. Without size hints, a template not only has to express the semantics of the task, but also needs to match the average target length; such prompts are brittle and require careful manual design. However, using hints allows us to decouple generation length from the prompt, greatly improving template reuse across related tasks. It is also possible that, for a prompt and a specific subset of the data, HTLM will not generate an output from which we can programmatically extract the generated mask; our policies for size hints also mitigate this issue. For every generation task, we first construct a prompt that can generate the correct text semantically, and then we provide size hints equal to the average target length over a subset of the training set, s̄. If, for a particular input, we are not able to extract a value, we run HTLM on the same prompt but with our size hint set to s̄ ± iεs̄, from which we select the output with the lowest perplexity; we continue this process at most five times, where i represents the current index of the policy. If we still cannot find a valid generated answer, we fall back on the auto-template described in the next section. In experiments, we denote HTLM-Manual-NS (not sized) as our manually engineered prompt with no size hint, while HTLM-Manual-S uses the policy defined here for all generation benchmarks.
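A sketch of this retry policy follows; htlm_generate (returning text and its perplexity) and extract (as in the earlier sketch) are stand-ins for model calls the paper does not expose, and the template is assumed to have been pre-instantiated with the input document.

def generate_with_policy(htlm_generate, extract, template, s_bar,
                         eps=0.1, tries=5):
    # Start from the average target length s_bar, then widen to
    # s_bar * (1 +/- i * eps), keeping the lowest-perplexity output
    # from which a span can be extracted.
    for i in range(tries + 1):
        hints = {max(1, round(s_bar * (1 - i * eps))),
                 max(1, round(s_bar * (1 + i * eps)))}
        candidates = []
        for hint in hints:
            text, ppl = htlm_generate(template.format(hint=hint))
            span = extract(text)
            if span is not None:
                candidates.append((ppl, span))
        if candidates:
            return min(candidates)[1]
    return None  # caller falls back on the auto-template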
Auto-Prompting
To avoid manually engineering prompts, we also explore automatic generation of structured prompts. By training on hypertext, HTLM can learn high-level document semantics that we exploit for prompt creation. We generate prompting templates by asking the model to recover document markups. Specifically, we place mask tokens around every independent block of data (e.g. summary/article).
We provide an example of auto-prompting for a sample from the train set of the Gigaword summarization dataset (Napoles et al., 2012), with the respective masking, in Figure 2. For our generation experiments, we denote HTLM-Auto-NS (not sized) as the auto-prompt without size hints, while HTLM-Auto-S uses the size-hint-based policy described in the previous section.
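Constructing the auto-prompting input can be sketched as below; the mask token string and helper name are our own, not from the released code.

MASK = "<mask>"

def autoprompt_input(blocks):
    # Place mask tokens around every independent block of data
    # (e.g. blocks = [summary, article]) and let HTLM generate the
    # surrounding markup, as in Figure 2.
    parts = [MASK]
    for text in blocks:
        parts += [text, MASK]
    return " ".join(parts)

For example, autoprompt_input([summary, article]) yields "<mask> {summary} <mask> {article} <mask>"; templatizing HTLM's reconstruction (swapping the example text for placeholders) then gives a reusable prompt.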
We found that HTLM auto-prompting was less effective for classification tasks. We hypothesize that this is because generative targets carry significantly more information than a simple binary target token.
Zero/One-Shot Prompting

Perez et al. (2021) argue that zero/few-shot learning cannot happen when prompts are created by tuning on a large amount of development data. To mitigate this issue, all the manual prompts used throughout our experiments are either derived from related papers or developed using a maximum of fifty samples from the train set.
Generation
We evaluate HTLM on summarization, a prototypical generation task. For all summarization benchmarks, we use ROUGE-1/2/L as our primary metrics to stay consistent with other literature (Lin, 2004).
Furthermore, we benchmark HTLM on a set of three standard natural language generation tasks. We utilize the official benchmarking scripts provided, which report BLEU (Papineni et al., 2002), NIST (Belz and Reiter, 2006), METEOR (Lavie and Agarwal, 2007), ROUGE-L (Lin, 2004), CIDEr (Vedantam et al., 2015), and TER (Snover et al., 2005). We use Li and Liang (2021) for our baselines, and present prefix-tuning results with 0.1% of parameters as well.
Gigaword consists of headlines from news articles (Napoles et al., 2012). The target summaries are relatively short, consisting roughly on average of 10 BPE tokens.

CNN/Dailymail (Hermann et al., 2015) provides multi-sentence target summaries close to 3 sentences, or roughly 50 tokens.

Table 1: Prompt, Size Hint (X), and HTLM Output.
<html id="cnn_dailymail" lang="en" xml:lang="en"> <head class="pg-headline" data-act-id="article_head_0"> <p> <mask>X --CNN</p> </head> <body> <p> However, observers inside the court said the prosecution evidence was difficult to follow or to understand. Correspondents said the most prominent video clip was by another al-Jazeera journalist who is not on trial. The three, including ex-BBC reporter Peter Greste, deny spreading false news and helping the Muslim Brotherhood. They appeared in the Cairo court on Tuesday along with other Egyptian students and activists being tried with them. Defence lawyers complained they were unable to understand some of the recordings being played because of poor technical quality. While the recordings were played, defendants talked among themselves in the caged dock. The trial was later adjourned until 3 May and bail for the defendants was denied, reports from the court said. Peter Greste, producer Baher Mohamed and al-Jazeera English's Canadian-Egyptian bureau chief Mohamed Fadel Fahmy have been held since December. A fourth al-Jazeera reporter, Abdullah Elshamy, who works for the network's Arabic channel, has been detained since August but not charged. Egyptian authorities accuse the journalists of aiding the Islamist Muslim Brotherhood, which has been banned as a terrorist group. </p> </body> </html>

HTLM →
<html lang="en" xml:lang="en"> <head> <title> the us rejects charges against its ambassador in bolivia | The Washington Post </title> </head> <body> <div class="post-body entry-content"> <p> the us state department said wednesday it had received no formal word from bolivia that it was ... </p> </div> </body> </html>

Figure 2: An example of auto-prompting using a sample from the train set of the Gigaword dataset. HTLM places the summary inside a <title> element within the <head> element, while placing the article in a <div> element with an entry-content value for the class attribute, which is common on news websites.
Reddit TIFU (Kim et al., 2018) contains summaries of Reddit posts. Specifically, we use the short subset of the data. Compared to our other summarization datasets, this dataset is highly abstractive and not based on news articles.
XSum (Narayan et al., 2018) provides abstractive single sentence summaries of news articles.
E2E (Novikova et al., 2017) is a table-to-text generation dataset containing approximately 50K samples with 8 unique fields from the restaurants domain.
WebNLG (Gardent et al., 2017) is also a structured generation dataset containing 15 different domains from DBPedia. We report numbers on the Seen (S), Unseen (U) and All (A) subsets of the data.
DART (Nan et al., 2020) is an open-domain structured generation dataset containing Wikipedia tables.
We manually searched for prompts for each of these datasets using a maximum of 50 data points from the train set to evaluate the prompts. For our baseline, we compare against PEGASUS (Zhang et al., 2019), the current state of the art for zero-shot summarization. PEGASUS was explicitly pre-trained for summarization by masking and generating salient gap sentences from news articles. We present our results in Table 2.
HTLM with manual prompts (HTLM-Manual) and size hints substantially improves over state-of-the-art zero-shot summarization results on all four datasets without any tailored pretraining. In particular, we see large improvements of more than 8 ROUGE-L F1 on the Gigaword dataset. Furthermore, size-hint-based auto-prompting (HTLM-Auto-S) outperforms PEGASUS on three out of four datasets. Specifically, for the Gigaword dataset, we outperform the previous state-of-the-art zero-shot results from PEGASUS by roughly 6 ROUGE-L points. HTLM's improvements stem from the fact that HTML-based prompting allows us better control over dataset-specific attributes such as length and style.
For NLG tasks, we required the use of a single training example to get prompting to work sufficiently. We report these one-shot numbers in Table 3. Because these tasks require structured tabular inputs, it is not obvious how to prompt any other text-based pre-trained models. We report other non-trainable baselines, such as the grammar-based pipeline approaches (TILB/UIT-VNU) in Gardent et al. (2017). To the best of our knowledge, these are the first one-shot table-to-text natural language generation results.
Classification
For prompting in the classification setting, we select 4 datasets to work with. Instead of relying on generative prompting to produce target token(s) denoting the correct class, we rely on perplexity measures over the set of all targets to select the correct class. In other words, we select the class for which the perplexity of the corresponding instantiated template is the smallest.
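A minimal Python sketch of this selection rule follows; `sequence_logprob` is a hypothetical stand-in for the pre-trained model's log-likelihood scorer, and the template and verbalizers are illustrative rather than the paper's exact prompts.

```python
import math

def sequence_logprob(text: str) -> float:
    # Hypothetical stand-in: in practice this would be the model's log p(text).
    # A trivial length-based dummy keeps the sketch runnable end-to-end.
    return -0.1 * len(text.split())

def classify_by_perplexity(template: str, verbalizers: dict) -> str:
    # Instantiate the template once per class; keep the class whose filled-in
    # template has the smallest perplexity (i.e., largest mean log-prob).
    best_label, best_ppl = None, math.inf
    for label, word in verbalizers.items():
        filled = template.format(answer=word)
        n_tokens = max(len(filled.split()), 1)
        ppl = math.exp(-sequence_logprob(filled) / n_tokens)
        if ppl < best_ppl:
            best_label, best_ppl = label, ppl
    return best_label

template = '<div class="answer">yes or no: {answer}</div>'
print(classify_by_perplexity(template, {"entail": "yes", "not_entail": "no"}))
```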
RTE (Bentivogli et al., 2009) is a textual entailment task formulated as binary classification. We place the candidate in a <div> element with the class attribute set to candidate and do the same with the respective hypothesis. In the third element, we utilize the prompt from Brown et al. (2020) with the class attribute set to answer.
BoolQ (Clark et al., 2019) is a yes/no question answering task, also formulated as binary classification for question, passage, and answer triplets. We represent the question as a <div> element with the itemprop set to https://schema.org/Question, passage as a div element with class attribute passage and answer as a div element with the itemprop set to https://schema.org/Answer.
Winogrande (Levesque et al., 2012) consists of adversarially collected Winograd Schema Challenge (Levesque et al., 2011) data. We utilize the same template as GPT-3 but place it in a QA style template similar to BoolQ. Please refer to the Appendix for exact templates.
HellaSwag The last dataset we evaluate on is the commonsense natural language inference task HellaSwag, which, due to its adversarial nature, is considered complex (Zellers et al., 2019).

We present our results on zero-shot classification in Table 4. HTLM prompting of classification datasets outperforms the most comparable (in terms of number of parameters) GPT-3 Medium-sized model on the majority of tasks, while approaching, and on RTE outperforming, the GPT-3 Large model, which has roughly double the number of parameters of HTLM.
Fine-tuning Experiments
In addition to our previous prompting results, we also aim to show that HTLM's learned representations are useful in the full fine-tuning setting. We compare against other pre-trained MLM models such as RoBERTa (Liu et al., 2019), the original BART (Lewis et al., 2019), and T5 (Raffel et al., 2019) by fine-tuning on the GLUE benchmark (Wang et al., 2018).
During finetuning, instead of a simple concatenation of sentences from the train set, we place the examples into prompts derived from Le Scao and Rush (2021). We defer to the Appendix for the exact prompts. Given the recent advancements in finetuning, we also report results using the recently proposed R3F method for finetuning (Aghajanyan et al., 2020a) for both RoBERTa and HTLM.
We present our results in Table 5. Overall, HTLM improves over existing pre-training methods. We also note that we can improve fine-tuning performance by placing the examples into prompts and fine-tuning the classification head. The improvements we see from prompting have no adverse effects on fine-tuning but are rather positive, providing further evidence that the proposed approach of structured pre-training is a viable alternative to other pre-training methods, even for fine-tuning.
We also show our fine-tuning results for the table-to-text generation datasets in Table 3. Similar to GLUE fine-tuning, we place all NLG samples into a prompt while fine-tuning. Fine-tuned HTLM consistently outperforms both variants of the GPT-2 model.
Prompt Data Efficiency
What does the HTML-based pretraining and prompting scheme offer over one based on plain text? Le Scao and Rush (2021) explored quantifying how many data points a single prompt is worth. Specifically, they analyzed three different task-specific settings given a pattern (the structure that the inputs are put into) and a verbalizer (i.e., a yes/no answer to the pattern): (1) fine-tuning a classification head (H), (2) fine-tuning the verbalizer of a prompt encoding the semantics of the task (P), and (3) fine-tuning the prompt but with a verbalizer that is non-sensical (N).
By carefully selecting the number of data points to be used during training in each setting while matching the end fine-tuning performance, we can empirically measure the efficacy of prompts in terms of data points. We provide the same analysis extended to BART, T5-Large, and HTLM using the same PET prompts provided in Schick and Schütze (2020). For HTLM, we wrap all PET prompts in an HTML element. We select the same datasets that were used in the original paper for our experimentation: MNLI (Williams et al., 2018), BoolQ (Clark et al., 2019), CB (De Marneffe et al., 2019), RTE (Bentivogli et al., 2009), and WiC (Pilehvar and Camacho-Collados, 2019).
We first look at the average advantage of fine-tuning a prompt (P) against a classification head (H) in Table 6. We see that, across the board, HTLM prompts (i.e., hypertext prompts applied to HTLM) are worth more than natural language prompts are to various other pre-trained models. Compared to RoBERTa-Large on smaller datasets, HTLM's advantage is close to triple on CB and double on RTE. Furthermore, on WiC, HTLM is the only pre-trained model capable of having a positive training advantage when using prompts. We view this as additional evidence of the benefit of pre-training on structured data for the prompting of a pre-trained model.
We also compare the average advantage of fine-tuning a prompt with a verbalizer (P) that makes sense against fine-tuning a prompt where we change the verbalizer to a random first name (N). This is important to capture whether the benefits arise from representing the data in their respective patterns or from the coupling of the pattern and the verbalizer. We present our results in Table 7. Relative to the previous P vs. H setting, we lose a large amount of advantage, as was similarly seen in Le Scao and Rush (2021). Interestingly enough, for small datasets such as CB, all of the training advantage of the prompt comes from the pattern in HTLM.
We view this as further evidence that a structured, document-level approach to both pretraining and prompting can be seen as a viable alternative to a purely natural language approach.

Related Work

GPT-2 (Radford et al., 2019) showed that large language models exhibit varying levels of zero-shot performance across NLP tasks when compared to supervised baselines (e.g., rudimentary performance on summarization, but more competitive results on reading comprehension). Brown et al. (2020), through their GPT-3 model, showed that by further scaling up language models on a large subset of the internet, prompting could be a viable alternative to standard finetuning. The success of GPT-3 was largely attributed to massive size and compute-intensive pretraining. By reformulating NLP tasks as cloze-style questions, Schick and Schütze (2020) show that the prompting capabilities exhibited by GPT-3 can occur in language models of a much smaller scale when gradient-based finetuning is combined with task-specific unlabeled data. Follow-up work (Tam et al., 2021) improves upon these results without depending on unlabeled data. Unlike GPT-3 and other models which use conventional natural language text-based prompting, we focus on a new hyper-text-based prompting scheme using generative masked language models pre-trained directly over HTML.
For task-specific zero-shot performance, custom pre-training and data augmentation schemes have been developed.
For example, PEGASUS (Zhang et al., 2019) proposes a novel pretraining scheme tailored for summarization which involves masking and generating salient gap sentences from a large news corpus. While PEGASUS is capable of zero-shot summarization, it offers little control over summary attributes such as length and style, which vary across different summarization datasets. WikiTransfer (Fabbri et al., 2021) fine-tunes pretrained models on pseudo-summaries, produced from generic Wikipedia data, which contain characteristics of the target dataset, such as length and level of abstraction. Our proposed model allows fine-grained control over the length of the generated text by specifying the size of the mask. Moreover, by using different prompts, HTLM can produce stylistically varied summaries without dataset-specific augmentation and finetuning.
Another line of work has looked at a hybrid form of prompting that attempts to optimize very few parameters to solve an end task. For example, Li and Liang (2021) argue that optimizing in the continuous prompt space is an effective solution to prompt search, while Aghajanyan et al. (2020b) optimize for a low-rank projection of the full parameter space. For simplicity, we only focus on either full finetuning or zero-shot prompting in this paper.
Attempts have been made to encode architectural priors for structured inputs into transformers as well. Specifically, Ainslie et al. (2020) discuss a new type of model which allows for scalability in input length as well as the ability to encode the structure of the input. We opt to allow HTLM to learn the structure that is available in the HTML directly without encoding any structural priors into the model itself.
Conclusion
In this paper, we proposed HTLM, a hyper-text language model trained on simplified HTML documents from a large-scale web crawl. We showed that by directly modeling HTML through a BART-like objective, we could do structured zero-shot prompting by representing tasks in HTML. Specifically, we outperform the previous best results on zero-shot prompting for summarization by a wide margin by creating prompts that capture the underlying semantics of each summarization dataset. Furthermore, we show that pre-training on structured data improved full finetuning performance relative to other pre-trained models that only modeled natural language.
We also showed additional advantages of modeling hyper-text beyond improved accuracy. HTLM can be used for auto-prompting by simply asking the model to recover the document structure from training samples; these auto-prompts on datasets like Gigaword and CNN/DM outperformed previous state-of-the-art zero-shot approaches. Lastly, we provided an in-depth comparison of the training advantage, in terms of data efficiency, that HTLM has compared to other pre-training approaches. Across the board, HTML prompts were worth more to HTLM than natural language prompts were to our baselines, further showing the efficacy of pre-training on structured data.
Future work can focus on the scaling laws of structured pre-training and prompting. As was seen with GPT-3, the size of the model and the amount of compute utilized have a significant impact on prompting performance.
Table 1: We provide a simple example using our CNN/DM prompt where, by altering the Size Hint value (X), we get summaries of varied length and complexity.
Table 2: HTLM results on zero-shot summarization. HTLM-Manual denotes manually engineered prompts with size hints, while HTLM-Auto-S and HTLM-Auto-NS indicate auto-prompting with and without size hints, respectively. Metrics shown are ROUGE-1/ROUGE-2/ROUGE-L.
E2E (BLEU / NIST / MET / R-L / CIDEr)
Fine-tuning: GPT-2-MEDIUM 68.2 / 8.62 / 46.2 / 71.0 / 2.47 | GPT-2-LARGE 68.5 / 8.78 / 46.0 / 69.9 / 2.45 | HTLM 70.3 / 8.90 / 46.3 / 70.8 / 2.47
Prefix (0.1%): GPT-2-MEDIUM 69.7 / 8.81 / 46.1 / 71.4 / 2.49 | GPT-2-LARGE 70.3 / 8.85 / 46.2 / 71.7 / 2.47 | HTLM 70.1 / 8.85 / 46.1 / 71.2 / 2.45

WebNLG (BLEU S/U/A; MET S/U/A; TER↓ S/U/A)
Fine-tuning: GPT-2-MEDIUM 64.2/27.7/46.5; 0.45/0.30/0.38; 0.33/0.76/0.53 | GPT-2-LARGE 65.3/43.1/55.5; 0.46/0.38/0.42; 0.33/0.53/0.42 | HTLM 65.4/48.4/55.6; 0.46/0.39/0.42; 0.33/0.51/0.40
Prefix (0.1%): GPT-2-MEDIUM 62.9/45.6/55.1; 0.44/0.38/0.41; 0.35/0.49/0.41 | GPT-2-LARGE 63.4/47.7/56.3; 0.45/0.39/0.42; 0.34/0.48/0.40 | HTLM 64.8/46.1/56.3; 0.46/0.38/0.42; 0.33/0.47/0.40

DART (BLEU / MET / TER↓ / Mover / BERT / BLEURT)
Fine-tuning: GPT-2-MEDIUM 46.2 / 0.39 / 0.46 / 0.50 / 0.94 / 0.39 | GPT-2-LARGE 47.0 / 0.39 / 0.46 / 0.51 / 0.94 / 0.40 | HTLM 47.2 / 0.39 / 0.44 / 0.51 / 0.94 / 0.40
Prefix (0.1%): GPT-2-MEDIUM 46.4 / 0.38 / 0.46 / 0.50 / 0.94 / 0.39 | GPT-2-LARGE 46.7 / 0.39 / 0.45 / 0.51 / 0.94 / 0.40 | HTLM 47.1 / 0.39 / 0.45 / 0.50 / 0.94 / 0.39

One-Shot: (values missing)

Table 3: We evaluate GPT-2 MEDIUM, GPT-2 LARGE and HTLM on table-to-text generation on E2E (left), WebNLG (middle) and DART (right).
Table 4: Classification accuracy with zero-shot prompting. We compare our performance to the full GPT-3 model as well as variants of comparable size.
Model | MNLI (Acc-m/mm) | QQP (Acc) | RTE (Acc) | QNLI (Acc) | MRPC (Acc) | CoLA (Mcc) | SST-2 (Acc) | # Params
RoBERTa | 90.2/- | 92.2 | 86.6 | 94.7 | 89.1 | 68.0 | 96.4 | 330M
RoBERTa-R3F | 91.1/91.3 | 92.4 | 88.5 | 95.3 | 91.6 | 71.2 | 97.0 | 330M
T5-Base | 87.1/86.2 | 89.4 | 80.1 | 93.7 | 87.5 | 51.1 | 95.2 | 220M
T5-Large | 89.9/89.6 | 89.9 | 87.2 | 94.8 | 89.9 | 61.2 | 96.3 | 770M
BART-Large | 89.9/90.1 | 92.5 | 87.0 | 94.9 | 90.4 | 62.8 | 96.6 | 400M
HTLM | 90.3/91.4 | 92.6 | 87.1 | 95.1 | 90.8 | 64.3 | 96.9 | 400M
HTLM-R3F | 91.4/92.1 | 92.8 | 89.1 | 95.4 | 91.5 | 69.4 | 97.1 | 400M
HTLM-R3F-Prompt | 91.6/91.2 | 92.9 | 89.4 | 95.7 | 91.7 | 69.8 | 97.3 | 400M

Table 5: Results on the GLUE development set for various fine-tuning methods applied to HTLM.

Average Advantage (# Training Points, P vs. H)
Model | MNLI | BoolQ | CB | RTE | WiC
RoBERTa-Large | 3506 ± 536 | 752 ± 46 | 90 ± 2 | 282 ± 34 | −424 ± 74
T5-Large | 5010 ± 230 | 650 ± 85 | 150 ± 8 | 300 ± 65 | −220 ± 20
BART-Large | 4020 ± 220 | 450 ± 55 | 125 ± 10 | 305 ± 25 | −110 ± 45
HTLM | 6025 ± 440 | 855 ± 205 | 255 ± 35 | 840 ± 45 | 45 ± 25

Table 6: Average advantage (higher is better) in terms of training points for fine-tuning a well-structured prompt (P) against a classical classification head (H).
Average Advantage (# Training Points, P vs. N)
Model | MNLI | BoolQ | CB | RTE | WiC
RoBERTa-Large | 150 ± 252 | 299 ± 81 | 78 ± 2 | 404 ± 68 | −354 ± 166
T5-Large | 300 ± 120 | 350 ± 95 | 150 ± 4 | 608 ± 90 | 20 ± 43
BART-Large | 200 ± 180 | 325 ± 54 | 85 ± 8 | 512 ± 64 | −80 ± 89
HTLM | 692 ± 240 | 565 ± 143 | 255 ± 34 | 640 ± 45 | 80 ± 40

Table 7: Average advantage (higher is better) in terms of training points for fine-tuning a well-structured prompt (P) against a prompt with a non-sensical verbalizer (N).
https://commoncrawl.org/
The DOM or Document Object Model is an interface that treats an HTML document as a tree structure wherein each node is an object representing a part of the document.
Much of the noise in existing text collections derived from the common crawl comes from artifacts that are introduced when returning the text in the relatively arbitrary order it appeared in the original HTML, before the markup was stripped.
A Appendix
Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. 2020a. Better fine-tuning by reducing representational collapse. arXiv preprint arXiv:2008.03156.

Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. 2020b. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv preprint arXiv:2012.13255.

Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding long and structured inputs in transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 268-284.

Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.

Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics.

Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth PASCAL recognizing textual entailment challenge. In TAC.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. 2020. Rethinking attention with performers. arXiv preprint arXiv:2009.14794.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of NAACL-HLT 2019.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.

Marie-Catherine de Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The CommitmentBank: Investigating projection in naturally occurring discourse. To appear in Proceedings of Sinn und Bedeutung 23. Data can be found at https://github.com/mcdm/CommitmentBank/.

A. R. Fabbri, Simeng Han, Haoyuan Li, Haoran Li, Marjan Ghazvininejad, Shafiq R. Joty, Dragomir Radev, and Yashar Mehdad. 2021. Improving zero and few-shot abstractive summarization with intermediate fine-tuning and data augmentation. In NAACL.

Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124-133.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701.

Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. 2016. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.

Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. 2018. Abstractive summarization of Reddit posts with multi-level memory networks. arXiv preprint arXiv:1811.00783.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228-231.

Teven Le Scao and Alexander Rush. 2021. How many data points is a prompt worth? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2627-2636, Online.

Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning.

Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2011. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, volume 46, page 47.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, et al. 2020. DART: Open-domain structured data record to text generation. arXiv preprint arXiv:2007.02871.

Courtney Napoles, Matthew R. Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX), pages 95-100.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. arXiv preprint arXiv:1808.08745.

Jekaterina Novikova, Ondřej Dušek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to-end generation. arXiv preprint arXiv:1706.09254.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. arXiv preprint arXiv:2105.11447.

Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: The word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of NAACL-HLT.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.

Timo Schick and Hinrich Schütze. 2020. It's not just size that matters: Small language models are also few-shot learners. arXiv preprint arXiv:2009.07118.

Mathew Snover, Bonnie Dorr, Richard Schwartz, John Makhoul, Linnea Micciulla, and Ralph Weischedel. 2005. A study of translation error rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas (AMTA 06), pages 223-231.

Derek Tam, R. R. Menon, M. Bansal, Shashank Srivastava, and Colin Raffel. 2021. Improving and simplifying pattern exploiting training. ArXiv, abs/2103.11955.

Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566-4575.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium.

Sinong Wang, Belinda Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768.

Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830.

Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777.
252,668,463 | Scale-invariant Bayesian Neural Networks with Connectivity Tangent Kernel | Explaining generalizations and preventing over-confident predictions are central goals of studies on the loss landscape of neural networks. Flatness, defined as loss invariability under perturbations of a pre-trained solution, is widely accepted as a predictor of generalization in this context. However, the problem that flatness and generalization bounds can be changed arbitrarily according to the scale of a parameter has been pointed out, and previous studies only partially solved the problem, with restrictions: counter-intuitively, their generalization bounds were still variant under function-preserving parameter scaling transformations, or were limited to an impractical network structure. As a more fundamental solution, we propose new prior and posterior distributions invariant to scaling transformations by decomposing the scale and connectivity of parameters, thereby allowing the resulting generalization bound to describe the generalizability of a broad class of networks with the more practical class of transformations such as weight decay with batch normalization. We also show that the above issue adversely affects the uncertainty calibration of Laplace approximation and propose a solution using our invariant posterior. We empirically demonstrate that our posterior provides effective flatness and calibration measures with low complexity in such practical parameter-transformation cases, supporting its practical effectiveness in line with our rationale. | [
52920837,
204893960,
219635787,
6021932,
3693334,
14124313,
5834589,
53104061
] | Scale-invariant Bayesian Neural Networks with Connectivity Tangent Kernel
Sungyub Kim [email protected]
Korea Advanced Institute of Science and Technology (KAIST)
Sihwan Park [email protected]
Korea Advanced Institute of Science and Technology (KAIST)
Kyungsu Kim
Medical AI Research Center
Research Institute for Future Medicine
Samsung Medical Center
SeoulKorea
Department of Data Convergence and Future Medicine
Sungkyunkwan University School of Medicine
SeoulKorea
Eunho Yang [email protected]
Korea Advanced Institute of Science and Technology (KAIST)
AITRICS
SeoulKorea
Scale-invariant Bayesian Neural Networks with Connectivity Tangent Kernel
Explaining generalizations and preventing over-confident predictions are central goals of studies on the loss landscape of neural networks. Flatness, defined as loss invariability under perturbations of a pre-trained solution, is widely accepted as a predictor of generalization in this context. However, the problem that flatness and generalization bounds can be changed arbitrarily according to the scale of a parameter has been pointed out, and previous studies only partially solved the problem, with restrictions: counter-intuitively, their generalization bounds were still variant under function-preserving parameter scaling transformations, or were limited to an impractical network structure. As a more fundamental solution, we propose new prior and posterior distributions invariant to scaling transformations by decomposing the scale and connectivity of parameters, thereby allowing the resulting generalization bound to describe the generalizability of a broad class of networks with the more practical class of transformations such as weight decay with batch normalization. We also show that the above issue adversely affects the uncertainty calibration of Laplace approximation and propose a solution using our invariant posterior. We empirically demonstrate that our posterior provides effective flatness and calibration measures with low complexity in such practical parameter-transformation cases, supporting its practical effectiveness in line with our rationale.

* Co-corresponding author
Introduction
Though neural networks (NNs) have experienced extraordinary success, understanding the generalizability of NNs and successfully using them in real-world contexts still faces a number of obstacles [1, 2]. It is a well-known enigma, for instance, why such NNs generalize well and do not suffer from overfitting [3, 4, 5]. Recent research on the loss landscape of NNs seeks to reduce these obstacles. Hochreiter and Schmidhuber [6] proposed a theory known as flat minima (FM): the flatness of local minima (i.e., loss invariability w.r.t. parameter perturbations) is positively correlated with network generalizability, as empirically demonstrated by Jiang et al. [7]. Concerning overconfidence, MacKay [8] suggested an approximated Bayesian posterior using the curvature information of local minima, and Daxberger et al. [9] underlined its practical utility.

Nonetheless, the limitations of the FM hypothesis were pointed out by Dinh et al. [10] and Li et al. [11]. By rescaling two successive layers, Dinh et al. [10] demonstrated that it is possible to modify a flatness measure without modifying the function, hence allowing extraneous variability to be captured in the computation of generalizability. Meanwhile, Li et al. [11] argued that weight decay regularization [12] is an important limitation of the FM hypothesis, as it leads to a contradiction of the FM hypothesis in practice: weight decay sharpens the pre-trained solutions of NNs by downscaling the parameters, whereas weight decay actually improves the generalization performance of NNs in general cases [13]. In short, they suggest that scaling transformations on network parameters (e.g., re-scaling layers and weight decay) may lead to a contradiction of the FM hypothesis.

To resolve this contradiction, we investigate PAC-Bayesian prior and posterior distributions to derive a new scale-invariant generalization bound. Unlike related works [14, 15], our bound guarantees invariance for a general class of function-preserving parameter scaling transformations with a broad class of networks [16] (Secs. 2.2 and 2.3). This bound is derived from the scale invariance of the prior and posterior distributions, which guarantees not only the scale invariance of the bound but also that of its substance, a Kullback-Leibler (KL) divergence-based kernel; we name this new scale-invariant term the empirical Connectivity Tangent Kernel (CTK), as it can be considered a modification of the empirical Neural Tangent Kernel [17]. Consequently, we define a novel sharpness metric named Connectivity Sharpness (CS) as the trace of the CTK. Empirically, we verify that CS predicts the generalization performance of NNs better than existing sharpness measures [18, 19, 20] at low complexity (Sec. 2.5), confirming its stronger correlation to generalization error (Sec. 4.1).

We also found that the contradiction of the FM hypothesis turns into a meaningless predictive-uncertainty-amplifying issue in the Bayesian NN regime (Sec. 3.1), and that this issue can be alleviated by using a Bayesian NN based on the posterior distribution of our PAC-Bayesian analysis. We call the resulting Bayesian NN Connectivity Laplace (CL), as it can be seen as a variation of the Laplace Approximation (LA; MacKay [8]) using a different Jacobian. In particular, we provide pitfalls of weight decay regularization with BN in LA and its remedy using our posterior (Sec. 3.1) to suggest practical utilities of our Bayesian NNs (Sec. 4.2). We summarize our contributions as follows:
• Unlike related studies, our resulting (PAC-Bayes) generalization bound guarantees invariance for a general class of function-preserving parameter scaling transformations with a broad class of networks (Secs. 2.2 and 2.3). Based on this novel PAC-Bayes bound, we propose a low-complexity sharpness measure (Sec. 2.5).
• We provide pitfalls of weight decay regularization with BN in LA and its remedy using our posterior (Sec. 3.1).
• We empirically confirm the strong correlation between generalization error and our sharpness metric (Sec. 4.1) and visualize pitfalls of weight decay with LA in synthetic data and practical utilities of our Bayesian NNs (Sec. 4.2).
PAC-Bayes bound with scale-invariance
Background
Setup and Definitions We consider a neural network (NN) f(·, ·) : R^D × R^P → R^K, given input x ∈ R^D and network parameter θ ∈ R^P. Hereafter, we treat a one-dimensional vector as a single-column matrix unless otherwise stated. We use the output of the NN, f(x, θ), as a prediction for input x. Let S := {(x_n, y_n)}_{n=1}^N denote the independently and identically distributed (i.i.d.) training data drawn from the true data distribution D, where x_n ∈ R^D and y_n ∈ R^K are the input and output representations of the n-th training instance, respectively. For simplicity, we denote the concatenated inputs and outputs of all instances as X := {x : (x, y) ∈ S} and Y := {y : (x, y) ∈ S}, respectively, and f(X, θ) ∈ R^{NK} as a concatenation of {f(x_n, θ)}_{n=1}^N. Given a prior distribution of the network parameter p(θ) and a likelihood function p(S|θ) := ∏_{n=1}^N p(y_n|x_n, θ) := ∏_{n=1}^N p(y_n|f(x_n, θ)), Bayesian inference defines the posterior distribution of the network parameter θ as

p(θ|S) = (1/Z(S)) exp(−L(S, θ)) := (1/Z(S)) p(θ) p(S|θ),    Z(S) := ∫ p(θ) p(S|θ) dθ,

where L(S, θ) := −log p(θ) − ∑_{n=1}^N log p(y_n|x_n, θ) is the training loss and Z(S) is the normalizing factor. For example, the likelihood function for a regression task is Gaussian: p(y|x, θ) = N(y | f(x, θ), σ² I_K), where σ is the (homoscedastic) observation noise scale. For a classification task, we treat it as a one-hot regression task following Lee et al. [21] and He et al. [22]. While we applied this modification for theoretical tractability, Lee et al. [23] and Hui and Belkin [24] showed that this modification offers reasonable performance competitive with the cross-entropy loss. Details on this treatment are given in Appendix B.
Laplace Approximation In general, the exact computation of the Bayesian posterior of a network parameter is intractable. The Laplace Approximation (LA; [8]) approximates the posterior distribution with a Gaussian defined as p_LA(ψ|S) ∼ N(ψ | θ*, (∇²_θ L(S, θ*))^{-1}), where θ* ∈ R^P is a parameter pre-trained on the training loss and ∇²_θ L(S, θ*) ∈ R^{P×P} is the Hessian of the loss function w.r.t. the parameter at θ*.
Recent works on LA replace the Hessian matrix with the (generalized) Gauss-Newton matrix to make computation tractable [25, 26]. With this approximation, the LA posterior for the regression problem can be represented as:

p_LA(ψ|S) ∼ N(ψ | θ*, (I_P/α² + J_θ^⊤ J_θ/σ²)^{-1}),    (1)

where the first term inside the inverse is the damping term and the second is the curvature term, α, σ > 0, I_P ∈ R^{P×P} is the identity matrix, and J_θ ∈ R^{NK×P} is the concatenation of J_θ(x, θ*) ∈ R^{K×P} (the Jacobian of f w.r.t. θ at input x and parameter θ*) along the training inputs X. Since the covariance in equation 1 is the inverse of a P × P matrix, further sub-curvature approximations were considered, including diagonal, Kronecker-factored approximate curvature (KFAC), last-layer, and sub-network approximations [27, 28, 29]. Furthermore, they found that proper selection of the prior scale α is needed to balance the dilemma between overconfidence and underfitting in LA.
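As a toy illustration (not the paper's implementation) of how the damping and curvature terms of equation 1 combine, the GGN Laplace covariance can be assembled from a stacked Jacobian in a few lines; the sizes and the random Jacobian below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
NK, P = 40, 8                       # toy sizes: N*K stacked outputs, P parameters
alpha, sigma = 1.0, 0.5
J_theta = rng.normal(size=(NK, P))  # stands in for the Jacobian of f w.r.t. theta at theta*

# Eq. 1: posterior precision = damping + curvature; covariance is its inverse.
precision = np.eye(P) / alpha**2 + J_theta.T @ J_theta / sigma**2
cov = np.linalg.inv(precision)

# A common sub-curvature shortcut: keep only the diagonal of the precision.
cov_diag = 1.0 / np.diag(precision)
```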
PAC-Bayes bound with data-dependent prior We consider the PAC-Bayes generalization error bound for classification used in McAllester [30] and Perez-Ortiz et al. [31] (specifically, equation (7) of Perez-Ortiz et al. [31]). Let P be any PAC-Bayes prior distribution over R^P independent of the training dataset S, and let err(·, ·) : R^K × R^K → [0, 1] be an error function defined separately from the loss function. For any constant δ ∈ (0, 1], any λ > 0, and any PAC-Bayes posterior distribution Q over R^P, the following holds with probability at least 1 − δ:

err_D(Q) ≤ err_S(Q) + sqrt( (KL[Q ∥ P] + log(2√N/δ)) / (2N) ),

where err_D(Q) := E_{(x,y)∼D, θ∼Q}[err(f(x, θ), y)], err_S(Q) := E_{(x,y)∼S, θ∼Q}[err(f(x, θ), y)], and N denotes the cardinality of S. That is, err_D(Q) and err_S(Q) are the generalization error and the empirical error, respectively. The only restriction on P here is that it cannot depend on the dataset S.
Following the recent discussion in Perez-Ortiz et al. [31], one can construct data-dependent PAC-Bayes bounds by (i) randomly partitioning the dataset S into S_Q and S_P so that they are independent, (ii) using a PAC-Bayes prior distribution P_D dependent only on S_P (i.e., independent of S_Q, so P_D belongs to P), (iii) using a PAC-Bayes posterior distribution Q dependent on the entire dataset S, and (iv) computing the empirical error err_{S_Q}(Q) with the target subset S_Q (not the entire dataset S). In summary, one can modify the aforementioned PAC-Bayes bound through the data-dependent prior P_D as
err_D(Q) ≤ err_{S_Q}(Q) + sqrt( (KL[Q ∥ P_D] + log(2√N_Q/δ)) / (2N_Q) ),    (2)
where N_Q is the cardinality of S_Q. We denote the input and output sets of the split datasets (S_P, S_Q) as X_P, Y_P, X_Q, Y_Q for simplicity.
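Given the KL term, evaluating equation 2 is a one-line computation; the helper below is our own illustrative sketch, with the empirical error, KL value, and N_Q as stand-in inputs.

```python
import math

def pac_bayes_bound(err_SQ: float, kl: float, n_q: int, delta: float = 0.1) -> float:
    # Data-dependent PAC-Bayes bound of Eq. 2: empirical error on S_Q plus
    # a complexity term driven by KL[Q || P_D].
    return err_SQ + math.sqrt((kl + math.log(2.0 * math.sqrt(n_q) / delta)) / (2.0 * n_q))

print(pac_bayes_bound(err_SQ=0.05, kl=100.0, n_q=5000, delta=0.1))
```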
Design of PAC-Bayes prior and posterior
Our goal is to construct scale-invariant P_D and Q. To this end, we first assume a parameter θ* ∈ R^P pre-trained on the negative log-likelihood function with S_P. This parameter can be attained with standard NN optimization procedures (e.g., stochastic gradient descent (SGD) with momentum). Then, we consider the linearized NN at the pre-trained parameter with the auxiliary variable c ∈ R^P as
g^{lin}_{θ*}(x, c) := f(x, θ*) + J_θ(x, θ*) diag(θ*) c,    (3)
where diag is the vector-to-matrix diagonal operator. Note that equation 3 is the first-order Taylor approximation (i.e., linearization) of the NN with perturbation θ* ⊙ c given input x and parameter θ*:

g^{pert}_{θ*}(x, c) := f(x, θ* + θ* ⊙ c) = f(x, θ* + diag(θ*) c) ≈ g^{lin}_{θ*}(x, c),

where ⊙ denotes element-wise multiplication (Hadamard product) of two vectors. Here we write the perturbation in parameter space as θ* ⊙ c instead of as a single variable such as δ ∈ R^P. This design of linearization matches the scale of the perturbation (i.e., diag(θ*) c) to the scale of θ* in a component-wise manner. A similar idea was proposed in the context of pruning at initialization [32, 33] to measure the importance of each connection independently of its weight. In this context, our perturbation can be viewed as a perturbation in connectivity space, obtained by decomposing the scale and connectivity of the parameter.
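The following self-contained sketch, a toy 1-2-1 ReLU network of our own construction, checks this intuition numerically: rescaling two successive layers by κ and 1/κ preserves the function and the connectivity-space Jacobian J_θ(x, θ) diag(θ), while the plain parameter-space Jacobian changes.

```python
import numpy as np

def f(x, theta):
    # Toy 1-2-1 ReLU network: f(x) = w2 . relu(w1 * x)
    w1, w2 = theta[:2], theta[2:]
    return np.dot(w2, np.maximum(w1 * x, 0.0))

def jac(x, theta, eps=1e-6):
    # Finite-difference Jacobian of f w.r.t. theta.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(x, theta + e) - f(x, theta - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
theta, x, kappa = rng.normal(size=4), 1.3, 10.0
# Function-preserving rescaling T of the two successive layers (ReLU homogeneity).
theta_T = np.concatenate([kappa * theta[:2], theta[2:] / kappa])

assert np.isclose(f(x, theta), f(x, theta_T))                   # same function
J_c, J_c_T = jac(x, theta) * theta, jac(x, theta_T) * theta_T   # connectivity Jacobians
print(np.allclose(J_c, J_c_T, atol=1e-4))                       # True: invariant under T
print(np.allclose(jac(x, theta), jac(x, theta_T)))              # False: parameter Jacobian is not
```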
Based on this, we define a data-dependent prior (P_D) over connectivity as

P_{θ*}(c) := N(c | 0_P, α² I_P).    (4)
This distribution can be translated to a distribution over parameters by considering the distribution of the perturbed parameter ψ := θ* + θ* ⊙ c: P_{θ*}(ψ) := N(ψ | θ*, α² diag(θ*)²). We now define the PAC-Bayes posterior over connectivity Q(c) as follows:
Q_{θ*}(c) := N(c | µ_Q, Σ_Q),    (5)

µ_Q := Σ_Q J_c^⊤ (Y − f(X, θ*)) / σ² = Σ_Q diag(θ*) J_θ^⊤ (Y − f(X, θ*)) / σ²,    (6)

Σ_Q := (I_P/α² + J_c^⊤ J_c/σ²)^{-1} = (I_P/α² + diag(θ*) J_θ^⊤ J_θ diag(θ*)/σ²)^{-1},    (7)
where J_c ∈ R^{NK×P} is the concatenation of J_c(x, 0_P) := J_θ(x, θ*) diag(θ*) ∈ R^{K×P} (i.e., the Jacobian of the perturbed NN g^{pert}_{θ*}(x, c) w.r.t. c at input x and connectivity 0_P) along the training inputs X. Our PAC-Bayes posterior Q_{θ*} is the posterior of the Bayesian linear regression problem w.r.t. connectivity c:

y_i = f(x_i, θ*) + J_θ(x_i, θ*) diag(θ*) c + ε_i,

where (x_i, y_i) ∈ S and the ε_i are i.i.d. samples of N(ε_i | 0_K, σ² I_K). Again, it is equivalent to the posterior distribution over parameters Q_{θ*}(ψ) = N(ψ | θ* + θ* ⊙ µ_Q, (diag(θ*)^{-2}/α² + J_θ^⊤ J_θ/σ²)^{-1}), where diag(θ*)^{-2} := (diag(θ*)^{-1})², assuming that all components of θ* are non-zero. Note that this assumption can be easily satisfied by considering the prior and posterior distributions of only the non-zero components of NNs. Although we choose this restriction for theoretical tractability, future work can modify this choice to achieve diverse predictions by considering the distribution of the zero coordinates. We refer to Appendix C for detailed derivations of Q_{θ*}(c) and Q_{θ*}(ψ).
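Since equations 5-7 are exactly Bayesian ridge regression over c, they admit a direct closed-form sketch; the toy sizes, random Jacobian, and residuals below are placeholders for the quantities defined above.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 32, 6
alpha, sigma = 1.0, 0.5
theta_star = rng.normal(size=P) + 2.0  # pre-trained parameter (kept away from zero)
J_theta = rng.normal(size=(N, P))      # stands in for the Jacobian of f w.r.t. theta on X
J_c = J_theta * theta_star             # Jacobian w.r.t. connectivity
resid = rng.normal(size=N)             # stands in for Y - f(X, theta*)

Sigma_Q = np.linalg.inv(np.eye(P) / alpha**2 + J_c.T @ J_c / sigma**2)  # Eq. 7
mu_Q = Sigma_Q @ J_c.T @ resid / sigma**2                               # Eq. 6

c = rng.multivariate_normal(mu_Q, Sigma_Q, size=4)  # connectivity samples from Q
psi = theta_star + theta_star * c                   # parameter samples psi = theta* + theta* (.) c
```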
Remark 2.1 (Two-phase training). The prior distribution in equation 4 is a data-dependent prior since it depends on the pre-trained parameter θ* optimized on the training dataset S_P. On the other hand, the posterior distribution in equation 5 depends on both S_P (through θ*) and S_Q (through Bayesian linear regression). Intuitively, one attains the PAC-Bayes posterior Q with two-phase training: pre-training with S_P and Bayesian fine-tuning with S_Q. A similar idea of linearized fine-tuning was proposed in the context of transfer learning in Achille et al. [34] and Maddox et al. [35].

Now we provide an invariance property of the prior and posterior distributions w.r.t. function-preserving scale transformations:

Proposition 2.2 (Scale-invariance of PAC-Bayes prior and posterior). Let T : R^P → R^P be an invertible diagonal linear transformation such that f(x, T(ψ)) = f(x, ψ), ∀x ∈ R^D, ∀ψ ∈ R^P. Then, both the PAC-Bayes prior and posterior are invariant under T:

P_{T(θ*)}(c) =_d P_{θ*}(c),    Q_{T(θ*)}(c) =_d Q_{θ*}(c),

where =_d denotes equality in distribution. Furthermore, the generalization and empirical errors are also invariant to T. The main intuition behind this proposition is that the Jacobian w.r.t. connectivity is invariant to function-preserving scaling transformations, i.e., J_θ(x, T(θ*)) diag(T(θ*)) = J_θ(x, θ*) diag(θ*).
Resulting PAC-Bayes bound
Now we plug the prior and posterior into the modified PAC-Bayes generalization error bound in equation 2. Accordingly, we obtain a novel generalization error bound, named PAC-Bayes-CTK, which is guaranteed to be invariant to scale transformations (hence without the contradiction of the FM hypothesis mentioned in Sec. 1).

Theorem 2.3 (PAC-Bayes-CTK and its invariance). Assume a parameter θ* pre-trained on data S_P. Applying P_{θ*} and Q_{θ*} to the data-dependent PAC-Bayes bound (equation 2), we get

err_D(Q_{θ*}) ≤ err_{S_Q}(Q_{θ*}) + sqrt( µ_Q^⊤ µ_Q / (4α² N_Q) + ∑_{i=1}^P h(β_i) / (4N_Q) + log(2√N_Q/δ) / (2N_Q) ),    (8)

where the first term under the square root is the (average) perturbation term, the second is the sharpness term (together they constitute the KL divergence term KL[Q_{θ*} ∥ P_{θ*}]/(2N_Q)), {β_i}_{i=1}^P are the eigenvalues of (I_P + (α²/σ²) J_c^⊤ J_c)^{-1}, and h(x) := x − log(x) − 1.
This upper bound is invariant to T for the function-preserving scale transformation in Proposition 2.2.
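To make the bound concrete, the sketch below evaluates the perturbation term, the sharpness term via the β_i, and the resulting gap of equation 8 on synthetic placeholders for J_c and the residuals.

```python
import numpy as np

rng = np.random.default_rng(3)
N_Q, P = 32, 6
alpha, sigma, delta = 1.0, 0.5, 0.1
J_c = rng.normal(size=(N_Q, P))  # placeholder connectivity Jacobian on X
resid = rng.normal(size=N_Q)     # placeholder residuals Y - f(X, theta*)

Sigma_Q = np.linalg.inv(np.eye(P) / alpha**2 + J_c.T @ J_c / sigma**2)
mu_Q = Sigma_Q @ J_c.T @ resid / sigma**2

h = lambda x: x - np.log(x) - 1.0
betas = np.linalg.eigvalsh(np.linalg.inv(np.eye(P) + (alpha / sigma) ** 2 * J_c.T @ J_c))

perturbation = mu_Q @ mu_Q / (4 * alpha**2 * N_Q)
sharpness = np.sum(h(betas)) / (4 * N_Q)
gap = np.sqrt(perturbation + sharpness + np.log(2 * np.sqrt(N_Q) / delta) / (2 * N_Q))
print(perturbation, sharpness, gap)
```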
Note that recent works on resolving the FM contradiction focused only on the scale-invariance of the sharpness metric: indeed, their generalization bounds are not invariant to scale transformations due to scale-dependent terms (equation (34) in Tsuzuku et al. [14] and equation (6) in Kwon et al. [15]). On the other hand, the generalization bound in Petzka et al. [16] (Theorem 11 in their paper) only holds for single-layer NNs, whereas ours has no restriction on network structure. Therefore, to the best of our knowledge, our PAC-Bayes bound is the first scale-invariant PAC-Bayes bound. To highlight our theoretical implications, we show representative cases of T from Proposition 2.2 in Appendix D (e.g., weight decay for networks with BN), where the generalization bounds of other studies are variant but ours is invariant, resolving the FM contradiction at the bound level.
The following corollary explains why we name PAC-Bayes bound in Theorem 2.3 PAC-Bayes-CTK. Note that Corollary 2.4 clarifies why P i=1 h(β i )/4N Q in Theorem 2.3 is called the sharpness term of PAC-Bayes-CTK. As eigenvalues of CTK measures the sensitivity of output w.r.t. perturbation on connectivity, a sharp pre-trained parameter would have large CTK eigenvalues, so increasing the sharpness term and the generalization gap by according to Corollary 2.4.
Finally, Proposition 2.5 shows that the empirical CTK is also scale-invariant.

Proposition 2.5 (Scale-invariance of empirical CTK). Let $T : \mathbb{R}^P \to \mathbb{R}^P$ be a function-preserving scale transformation as in Proposition 2.2. Then the empirical CTK at parameter $\psi$ is invariant under $T$:
$$C^{T(\psi)}_{xy} := J_\theta(x, T(\psi))\,\mathrm{diag}(T(\psi))^2\,J_\theta(y, T(\psi))^\top = C^{\psi}_{xy}, \quad \forall x, y \in \mathbb{R}^D,\ \forall \psi \in \mathbb{R}^P. \tag{9}$$
Remark 2.6 (Connections to empirical NTK). The empirical CTK $C^{\psi}_{xy}$ resembles the existing empirical Neural Tangent Kernel (NTK) at parameter $\psi$ [17]: $\Theta^{\psi}_{xy} := J_\theta(x, \psi) J_\theta(y, \psi)^\top \in \mathbb{R}^{K\times K}$. Note that the deterministic NTK in Jacot et al. [17] is the infinite-width limiting kernel at initialized parameters, while the empirical NTK can be defined at any parameter of a finite-width NN. The main difference between the empirical CTK and the empirical NTK is in the definition of the Jacobian: in the CTK, the Jacobian is computed w.r.t. connectivity $c$, while the empirical NTK uses the Jacobian w.r.t. parameters $\theta$. Therefore, another PAC-Bayes bound can be derived from the linearization of $f^{pert}_{\theta^*}(x, \delta) := f(x, \theta^* + \delta) \approx f^{lin}_{\theta^*}(x, \delta)$, where $f^{lin}_{\theta^*}(x, \delta) := f(x, \theta^*) + J_\theta(x, \theta^*)\delta$.
As this bound is related to the eigenvalues of $\Theta^{\theta^*}_{\mathcal{X}}$, we call it PAC-Bayes-NTK and provide its derivation in Appendix A. Note that PAC-Bayes-NTK is not scale-invariant, since $\Theta^{T(\psi)}_{xy} \neq \Theta^{\psi}_{xy}$ in general.
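To make the kernel contrast in Remark 2.6 concrete, below is a minimal PyTorch sketch (a hypothetical toy network, not the paper's code) that compares the empirical NTK and CTK under a function-preserving rescaling of parameters preceding a BN layer. For brevity it aggregates the network outputs, so the printed numbers are sums of kernel entries; BN is kept in training mode so that the rescaling is function-preserving (up to the small BN epsilon).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy batch-normalized network; train mode so rescaling the parameters
# preceding BN is function-preserving (BN normalizes with batch statistics).
model = nn.Sequential(nn.Linear(4, 8), nn.BatchNorm1d(8), nn.ReLU(), nn.Linear(8, 1))
model.train()
x, y = torch.randn(5, 4), torch.randn(5, 4)

def grad_vec(inp):
    """Gradient of the summed output w.r.t. all parameters, flattened."""
    out = model(inp).sum()
    gs = torch.autograd.grad(out, [p for p in model.parameters()])
    return torch.cat([g.reshape(-1) for g in gs])

def kernels(inp_a, inp_b):
    ja, jb = grad_vec(inp_a), grad_vec(inp_b)
    theta = torch.cat([p.detach().reshape(-1) for p in model.parameters()])
    # NTK entry: J_theta(a) J_theta(b)^T; CTK entry: J_theta(a) diag(theta)^2 J_theta(b)^T
    return (ja @ jb).item(), ((ja * theta) @ (jb * theta)).item()

ntk0, ctk0 = kernels(x, y)
with torch.no_grad():                 # function-preserving rescaling, gamma = 2
    model[0].weight.mul_(2.0)
    model[0].bias.mul_(2.0)
ntk1, ctk1 = kernels(x, y)
print(f"NTK: {ntk0:.4f} -> {ntk1:.4f}")  # changes with the parameter scale
print(f"CTK: {ctk0:.4f} -> {ctk1:.4f}")  # unchanged up to numerical error
```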
Computing the approximate bound in real-world problems
To verify that the PAC-Bayes bound in Theorem 2.3 is non-vacuous, we compute it for real-world problems. We use the CIFAR-10 and CIFAR-100 datasets [36], where the 50K training instances are randomly partitioned into $S_P$ of cardinality 45K and $S_Q$ of cardinality 5K. We pre-train ResNet-18 [37] with a mini-batch size of 1K on $S_P$ using SGD with initial learning rate 0.4 and momentum 0.9. We use cosine annealing for learning rate scheduling [38] with a warmup for the initial 10% of training steps. We fix $\delta = 0.1$ and select $\alpha, \sigma$ based on the negative log-likelihood of $S_Q$. To compute equation 8, one needs (i) the $\mu_Q$-based perturbation term, (ii) the $C^{\theta^*}_{\mathcal{X}}$-based sharpness term, and (iii) samples from the PAC-Bayes posterior $Q_{\theta^*}$. $\mu_Q$ in equation 6 can be obtained by minimizing
$$\mu_Q = \arg\min_{c \in \mathbb{R}^P} L(c), \qquad L(c) = \frac{1}{2N}\|\mathcal{Y} - f(\mathcal{X}, \theta^*) - J_c c\|^2 + \frac{\sigma^2}{2\alpha^2 N}\, c^\top c$$
by the first-order optimality condition. Note that this is a convex optimization problem w.r.t. $c$, since $c$ is the parameter of a linear regression problem. We use the Adam optimizer [39] with a fixed learning rate of 1e-4 to solve it. For the sharpness term, we apply the Lanczos algorithm to approximate the eigenspectrum of $C^{\theta^*}_{\mathcal{X}}$ following [40], using 100 Lanczos iterations based on their setting. Lastly, we estimate the empirical error and test error with 8 samples from the CL/LL implementation of the Randomize-Then-Optimize (RTO) framework [41, 42]. We refer to Appendix E for the RTO implementation of CL/LL. Table 1 reports the bound and related terms of the resulting model. First of all, we found that our estimated PAC-Bayes-CTK is non-vacuous [43]: the estimated bound is better than guessing at random. Note that a non-vacuous bound is not trivial in PAC-Bayes analysis: only a few PAC-Bayes works [44, 43, 31] verified the non-vacuous property of their bounds, while others [45, 14] did not. To check the invariance property of our bound, we scale the scale-invariant parameters in ResNet-18 (i.e., parameters preceding BN layers) by fixed constants {0.5, 1.0, 2.0, 4.0}. Note that this scaling does not change the function represented by the NN due to the BN layers, so the error bound should be preserved. Table 1 shows that our bound and related terms are invariant to these transformations. On the other hand, PAC-Bayes-NTK is very sensitive to the parameter scale, as shown in Table 7 in Appendix J.
Connectivity Sharpness and its efficient computation
Now, we focus on the fact that the trace of the CTK is also invariant to the parameter scale by Proposition 2.5. Unlike PAC-Bayes-CTK/NTK, the trace of the CTK/NTK does not require an onerous hyper-parameter selection of $\delta, \alpha, \sigma$. Therefore, we simply define $\mathrm{CS}(\theta^*) := \mathrm{tr}(C^{\theta^*}_{\mathcal{X}})$ as a practical sharpness measure at $\theta^*$, named Connectivity Sharpness (CS), to detour the complex computation of PAC-Bayes-CTK. This metric can easily be applied to find NNs with better generalization, similar to other sharpness metrics (e.g., trace of Hessian), as shown in [7]. We evaluate the detection performance of CS in Sec. 4.1.
The following corollary shows how CS can explain the generalization performance of NNs, conceptually.
Corollary 2.7 (Connectivity Sharpness, Informal). Assume the CTK and the KL divergence term of PAC-Bayes-CTK as defined in Theorem 2.3. Then, if CS vanishes to zero or diverges to infinity, the KL divergence term of PAC-Bayes-CTK does so as well.
As the trace of a matrix can be efficiently estimated by Hutchinson's method [46], one can compute CS without explicitly computing the entire CTK. We refer to Appendix F for the detailed procedure of computing CS. As CS is invariant to function-preserving scale transformations by Proposition 2.5, it also does not contradict the FM hypothesis.
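As a minimal Hutchinson-style sketch (hypothetical helper, not Appendix F verbatim): since $\mathrm{tr}(C^{\theta^*}_{\mathcal{X}}) = \mathbb{E}_v[v^\top J_c J_c^\top v] = \mathbb{E}_v \|\mathrm{diag}(\theta^*) J_\theta^\top v\|^2$ for zero-mean unit-variance probes $v$, each probe costs one backward pass and no $NK \times NK$ matrix is ever formed.

```python
import torch

def connectivity_sharpness(model, X, n_probes=16):
    """CS(theta*) = tr(C) estimated as E_v ||diag(theta*) J_theta^T v||^2
    with Rademacher probes v (Hutchinson's method)."""
    params = [p for p in model.parameters() if p.requires_grad]
    out = model(X)  # raw outputs f(X, theta*), shape (N, K)
    est = 0.0
    for _ in range(n_probes):
        v = torch.randint_like(out, low=0, high=2) * 2 - 1  # Rademacher probe
        vjp = torch.autograd.grad(out, params, grad_outputs=v, retain_graph=True)
        est += sum(((g * p.detach()) ** 2).sum() for g, p in zip(vjp, params)).item()
    return est / n_probes
```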
Bayesian NNs with scale-invariance
This section provides the practical implications of the posterior distribution used in our PAC-Bayes analysis. We interpret the PAC-Bayes posterior $Q_{\theta^*}$ in equation 5 as a modified result of the LA [8]. Then, we show that this modification improves the existing LA in the presence of weight decay regularization. Finally, we explain how one can efficiently construct a Bayesian NN from equation 5.
Pitfalls of weight decay with BN in Laplace Approximation
One can view the parameter-space version of $Q_{\theta^*}$ as a modified version of $p_{LA}(\psi|S)$ in equation 1, obtained by (i) replacing the isotropic damping term with the parameter-scale-dependent damping term $\mathrm{diag}(\theta^*)^{-2}$ and (ii) adding the perturbation $\theta^* \odot \mu_Q$ to the mean of the Gaussian distribution. In this section, we focus on the effect of replacing the damping term of LA in the presence of weight decay for batch-normalized NNs. We refer to [47, 48] for a discussion of the effect of adding the perturbation to the LA with linearized NNs.
The main difference between the covariance term of LA in equation 1 and equation 7 is the definition of the Jacobian (i.e., parameter or connectivity), similar to the difference between the empirical CTK and NTK in Remark 2.6. Therefore, we name
$$p_{CL}(\psi|S) \sim \mathcal{N}\big(\psi \mid \theta^*,\ (\mathrm{diag}(\theta^*)^{-2}/\alpha^2 + J_\theta^\top J_\theta/\sigma^2)^{-1}\big)$$
the Connectivity Laplace (CL) approximate posterior.
To compare the CL posterior with the existing LA, we explain how weight decay regularization with BN produces unexpected side effects in the existing LA. This side effect can be quantified if we consider a linearized NN for LA, called Linearized Laplace (LL; Foong et al. [49]). Note that LL is well known to be better calibrated than the non-linearized LA for estimating 'in-between' uncertainty. By assuming $\sigma^2 \ll \alpha^2$, the predictive distributions of the linearized NN for equation 1 and for CL are
$$f^{lin}_{\theta^*}(x, \psi)\,|\,p_{LA}(\psi|S) \sim \mathcal{N}\big(f(x, \theta^*),\ \alpha^2 \Theta^{\theta^*}_{xx} - \alpha^2 \Theta^{\theta^*}_{x\mathcal{X}} \Theta^{\theta^*\,-1}_{\mathcal{X}} \Theta^{\theta^*}_{\mathcal{X}x}\big) \tag{10}$$
$$f^{lin}_{\theta^*}(x, \psi)\,|\,p_{CL}(\psi|S) \sim \mathcal{N}\big(f(x, \theta^*),\ \alpha^2 C^{\theta^*}_{xx} - \alpha^2 C^{\theta^*}_{x\mathcal{X}} C^{\theta^*\,-1}_{\mathcal{X}} C^{\theta^*}_{\mathcal{X}x}\big) \tag{11}$$
for any input $x \in \mathbb{R}^D$, where $\mathcal{X}$ in the subscript of the CTK/NTK denotes concatenation over the dataset. We refer to Appendix G for the detailed derivations. The following proposition shows how weight decay regularization on the scale-invariant parameters introduced by BN can amplify the predictive uncertainty of equation 10. Note that the primary regularizing effect of weight decay originates from the regularization of scale-invariant parameters [50, 13].
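Equations 10 and 11 share one algebraic form; a minimal sketch (hypothetical helper, scalar-output case) is below. Passing NTK matrices yields the LL variance and passing CTK matrices yields the CL variance; by the scale-invariance of the CTK, only the latter is unaffected by the rescaling considered in Proposition 3.1 next.

```python
import torch

def laplace_predictive_var(k_xx, k_xX, K_XX, alpha):
    """alpha^2 * (k(x,x) - k(x,X) K(X,X)^{-1} k(X,x)) for a scalar-output model;
    use NTK matrices for LL (eq. 10) or CTK matrices for CL (eq. 11)."""
    return alpha**2 * (k_xx - k_xX @ torch.linalg.solve(K_XX, k_xX))
```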
Proposition 3.1 (Uncertainty amplifying effect for LL). Let $W_\gamma : \mathbb{R}^P \to \mathbb{R}^P$ be a weight decay regularization on scale-invariant parameters (e.g., parameters preceding BN layers) that multiplies them by $\gamma < 1$, and assume all non-scale-invariant parameters are fixed. Then, the predictive uncertainty of LL is amplified by $1/\gamma^2 > 1$, while the predictive uncertainty of CL is preserved:
$$\mathrm{Var}_{\psi \sim p_{LA}(\psi|S)}\big(f^{lin}_{W_\gamma(\theta^*)}(x, \psi)\big) = \mathrm{Var}_{\psi \sim p_{LA}(\psi|S)}\big(f^{lin}_{\theta^*}(x, \psi)\big)/\gamma^2$$
$$\mathrm{Var}_{\psi \sim p_{CL}(\psi|S)}\big(f^{lin}_{W_\gamma(\theta^*)}(x, \psi)\big) = \mathrm{Var}_{\psi \sim p_{CL}(\psi|S)}\big(f^{lin}_{\theta^*}(x, \psi)\big)$$
where $\mathrm{Var}(\cdot)$ denotes the variance of a random variable.
Recently, [47, 48] observed pitfalls similar to Proposition 3.1. However, their solution requires a more complicated hyper-parameter search: an independent prior selection for each normalized parameter group. On the other hand, CL does not increase the number of hyper-parameters to be optimized compared to LL. We believe this difference makes CL more attractive to practitioners.
Experiments
Here we describe experiments that demonstrate (i) the effectiveness of Connectivity Sharpness (CS) as a generalization measurement metric and (ii) the usefulness of Connectivity Laplace (CL) as an efficient general-purpose Bayesian NN: CL resolves the contradiction of the FM hypothesis and shows calibration performance that is stable w.r.t. the selection of the prior scale.
Connectivity Sharpness as a generalization measurement metric
To verify that CS actually has a better correlation with generalization performance than existing sharpness measures, we evaluate three correlation metrics on the CIFAR-10 dataset: (a) Kendall's rank-correlation coefficient ($\tau$) [51]; (b) the granulated Kendall's coefficient and its average ($\Psi$) [7]; and (c) the conditional independence test ($\mathcal{K}$) [7]. For all correlation metrics, a larger value means a stronger correlation between sharpness and generalization. We compare CS to the following baseline sharpness measures: trace of Hessian (tr(H); Keskar et al. [19]), trace of empirical Fisher (tr(F); Jastrzebski et al. [52]), trace of the empirical NTK at $\theta^*$ (tr($\Theta^{\theta^*}$)), the Fisher-Rao metric (FR; Liang et al. [18]), Adaptive Sharpness (AS; Kwon et al. [15]), and four PAC-Bayes-bound-based measures: Sharpness-Orig. (SO), Pacbayes-Orig. (PO), Sharpness-Mag. (SM), and Pacbayes-Mag. (PM), which are eq. (52), (49), (62), (61) in Jiang et al. [7]. For computing the granulated Kendall correlation, we use 5 hyper-parameters (network depth, network width, learning rate, weight decay, and mini-batch size) with 3 options for each (thus we train models with 3^5 = 243 different training configurations). We vary the depth and width of the NN based on VGG-13 [53]. We refer to Appendix H for experimental details.
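As a minimal sketch (hypothetical variable names), metric (a) reduces to a single call to SciPy's rank-correlation routine over the grid of trained models; the granulated variant (b) applies the same statistic within groups of models that differ in only one hyper-parameter.

```python
from scipy.stats import kendalltau

def rank_correlation(sharpness_per_model, gen_gap_per_model):
    """Kendall's tau between a sharpness measure and the generalization gap."""
    tau, p_value = kendalltau(sharpness_per_model, gen_gap_per_model)
    return tau  # larger tau = stronger sharpness-generalization correlation
```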
In Table 2, CS shows the best results for $\tau$, $\Psi$, and $\mathcal{K}$ compared to all other sharpness measures. Also, the granulated Kendall coefficient of CS is higher than that of the other sharpness measures for 3 out of 5 hyper-parameters and competitive for the remaining ones. The main difference between CS and the other sharpness measures lies in the correlation with network width and weight decay: for network width, we found that all sharpness measures except CS, tr(F), AS, and FR fail to capture a strong correlation. While SO/PO can capture the correlation with weight decay, we believe this is due to the weight-norm term of SO/PO. However, this term interferes with capturing the sharpness-generalization correlation related to the number of parameters (i.e., width/depth), whereas CS/AS does not suffer from such a problem. It is also notable that FR fails to capture this correlation despite its invariance property.
Connectivity Laplace as an efficient general-purpose Bayesian NN
To assess the effectiveness of CL as a general-purpose Bayesian NN, we consider uncertainty calibration on the UCI datasets and CIFAR-10/100. UCI regression datasets: We implement full-curvature versions of LL and CL and evaluate them on the 9 UCI regression datasets [54] and their GAP variants [49] to compare calibration performance on in-between uncertainty. We train an MLP with a single hidden layer. We fix $\sigma = 1$ and choose $\alpha$ from {0.01, 0.1, 1, 10, 100} using the log-likelihood of the validation dataset. We use 8 random seeds to compute the average and standard error of the test negative log-likelihoods. Table 3 shows test NLL for LL/CL and two baselines (Deep Ensemble [55] and Monte-Carlo Dropout (MCDO; Gal and Ghahramani [56])) on the original and GAP splits. Eight ensemble members are used in Deep Ensemble, and 32 MC samples are used in LL, CL, and MCDO. Table 3 shows that CL performs better than LL for 6 out of 9 datasets. Although LL shows better calibration results for 3 datasets in both settings, the performance gaps between LL and CL there are not as large as on the other 6 datasets, where CL performs better.
Image Classification: We evaluate the uncertainty calibration performance of CL on CIFAR-10/100. As baseline methods, we consider a deterministic network, Monte-Carlo Dropout (MCDO; [56]), Monte-Carlo Batch Normalization (MCBN; [57]), Deep Ensemble [55], Batch Ensemble [58], and LL [25, 9]. We use the Randomize-Then-Optimize (RTO) implementation of LL/CL in Appendix E. We measure the Expected Calibration Error (ECE; Guo et al. [59]), negative log-likelihood (NLL), and Brier score (Brier.) of the ensemble predictions. We also measure the area under the receiver operating curve (AUC) for OOD detection, where we set the SVHN [60] dataset as the OOD dataset. For more details on the experimental setting, please refer to Appendix I. Table 4 shows uncertainty calibration results on CIFAR-100. We refer to Appendix I for results in other settings, including CIFAR-10 and VGGNet [53]. Our CL shows better results than the baselines for all uncertainty calibration metrics (NLL, ECE, Brier., and AUC) except Deep Ensemble. This means the scale-invariance of the CTK improves Bayesian inference, consistent with the results in the toy examples. Although Deep Ensemble presents the best results in 3 out of 4 metrics, it requires full training from initialization for each ensemble member, while LL/CL require only post-hoc training upon the pre-trained NN for each member. It is particularly noteworthy that CL presents results competitive with Deep Ensemble at much lower computational cost.
Robustness to the selection of prior scale: Figure 1 shows the uncertainty calibration results (NLL, ECE, Brier) over various $\alpha$ values for LL, CL, and the deterministic (Det.) baseline. As mentioned in previous works [27, 28], the uncertainty calibration of LL is extremely sensitive to the selection of $\alpha$. In particular, LL shows severe under-fitting in the large-$\alpha$ (i.e., small damping) regime. On the other hand, CL shows stable performance across a wide range of $\alpha$.
Conclusion
This study introduced novel PAC-Bayes prior and posterior distributions that extend the robustness of generalization bounds w.r.t. parameter transformations by decomposing parameters into scale and connectivity. The resulting generalization bound is guaranteed to be invariant to any function-preserving scale transformation. This resolves the contradiction of the FM hypothesis caused by general scale transformations, which existing generalization error bounds could not handle, thereby bringing the theory much closer to practice. As a byproduct of this theoretical development, our posterior distribution for PAC-Bayes analysis can also be interpreted as an improved Laplace Approximation without pathological failures under weight decay regularization. We therefore expect this work to help reduce the theory-practice gap in understanding the generalization of NNs and to motivate follow-up studies that interpret this effect more clearly.
A Proofs
A.1 Proof of Proposition 2.2
Proof. Since the prior $P_{\theta^*}(c)$ is independent of the parameter scale, $P_{\theta^*}(c) \overset{d}{=} P_{T(\theta^*)}(c)$ is trivial. For the Jacobian w.r.t. parameters, we have
$$J_\theta(x, T(\psi)) = \frac{\partial}{\partial T(\psi)} f(x, T(\psi)) = \frac{\partial}{\partial T(\psi)} f(x, \psi) = J_\theta(x, \psi)\, T^{-1}.$$
Then, the Jacobian of the NN w.r.t. connectivity at $T(\psi)$ satisfies
$$J_\theta(x, T(\psi))\,\mathrm{diag}(T(\psi)) = J_\theta(x, \psi)\,T^{-1}\,T\,\mathrm{diag}(\psi) \tag{12}$$
$$= J_\theta(x, \psi)\,\mathrm{diag}(\psi) \tag{13}$$
where the first equality holds from the above one and the fact that $T$ is a diagonal linear transformation. Therefore, the covariance of the posterior is invariant to $T$:
$$\left(\frac{I_P}{\alpha^2} + \frac{\mathrm{diag}(T(\theta^*))\,J_\theta(\mathcal{X}, T(\theta^*))^\top J_\theta(\mathcal{X}, T(\theta^*))\,\mathrm{diag}(T(\theta^*))}{\sigma^2}\right)^{-1} = \left(\frac{I_P}{\alpha^2} + \frac{\mathrm{diag}(\theta^*)\,J_\theta(\mathcal{X}, \theta^*)^\top J_\theta(\mathcal{X}, \theta^*)\,\mathrm{diag}(\theta^*)}{\sigma^2}\right)^{-1} = \left(\frac{I_P}{\alpha^2} + \frac{\mathrm{diag}(\theta^*)\,J_\theta^\top J_\theta\,\mathrm{diag}(\theta^*)}{\sigma^2}\right)^{-1}$$
Moreover, the mean of the posterior is also invariant to $T$:
$$\Sigma_Q\,\mathrm{diag}(T(\theta^*))\,J_\theta(\mathcal{X}, T(\theta^*))^\top\,\frac{\mathcal{Y} - f(\mathcal{X}, T(\theta^*))}{\sigma^2} = \Sigma_Q\,\mathrm{diag}(T(\theta^*))\,J_\theta(\mathcal{X}, T(\theta^*))^\top\,\frac{\mathcal{Y} - f(\mathcal{X}, \theta^*)}{\sigma^2} = \Sigma_Q\,\mathrm{diag}(\theta^*)\,J_\theta(\mathcal{X}, \theta^*)^\top\,\frac{\mathcal{Y} - f(\mathcal{X}, \theta^*)}{\sigma^2} = \Sigma_Q\,\mathrm{diag}(\theta^*)\,J_\theta^\top\,\frac{\mathcal{Y} - f(\mathcal{X}, \theta^*)}{\sigma^2}$$
Therefore, equation 6 and equation 7 are invariant to function-preserving scale transformations. The remaining part of the theorem is related to the definition of the function-preserving scale transformation $T$. For the generalization error, the following holds:
$$\mathrm{err}_{\mathcal{D}}(Q_{T(\theta^*)}) = \mathbb{E}_{(x,y)\sim\mathcal{D},\,\psi\sim Q_{T(\theta^*)}}[\mathrm{err}(f(x,\psi), y)] = \mathbb{E}_{(x,y)\sim\mathcal{D},\,c\sim Q_{T(\theta^*)}}[\mathrm{err}(g^{pert}_{\theta^*}(x,c), y)] = \mathbb{E}_{(x,y)\sim\mathcal{D},\,c\sim Q_{\theta^*}}[\mathrm{err}(g^{pert}_{\theta^*}(x,c), y)] = \mathbb{E}_{(x,y)\sim\mathcal{D},\,\psi\sim Q_{\theta^*}}[\mathrm{err}(f(x,\psi), y)] = \mathrm{err}_{\mathcal{D}}(Q_{\theta^*})$$
The proof extends to the empirical error $\mathrm{err}_{S_Q}$ as well.
A.2 Proof of Theorem 2.3
Proof. (Construction of KL divergence) To construct PAC-Bayes-CTK, we arrange the KL divergence between the posterior and prior as follows:
$$\mathrm{KL}[Q\|P] = \frac{1}{2}\Big(\mathrm{tr}\big(\Sigma_P^{-1}(\Sigma_Q + (\mu_Q - \mu_P)(\mu_Q - \mu_P)^\top)\big) + \log|\Sigma_P| - \log|\Sigma_Q| - P\Big)$$
$$= \frac{1}{2}\mathrm{tr}\big(\Sigma_P^{-1}(\mu_Q - \mu_P)(\mu_Q - \mu_P)^\top\big) + \frac{1}{2}\big(\mathrm{tr}(\Sigma_P^{-1}\Sigma_Q) + \log|\Sigma_P| - \log|\Sigma_Q| - P\big)$$
$$= \frac{1}{2}(\mu_Q - \mu_P)^\top \Sigma_P^{-1}(\mu_Q - \mu_P) + \frac{1}{2}\big(\mathrm{tr}(\Sigma_P^{-1}\Sigma_Q) - \log|\Sigma_P^{-1}\Sigma_Q| - P\big)$$
$$= \underbrace{\frac{\mu_Q^\top \mu_Q}{2\alpha^2}}_{\text{perturbation}} + \underbrace{\frac{1}{2}\big(\mathrm{tr}(\Sigma_P^{-1}\Sigma_Q) - \log|\Sigma_P^{-1}\Sigma_Q| - P\big)}_{\text{sharpness}}$$
where the first equality uses the KL divergence between two Gaussian distributions, the third equality uses trace properties ($\mathrm{tr}(AB) = \mathrm{tr}(BA)$ and $\mathrm{tr}(a) = a$ for scalar $a$), and the last equality uses the definition of the PAC-Bayes prior ($P_{\theta^*}(c) = \mathcal{N}(c|0_P, \alpha^2 I_P)$). For the sharpness term, we first compute $\Sigma_P^{-1}\Sigma_Q$ as
$$\Sigma_P^{-1}\Sigma_Q = \Big(I_P + \frac{\alpha^2}{\sigma^2} J_c^\top J_c\Big)^{-1}.$$
Since $\alpha^2, \sigma^2 > 0$ and $J_c^\top J_c$ is positive semi-definite, the matrix $\Sigma_P^{-1}\Sigma_Q$ has non-zero eigenvalues $\{\beta_i\}_{i=1}^{P}$. Since the trace is the sum of the eigenvalues and the log-determinant is the sum of the log-eigenvalues, we have
$$\mathrm{KL}[Q\|P] = \frac{\mu_Q^\top \mu_Q}{2\alpha^2} + \frac{1}{2}\sum_{i=1}^{P}(\beta_i - \log(\beta_i) - 1) = \frac{\mu_Q^\top \mu_Q}{2\alpha^2} + \frac{1}{2}\sum_{i=1}^{P} h(\beta_i)$$
where $h(x) = x - \log(x) - 1$. Plugging this KL divergence into equation 2 yields equation 8.
(Eigenvalues of $\Sigma_P^{-1}\Sigma_Q$) To show the scale-invariance of PAC-Bayes-CTK, it is sufficient to show that the KL divergence between the posterior and prior is scale-invariant: the term $\frac{\log(2\sqrt{N_Q}/\delta)}{2N_Q}$ is independent of the PAC-Bayes prior/posterior, and we already showed the invariance property of the empirical/generalization error terms in Proposition 2.2. To show the invariance property of the KL divergence, we consider the Connectivity Tangent Kernel (CTK) as defined in Corollary 2.4:
$$C^{\theta^*}_{\mathcal{X}} := J_c J_c^\top = J_\theta\,\mathrm{diag}(\theta^*)^2\,J_\theta^\top \in \mathbb{R}^{NK \times NK}.$$
Since the CTK is a real symmetric matrix, one can consider the eigenvalue decomposition of the CTK as $C^{\theta^*}_{\mathcal{X}} = Q\Lambda Q^\top$, where $Q \in \mathbb{R}^{NK\times NK}$ is an orthogonal matrix and $\Lambda \in \mathbb{R}^{NK\times NK}$ is a diagonal matrix.
Then the following holds for $\Sigma_P^{-1}\Sigma_Q$:
$$\Sigma_P^{-1}\Sigma_Q = \Big(I_P + \frac{\alpha^2}{\sigma^2} J_c^\top J_c\Big)^{-1} = \Big(I_P + \frac{\alpha^2}{\sigma^2} Q\Lambda Q^\top\Big)^{-1} = Q\Big(I_P + \frac{\alpha^2}{\sigma^2}\Lambda\Big)^{-1} Q^\top$$
Therefore, the eigenvalues of $\Sigma_P^{-1}\Sigma_Q$ are $\frac{1}{1+\alpha^2\lambda_i/\sigma^2} = \frac{\sigma^2}{\sigma^2+\alpha^2\lambda_i}$, where $\{\lambda_i\}_{i=1}^{P}$ are the eigenvalues of the CTK (the diagonal elements of $\Lambda$).
(Scale invariance of CTK) The scale-invariance property of CTK is a simple application of equation 13:
$$C^{T(\psi)}_{xy} = J_\theta(x, T(\psi))\,\mathrm{diag}(T(\psi))^2\,J_\theta(y, T(\psi))^\top = J_\theta(x,\psi)\,T^{-1}T\,\mathrm{diag}(\psi)\,\mathrm{diag}(\psi)\,T\,T^{-1}J_\theta(y,\psi)^\top = J_\theta(x,\psi)\,\mathrm{diag}(\psi)\,\mathrm{diag}(\psi)\,J_\theta(y,\psi)^\top = C^{\psi}_{xy}, \quad \forall x, y \in \mathbb{R}^D,\ \forall\psi \in \mathbb{R}^P.$$
Therefore, the CTK is invariant to any function-preserving scale transformation $T$, and so are its eigenvalues. This guarantees the invariance of $\Sigma_P^{-1}\Sigma_Q$ and its eigenvalues. In summary, we showed the scale-invariance of the sharpness term of the KL divergence. All that remains is to show the invariance of the perturbation term, but this was already established in the proof of Proposition 2.2. Therefore, PAC-Bayes-CTK is invariant to any function-preserving scale transformation.
A.3 Proof of Corollary 2.4
Proof. In the proof of Theorem 2.3, we showed that the eigenvalues of $\Sigma_P^{-1}\Sigma_Q$ can be represented as $\frac{\sigma^2}{\sigma^2+\alpha^2\lambda_i}$, where $\{\lambda_i\}_{i=1}^{P}$ are the eigenvalues of the CTK. Now, we identify the eigenvalues of the CTK. To this end, consider the singular value decomposition (SVD) of the Jacobian w.r.t. connectivity $J_c \in \mathbb{R}^{NK\times P}$ as $J_c = U\Sigma V^\top$, where $U \in \mathbb{R}^{NK\times NK}$ and $V \in \mathbb{R}^{P\times P}$ are orthogonal matrices and $\Sigma \in \mathbb{R}^{NK\times P}$ is a rectangular diagonal matrix. Then, the CTK can be represented as
$$C^{\theta^*}_{\mathcal{X}} = J_c J_c^\top = U\Sigma V^\top V\Sigma^\top U^\top = U\Sigma\Sigma^\top U^\top.$$
In summary, the column vectors of $U$ are eigenvectors of the CTK, and the eigenvalues of the CTK are the squares of the singular values of $J_c$, so $\lambda_i \ge 0$ for all $i$. Therefore $\beta_i \le 1$ for all $i = 1,\dots,P$ for the eigenvalues $\{\beta_i\}_{i=1}^{P}$ of $\Sigma_P^{-1}\Sigma_Q$, with equality exactly when $\lambda_i = 0$. All that remains is to show that the sharpness term of PAC-Bayes-CTK is a monotonically increasing function of each eigenvalue of the CTK. To show this, first note that
$$s(x) := \frac{\sigma^2}{\sigma^2 + \alpha^2 x}$$
is a monotonically decreasing function for $x \ge 0$ and $h(x) := x - \log(x) - 1$ is a monotonically decreasing function for $x \in (0, 1]$. Since the sharpness term of the KL divergence is
$$\frac{\sum_{i=1}^{P} h(\beta_i)}{4N_Q} = \frac{\sum_{i=1}^{P} (h \circ s)(\lambda_i)}{4N_Q},$$
it is a monotonically increasing function of each $\lambda_i \ge 0$, since $s(x) \le 1$ for $x \ge 0$. For reference, we plot $y = h(x)$ and $y = (h \circ s)(x)$ in Figure 2.
A.4 Proof of Proposition 2.5
We refer to the (Scale invariance of CTK) part of the proof of Theorem 2.3; this is a direct application of the scale-invariance property of the Jacobian w.r.t. connectivity.
A.5 Proof of Corollary 2.7
Proof. Since CS is the trace of the CTK, it is the sum of the eigenvalues of the CTK. As shown in the proof of Corollary 2.4, the eigenvalues of the CTK are the squares of the singular values of the Jacobian w.r.t. connectivity $c$; hence they are non-negative and all vanish if CS vanishes to zero:
$$\sum_{i=1}^{P}\lambda_i = 0 \;\Rightarrow\; \lambda_i = 0 \;\Rightarrow\; \beta_i = s(\lambda_i) = 1 \;\Rightarrow\; h(\beta_i) = 0, \quad \forall i = 1,\dots,P.$$
This means the sharpness term of the KL divergence vanishes. Furthermore, the singular values of the Jacobian w.r.t. $c$ also vanish in this case, so $\mu_Q$ vanishes as well. Similarly, if CS diverges to infinity, then at least one eigenvalue of the CTK diverges to infinity. In this case, the following holds for such $i$:
$$\lambda_i \to \infty \;\Rightarrow\; \beta_i = s(\lambda_i) \to 0 \;\Rightarrow\; h(\beta_i) \to \infty.$$
Therefore, KL divergence term of PAC-Bayes-CTK also diverges to infinity.
A.6 Proof of Proposition 3.1
Proof. By assumption, we fix all non-scale-invariant parameters. This means we exclude these parameters from the sampling procedures of CL and LL. In terms of the predictive distribution, this can be written as
$$f^{lin}_{\theta^*}(x,\psi)\,|\,p_{LA}(\psi|S) \sim \mathcal{N}\big(f(x,\theta^*),\ \alpha^2\hat\Theta^{\theta^*}_{xx} - \alpha^2\hat\Theta^{\theta^*}_{x\mathcal{X}}\hat\Theta^{\theta^*\,-1}_{\mathcal{X}}\hat\Theta^{\theta^*}_{\mathcal{X}x}\big)$$
$$f^{lin}_{\theta^*}(x,\psi)\,|\,p_{CL}(\psi|S) \sim \mathcal{N}\big(f(x,\theta^*),\ \alpha^2\hat C^{\theta^*}_{xx} - \alpha^2\hat C^{\theta^*}_{x\mathcal{X}}\hat C^{\theta^*\,-1}_{\mathcal{X}}\hat C^{\theta^*}_{\mathcal{X}x}\big)$$
where $\hat\Theta^{\psi}_{xx'} := \sum_{i\in\mathcal{P}} \frac{\partial f(x,\psi)}{\partial\theta_i}\frac{\partial f(x',\psi)}{\partial\theta_i}$ and $\hat C^{\psi}_{xx'} := \sum_{i\in\mathcal{P}} \frac{\partial f(x,\psi)}{\partial\theta_i}\frac{\partial f(x',\psi)}{\partial\theta_i}(\psi_i)^2$ for the scale-invariant parameter set $\mathcal{P}$. Thereby, we mask the gradients of the non-scale-invariant parameters to zero. This can be arranged as follows:
$$\hat\Theta^{\psi}_{xx'} = J_\theta(x,\psi)\,\mathrm{diag}(\mathbb{1}_{\mathcal{P}})\,J_\theta(x',\psi)^\top, \qquad \hat C^{\psi}_{xx'} = J_\theta(x,\psi)\,\mathrm{diag}(\psi)\,\mathrm{diag}(\mathbb{1}_{\mathcal{P}})\,\mathrm{diag}(\psi)\,J_\theta(x',\psi)^\top$$
where $\mathbb{1}_{\mathcal{P}} \in \mathbb{R}^P$
is a masking vector (i.e., one for included components and zero for excluded components). Then, the weight decay regularization for scale-invariant parameters can be represented as
$$W_\gamma(\psi)_i = \begin{cases} \gamma\psi_i, & \text{if } \psi_i \in \mathcal{P}, \\ \psi_i, & \text{if } \psi_i \notin \mathcal{P}. \end{cases}$$
Therefore, we get
$$\hat\Theta^{W_\gamma(\psi)}_{xx'} = J_\theta(x, W_\gamma(\psi))\,\mathrm{diag}(\mathbb{1}_{\mathcal{P}})\,J_\theta(x', W_\gamma(\psi))^\top = J_\theta(x,\psi)\,W_\gamma^{-1}\,\mathrm{diag}(\mathbb{1}_{\mathcal{P}})\,W_\gamma^{-1}\,J_\theta(x',\psi)^\top = \frac{1}{\gamma^2}\hat\Theta^{\psi}_{xx'}$$
for the empirical NTK, and
$$\hat C^{W_\gamma(\psi)}_{xx'} = J_\theta(x, W_\gamma(\psi))\,\mathrm{diag}(W_\gamma(\psi))\,\mathrm{diag}(\mathbb{1}_{\mathcal{P}})\,\mathrm{diag}(W_\gamma(\psi))\,J_\theta(x', W_\gamma(\psi))^\top = J_\theta(x,\psi)\,W_\gamma^{-1}W_\gamma\,\mathrm{diag}(\psi)\,\mathrm{diag}(\mathbb{1}_{\mathcal{P}})\,\mathrm{diag}(\psi)\,W_\gamma W_\gamma^{-1}\,J_\theta(x',\psi)^\top = \hat C^{\psi}_{xx'}$$
for the empirical CTK. Therefore, we get
$$f^{lin}_{W_\gamma(\theta^*)}(x,\psi)\,|\,p_{LA}(\psi|S) \sim \mathcal{N}\big(f(x,\theta^*),\ (\alpha^2/\gamma^2)\hat\Theta^{\theta^*}_{xx} - (\alpha^2/\gamma^2)\hat\Theta^{\theta^*}_{x\mathcal{X}}\hat\Theta^{\theta^*\,-1}_{\mathcal{X}}\hat\Theta^{\theta^*}_{\mathcal{X}x}\big)$$
$$f^{lin}_{W_\gamma(\theta^*)}(x,\psi)\,|\,p_{CL}(\psi|S) \sim \mathcal{N}\big(f(x,\theta^*),\ \alpha^2\hat C^{\theta^*}_{xx} - \alpha^2\hat C^{\theta^*}_{x\mathcal{X}}\hat C^{\theta^*\,-1}_{\mathcal{X}}\hat C^{\theta^*}_{\mathcal{X}x}\big)$$
This gives us
$$\mathrm{Var}_{\psi \sim p_{LA}(\psi|S)}\big(f^{lin}_{W_\gamma(\theta^*)}(x, \psi)\big) = \mathrm{Var}_{\psi \sim p_{LA}(\psi|S)}\big(f^{lin}_{\theta^*}(x, \psi)\big)/\gamma^2$$
$$\mathrm{Var}_{\psi \sim p_{CL}(\psi|S)}\big(f^{lin}_{W_\gamma(\theta^*)}(x, \psi)\big) = \mathrm{Var}_{\psi \sim p_{CL}(\psi|S)}\big(f^{lin}_{\theta^*}(x, \psi)\big)$$
A.7 Derivation of PAC-Bayes-NTK
Theorem A.1 (PAC-Bayes-NTK). Assume a pre-trained parameter $\theta^*$ with data $S_P$, and take the PAC-Bayes prior and posterior as $P_{\theta^*}(\delta) := \mathcal{N}(\delta|0_P, \alpha^2 I_P)$ and
$$Q_{\theta^*}(\delta) := \mathcal{N}(\delta|\mu_Q, \Sigma_Q) \tag{14}$$
$$\mu_Q := \Sigma_Q J_\theta^\top \frac{\mathcal{Y} - f(\mathcal{X}, \theta^*)}{\sigma^2} \tag{15}$$
$$\Sigma_Q := \Big(\frac{I_P}{\alpha^2} + \frac{J_\theta^\top J_\theta}{\sigma^2}\Big)^{-1} \tag{16}$$
By applying P θ * , Q θ * to data-dependent PAC-Bayes bound (equation 2), we get
$$\mathrm{err}_{\mathcal{D}}(Q_{\theta^*}) \le \mathrm{err}_{S_Q}(Q_{\theta^*}) + \sqrt{\underbrace{\underbrace{\frac{\mu_Q^\top \mu_Q}{4\alpha^2 N_Q}}_{\text{(average) perturbation}} + \underbrace{\frac{\sum_{i=1}^{P} h(\beta_i)}{4 N_Q}}_{\text{sharpness}}}_{\text{KL divergence}} + \frac{\log(2\sqrt{N_Q}/\delta)}{2 N_Q}} \tag{18}$$
where $\{\beta_i\}_{i=1}^{P}$ are the eigenvalues of $\big(I_P + \frac{\alpha^2}{\sigma^2} J_\theta^\top J_\theta\big)^{-1}$ and $h(x) := x - \log(x) - 1$.
This upper bound is not scale-invariant in general.
Proof. The main difference between PAC-Bayes-CTK and PAC-Bayes-NTK is the definition of the Jacobian: PAC-Bayes-CTK uses the Jacobian w.r.t. connectivity, while PAC-Bayes-NTK uses the Jacobian w.r.t. parameters. Therefore, the (Construction of KL divergence) part of the proof of Theorem 2.3 is preserved, except that
$$\Sigma_P^{-1}\Sigma_Q = \Big(I_P + \frac{\alpha^2}{\sigma^2} J_\theta^\top J_\theta\Big)^{-1}$$
and $\beta_i$ are the eigenvalues of $\Sigma_P^{-1}\Sigma_Q$. Note that these eigenvalues satisfy
$$\beta_i = \frac{\sigma^2}{\sigma^2 + \alpha^2\lambda_i}$$
where $\lambda_i$ are the eigenvalues of the NTK.
Remark A.2 (Function-preserving scale transformations and the NTK). In contrast to the CTK, the scale-invariance property does not apply to the NTK due to the Jacobian w.r.t. parameters:
$$J_\theta(x, T(\psi)) = \frac{\partial}{\partial T(\psi)} f(x, T(\psi)) = \frac{\partial}{\partial T(\psi)} f(x, \psi) = J_\theta(x, \psi)\, T^{-1}$$
If we assume all parameters are scale-invariant (or, equivalently, mask the Jacobian for all non-scale-invariant parameters as in the proof of Proposition 3.1), the scale of the NTK is inversely proportional to the squared parameter scale.
A.8 Deterministic limiting kernel of CTK
Theorem A.3 (Deterministic limiting kernel of CTK). Assume an $L$-layer network with a Lipschitz activation function and NTK initialization. Then the empirical CTK converges in probability to a deterministic limiting kernel $C_{xy}$ as the layer widths $n_1, \dots, n_L \to \infty$ sequentially. Furthermore, $C_{xy} = \Theta_{xy}$ holds, where $\Theta_{xy}$ is the deterministic limiting NTK.
Proof. The proof is a modification of the proof of convergence of the NTK in Jacot et al. [17], considering NTK initialization (i.e., standard Gaussian for all parameters). We proceed by induction. For a single-layer network, the CTK is
$$(C_{xx'})_{kk'} = \frac{1}{n_0}\sum_{i=1}^{n_0}\sum_{j=1}^{n_1} x_i x'_i\,\delta_{jk}\delta_{jk'}\,W_{ik}W_{ik'} + \beta^2\sum_{j=1}^{n_1}\delta_{jk}\delta_{jk'} \;\to\; (\Theta_{xx'})_{kk'}$$
since the weights are sampled from the standard Gaussian distribution, whose variance is 1, and the product of two (independent) random variables converges in probability to the product of their limits. If we assume the CTK of the $l$-th layer converges in probability to the NTK of the $l$-th layer, then convergence at the $(l+1)$-th layer also holds, since the empirical NTK of the $(l+1)$-th layer, which converges to the deterministic limiting NTK of the $(l+1)$-th layer, is multiplied by products of two random weights, which converge to 1. Therefore, the empirical CTK converges in probability to the deterministic limiting CTK, which is equivalent to the deterministic limiting NTK.
B Details of Squared Loss for Classification Tasks
For the classification tasks in Sec. 4.2, we use the squared loss instead of the cross-entropy loss, since our theoretical results are built on the squared loss. Here, we describe how we use the squared loss to mimic the cross-entropy loss. Several works [23, 24] have utilized the squared loss for classification tasks instead of the cross-entropy loss. Specifically, Lee et al. [23] used
$$L(S, \theta) = \frac{1}{2NK}\sum_{(x_i, y_i)\in S} \|f(x_i, \theta) - y_i\|^2$$
where $K$ is the number of classes, and Hui and Belkin [24] used
$$\ell((x, c), \theta) = \frac{1}{2K}\Big(k\,(f_c(x, \theta) - M)^2 + \sum_{i=1, i\neq c}^{K} f_i(x, \theta)^2\Big)$$
for the single-sample loss, where $\ell((x, c), \theta)$ is the sample loss given input $x$, target $c$, and parameter $\theta$; $f_i(x, \theta) \in \mathbb{R}$ is the $i$-th component of $f(x, \theta) \in \mathbb{R}^K$; and $k$ and $M$ are dataset-specific hyper-parameters. These works used the mean to reduce the vector-valued loss to a scalar loss. However, this can be problematic when the number of classes is large: as the number of classes increases, the denominator of the mean (the number of classes) grows while the target value remains 1 (one-hot label). As a result, the scale of the gradient for the target class becomes smaller. To avoid such an unfavorable effect, we use the sum instead of the mean to reduce the vector-valued loss to a scalar, i.e.,
$$\ell((x, c), \theta) = \frac{1}{2}\Big((f_c(x, \theta) - 1)^2 + \sum_{i=1, i\neq c}^{K} f_i(x, \theta)^2\Big)$$
This analysis is consistent with the hyper-parameter selection in Hui and Belkin [24]: they used larger $k$ and $M$ as the number of classes increases (e.g., $k = 1, M = 1$ for CIFAR-10 [36], but $k = 15, M = 30$ for ImageNet [61]), which amounts to manually compensating the scale of the gradient for the target class label.
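A minimal PyTorch sketch of the sum-reduced squared loss above (hypothetical helper; `outputs` are the raw network outputs $f(x, \theta)$ and `targets` are integer class labels):

```python
import torch
import torch.nn.functional as F

def sum_squared_loss(outputs, targets):
    onehot = F.one_hot(targets, num_classes=outputs.shape[-1]).to(outputs.dtype)
    # Sum over classes (not mean), so the target-class gradient does not shrink
    # as the number of classes grows; average over the batch only.
    return 0.5 * ((outputs - onehot) ** 2).sum(dim=-1).mean()
```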
C Derivation of PAC-Bayes posterior
Derivation of Q θ * (c)
For Bayesian linear regression, we compute the posterior of $\beta \in \mathbb{R}^P$ in the model
$$y_i = x_i^\top \beta + \epsilon_i, \quad \text{for } i = 1,\dots,M$$
where $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$ i.i.d., and the prior of $\beta$ is given as $\beta \sim \mathcal{N}(0_P, \alpha^2 I_P)$. By concatenating, we get
y = Xβ + ε
where $y \in \mathbb{R}^M$ and $X \in \mathbb{R}^{M\times P}$ are the concatenations of $y_i$ and $x_i$, respectively, and $\varepsilon \sim \mathcal{N}(0_M, \sigma^2 I_M)$. It is well known [62, 63] that the posterior of $\beta$ for this problem is
$$\beta \sim \mathcal{N}(\beta|\mu, \Sigma), \qquad \mu := \Sigma \frac{X^\top y}{\sigma^2}, \qquad \Sigma := \Big(\frac{I_P}{\alpha^2} + \frac{X^\top X}{\sigma^2}\Big)^{-1}.$$
Similarly, we define our Bayesian linear regression problem as
$$y_i = f(x_i, \theta^*) + J_\theta(x_i, \theta^*)\,\mathrm{diag}(\theta^*)\,c + \epsilon_i, \quad \text{for } i = 1,\dots,NK$$
where $M = NK$ and the regression coefficient is $\beta = c$ in this case. Thus, we treat $y_i - f(x_i, \theta^*)$ as the target and $J_\theta(x_i, \theta^*)\,\mathrm{diag}(\theta^*)$ as the input of the linear regression problem. By concatenating, we get
$$\mathcal{Y} = f(\mathcal{X}, \theta^*) + J_c c + \varepsilon \;\Rightarrow\; (\mathcal{Y} - f(\mathcal{X}, \theta^*)) = J_c c + \varepsilon.$$
By plugging this into the posterior of the Bayesian linear regression problem, we get
$$Q_{\theta^*}(c) := \mathcal{N}(c|\mu_Q, \Sigma_Q)$$
$$\mu_Q := \Sigma_Q J_c^\top \frac{\mathcal{Y} - f(\mathcal{X}, \theta^*)}{\sigma^2} = \Sigma_Q\,\mathrm{diag}(\theta^*)\,J_\theta^\top \frac{\mathcal{Y} - f(\mathcal{X}, \theta^*)}{\sigma^2}$$
$$\Sigma_Q := \Big(\frac{I_P}{\alpha^2} + \frac{J_c^\top J_c}{\sigma^2}\Big)^{-1} = \Big(\frac{I_P}{\alpha^2} + \frac{\mathrm{diag}(\theta^*)\,J_\theta^\top J_\theta\,\mathrm{diag}(\theta^*)}{\sigma^2}\Big)^{-1}$$
E Implementation of Connectivity Laplace
To estimate the empirical/generalization bound in Sec. 2.4 and calibrate uncertainty in Sec. 4.2, we need to sample $c$ from the posterior $Q_{\theta^*}(c)$. For this, we sample perturbations $\delta$ in connectivity space.

Three metrics are compared with the following baselines: trace of Hessian (tr(H); [19]), trace of the Fisher information matrix (tr(F); [52]), trace of the empirical NTK at $\theta^*$ (tr($\Theta^{\theta^*}$)), and four PAC-Bayes-bound-based measures, sharpness-orig (SO), pacbayes-orig (PO), 1/α sharpness mag (SM), and 1/σ pacbayes mag (PM), which are eq. (52), (49), (62), (61) in Jiang et al. [7].
For the granulated Kendall's coefficient, we use 5 hyper-parameters (network depth, network width, learning rate, weight decay, and mini-batch size), with 3 options for each, as in Table 5. We use VGG-13 [53] as the base model and adjust the depth and width of each conv block. We add BN layers after the convolution layer in each block. Specifically, the number of convolution layers in each conv block is the depth, and the number of channels of the convolution layers in the first conv block is the width. For the subsequent conv blocks, we follow the original VGG width multipliers (×2, ×4, ×8). An example with depth 1 and width 128 is depicted in Table 6.
We use the SGD optimizer with momentum 0.9. We train each model for 200 epochs and use a cosine learning rate scheduler [38] with 30% of the initial epochs as warm-up epochs. The standard data augmentations for CIFAR-10 (padding, random crop, random horizontal flip, and normalization) are used for the training data. For the analysis, we only use models with above 99% training accuracy, following Jiang et al. [7]. As a result, we use 200 out of 243 trained models for our correlation analysis. For every experiment, we use 8 NVIDIA RTX 3090 GPUs.
I Details and additional results on BNN experiments
I.1 Experimental Setting
Uncertainty calibration on image classification tasks: We pre-train models for 200 epochs on the CIFAR-10/100 datasets [36] with ResNet-18 [37], as mentioned in Sec. 2.4. We choose the ensemble size M as 8, except for Deep Ensemble [55] and Batch Ensemble [58], where we use 4 ensemble members due to computational cost.
For evaluation, we define the single-member prediction as the one-hot representation of the network output with label smoothing. We select the label smoothing coefficient as 0.01 for CIFAR-10 and 0.1 for CIFAR-100. We define the ensemble prediction as the average of the single-member predictions. For OOD detection, we use the variance of predictions in output space, which is competitive with recent OOD detection methods [71, 72]. We use $\sigma = 0.01$ and select the best $\alpha$ with cross-validation. For every experiment, we use 8 NVIDIA RTX 3090 GPUs.
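A minimal sketch of our reading of this protocol (hypothetical helper names; the exact reduction of the per-class variances to a single OOD score is an assumption):

```python
import torch
import torch.nn.functional as F

def member_prediction(logits, eps):
    """Label-smoothed one-hot of the predicted class for one ensemble member."""
    k = logits.shape[-1]
    onehot = F.one_hot(logits.argmax(dim=-1), num_classes=k).to(logits.dtype)
    return onehot * (1.0 - eps) + eps / k

def ensemble_outputs(member_logits, eps=0.1):
    preds = torch.stack([member_prediction(l, eps) for l in member_logits])
    ensemble_pred = preds.mean(dim=0)         # averaged member predictions
    ood_score = preds.var(dim=0).sum(dim=-1)  # predictive variance in output space
    return ensemble_pred, ood_score
```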
Figure 1: Sensitivity to α. Expected Calibration Error (ECE), negative log-likelihood (NLL), and Brier score results on corrupted CIFAR-100 for ResNet-18, showing the mean (line) and standard deviation (shaded area) across four different seeds.
Figure 2: Functions used in the proofs: (a) y = h(x); (b) y = (h ∘ s)(x).
Table 1: Results for experiments on PAC-Bayes-CTK estimation.

| Parameter scale | CIFAR-10 (0.5) | CIFAR-10 (1.0) | CIFAR-10 (2.0) | CIFAR-10 (4.0) | CIFAR-100 (0.5) | CIFAR-100 (1.0) | CIFAR-100 (2.0) | CIFAR-100 (4.0) |
|---|---|---|---|---|---|---|---|---|
| tr($C^{\theta^*}_{\mathcal{X}}$) | 14515.0039 | 14517.7793 | 14517.3506 | 14518.4746 | 12872.6895 | 12874.4395 | 12873.9512 | 12875.541 |
| Bias | 13.9791 | 13.4685 | 13.3559 | 13.3122 | 25.3686 | 24.8064 | 24.9102 | 24.7557 |
| Sharpness | 24.6874 | 24.6938 | 24.6926 | 24.6941 | 26.0857 | 26.0894 | 26.0874 | 26.0916 |
| KL | 19.3332 | 19.0812 | 19.0243 | 19.0032 | 25.7271 | 25.4479 | 25.4988 | 25.4236 |
| Test err. | 0.0468 ± 0.0014 | 0.0463 ± 0.0013 | 0.0462 ± 0.0012 | 0.0460 ± 0.0013 | 0.2257 ± 0.0020 | 0.2252 ± 0.0017 | 0.2256 ± 0.0015 | 0.2253 ± 0.0017 |
| PAC-Bayes-CTK | 0.0918 ± 0.0013 | 0.0911 ± 0.0011 | 0.0909 ± 0.0011 | 0.0907 ± 0.0009 | 0.2874 ± 0.0034 | 0.2862 ± 0.0031 | 0.2860 ± 0.0032 | 0.2862 ± 0.0032 |
Table 2: Correlation analysis of sharpness measures with the generalization gap. We refer to Sec. 4.1 for the details of the sharpness measures (rows) and the correlation metrics for the sharpness-generalization relationship (columns).
Table 3: Test negative log-likelihood on two UCI variants [54, 49].

| Dataset | Deep Ensemble (orig. [54]) | MCDO (orig.) | LL (orig.) | CL (orig.) | Deep Ensemble (GAP [49]) | MCDO (GAP) | LL (GAP) | CL (GAP) |
|---|---|---|---|---|---|---|---|---|
| boston_housing | 2.90 ± 0.03 | 2.63 ± 0.01 | 2.85 ± 0.01 | 2.88 ± 0.02 | 2.71 ± 0.01 | 2.68 ± 0.01 | 2.74 ± 0.01 | 2.75 ± 0.01 |
| concrete_strength | 3.06 ± 0.01 | 3.20 ± 0.00 | 3.22 ± 0.01 | 3.11 ± 0.02 | 4.03 ± 0.07 | 3.42 ± 0.00 | 3.47 ± 0.01 | 4.03 ± 0.02 |
| energy_efficiency | 0.74 ± 0.01 | 1.92 ± 0.01 | 2.12 ± 0.01 | 0.83 ± 0.01 | 0.77 ± 0.01 | 1.78 ± 0.01 | 2.02 ± 0.01 | 0.90 ± 0.02 |
| kin8nm | -1.07 ± 0.00 | -0.80 ± 0.01 | -0.90 ± 0.00 | -1.07 ± 0.00 | -0.94 ± 0.00 | -0.71 ± 0.00 | -0.87 ± 0.00 | -0.93 ± 0.00 |
| naval_propulsion | -4.83 ± 0.00 | -3.85 ± 0.00 | -4.57 ± 0.00 | -4.76 ± 0.00 | -2.22 ± 0.33 | -3.36 ± 0.01 | -3.66 ± 0.11 | -3.80 ± 0.07 |
| power_plant | 2.81 ± 0.00 | 2.91 ± 0.00 | 2.91 ± 0.00 | 2.81 ± 0.00 | 2.91 ± 0.00 | 2.97 ± 0.00 | 2.98 ± 0.00 | 2.91 ± 0.00 |
| protein_structure | 2.89 ± 0.00 | 2.96 ± 0.00 | 2.91 ± 0.00 | 2.89 ± 0.00 | 3.11 ± 0.00 | 3.07 ± 0.00 | 3.07 ± 0.00 | 3.13 ± 0.00 |
| wine | 1.21 ± 0.00 | 0.96 ± 0.01 | 1.24 ± 0.01 | 1.27 ± 0.01 | 1.48 ± 0.01 | 1.03 ± 0.00 | 1.45 ± 0.01 | 1.43 ± 0.00 |
| yacht_hydrodynamics | 1.26 ± 0.04 | 2.17 ± 0.06 | 1.20 ± 0.04 | 1.25 ± 0.04 | 1.71 ± 0.03 | 3.06 ± 0.02 | 1.78 ± 0.02 | 1.74 ± 0.01 |
Table 4: Uncertainty calibration results on CIFAR-100 [36] for ResNet-18 [37].

| CIFAR-100 | NLL (↓) | ECE (↓) | Brier. (↓) | AUC (↑) |
|---|---|---|---|---|
| Deterministic | 1.5370 ± 0.0117 | 0.1115 ± 0.0017 | 0.3889 ± 0.0031 | - |
| MCDO | 1.4264 ± 0.0110 | 0.0651 ± 0.0008 | 0.3925 ± 0.0020 | 0.6907 ± 0.0121 |
| MCBN | 1.4689 ± 0.0106 | 0.0998 ± 0.0016 | 0.3750 ± 0.0028 | 0.7982 ± 0.0210 |
| Batch Ensemble | 1.4029 ± 0.0031 | 0.0842 ± 0.0005 | 0.3582 ± 0.0010 | 0.7887 ± 0.0115 |
| Deep Ensemble | 1.0110 | 0.0507 | 0.2740 | 0.7802 |
| Linearized Laplace | 1.1673 ± 0.0093 | 0.0632 ± 0.0010 | 0.3597 ± 0.0020 | 0.8066 ± 0.0120 |
| Connectivity Laplace (Ours) | 1.1307 ± 0.0042 | 0.0524 ± 0.0009 | 0.3319 ± 0.0005 | 0.8423 ± 0.0204 |
Table 5: Configuration of hyper-parameters.

| Hyper-parameter | Options |
|---|---|
| network depth | 1, 2, 3 |
| network width | 32, 64, 128 |
| learning rate | 0.1, 0.032, 0.001 |
| weight decay | 0.0, 1e-4, 5e-4 |
| mini-batch size | 256, 1024, 4096 |
Table 6: Example network configuration with depth 1 and width 128, in [53]-style.

| ConvNet Configuration |
|---|
| input (224 × 224 RGB image) |
| Conv3-128, BN, ReLU |
| MaxPool |
| Conv3-256, BN, ReLU |
| MaxPool |
| Conv3-512, BN, ReLU |
| MaxPool |
| Conv3-1024, BN, ReLU |
| MaxPool |
| Conv3-1024, BN, ReLU |
| MaxPool |
| FC-4096, ReLU |
| FC-4096, ReLU |
| FC-1000 |

J Additional results on bound estimation
Table 7: Results for experiments on PAC-Bayes-NTK estimation.

| Parameter scale | CIFAR-10 (0.5) | CIFAR-10 (1.0) | CIFAR-10 (2.0) | CIFAR-10 (4.0) | CIFAR-100 (0.5) | CIFAR-100 (1.0) | CIFAR-100 (2.0) | CIFAR-100 (4.0) |
|---|---|---|---|---|---|---|---|---|
| tr($\Theta^{\theta^*}_{\mathcal{X}}$) | 18746194.0 | 6206303.5 | 3335419.75 | 2623873.25 | 12688970.0 | 3916139.25 | 2819272.5 | 2662497.0 |
| Bias | 483.86 | 427.0042 | 299.0085 | 197.3149 | 476.9061 | 478.1776 | 440.284 | 329.8767 |
| Sharpness | 579.6815 | 472.0 | 402.8186 | 369.3761 | 547.2874 | 434.7583 | 398.5075 | 387.3265 |
| KL divergence | 531.7708 | 449.5021 | 350.9135 | 283.3455 | 512.0967 | 456.4679 | 419.3957 | 358.6016 |
| Test err. | 0.5617 ± 0.0670 | 0.4566 ± 0.0604 | 0.2824 ± 0.0447 | 0.1530 ± 0.0199 | 0.6210 ± 0.0096 | 0.6003 ± 0.0094 | 0.5499 ± 0.0100 | 0.4666 ± 0.0093 |
| PAC-Bayes-NTK | 0.7985 ± 0.0694 | 0.6730 ± 0.0626 | 0.4718 ± 0.0465 | 0.3186 ± 0.0202 | 0.8530 ± 0.0140 | 0.8162 ± 0.0136 | 0.7602 ± 0.0112 | 0.6617 ± 0.0114 |
K Additional results on image classification
Table 8: Uncertainty calibration results on CIFAR-10 [36] for VGG-13 [53].

| CIFAR-10 | NLL (↓) | ECE (↓) | Brier. (↓) | AUC (↑) |
|---|---|---|---|---|
| Deterministic | 0.4086 ± 0.0018 | 0.0490 ± 0.0003 | 0.1147 ± 0.0005 | - |
| MCDO | 0.3889 ± 0.0049 | 0.0465 ± 0.0009 | 0.1106 ± 0.0015 | 0.7765 ± 0.0221 |
| MCBN | 0.3852 ± 0.0012 | 0.0462 ± 0.0002 | 0.1108 ± 0.0003 | 0.9051 ± 0.0065 |
| Batch Ensemble | 0.3544 ± 0.0036 | 0.0399 ± 0.0009 | 0.1064 ± 0.0012 | 0.9067 ± 0.0030 |
| Deep Ensemble | 0.2243 | 0.0121 | 0.0776 | 0.7706 |
| Linearized Laplace | 0.3366 ± 0.0013 | 0.0398 ± 0.0004 | 0.1035 ± 0.0003 | 0.8883 ± 0.0017 |
| Connectivity Laplace (Ours) | 0.2674 ± 0.0028 | 0.0234 ± 0.0011 | 0.0946 ± 0.0010 | 0.9002 ± 0.0033 |
9Uncertainty calibration results on CIFAR-100[36] for VGG-13[53]. Connectivity Laplace (Ours) 1.4073 ± 0.0039 0.0703 ± 0.0028 0.3827 ± 0.0012 0.7254 ± 0.0136CIFAR-100
NLL (↓)
ECE (↓)
Brier. (↓)
AUC (↑)
Deterministic
1.8286 ± 0.0066 0.1544 ± 0.0010 0.4661 ± 0.0018
-
MCDO
1.7439 ± 0.0089 0.1363 ± 0.0008 0.4456 ± 0.0017 0.6424 ± 0.0099
MCBN
1.7491 ± 0.0075 0.1399 ± 0.0010 0.4488 ± 0.0015 0.7039 ± 0.0197
Batch Ensemble
1.6142 ± 0.0101 0.1077 ± 0.0020 0.4143 ± 0.0027 0.7232 ± 0.0021
Deep Ensemble
1.2006
0.0456
0.3228
0.6929
Linearized Laplace
1.5806 ± 0.0054 0.1036 ± 0.0004 0.4127 ± 0.0010 0.6893 ± 0.0221
For a simple two-layer NN $f(x) := W_2\sigma(W_1 x)$ with weight matrices $W_1, W_2$, the first case of equation 19 corresponds to the $k$-th row of $W_1$ and the second case of equation 19 corresponds to the $k$-th column of $W_2$.
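A minimal sketch of the activation-wise rescaling $R_{\gamma,l,k}$ of equation 19 applied to the two-layer network of this footnote (hypothetical toy tensors): scaling the $k$-th row of $W_1$ by $\gamma$ and the $k$-th column of $W_2$ by $1/\gamma$ leaves the function unchanged, since ReLU is positively homogeneous.

```python
import torch

def rescale(W1, W2, k, gamma):
    """R_{gamma,l,k}: scale input edges by gamma, output edges by 1/gamma."""
    W1, W2 = W1.clone(), W2.clone()
    W1[k, :] = W1[k, :] * gamma   # input edges of the k-th hidden activation
    W2[:, k] = W2[:, k] / gamma   # output edges of the k-th hidden activation
    return W1, W2

W1, W2 = torch.randn(8, 4), torch.randn(2, 8)
x = torch.randn(4)
f = lambda A, B: B @ torch.relu(A @ x)
V1, V2 = rescale(W1, W2, k=3, gamma=5.0)
print(torch.allclose(f(W1, W2), f(V1, V2), atol=1e-5))  # True
```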
where $\varepsilon \sim \mathcal{N}(\varepsilon|0_{NK}, \sigma^2 I_{NK})$ and $c_0 \sim \mathcal{N}(c_0|0_P, \alpha^2 I_P)$. Since we sample the noise of the data/perturbation and optimize the perturbation, this can be interpreted as a Randomize-Then-Optimize (RTO) implementation of Laplace Approximation and Connectivity Laplace [41, 42].
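A minimal sketch of the RTO sampler this fragment describes, specialized to the linearized model, where the randomized optimum is available in closed form (toy scale; `Jc` and `resid` as in Appendix C). In practice one would solve the least-squares problem iteratively instead of forming the normal matrix:

```python
import torch

def rto_sample(Jc, resid, alpha, sigma):
    """One posterior sample of c: solve one randomized regularized least-squares
    problem; for the linear-Gaussian model the solution is distributed as Q_{theta*}(c)."""
    NK, P = Jc.shape
    eps = sigma * torch.randn(NK)   # data noise,   eps ~ N(0, sigma^2 I)
    c0 = alpha * torch.randn(P)     # prior sample, c0  ~ N(0, alpha^2 I)
    A = Jc.T @ Jc / sigma**2 + torch.eye(P) / alpha**2
    b = Jc.T @ (resid + eps) / sigma**2 + c0 / alpha**2
    return torch.linalg.solve(A, b)
```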
Derivation of $Q_{\theta^*}(\psi)$

We define the perturbed parameter $\psi$ as $\psi := \theta^* + \theta^* \odot c$. Since $\psi$ is affine in $c$, we get the distribution of $\psi$ as
$$Q_{\theta^*}(\psi) = \mathcal{N}\big(\psi \mid \theta^* + \theta^* \odot \mu_Q,\ \mathrm{diag}(\theta^*)\,\Sigma_Q\,\mathrm{diag}(\theta^*)\big),$$
which coincides with the expression given in Sec. 2.

D Representative cases of function-preserving scaling transformations

Activation-wise rescaling transformation [14, 64]: For NNs with ReLU activations, the following transformation is function-preserving:
$$R_{\gamma,l,k}(\theta)_i = \begin{cases} \gamma\,\theta_i, & \text{if } \theta_i \in \{\text{param. subset connecting as input edges to the } k\text{-th activation at the } l\text{-th layer}\} \\ \theta_i/\gamma, & \text{if } \theta_i \in \{\text{param. subset connecting as output edges to the } k\text{-th activation at the } l\text{-th layer}\} \\ \theta_i, & \text{otherwise} \end{cases} \tag{19}$$
Note that $R_{\gamma,l,k}(\cdot)$ is a finer-grained rescaling transformation than the layer-wise rescaling transformation (i.e., a common $\gamma$ for all activations in layer $l$) discussed in Dinh et al. [10]. Dinh et al. [10] showed that even layer-wise rescaling transformations can sharpen pre-trained solutions in terms of the trace of the Hessian (i.e., contradicting the FM hypothesis). This contradiction also occurs for previous PAC-Bayes bounds [14, 15] due to their scale-dependent terms.

Weight decay with BN layers [65]: For parameters $W \in \mathbb{R}^{n_l \times n_{l-1}}$ preceding a BN layer,
$$\mathrm{BN}(\mathrm{diag}(\gamma)\,W u) = \mathrm{BN}(W u) \tag{20}$$
for an input $u \in \mathbb{R}^{n_{l-1}}$ and a positive vector $\gamma \in \mathbb{R}^{n_l}_+$. This implies that scaling transformations on these parameters preserve the function represented by the NN for all $x \in \mathbb{R}^D$ and $\gamma \in \mathbb{R}^{n_l}_+$:
$$S_{\gamma,l,k}(\theta)_i = \begin{cases} \gamma_k\,\theta_i, & \text{if } \theta_i \in \{\text{param. subset connecting as input edges to the } k\text{-th activation at the } l\text{-th layer}\} \\ \theta_i, & \text{otherwise} \end{cases} \tag{21}$$
Note that weight decay regularization [12, 13] can be implemented as a realization of $S_{\gamma,l,k}(\cdot)$ (e.g., $\gamma = 0.9995$ for all activations preceding BN layers). Therefore, thanks to Proposition 2.2 and Proposition 2.5, our CTK-based bound is invariant to weight decay regularization applied to parameters preceding BN layers. We also refer to [50, 66] for an optimization perspective on weight decay with BN.

G Predictive uncertainty of Connectivity/Linearized Laplace

In this section, we derive the predictive uncertainty of Linearized Laplace (LL) and Connectivity Laplace (CL). By the matrix inversion lemma [70], the weight covariance of LL, $(I_P/\alpha^2 + J_\theta^\top J_\theta/\sigma^2)^{-1}$, can be rewritten in kernel form. Therefore, if $\sigma^2/\alpha^2 \to 0$, the weight covariance of LL converges to $\alpha^2\big(I_P - J_\theta^\top \Theta^{\theta^*\,-1}_{\mathcal{X}} J_\theta\big)$. With this weight covariance and the linearized NN, the predictive uncertainty of LL is as given in equation 10. Similarly, the predictive uncertainty of CL is as given in equation 11.

H Details on sharpness-generalization experiments

To verify that CS has a better correlation with generalization performance than existing sharpness measures, we evaluate three metrics: (a) Kendall's rank-correlation coefficient τ [51], which considers the consistency of a sharpness measure with the generalization gap (i.e., if one has higher sharpness, then it also has a higher generalization gap); (b) the granulated Kendall's coefficient [7], which examines Kendall's rank-correlation coefficient w.r.t. individual hyper-parameters to separately evaluate the effect of each hyper-parameter on the generalization gap; and (c) the conditional independence test [7], which captures the causal relationship between a measure and generalization.
Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, Jasper Snoek, Advances in Neural Information Processing Systems. H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. GarnettCurran Associates, Inc32Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
In search of the real inductive bias: On the role of implicit regularization in deep learning. Ryota Behnam Neyshabur, Nathan Tomioka, Srebro, ICLR (Workshop). Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In ICLR (Workshop), 2015.
Understanding deep learning requires rethinking generalization. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization, 2017.
Stronger generalization bounds for deep nets via a compression approach. Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang, PMLRProceedings of the 35th International Conference on Machine Learning. Jennifer Dy and Andreas Krausethe 35th International Conference on Machine Learning80Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 254-263. PMLR, 10-15 Jul 2018.
Simplifying neural nets by discovering flat minima. Sepp Hochreiter, Jürgen Schmidhuber, Advances in Neural Information Processing Systems. G. Tesauro, D. Touretzky, and T. LeenMIT Press7Sepp Hochreiter and Jürgen Schmidhuber. Simplifying neural nets by discovering flat minima. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7. MIT Press, 1995.
Fantastic generalization measures and where to find them. Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, Samy Bengio, International Conference on Learning Representations. Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantas- tic generalization measures and where to find them. In International Conference on Learning Representations, 2020.
A practical bayesian framework for backpropagation networks. J C David, Mackay, 10.1162/neco.1992.4.3.448Neural Comput. 43David J. C. MacKay. A practical bayesian framework for backpropagation networks. Neural Comput., 4(3):448-472, may 1992. ISSN 0899-7667. doi: 10.1162/neco.1992.4.3.448.
Laplace redux -effortless bayesian deep learning. Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig, Advances in Neural Information Processing Systems. A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman VaughanErik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. Laplace redux -effortless bayesian deep learning. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021.
Sharp minima can generalize for deep nets. Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio, PMLRProceedings of the 34th International Conference on Machine Learning. Doina Precup and Yee Whye Tehthe 34th International Conference on Machine Learning70Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1019-1028. PMLR, 06-11 Aug 2017.
Visualizing the loss landscape of neural nets. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein, Advances in Neural Information Processing Systems. S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. GarnettCurran Associates, Inc31Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
Decoupled weight decay regularization. Ilya Loshchilov, Frank Hutter, International Conference on Learning Representations. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
Three mechanisms of weight decay regularization. Guodong Zhang, Chaoqi Wang, Bowen Xu, Roger Grosse, International Conference on Learning Representations. Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse. Three mechanisms of weight decay regularization. In International Conference on Learning Representations, 2019.
Normalized flat minima: Exploring scale invariant definition of flat minima for neural networks using PAC-Bayesian analysis. Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama, PMLRProceedings of the 37th International Conference on Machine Learning. Hal Daumé III and Aarti Singhthe 37th International Conference on Machine Learning119Yusuke Tsuzuku, Issei Sato, and Masashi Sugiyama. Normalized flat minima: Exploring scale invariant definition of flat minima for neural networks using PAC-Bayesian analysis. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 9636-9647. PMLR, 13-18 Jul 2020.
Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. Jungmin Kwon, Jeongseop Kim, Hyunseo Park, In Kwon Choi, arXiv:2102.11600arXiv preprintJungmin Kwon, Jeongseop Kim, Hyunseo Park, and In Kwon Choi. Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. arXiv preprint arXiv:2102.11600, 2021.
Relative flatness and generalization. Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, Mario Boley, Advances in Neural Information Processing Systems. 342021Henning Petzka, Michael Kamp, Linara Adilova, Cristian Sminchisescu, and Mario Boley. Relative flatness and generalization. Advances in Neural Information Processing Systems, 34, 2021.
Neural tangent kernel: Convergence and generalization in neural networks. Arthur Jacot, Franck Gabriel, Clement Hongler, Advances in Neural Information Processing Systems. S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. GarnettCurran Associates, Inc31Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa- Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
Fisher-rao metric, geometry, and complexity of neural networks. Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, James Stokes, The 22nd International Conference on Artificial Intelligence and Statistics. PMLRTengyuan Liang, Tomaso Poggio, Alexander Rakhlin, and James Stokes. Fisher-rao metric, geometry, and complexity of neural networks. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 888-896. PMLR, 2019.
On large-batch training for deep learning: Generalization gap and sharp minima. Jorge Nitish Shirish Keskar, Ping Tak Peter Nocedal, Dheevatsa Tang, Mikhail Mudigere, Smelyanskiy, 5th International Conference on Learning Representations. Nitish Shirish Keskar, Jorge Nocedal, Ping Tak Peter Tang, Dheevatsa Mudigere, and Mikhail Smelyanskiy. On large-batch training for deep learning: Generalization gap and sharp minima. In 5th International Conference on Learning Representations, ICLR 2017, 2017.
Exploring generalization in deep learning. Srinadh Behnam Neyshabur, David Bhojanapalli, Nathan Mcallester, Srebro, arXiv:1706.08947arXiv preprintBehnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nathan Srebro. Exploring generalization in deep learning. arXiv preprint arXiv:1706.08947, 2017.
Wide neural networks of any depth evolve as linear models under gradient descent. Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, Jeffrey Pennington, Advances in Neural Information Processing Systems. H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. GarnettCurran Associates, Inc32Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl- Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Bayesian deep ensembles via the neural tangent kernel. Bobby He, Yee Whye Balaji Lakshminarayanan, Teh, Advances in Neural Information Processing Systems. H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. LinCurran Associates, Inc33Bobby He, Balaji Lakshminarayanan, and Yee Whye Teh. Bayesian deep ensembles via the neural tangent kernel. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1010-1022. Curran Associates, Inc., 2020.
Jaehoon Lee, Samuel Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, and Jascha Sohl-Dickstein. Finite versus infinite neural networks: an empirical study. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 15156-15172. Curran Associates, Inc., 2020.
Like Hui and Mikhail Belkin. Evaluation of neural architectures trained with square loss vs cross-entropy in classification tasks. In International Conference on Learning Representations, 2021.
Mohammad Emtiyaz Khan, Alexander Immer, Ehsan Abedi, and Maciej Korzepa. Approximate inference turns deep networks into gaussian processes. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Alexander Immer, Maciej Korzepa, and Matthias Bauer. Improving predictions of bayesian neural nets via local linearization. In AISTATS, pages 703-711, 2021.
Hippolyt Ritter, Aleksandar Botev, and David Barber. A scalable laplace approximation for neural networks. In International Conference on Learning Representations, 2018.
Agustinus Kristiadi, Matthias Hein, and Philipp Hennig. Being bayesian, even just a bit, fixes overconfidence in relu networks. In International Conference on Machine Learning, pages 5436-5446. PMLR, 2020.
Erik Daxberger, Eric Nalisnick, James U Allingham, Javier Antoran, and Jose Miguel Hernandez-Lobato. Bayesian deep learning via subnetwork inference. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 2510-2521. PMLR, 2021.
David A McAllester. Pac-bayesian model averaging. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, pages 164-170, 1999.
Maria Perez-Ortiz, Omar Rivasplata, John Shawe-Taylor, and Csaba Szepesvári. Tighter risk certificates for neural networks. Journal of Machine Learning Research, 22(227):1-40, 2021.
Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. SNIP: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations, 2019.
Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, and Philip HS Torr. A signal propagation perspective for pruning neural networks at initialization. arXiv preprint arXiv:1906.06307, 2019.
Alessandro Achille, Aditya Golatkar, Avinash Ravichandran, Marzia Polito, and Stefano Soatto. LQF: Linear quadratic fine-tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15729-15739, 2021.
Wesley Maddox, Shuai Tang, Pablo Moreno, Andrew Gordon Wilson, and Andreas Damianou. Fast adaptation with linearized neural networks. In International Conference on Artificial Intelligence and Statistics, pages 2737-2745. PMLR, 2021.
A Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via hessian eigenvalue density. In International Conference on Machine Learning, pages 2232-2241. PMLR, 2019.
Johnathan M Bardsley, Antti Solonen, Heikki Haario, and Marko Laine. Randomize-then-optimize: A method for sampling from posterior distributions in nonlinear inverse problems. SIAM Journal on Scientific Computing, 36(4):A1895-A1910, 2014.
Alexander G de G Matthews, Jiri Hron, Richard E Turner, and Zoubin Ghahramani. Sample-then-optimize posterior sampling for bayesian linear models. In NIPS Workshop on Advances in Approximate Bayesian Inference, 2017.
Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P Adams, and Peter Orbanz. Non-vacuous generalization bounds at the imagenet scale: a pac-bayesian compression approach. arXiv preprint arXiv:1804.05862, 2018.
Gintare Karolina Dziugaite and Daniel M. Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. In Proceedings of the 33rd Annual Conference on Uncertainty in Artificial Intelligence (UAI), 2017.
Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412, 2020.
Michael F Hutchinson. A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines. Communications in Statistics-Simulation and Computation, 18(3):1059-1076, 1989.
Javier Antoran, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, and José Miguel Hernández-Lobato. Linearised laplace inference in networks with normalisation layers and the neural g-prior. In Fourth Symposium on Advances in Approximate Bayesian Inference, 2021.
Javier Antoran, David Janz, James U Allingham, Erik Daxberger, Riccardo Rb Barbano, Eric Nalisnick, and José Miguel Hernández-Lobato. Adapting the linearised laplace model evidence for modern deep learning. In International Conference on Machine Learning, pages 796-821. PMLR, 2022.
Andrew YK Foong, Yingzhen Li, José Miguel Hernández-Lobato, and Richard E Turner. 'In-between' uncertainty in bayesian neural networks. arXiv preprint arXiv:1906.11537, 2019.
Twan Van Laarhoven. L2 regularization versus batch and weight normalization. arXiv preprint arXiv:1706.05350, 2017.
Maurice G Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81-93, 1938.
Stanislaw Jastrzebski, Devansh Arpit, Oliver Astrand, Giancarlo B Kerg, Huan Wang, Caiming Xiong, Richard Socher, Kyunghyun Cho, and Krzysztof J Geras. Catastrophic fisher explosion: Early phase fisher matrix impacts generalization. In International Conference on Machine Learning, pages 4772-4784. PMLR, 2021.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In International Conference on Machine Learning, pages 1861-1869. PMLR, 2015.
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050-1059. PMLR, 2016.
Mattias Teye, Hossein Azizpour, and Kevin Smith. Bayesian uncertainty estimation for batch normalized deep networks. In International Conference on Machine Learning, pages 4907-4916. PMLR, 2018.
Yeming Wen, Dustin Tran, and Jimmy Ba. Batchensemble: an alternative approach to efficient ensemble and lifelong learning. arXiv preprint arXiv:2002.06715, 2020.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR, 2017.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009.
Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
Kevin P Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012.
Behnam Neyshabur, Russ R Salakhutdinov, and Nati Srebro. Path-sgd: Path-normalized optimization in deep neural networks. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 448-456, Lille, France, 2015. PMLR.
Ekaterina Lobacheva, Maxim Kodryan, Nadezhda Chirkova, Andrey Malinin, and Dmitry P Vetrov. On the periodic behavior of neural network training with batch normalization and weight decay. Advances in Neural Information Processing Systems, 34:21545-21556, 2021.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Raphael A Meyer, Cameron Musco, Christopher Musco, and David P Woodruff. Hutch++: Optimal stochastic trace estimation. In Symposium on Simplicity in Algorithms (SOSA), pages 142-155. SIAM, 2021.
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs. GitHub, 2018.
M.A. Woodbury. Inverting Modified Matrices. Memorandum Report, Statistical Research Group, Princeton, 1950.
Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark Depristo, Joshua Dillon, and Balaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Joost Van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. Uncertainty estimation using a single deep deterministic neural network. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 9690-9700. PMLR, 2020.
F Details of computing Connectivity Sharpness

It is well known that the empirical NTK or Jacobian is intractable for modern NN architectures (e.g., ResNet [37] or BERT [67]). Therefore, one might wonder how Connectivity Sharpness can be computed for these architectures. However, Connectivity Sharpness in Sec. 2.5 is defined as the trace of the empirical CTK, so one can compute CS with Hutchinson's method [46, 68]. According to Hutchinson's method, the trace of a matrix $A \in \mathbb{R}^{m \times m}$ is
$$\mathrm{tr}(A) = \mathrm{tr}(A I_m) = \mathrm{tr}(A\,\mathbb{E}_z[zz^\top]) = \mathbb{E}_z[\mathrm{tr}(Azz^\top)] = \mathbb{E}_z[\mathrm{tr}(z^\top A z)] = \mathbb{E}_z[z^\top A z],$$
where $z \in \mathbb{R}^m$ is a random variable with $\mathrm{cov}(z) = I_m$ (e.g., standard normal or Rademacher distribution). Since $A = C^X_{\theta^*} = J_c(\mathcal{M}, 0_p) J_c(\mathcal{M}, 0_p)^\top$ in our case, we further use a mini-batch approximation to compute $z^\top A z$: (i) sample $z_\mathcal{M} \in \mathbb{R}^{Mk}$ from the Rademacher distribution for a mini-batch $\mathcal{M}$ of size $M$, and (ii) compute $v_\mathcal{M} := J_c(\mathcal{M}, 0_p)^\top z_\mathcal{M} \in \mathbb{R}^P$ with the Jacobian-vector product of JAX.
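For concreteness, a minimal NumPy sketch of this trace estimator is given below. The paper computes the Jacobian-vector products with JAX; the `matvec` interface, the probe count, and the toy Gram matrix here are our own illustrative assumptions, not the authors' code.

```python
import numpy as np

def hutchinson_trace(matvec, dim, num_probes=64, rng=None):
    """Estimate tr(A) given only a matrix-vector product z -> A @ z.

    Uses Rademacher probes, exploiting E[z^T A z] = tr(A) when cov(z) = I.
    """
    rng = np.random.default_rng(rng)
    estimates = []
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe
        estimates.append(z @ matvec(z))        # one quadratic form z^T A z
    return float(np.mean(estimates))

# Toy check against the exact trace of a random Gram matrix (a stand-in
# for the CTK, which is also of the form J J^T).
rng = np.random.default_rng(0)
J = rng.standard_normal((100, 300))
A = J @ J.T
print(hutchinson_trace(lambda z: A @ z, dim=100, num_probes=500))
print(np.trace(A))  # should be close
```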
173,990,564 | Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel | Neural Networks (NNs) have been extensively used for a wide spectrum of realworld regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough, but also the uncertainty (i.e. risk, or confidence) of that prediction must be estimated. Standard NNs, which are most often used in such tasks, do not provide any such information. Existing approaches try to solve this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not perform as well as standard NNs. In this paper, a new framework called RIO is developed that makes it possible to estimate uncertainty in any pretrained standard NN. RIO models prediction residuals using Gaussian Process with a composite input/output kernel. The residual prediction and I/O kernel are theoretically motivated and the framework is evaluated in twelve real-world datasets. It is found to provide reliable estimates of the uncertainty, reduce the error of the point predictions, and scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient in building real-world applications of NNs.Preprint. Under review. | [
6706414
] | Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel
Xin Qiu Cognizant
Elliot Meyerson [email protected]
Risto Miikkulainen [email protected]
Cognizant
Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel
Neural Networks (NNs) have been extensively used for a wide spectrum of realworld regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough, but also the uncertainty (i.e. risk, or confidence) of that prediction must be estimated. Standard NNs, which are most often used in such tasks, do not provide any such information. Existing approaches try to solve this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not perform as well as standard NNs. In this paper, a new framework called RIO is developed that makes it possible to estimate uncertainty in any pretrained standard NN. RIO models prediction residuals using Gaussian Process with a composite input/output kernel. The residual prediction and I/O kernel are theoretically motivated and the framework is evaluated in twelve real-world datasets. It is found to provide reliable estimates of the uncertainty, reduce the error of the point predictions, and scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient in building real-world applications of NNs.Preprint. Under review.
Introduction
Nowadays, Neural Networks (NNs) are arguably the most popular machine learning tool in the Artificial Intelligence (AI) community. Researchers and practitioners have applied NNs to a wide variety of fields, including manufacturing [4], bioinformatics [23], physics [2], finance [28], chemistry [1], healthcare [33], etc. Although standard NNs are good at making a point prediction (a single outcome) for supervised learning tasks, they are unable to provide uncertainty information about their predictions. For real-world decision makers, representing model uncertainty is of crucial importance [10,13,20]. For example, in the case of regression, providing a 95% confidence interval of the prediction allows the decision maker to anticipate the possible outcomes with explicit probability. In contrast, simply returning a single point prediction imposes increased risk on the decision making; e.g., without uncertainty information, a medical treatment that looks good in prediction but is actually risky may be interpreted with unwarranted confidence.
Conventional Bayesian models such as Gaussian Processes (GPs) [31] offer a mathematically grounded approach to reasoning about predictive uncertainty, but usually come with prohibitive computational cost and weaker performance compared to NNs [10]. As a potential solution, considerable research has been devoted to combining Bayesian models and NNs (see the "Related Work" section for a detailed review), aiming to overcome the downsides of both. However, from the classical Bayesian Neural Network [27], in which a distribution over weights is learned, to the most recent Neural Processes [11,12,19], in which a distribution over functions is defined, all these methods require significant modifications to the model infrastructure and training pipeline. Compared to standard (non-Bayesian) NNs, these new models are often computationally slower to train and harder to implement [21,10], creating tremendous difficulty for practical use. In [10], a theoretical tool is derived to extract uncertainty information from dropout training; however, the method can only be applied to dropout models, and it also requires changing the internal inference pipeline of dropout NNs. Quantifying point-prediction uncertainty in standard NNs, which are overwhelmingly popular in practical applications, thus remains a challenging problem with significant potential impact.
To circumvent the above issues, this paper presents a new framework that can quantitatively estimate the point-prediction uncertainty of standard NNs without any modifications to the model structure or training pipeline. The proposed framework can be directly applied to any pretrained NN without retraining. The main idea is to estimate the prediction residuals of the NN using a modified GP, which introduces a new composite kernel that makes use of both the inputs and outputs of the NN. The framework is referred to as RIO (for Residual Input/Output), and the new kernel as an I/O kernel. In addition to uncertainty estimation, RIO has an interesting and unexpected side-effect: it also provides a way to reduce the error of the NN predictions. Moreover, with the help of recent sparse GP models, RIO scales well to large datasets. Since classification problems can also be treated as regression on class labels [24], this paper focuses on regression tasks. A theoretical foundation is provided to prove the effectiveness of both residual estimation and the I/O kernel. Twelve real-world datasets are tested in the empirical studies, in which RIO exhibits reliable uncertainty estimates, more accurate point predictions, and better scalability compared to existing approaches.
Method
In this section, the general problem statement is given, and the RIO framework is developed and justified theoretically, focusing on the two main contributions: estimating the residuals with a GP and using an I/O kernel. For background on NNs, GPs, and their more efficient approximation, Stochastic Variational Gaussian Processes (SVGPs) [16,17], see Section S1 in the appendix.
Problem Statement
Consider a training dataset $D = (X, Y) = \{(x_i, y_i) \mid i = 1, 2, \ldots, n\}$, and a pre-trained standard NN that outputs a point prediction $\hat{y}_i$ given $x_i$. The problem is two-fold: (1) quantify the uncertainty in the predictions of the NN, and (2) calibrate the point predictions of the NN (i.e., make them more accurate).
Framework Overview
The main idea of the proposed framework, RIO (for Residual estimation with an I/O kernel), is to predict the residuals between observed outcomes $y$ and NN predictions $\hat{y}$ using a GP with a composite kernel. The framework can be divided into two phases: a training phase and a deployment phase.
In the training phase, the residuals between observed outcomes and NN predictions on the training dataset are calculated as
$$r_i = y_i - \hat{y}_i, \quad \text{for } i = 1, 2, \ldots, n. \tag{1}$$
Let $r$ denote the vector of all residuals and $\hat{y}$ the vector of all NN predictions. A GP with a composite kernel is trained assuming $r \sim \mathcal{N}(0, K_c((X,\hat{y}),(X,\hat{y})) + \sigma_n^2 I)$, where $K_c((X,\hat{y}),(X,\hat{y}))$ denotes an $n \times n$ covariance matrix at all pairs of training points based on a composite kernel
$$k_c((x_i,\hat{y}_i),(x_j,\hat{y}_j)) = k_{in}(x_i, x_j) + k_{out}(\hat{y}_i, \hat{y}_j), \quad \text{for } i, j = 1, 2, \ldots, n. \tag{2}$$
Figure 1: Complete model-building process. Given a dataset, first a standard NN model is constructed and trained by a data scientist. The RIO method takes this pre-trained model and trains a GP to estimate the residuals of the NN, using both the output of the NN and the original input. Blue pathways are only active during the training phase. In the deployment phase, the GP provides uncertainty estimates for the predictions while calibrating them, i.e., making point predictions more accurate. Overall, RIO transforms the standard NN regression model into a more practical probabilistic estimator.

Suppose a radial basis function (RBF) kernel is used for both $k_{in}$ and $k_{out}$. Then,
$$k_c((x_i,\hat{y}_i),(x_j,\hat{y}_j)) = \sigma_{in}^2 \exp\!\Big(-\frac{1}{2 l_{in}^2}\|x_i - x_j\|^2\Big) + \sigma_{out}^2 \exp\!\Big(-\frac{1}{2 l_{out}^2}\|\hat{y}_i - \hat{y}_j\|^2\Big). \tag{3}$$
The training process of the GP learns the hyperparameters $\sigma_{in}^2$, $l_{in}$, $\sigma_{out}^2$, $l_{out}$, and $\sigma_n^2$ by maximizing the log marginal likelihood $\log p(r|X,\hat{y})$ given by
$$\log p(r|X,\hat{y}) = -\frac{1}{2} r^\top \big(K_c((X,\hat{y}),(X,\hat{y})) + \sigma_n^2 I\big)^{-1} r - \frac{1}{2}\log\big|K_c((X,\hat{y}),(X,\hat{y})) + \sigma_n^2 I\big| - \frac{n}{2}\log 2\pi. \tag{4}$$
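As a concrete illustration of Eqs. (3) and (4), here is a minimal NumPy sketch of the composite I/O kernel and the (negative) log marginal likelihood; the helper names and the Cholesky-based evaluation are our own assumptions rather than the authors' released code:

```python
import numpy as np

def rbf(A, B, sig2, ls):
    """RBF kernel matrix between row-stacked inputs A (n,d) and B (m,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sig2 * np.exp(-0.5 * d2 / ls**2)

def io_kernel(X1, Y1, X2, Y2, hp):
    """Composite kernel of Eq. (3): input RBF plus output RBF.

    Y1/Y2 are the NN outputs, stacked as 2-D arrays of shape (n, d_out).
    """
    return (rbf(X1, X2, hp["sig2_in"], hp["l_in"])
            + rbf(Y1, Y2, hp["sig2_out"], hp["l_out"]))

def neg_log_marginal_likelihood(hp, X, Yhat, r):
    """Negative of Eq. (4), evaluated via a Cholesky factorization."""
    n = len(r)
    K = io_kernel(X, Yhat, X, Yhat, hp) + hp["sig2_n"] * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    return (0.5 * r @ alpha                 # data-fit term
            + np.log(np.diag(L)).sum()      # equals 0.5 * log|K|
            + 0.5 * n * np.log(2 * np.pi))
```

Minimizing `neg_log_marginal_likelihood` over the five hyperparameters recovers the training procedure described above.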
In the deployment phase, a test point $x_*$ is input to the NN to get an output $\hat{y}_*$. The trained GP predicts the distribution of the residual as $r_* \mid X, \hat{y}, r, x_*, \hat{y}_* \sim \mathcal{N}(\bar{r}_*, \mathrm{var}(r_*))$, where
$$\bar{r}_* = k_*^\top \big(K_c((X,\hat{y}),(X,\hat{y})) + \sigma_n^2 I\big)^{-1} r, \tag{5}$$
$$\mathrm{var}(r_*) = k_c((x_*,\hat{y}_*),(x_*,\hat{y}_*)) - k_*^\top \big(K_c((X,\hat{y}),(X,\hat{y})) + \sigma_n^2 I\big)^{-1} k_*, \tag{6}$$
where $k_*$ denotes the vector of kernel-based covariances $k_c((x_*,\hat{y}_*),(x_i,\hat{y}_i))$ between $(x_*,\hat{y}_*)$ and the training points.

Interestingly, the predicted residuals can also be used to calibrate the point predictions of the NN. Finally, the calibrated prediction with uncertainty information is given by
$$\hat{y}_* \sim \mathcal{N}(\hat{y}_* + \bar{r}_*, \mathrm{var}(r_*)). \tag{7}$$
In other words, RIO not only adds uncertainty estimation to a standard NN; it also makes its output more accurate, without any modification of its architecture or training. Figure 1 shows the overall procedure when applying the proposed framework in real-world applications.
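The deployment phase (Eqs. 5-7) can then be sketched as follows, assuming the `io_kernel` helper and fitted hyperparameters `hp` from the sketch above (again our own minimal illustration, and assuming a scalar NN output):

```python
import numpy as np

def rio_predict(x_star, yhat_star, X, Yhat, r, hp, kernel):
    """Posterior residual mean/variance and calibrated output (Eqs. 5-7).

    `kernel(X1, Y1, X2, Y2, hp)` is the composite I/O kernel from the
    previous sketch; `r` holds the training residuals y_i - yhat_i.
    """
    n = len(r)
    K = kernel(X, Yhat, X, Yhat, hp) + hp["sig2_n"] * np.eye(n)
    k_star = kernel(x_star[None, :], yhat_star[None, :], X, Yhat, hp)[0]
    r_bar = k_star @ np.linalg.solve(K, r)               # Eq. (5)
    k_ss = kernel(x_star[None, :], yhat_star[None, :],
                  x_star[None, :], yhat_star[None, :], hp)[0, 0]
    var = k_ss - k_star @ np.linalg.solve(K, k_star)     # Eq. (6)
    mean = yhat_star + r_bar                             # Eq. (7): calibrated prediction
    return mean, var
```

A 95% confidence interval for the outcome can then be formed as `mean ± 1.96 * sqrt(var)` (optionally adding the noise variance if observation noise should be included).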
Underlying Rationale of Residual Prediction
This subsection gives a theoretical justification for why residual prediction helps both error correction and uncertainty estimation of an NN. Specifically, such prediction (1) leads to output that is more accurate than that of GP alone, (2) leads to output that is more accurate than that of the NN alone, and (3) leads to uncertainty estimates that are positively correlated with variance of NN residuals.
Consider a dataset $D = (X, Y) = \{(x_i, y_i)\}_{i=1}^n$, with $x_i$ drawn i.i.d. from a distribution $p(x)$, and
$$y_i = f(x_i) + g(x_i) + \epsilon_i,$$
where $f(\cdot) \sim \mathcal{GP}(0, k(\cdot,\cdot))$, $\epsilon_i \sim \mathcal{N}(0, \sigma_n^2 > 0)$, and $g$ is a function with mean zero and variance $\mathbb{E}_x[g^2(x)] = \sigma_g^2 > 0$. Without loss of generality, $f(\cdot)$ represents the component of the target function that can be modeled by a GP, $\epsilon$ represents the observation noise, and $g(\cdot)$ represents all the remaining components. With mean squared error as the loss, the optimal predictor for $y$ is $h^*(x) = f(x) + g(x)$.
Suppose that, from the perspective of the GP given $D$, $g$ is indistinguishable from noise, but that there is a neural network $h_{NN}$ that has successfully learned some of its structure. Consider the residuals $h^*(x) - h_{NN}(x) = r(x) = r_f(x) + r_g(x)$. Here, $r_f$ is the remaining GP component, i.e., $r_f \sim \mathcal{GP}(0, \alpha k(\cdot,\cdot))$ for some non-negative $\alpha \le 1$. Similarly, $r_g$ is the remainder of $g$ indistinguishable from noise, i.e., $\sigma_g^2 - \mathbb{E}_x[r_g^2(x)] = \delta > 0$.

Figure 2: (left) The neural network has discovered true complex structure in the labels, so the residuals have low variance and are easy for the GP to fit with high confidence; (right) the ineffective neural network has introduced unnecessary complexity, so the residuals are modeled with high uncertainty. In both cases, RIO matches the intuition for how uncertain the NN really is.
Aside from these indistinguishable functions, assume GP hyperparameters can be estimated optimally.
Let $h_{GP}$ be the posterior mean of the GP trained directly on $D$, and $r_{GP}$ that of a GP trained to fit the residuals, which yields the final predictor $h_{GP+NN} = h_{NN} + r_{GP}$. Let $E^g_{GP}(X)$, $E^g_{NN}(X)$, and $E^g_{GP+NN}(X)$ be the expected generalization errors of $h_{GP}$, $h_{NN}$, and $h_{GP+NN}$. First, following a standard approach [31], consider the eigenfunction expansion $k(x, x') = \sum_j \lambda_j \phi_j(x)\phi_j(x')$ and $\int k(x, x')\phi_i(x)p(x)\,dx = \lambda_i \phi_i(x')$. Let $\Lambda$ be the diagonal matrix of the eigenvalues $\lambda_j$, and $\Phi$ the design matrix, i.e., $\Phi_{ji} = \phi_j(x_i)$. The following series of results captures the improvement due to residual estimation (proofs in Section S3.1 of the appendix).
Lemma 2.1. $E^g_{GP}(X) = \mathrm{tr}\big(\Lambda^{-1} + (\sigma_n^2 + \sigma_g^2)^{-1}\Phi\Phi^\top\big)^{-1} + \sigma_g^2$.

Lemma 2.2. $E^g_{NN}(X) = \alpha\,\mathbb{E}[f^2(x)] + \sigma_g^2 - \delta$.

Lemma 2.3. $E^g_{GP+NN}(X) = \mathrm{tr}\big[\alpha^{-1}\Lambda^{-1} + (\sigma_n^2 + \sigma_g^2 - \delta)^{-1}\Phi\Phi^\top\big]^{-1} + \sigma_g^2 - \delta$.

Theorem 2.4. $E^g_{GP+NN}(X) < E^g_{GP}(X) - \delta$ and $E^g_{GP+NN}(X) < E^g_{NN}(X)$.
In particular, the improvement with respect to GP is greater than the reduction in apparent noise. Importantly, this improvement in error corresponds to a predictive variance that is closer to the optimal for this problem, so uncertainty estimates are improved as well. Experiments on real world data confirm that when predicting the residuals of an NN, the estimated noise level of the GP is indeed lower and correlates with the reduction in generalization error (see S2.3 in appendix). This reduction is possible because the NN is able to extract higher-level features not available to the GP.
These results also lead to a key practical property, which is illustrated in Figure 2.

Property. The expected predictive variance of $r_{GP}$ increases with $\mathbb{E}[r_f^2(x)]$ and with $\mathbb{E}[r_g^2(x)]$.

Proof. Increases in $\mathbb{E}[r_f^2(x)]$ lead to increases in $\alpha$, and increases in $\mathbb{E}[r_g^2(x)]$ lead to decreases in $\delta$. So, increases in either $\mathbb{E}[r_f^2(x)]$ or $\mathbb{E}[r_g^2(x)]$ lead to increases of $\mathrm{tr}\big[\alpha^{-1}\Lambda^{-1} + (\sigma_n^2 + \sigma_g^2 - \delta)^{-1}\Phi\Phi^\top\big]^{-1}$, which is the expected predictive variance of $r_{GP}$.
This property matches the intuition that the GP's variance should generally be higher for bad NNs than for good NNs. Such a property is crucial to accurately measuring the confidence of NN predictions.
Robustness of I/O Kernel
This section provides a justification for why a GP using the proposed I/O kernel is more robust than the standard GP, i.e., using the input kernel alone. The reasoning closely matches that in Section 2.3.
Consider the setup in Section 2.3, but now with
$$y_i = f_{in}(x_i) + f_{out}(x_i) + \epsilon_i,$$
where $f_{in} \sim \mathcal{GP}(0, k_{in})$ and $f_{out} \sim \mathcal{GP}(0, k_{out})$, with non-trivial RBF kernels $k_{in}$, $k_{out}$ (as in Equation 3). Again, suppose that, due to its high expressivity [7], $h_{NN}$ is indistinguishable from noise from the perspective of the GP.

Let $E^g_I(X)$, $E^g_O(X)$, and $E^g_{I/O}(X)$ be the expected generalization errors of the standard GP, the GP with output kernel only, and the GP with I/O kernel. Then, the expected result follows (proof in Section S3.2 of the appendix).

Theorem 2.5. $E^g_{I/O}(X) < E^g_I(X)$ and $E^g_{I/O}(X) < E^g_O(X)$.
The optimizer associated with the GP simultaneously optimizes the hyperparameters of both kernels, so the less useful kernel usually receives a smaller signal variance. As a result, the I/O kernel is resilient to failures of either kernel. In particular, the GP using the I/O kernel improves performance even when the problem is so complex that Euclidean distance in the input space provides no useful correlation information, or when the input space contains noisy features. Conversely, when the NN is a bad predictor and $h_{NN}$ is simply noise, the standard GP with the input kernel alone is recovered. In other words, the I/O kernel is never worse than using the input kernel alone, and in practice it is often better. This conclusion is confirmed in the experiments, as described next.
Scalability
RIO is scalable to large datasets via sparse GP methods, e.g., SVGP [16,17]. All the conclusions in the previous sections remain valid, since sparse GP is simply an approximation of the original GP. When applying SVGP with a traditional optimizer, e.g., L-BFGS-B [6,38], the computational complexity is $O(nm^2)$ and the space complexity is $O(nm)$, where $n$ is the number of data points and $m$ is the number of inducing variables. Experiments show that the computational cost of this implementation is significantly cheaper than that of other state-of-the-art approaches.
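For illustration, the hyperparameters of the GP from the earlier sketches can be fitted with SciPy's L-BFGS-B under a log-parameterization. Note that this minimal sketch optimizes the exact GP marginal likelihood, whereas the paper optimizes an SVGP bound; the helper names are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Assumes `neg_log_marginal_likelihood` from the earlier sketch is in
# scope, along with training inputs X, NN outputs Yhat, and residuals r.
def fit_hyperparameters(X, Yhat, r):
    names = ["sig2_in", "l_in", "sig2_out", "l_out", "sig2_n"]

    def objective(log_theta):
        hp = dict(zip(names, np.exp(log_theta)))  # positivity via log-space
        return neg_log_marginal_likelihood(hp, X, Yhat, r)

    # L-BFGS-B with finite-difference gradients (jac=None by default).
    res = minimize(objective, x0=np.zeros(len(names)), method="L-BFGS-B")
    return dict(zip(names, np.exp(res.x)))
```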
Empirical Evaluation
Experiments in this section compare nine algorithms on 12 real-world datasets. The algorithms include the standard NN, the proposed RIO framework, four ablated variants of RIO, and three state-of-the-art models that can provide predictive uncertainty, namely SVGP [16], Neural Network Gaussian Process (NNGP) [24], and Attentive Neural Processes (ANP) [19]. In naming the RIO variants, "R" means estimating NN residuals then correcting NN outputs, "Y" means directly estimating outcomes, "I" means only using the input kernel, "O" means only using the output kernel, and "IO" means using the I/O kernel. For all RIO variants (including full RIO), SVGP is used as the GP component, but with the appropriate kernel and prediction target. Therefore, "Y+I" amounts to the original SVGP, and it is denoted as "SVGP" in all the experimental results. All 12 datasets are real-world regression problems from the UCI Machine Learning Repository [9], and cover a wide variety of dataset sizes and feature dimensionalities. Except for the "MSD" dataset, all datasets are tested for 100 independent runs. During each run, the dataset is randomly split into training, validation, and test sets, and all algorithms are trained on the same split. All RIO variants that involve an output kernel or residual estimation are based on the NN trained in the same run. For "MSD", since the dataset split is strictly predefined by the provider, only 10 independent runs are conducted. NNGP and ANP are only tested on the four smallest datasets (based on the product of dataset size and feature dimensionality) because they do not scale to larger datasets within the available computational time. Notably, no extensive hyperparameter tuning is conducted for any RIO variant; the same default setup is used for all experiments, i.e., an RBF kernel and 50 inducing points. See the appendix for details of the experimental setup. Table 1 summarizes the numerical results from these experiments. The main conclusions in terms of point-prediction error, uncertainty estimation, computational requirements, and ablation studies are summarized below.
Point-Prediction Error
The errors between point predictions of models and true outcomes of test points are measured using Root Mean Square Error (RMSE), and the mean and standard deviation of RMSEs over multiple experimental runs are shown in the "Prediction RMSE" column of Table 1. For models that return a probabilistic distribution, the mean of the distribution is taken as the point prediction. RIO performs best or equals the best (based on statistical tests) in 11 out of 12 datasets, and ranks second in the remaining dataset. Compared to the standard NN, the probabilistic models that predict the outcomes directly generally perform worse.

Table 1 note: The symbols † and ‡ indicate that the difference between the marked entry and RIO is statistically significant at the 5% significance level using the paired t-test and the Wilcoxon test, respectively. The best entries that are significantly better than all the others under at least one statistical test are marked in boldface (ties are allowed).

This result demonstrates the advantages of standard NNs in making point predictions, and makes it compelling to extract uncertainty information from NN predictions. Figure 3 shows a comparison among NN, RIO, and SVGP in terms of prediction RMSE. RIO improves the NN predictions consistently, regardless of how the dataset is split and how well the NN is trained. Conversely, SVGP performs significantly worse than the NN on all the datasets. For the CT dataset, which has 386 input features, SVGP fails severely, since the input kernel cannot capture the implicit correlation information. ANP shows extremely unstable performance on the airfoil dataset due to its poor scalability as dataset size increases. To conclude, applying RIO to NNs not only provides additional uncertainty information, but also reduces point-prediction error.
Uncertainty Estimation

Confidence intervals (CIs) are a useful tool to estimate the distribution of outcomes with explicit probabilities. To measure the quality of uncertainty estimation quantitatively, the percentages of testing outcomes that fall within the 95%/90%/68% CIs estimated by each algorithm are calculated. These percentages should be as close to the estimated confidence levels as possible; e.g., a perfect uncertainty estimator would have exactly 95% of testing outcomes within its estimated 95% CIs. The differences between the estimated CIs and true CIs are quantified using their standardized errors with respect to the Z-score (see the appendix for details). The RMSEs based on these errors are shown in Table 1 for the estimated 95%, 90%, and 68% CIs. RIO provides the best or equals the best uncertainty estimates in 7, 6, and 5 out of 12 datasets for the 95%, 90%, and 68% CIs, respectively. In the remaining datasets, the differences between RIO and the best entries are also small (≤0.145 with respect to Z-score). One interesting observation is that on the CT dataset, SVGP shows very poor prediction RMSE and 95% CI RMSE, but achieves the best 90% and 68% CI RMSE. After investigation, this turned out to happen by chance and is not due to accurate CI estimation by SVGP (see Section S2.4 in the appendix for details).
To provide a more straightforward comparison, Figure 4 shows the distribution of percentages of testing outcomes that fall within the estimated 68% CIs over 100 independent runs. Note that an accurate coverage percentage for the 68% CI is usually harder to achieve than for the 95% and 90% CIs, because the true distribution of observed outcomes is denser at the 68% CI boundary than in the tails. RIO makes reliable CI estimations in most cases, although it occasionally returns erroneous CIs for the yacht dataset. ANP provides reasonable CIs on two datasets but performs unstably on the airfoil dataset. NNGP always returns overconfident CIs for these real-world datasets due to the lack of noise estimation in its original implementation. Boxplots of coverage percentages for all the CIs over all the datasets are presented in the appendix. The conclusion is that, compared to all the other tested models, RIO provides reasonable CI estimations in most cases.
Computation Time

The "time" column in Table 1 reports the computation time of each algorithm, measured as the average training (wall-clock) time over the independent runs (see the appendix for measurement details).
Discussion and Future Directions
In addition to reliable uncertainty estimation, accurate point prediction, and good scalability, RIO also provides benefits in other aspects.
RIO can be directly applied to any standard NN without modification to the model architecture or training pipeline. Moreover, neither retraining of the NN nor changes to the inference process are required. The framework simply requires the outputs of an NN; it does not need access to any internal structure. This feature makes the framework more accessible to practitioners in real-world applications; e.g., data scientists can train NNs using traditional pipelines, then directly apply RIO to the trained NNs.
RIO also provides robustness to a type of adversarial attack. Consider a worst-case scenario, in which an adversary can arbitrarily alter the output of the NN with minuscule changes to the input. It is well-known that there are NNs for which this is possible [14]. In this case, with the help of the I/O kernel, the model becomes highly uncertain with respect to the output kernel. A confident prediction then requires both input and output to be reasonable. In the real world, a high degree of uncertainty may meet a threshold for disqualifying the prediction as outside the scope of the model's ability.
There are several promising future directions for extending RIO. First, applying RIO to reinforcement learning (RL) algorithms, which usually use standard NNs for reward prediction, allows uncertainty estimation of future rewards. Agents can then employ more mathematically efficient exploration strategies, e.g., bandit algorithms [35], rather than traditional epsilon-greedy methods. Second, applying RIO to Bayesian optimization (BO) [26] makes it possible to use standard NNs in surrogate modeling. This approach can potentially improve the expressivity of the surrogate model and the scalability of BO. Third, since RIO only requires access to the inputs and outputs of NNs, it can be directly applied to any existing prediction model, including hybrid and ensemble models. This makes RIO a more general tool for real-world practitioners.
Related Work
There has been significant interest in combining NNs with probabilistic Bayesian models. An early approach was Bayesian Neural Networks, in which a prior distribution is defined on the weights and biases of an NN, and a posterior distribution is then inferred from the training data [25,27]. Traditional variational inference techniques have been applied to the learning procedure of Bayesian NNs, but with limited success [3,15,18]. By using a more advanced variational inference method, new approximations for Bayesian NNs were achieved that provided performance similar to dropout NNs [5]. However, the main drawbacks of Bayesian NNs remain: prohibitive computational cost and a difficult implementation procedure compared to standard NNs.
Alternatives to Bayesian NNs have been developed recently. One such approach introduces a training pipeline that incorporates ensembles of NNs and adversarial training [21]. Another approach, NNGP, considers a theoretical connection between NNs and GPs to develop a model approximating the Bayesian inference process of wide deep neural networks [24]. Deep kernel learning (DKL) combines NNs with GPs by using a deep NN embedding as the input to the GP kernel [37]. Conditional Neural Processes (CNPs) combine the benefits of NNs and GPs by defining conditional distributions over functions given data, and parameterizing this dependence with an NN [11]. Neural Processes (NPs) generalize deterministic CNPs by incorporating a latent variable, strengthening the connection to approximate Bayesian and latent-variable approaches [12]. Attentive Neural Processes (ANPs) further extend NPs by incorporating attention to overcome underfitting issues [19]. The above models all require significant modifications to the original NN model and training pipeline. Compared to standard NNs, they are also less computationally efficient and more difficult for practitioners to implement. In the approach that shares the most motivation with RIO, Monte Carlo dropout was used to estimate the predictive uncertainty of dropout NNs [10]. However, this method is restricted to dropout NNs and also requires modifications to the NN inference process.
Conclusion
The RIO framework both provides estimates of predictive uncertainty of neural networks, and reduces their point-prediction errors. The approach is based on residual estimation and a composite I/O kernel, which are theoretically well founded and perform well on several real-world problems. By utilizing a sparse approximation of GP, RIO scales well to large datasets. Remarkably, it can be applied directly to any pretrained NNs without modifications to model architecture or training pipeline. Thus, RIO can be used to make NN regression practical and powerful in many real-world applications.
Appendix

S1 Background

This section reviews notation for Neural Networks, Gaussian Processes, and their more efficient approximation, Stochastic Variational Gaussian Processes. The RIO method, introduced in Section 2 of the main paper, uses Gaussian Processes to estimate the uncertainty in neural network predictions and to reduce their point-prediction errors.
S1.1 Neural Networks
Neural Networks (NNs) learn a nonlinear transformation from input to output space based on a number of training examples. Let $D \subseteq \mathbb{R}^{d_{in}} \times \mathbb{R}^{d_{out}}$ denote the training dataset of size $n$, and let $X = \{x_i : (x_i, y_i) \in D,\ x_i = [x_i^1, x_i^2, \ldots, x_i^{d_{in}}] \mid i = 1, 2, \ldots, n\}$ and $Y = \{y_i : (x_i, y_i) \in D,\ y_i = [y_i^1, y_i^2, \ldots, y_i^{d_{out}}] \mid i = 1, 2, \ldots, n\}$ denote the inputs and outputs (i.e., targets). A fully-connected feed-forward neural network with $L$ hidden layers of width $N_l$ (for layer $l = 1, 2, \ldots, L$) performs the following computations. Let $z_l^j$ denote the output value of the $j$th node in the $l$th hidden layer given input $x_i$. Then
$$z_1^j = \phi\Big(\sum_{k=1}^{d_{in}} w_1^{j,k} x_i^k + b_1^j\Big), \qquad z_l^j = \phi\Big(\sum_{k=1}^{N_{l-1}} w_l^{j,k} z_{l-1}^k + b_l^j\Big) \ \text{for } l = 2, \ldots, L,$$
where $w_l^{j,k}$ denotes the weight on the connection from the $k$th node in the previous layer to the $j$th node in the $l$th hidden layer, $b_l^j$ denotes the bias of the $j$th node in the $l$th hidden layer, and $\phi$ is a nonlinear activation function. The output value of the $j$th node in the output layer is then given by
$$\hat{y}_i^j = \sum_{k=1}^{N_L} w_{out}^{j,k} z_L^k + b_{out}^j,$$
where $w_{out}^{j,k}$ denotes the weight on the connection from the $k$th node in the last hidden layer to the $j$th node in the output layer, and $b_{out}^j$ denotes the bias of the $j$th node in the output layer. A gradient-based optimizer is usually used to learn the weights and biases given a pre-defined loss function, e.g., the squared loss $L = \frac{1}{n}\sum_{i=1}^n (y_i - \hat{y}_i)^2$. For a standard NN, the learned parameters are fixed, so the NN output $\hat{y}_i$ is also a fixed point. For a Bayesian NN, a distribution over the parameters is learned, so the NN output is a distribution over $\hat{y}_i$. However, a pre-trained standard NN needs to be augmented, e.g., with a Gaussian Process, to achieve the same result.
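To make this notation concrete, a forward pass of such a network might be sketched in NumPy as follows (layer sizes, initialization scale, and names are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass of a fully-connected ReLU network.

    weights/biases hold one (W, b) pair per hidden layer plus the
    linear output layer; x has shape (d_in,).
    """
    z = x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = relu(W @ z + b)                  # hidden layers: z_l = phi(W_l z_{l-1} + b_l)
    return weights[-1] @ z + biases[-1]      # linear output layer

# Tiny example: d_in = 4, two hidden layers of width 8, d_out = 1.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 1]
weights = [rng.standard_normal((m, k)) * 0.1 for k, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(mlp_forward(rng.standard_normal(4), weights, biases))
```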
S1.2 Gaussian Processes
A Gaussian Process (GP) is a collection of random variables such that any finite collection of these variables follows a joint multivariate Gaussian distribution [31]. Given a training dataset $X = \{x_i \mid i = 1, 2, \ldots, n\}$ and $Y = \{y_i = f(x_i) + \epsilon \mid i = 1, 2, \ldots, n\}$, where $\epsilon$ denotes additive independent identically distributed Gaussian noise, the first step for the GP is to fit the training data assuming $Y \sim \mathcal{N}(0, K(X, X) + \sigma_n^2 I)$, where $\mathcal{N}$ denotes a multivariate Gaussian distribution with mean $0$ and covariance matrix $K(X, X) + \sigma_n^2 I$. Here $K(X, X)$ denotes the kernel-based covariance matrix at all pairs of training points, with entries $k_{i,j} = k(x_i, x_j)$, and $\sigma_n^2$ denotes the noise variance of the observations. One commonly used kernel is the radial basis function (RBF) kernel, defined as
$$k(x_i, x_j) = \sigma_f^2 \exp\Big(-\frac{1}{2 l_f^2}\|x_i - x_j\|^2\Big).$$
The signal variance $\sigma_f^2$, length scale $l_f$, and noise variance $\sigma_n^2$ are trainable hyperparameters. The hyperparameters of the covariance function are optimized during the learning process to maximize the log marginal likelihood $\log p(Y|X)$.

After the fitting phase, the GP is used to predict the distribution of the label $y_*$ given a test point $x_*$. This prediction is given by $y_* \mid X, Y, x_* \sim \mathcal{N}(\bar{y}_*, \mathrm{var}(y_*))$ with
$$\bar{y}_* = k_*^\top (K(X, X) + \sigma_n^2 I)^{-1} y, \qquad \mathrm{var}(y_*) = k(x_*, x_*) - k_*^\top (K(X, X) + \sigma_n^2 I)^{-1} k_*,$$
where $k_*$ denotes the vector of kernel-based covariances $k(x_*, x_i)$ between $x_*$ and all the training points, and $y$ denotes the vector of all training labels. Unlike with an NN, the uncertainty of a GP's prediction is therefore explicitly quantified.
S1.3 Stochastic Variational Gaussian Processes
The main limitation of the standard GP, as defined above, is that it is excessively expensive in both computational and storage cost. For a dataset with $n$ data points, inference in the standard GP has time complexity $O(n^3)$ and space complexity $O(n^2)$. To circumvent this issue, sparse GP methods were developed to approximate the original GP by introducing inducing variables [8,30,32,36]. Among these, SVGP [16,17] further improves the scalability of the approach by applying the Stochastic Variational Inference (SVI) technique, as follows.

Consider the same training dataset and GP as in Section S1.2, and assume a set of inducing variables $Z = \{z_i \mid i = 1, 2, \ldots, m\}$ and $U = \{u_i = f(z_i) + \epsilon \mid i = 1, 2, \ldots, m\}$ (with $f(\cdot)$ and $\epsilon$ unknown). SVGP learns a variational distribution $q(U)$ by maximizing a lower bound of $\log p(Y|X)$, where $\log p(Y|X) = \log \int p(Y|U, X)p(U)\,dU$ and $p(\cdot)$ denotes the probability density under the original GP. Trainable hyperparameters during the learning process include the values of $z_i$ and the hyperparameters of the covariance function of the original GP. Given a test point $x_*$, the predictive distribution is then given by $p(y_*|x_*) = \int p(y_*|U, x_*)q(U)\,dU$, which still follows a Gaussian distribution. One advantage of SVGP is that minibatch training methods [22] can be applied in the case of very large datasets. Suppose the minibatch size is $m'$ with $m' \ll n$; then for each training step/iteration, the computational complexity is $O(m'm^2)$ and the space complexity is $O(m'm)$. For full details about SVGP, see [16]. Since NNs are typically trained on relatively large datasets, SVGP makes it practical to implement uncertainty estimates on NNs.
S2 Empirical Study
S2.1 Experimental Setups
Dataset Description

In total, 12 real-world regression datasets from the UCI machine learning repository [9] are tested. Table 1 summarizes the basic information of these datasets. For all the datasets except MSD, 20% of the whole dataset is used as the test set and 80% as the training set, and this split is randomly generated in each independent run. For MSD, the first 463715 samples are used as the training set and the last 51630 samples as the test set, according to the provider's guideline. All the datasets except MSD are tested for 100 independent runs, and the MSD dataset for 10 independent runs. For each independent run, the same random dataset split is used by all the tested algorithms to ensure fair comparisons.
Parametric Setup for Algorithms
• NN: For the SC dataset, a fully connected feed-forward NN with 2 hidden layers, each with 128 hidden neurons, is used. For the CT dataset, a fully connected feed-forward NN with 2 hidden layers, each with 256 hidden neurons, is used. For all the remaining datasets, a fully connected feed-forward NN with 2 hidden layers, each with 64 hidden neurons, is used. The inputs to the NN are normalized to have mean 0 and standard deviation 1. The activation function is ReLU for all the hidden layers. The maximum number of epochs for training is 1000. 20% of the training data is used as validation data, and the split is random in each independent run. An early stop is triggered if the loss on the validation data has not improved for 10 epochs. The optimizer is RMSprop with learning rate 0.001, and the loss function is mean squared error (MSE).
• RIO, RIO variants and SVGP [16]: SVGP is used as an approximator to the original GP in RIO and all the RIO variants. For RIO, the RIO variants, and SVGP, the number of inducing points is 50 in all the experiments. An RBF kernel is used for both the input and output kernels. The signal variances and length scales of all the kernels, plus the noise variance, are the trainable hyperparameters. The optimizer is L-BFGS-B with default parameters as in the Scipy.optimize documentation (https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html): 'maxcor': 10, 'ftol': 2.220446049250313e-09, 'gtol': 1e-05, 'eps': 1e-08, 'maxfun': 15000, 'maxiter': 15000, 'iprint': -1, 'maxls': 20. The training process runs until the L-BFGS-B optimizer decides to stop.
• NNGP [24]: For the NNGP kernel, the depth is 2, and the activation function is ReLU. $n_g = 101$, $n_v = 151$, and $n_c = 131$. Following the learning process in the original paper, a grid search is performed to find the best values of $\sigma_w^2$ and $\sigma_b^2$. As in the original paper, a grid of 30 points evenly spaced from 0.1 to 5.0 (for $\sigma_w^2$) and 30 points evenly spaced from 0 to 2.0 (for $\sigma_b^2$) is evaluated. The noise variance $\sigma^2$ is fixed at 0.01. The grid search stops when the Cholesky decomposition fails or all 900 points have been evaluated. The best values found during the grid search are used in the experiments. No pre-computed lookup tables are utilized.
• ANP [19]: The parametric setup of ANP follows the recommendations in the original paper. The attention type is multihead, the hidden size is 64, the maximum number of context points is 50, the context ratio is 0.8, and the random kernel hyperparameters option is on. The size of the latent encoder is 64 × 64 × 64 × 64, the number of latents is 64, the size of the deterministic encoder is 64 × 64 × 64 × 64, the size of the decoder is 64 × 64 × 64 × 64 × 2, and the deterministic path option is on. The Adam optimizer with learning rate $10^{-4}$ is used, and the maximum number of training iterations is 2000.
Performance Metrics
• To measure the point-prediction error, the Root Mean Square Error (RMSE) between the method's predictions and the true outcomes on the test dataset is calculated for each independent experimental run. After that, the mean and standard deviation of these RMSEs are used to measure the performance of the algorithms.
• To quantitatively measure the quality of uncertainty estimation, the estimated 95%/90%/68% confidence intervals (CIs) are calculated for all the algorithms except standard NNs. A theoretically optimal way to measure the quality of these CIs would be to calculate the difference between the estimated CIs and the true CIs. This requires that the true distribution of the outcomes is known for each test point. However, for real-world datasets, only one observed outcome is available for each test point. To overcome this limitation, we develop a performance metric to quantitatively compare the quality of estimated CIs among algorithms. First, for each independent experimental run, the percentage of test outcomes within the estimated 95%/90%/68% CIs of each algorithm is calculated. Using the raw residuals between these coverage percentages and 95%/90%/68% is still biased; e.g., a meaninglessly wide estimated 95% CI that covers 100% of the test outcomes would have only a 5% residual. To reduce this bias, the Z-score (under the assumption that the true test outcome follows a Gaussian distribution) is calculated for both the coverage percentages and the targeted 95%/90%/68%. The differences between the two Z-scores are used as a standardized error to approximate the true Z-score error between estimated and target CIs. For coverage percentages of 100%, the Z-score of 99.999% is used. Each independent experimental run yields one standardized error on the test set, and the RMSE over all the independent runs is used as the final metric to compare algorithms (a minimal sketch of this computation is given after this list).
• To compare the computation time of the algorithms, the training time (wall-clock time) of NN, RIO, all the RIO variants, SVGP and ANP is averaged over all the independent runs. For NNGP, the wall-clock time of the grid search is used. In case the grid search stops early due to Cholesky decomposition failures, the computation time of NNGP is estimated as the average running time of all the successful evaluations × 900, the intended number of evaluations. All the algorithms are implemented using TensorFlow and tested in the exact same Python environment. All the experiments are run on a MacBook Pro machine with a 2.5 GHz Intel Core i7 processor and an AMD Radeon R9 M370X 2048 MB graphics card.
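The CI-quality metric defined above can be implemented as follows. This is a minimal sketch under the stated Gaussian assumption, where mu and sigma are hypothetical arrays of a model's predictive means and standard deviations on the test set (our illustration, not the authors' exact code):

```python
import numpy as np
from scipy.stats import norm

def rmse(mu, y):
    """Point-prediction error for one run."""
    return np.sqrt(np.mean((mu - y) ** 2))

def ci_zscore_error(mu, sigma, y, target=0.95):
    """Standardized (Z-score) error between empirical and target CI coverage."""
    z_t = norm.ppf((1.0 + target) / 2.0)        # e.g. ~1.96 for a 95% CI
    coverage = np.mean(np.abs(y - mu) <= z_t * sigma)
    coverage = min(coverage, 0.99999)           # 100% coverage uses the Z of 99.999%
    return norm.ppf((1.0 + coverage) / 2.0) - z_t

def final_ci_metric(per_run_errors):
    """RMSE of the per-run standardized errors over all independent runs."""
    return np.sqrt(np.mean(np.square(per_run_errors)))
```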
S2.2 Supplementary Figures

Figures S1 and S2 show the distribution of the percentages of testing outcomes that are within the estimated 95%/90%/68% CIs over all the independent runs, for all the datasets and algorithms. In Figures S1 and S2, the box extends from the 25th to the 75th percentile values of the data (each data point represents an independent experimental run), with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers, indicating the outliers.
S2.3 Correlation between Noise Variance and Prediction RMSE

Table S3 shows the mean prediction RMSE and noise variance of RIO, Y+IO and SVGP over all the independent runs for all the datasets. A clear positive correlation between prediction RMSE and noise variance can be observed. In most cases, RIO has a lower prediction RMSE and a lower noise variance than Y+IO, which in turn has lower values in both metrics than the original SVGP. These results are in accordance with the theoretical analysis in Section 2.3 of the main paper, and demonstrate the effectiveness of both residual estimation and the I/O kernel.
S2.4 Additional Discussions
One interesting observation is that on the CT dataset SVGP shows very poor prediction RMSE and 95% CI RMSE, but achieves the best 90% and 68% CI RMSE. After investigation, this turned out to happen only by chance, and is not due to accurate CI estimations by SVGP. Since SVGP is not able to extract any useful information from the high-dimensional input space, it treats all the outcomes simply as noise. As a result, SVGP shows a very large RMSE compared to other algorithms, and the mean of its predicted outcome distribution is always around 0. Since SVGP treats everything as noise, the estimated noise variance is very high, and the estimated 95% CI based on this noise variance is overly wide and covers all the test outcomes in most cases. This leads to a high Z-score RMSE, as expected. However, when the estimated 90% CI is evaluated, the big error in the mean estimation and the big error in the variance estimation cancel most of each other out by chance, i.e., the estimated 90% CI is mistakenly shifted by the erroneous mean, and then the overly wide variance fortuitously covers slightly more than 90% of the test outcomes. A similar thing happens for the estimated 68% CI, but because of the overly high estimated variance, the coverage percentages are now below 68%, while still yielding decent Z-score errors. This phenomenon shows that the performance metric we use may produce outliers if the prediction errors are too high. For comparisons among algorithms that give reasonable prediction errors, which is the case in most of the experiments, the metric works as intended.

Figure S6: Quality of estimated CIs. These figures show the distribution of the percentages that testing outcomes are within the estimated 95%/90%/68% CIs over all the independent runs.
Proof. The first condition follows from Lemma 2.1, Lemma 2.3, and the fact that
$$\mathrm{tr}[\alpha^{-1}\Lambda^{-1} + (\sigma_n^2 + \sigma_g^2 - \delta)^{-1}\Phi\Phi^\top]^{-1} < \mathrm{tr}(\Lambda^{-1} + (\sigma_n^2 + \sigma_g^2)^{-1}\Phi\Phi^\top)^{-1}.$$
The second condition follows from Lemma 2.2, Lemma 2.3, and the fact that
$$\mathrm{tr}[\alpha^{-1}\Lambda^{-1} + (\sigma_n^2 + \sigma_g^2 - \delta)^{-1}\Phi\Phi^\top]^{-1} < \alpha E[f^2(x)].$$
S3.2 Proof of Theorem 2.5.
Theorem 2.5. $E^g_{I/O}(\mathcal{X}) < E^g_I(\mathcal{X})$ and $E^g_{I/O}(\mathcal{X}) < E^g_O(\mathcal{X})$.
Proof. The fact that $f_{in}$ and $f_{out}$ are non-trivial implies $E[f_{in}^2(x)] > 0$ and $E[f_{out}^2(x)] > 0$. Following the definitions at the beginning of Section S3.1, let $\Lambda_{in}, \Phi_{in}$ and $\Lambda_{out}, \Phi_{out}$ be the diagonal matrices of eigenvalues and the design matrices for the eigenfunction expansions of $k_{in}$ and $k_{out}$, respectively.

With only the input kernel, the fact that $h_{NN}$ is indistinguishable from noise from the perspective of the GP implies $f_{out}$ is also indistinguishable from noise, since it is a function of $h_{NN}(x') - h_{NN}(x)$. Thus, the GP optimizer estimates the noise variance as $\sigma_n^2 + E[f_{out}^2(x)]$, so the generalization error is
$$E^g_I(\mathcal{X}) = \mathrm{tr}(\Lambda_{in}^{-1} + (\sigma_n^2 + E[f_{out}^2(x)])^{-1}\Phi_{in}\Phi_{in}^\top)^{-1} + E[f_{out}^2(x)].$$
Conversely, the fact that $f_{out}$ is indistinguishable from noise from the perspective of the input kernel implies that $f_{in}$ is indistinguishable from noise from the perspective of the output kernel. So, when only using the output kernel, the GP optimizer estimates the noise variance as $\sigma_n^2 + E[f_{in}^2(x)]$. So,
$$E^g_O(\mathcal{X}) = \mathrm{tr}(\Lambda_{out}^{-1} + (\sigma_n^2 + E[f_{in}^2(x)])^{-1}\Phi_{out}\Phi_{out}^\top)^{-1} + E[f_{in}^2(x)].$$
Now, when considering the I/O kernel, note that $y_i = f_{in}(x_i) + f_{out}(x_i) + \epsilon_i \implies y_i = f(x_i) + \epsilon_i$, where $f \sim GP(0, k_{in} + k_{out})$. So, with the I/O kernel, the GP optimizer correctly estimates all hyperparameters, and the resulting generalization error is
$$E^g_{I/O}(\mathcal{X}) = \mathrm{tr}(\Lambda_{in}^{-1} + (\sigma_n^2 + E[f_{out}^2(x)])^{-1}\Phi_{in}\Phi_{in}^\top)^{-1} + \mathrm{tr}(\Lambda_{out}^{-1} + (\sigma_n^2 + E[f_{in}^2(x)])^{-1}\Phi_{out}\Phi_{out}^\top)^{-1}.$$
Then,
$$\mathrm{tr}(\Lambda_{out}^{-1} + (\sigma_n^2 + E[f_{in}^2(x)])^{-1}\Phi_{out}\Phi_{out}^\top)^{-1} < E[f_{out}^2(x)] \implies E^g_{I/O}(\mathcal{X}) < E^g_I(\mathcal{X}),$$
and
$$\mathrm{tr}(\Lambda_{in}^{-1} + (\sigma_n^2 + E[f_{out}^2(x)])^{-1}\Phi_{in}\Phi_{in}^\top)^{-1} < E[f_{in}^2(x)] \implies E^g_{I/O}(\mathcal{X}) < E^g_O(\mathcal{X}).$$

Figure 2: Capturing uncertainty of more and less accurate NNs. These figures illustrate the behavior of RIO in two cases: (left)

Proposition 1. The variance of NN residuals is positively correlated with the uncertainty of $r_{GP}$.

Figure 4: Quality of estimated 68% CIs. These figures show the distribution of the percentages of testing outcomes that are within the estimated 68% CIs over 100 independent runs. As usual, the box extends from the 25th to the 75th percentile values of the data (each data point represents an independent experimental run), with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers, indicating the outliers.
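For intuition about the trace expressions above, the following minimal numpy sketch evaluates the generalization-error term $\mathrm{tr}(\Lambda^{-1} + \sigma^{-2}\Phi\Phi^\top)^{-1}$. The eigenvalues, design matrix, and variances below are synthetic placeholders chosen only for illustration; nothing is taken from the actual experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 20, 50                             # eigenfunctions kept, training points
lam = 1.0 / (1.0 + np.arange(k)) ** 2     # synthetic decaying kernel eigenvalues
Phi = rng.standard_normal((k, n))         # synthetic design matrix Phi[j, i] = phi_j(x_i)

def gp_gen_error(lam, Phi, noise_var):
    # tr(Lambda^{-1} + noise_var^{-1} Phi Phi^T)^{-1}: the GP generalization error term
    A = np.diag(1.0 / lam) + (Phi @ Phi.T) / noise_var
    return np.trace(np.linalg.inv(A))

sigma_n2, E_fout2 = 0.1, 0.5
# Input kernel only: the unexplained f_out acts as extra noise and adds an
# irreducible E[f_out^2(x)] term, as in the expression for E^g_I above.
E_I = gp_gen_error(lam, Phi, sigma_n2 + E_fout2) + E_fout2
print(E_I)
```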
Table 1: Summary of experimental results

| Dataset (n × d) | Method | Prediction RMSE | 95%CI RMSE | 90%CI RMSE | 68%CI RMSE | Time (sec) |
|---|---|---|---|---|---|---|
| yacht (252 × 6) | NN | 3.76±1.86†‡ | - | - | - | 2.41 |
| | RIO | 3.06±1.37 | 0.658 | 0.489 | 0.413 | 3.00 |
| | R+I | 3.67±1.79†‡ | 0.673‡ | 0.466‡ | 0.325†‡ | 2.73 |
| | R+O | 3.15±1.37†‡ | 0.667‡ | 0.512‡ | 0.376†‡ | 3.37 |
| | Y+O | 12.41±6.78†‡ | 1.228†‡ | 1.139†‡ | 0.883†‡ | 5.26 |
| | Y+IO | 11.35±6.86†‡ | 1.298†‡ | 1.112†‡ | 0.946†‡ | 5.41 |
| | SVGP | 14.75±3.74†‡ | 0.956†‡ | 0.757†‡ | 0.444 | 4.74 |
| | NNGP | 10.55±1.54†‡ | 1.947†‡ | 1.634†‡ | 0.989†‡ | 13562 |
| | ANP | 6.89±3.22†‡ | 1.117† | 0.825† | 0.408 | 125.7 |
| ENB/h (768 × 8) | NN | 0.94±0.37†‡ | - | - | - | 6.48 |
| | RIO | 0.81±0.36 | 0.359 | 0.255 | 0.118 | 3.37 |
| | R+I | 0.84±0.37†‡ | 0.370 | 0.264 | 0.134 | 3.06 |
| | R+O | 0.83±0.36†‡ | 0.333†‡ | 0.234†‡ | 0.135† | 3.72 |
| | Y+O | 1.67±1.58†‡ | 1.926†‡ | 1.847†‡ | 1.470†‡ | 6.41 |
| | Y+IO | 1.05±0.55†‡ | 1.340†‡ | 1.273†‡ | 1.003†‡ | 6.91 |
| | SVGP | 3.04±0.67†‡ | 2.233†‡ | 1.960†‡ | 0.616†‡ | 5.51 |
| | NNGP | 4.97±0.29†‡ | 1.929†‡ | 1.618†‡ | 0.978†‡ | 14175 |
| | ANP | 4.32±2.47†‡ | 0.625† | 0.260 | 0.157‡ | 566.8 |
| ENB/c (768 × 8) | NN | 1.89±0.40†‡ | - | - | - | 6.25 |
| | RIO | 1.76±0.39 | 0.378 | 0.248 | 0.146 | 3.41 |
| | R+I | 1.80±0.39†‡ | 0.365 | 0.237 | 0.146 | 3.09 |
| | R+O | 1.79±0.39†‡ | 0.377 | 0.245 | 0.148 | 3.75 |
| | Y+O | 2.38±0.92†‡ | 1.558†‡ | 1.515†‡ | 0.841†‡ | 6.21 |
| | Y+IO | 2.06±0.71†‡ | 1.227†‡ | 1.194†‡ | 0.752†‡ | 6.51 |
| | SVGP | 3.64±0.70†‡ | 0.387‡ | 0.442†‡ | 0.724†‡ | 5.49 |
| | NNGP | 4.91±0.32†‡ | 1.899†‡ | 1.594†‡ | 0.965†‡ | 13406 |
| | ANP | 4.33±1.93†‡ | 0.567‡ | 0.262 | 0.148 | 565.7 |
| airfoil (1505 × 5) | NN | 4.82±0.47†‡ | - | - | - | 5.05 |
| | RIO | 4.01±0.27 | 0.301 | 0.204 | 0.106 | 3.83 |
| | R+I | 4.14±0.30†‡ | 0.327† | 0.240†‡ | 0.129† | 3.52 |
| | R+O | 4.21±0.29†‡ | 0.341†‡ | 0.251†‡ | 0.134†‡ | 4.15 |
| | Y+O | 5.34±2.04†‡ | 1.535†‡ | 1.165†‡ | 0.743†‡ | 6.92 |
| | Y+IO | 4.75±1.19†‡ | 1.106†‡ | 0.787†‡ | 0.679†‡ | 7.14 |
| | SVGP | 5.89±1.04†‡ | 1.183†‡ | 0.795†‡ | 0.420†‡ | 6.22 |
| | NNGP | 6.54±0.23†‡ | 1.930†‡ | 1.621†‡ | 0.979†‡ | 17024 |
| | ANP | 24.8±33.4†‡ | 1.363†‡ | 1.444†‡ | 1.105†‡ | 1657 |
| CCS (1030 × 8) | NN | 6.28±0.53†‡ | - | - | - | 6.71 |
| | RIO | 6.19±0.50 | 0.556 | 0.436 | 0.233 | 3.55 |
| | R+I | 6.20±0.51†‡ | 0.574‡ | 0.455†‡ | 0.242 | 3.23 |
| | R+O | 6.23±0.52†‡ | 0.579†‡ | 0.455‡ | 0.243 | 3.88 |
| | Y+O | 7.81±3.01†‡ | 0.316†‡ | 0.252†‡ | 0.198† | 6.05 |
| | Y+IO | 7.28±2.41†‡ | 0.355†‡ | 0.291†‡ | 0.202‡ | 6.26 |
| | SVGP | 11.87±2.06†‡ | 0.853†‡ | 0.723†‡ | 0.431†‡ | 5.59 |
| wine/r (1599 × 11) | NN | 0.692±0.04†‡ | - | - | - | 3.57 |
| | RIO | 0.678±0.04 | 0.352 | 0.260 | 0.131 | 3.85 |
| | R+I | 0.690±0.04†‡ | 0.365 | 0.267 | 0.136 | 3.41 |
| | R+O | 0.679±0.04 | 0.337 | 0.246 | 0.126 | 4.01 |
| | Y+O | 0.691±0.06† | 1.023† | 0.976† | 0.538†‡ | 6.72 |
| | Y+IO | 0.676±0.04 | 0.430‡ | 0.299‡ | 0.243† | 5.92 |
| | SVGP | 0.883±0.17†‡ | 1.502†‡ | 1.342†‡ | 0.985†‡ | 6.37 |
| wine/w (4898 × 11) | NN | 0.723±0.02†‡ | - | - | - | 8.27 |
| | RIO | 0.710±0.02 | 0.224 | 0.142 | 0.065 | 5.92 |
| | R+I | 0.722±0.02†‡ | 0.212 | 0.142 | 0.066 | 4.82 |
| | R+O | 0.711±0.02†‡ | 0.205†‡ | 0.130‡ | 0.062‡ | 5.66 |
| | Y+O | 0.726±0.04†‡ | 0.587†‡ | 0.395† | 0.309†‡ | 10.20 |
| | Y+IO | 0.717±0.02†‡ | 0.297‡ | 0.290† | 0.227† | 10.69 |
| | SVGP | 0.873±0.12†‡ | 1.041† | 1.044†‡ | 0.700†‡ | 11.05 |
| CCPP (9568 × 4) | NN | 5.04±0.62†‡ | - | - | - | 12.6 |
| | RIO | 4.25±0.13 | 0.141 | 0.120 | 0.083 | 9.21 |
| | R+I | 4.26±0.13†‡ | 0.160‡ | 0.138‡ | 0.096‡ | 8.69 |
| | R+O | 4.37±0.16†‡ | 0.133 | 0.113 | 0.079 | 9.38 |
| | Y+O | 8.86±12.6†‡ | 1.446†‡ | 1.485†‡ | 1.421†‡ | 22.53 |
| | Y+IO | 7.60±4.84†‡ | 1.078†‡ | 1.121†‡ | 1.004†‡ | 22.04 |
| | SVGP | 6.10±2.59†‡ | 1.386†‡ | 1.441†‡ | 1.166†‡ | 20.37 |
| protein (45730 × 9) | NN | 4.21±0.08†‡ | - | - | - | 135.6 |
| | RIO | 4.14±0.06 | 0.182 | 0.100 | 0.073 | 34.9 |
| | R+I | 4.16±0.06†‡ | 0.156†‡ | 0.081†‡ | 0.060†‡ | 32.4 |
| | R+O | 4.16±0.07†‡ | 0.186 | 0.104‡ | 0.070 | 31.1 |
| | Y+O | 4.19±0.12†‡ | 0.272†‡ | 0.175†‡ | 0.050†‡ | 38.8 |
| | Y+IO | 4.12±0.06†‡ | 0.238†‡ | 0.146†‡ | 0.050†‡ | 45.8 |
| | SVGP | 5.22±0.07†‡ | 0.184 | 0.142†‡ | 0.073‡ | 49.1 |
| SC (21263 × 80) | NN | 12.27±0.73†‡ | - | - | - | 146.3 |
| | RIO | 11.47±0.40 | 0.284 | 0.131 | 0.166 | 29.8 |
| | R+I | 11.49±0.42 | 0.281 | 0.129 | 0.167 | 29.2 |
| | R+O | 11.70±0.42†‡ | 0.321† | 0.172† | 0.157 | 22.7 |
| | Y+O | 12.20±1.39†‡ | 0.396†‡ | 0.241†‡ | 0.114†‡ | 27.7 |
| | Y+IO | 11.72±0.53†‡ | 0.320†‡ | 0.156†‡ | 0.132†‡ | 38.5 |
| | SVGP | 18.29±1.51†‡ | 0.614†‡ | 0.473†‡ | 0.210†‡ | 41.9 |
| CT (53500 × 384) | NN | 1.20±0.38†‡ | - | - | - | 632.4 |
| | RIO | 1.01±0.29 | 0.324 | 0.327 | 0.301 | 121.4 |
| | R+I | 1.20±0.38†‡ | 0.245†‡ | 0.235†‡ | 0.195†‡ | 73.1 |
| | R+O | 0.99±0.29 | 0.259† | 0.238† | 0.210†‡ | 37.7 |
| | Y+O | 1.99±1.07†‡ | 1.409†‡ | 1.497†‡ | 1.426†‡ | 110.7 |
| | Y+IO | 1.89±0.65†‡ | 1.340†‡ | 1.389†‡ | 1.272†‡ | 292.1 |
| | SVGP | 52.1±0.19†‡ | 2.338† | 0.172† | 0.156†‡ | 213.6 |
| MSD (515345 × 90) | NN | 12.53±0.82†‡ | - | - | - | 1040 |
| | RIO | 10.02±0.23 | 0.091 | 0.081 | 0.253 | 988.4 |
| | R+I | 12.53±0.82†‡ | 0.080 | 0.072 | 0.137†‡ | 1265 |
| | R+O | 10.09±0.25†‡ | 0.137 | 0.114 | 0.254 | 1364 |
| | Y+O | 18.96±11.5†‡ | 0.789†‡ | 0.850†‡ | 0.846†‡ | 1564 |
| | Y+IO | 20.72±12.6†‡ | 0.609†‡ | 0.669†‡ | 0.701†‡ | 2441 |
| | SVGP | 14.40±1.78†‡ | 0.171†‡ | 0.237†‡ | 0.414†‡ | 6942 |
Figure 3: Comparison among NN, RIO, and SVGP. The horizontal axis denotes the prediction RMSE of the NN, and the vertical axis the prediction RMSE of RIO (blue dots) and SVGP (yellow dots). Each dot represents an independent experimental run. Since the scales are different, the solid blue line indicates where NN and RIO/SVGP have same prediction RMSE. Thus, a dot below the line means that the method (RIO or SVGP) performs better than the NN, and vice versa. Results of SVGP on the CT dataset are not plotted because its prediction RMSE exceeded the visible scale (i.e. they were > 50). RIO consistently reduces the error of the NN, while SVGP falls short of both.
[Twelve scatter panels, one per dataset: yacht, ENB/h, ENB/c, airfoil, CCS, wine/r, wine/w, CCPP, protein, SC, CT, MSD; axes: NN RMSE vs. RIO/SVGP RMSE.]
[Figure 4 panels: boxplots of the percentage of test points within the estimated 68% CI for each algorithm (RIO, R+I, R+O, Y+O, Y+IO, SVGP, NNGP, ANP) on yacht, ENB/h, ENB/c, and airfoil.]
Table 1 also shows the computation time of each algorithm, averaged over all the independent runs. All algorithms are implemented using TensorFlow under the same running environment (see Appendix for details about the implementations). The RIO variants scale well to increasing dataset sizes and feature dimensionalities. In contrast, ANP's computation time increases significantly with the scale of the dataset. NNGP always needs very expensive computational budgets due to its costly grid search of hyperparameters. Thus, RIO scales better than the other approaches to large datasets.

Ablation Study. The RIO variants with residual estimation generally perform better than their counterparts in both point-prediction error and uncertainty estimation. This result confirms the effectiveness of residual estimation, as suggested by the theoretical analysis in Section 2.3. Another important result is that Y+IO outperforms both Y+I and Y+O in most cases across all performance metrics, and RIO generally provides more robust performance than R+I and R+O in all aspects. This result, in turn, confirms that the I/O kernel is robust, as suggested by the analysis in Section 2.4. In sum, both residual estimation and the I/O kernel contribute substantially to the good performance of RIO.
Table S2: Summary of testing datasets

| abbreviation | full name in UCI ML repository | dataset size | dimension | note |
|---|---|---|---|---|
| yacht | Yacht Hydrodynamics Data Set | 252 | 6 | - |
| ENB/h | Energy efficiency | 768 | 8 | Heating Load as target |
| ENB/c | Energy efficiency | 768 | 8 | Cooling Load as target |
| airfoil | Airfoil Self-Noise | 1505 | 5 | - |
| CCS | Concrete Compressive Strength | 1030 | 8 | - |
| wine/r | Wine Quality | 1599 | 11 | only use winequality-red data |
| wine/w | Wine Quality | 4898 | 11 | only use winequality-white data |
| CCPP | Combined Cycle Power Plant | 9568 | 4 | - |
| CASP | Physicochemical Properties of Protein Tertiary Structure | 54730 | 9 | - |
| SC | Superconductivty Data | 21263 | 80 | - |
| CT | Relative location of CT slices on axial axis | 53500 | 384 | - |
| MSD | YearPredictionMSD | 515345 | 90 | train: first 463715, test: last 51630 |
approximation approaches lead to a computational complexity of $O(nm^2)$ and space complexity of $O(nm)$, where $m$ is the number of inducing variables. Following this line of work, the Stochastic Variational Gaussian Process (SVGP)
Table S3: Summary of Prediction RMSE and Noise Variance

| Dataset | Method | Prediction RMSE | Noise Variance |
|---|---|---|---|
| yacht | SVGP | 14.75 | 19.25 |
| | Y+IO | 11.35 | 18.30 |
| | RIO | 3.06 | 4.65 |
| ENB/h | SVGP | 3.04 | 25.56 |
| | Y+IO | 1.05 | 3.49 |
| | RIO | 0.81 | 0.57 |
| ENB/c | SVGP | 3.64 | 27.45 |
| | Y+IO | 2.06 | 11.04 |
| | RIO | 1.76 | 2.45 |
| airfoil | SVGP | 5.89 | 60.11 |
| | Y+IO | 4.75 | 49.69 |
| | RIO | 4.01 | 11.43 |
| CCS | SVGP | 11.87 | 45.59 |
| | Y+IO | 7.28 | 39.23 |
| | RIO | 6.19 | 19.07 |
| wine/r | SVGP | 0.883 | 2.12 |
| | Y+IO | 0.676 | 0.46 |
| | RIO | 0.678 | 0.28 |
| wine/w | SVGP | 0.873 | 2.02 |
| | Y+IO | 0.717 | 0.53 |
| | RIO | 0.710 | 0.38 |
| CCPP | SVGP | 6.10 | 141.95 |
| | Y+IO | 7.60 | 116.28 |
| | RIO | 4.25 | 14.91 |
| protein | SVGP | 5.22 | 27.17 |
| | Y+IO | 4.12 | 14.40 |
| | RIO | 4.14 | 15.67 |
| SC | SVGP | 18.29 | 152.57 |
| | Y+IO | 11.72 | 82.88 |
| | RIO | 11.47 | 88.37 |
| CT | SVGP | 52.1 | 2588.32 |
| | Y+IO | 1.89 | 18.00 |
| | RIO | 1.01 | 0.95 |
| MSD | SVGP | 14.40 | 247.89 |
| | Y+IO | 20.72 | 409.70 |
| | RIO | 10.02 | 97.56 |
Figure S5: Quality of estimated CIs. These figures show the distribution of the percentages that testing outcomes are within the estimated 95%/90%/68% CIs over all the independent runs.
[Figures S5 and S6: boxplot panels per dataset (yacht, ENB/h, ENB/c, airfoil, CCS, wine/r, wine/w, CCPP, protein, SC, CT, MSD) and per algorithm (RIO, R+I, R+O, Y+O, Y+IO, SVGP, NNGP, ANP), at the 95%, 90%, and 68% CI levels.]
Proof. Since, from the perspective of the GP, $g$ is indistinguishable from noise, the optimal hyperparameter setting for the GP predictor is mean zero, kernel $k(\cdot, \cdot)$ and noise variance $\sigma_n^2 + \sigma_g^2$. The expected generalization error of the GP predictor with posterior mean $h_{GP}$ is [...], where the last step makes use of the fact that the expectation of the product of independent zero-mean random variables is zero. Plugging in a well-known result for the generalization error of Gaussian processes [29, 31, 34] yields the intended result.

Proof. Making use of the fact that $E[r_f] = 0$, [...].

Proof. Here, the goal of the GP is to fit the residuals. Since $r_f \sim GP(0, \alpha k(\cdot, \cdot))$ and $r_g$ is indistinguishable from noise, the optimal hyperparameter setting for the GP predictor is mean zero, kernel $\alpha k(\cdot, \cdot)$, and noise variance $\sigma_n^2 + E[r_g^2(x)] = \sigma_n^2 + \sigma_g^2 - \delta$. Denote the posterior mean of the GP residual predictor by $r_{GP}$. Then, the final predictor for $y$ is $h_{GP+NN}(x) = h_{NN}(x) + r_{GP}(x)$. The expected generalization error of $h_{GP+NN}$ is then
$$E^g_{GP+NN}(\mathcal{X}) = \mathrm{tr}[\alpha^{-1}\Lambda^{-1} + (\sigma_n^2 + \sigma_g^2 - \delta)^{-1}\Phi\Phi^\top]^{-1} + \sigma_g^2 - \delta.$$

Theorem 2.4. $E^g_{GP+NN}(\mathcal{X}) < E^g_{GP}(\mathcal{X}) - \delta$ and $E^g_{GP+NN}(\mathcal{X}) < E^g_{NN}(\mathcal{X})$.
[1] O. Anjos, C. Iglesias, F. Peres, J. Martínez, A. Garcia, and J. Taboada. Neural networks applied to discriminate botanical origin of honeys. Food Chemistry, 175:128-136, 05 2015.
[2] P. Baldi, P. Sadowski, and D. Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature communications, 5:4308, 07 2014.
[3] D. Barber and C. Bishop. Ensemble learning in bayesian neural networks. In Generalization in Neural Networks and Machine Learning, pages 215-237. Springer Verlag, January 1998.
[4] S. Bergmann, S. Stelzer, and S. Strassburger. On the use of artificial neural networks in simulation-based manufacturing control. Journal of Simulation, 8(1):76-90, Feb 2014.
[5] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pages 1613-1622. JMLR.org, 2015.
[6] R. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190-1208, 1995.
[7] B. Csaji. Approximation with artificial neural networks. M.S. Thesis, Dept. Science, Eotvos Lorand Univ., Budapest, Hungary, 2001.
[8] L. Csató and M. Opper. Sparse on-line gaussian processes. Neural computation, 14:641-68, 04 2002.
[9] D. Dua and C. Graff. UCI machine learning repository, 2017.
[10] Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pages 1050-1059. JMLR.org, 2016.
[11] M. Garnelo, D. Rosenbaum, C. Maddison, T. Ramalho, D. Saxton, M. Shanahan, Y. W. Teh, D. Rezende, and S. M. A. Eslami. Conditional neural processes. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1704-1713. PMLR, 10-15 Jul 2018.
[12] M. Garnelo, J. Schwarz, D. Rosenbaum, F. Viola, D. J. Rezende, S. M. A. Eslami, and Y. W. Teh. Neural processes. CoRR, abs/1807.01622, 2018.
[13] Z. Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521:452 EP -, 05 2015.
[14] I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
[15] A. Graves. Practical variational inference for neural networks. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS'11, pages 2348-2356, USA, 2011. Curran Associates Inc.
[16] J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian processes for big data. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI'13, pages 282-290, Arlington, Virginia, United States, 2013. AUAI Press.
[17] J. Hensman, A. Matthews, and Z. Ghahramani. Scalable variational gaussian process classification. In G. Lebanon and S. V. N. Vishwanathan, editors, Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38 of Proceedings of Machine Learning Research, pages 351-360, San Diego, California, USA, 09-12 May 2015. PMLR.
[18] G. E. Hinton and D. van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, COLT '93, pages 5-13, New York, NY, USA, 1993. ACM.
[19] H. Kim, A. Mnih, J. Schwarz, M. Garnelo, S. M. A. Eslami, D. Rosenbaum, O. Vinyals, and Y. W. Teh. Attentive neural processes. CoRR, abs/1901.05761, 2019.
[20] M. Krzywinski and N. Altman. Importance of being uncertain. Nature Methods, 10:809 EP -, 08 2013.
[21] B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6402-6413. Curran Associates, Inc., 2017.
[22] Q. V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Y. Ng. On optimization methods for deep learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11, pages 265-272, USA, 2011. Omnipress.
[23] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521:436 EP -, 05 2015.
[24] J. Lee, Y. Bahri, R. Novak, S. Schoenholz, J. Pennington, and J. Sohl-Dickstein. Deep neural networks as gaussian processes. International Conference on Learning Representations, 2018.
[25] D. J. C. MacKay. A practical bayesian framework for backpropagation networks. Neural Comput., 4(3):448-472, May 1992.
[26] J. Močkus. On bayesian methods for seeking the extremum. In G. I. Marchuk, editor, Optimization Techniques IFIP Technical Conference Novosibirsk, July 1-7, 1974, pages 400-404, Berlin, Heidelberg, 1975. Springer Berlin Heidelberg.
[27] R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, Berlin, Heidelberg, 1996.
[28] S. T. A. Niaki and S. Hoseinzade. Forecasting s&p 500 index using artificial neural networks and design of experiments. Journal of Industrial Engineering International, 9(1):1, Feb 2013.
[29] M. Opper and F. Vivarelli. General bounds on bayes errors for regression with gaussian processes. In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pages 302-308, Cambridge, MA, USA, 1999. MIT Press.
[30] J. Quiñonero Candela and C. E. Rasmussen. A unifying view of sparse approximate gaussian process regression. J. Mach. Learn. Res., 6:1939-1959, Dec. 2005.
[31] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, Jan. 2006.
[32] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse gaussian process regression. In Workshop on AI and Statistics 9, 2003.
[33] N. Shahid, T. Rappon, and W. Berta. Applications of artificial neural networks in health care organizational decision-making: A scoping review. PLOS ONE, 14(2):1-22, 02 2019.
[34] P. Sollich. Learning curves for gaussian processes. In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pages 344-350, Cambridge, MA, USA, 1999. MIT Press.
[35] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
[36] M. K. Titsias. Variational learning of inducing variables in sparse gaussian processes. In Artificial Intelligence and Statistics 12, pages 567-574, 2009.
[37] A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing. Deep kernel learning. In A. Gretton and C. C. Robert, editors, Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pages 370-378, Cadiz, Spain, 09-11 May 2016. PMLR.
[38] C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal. Algorithm 778: L-bfgs-b: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans. Math. Softw., 23(4):550-560, Dec. 1997.
Theorem 2.4 follows from a series of three lemmas. First, following a standard approach [31], consider the eigenfunction expansion $k(x, x') = \sum_j \lambda_j \phi_j(x)\phi_j(x')$ and $\int k(x, x')\phi_i(x)p(x)\,dx = \lambda_i \phi_i(x')$.
Let $\Lambda$ be the diagonal matrix of the eigenvalues $\lambda_j$, and $\Phi$ be the design matrix, i.e., $\Phi_{ji} = \phi_j(x_i)$. |
203,734,686 | PURE AND SPURIOUS CRITICAL POINTS: A GEOMETRIC STUDY OF LINEAR NETWORKS | The critical locus of the loss function of a neural network is determined by the geometry of the functional space and by the parameterization of this space by the network's weights. We introduce a natural distinction between pure critical points, which only depend on the functional space, and spurious critical points, which arise from the parameterization. We apply this perspective to revisit and extend the literature on the loss function of linear neural networks. For this type of network, the functional space is either the set of all linear maps from input to output space, or a determinantal variety, i.e., a set of linear maps with bounded rank. We use geometric properties of determinantal varieties to derive new results on the landscape of linear networks with different loss functions and different parameterizations. * Equal contribution.In particular, we prove that non-global local minima are necessarily pure critical points for convex losses, which means that many properties of the loss landscape can be read from the functional space. On the other hand, we emphasize that even for linear networks it is possible to find many smooth convex losses with non-global local minima. This happens when the functional space is a determinantal variety, i.e., a (non-smooth and non-convex) family of matrices with bounded rank. In this setting, the absence of non-global minima actually holds in the particular case of the quadratic loss, because of very special geometric properties of determinantal varieties that we discuss. showed that linear networks with "no bottlenecks" have no bad local minima for arbitrary smooth loss functions. Lu & Kawaguchi (2017) and Zhang (2019) argued that "depth does not create local minima", meaning that the absence of local minima of deep linear networks is implied by the same property of shallow linear networks. Our study of pure and spurious critical points can be used as a framework for explaining all these results in a unified and intuitive way. Our analysis is also closely related to objects of study in applied algebraic geometry, particularly determinantal varieties and ED discriminants(Draisma et al., 2013;Ottaviani et al., 2013).Main contributions.• We introduce a natural distinction between "pure" and "spurious" critical points for the loss function of networks. While most of the paper focuses on linear networks, this viewpoint applies to more general settings as well (see also our discussion in Appendix A.3).• We study the pure and critical locus for linear networks and arbitrary loss functions. We show that non-global local minima are always pure for convex losses, unifying many known properties on the landscape of linear networks. We also prove other new results, including a precise description of the number of topologically connected components of the set of global minima. | [] | PURE AND SPURIOUS CRITICAL POINTS: A GEOMETRIC STUDY OF LINEAR NETWORKS
Matthew Trager
New York University
Kathlén Kohn
New York University
KTH Stockholm
Joan Bruna
New York University
PURE AND SPURIOUS CRITICAL POINTS: A GEOMETRIC STUDY OF LINEAR NETWORKS
The critical locus of the loss function of a neural network is determined by the geometry of the functional space and by the parameterization of this space by the network's weights. We introduce a natural distinction between pure critical points, which only depend on the functional space, and spurious critical points, which arise from the parameterization. We apply this perspective to revisit and extend the literature on the loss function of linear neural networks. For this type of network, the functional space is either the set of all linear maps from input to output space, or a determinantal variety, i.e., a set of linear maps with bounded rank. We use geometric properties of determinantal varieties to derive new results on the landscape of linear networks with different loss functions and different parameterizations.

* Equal contribution.

In particular, we prove that non-global local minima are necessarily pure critical points for convex losses, which means that many properties of the loss landscape can be read from the functional space. On the other hand, we emphasize that even for linear networks it is possible to find many smooth convex losses with non-global local minima. This happens when the functional space is a determinantal variety, i.e., a (non-smooth and non-convex) family of matrices with bounded rank. In this setting, the absence of non-global minima actually holds in the particular case of the quadratic loss, because of very special geometric properties of determinantal varieties that we discuss. Laurent & von Brecht (2017) showed that linear networks with "no bottlenecks" have no bad local minima for arbitrary smooth loss functions. Lu & Kawaguchi (2017) and Zhang (2019) argued that "depth does not create local minima", meaning that the absence of local minima of deep linear networks is implied by the same property of shallow linear networks. Our study of pure and spurious critical points can be used as a framework for explaining all these results in a unified and intuitive way. Our analysis is also closely related to objects of study in applied algebraic geometry, particularly determinantal varieties and ED discriminants (Draisma et al., 2013; Ottaviani et al., 2013).

Main contributions.
• We introduce a natural distinction between "pure" and "spurious" critical points for the loss function of networks. While most of the paper focuses on linear networks, this viewpoint applies to more general settings as well (see also our discussion in Appendix A.3).
• We study the pure and spurious critical locus for linear networks and arbitrary loss functions. We show that non-global local minima are always pure for convex losses, unifying many known properties on the landscape of linear networks. We also prove other new results, including a precise description of the number of topologically connected components of the set of global minima.
INTRODUCTION
A fundamental goal in the theory of deep learning is to explain why the optimization of the nonconvex loss function of a neural network does not seem to be affected by the presence of nonglobal local minima. Many papers have addressed this issue by studying the landscape of the loss function (Baldi & Hornik, 1989;Choromanska et al., 2015;Kawaguchi, 2016;Venturi et al., 2018). These papers have shown that, in certain situations, any local minimum for the loss is in fact always a global minimum. Unfortunately, it is also known that this property does not apply in more general realistic settings (Yun et al., 2018;Venturi et al., 2018). More recently, researchers have begun to search for explanations based on the dynamics of optimization. For example, in certain limit situations, the gradient flow of over-parameterized networks will avoid local minimizers (Chizat & Bach, 2018;Mei et al., 2018). We believe however that the study of the static properties of the loss function (the structure of its critical locus) is not settled. Even in the case of linear networks, the existing literature paints a purely analytical picture of the loss, and provides no sort of explanation as to "why" such architectures exhibit no bad local minima. A complete understanding of the critical locus should be a prerequisite for investigating the dynamics of the optimization.
The goal of this paper is to revisit the loss function of neural networks from a geometric perspective, focusing on the relationship between the functional space of the network and its parameterization. In particular, we view the loss as a composition
$$\{\text{parameter space}\} \xrightarrow{\;\mu\;} \{\text{functional space}\} \xrightarrow{\;\ell\;} \mathbb{R}.$$
In this setting, the functional $\ell$ is almost always convex; however, the composition $L = \ell \circ \mu$ is not. Critical points for $L$ can in fact arise for two distinct reasons: either because we are applying $\ell$ to a non-convex functional space, or because the parameterizing map $\mu$ is locally degenerate. We distinguish these two types of critical points by referring to them, respectively, as pure and spurious. Intuitively, pure critical points actually reflect the geometry of the functional space associated with the network, while spurious critical points arise as "artifacts" from the parameterization. After defining pure and spurious critical points for arbitrary networks, we investigate in detail the classification of critical points in the case of linear networks. The functional space for such networks can be identified with a family of linear maps, and we can describe its geometry using algebraic tools. Many of our statements rely on a careful analysis of the differential of the matrix multiplication map.
• We spell out connections between the loss landscape and classical geometric objects such as caustics and ED discriminants. We believe that these concepts may be useful in the study of more general functional spaces.
Differential notation. Our functional spaces will be manifolds with singularities, so we will make use of elementary notions from differential geometry. If $\mathcal{M}$ and $\mathcal{N}$ are manifolds and $g : \mathcal{M} \to \mathcal{N}$ is a smooth map, then we write $dg(x)$ for the differential of $g$ at the point $x$. This means that $dg(x) : T_x\mathcal{M} \to T_{g(x)}\mathcal{N}$ is the first order linear approximation of $g$ at the point $x \in \mathcal{M}$. If $\mathcal{M}$ and $\mathcal{N}$ have singularities, then the same definitions apply if we restrict $g$ to smooth points in $\mathcal{M}$ whose image is also smooth in $\mathcal{N}$. For most of our analysis, manifolds will be embedded in Euclidean spaces, say $\mathcal{M} \subset \mathbb{R}^m$ and $\mathcal{N} \subset \mathbb{R}^n$, so we can view the tangent spaces $T_x\mathcal{M}$ and $T_{g(x)}\mathcal{N}$ as also embedded in $\mathbb{R}^m$ and $\mathbb{R}^n$. When $\mathcal{N} = \mathbb{R}$, the critical locus of a map $g : \mathcal{M} \to \mathbb{R}$ is defined as
$$\mathrm{Crit}(g) = \{x \in \mathrm{Smooth}(\mathcal{M}) \mid dg(x) = 0\}.$$
PRELIMINARIES
PURE AND SPURIOUS CRITICAL POINTS
A neural network (or any general "parametric learning model") is defined by a continuous mapping $\Phi : \mathbb{R}^{d_\theta} \times \mathbb{R}^{d_x} \to \mathbb{R}^{d_y}$ that associates an input vector $x \in \mathbb{R}^{d_x}$ and a set of parameters $\theta \in \mathbb{R}^{d_\theta}$ to an output vector $y = \Phi(\theta, x) \in \mathbb{R}^{d_y}$. In other words, $\Phi$ determines a family of continuous functions parameterized by $\theta \in \mathbb{R}^{d_\theta}$:
$$\mathcal{M}_\Phi = \{f_\theta : \mathbb{R}^{d_x} \to \mathbb{R}^{d_y} \mid f_\theta = \Phi(\theta, \cdot)\} \subset C(\mathbb{R}^{d_x}, \mathbb{R}^{d_y}).$$
Figure 1: Pure and spurious critical points: $\theta_1$ is a pure critical point, while $\theta_2$ is a spurious critical point (the level curves on the manifold $\mathcal{M}_\Phi$ describe the landscape in functional space). Note that $\theta_3$ is mapped to the same function as $\theta_2$, but it is not a critical point.

Even though $\mathcal{M}_\Phi$ is naturally embedded in an infinite-dimensional functional space, it is itself finite dimensional. In fact, if the mapping $\Phi$ is smooth, then $\mathcal{M}_\Phi$ is a finite-dimensional manifold with singularities, and its intrinsic dimension is upper bounded by $d_\theta$. It is also important to note that neural networks are often non-identifiable models, which means that different parameters can represent the same function (i.e., $f_\theta = f_{\theta'}$ does not imply $\theta = \theta'$). The manifold $\mathcal{M}_\Phi$ is sometimes known as a neuromanifold (Amari, 2016). We now consider a general loss function of the form $L = \ell \circ \mu$, where $\mu : \mathbb{R}^{d_\theta} \to \mathcal{M}_\Phi$ is the (over)parameterization of $\mathcal{M}_\Phi$ by $\theta$ and $\ell$ is a functional defined on a subset of $C(\mathbb{R}^{d_x}, \mathbb{R}^{d_y})$ containing $\mathcal{M}_\Phi$:
$$L : \mathbb{R}^{d_\theta} \xrightarrow{\;\mu\;} \mathcal{M}_\Phi \xrightarrow{\;\ell|_{\mathcal{M}_\Phi}\;} \mathbb{R}. \qquad (1)$$
Definition 1. A critical point $\theta^* \in \mathrm{Crit}(L)$ is a pure critical point if $\mu(\theta^*)$ is a critical point for the restriction $\ell|_{\mathcal{M}_\Phi}$ (note that this implicitly requires $\mu(\theta^*)$ to be a smooth point of $\mathcal{M}_\Phi$). If $\theta^* \in \mathrm{Crit}(L)$ but $\mu(\theta^*) \notin \mathrm{Crit}(\ell|_{\mathcal{M}_\Phi})$, we say that $\theta^*$ is a spurious critical point.
It is clear from this definition that pure critical points reflect the geometry of the functional space, while spurious critical points do not have an intrinsic functional interpretation. For example, if $\theta^* \in \mathrm{Crit}(L)$ is a spurious critical point, then it may be possible to find another parameter $\theta'$ that represents the same function $f_{\theta^*} = f_{\theta'}$ and is not a critical point for $L$ (see Figure 1). In contrast, if $\theta^*$ is a pure critical point, then all parameters $\theta'$ such that $\mu(\theta') = \mu(\theta^*)$ are automatically in $\mathrm{Crit}(L)$, simply because $dL(\theta') = d(\ell|_{\mathcal{M}_\Phi})(\mu(\theta')) \circ d\mu(\theta')$. This will motivate us to study the fiber $\{\theta \mid \mu(\theta) = f\}$ of all parameters mapped to the same function $f$ (particularly when the function $f$ is a critical point of $\ell|_{\mathcal{M}_\Phi}$).
We note that a sufficient condition for $\theta^* \in \mathrm{Crit}(L)$ to be a pure critical point is that the differential $d\mu(\theta^*)$ at $\theta^*$ has maximal rank (namely $\dim \mathcal{M}_\Phi$), i.e., that $\mu$ is locally a submersion at $\theta^*$. Indeed, we have in this case $0 = dL(\theta^*) = d(\ell|_{\mathcal{M}_\Phi})(\mu(\theta^*)) \circ d\mu(\theta^*) \Rightarrow d(\ell|_{\mathcal{M}_\Phi})(\mu(\theta^*)) = 0$, so $\mu(\theta^*)$ is critical for the restriction of $\ell$ to $\mathcal{M}_\Phi$. We also point out a special situation when $\mathcal{M}_\Phi$ is a convex set (as a subset of $C(\mathbb{R}^{d_x}, \mathbb{R}^{d_y})$) and $\ell$ is a smooth convex functional. In this case, the only critical points of $\ell|_{\mathcal{M}_\Phi}$ are global minima, so we deduce that any critical point of $L = \ell \circ \mu$ is either a global minimum or a spurious critical point. The following simple observation gives a sufficient condition for critical points to be saddles (i.e., they are not local minima or local maxima).

Lemma 2. Let $\theta^* \in \mathrm{Crit}(L)$ be a (necessarily spurious) critical point with the following property: for any open neighborhood $U$ of $\theta^*$, there exists $\theta'$ in $U$ such that $\mu(\theta') = \mu(\theta^*)$ and $\theta' \notin \mathrm{Crit}(L)$. Then $\theta^*$ is a saddle for $L$.
Proof. Assume that $\theta^*$ is a local minimum (the reasoning is analogous if $\theta^*$ is a local maximum). This means that there exists a neighborhood $U$ of $\theta^*$ such that $L(\theta) \geq L(\theta^*)$ for all $\theta \in U$. In particular, if $\theta' \in U$ is such that $\mu(\theta') = \mu(\theta^*)$, then $\theta'$ must also be a local minimum. This contradicts $\theta' \notin \mathrm{Crit}(L)$.
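As a toy illustration of Lemma 2 (our own sketch, not an example from the paper), consider the scalar network $L(a, b) = \ell(ab)$ with the convex loss $\ell(w) = (w - 1)^2$. The origin is a spurious critical point, and arbitrarily close parameters represent the same function $w = 0$ without being critical:

```python
import numpy as np

def ell(w):                 # a smooth convex loss on function space
    return (w - 1.0) ** 2

def grad_L(a, b):           # gradient of L(a, b) = ell(a * b)
    dl = 2.0 * (a * b - 1.0)    # ell'(w) evaluated at w = a * b
    return np.array([dl * b, dl * a])

print(grad_L(0.0, 0.0))     # [0, 0]: (0, 0) is a critical point of L
# But mu(0, 0) = 0 is not critical for ell (ell'(0) = -2 != 0), so it is spurious.
eps = 1e-3
print(grad_L(eps, 0.0))     # nonzero gradient at a nearby parameter with the same
                            # product a * b = 0, so by Lemma 2 the origin is a saddle
```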
This general discussion on pure and spurious critical points applies to any smooth network map $\Phi$ (with possible extensions to the case of piece-wise smooth mappings), and we believe that the distinction can be a useful tool in the study of the optimization landscape of general networks. In the remaining part of the paper, we use this perspective for an in-depth study of the critical points of linear networks. For this type of network, the functional set $\mathcal{M}_\Phi$ can be embedded in a finite-dimensional ambient space, namely the space of all linear maps $\mathbb{R}^{d_x} \to \mathbb{R}^{d_y}$. Furthermore, $\mathcal{M}_\Phi$ is an algebraic variety (a manifold that can have singularities and that can be described by algebraic equations). We will use basic tools from algebraic geometry to provide a complete description of pure and spurious critical points, and to prove new results on the landscape of linear networks.
LINEAR NETWORKS AND DETERMINANTAL VARIETIES
A linear network is a map $\Phi : \mathbb{R}^{d_\theta} \times \mathbb{R}^{d_x} \to \mathbb{R}^{d_y}$ of the form
$$\Phi(\theta, x) = W_h \cdots W_1 x, \qquad \theta = (W_h, \ldots, W_1) \in \mathbb{R}^{d_\theta}, \qquad (2)$$
where $W_i \in \mathbb{R}^{d_i \times d_{i-1}}$ are matrices (so $d_0 = d_x$, $d_h = d_y$, and $d_\theta = d_0 d_1 + d_1 d_2 + \ldots + d_{h-1} d_h$).
The functional space is in this case a subset of the space of all linear maps $\mathbb{R}^{d_0} \to \mathbb{R}^{d_h}$. As in (1), we can decompose a loss function $L$ for a linear network $\Phi$ as
$$\mathbb{R}^{d_h \times d_{h-1}} \times \ldots \times \mathbb{R}^{d_1 \times d_0} \xrightarrow{\;\mu_{\mathbf{d}}\;} \mathbb{R}^{d_h \times d_0} \xrightarrow{\;\ell\;} \mathbb{R}, \qquad (W_h, \ldots, W_1) \mapsto W = W_h \cdots W_1 \mapsto \ell(W). \qquad (3)$$
Here $\mu_{\mathbf{d}}$ is the matrix multiplication map for the sequence of widths $\mathbf{d} = (d_h, \ldots, d_0)$, and $\ell$ is a functional on the space of $(d_h \times d_0)$-matrices. In practice, it is typically a functional that depends on the training data, e.g., $\ell(W) = \|WX - Y\|^2$ for fixed matrices $X, Y$. Note that even if $\ell$ is a convex functional, the set $\mathcal{M}_\Phi$ will often not be a convex set. In fact, it is easy to see that the image of $\mu_{\mathbf{d}}$ is the space $\mathcal{M}_r$ of $(d_h \times d_0)$-matrices of rank at most $r = \min\{d_0, \ldots, d_h\}$. If $r < \min(d_0, d_h)$, this set is known as a determinantal variety, a classical object of study in algebraic geometry (Harris, 1995). It is in fact an algebraic variety, i.e., it is described by polynomial equations in the matrix entries (namely, it is the zero-set of all $(r+1) \times (r+1)$-minors), and it is well known that the dimension of $\mathcal{M}_r$ in the space of $m \times n$ matrices is $r(m + n - r)$. Furthermore, for $r > 0$, the variety $\mathcal{M}_r$ has many singularities: its singular locus is exactly $\mathcal{M}_{r-1} \subset \mathcal{M}_r$, the set of all matrices with rank strictly smaller than $r$. We refer the reader to Appendix A.1 for more details on determinantal varieties.
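A minimal numpy sketch of the decomposition (3), with arbitrary widths and random arrays standing in for the data X, Y (an illustration we add, not code from the paper):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
d = [4, 2, 3]                     # widths (d_h, ..., d_0): here d_h = 4, d_1 = 2, d_0 = 3
Ws = [rng.standard_normal((d[i], d[i + 1])) for i in range(len(d) - 1)]  # (W_h, ..., W_1)

def mu(Ws):                       # the matrix multiplication map mu_d
    return reduce(np.matmul, Ws)

X = rng.standard_normal((3, 10))  # training inputs  (d_0 x n)
Y = rng.standard_normal((4, 10))  # training targets (d_h x n)

def ell(W):                       # a convex loss on function space
    return np.sum((W @ X - Y) ** 2)

L = ell(mu(Ws))                   # the (generally non-convex) loss L = ell o mu_d
print(np.linalg.matrix_rank(mu(Ws)) <= min(d))  # True: the image of mu_d lies in M_r
```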
MAIN RESULTS
In this section, we investigate the critical locus $\mathrm{Crit}(L)$ of general functions $L : \mathbb{R}^{d_\theta} \to \mathbb{R}$ of the form $L = \ell \circ \mu_{\mathbf{d}}$, where $\ell : \mathbb{R}^{d_h \times d_0} \to \mathbb{R}$ is an (often convex) smooth map, and $\mu_{\mathbf{d}}$ is the matrix multiplication map introduced in (3). By studying the differential of $\mu_{\mathbf{d}}$, we will characterize pure and spurious critical points of $L$. As previously noted, the image of $\mu_{\mathbf{d}}$ is $\mathcal{M}_r \subset \mathbb{R}^{d_h \times d_0}$ where $r = \min\{d_i\}$. In particular, we distinguish between two cases:

• We say that the map $\mu_{\mathbf{d}}$ is filling if $r = \min\{d_0, d_h\}$, so $\mathcal{M}_r = \mathbb{R}^{d_h \times d_0}$. In this case, the functional space is smooth and convex.
• We say that the map $\mu_{\mathbf{d}}$ is non-filling if $r < \min\{d_0, d_h\}$, so $\mathcal{M}_r \subsetneq \mathbb{R}^{d_h \times d_0}$ is a determinantal variety. In this case, the functional space is non-smooth and non-convex.
PROPERTIES OF THE MATRIX MULTIPLICATION MAP
We present some general results on the matrix multiplication map µ d , which we will apply to linear networks in the next subsection. These facts may also be useful in other settings, for example, to study the piece-wise linear behavior of ReLU networks.
We begin by noting that the differential map of $\mu_{\mathbf{d}}$ can be written explicitly as
$$d\mu_{\mathbf{d}}(\theta)(\dot{W}_h, \ldots, \dot{W}_1) = \dot{W}_h W_{h-1} \cdots W_1 + W_h \dot{W}_{h-1} \cdots W_1 + \ldots + W_h \cdots W_2 \dot{W}_1. \qquad (4)$$
Given a matrix $M \in \mathbb{R}^{m \times n}$, we denote by $\mathrm{Row}(M) \subset \mathbb{R}^n$ and $\mathrm{Col}(M) \subset \mathbb{R}^m$ the vector spaces spanned by the rows and columns of $M$, respectively. Writing $W_{>i} = W_h W_{h-1} \cdots W_{i+1}$ and $W_{<i} = W_{i-1} W_{i-2} \cdots W_1$, the image of $d\mu_{\mathbf{d}}(\theta)$ in (4) is
$$\mathbb{R}^{d_h} \otimes \mathrm{Row}(W_{<h}) + \ldots + \mathrm{Col}(W_{>i}) \otimes \mathrm{Row}(W_{<i}) + \ldots + \mathrm{Col}(W_{>1}) \otimes \mathbb{R}^{d_0}. \qquad (5)$$
From this expression, we deduce the following useful fact.
Lemma 3. The dimension of the image of the differential $d\mu_{\mathbf{d}}$ at $\theta = (W_h, \ldots, W_1)$ is given by
$$\mathrm{rk}(d\mu_{\mathbf{d}}(\theta)) = \sum_{i=1}^{h} \mathrm{rk}(W_{>i}) \cdot \mathrm{rk}(W_{<i}) - \sum_{i=1}^{h-1} \mathrm{rk}(W_{>i}) \cdot \mathrm{rk}(W_{<i+1}),$$
where we use the convention that $W_{<1} = I_{d_0}$ and $W_{>h} = I_{d_h}$ are the identity matrices of size $d_0$ and $d_h$.
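Lemma 3 can be sanity-checked numerically. The sketch below (our own, using the row-major identity vec(A X B) = (A kron B^T) vec(X)) assembles the Jacobian of $\mu_{\mathbf{d}}$ for random weights and compares its rank with the formula:

```python
import numpy as np

rng = np.random.default_rng(0)
ds = [2, 3, 1, 4]          # (d_0, d_1, d_2, d_3): h = 3 layers, r = min(ds) = 1
h = len(ds) - 1
Ws = {i: rng.standard_normal((ds[i], ds[i - 1])) for i in range(1, h + 1)}

def prod(factors, size):
    out = np.eye(size)     # empty product = identity, matching the conventions above
    for F in factors:
        out = out @ F
    return out

W_gt = lambda i: prod([Ws[k] for k in range(h, i, -1)], ds[h])          # W_{>i}
W_lt = lambda i: prod([Ws[k] for k in range(i - 1, 0, -1)], ds[i - 1])  # W_{<i}

# The Jacobian block in the direction of W_i is vec(W_{>i} V W_{<i}), i.e.
# kron(W_{>i}, W_{<i}^T) under row-major vectorization.
J = np.hstack([np.kron(W_gt(i), W_lt(i).T) for i in range(1, h + 1)])

rk = np.linalg.matrix_rank
formula = (sum(rk(W_gt(i)) * rk(W_lt(i)) for i in range(1, h + 1))
           - sum(rk(W_gt(i)) * rk(W_lt(i + 1)) for i in range(1, h)))
print(rk(J), formula)      # these agree for generic weights (here 5 = dim M_1 in R^{4x2})
```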
We can use Lemma 3 to characterize all cases when the differential $d\mu_{\mathbf{d}}$ at $\theta = (W_h, \ldots, W_1)$ has full rank (i.e., when the matrix multiplication map is a local submersion onto $\mathcal{M}_r$).

Theorem 4. Let $r = \min\{d_i\}$, $\theta = (W_h, \ldots, W_1)$, and $W = \mu_{\mathbf{d}}(\theta)$.
• (Filling case) If $r = \min\{d_h, d_0\}$, the differential $d\mu_{\mathbf{d}}(\theta)$ has maximal rank equal to $\dim \mathcal{M}_r = d_h d_0$ if and only if, for every $i \in \{1, 2, \ldots, h-1\}$, either $\mathrm{rk}(W_{>i}) = d_h$ or $\mathrm{rk}(W_{<i+1}) = d_0$ holds.
• (Non-filling case) If $r < \min\{d_h, d_0\}$, the differential $d\mu_{\mathbf{d}}(\theta)$ has maximal rank equal to $\dim \mathcal{M}_r = r(d_h + d_0 - r)$ if and only if $\mathrm{rk}(W) = r$.
Furthermore, in both situations, if $\mathrm{rk}(W) = e < r$, then the image of $d\mu_{\mathbf{d}}(\theta)$ always contains the tangent space $T_W \mathcal{M}_e$ of $\mathcal{M}_e \subset \mathcal{M}_r$ at $W$.
We note that dµ_d(θ) always has maximal rank when rk(W) = r = min{d_i}; however, in the filling case it is possible to obtain a local submersion even when rk(W) < r (see Example 19 in the appendix). We next describe the fiber of the matrix multiplication map, that is, the set
µ_d^{−1}(W) = {(W_h, …, W_1) | W = W_h ⋯ W_1, W_i ∈ R^{d_i×d_{i−1}}}.
It will be convenient to refer to µ_d^{−1}(W) as the set of d-factorizations of W. We are interested in understanding the structure of µ_d^{−1}(W) since, as argued in Section 2.1, pure critical loci consist of fibers of "critical functions". The following result completely describes the connectivity of µ_d^{−1}(W).

Theorem 5. Let r = min{d_i}. If rk(W) = r, then the set of d-factorizations µ_d^{−1}(W) of W has exactly 2^b path-connected components, where b = #{i | d_i = r, 0 < i < h}. If rk(W) < r, then µ_d^{−1}(W) is always path-connected.
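For instance, take d = (2, 1, 2) and W = u_0v_0^T of rank r = 1, so that b = 1. Every d-factorization of W has the form (tu_0, t^{−1}v_0^T) with t ≠ 0, and the sign of t cannot change along a continuous path inside the fiber; the fiber therefore has exactly 2^1 = 2 path-connected components, as predicted by Theorem 5.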
APPLICATION TO LINEAR NETWORKS
We now apply the general results from the previous subsection to study the critical locus Crit(L) with L = ℓ ∘ µ_d, where ℓ is any smooth function. In the following, we always use r = min{d_i} and W = µ_d(θ). The next two facts follow almost immediately from Theorem 4.

Proposition 6. If θ is such that dµ_d(θ) has maximal rank (see Theorem 4), then θ ∈ Crit(L) if and only if W ∈ Crit(ℓ|_{M_r}), and θ is a minimum (resp., saddle, maximum) for L if and only if W is a minimum (resp., saddle, maximum) for ℓ|_{M_r}. If rk(W) = r (which implies that dµ_d(θ) has maximal rank) and θ ∈ Crit(L), then all d-factorizations of W also belong to Crit(L).
Proposition 7. If θ ∈ Crit(L) with rk(W) = e ≤ r, then W ∈ Crit(ℓ|_{M_e}). In other words, if rk(W) < r, then θ ∈ Crit(L) implies that W is a critical point for the restriction of ℓ to a smaller determinantal variety M_e (which is in the singular locus of the functional space M_r in the non-filling case).
Note that if d_h = 1, then either W = 0 or rk(W) = 1, and in the latter case Proposition 7 implies that W ∈ Crit(ℓ|_{R^{d_0}\{0}}). If ℓ is convex, we immediately obtain that all critical points (not just local minima, as in Laurent & von Brecht (2017)) below a certain energy level are global minima.

Corollary 8. Assume that ℓ is a smooth convex function and that d_h = 1. If θ ∈ Crit(L), then either W = µ_d(θ) = 0 or θ is a global minimum for L.
Proposition 7 shows that critical points for L such that rk(W) < r correspond to critical points for ℓ restricted to a smaller determinantal variety. Using Lemma 2, it is possible to show that these points are essentially always saddles for L.
Proposition 9. Let θ ∈ Crit(L) be such that rk(W) < r, and assume that dℓ(W) ≠ 0. Then, for any neighborhood U of θ, there exists θ′ in U such that µ_d(θ′) = W but θ′ ∉ Crit(L). In particular, θ is a saddle point.
Proposition 10. Let ℓ be any smooth convex function, and let L = ℓ ∘ µ_d. If θ is a non-global local minimum for L, then necessarily rk(W) = r (so θ is a pure critical point). In particular, L has non-global minima if and only if ℓ|_{M_r} has non-global minima.
This statement succinctly explains many known facts on the landscape of linear networks. For example, we recover the main result from Laurent & von Brecht (2017), which states that when ℓ is a smooth convex function and µ_d is filling (r = min{d_h, d_0}), then all local minima for L are global minima: indeed, this is because M_r = R^{d_h×d_0} is a linear space, so ℓ|_{M_r} does not have non-global minima. On the other hand, when µ_d is not filling, the functional space is not convex, and multiple local minima may exist even when ℓ is a convex function. We will in fact present many examples of smooth convex functions ℓ such that L = ℓ ∘ µ_d has non-global local minima (see Figure 3). In the special case that ℓ is a quadratic loss (for any data distribution), it is a remarkable fact that there are no non-global local minima even when µ_d is not filling (Baldi & Hornik, 1989; Kawaguchi, 2016). In the next section, we will provide an intrinsic geometric justification for this property.

Remark 11. In Laurent & von Brecht (2017), the authors observe that their "structural hypothesis" (i.e., for us, the fact that the network is filling) is a necessary assumption for their main result, as otherwise critical points of ℓ might not lie in the functional space of the network. This last observation however does not imply the necessity of the filling assumption, and indeed in the case of the quadratic loss there are no bad local minima despite the fact that M_r ⊊ R^{d_h×d_0}.
Finally, we conclude this section by pointing out that although the pure critical locus is determined by the geometry of the functional space, the "lift" from function space to parameter space is not completely trivial. In particular, there is always a large positive-dimensional set of critical parameters associated with a critical linear function W (all possible d-factorizations of W ). More interestingly, this set may be topologically disconnected into a large number of components that are all functionally equivalent (see Theorem 5). This observation agrees with the folklore knowledge that neural networks can have many disconnected valleys where the loss function achieves the same value.
THE QUADRATIC LOSS
We now assume that ℓ : R^{d_y×d_x} → R is of the form ℓ(W) = ‖WX − Y‖², where X ∈ R^{d_x×s} and Y ∈ R^{d_y×s} are fixed data matrices. As mentioned above, it is known that L = ℓ ∘ µ_d has no non-global local minima, even when µ_d is non-filling (Baldi & Hornik, 1989; Kawaguchi, 2016). In this section, we discuss the intrinsic geometric reasons for this special behavior.
It is easy to relate the landscape of L with the Euclidean distance function from a determinantal variety (or, equivalently, with the problem of low-rank matrix approximation). Indeed, we know from Proposition 10 that L has non-global local minima if and only if the same is true for ℓ|_{M_r}. Furthermore, assuming XX^T = Id ("whitened data"), we have that
‖WX − Y‖² = ⟨WX, WX⟩_F − 2⟨WX, Y⟩_F + ⟨Y, Y⟩_F
          = ⟨W, W⟩_F − 2⟨W, YX^T⟩_F + ⟨Y, Y⟩_F
          = ⟨W − YX^T, W − YX^T⟩_F + const.
          = ‖W − Q_0‖² + const.,   (6)

where Q_0 = YX^T.
In other words, we are interested in studying the landscape of the function h_{Q_0}(W) = ‖W − Q_0‖², where Q_0 ∈ R^{d_y×d_x} is fixed and W is restricted to the determinantal variety M_r. Although (6) required XX^T = Id, our general analysis actually still applies if XX^T only has full rank.
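The reduction in (6) can be checked numerically; here is a minimal sketch (ours, assuming numpy) that whitens random data so that XX^T = Id and compares the two sides for a random W:

```python
import numpy as np

rng = np.random.default_rng(2)
dy, dx, s = 3, 4, 10
X = np.linalg.qr(rng.standard_normal((s, dx)))[0].T   # (dx, s) with X @ X.T = Id ("whitened")
Y = rng.standard_normal((dy, s))
Q0 = Y @ X.T

const = np.sum(Y ** 2) - np.sum(Q0 ** 2)
W = rng.standard_normal((dy, dx))
print(np.isclose(np.sum((W @ X - Y) ** 2),
                 np.sum((W - Q0) ** 2) + const))      # True, as in (6)
```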
The function h_{Q_0}(W) is described by the following generalization of the classical Eckart-Young Theorem. The formulation we prove is an extension of Example 2.3 in Draisma & Horobet (2014) and Theorem 2.9 in Ottaviani et al. (2013). We consider a fixed matrix Q_0 ∈ R^{d_y×d_x} and a singular value decomposition (SVD) Q_0 = UΣV^T, where we assume Σ ∈ R^{d_y×d_x} has decreasing diagonal entries σ_1, …, σ_m, with m = min(d_y, d_x). For any I ⊂ {1, 2, …, m} we write Σ_I ∈ R^{d_y×d_x} for the diagonal matrix with entries σ_{I,1}, …, σ_{I,m}, where σ_{I,i} = σ_i if i ∈ I and σ_{I,i} = 0 otherwise.

[Figure 2: Left: If V ⊂ R² is an ellipse, the distance function h_u(p) = ‖p − u‖² restricted to V generally has 2 or 4 real critical points, depending on whether u lies inside or outside the diamond-shaped region bounded by the caustic curve. Right: If V ⊂ R² is a circle, then the caustic curve degenerates to a point and the distance function generically always has 2 real critical points.]

Theorem 12. If the singular values of Q_0 are pairwise distinct and positive, h_{Q_0}|_{M_r} has exactly (m choose r) critical points, namely the matrices Q_I = UΣ_IV^T with #(I) = r. Moreover, its unique local and global minimum is Q_{{1,…,r}}. More precisely, the index of Q_I as a critical point of h_{Q_0}|_{M_r} (i.e., the number of negative eigenvalues of the Hessian matrix for any local parameterization) is

index(Q_I) = #{(j, i) ∈ I × I^c | j > i}, where I^c = {1, …, m} \ I.
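A small numerical illustration of Theorem 12 (ours, assuming numpy): enumerate the (m choose r) candidate critical points Q_I for a random Q_0 and check that the one closest to Q_0 selects the r largest singular values:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
dy, dx, r = 4, 5, 2
Q0 = rng.standard_normal((dy, dx))
U, s, Vt = np.linalg.svd(Q0)           # singular values s are decreasing
m = min(dy, dx)

dists = {}
for I in combinations(range(m), r):    # all critical points Q_I of h_{Q_0}|_{M_r}
    S_I = np.zeros((dy, dx))
    for i in I:
        S_I[i, i] = s[i]
    dists[I] = np.sum((U @ S_I @ Vt - Q0) ** 2)

print(len(dists))                      # 6 = (4 choose 2) critical points
print(min(dists, key=dists.get))       # (0, 1): the two largest singular values
```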
In the appendix we present a more general version of this statement without the assumption that the singular values of Q_0 are pairwise distinct and positive. The surprising aspect of this result is that the structure of the critical points is the same for almost all choices of Q_0. We want to emphasize that this is a special behavior of determinantal varieties with respect to the Euclidean distance, and the situation changes drastically if we apply even infinitesimal changes to the quadratic loss function. More precisely, any linear perturbation of the Euclidean norm will result in a totally different landscape, as the following example shows (more details are given in Appendix A.2).

Example 13. Let us consider the variety M_1 ⊆ R^{3×3} of rank-one (3×3)-matrices. By Theorem 12, for almost all Q_0, the function h_{Q_0}|_{M_1} has three (real) critical points. Applying a linear change of coordinates to R^{d_y×d_x} ≅ R^{d_yd_x} yields a different quadratic loss h̃_{Q_0}. Using tools from algebraic geometry, it is possible to show that for almost all linear coordinate changes (an open dense set), the function h̃_{Q_0}|_{M_1} has 39 critical points over the complex numbers. The number of real critical points however varies, depending on whether Q_0 belongs to different open regions separated by a caustic hypersurface in R^{3×3}. Furthermore, the number of local minima varies as well; in particular, it is no longer true that all Q_0 admit a unique local minimum. Figure 3 presents some simple computational experiments illustrating this behavior.
For all determinantal varieties, the situation is similar to the description in Example 13. More generally, given an algebraic variety V ⊂ R^n and a point u ∈ R^n, the number of (real) critical points of the distance function h_u(p) = ‖p − u‖² restricted to V is usually not constant as u varies: the behavior changes when u crosses the caustic hypersurface, or ED (Euclidean distance) discriminant, of V; see Figure 2. In the case of determinantal varieties with the standard Euclidean distance, this caustic hypersurface (more precisely its real locus) degenerates to a set of codimension 2, which does not partition the space into different regions. This is analogous to the case of the circle in Figure 2.
USING DIFFERENT PARAMETERIZATIONS: NORMALIZED NETWORKS
In the simple linear network model (2), the functional space M_r ⊂ R^{d_h×d_0} is parameterized using the matrix multiplication map µ_d. On the other hand, one can envision many variations of this model that are network architectures with the same functional space but parameterized differently. Examples include linear networks with skip connections, or convolutional linear networks. In this subsection, we take a look at a model for normalized linear networks: these are maps of the form
Ψ(θ, x) = W_h (W_{h−1}/‖W_{h−1}‖) ⋯ (W_1/‖W_1‖) x,   θ = (W_h, …, W_1),   (7)
where W i ∈ R di×di−1 as before. This is a simple model for different types of weight normalization schemes often used in practice. It is easy to see that the difference between (7) and our previous linear network lies only in the parameterization of linear maps, since for normalized networks the matrix multiplication map is replaced by
ν_d : Ω → R^{d_h×d_0},   (W_h, …, W_1) ↦ W = W_h (W_{h−1}/‖W_{h−1}‖) ⋯ (W_1/‖W_1‖),

where Ω = {(W_h, …, W_1) | W_i ≠ 0, i = 1, …, h − 1} ⊂ R^{d_θ}.
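A minimal sketch of the normalized parameterization ν_d (ours, assuming numpy and Frobenius norms):

```python
import numpy as np

def nu_d(Ws):
    # nu_d(W_h, ..., W_1) = W_h (W_{h-1}/||W_{h-1}||) ... (W_1/||W_1||);
    # Ws = [W_h, ..., W_1], and all but the first factor must be non-zero.
    out = Ws[0]
    for W in Ws[1:]:
        out = out @ (W / np.linalg.norm(W))
    return out
```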
According to our definitions, if L = ℓ ∘ µ_d and L′ = ℓ ∘ ν_d are losses for linear networks and normalized linear networks, respectively, then the pure critical loci of L and L′ will correspond to each other (since these only depend on the functional space), but a priori the spurious critical loci induced by the two parameterizations may be different. In this particular setting, however, we show that this is not the case: the new parameterization effectively does not introduce different critical points, and in fact makes the critical locus slightly smaller.

Proposition 14. If L′ = ℓ ∘ ν_d and L = ℓ ∘ µ_d, then the critical locus Crit(L′) is in "correspondence" with Crit(L) ∩ Ω, meaning that
{ν_d(θ′) | θ′ ∈ Crit(L′)} = {µ_d(θ) | θ ∈ Crit(L) ∩ Ω}.
CONCLUSIONS
We have introduced the notions of pure and spurious critical points as general tools for a geometric investigation of the landscape of neural networks. In particular, they provide a basic language for describing the interplay between a convex loss function and an overparameterized, non-convex functional space. In this paper, we have focused on the landscape of linear networks. This simple model is useful for illustrating our geometric perspective, but also exhibits several interesting (and surprisingly subtle) features. For example, the absence of non-global minima in the loss landscape is a rather general property when the architecture is "filling", while in the "non-filling" setting it is a special property that holds for the quadratic loss. Furthermore, we have observed that even in this simple framework global minima can have (possibly exponentially) many disconnected components.
In the future, we hope to extend our analysis to different network models. For example, we can use our framework to study networks with polynomial activations (Kileel et al., 2019), which are a direct generalization of the linear model. We expect that an analysis of pure and spurious critical points in this context can be used to address a conjecture in Venturi et al. (2018) regarding the gap between "upper" and "lower" dimensions in functional space. A geometric investigation of networks with smooth non-polynomial activations is also possible; in that setting, the parameter space and the functional space are usually of the same dimension (i.e., d_θ = dim(M_Φ)), however there is still an interesting stratification of singular loci, as explained for example in (Amari, 2016, Section 12.2.2). General "discriminant hypersurfaces" can also be used to describe qualitative changes in the landscape as the data distribution varies. Finally, extending our analysis to networks with ReLU activations will require some care because of the non-differentiable setting. On the other hand, it is clear that ReLU networks behave as linear networks when restricted to appropriate regions of input space: this suggests that our study of ranks of differentials may be a useful building block for pursuing this important direction.

A.1 DETERMINANTAL VARIETIES

We present some additional properties of determinantal varieties. For proofs and more details, we refer the reader to Harris (1995). Given r < min(m, n), the r-th determinantal variety M_r ⊂ R^{m×n} is defined as the set of matrices with rank at most r:
M r = {P ∈ R m×n | rk(P ) ≤ r} ⊂ R m×n .
As mentioned in the main part of the paper, M_r is an algebraic variety of dimension r(m + n − r) that can be described as the zero-set of all (r + 1) × (r + 1)-minors. For r > 0, the singular locus of M_r is exactly M_{r−1} ⊂ M_r. Some of our proofs will rely on the following explicit characterization of the tangent space of determinantal varieties: given a matrix P ∈ R^{m×n} of rank exactly r (so P is a smooth point on M_r), we have that
T P M r = R m ⊗ Row(P ) + Col(P ) ⊗ R n ⊂ R m×n .
We will also make use of the normal space to the tangent space T P M r at P , with respect to the Frobenius inner product. This is given by
(T P M r ) ⊥ = Col(P ) ⊥ ⊗ Row(P ) ⊥ ,
where Col(P ) ⊥ and Row(P ) ⊥ are the orthogonal spaces to Col(P ) and Row(P ), respectively.
A.2 EUCLIDEAN DISTANCE DEGREES AND DISCRIMINANTS
In this section, we informally discuss some algebraic notions related to ED (Euclidean distance) degrees and discriminants. A detailed presentation can be found in Draisma et al. (2013). Given an algebraic variety V ⊂ R^n and a point u ∈ R^n, the number of real critical points of the distance function h_u(p) = ‖p − u‖² restricted to V is only locally constant as u varies. In general, the behavior changes when u crosses the caustic hypersurface, or ED (Euclidean distance) discriminant, of V. The ED discriminant can be defined over the complex numbers, and in this setting it is indeed always a hypersurface (i.e., it has codimension one), however it can have higher codimension over the real numbers. For instance, for a circle in the complex plane with the origin as its center, a point (u_1, u_2) ∈ C² is on the ED discriminant if and only if u_1² + u_2² = 0. This defines a curve in the complex plane whose real locus is a point (see right side of Figure 2). By the Eckart-Young Theorem (Theorem 12), the ED discriminant of the determinantal variety M_r is the locus of all matrices Q_0 with at least two coinciding singular values, so it is defined by the discriminant of Q_0Q_0^T. As in the case of the circle, the ED discriminant of M_r has codimension two in R^{d_y×d_x}.
Over the complex numbers, the number of critical points of the distance function h_u restricted to V is actually the same for every point u ∈ C^n not on the ED discriminant of V. This quantity is known as the ED degree of the variety V. For instance, a circle has ED degree two whereas an ellipse has ED degree four (on the left side of Figure 2, points u outside of the caustic curve yield two real critical points and two imaginary critical points). The Eckart-Young Theorem (Theorem 12) tells us that the ED degree of the determinantal variety M_r ⊂ R^{d_y×d_x} is (m choose r), where m = min(d_x, d_y). As argued in the main part of the paper, this does not hold any longer after perturbing either the determinantal variety or the Euclidean distance slightly, even using only a linear change of coordinates. For an algebraic variety V ⊂ C^n, a linear change of coordinates is given by an automorphism ϕ : C^n → C^n. For almost all such automorphisms (i.e., for all ϕ except those lying in some subvariety of GL(n, C)) the ED degree of ϕ(V) is the same; see Theorem 5.4 in Draisma et al. (2013). This quantity is known as the general ED degree of V. For instance, almost all linear coordinate changes will deform a circle into an ellipse, so that the general ED degree of the circle is four.
In the above definition of the general ED degree, we fixed the standard Euclidean distance and perturbed the variety. Alternatively, we can fix the variety and change the standard Euclidean distance
‖·‖ to dist_ϕ = ‖ϕ(·)‖. The new distance function h_{ϕ,u}(p) = dist_ϕ(p − u)² from u satisfies h_{ϕ(u)}(ϕ(p)) = h_{id,ϕ(u)}(ϕ(p)) = h_{ϕ,u}(p).
Hence, the ED degree of ϕ(V) with respect to the standard Euclidean distance dist id = · equals the ED degree of V with respect to the perturbed Euclidean distance dist ϕ . In particular, the general ED degree of V can be obtained by computing the ED degree after applying a sufficiently random linear change of coordinates on either the Euclidean distance or the variety V itself.
As in the case of a circle, the general ED degree of the determinantal variety M_r is not equal to the ED degree of M_r. Furthermore, there is no known closed formula for the general ED degree of M_r only involving the parameters d_x, d_y and r. In the special case of rank-one matrices, one can derive a closed expression from the Catanese-Trifogli formula (Theorem 7.8 in Draisma et al. (2013)): the general ED degree of M_1 is

∑_{s=0}^{d_x+d_y} (−1)^s (2^{d_x+d_y+1−s} − 1)(d_x + d_y − s)! ∑_{i+j=s, i≤d_x, j≤d_y} ((d_x+1 choose i)(d_y+1 choose j)) / ((d_x − i)!(d_y − j)!).

This discussion shows that the Eckart-Young Theorem is indeed very special. The intrinsic reason for this is that the determinantal variety M_r intersects the "isotropic quadric" associated with the standard Euclidean distance (i.e., the zero locus of X_{1,1}² + … + X_{d_x,d_y}² in C^{d_x×d_y}) in a particular way (i.e., non-transversely). Performing a random linear change of coordinates on either M_r or the isotropic quadric makes the intersection transverse. So the ED degree after the linear change of coordinates is the general ED degree of M_r, and the Eckart-Young Theorem does not apply.
In summary, we have observed that the degeneration from an ellipse to a circle is analogous to the degeneration from a determinantal variety with a perturbed Euclidean distance to the determinantal variety with the standard Euclidean distance: in both cases, the ED degree drops because the situation becomes degenerate. Moreover, the ED discriminant drops in dimension, which causes the special phenomenon that the number of real critical points is almost everywhere the same.

Experiment 1. In general, it is very difficult to describe the open regions in R^n that are separated by the ED discriminant of a variety V ⊂ R^n. Finding the "typical" number of real critical points for the distance function h_u restricted to V requires the computation of the volumes of these open regions.
In the current state of the art in real algebraic geometry, this is only possible for very particular varieties V. For these reasons, and to get more insight into the typical number of real critical points of determinantal varieties with a perturbed Euclidean distance, we performed computational experiments with Macaulay2 (Grayson & Stillman, 2019) in the situation of Example 13. We fixed the determinantal variety M_1 ⊆ R^{3×3} of rank-one (3 × 3)-matrices. In each iteration of the experiment, we picked a random automorphism ϕ : R^{3×3} → R^{3×3} and a random matrix Q_0 ∈ R^{3×3}, computed the critical points of the resulting perturbed distance function on M_1, and counted how many of them are real and how many are local minima; the results are summarized in Table 1 and Figure 3. Although this is a very rudimentary experiment in an extremely simple setting, it provides clear evidence that the number of local minima of the perturbed distance function is generally not one.
Implementation details: We note that our computations of real critical points and local minima involved numerical methods and might thus be affected by numerical errors. In our implementation we used several basic tests to rule out numerically bad iterations, so that we can report our results with high confidence. The entries of the random matrix Q 0 are independently and uniformly chosen among the integers in Z = {−10, −9, . . . , 9, 10}. The random automorphism ϕ is given by a matrix in Z 9×9 whose entries are also chosen independently and uniformly at random.
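For readers without Macaulay2, a rough numerical analogue of Experiment 1 (ours, assuming numpy) is multistart gradient descent on the perturbed distance restricted to M_1, parameterized as uv^T. Clustering the limiting loss values frequently exhibits more than one distinct local value; note that this heuristic can conflate distinct minima that happen to share a value, so it only gives a lower bound:

```python
import numpy as np

rng = np.random.default_rng(4)
Q0 = rng.integers(-10, 11, size=(3, 3)).astype(float)
Phi = rng.integers(-10, 11, size=(9, 9)).astype(float)
Phi /= np.linalg.norm(Phi)             # rescaled random linear change of coordinates

def loss_grad(u, v):
    # f(u, v) = ||Phi vec(u v^T - Q0)||^2 and its gradient in (u, v).
    e = Phi @ (np.outer(u, v) - Q0).ravel()
    G = (Phi.T @ e).reshape(3, 3)
    return np.sum(e ** 2), 2 * G @ v, 2 * G.T @ u

values = []
for _ in range(100):                   # multistart gradient descent on M_1
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    for _ in range(5000):
        f, gu, gv = loss_grad(u, v)
        u, v = u - 1e-2 * gu, v - 1e-2 * gv
    if not any(abs(f - w) < 1e-4 * (1 + abs(w)) for w in values):
        values.append(f)
print(sorted(values))                  # often more than one distinct local value
```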
A.3 PURE AND SPURIOUS CRITICAL POINTS IN PREDICTOR SPACE
We illustrate a variation of our functional setting where the notions of pure and spurious can also be naturally applied. We consider a training sample x 1 , . . . , x N ∈ R dx , y 1 , . . . , y N ∈ R (for notational simplicity we use d y = 1 but this is not necessary). We then write an empirical risk of the form
L(θ) = g(Ŷ (θ), Y ),
where Ŷ(θ) = (Φ(θ, x_1), …, Φ(θ, x_N)) ∈ R^N, Y = (y_1, …, y_N) ∈ R^N and g : R^N × R^N → R is a convex function. As θ varies, Ŷ(θ) defines a "predictor manifold" Y ⊂ R^N, which depends only on the input data x_1, …, x_N, but not on θ. The function L(θ) can be naturally seen as a composition
R^{d_θ} →^η Y →^g R,
where η(θ) = Ŷ(θ) ∈ Y. We may now distinguish again between "pure" and "spurious" critical points for L. In an underparameterized regime d_θ < N, or if the input data x_1, …, x_N is in some way special, then Y ⊊ R^N is a submanifold (with singularities), and critical points may arise because we are restricting g to Y (pure), or because of the parameterization map η (spurious). In a highly overparameterized regime d_θ ≫ N (which is usually the case in practice), we expect Y = R^N. This can be viewed as analogous to the "filling" situation described for linear networks in this paper. In particular, all critical points that are not global minima for L are necessarily spurious, since g|_Y = g is convex.
A.4 PROOF OF THEOREM 4
We first show Lemma 3 with the help of the following general observation:
Proposition 15. Let V_1^+ ⊆ V_2^+ ⊆ … ⊆ V_h^+ and V_1^− ⊇ V_2^− ⊇ … ⊇ V_h^− be vector spaces with dimensions r_i^+ := dim(V_i^+) and r_i^− := dim(V_i^−) for i = 1, …, h. Then we have

dim(V_1^+ ⊗ V_1^− + V_2^+ ⊗ V_2^− + … + V_h^+ ⊗ V_h^−) = ∑_{i=1}^h r_i^+ r_i^− − ∑_{i=1}^{h−1} r_i^+ r_{i+1}^−.
Proof. We prove this assertion by induction on h. The base case (h = 1) is clear: dim(V_1^+ ⊗ V_1^−) = r_1^+ r_1^−. For the induction step, we set V := (V_1^+ ⊗ V_1^−) + … + (V_{h−1}^+ ⊗ V_{h−1}^−). The key observation is that the inclusions V_1^+ ⊆ V_2^+ ⊆ … ⊆ V_h^+ and V_1^− ⊇ V_2^− ⊇ … ⊇ V_h^− imply that V ∩ (V_h^+ ⊗ V_h^−) = V_{h−1}^+ ⊗ V_h^−.
Hence, applying the induction hypothesis to V , we derive
dim(V + V_h^+ ⊗ V_h^−) = dim(V) + dim(V_h^+ ⊗ V_h^−) − dim(V_{h−1}^+ ⊗ V_h^−)
= ∑_{i=1}^{h−1} r_i^+ r_i^− − ∑_{i=1}^{h−2} r_i^+ r_{i+1}^− + r_h^+ r_h^− − r_{h−1}^+ r_h^−
= ∑_{i=1}^h r_i^+ r_i^− − ∑_{i=1}^{h−1} r_i^+ r_{i+1}^−.
Lemma 3. The dimension of the image of the differential dµ d at θ = (W h , . . . , W 1 ) is given by
rk(dµ_d(θ)) = ∑_{i=1}^{h} rk(W_{>i}) · rk(W_{<i}) − ∑_{i=1}^{h−1} rk(W_{>i}) · rk(W_{<i+1}),
where we use the convention that W <1 = I d0 , W >h = I d h are the identity matrices of size d 0 , d h .
Proof. The image of the differential dµ d (θ) is given in (5). Due to
Col(W) ⊆ Col(W_{>1}) ⊆ … ⊆ Col(W_{>h−1}) = Col(W_h),
Row(W) ⊆ Row(W_{<h}) ⊆ … ⊆ Row(W_{<2}) = Row(W_1),   (8)
we can apply Proposition 15, which concludes the proof.

Proposition 16. Let θ = (W_h, …, W_1) and W = µ_d(θ) with rk(W) = e. Then the image of dµ_d(θ) contains the tangent space T_W M_e ⊂ R^{d_h×d_0}.

Proof. Due to (8), the image (5) of dµ_d(θ) always contains R^{d_h} ⊗ Row(W) + Col(W) ⊗ R^{d_0} = T_W M_e. Furthermore, there always exists (W_h, …, W_1) ∈ µ_d^{−1}(W) such that each W_i has rank exactly r and the containments in (8) are all equalities. For example, one way to achieve this is to consider any decomposition W = UV^T where U ∈ R^{d_h×r} and V ∈ R^{d_0×r}, and then set
W_1 = [V | 0]^T,   W_h = [U | 0],   and   W_i = [I_r 0; 0 0] for 2 ≤ i ≤ h − 1,

where I_r is the (r × r)-identity matrix and the zeros fill in the dimensions (d_i × d_{i−1}) of W_i.
The next two propositions discuss the first part of Theorem 4, which distinguishes between the filling and the non-filling case. Proposition 17. Let r = min{d i } and θ = (W h , . . . , W 1 ). In the non-filling case (i.e., if r < min{d h , d 0 }) we have that rk(dµ d (θ)) < dim M r if and only if rk(µ d (θ)) < r.
Proof. If rk(µ d (θ)) = r, then Proposition 16 implies that the image of the differential dµ d (θ) is the whole tangent space of M r at µ d (θ). To prove the other direction of the assertion, we assume that rk(µ d (θ)) < r. Since r < min{d h , d 0 }, there is some i ∈ {1, . . . , h − 1} such that d i = r. We view µ d as the following concatenation of the matrix multiplication maps:
R^{d_h×d_{h−1}} × … × R^{d_1×d_0} →^{µ_{i,1}} R^{d_h×d_i} × R^{d_i×d_0} →^{µ_{i,2}} R^{d_h×d_0},   (9)
where µ_{i,1} = µ_{(d_h,…,d_i)} × µ_{(d_i,…,d_0)} and µ_{i,2} = µ_{(d_h,d_i,d_0)}. Since rk(µ_d(θ)) < r, we have that rk(W_{>i}) < r or rk(W_{<i+1}) < r. Without loss of generality, we may assume the latter. So applying Lemma 3 to µ_{i,2} and θ′ := µ_{i,1}(θ) yields

rk(dµ_d(θ)) ≤ rk(dµ_{i,2}(θ′)) = rk(W_{<i+1})(d_h − rk(W_{>i})) + rk(W_{>i})d_0
< r(d_h − rk(W_{>i})) + rk(W_{>i})d_0 = rk(W_{>i})(d_0 − r) + r·d_h ≤ r(d_0 − r) + r·d_h = dim(M_r).

Proposition 18. Let r = min{d_i} and θ = (W_h, …, W_1). In the filling case (i.e., if r = min{d_h, d_0}), the differential dµ_d(θ) has maximal rank d_h·d_0 if and only if, for every i ∈ {1, …, h − 1}, either rk(W_{>i}) = d_h or rk(W_{<i+1}) = d_0 holds.

Proof. Let us first assume that rk(W_{>i}) < d_h and rk(W_{<i+1}) < d_0 for some i ∈ {1, …, h − 1}.
We view µ_d as the concatenation of the matrix multiplication maps in (9). Applying Lemma 3 to µ_{i,2} and θ′ := µ_{i,1}(θ) yields

rk(dµ_d(θ)) ≤ rk(dµ_{i,2}(θ′)) = rk(W_{<i+1})(d_h − rk(W_{>i})) + rk(W_{>i})d_0
< d_0(d_h − rk(W_{>i})) + rk(W_{>i})d_0 = d_h d_0.
Secondly, we assume the contrary, i.e., that every i ∈ {1, . . . , h − 1} satisfies rk(W >i ) = d h or rk(W <i+1 ) = d 0 . We observe the following 2 key properties which hold for all i ∈ {1, . . . , h − 1}:
rk(W_{>i}) = d_h ⇒ ∀j ≥ i : rk(W_{>j}) = d_h,
rk(W_{<i+1}) = d_0 ⇒ ∀j ≤ i : rk(W_{<j+1}) = d_0.   (10)
We consider the index set I := {i ∈ {1, . . . , h − 1} | rk(W <i+1 ) = d 0 }. If I = ∅, our assumption implies that rk(W >i ) = d h for every i ∈ {1, . . . , h}. So due to Lemma 3 we have
rk(dµ_d(θ)) = d_h ∑_{i=1}^h rk(W_{<i}) − d_h ∑_{i=1}^{h−1} rk(W_{<i+1}) = d_h · rk(W_{<1}) = d_h d_0.
If I ≠ ∅, we define k := max I. So for every i ∈ {k + 1, …, h − 1} we have rk(W_{<i+1}) < d_0, and thus rk(W_{>i}) = d_h by our assumption. Moreover, due to (10), every j ∈ {0, …, k} satisfies rk(W_{<j+1}) = d_0. Hence, Lemma 3 yields

rk(dµ_d(θ)) = ∑_{j=1}^k rk(W_{>j})d_0 + d_h d_0 + ∑_{i=k+2}^h d_h rk(W_{<i}) − ∑_{j=1}^k rk(W_{>j})d_0 − ∑_{i=k+1}^{h−1} d_h rk(W_{<i+1}) = d_h d_0.
Example 19. According to Proposition 18, the differential of the matrix multiplication map is surjective whenever rk(W) = r, but also for certain θ when rk(W) < r. For example, let us consider the map µ_{(2,2,2)} : R^{2×2} × R^{2×2} → R^{2×2} and the two factorizations θ = ([1 1; 1 1], [1 0; 0 1]) and θ′ = ([1 0; 1 0], [1 1; 0 0]) of the rank-one matrix [1 1; 1 1]. According to Proposition 18, the differential dµ_{(2,2,2)}(θ) has maximal rank 4, so it is surjective, whereas dµ_{(2,2,2)}(θ′) is not. In fact, by Lemma 3, we have rk(dµ_{(2,2,2)}(θ′)) = 3.

Proof of Theorem 4. This is an amalgamation of Propositions 16, 17 and 18.
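A quick numerical check of Example 19 (ours, assuming numpy; the vec convention for assembling the differential is as before):

```python
import numpy as np

def jac(W2, W1):
    # Differential of (W2, W1) -> W2 @ W1, assembled as a 4 x 8 matrix.
    return np.hstack([np.kron(W1.T, np.eye(2)), np.kron(np.eye(2), W2)])

theta  = (np.array([[1., 1.], [1., 1.]]), np.eye(2))
theta_ = (np.array([[1., 0.], [1., 0.]]), np.array([[1., 1.], [0., 0.]]))
print(np.linalg.matrix_rank(jac(*theta)),    # 4: surjective
      np.linalg.matrix_rank(jac(*theta_)))   # 3: not surjective
```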
A.5 PROOF OF THEOREM 5
In the following we use the notation from Theorem 5:
Theorem 5. Let r = min{d_i}. If rk(W) = r, then the set of d-factorizations µ_d^{−1}(W) of W has exactly 2^b path-connected components, where b = #{i | d_i = r, 0 < i < h}. If rk(W) < r, then µ_d^{−1}(W) is always path-connected.
We also write GL + (r) for the set of matrices in GL(r) with positive determinant. Analogously, we set GL − (r) := {G ∈ GL(r) | det(G) < 0}.
We first prove Theorem 5 in the case that b = 0. To show that µ_d^{−1}(W) is path-connected in this case, we show the following stronger assertion: given two matrices W and W′ of arbitrary rank and factorizations θ ∈ µ_d^{−1}(W) and θ′ ∈ µ_d^{−1}(W′), each path in the codomain of µ_d from W to W′ can be lifted to a path in the domain of µ_d from θ to θ′.

Proposition 20 (Path Lifting Property). If b = 0, then for every W, W′ ∈ R^{d_y×d_x}, every θ ∈ µ_d^{−1}(W), every θ′ ∈ µ_d^{−1}(W′) and every continuous function f : [0, 1] → R^{d_y×d_x} with f(0) = W and f(1) = W′, there is a continuous function F : [−1, 2] → R^{d_y×d_{h−1}} × … × R^{d_1×d_x} such that F(−1) = θ, F(2) = θ′, µ_d(F(t)) = W for every t ∈ [−1, 0], µ_d(F(t)) = W′ for every t ∈ [1, 2], and µ_d(F(t)) = f(t) for every t ∈ [0, 1].
Proof. Without loss of generality, we may assume that d y ≤ d x . Then the assumption b = 0 means that d i > d y for all i = 1, . . . , h − 1.
We prove the assertion by induction on h. For the induction beginning, we consider the cases h = 1 and h = 2. If h = 1, then µ d is the identity and Proposition 20 is trivial. For h = 2, we construct explicit lifts of the given paths. We first show that there is a path in µ −1 d (W ) from θ = (W 2 , W 1 ) to some (B 2 , B 1 ) such that B 2 has full rank.
Claim 1. Let h = 2, W ∈ R dy×dx and (W 2 , W 1 ) ∈ µ −1 d (W ). Then there is (B 2 , B 1 ) ∈ µ −1 d (W ) with rk(B 2 ) = d y and a continuous function g : [0, 1] → µ −1 d (W ) such that g(0) = (W 2 , W 1 ) and g(1) = (B 2 , B 1 ).
Proof. If rk(W_2) = d_y, we have nothing to show. So we assume that s := rk(W_2) < d_y. Without loss of generality, we may further assume that the first s rows of W_2 have rank s. As s < d_1, we find a matrix G ∈ GL^+(d_1) such that W_2G = [I_s 0; M 0], where I_s ∈ R^{s×s} is the identity matrix and M ∈ R^{(d_y−s)×s}. Since GL^+(d_1) is path-connected, there is a continuous function g_1 : [0, 1] → GL^+(d_1) with g_1(0) = I_{d_1} and g_1(1) = G.
Concatenation with GL(d_1) → µ_d^{−1}(W), H ↦ (W_2H, H^{−1}W_1), yields a continuous path in µ_d^{−1}(W) from (W_2, W_1) to (W_2G, G^{−1}W_1). Since (W_2G)(G^{−1}W_1) = W, we see that G^{−1}W_1 = [W^{(s)}; N],
where W^{(s)} ∈ R^{s×d_x} consists of the first s rows of W and N ∈ R^{(d_1−s)×d_x}. Replacing N by an arbitrary matrix N′ still yields that (W_2G)[W^{(s)}; N′] = W. Hence, we find a continuous path g_2 in µ_d^{−1}(W) from (W_2G, G^{−1}W_1) to (W_2G, B_1 := [W^{(s)}; 0]).
Finally, we can replace the 0-columns in W_2G by arbitrary matrices M_1 ∈ R^{s×(d_1−s)} and M_2 ∈ R^{(d_y−s)×(d_1−s)} such that [I_s M_1; M M_2]B_1 = W still holds. In particular, we can pick M_1 = 0 and M_2 = [I_{d_y−s} | 0] such that B_2 := [I_s 0; M M_2] has full rank d_y, and we find a continuous path g_3 in µ_d^{−1}(W) from (W_2G, B_1) to (B_2, B_1). Putting g_1, g_2 and g_3 together yields a path g as desired in Claim 1. ♦
As B_2 has full rank, we find a matrix H ∈ GL^+(d_1) such that B_2H = [I_{d_y} | 0]. As in the proof of Claim 1, we construct a continuous path in µ_d^{−1}(W) from (B_2, B_1) to (B_2H, H^{−1}B_1). Since H^{−1}B_1 = [W; N] for some N ∈ R^{(d_1−d_y)×d_x}, and since the last d_1 − d_y columns of [I_{d_y} | 0] are zero, a straight line in the second factor yields a further path in µ_d^{−1}(W) to ([I_{d_y} | 0], [W; 0]). Concatenating all of these paths gives a continuous path F_1 from θ to ([I_{d_y} | 0], [W; 0]); in the same way we obtain a path F_3 from ([I_{d_y} | 0], [W′; 0]) to θ′. Finally, the path f is lifted via

F_2 : [0, 1] → R^{d_y×d_1} × R^{d_1×d_x},   t ↦ ([I_{d_y} | 0], [f(t); 0]),

such that putting F_1, F_2 and F_3 together yields a path F as desired in Proposition 20.
For the induction step, we view µ d as the concatenation of the following two matrix multiplication maps:
R^{d_y×d_{h−1}} × … × R^{d_1×d_x} →^{µ_{(d_h,…,d_1)} × id} R^{d_y×d_1} × R^{d_1×d_x} →^{µ_{(d_y,d_1,d_x)}} R^{d_y×d_x}.
We consider θ = (W_h, …, W_1) and θ′ = (W′_h, …, W′_1), as well as W_{>1} = W_h ⋯ W_2 and W′_{>1} = W′_h ⋯ W′_2. Given a path f : [0, 1] → R^{d_y×d_x} from W = W_{>1}W_1 to W′ = W′_{>1}W′_1, we apply the induction beginning (h = 2) to µ_{(d_y,d_1,d_x)} to get a path F_2 : [−0.5, 1.5] → R^{d_y×d_1} × R^{d_1×d_x} such that F_2(−0.5) = (W_{>1}, W_1), F_2(1.5) = (W′_{>1}, W′_1), µ_{(d_y,d_1,d_x)}(F_2(t)) = W for all t ∈ [−0.5, 0], µ_{(d_y,d_1,d_x)}(F_2(t)) = W′ for all t ∈ [1, 1.5] and µ_{(d_y,d_1,d_x)}(F_2(t)) = f(t) for all t ∈ [0, 1]. Now we apply the induction hypothesis on µ_{(d_h,…,d_1)} and the path from W_{>1} to W′_{>1} given by the first factor of F_2. This yields a path F_1 : [−1, 2] → R^{d_y×d_{h−1}} × … × R^{d_2×d_1} with F_1(−1) = (W_h, …, W_2), F_1(2) = (W′_h, …, W′_2), µ_{(d_h,…,d_1)}(F_1(t)) = W_{>1} for all t ∈ [−1, −0.5], µ_{(d_h,…,d_1)}(F_1(t)) = W′_{>1} for all t ∈ [1.5, 2], and µ_{(d_h,…,d_1)}(F_1(t)) is the first factor of F_2(t) for all t ∈ [−0.5, 1.5]. This allows us to define a continuous path F : [−1, 2] → R^{d_y×d_{h−1}} × … × R^{d_1×d_x} from θ to θ′ by setting F(t) = (F_1(t), W_1) for all t ∈ [−1, −0.5], F(t) = (F_1(t), W′_1) for all t ∈ [1.5, 2], and for all t ∈ [−0.5, 1.5] we let F(t) consist of F_1(t) and the second factor of F_2(t).
Corollary 21. If b = 0, then µ −1 d (W ) is path-connected for each W ∈ R dy×dx .
Proof. Apply Proposition 20 to the constant function f :
[0, 1] → R dy×dx , t → W .
Now we study the case b > 0. We write 0 < i 1 < . . . < i b < h for those indices i j such that d ij = r. Then we view µ d as the concatenation of the following two matrix multiplication maps:
R^{d_y×d_{h−1}} × … × R^{d_1×d_x} →^{µ_1} R^{d_y×d_{i_b}} × R^{d_{i_b}×d_{i_{b−1}}} × … × R^{d_{i_1}×d_x} →^{µ_2} R^{d_y×d_x},   (11)
where µ_1 = µ_{(d_h,…,d_{i_b})} × µ_{(d_{i_b},…,d_{i_{b−1}})} × … × µ_{(d_{i_1},…,d_0)} and µ_2 = µ_{(d_y,d_{i_b},…,d_{i_1},d_x)}. Applying the path lifting property described above to the map µ_1, we will show in Proposition 26 that µ_2^{−1}(W) and µ_d^{−1}(W) have the same number of (path-)connected components. So it remains to study the connected components of µ_2^{−1}(W). We can shortly write the map µ_2 as

µ_2 : R^{d_y×r} × (R^{r×r})^{b−1} × R^{r×d_x} → R^{d_y×d_x}.
In the case that rk(W ) = r, we use the following natural action of GL(r) b on µ −1 2 (W ):
GL(r)^b × µ_2^{−1}(W) → µ_2^{−1}(W),
((G_b, …, G_1), (A_{b+1}, …, A_1)) ↦ (A_{b+1}G_b, G_b^{−1}A_bG_{b−1}, …, G_1^{−1}A_1).   (12)
In fact, we show now that µ −1 2 (W ) is the orbit of any element under this action if rk(W ) = r. From this we will deduce in Corollaries 23 and 24 that µ −1 2 (W ) is homeomorphic to GL(r) b and thus has 2 b (path-)connected components if the matrix W has maximal rank r.
Proposition 22. Let b > 0 and θ = (A_{b+1}, …, A_1) ∈ R^{d_y×r} × (R^{r×r})^{b−1} × R^{r×d_x} such that W = µ_2(θ) has maximal rank r. Then µ_2^{−1}(W) is the orbit of θ under the action defined in (12), i.e.,

µ_2^{−1}(W) = {(A_{b+1}G_b, G_b^{−1}A_bG_{b−1}, …, G_1^{−1}A_1) | G_1, …, G_b ∈ GL(r)}.
Proof. One inclusion, namely "⊇", is trivial. We prove the other inclusion "⊆" by induction on b.
For the induction beginning (b = 1), we write W = [W_{11} W_{12}; W_{21} W_{22}] where W_{11} ∈ R^{r×r}. Without loss of generality, we may assume that rk(W_{11}) = r. Similarly, we write A_2 = [A_{21}; A_{22}] and A_1 = [A_{11} | A_{12}] where A_{i1} ∈ R^{r×r} for i = 1, 2. For (A′_2, A′_1) ∈ µ_2^{−1}(W), we write analogously A′_2 = [A′_{21}; A′_{22}] and A′_1 = [A′_{11} | A′_{12}]. Due to rk(W_{11}) = r, we have that rk(A_{21}) = r = rk(A′_{21}). Hence, there is a matrix G ∈ GL(r) such that A′_{21} = A_{21}G. This implies that A_{21}GA′_{11} = W_{11} = A_{21}A_{11}, so A′_{11} = G^{−1}A_{11}. Due to A_{21}A_{12} = W_{12} = A_{21}GA′_{12}, we get that A′_{12} = G^{−1}A_{12}. Finally, rk(W_{11}) = r implies that rk(A_{11}) = r, so A′_{22}G^{−1}A_{11} = W_{21} = A_{22}A_{11} shows A′_{22} = A_{22}G. Thus we have shown that A′_2 = A_2G and A′_1 = G^{−1}A_1.
For the induction step (b > 1), we consider (A′_{b+1}, …, A′_1) ∈ µ_2^{−1}(W) and A_{>1} = A_{b+1} ⋯ A_2, A′_{>1} = A′_{b+1} ⋯ A′_2 ∈ R^{d_y×r}. Now we can apply the induction beginning to find G_1 ∈ GL(r) such that A′_{>1} = A_{>1}G_1 and A′_1 = G_1^{−1}A_1. As A′_{>1} has rank r and A′_{b+1} ⋯ A′_2 = A′_{>1} = A_{b+1} ⋯ A_3(A_2G_1), we can apply the induction hypothesis to the map which multiplies the left-most b matrices. This yields G_b, …, G_2 ∈ GL(r) such that A′_{b+1} = A_{b+1}G_b, …, A′_3 = G_3^{−1}A_3G_2, A′_2 = G_2^{−1}(A_2G_1).
Corollary 23. If b > 0 and W ∈ R^{d_y×d_x} has maximal rank r, then µ_2^{−1}(W) is homeomorphic to GL(r)^b.

Proof. We fix θ = (A_{b+1}, …, A_1) ∈ µ_2^{−1}(W). The map ϕ : GL(r)^b → µ_2^{−1}(W) given by the action (12) on θ is continuous. We now construct its inverse. As rk(W) = r, we have that rk(A_i) = r for all i = 1, …, b + 1. Without loss of generality, we may assume that the first r rows of A_{b+1} have rank r. We write π : R^{d_y×r} → R^{r×r} for the projection which forgets the last d_y − r rows of a given matrix. For θ′ = (A′_{b+1}, …, A′_1) ∈ µ_2^{−1}(W), Proposition 22 shows that θ′ = ϕ(G_b, …, G_1) for some (G_b, …, G_1) ∈ GL(r)^b. So we have that

G_b = π(A_{b+1})^{−1}π(A′_{b+1}),   G_{b−1} = A_b^{−1}G_bA′_b,   …,   G_1 = A_2^{−1}G_2A′_2,

which defines a map

ψ : µ_2^{−1}(W) → GL(r)^b,
(A′_{b+1}, …, A′_1) ↦ (G(A′_{b+1}), A_b^{−1}G(A′_{b+1})A′_b, …, A_2^{−1} ⋯ A_b^{−1}G(A′_{b+1})A′_b ⋯ A′_2),

where G(A′_{b+1}) := π(A_{b+1})^{−1}π(A′_{b+1}). By construction, ψ is the inverse of ϕ. Since ψ is continuous, it is a homeomorphism between µ_2^{−1}(W) and GL(r)^b.
Corollary 24. If b > 0 and W ∈ R dy×dx has maximal rank r, then µ −1 2 (W ) has 2 b connected components. Each of these components is path-connected.
Proof. The group GL(r) has two connected components, namely GL^+(r) and GL^−(r). Both components are path-connected. Hence, GL(r)^b has 2^b connected components, each of them path-connected. By Corollary 23, the same holds for µ_2^{−1}(W).
To complete our understanding of the connected components of µ_2^{−1}(W), we consider the case that the matrix W ∈ R^{d_y×d_x} does not have maximal rank r. In that case, it turns out that µ_2^{−1}(W) is path-connected, which we show by constructing explicit paths between any two elements of µ_2^{−1}(W).

Proposition 25. Let W ∈ R^{d_y×d_x}. If b > 0 and rk(W) < r, then µ_2^{−1}(W) is path-connected.
Proof. We write W = [W_1; W_2] where W_1 ∈ R^{r×d_x}, and denote by e the rank of W. If rk(W_1) = e, then W_2 = MW_1 for some M ∈ R^{(d_y−r)×r}.

Claim 2. If b = 1, (A_2, A_1) ∈ µ_2^{−1}(W), rk(W_1) = e and W_2 = MW_1, then there is a continuous function f : [0, 1] → µ_2^{−1}(W) with f(0) = (A_2, A_1) and f(1) = ([I_r; M], W_1).
Proof. Since rk(W) < r, we have that rk(A_2) < r or rk(A_1) < r. If rk(A_2) < r, we proceed as in the proof of Claim 1 to find a path in µ_2^{−1}(W) from (A_2, A_1) to some (A′_2, A′_1) such that rk(A′_2) = r. Hence, we may assume that rk(A_2) = r. This implies that rk(A_1) = e. So K := ker(A_1^T) ⊆ R^r has positive dimension r − e. We write A_2 = [A_{21}; A_{22}] where A_{21} ∈ R^{r×r}, and denote by r_2 the rank of A_{21}. So the row space R ⊆ R^r of A_{21} has dimension r_2. We now show that K + R = R^r. To see this we set δ := dim(K ∩ R). Without loss of generality, we may assume that the first e rows of W_1 are linearly independent. Then the first e rows of A_{21} must also be linearly independent, so we may further assume that the first r_2 rows of A_{21} are linearly independent. We denote by A_{211} and W_{11} the matrices formed by the first r_2 rows of A_{21} and W_1, respectively. In particular, we have that A_{211}A_1 = W_{11}. Now we choose a basis (b_1, …, b_{r_2}) for R such that (b_1, …, b_δ) is a basis for K ∩ R. Since R is the row space of A_{211}, there is some G ∈ GL(r_2) such that the i-th row of GA_{211} is b_i. So the first δ rows of GW_{11} = GA_{211}A_1 are zero, which shows that e = rk(GW_{11}) ≤ r_2 − δ. Thus, dim(K + R) = (r − e) + r_2 − δ ≥ r, which proves that K + R = R^r.
If r_2 < r, we now show that there is a path in µ_2^{−1}(W) from (A_2, A_1) to (A′_2 = [A′_{21}; A_{22}], A_1) such that rk(A′_{21}) = r. We may assume again that the first r_2 rows a_1, …, a_{r_2} of A_{21} are linearly independent, i.e., that they form a basis for R. Since K + R = R^r, we can extend this basis to a basis (a_1, …, a_r) for R^r such that a_i ∈ K for all i > r_2. We define A′_{21} such that its first r_2 rows are a_1, …, a_{r_2} and such that its i-th row, for r_2 < i ≤ r, is the sum of a_i ∈ K and the i-th row of A_{21}. Then A′_2 = [A′_{21}; A_{22}] satisfies A′_2A_1 = W. Moreover, the straight line from (A_2, A_1) to (A′_2, A_1) is a path in µ_2^{−1}(W).
Since the last r − r_2 rows of A_{21} are contained in the linear span R of the first r_2 rows of A_{21}, the linearity of the determinant implies that det(A′_{21}) = det([a_1; ⋯; a_r]) ≠ 0.
Thus, we may assume that r_2 = r. If det(A_{21}) < 0, we now construct a path in µ_2^{−1}(W) from (A_2, A_1) to (A′_2 = [A′_{21}; A_{22}], A_1) such that det(A′_{21}) > 0. For this, we pick a vector v ∈ K \ {0}. Since the rows of A_{21} form a basis for R^r, there is an index i ∈ {1, …, r} such that the matrix D ∈ R^{r×r} obtained from A_{21} by replacing its i-th row with v has full rank. We pick µ ∈ R such that det(A_{21}) + µ det(D) > 0 and define A′_{21} to be the matrix obtained from A_{21} by adding µv to its i-th row. Then det(A′_{21}) = det(A_{21}) + µ det(D) > 0 and A′_2 = [A′_{21}; A_{22}] satisfies A′_2A_1 = W. Moreover, the straight line from (A_2, A_1) to (A′_2, A_1) is a path in µ_2^{−1}(W). Therefore, we may assume that det(A_{21}) > 0, so G := A_{21}^{−1} ∈ GL^+(r). Any path in GL^+(r) from I_r to G yields a path in µ_2^{−1}(W) from (A_2, A_1) to (A_2G, G^{−1}A_1) = ([I_r; A_{22}G], W_1). Since (A_{22}G)W_1 = W_2 = MW_1, the straight line from ([I_r; A_{22}G], W_1) to ([I_r; M], W_1) is a path in µ_2^{−1}(W). ♦
Claim 3. If θ = (A_{b+1}, …, A_1) ∈ µ_2^{−1}(W), rk(W_1) = e and W_2 = MW_1, then there is a continuous function F : [0, 1] → µ_2^{−1}(W) with F(0) = θ and F(1) = ([I_r; M], I_r, …, I_r, W_1).
Proof. As e < r, at least one of the A_i has rank smaller than r. We set k := min{i ∈ {1, …, b + 1} | rk(A_i) < r}. If k < b + 1, we first show that there is a path in µ_2^{−1}(W) from θ to some (A′_{b+1}, …, A′_1) such that min{i ∈ {1, …, b + 1} | rk(A′_i) < r} = b + 1. Since rk(A_k) < r, the rank of W′ := A_{k+1}A_k is smaller than r. We write W′ = [W′_1 | W′_2] where W′_1 has r columns. Without loss of generality, we may assume that rk(W′_1) = rk(W′). Then there is a matrix N such that W′_2 = W′_1N. Hence, we can apply the transposed version of Claim 2, which yields a path from (A_{k+1}, A_k) to (W′_1, [I_r | N]) in the set of factorizations of W′. Defining Ã_{k+1} := W′_1, Ã_k := [I_r | N] and Ã_i := A_i for all i ∈ {1, …, b + 1} \ {k, k + 1} extends this path to a path in µ_2^{−1}(W) from θ to (Ã_{b+1}, …, Ã_1) such that min{i ∈ {1, …, b + 1} | rk(Ã_i) < r} = k + 1. We note that this construction increased the number k, so we can repeat the construction until we reach (A′_{b+1}, …, A′_1) as desired.
Hence, we may assume that k = b + 1. Since rk(A_{b+1}) < r, the rank of W′ := A_{b+1}A_b is smaller than r. We write W′ = [W′_1; W′_2] where W′_1 has r rows. Since W_1 = W′_1A_{b−1} ⋯ A_1 and the matrices A_{b−1}, …, A_1 have rank r, we have that rk(W′_1) = rk(W_1) = e. Analogously, rk(W′) = e. So there is a matrix M′ such that W′_2 = M′W′_1. Applying Claim 2 yields a path from (A_{b+1}, A_b) to ([I_r; M′], W′_1) in the set of factorizations of W′, and hence a path in µ_2^{−1}(W) from θ to ([I_r; M′], W′_1, A_{b−1}, …, A_1). Repeating the whole argument for the remaining factors (W′_1, A_{b−1}, …, A_1), which form a factorization of W_1, and finally connecting [I_r; M′] to [I_r; M] by a straight line (which stays in µ_2^{−1}(W) since M′W_1 = W_2 = MW_1), we obtain a path in µ_2^{−1}(W) from θ to ([I_r; M], I_r, …, I_r, W_1). ♦

Now we finally show that µ_2^{−1}(W) is path-connected. Without loss of generality, we may assume that rk(W_1) = e, and we write W_2 = MW_1. For θ, θ′ ∈ µ_2^{−1}(W), there are paths in µ_2^{−1}(W) from θ resp. θ′ to ([I_r; M], I_r, …, I_r, W_1), so there is a path from θ to θ′ in µ_2^{−1}(W).
To settle the proof of Theorem 5, it is left to show that µ −1 2 (W ) and µ −1 d (W ) have indeed the same number of (path-)connected components, as we promised earlier.
Proposition 26. Let b > 0 and W ∈ R dy×dx . Then µ −1 d (W ) and µ −1 2 (W ) have the same number of connected components. Moreover, each of these components is path-connected.
Proof. Let the connected components of µ_2^{−1}(W) be denoted C_1, …, C_k. By Corollary 24 and Proposition 25, each of these components is path-connected. Using the notation in (11), we have that

µ_d^{−1}(W) = ⋃_{i=1}^k µ_1^{−1}(C_i).
Since the sets µ_1^{−1}(C_1), …, µ_1^{−1}(C_k) are pairwise disconnected, we see that µ_d^{−1}(W) has at least k disconnected components. It is left to show that each µ_1^{−1}(C_i) is path-connected. For this, let θ, θ′ ∈ µ_1^{−1}(C_i) and σ := µ_1(θ), σ′ := µ_1(θ′) ∈ C_i. As C_i is path-connected, there is a path in C_i from σ to σ′. The map µ_1 is a direct product of b + 1 matrix multiplication maps. To each factor we can apply Proposition 20, which yields a path in µ_1^{−1}(C_i) from θ to θ′.
Corollary 27. Let b > 0 and W ∈ R^{d_y×d_x}. If rk(W) = r, then µ_d^{−1}(W) has 2^b connected components, and each of these components is path-connected. If rk(W) < r, then µ_d^{−1}(W) is path-connected.
Proof. Combine Corollary 24 and Propositions 25 and 26.
Proof of Theorem 5. Theorem 5 is an amalgamation of Corollaries 21 and 27.
A.6 PROOFS OF PROPOSITIONS 6, 7, 9, 10 AND 14 Proposition 6. If θ is such that dµ d (θ) has maximal rank (see Theorem 4), then θ ∈ Crit(L) if and only if W ∈ Crit( | Mr ), and θ is a minimum (resp., saddle, maximum) for L if and only if W is a minimum (resp., saddle, maximum) for | Mr . If rk(W ) = r (which implies that dµ d (θ) has maximal rank) and θ ∈ Crit(L), then all d-factorizations of W also belong to Crit(L).
Proof. If µ_d is a local submersion at θ onto M_r, then there exists an open neighborhood U of W in M_r and an open neighborhood V of θ with the property that µ_d acts as a projection from V onto U (see, e.g., (Lee, 2003, Theorem 7.3)). From this, we easily deduce that θ is a minimum (resp. saddle, maximum) for L if and only if W = µ_d(θ) is a minimum (resp. saddle, maximum) for ℓ|_{M_r}. Finally, if rk(W) = r, then dµ_d(θ′) has maximal rank for all θ′ ∈ µ_d^{−1}(W) by Theorem 4.
Proposition 7. If θ ∈ Crit(L) with rk(W) = e ≤ r, then W ∈ Crit(ℓ|_{M_e}). In other words, if rk(W) < r, then θ ∈ Crit(L) implies that W is a critical point for the restriction of ℓ to a smaller determinantal variety M_e (which is in the singular locus of the functional space M_r in the non-filling case).
Proof. According to Theorem 4, if µ_d(θ) = W with rk(W) = e, then Im(dµ_d(θ)) ⊃ T_WM_e. This means that θ ∈ Crit(L) implies that W is critical for ℓ|_{M_e}.
Proposition 9. Let θ ∈ Crit(L) be such that rk(W) < r, and assume that dℓ(W) ≠ 0. Then, for any neighborhood U of θ, there exists θ′ in U such that µ_d(θ′) = W but θ′ ∉ Crit(L). In particular, θ is a saddle point.
Proof. Our proof is a modification of an argument used in Zhang (2019). Let us first consider the case that µ d is filling, so r = min{d h , d 0 }. Without loss of generality, we assume r = d 0 . Recall that the image of dµ d (θ) is given by
We first note that Row(W_{<h}) ≠ R^{d_0}, for otherwise dµ_d(θ) would be surjective, implying that dℓ(W) = 0, which contradicts our assumption. We define i = max{j | Row(W_{<j}) = R^{d_0}}, so 1 ≤ i < h (writing W_{<1} = I_{d_0}). We have that

dℓ(W)(Col(W_{>i}) ⊗ Row(W_{<i})) = dℓ(W)(Col(W_{>i}) ⊗ R^{d_0}) = 0.   (13)
Since Row(W_{<i+1}) ⊊ R^{d_0}, we may find w_i ∈ R^{d_i}, w_i ≠ 0, such that w_i^TW_i ⋯ W_1 = 0. We now fix ε > 0 and v_{i+1} ∈ R^{d_{i+1}} arbitrarily, and define W̃_{i+1} = W_{i+1} + ε(v_{i+1} ⊗ w_i). Clearly, we have that W̃_{i+1}W_i ⋯ W_1 = W_{i+1}W_i ⋯ W_1. If (W_h, …, W̃_{i+1}, W_i, …, W_1) is also a critical point of L, then

dℓ(W)(Col(W_h ⋯ W̃_{i+1}) ⊗ R^{d_0}) = 0.   (14)
Combining (13) and (14), we have that

dℓ(W)(Col(W_h ⋯ W_{i+2}(v_{i+1} ⊗ w_i)) ⊗ R^{d_0}) = 0.
If this were true for all v i+1 , then it would imply
dℓ(W)(Col(W_h ⋯ W_{i+2}) ⊗ R^{d_0}) = 0.   (15)
Hence, we have either found an arbitrarily small perturbation θ′ of θ as required in Proposition 9, or (15) must hold. In the latter case, we reapply the same argument for W̃_{i+2} = W_{i+2} + ε(v_{i+2} ⊗ w_{i+1}), where w_{i+1} ≠ 0 and w_{i+1}^TW_{i+1} ⋯ W_1 = 0.
Again, we can either construct an arbitrarily small perturbation θ′ of θ as required in Proposition 9, or we have dℓ(W)(Col(W_{>i+2}) ⊗ R^{d_0}) = 0. Proceeding this way we eventually arrive at dℓ(W)(R^{d_h} ⊗ R^{d_0}) = 0, so dℓ(W) = 0, which contradicts the hypothesis. Thus, at some point we must find an arbitrarily small perturbation θ′ of θ as required in Proposition 9, which concludes the proof in the case that µ_d is filling.
We now consider the case that µ_d is not filling. We pick i ∈ {1, …, h − 1} such that d_i = r, and write for simplicity A = W_h ⋯ W_{i+1} and B = W_i ⋯ W_1. The assumption rk(W) < r implies that rk(A) < r or rk(B) < r, and we assume without loss of generality that rk(A) < r. We define the map L_B(W_h, …, W_{i+1}) = ℓ(W_h ⋯ W_{i+1}B). We also introduce the map ℓ_B(A′) = ℓ(A′B) and the matrix multiplication map µ_{d_A} where d_A = (d_h, …, d_i), so that L_B = ℓ_B ∘ µ_{d_A}. If θ is a critical point for L, then θ_A = (W_h, …, W_{i+1}) must be a critical point for L_B. We are thus in the position to apply the analysis carried out in the filling case. In particular, we have that either θ_A can be perturbed to θ̃_A such that µ_{d_A}(θ̃_A) = µ_{d_A}(θ_A) but θ̃_A ∉ Crit(L_B), or dℓ_B(A) = 0.
In the former case, we have that θ′ = (θ̃_A, θ_B) is not a critical point for L, and we are done. If instead dℓ_B(A) = 0, then we have that

dℓ(W)(R^{d_h} ⊗ Row(B)) = 0,
because the image of the differential of the map A′ ↦ A′B is given by R^{d_h} ⊗ Row(B). We now proceed in a similar manner as before. We have that Col(W_{>i}) ⊊ R^{d_h}, because we assumed that W_{>i} = A has rank less than r ≤ d_h. Thus, we may find w_{i+1} ∈ R^{d_i}, w_{i+1} ≠ 0, such that W_h ⋯ W_{i+1}w_{i+1} = 0. We fix ε > 0 and v_i arbitrarily, and define W̃_i = W_i + ε(w_{i+1} ⊗ v_i). We have that W_h ⋯ W_{i+1}W̃_i = W_h ⋯ W_{i+1}W_i. If for all choices of v_i we have that (W_h, …, W̃_i, …, W_1)
is still a critical point for L, then we can deduce that
dℓ(W)(R^{d_h} ⊗ Row(W_{i−1} ⋯ W_1)) = 0.
Repeating this reasoning, we obtain our result as before.
Proposition 10. Let ℓ be any smooth convex function, and let L = ℓ ∘ µ_d. If θ is a non-global local minimum for L, then necessarily rk(W) = r (so θ is a pure critical point). In particular, L has non-global minima if and only if ℓ|_{M_r} has non-global minima.

Proof. The first statement follows immediately from Proposition 9: if θ ∈ Crit(L) is a non-global local minimum, then necessarily dℓ(W) ≠ 0, and we conclude that rk(W) = r. For the second statement, we observe that if ℓ is a convex function, then θ is a local minimum for L if and only if W = µ_d(θ) is a local minimum for ℓ|_{M_r}. Indeed, if W = µ_d(θ) is a local minimum for ℓ|_{M_r}, then it is always true that any θ′ ∈ µ_d^{−1}(W) is a local minimum. Conversely, if θ is a local minimum, then from Proposition 9 we see that either dℓ(W) = 0, in which case W is a (global) minimum because ℓ is convex, or W must have maximal rank. In the latter case, dµ_d(θ) would be surjective (by Theorem 4), so W would also be a local minimum for ℓ|_{M_r} (see Proposition 6). Finally, it is clear that θ is a global minimum for L if and only if W is a global minimum for ℓ|_{M_r}.
Proposition 14. If L′ = ℓ ∘ ν_d and L = ℓ ∘ µ_d, then the critical locus Crit(L′) is in "correspondence" with Crit(L) ∩ Ω, meaning that

{ν_d(θ′) | θ′ ∈ Crit(L′)} = {µ_d(θ) | θ ∈ Crit(L) ∩ Ω}.
Proof. Let us define
p : Ω → R^{d_θ},   (W_h, …, W_1) ↦ (W_h, W_{h−1}/‖W_{h−1}‖, …, W_1/‖W_1‖),
q : Ω → R^{d_θ},   (W_h, …, W_1) ↦ (W_h‖W_{h−1}‖ ⋯ ‖W_1‖, W_{h−1}/‖W_{h−1}‖, …, W_1/‖W_1‖).
The image of both of these maps is N = {(W_h, …, W_1) | ‖W_i‖ = 1, i = 1, …, h − 1}. In fact, both maps are submersions onto N. Since ν_d = µ_d ∘ p and µ_d ∘ q = µ_d|_Ω = ν_d ∘ q, it is enough to show the following two assertions: 1) θ′ ∈ Crit(L′) if and only if p(θ′) ∈ Crit(L); and 2) θ ∈ Crit(L) ∩ Ω if and only if q(θ) ∈ Crit(L′).
For 1), we deduce from L′ = L ∘ p that dL′(θ′) = dL(p(θ′)) ∘ dp(θ′) = 0 if dL(p(θ′)) = 0, but this also holds conversely: if dL′(θ′) = 0, then Im(dp(θ′)) is contained in Ker(dL(p(θ′))). Since q ∘ p = p and both maps p and q are submersions, we have that Im(dp(θ′)) = T_{p(θ′)}N = Im(dq(p(θ′))). Now it follows from L ∘ q = L|_Ω that dL(p(θ′)) = dL(p(θ′)) ∘ dq(p(θ′)) = 0. For 2), we can argue analogously, exchanging the roles of L and L′ as well as p and q.
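The identity µ_d ∘ q = µ_d|_Ω used in this proof holds simply because the rescaling factors are scalars; a quick numerical check (ours, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(5)
Ws = [rng.standard_normal(s) for s in [(2, 3), (3, 3), (3, 2)]]   # (W_h, W_2, W_1)

def mu(Ws):
    out = Ws[0]
    for W in Ws[1:]:
        out = out @ W
    return out

def q(Ws):
    # Rescale the first factor by the norms of the others and normalize those.
    c = np.prod([np.linalg.norm(W) for W in Ws[1:]])
    return [Ws[0] * c] + [W / np.linalg.norm(W) for W in Ws[1:]]

print(np.allclose(mu(q(Ws)), mu(Ws)))  # True: mu_d(q(theta)) = mu_d(theta) on Omega
```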
A.7 PROOF OF THEOREM 12
We consider a fixed matrix Q 0 ∈ R dy×dx and a singular value decomposition (SVD) Q 0 = U ΣV T .
Here U ∈ R^{d_y×d_y} and V ∈ R^{d_x×d_x} are orthogonal and Σ ∈ R^{d_y×d_x} is diagonal with decreasing diagonal entries σ_1, σ_2, …, σ_m where m = min(d_x, d_y). We also write shortly [m] := {1, …, m}.

Theorem 28. Every critical point of h_{Q_0}|_{M_r} at a smooth point of M_r is of the form UΣ_IV^T, where Q_0 = UΣV^T is a SVD and I ⊆ [m] with #(I) = r. Moreover, the local minima of h_{Q_0}|_{M_r} are exactly its global minima, namely those critical points that select r largest singular values of Q_0.

Proof. A smooth point P ∈ M_r is a critical point of h_{Q_0}|_{M_r} if and only if Q_0 − P is orthogonal to the tangent space of M_r at P, i.e., if (Q_0 − P) ∈ (T_PM_r)^⊥ = Col(P)^⊥ ⊗ Row(P)^⊥. If

P = ∑_{i=1}^r σ_i(u_i ⊗ v_i)   and   Q_0 − P = ∑_{j=1}^e σ′_j(u′_j ⊗ v′_j)

are SVD decompositions with σ_i ≠ 0 and σ′_j ≠ 0, the column spaces of P and Q_0 − P are spanned by the u_i and u′_j, respectively. Similarly, the row spaces of P and Q_0 − P are spanned by the v_i and v′_j, respectively. So P is a critical point if and only if the vectors u_i, u′_j and v_i, v′_j are orthonormal, i.e., if

Q_0 = P + (Q_0 − P) = ∑_{i=1}^r σ_i(u_i ⊗ v_i) + ∑_{j=1}^e σ′_j(u′_j ⊗ v′_j)

is a SVD of Q_0. This proves that the critical points are of the form UΣ_IV^T where Q_0 = UΣV^T is a SVD and I ⊆ [m] with #(I) = r.
Since h_{Q_0}(UΣ_IV^T) = ‖UΣ_{[m]\I}V^T‖² = ‖Σ_{[m]\I}‖² = ∑_{i∉I} σ_i², we see that the global minima are exactly the critical points selecting r of the largest singular values of Q_0, i.e., with I = [r]. It is left to show that there are no other local minima. For this, we consider a critical point P = UΣ_IV^T such that at least one selected singular value σ_i for i ∈ I is strictly smaller than σ_r. We will show now that P cannot be a local minimum. Since σ_i < σ_r, there is some j ∈ [r] such that j ∉ I. As above, we write u_k and v_k for the columns of U and V, such that Q_0 = ∑_{k=1}^m σ_k(u_k ⊗ v_k) and P = ∑_{k∈I} σ_k(u_k ⊗ v_k). We consider rotations in the planes spanned by u_i, u_j and v_i, v_j, respectively: for α ∈ [0, π/2], we set u^(α) = cos(α)u_i + sin(α)u_j and v^(α) = cos(α)v_i + sin(α)v_j. Note that u^(0) = u_i and u^(π/2) = u_j; analogously for v^(α). Next we define σ^(α) = cos²(α)σ_i + sin²(α)σ_j and P^(α) := P − σ_i(u_i ⊗ v_i) + σ^(α)(u^(α) ⊗ v^(α)) ∈ M_r. A direct computation gives h_{Q_0}(P^(α)) = h_{Q_0}(P) − 2σ_i(σ_j − σ_i)sin²(α) + O(sin⁴(α)), which is strictly smaller than h_{Q_0}(P) for small α > 0 since 0 < σ_i < σ_j. Hence P = P^(0) is not a local minimum.
If the singular values of Q_0 are pairwise distinct and positive, the singular vectors of Q_0 are unique up to sign. So for each index set I ⊆ [m] with #(I) = r, the matrix Q_I = UΣ_IV^T is the unique critical point of h_{Q_0}|_{M_r} whose singular values are the σ_i for i ∈ I. Hence, Theorem 28 implies immediately the following:

Corollary 29. If the singular values of Q_0 are pairwise distinct and positive, h_{Q_0}|_{M_r} has exactly (m choose r) critical points, namely the Q_I = UΣ_IV^T for I ⊆ [m] with #(I) = r. Moreover, its unique local and global minimum is Q_{[r]}.
We can strengthen this result by explicitly calculating the index of each critical point, i.e., the number of negative eigenvalues of the Hessian matrix. To prove this assertion, we may assume without loss of generality that d y ≤ d x , so m = d y . We may further assume that Q 0 is a diagonal matrix, so Q 0 = Σ. Let µ (dy,r,dx) : R dy×r × R r×dx → R dy×dx be the matrix multiplication map, and L = h Σ • µ (dy,r,dx) . For (A, B) ∈ µ −1 (dy,r,dx) (Σ I ), Theorem 4 implies that the condition Σ I ∈ Crit(h Σ | Mr ) is equivalent to dL(A, B) = 0. Moreover, the number of negative eigenvalues of the Hessian of L at any such factorization (A, B) of Σ I is the same. This number is the index of Σ I . So we can compute it by fixing one specific factorization (A, B) of Σ I .
To compute the Hessian of L at (A, B), we compute the partial derivatives of first and second order of L:
∂L/∂a_{ij} = 2[(AB − Σ)B^T]_{ij},   ∂L/∂b_{ij} = 2[A^T(AB − Σ)]_{ij},

∂²L/∂a_{ij}∂a_{kl} = 0 if i ≠ k, and = 2[BB^T]_{jl} if i = k,   (16)
∂²L/∂b_{ij}∂b_{kl} = 0 if j ≠ l, and = 2[A^TA]_{ik} if j = l,   (17)
∂²L/∂a_{ij}∂b_{kl} = 2a_{ik}b_{jl} if j ≠ k, and = 2(a_{ik}b_{jl} + [AB − Σ]_{il}) if j = k.   (18)
To assemble these second order partial derivatives into a matrix, we choose the following order of the entries of (A, B): a 11 , . . . , a 1r , a 21 , . . . , a 2r , . . . , a dy1 , . . . , a dyr , b 11 , . . . , b r1 , b 12 , . . . , b r2 , . . . , b 1dx , . . . , b rdx .
We denote by $H$ the Hessian matrix of $L$ with respect to this ordering at the following specifically chosen matrices $(A_0, B_0)$: denoting by $i_1, i_2, \ldots, i_r$ the entries of $I$ in decreasing order, we pick the $j$-th column of $A_0$ to be the $i_j$-th standard basis vector in $\mathbb{R}^{d_y}$ and the $j$-th row of $B_0$ to be the $\sigma_{i_j}$-multiple of the $i_j$-th standard basis vector in $\mathbb{R}^{d_x}$. Note that $A_0 B_0 = \Sigma_I$, $A_0^T A_0 = I_r$ is the $r \times r$ identity matrix, and $B_0 B_0^T$ is the $r \times r$ diagonal matrix with entries $\sigma_{i_1}^2, \sigma_{i_2}^2, \ldots, \sigma_{i_r}^2$. We write
$$H = \begin{pmatrix} D & M \\ M^T & N \end{pmatrix}, \quad \text{where } D \in \mathbb{R}^{r d_y \times r d_y},\; N \in \mathbb{R}^{r d_x \times r d_x},\; M \in \mathbb{R}^{r d_y \times r d_x}.$$
Our first observation is that $N$, whose entries are described by (17), is twice the identity matrix, so $N = 2 I_{r d_x}$. Similarly, we see from (16)

Now we fix $i$ and $j$ and consider the $ij$-th row of $M$. We apply our observations above to the following three cases.
If $i = i_j$, then $M_{ij,kl}$ is non-zero if and only if $k = j$ and $l = i$. In that case, $M_{ij,ji} = 2\sigma_i$.

If $i \in I$, but $i \neq i_j$, then there is some $n \neq j$ such that $i = i_n$. Now $M_{ij,kl}$ is non-zero if and only if $k = n$ and $l = i_j$. In that case, $M_{ij,n i_j} = 2\sigma_{i_j}$.

Finally, if $i \notin I$, then $M_{ij,kl}$ is non-zero if and only if $k = j$ and $l = i$. In that case, we have that $M_{ij,ji} = -2\sigma_i$.
Corollary 32. The square matrix $\Delta := M M^T \in \mathbb{R}^{r d_y \times r d_y}$ is a diagonal matrix. For $i \in [d_y]$ and $j \in [r]$, its $ij$-th diagonal entry is $\Delta_{ij,ij} = 4\sigma_{i_j}^2$ if $i \in I$ and $\Delta_{ij,ij} = 4\sigma_i^2$ if $i \notin I$.
Proof. The computation of the diagonal entries follows directly from Lemma 31. To see that all other entries of $\Delta$ are zero, we need to show that no column of $M$ has more than one non-zero entry. So let us assume for contradiction that the $kl$-th column of $M$ has non-zero entries in the $ij$-th row and in the $\bar{i}\bar{j}$-th row for $(i, j) \neq (\bar{i}, \bar{j})$.

If $i, \bar{i} \in I$, then Lemma 31 implies $i = i_k = \bar{i}$ and $i_j = l = i_{\bar{j}}$, which contradicts $(i, j) \neq (\bar{i}, \bar{j})$.

If $i, \bar{i} \notin I$, we see from Lemma 31 that $j = k = \bar{j}$ and $i = l = \bar{i}$, which contradicts $(i, j) \neq (\bar{i}, \bar{j})$.

Finally, if $i \in I$ and $\bar{i} \notin I$, then Lemma 31 yields that $\bar{i} = l = i_j \in I$; a contradiction.
Corollary 33. The characteristic polynomial of $H$ is
$$(t - 2)^{r|d_x - d_y|} \cdot t^{r^2} \cdot \prod_{k \in I}\left(t - 2(\sigma_k^2 + 1)\right)^r \cdot \prod_{i \in [m]\setminus I}\;\prod_{j \in I}\left(t^2 - 2t(\sigma_j^2 + 1) + 4(\sigma_j^2 - \sigma_i^2)\right). \tag{19}$$

Proof. Using Schur complements, we can compute the characteristic polynomial of $H$ as follows:
$$\begin{aligned} \chi_H(t) &= \det\left(t I_{r(d_x + d_y)} - H\right) \\ &= \det\left(t I_{r d_x} - 2 I_{r d_x}\right)\,\det\left((t I_{r d_y} - D) - M (t I_{r d_x} - 2 I_{r d_x})^{-1} M^T\right) \\ &= (t - 2)^{r d_x}\,\det\left((t I_{r d_y} - D) - (t - 2)^{-1}\Delta\right) \\ &= (t - 2)^{r(d_x - d_y)}\,\det\left((t - 2)(t I_{r d_y} - D) - \Delta\right). \end{aligned}$$
By Corollary 32, the matrix $(t - 2)(t I_{r d_y} - D) - \Delta$ is a diagonal matrix whose $ij$-th diagonal entry is $(t - 2)(t - D_{ij,ij}) - \Delta_{ij,ij}$. We write shortly $\delta_{ij} := \Delta_{ij,ij}$ and use the identity $D_{ij,ij} = 2\sigma_{i_j}^2$ to further derive
$$\begin{aligned} \chi_H(t) &= (t - 2)^{r(d_x - d_y)} \prod_{i=1}^{d_y}\prod_{j=1}^{r}\left((t - 2)(t - 2\sigma_{i_j}^2) - \delta_{ij}\right) \\ &= (t - 2)^{r(d_x - d_y)} \prod_{i=1}^{d_y}\prod_{j=1}^{r}\left(t^2 - 2t(\sigma_{i_j}^2 + 1) + (4\sigma_{i_j}^2 - \delta_{ij})\right) \\ &= (t - 2)^{r(d_x - d_y)} \prod_{i \in I}\prod_{j=1}^{r} t\left(t - 2(\sigma_{i_j}^2 + 1)\right) \cdot \prod_{i \in [d_y]\setminus I}\prod_{j=1}^{r}\left(t^2 - 2t(\sigma_{i_j}^2 + 1) + 4(\sigma_{i_j}^2 - \sigma_i^2)\right). \end{aligned}$$
The latter equality was derived by substituting specific values into the $\delta_{ij}$ according to Corollary 32. Rearranging the terms of this last expression of $\chi_H(t)$ yields (19).
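The following numerical check is our own addition (the sizes $d_y = 3$, $d_x = 4$, $r = 2$ and the index set are assumptions): it assembles the Hessian $H$ at the factorization $(A_0, B_0)$ of $\Sigma_I$ by finite differences, compares its spectrum with the roots predicted by (19), and confirms the index formula of Theorem 30.

```python
import numpy as np

dy, dx, r = 3, 4, 2
s = np.array([3.0, 2.0, 1.0])                    # singular values of Q_0 = Sigma
Sigma = np.pad(np.diag(s), ((0, 0), (0, dx - dy)))
I = [0, 2]                                        # 0-indexed index set I
A0 = np.zeros((dy, r)); B0 = np.zeros((r, dx))
for j, ij in enumerate(sorted(I, reverse=True)):  # entries of I in decreasing order
    A0[ij, j] = 1.0                               # j-th column of A_0 = e_{i_j}
    B0[j, ij] = s[ij]                             # j-th row of B_0 = sigma_{i_j} e_{i_j}

def L(theta):
    A = theta[:dy * r].reshape(dy, r); B = theta[dy * r:].reshape(r, dx)
    return np.sum((A @ B - Sigma) ** 2)

theta0 = np.concatenate([A0.ravel(), B0.ravel()])
n, e = theta0.size, 1e-3
E = np.eye(n) * e
H = np.array([[(L(theta0 + E[a] + E[b]) - L(theta0 + E[a] - E[b])
                - L(theta0 - E[a] + E[b]) + L(theta0 - E[a] - E[b])) / (4 * e * e)
               for b in range(n)] for a in range(n)])

# Roots of the characteristic polynomial (19).
roots = [2.0] * (r * (dx - dy)) + [0.0] * (r * r)
roots += [2 * (s[k] ** 2 + 1) for k in I for _ in range(r)]
for i in set(range(dy)) - set(I):
    for j in I:
        b, c = -2 * (s[j] ** 2 + 1), 4 * (s[j] ** 2 - s[i] ** 2)
        d = np.sqrt(b * b - 4 * c)                # positive by Lemma 34
        roots += [(-b - d) / 2, (-b + d) / 2]
eig = np.linalg.eigvalsh((H + H.T) / 2)
assert np.allclose(np.sort(eig), np.sort(roots), atol=1e-3)

# Theorem 30: the index equals #{(j, i) in I x ([m] \ I) with j > i}.
assert (eig < -1e-6).sum() == sum(j > i for j in I for i in set(range(dy)) - set(I))
```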
Lemma 34. Let $x, y > 0$. The polynomial $f(t) = t^2 - 2t(x + 1) + 4(x - y)$ has two real roots and at least one of them is positive. Moreover, $f(t)$ has a negative root if and only if $x < y$.
Proof. The roots of $f(t)$ are $x + 1 \pm \sqrt{(x + 1)^2 - 4(x - y)} = x + 1 \pm \sqrt{(x - 1)^2 + 4y}$. So the discriminant is positive and $f(t)$ has two real roots. Clearly, one of these is positive. The other one is negative if and only if $x + 1 < \sqrt{(x - 1)^2 + 4y}$, which is equivalent to $(x + 1)^2 < (x - 1)^2 + 4y$ and thus to $x < y$.
Proof of Theorem 30. It is left to count the number of negative roots of the univariate polynomial (19). All the linear factors of (19) have non-negative roots. The $ij$-th quadratic factor of (19), for $i \in [d_y]\setminus I$ and $j \in I$, has at most one negative root due to Lemma 34. Moreover, it has exactly one negative root if and only if $\sigma_j^2 < \sigma_i^2$, which is equivalent to $j > i$. Hence, the polynomial (19) has exactly $\#\{(j, i) \in I \times ([d_y]\setminus I) \mid j > i\}$ many negative roots.

Proof of Theorem 12. This is an amalgamation of Corollary 29 and Theorem 30.
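As a concrete illustration of the index formula (a small worked example we add here), take $m = 3$ and $r = 2$, so $h_{Q_0}|_{\mathcal{M}_2}$ has $\binom{3}{2} = 3$ critical points. For $I = \{1, 2\}$ there is no pair $(j, i)$ with $j \in I$, $i \notin I$ and $j > i$, so the index is $0$ and $Q_{\{1,2\}}$ is the global minimum of Corollary 29. For $I = \{1, 3\}$ the only such pair is $(3, 2)$, giving a saddle of index $1$, and for $I = \{2, 3\}$ the pairs $(2, 1)$ and $(3, 1)$ give a saddle of index $2$.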
Figure 3: Real critical points and local minima for random choices of $h_{Q_0}|_{\mathcal{M}_1}$ as defined in Example 13. The size of each disk is proportional to the number of instances we found with that number of critical points and local minima. This shows that linear networks with a convex loss may indeed have multiple non-global local minima. More details in Appendix A.2.
This expression yields 39 for $d_x = d_y = 3$, as mentioned in Example 13. For general $r$, formulas for the general ED degree of $\mathcal{M}_r$ involving Chern and polar classes can be found in Ottaviani et al. (2013); Draisma et al. (2013). A short algorithm to compute the general ED degree of $\mathcal{M}_r$ is given in Example 7.11 of Draisma et al. (2013); it uses a package for advanced intersection theory in the algebro-geometric software Macaulay2 (Grayson & Stillman, 2019).
that the number of complex critical points of the perturbed quadratic distance function $h_{\varphi, Q_0}$ restricted to $\mathcal{M}_1$ is the expected number 39. After that, we computed the number of real critical points and the number of local minima among them. Our results for 2000 iterations can be found in
Now we provide a proof for Theorem 4, starting from a refinement of the last statement.

Proposition 16. Let $r = \min\{d_i\}$, $\theta = (W_h, \ldots, W_1)$, $W = \mu_d(\theta)$, and $e = \mathrm{rk}\, W$. The image of the differential $d\mu_d$ at $\theta$ contains the tangent space $T_W \mathcal{M}_e$ of $\mathcal{M}_e$ at $W$. Furthermore, for every $W \in \mathcal{M}_e \setminus \mathcal{M}_{e-1}$ there exists $\theta'$ such that $\mu_d(\theta') = W$ and the image of $d\mu_d(\theta')$ is exactly $T_W \mathcal{M}_e$.
Theorem 4. Let $r = \min\{d_i\}$, $\theta = (W_h, \ldots, W_1)$, and $W = \mu_d(\theta)$.

• (Filling case) If $r = \min\{d_h, d_0\}$, the differential $d\mu_d(\theta)$ has maximal rank equal to $\dim \mathcal{M}_r = d_h d_0$ if and only if, for every $i \in \{1, 2, \ldots, h - 1\}$, either $\mathrm{rk}(W_{>i}) = d_h$ or $\mathrm{rk}(W_{<i+1}) = d_0$ holds.

• (Non-filling case) If $r < \min\{d_h, d_0\}$, the differential $d\mu_d(\theta)$ has maximal rank equal to $\dim \mathcal{M}_r = r(d_h + d_0 - r)$ if and only if $\mathrm{rk}(W) = r$.

Furthermore, in both situations, if $\mathrm{rk}(W) = e < r$, then the image of $d\mu_d(\theta)$ always contains the tangent space $T_W \mathcal{M}_e$ of $\mathcal{M}_e \subset \mathcal{M}_r$ at $W$.
$\left(\begin{smallmatrix} I_r \\ M \end{smallmatrix}\right), W_1)$ in the set of factorizations of $W$. This path can be extended to a path in $\mu_2^{-1}(W)$ from $\theta$ to $\theta' := \left(\left(\begin{smallmatrix} I_r \\ M \end{smallmatrix}\right), W_1', A_{b-1}, \ldots, A_1\right)$. Applying the same construction on $W' := W_1' A_{b-1}$ yields a path in $\mu_2^{-1}(W)$ from $\theta'$ to $\theta'' := \left(\left(\begin{smallmatrix} I_r \\ M \end{smallmatrix}\right), I_r, W_1' A_{b-1}, \ldots, A_1\right)$. We repeat the construction until $\left(\left(\begin{smallmatrix} I_r \\ M \end{smallmatrix}\right), I_r, \ldots, I_r, W_1\right)$ is reached. Since $M W_1 = W_2 = M' W_1$, the straight line from $\left(\left(\begin{smallmatrix} I_r \\ M \end{smallmatrix}\right), I_r, \ldots, I_r, W_1\right)$ to $\left(\left(\begin{smallmatrix} I_r \\ M' \end{smallmatrix}\right), I_r, \ldots, I_r, W_1\right)$ is a path in $\mu_2^{-1}(W)$. ♦
$[m] = \{1, 2, \ldots, m\}$ and denote by $\binom{[m]}{r}$ the set of all subsets of $[m]$ of cardinality $r$. For $I \in \binom{[m]}{r}$, we define $\Sigma_I \in \mathbb{R}^{d_y \times d_x}$ to be the diagonal matrix with entries $\sigma_{I,1}, \sigma_{I,2}, \ldots, \sigma_{I,m}$ where $\sigma_{I,i} = \sigma_i$ if $i \in I$ and $\sigma_{I,i} = 0$ otherwise. These matrices yield the critical points of the function $h_{Q_0}(P) = \|P - Q_0\|^2$ restricted to the determinantal variety $\mathcal{M}_r$.

Theorem 28. If $Q_0 \notin \mathcal{M}_r$, the critical points of $h_{Q_0}|_{\mathcal{M}_r}$ are all matrices of the form $U \Sigma_I V^T$ where $Q_0 = U \Sigma V^T$ is a SVD and $I \in \binom{[m]}{r}$. The local minima are the critical points with $I = [r]$. They are all global minima.

Proof. A matrix $P \in \mathcal{M}_r$ is a critical point if and only
Theorem 30. If the singular values of $Q_0$ are pairwise distinct and positive, the index of $Q_I$ as a critical point of $h_{Q_0}|_{\mathcal{M}_r}$ is $\mathrm{index}(Q_I) = \#\{(j, i) \in I \times ([m]\setminus I) \mid j > i\}$.
that $D$ is a diagonal matrix. According to our fixed ordering, the entries of $D$ are indexed by pairs $(ij, kl)$ of integers $i, k \in [d_y]$ and $j, l \in [r]$. With this, the diagonal entries of $D$ are $D_{ij,ij} = 2\sigma_{i_j}^2$. Analogously, the entries of $M$ are indexed by pairs $(ij, kl)$ of integers $i \in [d_y]$, $j, k \in [r]$ and $l \in [d_x]$.

Lemma 31. Let $i \in [d_y]$ and $j \in [r]$. The $ij$-th row of $M$ has exactly one non-zero entry. If $i \in I$, there is some $k \in [r]$ with $i = i_k$ and the non-zero entry is $M_{ij,k i_j} = 2\sigma_{i_j}$. Otherwise, so if $i \notin I$, the non-zero entry is $M_{ij,ji} = -2\sigma_i$.

Proof. The entries of $M$ are given by (18). We first observe that $(A_0)_{ik}(B_0)_{jl}$ is non-zero if and only if $i = i_k$ and $l = i_j$. Moreover, we have that $(A_0)_{i_k k}(B_0)_{j i_j} = \sigma_{i_j}$. Similarly, $[A_0 B_0 - \Sigma]_{il}$ is non-zero if and only if $i = l \notin I$. For $i \notin I$ we have that $[A_0 B_0 - \Sigma]_{ii} = -\sigma_i$.
Theorem 12. If the singular values of $Q_0$ are pairwise distinct and positive, $h_{Q_0}|_{\mathcal{M}_r}$ has exactly $\binom{m}{r}$ critical points, namely the matrices $Q_I = U \Sigma_I V^T$ with $\#(I) = r$. Moreover, its unique local and global minimum is $Q_{\{1,\ldots,r\}}$. More precisely, the index of $Q_I$ as a critical point of $h_{Q_0}|_{\mathcal{M}_r}$ (i.e., the number of negative eigenvalues of the Hessian matrix for any local parameterization) is $\mathrm{index}(Q_I) = \#\{(j, i) \in I \times I^c \mid j > i\}$, where $I^c = \{1, \ldots, m\}\setminus I$.
Haihao Lu and Kenji Kawaguchi. Depth Creates No Bad Local Minima. arXiv:1702.08580 [cs, math, stat], February 2017.

Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A Mean Field View of the Landscape of Two-Layers Neural Networks. arXiv:1804.06561 [cond-mat, stat], April 2018.

Giorgio Ottaviani, Pierre-Jean Spaenlehauer, and Bernd Sturmfels. Exact Solutions in Structured Low-Rank Approximation. arXiv:1311.2376 [cs, math, stat], November 2013.

Luca Venturi, Afonso S. Bandeira, and Joan Bruna. Spurious valleys in two-layer neural network optimization landscapes. arXiv preprint arXiv:1802.06384, 2018.

Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Global optimality conditions for deep neural networks. arXiv preprint arXiv:1707.02444, 2017.

Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Small nonlinearities in activation functions create bad local minima in neural networks. arXiv preprint arXiv:1802.03487, 2018.

Li Zhang. Depth creates no more spurious local minima. arXiv preprint arXiv:1901.09827, 2019.

Yi Zhou and Yingbin Liang. Critical Points of Neural Networks: Analytical Forms and Landscape Properties. arXiv:1710.11205 [cs, stat], October 2017.

A APPENDIX
A.1 DETERMINANTAL VARIETIES
Proposition 18. Let $r = \min\{d_i\}$ and $\theta = (W_h, \ldots, W_1)$. In the filling case (i.e., if $r = \min\{d_h, d_0\}$) we have that $\mathrm{rk}(d\mu_d(\theta)) < d_h d_0$ if and only if there is some $i \in \{1, \ldots, h - 1\}$ with $\mathrm{rk}(W_{>i}) < d_h$ and $\mathrm{rk}(W_{<i+1}) < d_0$.
This setting applies to both the empirical loss and the population loss.
Our setting can also be applied when $\ell$ includes a regularizer term defined in function space, e.g., $\ell(W) = \|WX - Y\|^2 + \lambda R(W)$.
Basically, it is enough to use $X$ to define a new metric $\langle M, N\rangle_X = \operatorname{tr}(M X X^T N^T)$ which replaces the standard Frobenius inner product. In this setting, the critical points in Theorem 12 are obtained from the SVD decomposition of $Q_0 X$.
This means that the algebraic equations corresponding to the vanishing of the differential have exactly 39 complex solutions.
$$\mathbb{R}^{d_h} \otimes \mathrm{Row}(W_{<h}) + \ldots + \mathrm{Col}(W_{>i}) \otimes \mathrm{Row}(W_{<i}) + \ldots + \mathrm{Col}(W_{>1}) \otimes \mathbb{R}^{d_0}.$$
], which concludes the proof.
Acknowledgements. We thank James Mathews for many helpful discussions in the beginning of this project. We are grateful to ICERM (NSF DMS-1439786 and the Simons Foundation grant 507536) for the hospitality during the academic year 2018/2019 where many ideas for this project were developed. MT and JB were partially supported by the Alfred P. Sloan Foundation, NSF RI-1816753, NSF CAREER CIF 1845360, and Samsung Electronics. KK was partially supported by the Knut and Alice Wallenberg Foundation within their WASP AI/Math initiative.
Shun-ichi Amari. Information Geometry and Its Applications, volume 194 of Applied Mathematical Sciences. Springer Japan, Tokyo, 2016. ISBN 978-4-431-55977-1 978-4-431-55978-8. doi: 10.1007/978-4-431-55978-8.

Pierre Baldi and Kurt Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53-58, January 1989. ISSN 08936080. doi: 10.1016/0893-6080(89)90014-2.

Lenaic Chizat and Francis Bach. On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport. arXiv:1805.09545 [cs, math, stat], May 2018.

Anna Choromanska, Mikael Henaff, Michaël Mathieu, Gérard Ben Arous, and Yann LeCun. The Loss Surfaces of Multilayer Networks. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2015, San Diego, California, USA, May 9-12, 2015.

Jan Draisma and Emil Horobet. The average number of critical rank-one approximations to a tensor. arXiv:1408.3507 [math], August 2014.

Jan Draisma, Emil Horobet, Giorgio Ottaviani, Bernd Sturmfels, and Rekha R. Thomas. The Euclidean distance degree of an algebraic variety. arXiv:1309.0049 [math], August 2013.

Daniel R. Grayson and Michael E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at http://www.math.uiuc.edu/Macaulay2/, 2019.

Moritz Hardt and Tengyu Ma. Identity Matters in Deep Learning. arXiv:1611.04231 [cs, stat], November 2016.

Joe Harris. Algebraic Geometry: A First Course. Number 133 in Graduate Texts in Mathematics. Springer, New York, corr. 3rd print edition, 1995. ISBN 978-0-387-97716-4.

Kenji Kawaguchi. Deep Learning without Poor Local Minima. CoRR, abs/1605.07110, 2016.

Joe Kileel, Matthew Trager, and Joan Bruna. On the expressive power of deep polynomial neural networks. arXiv preprint arXiv:1905.12207, 2019.

Thomas Laurent and James von Brecht. Deep linear neural networks with arbitrary loss: All local minima are global. arXiv:1712.01473 [cs, stat], December 2017.

John M. Lee. Introduction to Smooth Manifolds. Number 218 in Graduate Texts in Mathematics. Springer, New York, 2003. ISBN 978-0-387-95495-0 978-0-387-95448-6. |
236,170,938 | EFFICIENT NEURAL CAUSAL DISCOVERY WITHOUT ACYCLICITY CONSTRAINTS | Learning the structure of a causal graphical model using both observational and interventional data is a fundamental problem in many scientific fields. A promising direction is continuous optimization for score-based methods, which, however, require constrained optimization to enforce acyclicity or lack convergence guarantees. In this paper, we present ENCO, an efficient structure learning method for directed, acyclic causal graphs leveraging observational and interventional data. ENCO formulates the graph search as an optimization of independent edge likelihoods, with the edge orientation being modeled as a separate parameter. Consequently, we provide for ENCO convergence guarantees when interventions on all variables are available, without having to constrain the score function with respect to acyclicity. In experiments, we show that ENCO can efficiently recover graphs with hundreds of nodes, an order of magnitude larger than what was previously possible, while handling deterministic variables and discovering latent confounders. * Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. In this work, we address both problems. By modeling the orientation of an edge as a separate parameter, we can define the score function without any acyclicity constraints or regularizers. This allows for unbiased low-variance gradient estimators that scale learning to much larger graphs. Yet, if we are able to intervene on all variables, the proposed optimization is guaranteed to converge to the correct, acyclic graph. Importantly, since such interventions might not always be available, we show that our algorithm performs robustly even when intervening on fewer variables and having small sample sizes. We call our method ENCO for Efficient Neural Causal Discovery. We make the following four contributions. Firstly, we propose ENCO, a causal structure learning method for observational and interventional data using continuous optimization. Different from recent methods, ENCO models the edge orientation as a separate parameter. Secondly, we derive unbiased, low-variance gradient estimators, which is crucial for scaling up the model to large numbers of variables. Thirdly, we show that ENCO is guaranteed to converge to the correct causal graph if interventions on all variables are available, despite not having any acyclicity constraints. Yet, we show in practice that the algorithm works on partial intervention sets as well. Fourthly, we extend ENCO to detecting latent confounders. In various experimental settings, ENCO recovers graphs accurately, making less than one error on graphs with 1,000 variables in less than nine hours of computation. | [
184486852,
59413789
] | EFFICIENT NEURAL CAUSAL DISCOVERY WITHOUT ACYCLICITY CONSTRAINTS
Phillip Lippe [email protected]
QUVA Lab, University of Amsterdam

Taco Cohen
Qualcomm AI Research

Efstratios Gavves [email protected]
QUVA Lab, University of Amsterdam
EFFICIENT NEURAL CAUSAL DISCOVERY WITHOUT ACYCLICITY CONSTRAINTS
Learning the structure of a causal graphical model using both observational and interventional data is a fundamental problem in many scientific fields. A promising direction is continuous optimization for score-based methods, which, however, require constrained optimization to enforce acyclicity or lack convergence guarantees. In this paper, we present ENCO, an efficient structure learning method for directed, acyclic causal graphs leveraging observational and interventional data. ENCO formulates the graph search as an optimization of independent edge likelihoods, with the edge orientation being modeled as a separate parameter. Consequently, we provide for ENCO convergence guarantees when interventions on all variables are available, without having to constrain the score function with respect to acyclicity. In experiments, we show that ENCO can efficiently recover graphs with hundreds of nodes, an order of magnitude larger than what was previously possible, while handling deterministic variables and discovering latent confounders. * Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. In this work, we address both problems. By modeling the orientation of an edge as a separate parameter, we can define the score function without any acyclicity constraints or regularizers. This allows for unbiased low-variance gradient estimators that scale learning to much larger graphs. Yet, if we are able to intervene on all variables, the proposed optimization is guaranteed to converge to the correct, acyclic graph. Importantly, since such interventions might not always be available, we show that our algorithm performs robustly even when intervening on fewer variables and having small sample sizes. We call our method ENCO for Efficient Neural Causal Discovery. We make the following four contributions. Firstly, we propose ENCO, a causal structure learning method for observational and interventional data using continuous optimization. Different from recent methods, ENCO models the edge orientation as a separate parameter. Secondly, we derive unbiased, low-variance gradient estimators, which is crucial for scaling up the model to large numbers of variables. Thirdly, we show that ENCO is guaranteed to converge to the correct causal graph if interventions on all variables are available, despite not having any acyclicity constraints. Yet, we show in practice that the algorithm works on partial intervention sets as well. Fourthly, we extend ENCO to detecting latent confounders. In various experimental settings, ENCO recovers graphs accurately, making less than one error on graphs with 1,000 variables in less than nine hours of computation.
INTRODUCTION
Uncovering and understanding causal mechanisms is an important problem not only in machine learning (Schölkopf et al., 2021;Pearl, 2009) but also in various scientific disciplines such as computational biology (Friedman et al., 2000;Sachs et al., 2005), epidemiology (Robins et al., 2000;Vandenbroucke et al., 2016), and economics (Pearl, 2009;Hicks et al., 1980). A common task of interest is causal structure learning (Pearl, 2009;Peters et al., 2017), which aims at learning a directed acyclic graph (DAG) in which edges represent causal relations between variables. While observational data alone is in general not sufficient to identify the DAG (Yang et al., 2018;Hauser & Bühlmann, 2012), interventional data can improve identifiability up to finding the exact graph (Eberhardt et al., 2005;Eberhardt, 2008). Unfortunately, the solution space of DAGs grows super-exponentially with the variable count, requiring efficient methods for large graphs. Current methods are typically applied to a few dozen variables and do not scale much further, although such scaling is imperative for modern applications like learning causal relations from gene-editing interventions (Dixit et al., 2016;Macosko et al., 2015).
A promising new direction for scaling up DAG discovery methods are continuous-optimization methods (Zheng et al., 2018;2020;Zhu et al., 2020;Ke et al., 2019;Brouillard et al., 2020;Yu et al., 2019). In contrast to score-based and constraint-based (Peters et al., 2017;Guo et al., 2020) methods, continuous-optimization methods reinterpret the search over discrete graph topologies as a continuous problem with neural networks as function approximators, to which efficient solvers can be applied. To restrict the search space to acyclic graphs, Zheng et al. (2018) first proposed to view the search as a constrained optimization problem using an augmented Lagrangian procedure to solve it. While several improvements have been explored (Zheng et al., 2020;Brouillard et al., 2020;Yu et al., 2019), constrained optimization methods remain slow and hard to train. Alternatively, it is possible to apply a regularizer (Zhu et al., 2020;Ke et al., 2019) to penalize cyclic graphs. While simpler to optimize, methods relying on acyclicity regularizers commonly lack guarantees for finding the correct causal graph, often converging to suboptimal solutions. Despite the advances, beyond linear, continuous settings (Ng et al., 2020;Varando, 2020), continuous optimization methods still cannot scale to more than 100 variables, often due to difficulties in enforcing acyclicity.
BACKGROUND AND RELATED WORK
CAUSAL GRAPHICAL MODELS
A causal graphical model (CGM) is defined by a distribution $P$ over a set of random variables $X = \{X_1, \ldots, X_N\}$ and a directed acyclic graph (DAG) $G = (V, E)$. Each node $i \in V$ corresponds to the random variable $X_i$ and an edge $(i, j) \in E$ represents a direct causal relation from variable $X_i$ to $X_j$: $X_i \to X_j$. A joint distribution $P$ is faithful to a graph $G$ if all and only the conditional independencies present in $P$ are entailed by $G$ (Pearl, 1988). Vice versa, a distribution $P$ is Markov to a graph $G$ if the joint distribution can be factorized as $p(X) = \prod_{i=1}^{N} p_i(X_i \mid \mathrm{pa}(X_i))$, where $\mathrm{pa}(X_i)$ is the set of parents of the node $i$ in $G$. An important concept which distinguishes CGMs from standard Bayesian Networks are interventions. An intervention on a variable $X_i$ describes the local replacement of its conditional distribution $p_i(X_i \mid \mathrm{pa}(X_i))$ by a new distribution $\tilde{p}(X_i \mid \mathrm{pa}(X_i))$. $X_i$ is thereby referred to as the intervention target. An intervention is "perfect" when the new distribution is independent of the original parents, i.e., $\tilde{p}(X_i \mid \mathrm{pa}(X_i)) = \tilde{p}(X_i)$.
CAUSAL STRUCTURE LEARNING
Discovering the graph G from samples of a joint distribution P is called causal structure learning or causal discovery, a fundamental problem in causality (Pearl, 2009;Peters et al., 2017). While often performed from observational data, i.e. samples from P (see Glymour et al. (2019) for an overview), we focus in this paper on algorithms that recover graphs from joint observational and interventional data. Commonly, such methods are grouped into constraint-based and score-based approaches.
Constraint-based methods use conditional independence tests to identify causal relations (Monti et al., 2019;Spirtes et al., 2000;Kocaoglu et al., 2019;Jaber et al., 2020;Sun et al., 2007;Hyttinen et al., 2014). For instance, the Invariant Causal Prediction (ICP) algorithm (Peters et al., 2016;Heinze-Deml et al., 2018) exploits that causal mechanisms remain unchanged under an intervention except the one intervened on (Pearl, 2009;Schölkopf et al., 2012). We rely on a similar idea by testing for mechanisms that generalize from the observational to the interventional setting. Another line of work is to extend methods working on observations only to interventions by incorporating those as additional constraints in the structure learning process (Mooij et al., 2020;Jaber et al., 2020).
Score-based methods, on the other hand, search through the space of all possible causal structures with the goal of optimizing a specified metric (Tsamardinos et al., 2006;Ke et al., 2019;Goudet et al., 2017;Zhu et al., 2020). This metric, also referred to as score function, is usually a combination of how well the structure fits the data, for instance in terms of log-likelihood, as well as regularizers for encouraging sparsity. Since the search space of DAGs is super-exponential in the number of nodes, many methods rely on a greedy search, yet returning graphs in the true equivalence class (Meek, 1997;Hauser & Bühlmann, 2012;Wang et al., 2017;Yang et al., 2018). For instance, GIES (Hauser & Bühlmann, 2012) repeatedly adds, removes, and flips the directions of edges in a proposal graph
Figure 1: Visualization of the two training stages of ENCO, distribution fitting and graph fitting, on an example graph with 3 variables ($X_1, X_2, X_3$). The graph on the right further shows how the parameters $\gamma$ and $\theta$ correspond to edge probabilities. We learn those parameters by comparing multiple graph samples on how well they generalize from observational to interventional data.
until no higher-scoring graph can be found. The Interventional Greedy SP (IGSP) algorithm (Wang et al., 2017) is a hybrid method using conditional independence tests in its score function.
Continuous-optimization methods are score-based methods that avoid the combinatorial greedy search over DAGs by using gradient-based methods (Zheng et al., 2018;Ke et al., 2019;Yu et al., 2019;Zheng et al., 2020;Zhu et al., 2020;Brouillard et al., 2020). Thereby, the adjacency matrix is parameterized by weights that represent linear factors or probabilities of having an edge between a pair of nodes. The main challenge of such methods is how to limit the search space to acyclic graphs. One common approach is to view the search as a constrained optimization problem and deploy an augmented Lagrangian procedure to solve it (Zheng et al., 2018;2020;Yu et al., 2019;Brouillard et al., 2020), including NOTEARS (Zheng et al., 2018) and DCDI (Brouillard et al., 2020). Alternatively, Ke et al. (2019) propose to use a regularization term penalizing cyclic graphs while allowing unconstrained optimization. However, the regularizer must be designed and weighted such that the correct, acyclic causal graph is the global optimum of the score function.
EFFICIENT NEURAL CAUSAL DISCOVERY
SCOPE AND ASSUMPTIONS
We consider the task of finding a directed acyclic graph G = (V, E) with N variables of an unknown CGM given observational and interventional samples. Firstly, we assume that: (1) The CGM is causally sufficient, i.e., all common causes of variables are included and observable;
(2) We have N interventional datasets, each sparsely intervening on a different variable; (3) The interventions are "perfect" and "stochastic", meaning the intervention does not set the variable necessarily to a single value. Thereby, we do not strictly require faithfulness, thus also recovering some graphs violating faithfulness. We emphasize that we place no constraints on the domains of the variables (they can be discrete, continuous, or mixed) or the distributions of the interventions. We discuss later how to extend the algorithm to infer causal mechanisms in graphs with latent confounding causal variables. Further, we discuss how to extend the algorithm to support interventions to subsets of variables only.
OVERVIEW
ENCO learns a causal graph from observational and interventional data by modelling a probability for every possible directed edge between pairs of variables. The goal is that the probabilities corresponding to the edges of the ground truth graph converge to one, while the probabilities of all other edges converge to zero. For this to happen, we exploit the idea of independent causal mechanisms (Pearl, 2009;Peters et al., 2016), according to which the conditional distributions for all variables in the ground-truth CGM stay invariant under an intervention, except for the intervened ones. By contrast, for graphs modelling the same joint distribution but with a flipped or additional edge, this does not hold (Peters et al., 2016). In short, we search for the graph which generalizes best from observational to interventional data. To implement the optimization, we alternate between two learning stages, that is distribution fitting and graph fitting, visually summarized in Figure 1.
Distribution fitting trains a neural network $f_{\phi_i}$ per variable $X_i$, parameterized by $\phi_i$, to model its observational, conditional data distribution $p(X_i|...)$. The inputs to the network are all other variables, $X_{-i}$. For simplicity, we want this neural network to model the conditional of the variable $X_i$ with respect to any possible set of parent variables. We therefore apply a dropout-like scheme to the input to simulate different sets of parents, similar to (Ke et al., 2019;Ivanov et al., 2019;Li et al., 2020;Brouillard et al., 2020). In that case, during training, we randomly set an input variable $X_j$ to zero based on the probability of its corresponding edge $X_j \to X_i$, and minimize
$$\min_{\phi_i} \mathbb{E}_X\,\mathbb{E}_M\left[-\log f_{\phi_i}(X_i;\, M_{-i} \odot X_{-i})\right], \tag{1}$$
where $M_j \sim \mathrm{Ber}(p(X_j \to X_i))$. For categorical random variables $X_i$, we apply a softmax output activation for $f_{\phi_i}$, and for continuous ones, we use Normalizing Flows (Rezende & Mohamed, 2015).
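To make the masking scheme concrete, here is a minimal PyTorch sketch of the distribution-fitting step in Equation 1 for categorical variables. It is our own illustration: the network sizes, optimizer settings, and helper names are assumptions, not the exact architecture of the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N, K, hidden = 5, 10, 64              # N variables with K categories each (assumed)
edge_prob = torch.full((N, N), 0.5)   # current estimates of p(X_j -> X_i)

# One conditional network f_{phi_i} per variable, fed all (masked) other variables.
nets = [nn.Sequential(nn.Linear(N * K, hidden), nn.LeakyReLU(),
                      nn.Linear(hidden, K)) for _ in range(N)]
opts = [torch.optim.Adam(net.parameters(), lr=2e-3) for net in nets]

def distribution_fitting_step(batch):
    """batch: LongTensor of shape (B, N) with observational samples."""
    one_hot = F.one_hot(batch, K).float()                        # (B, N, K)
    for i in range(N):
        # Dropout-like mask M_j ~ Ber(p(X_j -> X_i)); X_i itself is always hidden.
        mask = (torch.rand(batch.size(0), N) < edge_prob[:, i]).float()
        mask[:, i] = 0.0
        inp = (one_hot * mask.unsqueeze(-1)).flatten(1)          # M_{-i} * X_{-i}
        loss = F.cross_entropy(nets[i](inp), batch[:, i])        # -log f_{phi_i}
        opts[i].zero_grad(); loss.backward(); opts[i].step()
```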
Graph fitting uses the learned networks to score and compare different graphs on interventional data. For parameterizing the edge probabilities, we use two sets of parameters: γ ∈ R N ×N represents the existence of edges in a graph, and θ ∈ R N ×N the orientation of the edges. The likelihood of an edge is determined by p(X i → X j ) = σ(γ ij ) · σ(θ ij ), with σ(...) being the sigmoid function and θ ij = −θ ji . The probability of the two orientations always sum to one. The benefit of separating the edge probabilities into two independent parameters γ and θ is that it gives us more control over the gradient updates. The existence of an (undirected) edge can usually be already learned from observational or arbitrary interventional data alone, excluding deterministic variables (Pearl, 2009). In contrast, the orientation can only be reliably detected from data for which an intervention is performed on its adjacent nodes, i.e., X i or X j for learning θ ij . While other interventions eventually provide information on the edge direction, e.g., intervening on a node X k which is a child of X i and a parent of X j , we do not know the relation of X k to X i and X j at this stage, as we are in the process of learning the structure. Despite having just one variable for the orientation, γ ij and γ ji are learned as two separate parameters. One reason is that on interventional data, an edge can improve the log-likelihood estimate in one direction, but not necessarily the other, leading to conflicting gradients.
We optimize the graph parameters $\gamma$ and $\theta$ by minimizing
$$\mathcal{L} = \mathbb{E}_{\hat{I} \sim p_I(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\,\mathbb{E}_{p_{\gamma,\theta}(C)}\left[\sum_{i=1}^{N} \mathcal{L}_C(X_i)\right] + \lambda_{\mathrm{sparse}} \sum_{i=1}^{N}\sum_{j=1}^{N} \sigma(\gamma_{ij}) \cdot \sigma(\theta_{ij}) \tag{2}$$
where $p_I(I)$ is the distribution over which variable to intervene on (usually uniform), and $\tilde{p}_{\hat{I}}(X)$ the joint distribution of all variables under the intervention $\hat{I}$. In other words, these two distributions represent our interventional data distribution. With $p_{\gamma,\theta}(C)$, we denote the distribution over adjacency matrices $C$ under $\gamma, \theta$, where $C_{ij} \sim \mathrm{Ber}(\sigma(\gamma_{ij})\sigma(\theta_{ij}))$. $\mathcal{L}_C(X_i)$ is the negative log-likelihood estimate of variable $X_i$ conditioned on the parents according to $C$: $\mathcal{L}_C(X_i) = -\log f_{\phi_i}(X_i;\, C_{\cdot,i} \odot X_{-i})$. The second term of Equation 2 is an $\ell_1$-regularizer on the edge probabilities. It acts as a prior, selecting the sparsest graph of those with similar likelihood estimates by removing redundant edges.
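The edge parameterization itself is compact. Below is a small sketch (our addition, with assumed names) of how $\gamma$, an antisymmetric $\theta$, and graph samples $C \sim p_{\gamma,\theta}(C)$ can be represented:

```python
import torch

N = 5
gamma = torch.nn.Parameter(torch.zeros(N, N))       # edge existence logits
theta_free = torch.nn.Parameter(torch.zeros(N, N))  # free orientation parameters

def edge_probabilities():
    # Enforce theta_ij = -theta_ji via the upper triangle of theta_free,
    # so the probabilities of the two orientations always sum to one.
    upper = torch.triu(theta_free, diagonal=1)
    theta = upper - upper.T
    return torch.sigmoid(gamma) * torch.sigmoid(theta) * (1 - torch.eye(N))

def sample_graphs(K):
    """Sample K adjacency matrices with C_ij ~ Ber(sigma(gamma_ij) sigma(theta_ij))."""
    p = edge_probabilities()
    return torch.bernoulli(p.unsqueeze(0).expand(K, N, N).contiguous())
```

Gradients do not flow through the Bernoulli samples; they are instead estimated with the score-function estimators of Equations 3 and 4 below.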
Prediction. Alternating between the distribution and graph fitting stages allows us to fine-tune the neural networks to the most probable parent sets along the training. After training, we obtain a graph prediction by selecting the edges for which $\sigma(\gamma_{ij})$ and $\sigma(\theta_{ij})$ are greater than 0.5. The orientation parameters prevent loops between any two variables, since $\sigma(\theta_{ij})$ can only be greater than 0.5 in one direction. Although the orientation parameters do not guarantee the absence of loops with more variables, we show that under certain conditions ENCO yet converges to the correct, acyclic graph.
LOW-VARIANCE GRADIENT ESTIMATORS FOR EDGE PARAMETERS
To update γ and θ based on Equation 2, we need to determine their gradients through the expectation E p γ,θ (C) , where C is a discrete variable. For this, we apply REINFORCE (Williams, 1992). For clarity of exposition, we limit the discussion here to the final results and provide the detailed derivations in Appendix A. For parameter γ ij , we obtain the following gradient:
$$\frac{\partial}{\partial \gamma_{ij}} \tilde{\mathcal{L}} = \sigma'(\gamma_{ij}) \cdot \sigma(\theta_{ij}) \cdot \mathbb{E}_{X, C_{-ij}}\left[\mathcal{L}_{X_i \to X_j}(X_j) - \mathcal{L}_{X_i \not\to X_j}(X_j) + \lambda_{\mathrm{sparse}}\right] \tag{3}$$
where $\mathbb{E}_{X, C_{-ij}}$ summarizes for brevity the three expectations in Equation 2, excluding the edge $X_i \to X_j$ from $C$. Further, $\mathcal{L}_{X_i \to X_j}(X_j)$ denotes the negative log-likelihood for $X_j$ if we include the edge $X_i \to X_j$ in the adjacency matrix $C_{-ij}$, i.e., $C_{ij} = 1$, and $\mathcal{L}_{X_i \not\to X_j}(X_j)$ if we set $C_{ij} = 0$. The gradient in Equation 3 can be intuitively explained: if by the addition of the edge $X_i \to X_j$ the log-likelihood estimate of $X_j$ is improved by more than $\lambda_{\mathrm{sparse}}$, we increase the corresponding edge parameter $\gamma_{ij}$; otherwise, we decrease it.
We derive the gradients for the orientation parameters θ similarly. As mentioned before, we only take the gradients for θ ij when we perform an intervention on either X i or X j . This leads us to:
$$\frac{\partial}{\partial \theta_{ij}} \tilde{\mathcal{L}} = \sigma'(\theta_{ij}) \Big[\, p(I_{X_i}) \cdot \sigma(\gamma_{ij}) \cdot \mathbb{E}_{I_{X_i}, X, C_{-ij}}\left[\mathcal{L}_{X_i \to X_j}(X_j) - \mathcal{L}_{X_i \not\to X_j}(X_j)\right] - p(I_{X_j}) \cdot \sigma(\gamma_{ji}) \cdot \mathbb{E}_{I_{X_j}, X, C_{-ij}}\left[\mathcal{L}_{X_j \to X_i}(X_i) - \mathcal{L}_{X_j \not\to X_i}(X_i)\right] \Big] \tag{4}$$
The probability of taking an intervention on $X_i$ is represented by $p(I_{X_i})$ (usually uniform across variables), and $\mathbb{E}_{I_{X_i}, X, C_{-ij}}$ is the same expectation as before under the intervention on $X_i$. When the oriented edge $X_i \to X_j$ improves the log-likelihood of $X_j$ under intervention on $X_i$, then the first part of the gradient increases $\theta_{ij}$. In contrast, when the true edge is $X_j \to X_i$, the correlation between $X_i$ and $X_j$ learned from observational data would yield a worse likelihood estimate of $X_j$ on interventional data on $X_i$ than without the edge $X_i \to X_j$. This is because $p(X_j \mid X_i, ...)$ does not stay invariant under intervening on $X_i$. The same dynamic holds for interventions on $X_j$. Lastly, for independent nodes, the expectation of the gradient is zero.
Based on Equations 3 and 4, we obtain a tractable, unbiased gradient estimator by using Monte-Carlo sampling. Luckily, samples can be shared across variables, making training efficient. We first sample an intervention, a corresponding data batch, and K graphs from p γ,θ (C) (K usually between 20 and 100). We then evaluate the log likelihoods of all variables for these graphs on the batch, and estimate L Xi→Xj (X j ) and L Xi →Xj (X j ) for all pairs of variables X i and X j by simply averaging the results for the two cases separately. Finally, the estimates are used to determine the gradients for γ and θ.
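For illustration, here is a minimal sketch of this estimator for the $\gamma$ parameters (our own simplification; the `nll` array is an assumed helper holding per-graph, per-variable negative log-likelihoods, and the default regularization weight is arbitrary):

```python
import torch

def gamma_gradient_estimate(gamma, theta, C, nll, lam_sparse=0.004):
    """Monte-Carlo estimate of Equation 3.
    C:   (K, N, N) adjacency matrices sampled from p_{gamma,theta}(C).
    nll: (K, N) tensor, nll[k, j] = average L_C(X_j) for graph sample k."""
    K, N = nll.shape
    grads = torch.zeros(N, N)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            with_edge = C[:, i, j] == 1
            if with_edge.any() and (~with_edge).any():
                # Separate MC estimates of L_{X_i -> X_j} and L_{X_i -/-> X_j}.
                diff = nll[with_edge, j].mean() - nll[~with_edge, j].mean()
                sig = torch.sigmoid(gamma[i, j])
                grads[i, j] = sig * (1 - sig) * torch.sigmoid(theta[i, j]) \
                              * (diff + lam_sparse)
    return grads
```

Because the two log-likelihood terms are averaged separately before taking their difference, the estimate is insensitive to how many of the $K$ samples happen to contain the edge, which is the source of the variance reduction discussed next.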
Figure 2: ENCO estimates gradients of significantly lower variance compared to (Bengio et al., 2020).

Low variance. Previous methods (Ke et al., 2019;Bengio et al., 2020) relied on a different REINFORCE-like estimator proposed by Bengio et al. (2020). Adjusting their estimator to our setting of the parameter $\gamma_{ij}$, for instance, the gradient looks as follows:
$$g_{ij} = \sigma(\theta_{ij}) \cdot \mathbb{E}_X\left[\frac{\mathbb{E}_C\left[(\sigma(\gamma_{ij}) - C_{ij}) \cdot \mathcal{L}_C(X_j)\right]}{\mathbb{E}_C\left[\mathcal{L}_C(X_j)\right]}\right] + \lambda_{\mathrm{sparse}} \tag{5}$$
where $g_{ij}$ represents the gradient of $\gamma_{ij}$. Performing Monte-Carlo sampling for estimating the gradient leads to a biased estimate which becomes asymptotically unbiased with increasing number of samples (Bengio et al., 2020). The division by the expectation of $\mathcal{L}_C(X_j)$ is done for variance reduction (Mahmood et al., 2014). Equation 5, however, is still sensitive to the proportion of sampled $C_{ij}$ being one or zero. A major benefit of our gradient formulation in Equation 3, instead, is that it removes this noise by considering the difference of the two independent Monte-Carlo estimates $\mathcal{L}_{X_i \to X_j}(X_j)$ and $\mathcal{L}_{X_i \not\to X_j}(X_j)$. Hence, we can use a smaller sample size than previous methods and attain 10 times lower standard deviation, as visualized in Figure 2.
CONVERGENCE GUARANTEES
Next, we discuss the conditions under which ENCO convergences to the correct causal graph. We show that not only does the global optimum of Equation 2 correspond to the true graph, but also that there exist no other local minima ENCO can converge to. We outline the derivation and proof of these conditions in Appendix B, and limit our discussion here to the main assumptions and implications.
To construct a theoretical argument, we make the following assumptions. First, we assume that sparse interventions have been performed on all variables. Later, we show how to extend the algorithm to avoid this strong assumption. Further, given a CGM, we assume that its joint distribution p(X) is Markovian with respect to the true graph G. In other words, the parent set pa(X i ) reflects the inputs to the causal generation mechanism of X i . We assume that there exist no latent confounders in G. Also, we assume the neural networks in ENCO are sufficiently large and sufficient observational data is provided to model the conditional distributions of the CGM up to an arbitrary small error.
Under these assumptions, we produce the following conditions for convergence:
Theorem 3.1. Given a causal graph G with variables X 1 , ..., X N and conditional observational distributions p(X i |...), the proposed method ENCO will converge to the true, causal graph G, if the following conditions hold for all edges X i → X j in G:
1. For all possible sets of parents of X j excluding X i , by adding X i the log-likelihood estimate of X j is improved or unchanged under the intervention on X i :
$$\forall\, \mathrm{pa}(X_j) \subseteq X_{-i,j}: \quad \mathbb{E}_{I_{X_i}, X}\left[\log p(X_j \mid \mathrm{pa}(X_j), X_i) - \log p(X_j \mid \mathrm{pa}(X_j))\right] \geq 0 \tag{6}$$
2. For at least one set of nodes pa(X j ), for which the probability to be sampled as parents of X j is greater than 0, Equation 6 must be strictly greater than zero.
3. The effect of X i on X j cannot be described by other variables up to λ sparse :
$$\min_{\mathrm{pa} \subseteq \mathrm{gpa}_i(X_j)} \mathbb{E}_{\hat{I} \sim p_{I_{-j}}(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\left[\log p(X_j \mid \mathrm{pa}, X_i) - \log p(X_j \mid \mathrm{pa})\right] > \lambda_{\mathrm{sparse}} \tag{7}$$
where $\mathrm{gpa}_i(X_j)$ is the set of nodes excluding $X_i$ which, according to the ground truth graph, could have an edge to $X_j$ without introducing a cycle, and $p_{I_{-j}}(I)$ refers to the distribution over interventions $p_I(I)$ excluding the intervention on variable $X_j$.
Further, for all other pairs X i , X j for which X j is a descendant of X i , condition 1 and 2 must hold.
Condition 1 and 2 ensure that the orientations can be learned from interventions. Intuitively, ancestors and descendants in the graph have to be dependent when intervening on the ancestors. This aligns with the technical interpretation in Theorem 3.1 that the likelihood estimate of the child variable must improve when intervening and conditioning on its ancestor variables. Condition 3 states intuitively that the sparsity regularizer needs to be selected such that it chooses the sparsest graph among those graphs with equal joint distributions as the ground truth graph, without trading sparsity for worse distribution estimates. The specific condition in Theorem 3.1 ensures thereby that the set can be learned with a gradient-based algorithm. We emphasize that this condition only gives an upper bound for λ sparse when sufficiently large datasets are available. In practice, the graph can thus be recovered with a sufficiently small sparsity regularizer and dependencies among variables under interventions. We provide more details for various settings and further intuition in Appendix B.
Interventions on fewer variables. It is straightforward to extend ENCO to support interventions on fewer variables. Normally, in the graph fitting stage, we sample one intervention at a time. We can, thus, simply restrict the sampling only to the interventions that are possible (or provided in the dataset). In this case, we update the orientation parameters θ ij of only those edges that connect to an intervened variable, either X i or X j , as before. For all other orientation parameters, we extend the gradient estimator to include interventions on all variables. Although this estimate is more noisy and does not have convergence guarantees, it can still be informative about the edge orientations.
Enforcing acyclicity. When the conditions are violated, e.g. by limited data, cycles can occur in the prediction. Since ENCO learns the orientations as a separate parameter, we can remove cycles by finding the global order of variables $O \in S_N$, with $S_N$ being the set of permutations, that maximizes the pairwise orientation probabilities: $\arg\max_O \sum_{i=1}^{N} \sum_{j=i+1}^{N} \sigma(\theta_{O_i, O_j})$. This utilizes the learned ancestor-descendant relations, making the algorithm more robust to noise in single interventions.
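A brute-force version of this order search can be written as follows (our sketch; the exhaustive enumeration is only feasible for small $N$, so larger graphs would need a greedy or sort-based heuristic, which is an assumption on our part):

```python
from itertools import permutations

def best_order(theta_sigmoid):
    """Return the variable order O maximizing sum_{i<j} sigma(theta_{O_i, O_j}).
    theta_sigmoid: N x N nested list with theta_sigmoid[i][j] = sigma(theta_ij)."""
    n = len(theta_sigmoid)
    def score(order):
        return sum(theta_sigmoid[order[i]][order[j]]
                   for i in range(n) for j in range(i + 1, n))
    return max(permutations(range(n)), key=score)

# Every predicted edge is then kept only if it points from an earlier to a
# later variable in the returned order, which rules out cycles by construction.
```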
HANDLING LATENT CONFOUNDERS
So far, we have assumed that all variables of the graph are observable and can be intervened on. A common issue in causal discovery is the existence of latent confounders, i.e., an unobserved common cause of two or more variables introducing dependencies between each other. In the presence of latent confounders, structure learning methods may predict false positive edges. Interestingly, in the context of ENCO latent confounders for two variables X i , X j cause a unique pattern of learned parameters. When intervening on X i or X j , having an edge between the two variables is disadvantageous, as in the intervened graph X i and X j are (conditionally) independent. For interventions on all other variables, however, an edge can be beneficial as X i and X j are correlated.
Table 1 (excerpt; SHD): ENCO (ours): 2.2 (±1.4), 1.7 (±1.3), 1.6 (±1.6), 9.2 (±3.4), 1.7 (±1.3), 4.6 (±1.9); ENCO-acyclic (ours): 0.0 (±0.0), 0.0 (±0.0), 1.6 (±1.6), 5.3 (±2.3), 0.6 (±1.1), 0.2 (±0.5).

Exploiting this, we extend ENCO to detect latent confounders. We focus on latent confounders between two variables that do not have any direct edges with each other, and assume that the confounder is not a child of any other observed variable. For all other edges besides between $X_i$ and $X_j$, we can still rely on the guarantees in Section 3.4, since Equation 7 already includes the possibility of additional edges in such cases. After convergence, we score every pair of variables on how likely they share a latent confounder using a function $\mathrm{lc}(\cdot)$ that is maximized in the scenario mentioned above. For this, we define $\gamma_{ij} = \gamma^{(I)}_{ij} + \gamma^{(O)}_{ij}$, where $\gamma^{(I)}_{ij}$ is only updated with gradients from Equation 3 under interventions on $X_i$, and $\gamma^{(O)}_{ij}$ on all others. With this separation, we define the following score function, which is maximized by latent confounders:
$$\mathrm{lc}(X_i, X_j) = \sigma\big(\gamma^{(O)}_{ij}\big) \cdot \sigma\big(\gamma^{(O)}_{ji}\big) \cdot \Big(1 - \sigma\big(\gamma^{(I)}_{ij}\big)\Big) \cdot \Big(1 - \sigma\big(\gamma^{(I)}_{ji}\big)\Big) \tag{8}$$
To converge to the mentioned values, especially of $\gamma^{(O)}_{ij}$, we need a similar condition as in Equation 7: the improvement on the log-likelihood estimate gained by the edge $X_i \to X_j$ and conditioned on all other parents of $X_j$ needs to be larger than $\lambda_{\mathrm{sparse}}$ on interventional data excluding $X_i$ and $X_j$. If this is not the case, the sparsity regularizer will instead remove the edge between $X_i$ and $X_j$, preventing any false predictions among observed variables. For all other pairs of variables, at least one of the terms in Equation 8 converges to zero. Thus, we can detect latent confounders by checking whether the score function $\mathrm{lc}(X_i, X_j)$ is greater than a threshold hyperparameter $\tau \in (0.0, 1.0)$. We discuss possible guarantees in Appendix B, and experimentally verify this approach in Section 4.5.
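A small sketch of this detection step (our addition; the tensor names are assumptions), using the threshold $\tau = 0.4$ from Section 4.5:

```python
import torch

def detect_confounders(gamma_I, gamma_O, tau=0.4):
    """Return variable pairs (i, j), i < j, flagged as sharing a latent confounder.
    gamma_I, gamma_O: (N, N) tensors of the separately-updated edge parameters."""
    sI, sO = torch.sigmoid(gamma_I), torch.sigmoid(gamma_O)
    lc = sO * sO.T * (1 - sI) * (1 - sI.T)      # Equation 8, all pairs at once
    lc = torch.triu(lc, diagonal=1)             # keep each unordered pair once
    return [(int(i), int(j)) for i, j in (lc > tau).nonzero()]
```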
EXPERIMENTS
We evaluate ENCO on structure learning on synthetic datasets for systematic comparisons and realworld datasets for benchmarking against other methods in the literature. The experiments focus on graphs with categorical variables, and experiments on continuous data are included in Appendix D.5. Our code is publicly available at https://github.com/phlippe/ENCO.
EXPERIMENTAL SETUP
Graphs and datasets. Given a ground-truth causal graphical model, all methods are tasked to recover the original DAG from a set of observational and interventional data points for each variable. In case of synthetic graphs, we follow the setup of Ke et al. (2019) and create the conditional distributions from neural networks. These networks take as input the categorical values of its variable's parents, and are initialized orthogonally to output a non-trivial distribution.
Baselines. We compare ENCO to GIES (Hauser & Bühlmann, 2012) and IGSP (Wang et al., 2017;Yang et al., 2018) as greedy score-based approaches, and DCDI (Brouillard et al., 2020) and SDI (Ke et al., 2019) as continuous optimization methods. Further, as a common observational baseline, we apply GES (Chickering, 2002) on the observational data to obtain a graph skeleton, and orient each edge by learning the skeleton on the corresponding interventional distribution. We perform a separate hyperparameter search for all baselines, and use the same neural network setup for SDI, DCDI, and ENCO. Appendix C provides a detailed overview of the hyperparameters for all experiments.
CAUSAL STRUCTURE LEARNING ON COMMON GRAPH STRUCTURES
We first experiment on synthetic graphs. We pick six common graph structures and sample 5,000 observational data points and 200 per intervention. The graphs chain and full represent the minimally- and maximally-connected DAGs. The graph bidiag is a chain with 2-hop connections, and jungle is a tree-like graph. In the collider graph, one node has all other nodes as parents. Finally, random has a randomly sampled graph structure with a likelihood of 0.3 of two nodes being connected by a direct edge. For each graph structure, we generate 25 graphs with 25 nodes each, on which we report the average performance and standard deviation. Following common practice, we use structural hamming distance (SHD) as evaluation metric. SHD counts the number of edges that need to be removed, added, or flipped in order to obtain the ground truth graph.

Table 1 shows that the continuous optimization methods outperform the greedy search approaches on categorical variables. SDI works reasonably well on sparse graphs, but struggles with nodes that have many parents. DCDI can recover the collider and full graph to a better degree, yet degrades for sparse graphs. ENCO performs well on all graph structures, outperforming all baselines. For sparse graphs, cycles can occur due to limited sample size. However, with enforcing acyclicity, ENCO-acyclic is able to recover four out of six graphs with less than one error on average. We further include experiments with various sample sizes in Appendix D.1. While other methods do not reliably recover the causal graph even for large sample sizes, ENCO attains low errors even with smaller sample sizes.

Next, we test ENCO on graphs with large sets of variables. We create random graphs ranging from N = 100 to N = 1,000 nodes with larger sample sizes. Every node has on average 8 edges and a maximum of 10 parents. The challenge of large graphs is that the number of possible edges grows quadratically and the number of DAGs super-exponentially, requiring efficient methods.
We compare ENCO to the two best performing baselines from Table 1, SDI and DCDI. All methods were given the same setup of neural networks and a maximum runtime which corresponds to 30 epochs for ENCO. We plot the SHD over graph size and runtime in Figure 3. ENCO recovers the causal graphs perfectly with no errors except for the 1,000-node graph, for which it misses only one out of 1 million edges in 2 out of 10 experiments. SDI and DCDI achieve considerably worse performance. This shows that ENCO can efficiently be applied to 1,000 variables while maintaining its convergence guarantees, underlining the benefit of its low-variance gradient estimators.
INTERVENTIONS ON FEWER VARIABLES
We perform experiments on the same datasets as in Section 4.2, but provide interventional data only for a randomly sampled subset of the 25 variables of each graph. We compare ENCO to DCDI, which supports partial intervention sets, and plot the SHD over the number of intervened variables in Figure 4. Despite ENCO's guarantees only holding for full interventions, it is still competitive and outperforms DCDI in most settings. Importantly, enforcing acyclicity has an even greater impact on fewer interventions as more orientations are trained on non-adjacent interventions (see Appendix B.4 for detailed discussion). We conclude that ENCO works competitively with partial interventions too.

To test the detection of latent confounders, we create a set of 25 random graphs with 5 additional latent confounders. The dataset is generated in the same way as before, except that we remove the latent variable from the input data and increase the observational and interventional sample size (see Appendix C.3 for ablation studies). After training, we predict the existence of a latent confounder on any pair of variables X i and X j if lc(X i , X j ) is greater than τ . We choose τ = 0.4 but verify in Appendix C.3 that the method is not sensitive to the specific value of τ . As shown in Table 2, ENCO detects more than 95% of the latent confounders without any false positives. What is more, the few mistakes do not affect the detection of all other edges, which are recovered perfectly.
REAL-WORLD INSPIRED DATA
Finally, we evaluate ENCO on causal graphs from the Bayesian Network Repository (BnLearn) (Scutari, 2010). The repository contains graphs inspired by real-world applications that are used as benchmarks in literature. In comparison to the synthetic graphs, the real-world graphs are sparser with a maximum of 6 parents per node and contain nodes with strongly peaked marginal distributions. They also include deterministic variables, making the task challenging even for small graphs.
We evaluate ENCO, SDI, and DCDI on 7 graphs with increasing sizes, see Table 3. We observe that ENCO recovers almost all real-world causal graphs without errors, independent of their size. In contrast, SDI suffers from more mistakes as the graphs become larger. An exception is pigs (Scutari, 2010), which has a maximum of 2 parents per node, and hence is easier to learn. The most challenging graph is diabetes (Andreassen et al., 1991) due to its large size and many deterministic variables. ENCO makes only two mistakes, showing that it can handle deterministic variables well. We discuss results on small sample sizes in Appendix C.5, observing similar trends. We conclude that ENCO can reliably perform structure learning on a wide variety of settings, including deterministic variables.
CONCLUSION
We propose ENCO, an efficient causal structure learning method leveraging observational and interventional data. Compared to previous work, ENCO models the edge orientations as separate parameters and uses an objective unconstrained with respect to acyclicity. This allows for easier optimization and low-variance gradient estimators while having convergence guarantees. As a consequence, the algorithm can efficiently scale to graphs that are at least one order of magnitude larger graphs than what was possible. Experiments corroborate the capabilities of ENCO compared to the state-of-the-art on an extensive array of settings on graph sizes, sizes of observational and interventional data, latent confounding, as well as on both partial and full intervention sets.
Limitations. The convergence guarantees of ENCO require interventions on all variables, although experiments on fewer interventions have shown promising results. Future work includes investigating guarantee extensions of ENCO to this setting. A second limitation is that the orientations are missing transitivity: if $X_1 \prec X_2$ and $X_2 \prec X_3$, then $X_1 \prec X_3$ must hold. A potential direction is incorporating transitive relations for improving convergence speed and results on fewer interventions.
ETHICS STATEMENT
Causal structure learning algorithms such as the proposed method are mainly used to uncover and understand causal mechanisms from data. The knowledge of the underlying causal mechanisms can then be applied to decide on specific actions that influence variables or factors in a desired way. For instance, by knowing that the environmental pollution in a city has an impact on the risk of cancer of its residents, one can try to reduce the pollution to decrease the risk of cancer. The applications of causal structure learning are ranging across many scientific disciplines, including computational biology (Friedman et al., 2000;Sachs et al., 2005;Opgen-Rhein & Strimmer, 2007), epidemiology (Robins et al., 2000;Vandenbroucke et al., 2016), and economics (Pearl, 2009;Hicks et al., 1980). We envision that our work can have positive impacts on those fields. One example we want to highlight is the field of genomics. Recent advances have enabled to perform gene knockdown experiments in a large scale, providing large amounts of interventional data (Dixit et al., 2016;Macosko et al., 2015). Gaining insights into how specific genes and diseases interact can lead to the development of novel pharmaceutic methods for treating current diseases. Since the number of variables in those experiments is tremendous, efficient causal structure learning algorithms are needed. The proposed method constitutes a first step towards this goal, and our work can foster future work for creating algorithms scaling beyond 10,000 variables.
Since the possible applications are fairly wide-ranging, there might be potential impacts we cannot forecast at the current time. This includes misuses of the method for unethical purposes. For instance, the method can be used to justify gender and race as causes for irrelevant variables if the output is misinterpreted, initial assumptions of the model are ignored, or the input data has been manipulated. Hence, the obligation to use this method in a correct way within ethical boundaries lies on the user. We emphasize this responsibility of the user in the license of our code.
REPRODUCIBILITY STATEMENT
To ensure reproducibility, we have published the source code of the proposed method ENCO at https://github.com/phlippe/ENCO. The code includes instructions on how to download the datasets, and reproduce the experiments in Section 4 and additional experiments in Appendix D. Further, for all experiments of Section 4, we have included a detailed overview in Appendix C of (a) the used data and its generation process, (b) all hyperparameters used for all methods, and (c) additional details on the results. All experiments have been repeated with 5 to 25 seeds to obtain stable, reproducible results. Appendix C.1.2 outlines the packages that have been used for running the baselines.
All experiments were run on a 24-core CPU with a single NVIDIA RTX3090 GPU. All experiments can be reproduced on a computer with a single GPU, and only the experiments on graphs larger than 100 variables require a GPU memory of about 24GB. The other experiments can be performed on smaller GPUs as well.
SUPPLEMENTARY MATERIAL EFFICIENT NEURAL CAUSAL DISCOVERY WITHOUT ACYCLICITY CONSTRAINTS
A GRADIENT ESTIMATORS
The following section describes in detail the derivation of the gradient estimators discussed in Section 3.3. We consider the problem of causal structure learning where we parameterize the graph by edge existence parameters $\gamma$ and orientation parameters $\theta$. Our objective is to optimize $\gamma$ and $\theta$ such that we maximize the probability of interventional data, i.e., data generated from the true graph under (arbitrary) interventions on single variables. Thereby, the likelihood estimates have been trained on observational data only. Additionally, we want to ensure that the graph is as sparse as possible to prevent unnecessary connections. Thus, an $\ell_1$ regularizer is added on top of the edge probabilities. The full objective can be written as follows:
$$\mathcal{L} = \mathbb{E}_{\hat{I}\sim p_I(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\,\mathbb{E}_{p_{\gamma,\theta}(C)}\left[\sum_{i=1}^{N} \mathcal{L}_C(X_i)\right] + \lambda_{\text{sparse}} \sum_{i=1}^{N}\sum_{j=1}^{N} \sigma(\gamma_{ij})\,\sigma(\theta_{ij}) \tag{9}$$
where:
• $N$ is the number of variables in the causal graph $(X_1, ..., X_N)$;
• $p_I(I)$ is the distribution over interventions that are performed. This distribution can be set as a hyperparameter to weight certain interventions higher than others. In our experiments, we assume it to be uniform across interventions on variables;
• $\tilde{p}_{\hat{I}}(X)$ is the joint distribution of all variables under the intervention $\hat{I}$;
• $p_{\gamma,\theta}(C)$ is the distribution over adjacency matrices $C$, which we model as a product of independent edge probabilities: $p_{\gamma,\theta}(C) = \prod_{i=1}^{N}\prod_{j=1, j\neq i}^{N}\sigma(\gamma_{ij})\cdot\sigma(\theta_{ij})$;
• $\mathcal{L}_C(X_i)$ is the negative log-likelihood estimate of variable $X_i$ under sampled adjacency matrix $C$: $\mathcal{L}_C(X_i) = -\log f_{\phi_i}(X_i; C_{\cdot,i} \odot X_{-i})$;
• $\lambda_{\text{sparse}}$ is a hyperparameter representing the regularization weight.
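To make the pieces of this objective concrete, the following minimal sketch (PyTorch; all names are illustrative, and the likelihood networks $f_{\phi_i}$ are stubbed out as an assumption) samples an adjacency matrix from $p_{\gamma,\theta}(C)$ and evaluates the sparsity term:

```python
import torch

N = 5                               # number of variables (illustrative)
gamma = torch.zeros(N, N)           # edge existence parameters
theta = torch.zeros(N, N)           # edge orientation parameters

def sample_adjacency(gamma, theta):
    """Sample C ~ p_{gamma,theta}(C), a product of independent edges."""
    edge_prob = torch.sigmoid(gamma) * torch.sigmoid(theta)
    edge_prob = edge_prob * (1 - torch.eye(gamma.shape[0]))  # no self-edges
    return torch.bernoulli(edge_prob)

def sparsity_term(gamma, theta, lambda_sparse=0.004):
    """Second term of Eq. 9: the l1 regularizer on edge probabilities."""
    return lambda_sparse * (torch.sigmoid(gamma) * torch.sigmoid(theta)).sum()

C = sample_adjacency(gamma, theta)   # one Monte-Carlo sample of the graph
reg = sparsity_term(gamma, theta)
```

The first term of Equation 9 would be added by evaluating the (assumed) likelihood networks under the sampled graph `C`; the estimators derived below avoid differentiating through this sampling step directly.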
Based on this objective, we derive the gradient estimators for optimizing both edge existence and orientation parameters.
A.1 LOW-VARIANCE GRADIENT ESTIMATOR FOR EDGE PARAMETERS
In order to optimize the edge parameters via SGD, we need to determine the gradient $\frac{\partial}{\partial \gamma_{ij}}\mathcal{L}$. Since $\mathcal{L}$ consists of a sum of two terms, i.e., the log-likelihood estimate and the regularization, we can look at both parts separately. To prevent any confusion of index variables, we will use $k, l$ as indices for the parameter $\gamma_{kl}$ for which we determine the gradient, i.e., $\frac{\partial}{\partial \gamma_{kl}}\mathcal{L}$, and $i, j$ as indices for sums.
As a first step, we determine the gradients for the regularization term. Those can be found by taking the derivative of the sigmoid:
$$\frac{\partial}{\partial \gamma_{kl}} \left[\lambda_{\text{sparse}} \sum_{i=1}^{N}\sum_{j=1}^{N} \sigma(\gamma_{ij})\,\sigma(\theta_{ij})\right] = \sigma(\gamma_{kl})\cdot(1-\sigma(\gamma_{kl}))\cdot\sigma(\theta_{kl})\,\lambda_{\text{sparse}} \tag{10}$$
Thus, it is straightforward to calculate for any edge parameter. In the following, we use $\sigma'(\cdot)$ to abbreviate the derivative of the sigmoid: $\sigma'(\gamma_{kl}) = \sigma(\gamma_{kl})(1 - \sigma(\gamma_{kl}))$.
For the log-likelihood term, we start by reorganizing the expectations to simplify the gradient expression. The derivative term $\frac{\partial}{\partial \gamma_{kl}}$ can be moved inside the two expectations over interventional data, since those are independent of the graph parameters. Thus, we can write:
$$\frac{\partial}{\partial \gamma_{kl}}\tilde{\mathcal{L}} = \mathbb{E}_{\hat{I}\sim p_I(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\left[\frac{\partial}{\partial \gamma_{kl}}\,\mathbb{E}_{p_{\gamma,\theta}(C)}\left[\sum_{i=1}^{N}\mathcal{L}_C(X_i)\right]\right] \tag{11}$$
For readability, we denote $\tilde{\mathcal{L}}$ to be the objective in Equation 9 without the regularizer.
Next, we take a closer look at the derivative of the expectation over adjacency matrices. Note that we have defined the adjacency matrix distribution as $p_{\gamma,\theta}(C) = \prod_{i=1}^{N}\prod_{j=1, j\neq i}^{N}\sigma(\gamma_{ij})\,\sigma(\theta_{ij})$, with $C_{ij} = 1$ representing the edge $X_i \to X_j$.
Since a parameter $\gamma_{ij}$ only influences the likelihood of the edge $X_i \to X_j$ and no other edges, we can reduce the expectation to a single binary variable over which we need to differentiate the expectation:
$$\frac{\partial}{\partial \gamma_{kl}}\,\mathbb{E}_{p_{\gamma,\theta}(C)}\left[\sum_{i=1}^{N}\mathcal{L}_C(X_i)\right] = \mathbb{E}_{p_{\gamma,\theta}(C_{-kl})}\left[\frac{\partial}{\partial \gamma_{kl}}\,\mathbb{E}_{p_{\gamma,\theta}(C_{kl})}\left[\sum_{i=1}^{N}\mathcal{L}_C(X_i)\right]\right] \tag{12}$$
Figure 5: Visualizing the gradient calculation for the incoming edges of $X_2$ in an example graph with three variables. The intervention is performed on $X_1$, and the data is used to calculate the log-likelihood estimates under the three randomly sampled graphs: $\mathcal{L}_{C^{(1)}}(X_2)$, $\mathcal{L}_{C^{(2)}}(X_2)$, and $\mathcal{L}_{C^{(3)}}(X_2)$. Those terms are assigned to the Monte-Carlo estimators for $\mathcal{L}_{X_i\to X_2}(X_2)$ and $\mathcal{L}_{X_i\not\to X_2}(X_2)$, and finally used to determine the gradients for $\gamma$ and $\theta$. The same process is performed for $X_3$ as well.
where $p_{\gamma,\theta}(C_{kl}) = \sigma(\gamma_{kl})\cdot\sigma(\theta_{kl})$. The first expectation over $p_{\gamma,\theta}(C_{-kl})$ is independent of $\gamma_{kl}$, as we have defined the adjacency matrix distribution to be a product of independent edge probabilities.
The log-likelihood estimate of a variable, $\mathcal{L}_C(X_i)$, depends on the adjacency matrix column $C_{\cdot,i}$, which represents the input connections of the node $X_i$. All other edges have no influence on the log-likelihood estimate of $X_i$. Hence, the parameter $\gamma_{kl}$ only influences $\mathcal{L}_C(X_l)$, and thus we can reduce the sum inside the expectation to:
$$\frac{\partial}{\partial \gamma_{kl}}\,\mathbb{E}_{p_{\gamma,\theta}(C_{kl})}\left[\sum_{i=1}^{N}\mathcal{L}_C(X_i)\right] = \frac{\partial}{\partial \gamma_{kl}}\,\mathbb{E}_{p_{\gamma,\theta}(C_{kl})}\left[\mathcal{L}_C(X_l)\right] \tag{13}$$
The REINFORCE trick is a simple method to move the derivative of a discrete distribution inside the expectation. Applied to our situation, we obtain:
$$\frac{\partial}{\partial \gamma_{kl}}\,\mathbb{E}_{p_{\gamma,\theta}(C_{kl})}\left[\mathcal{L}_C(X_l)\right] = \mathbb{E}_{p_{\gamma,\theta}(C_{kl})}\left[\mathcal{L}_C(X_l)\,\frac{\partial \log p_{\gamma,\theta}(C_{kl})}{\partial \gamma_{kl}}\right] \tag{14}$$
This leaves us with two cases in the expectation: $C_{kl} = 0$ and $C_{kl} = 1$. In other words, we need to distinguish between samples of $C$ where we have the edge $X_k \to X_l$, and where we do not have such an edge ($X_k \not\to X_l$). Thus, we can also write the expectation as a weighted sum of those two cases:
$$\begin{aligned}
\mathbb{E}_{p_{\gamma,\theta}(C_{kl})}\left[\mathcal{L}_C(X_l)\,\frac{\partial \log p_{\gamma,\theta}(C_{kl})}{\partial \gamma_{kl}}\right] ={}& \;\sigma(\gamma_{kl})\cdot\sigma(\theta_{kl})\cdot\mathcal{L}_{X_k\to X_l}(X_l)\cdot\frac{\partial \log\left(\sigma(\gamma_{kl})\cdot\sigma(\theta_{kl})\right)}{\partial\gamma_{kl}} \\
&+ \left(1-\sigma(\gamma_{kl})\cdot\sigma(\theta_{kl})\right)\cdot\mathcal{L}_{X_k\not\to X_l}(X_l)\cdot\frac{\partial \log\left(1-\sigma(\gamma_{kl})\cdot\sigma(\theta_{kl})\right)}{\partial\gamma_{kl}}
\end{aligned} \tag{15}$$
We use $\mathcal{L}_{X_k\to X_l}(X_l)$ and $\mathcal{L}_{X_k\not\to X_l}(X_l)$ to denote the (expected) negative log-likelihood of $X_l$ under adjacency matrices where we have, respectively do not have, an edge from $X_k$ to $X_l$:

$$\mathcal{L}_{X_k\to X_l}(X_l) = \mathbb{E}_{p_{\gamma,\theta}(C_{-kl}),\, C_{kl}=1}\left[\mathcal{L}_C(X_l)\right] \tag{16}$$
$$\mathcal{L}_{X_k\not\to X_l}(X_l) = \mathbb{E}_{p_{\gamma,\theta}(C_{-kl}),\, C_{kl}=0}\left[\mathcal{L}_C(X_l)\right] \tag{17}$$
The final step is to solve the two derivative terms in Equation 14. This is done as follows:

Figure 6: The gradients have been scaled to match in terms of averages. We can see a clear reduction in variance with the gradient estimator of ENCO, allowing us to use lower sample sizes.
$$\frac{\partial \log\left(\sigma(\gamma_{kl})\cdot\sigma(\theta_{kl})\right)}{\partial\gamma_{kl}} = \frac{\partial \left(\log\sigma(\gamma_{kl})+\log\sigma(\theta_{kl})\right)}{\partial\gamma_{kl}} = 1-\sigma(\gamma_{kl}) \tag{18}$$
$$\frac{\partial \log\left(1-\sigma(\gamma_{kl})\cdot\sigma(\theta_{kl})\right)}{\partial\gamma_{kl}} = -\frac{\sigma(\gamma_{kl})\cdot(1-\sigma(\gamma_{kl}))\cdot\sigma(\theta_{kl})}{1-\sigma(\gamma_{kl})\cdot\sigma(\theta_{kl})} \tag{19}$$
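Both derivatives are easy to verify with automatic differentiation. The following minimal sketch (purely illustrative, not part of the method) checks the closed forms of Equations 18 and 19 against PyTorch's autograd:

```python
import torch

gamma = torch.tensor(0.4, requires_grad=True)
theta = torch.tensor(-0.7)
s_g, s_t = torch.sigmoid(gamma), torch.sigmoid(theta)

# Eq. 18: d/d gamma of log(sigmoid(gamma) * sigmoid(theta)) = 1 - sigmoid(gamma)
(grad_18,) = torch.autograd.grad(torch.log(s_g * s_t), gamma, retain_graph=True)
assert torch.isclose(grad_18, 1 - s_g)

# Eq. 19: d/d gamma of log(1 - sigmoid(gamma) * sigmoid(theta))
(grad_19,) = torch.autograd.grad(torch.log(1 - s_g * s_t), gamma)
assert torch.isclose(grad_19, -(s_g * (1 - s_g) * s_t) / (1 - s_g * s_t))
```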
Putting these results back in the original equation and adding the sparsity regularizer, we get:

$$\frac{\partial}{\partial\gamma_{ij}}\mathcal{L} = \sigma(\gamma_{ij})\cdot(1-\sigma(\gamma_{ij}))\cdot\sigma(\theta_{ij})\cdot\mathbb{E}_{X,C_{-ij}}\left[\mathcal{L}_{X_i\to X_j}(X_j) - \mathcal{L}_{X_i\not\to X_j}(X_j) + \lambda_{\text{sparse}}\right] \tag{20}$$

To align the result with the gradient in Section 3.3, we switch the index notation from $k, l$ to $i, j$ again. The expectation $\mathbb{E}_{X,C_{-ij}}$ is a short form for the expectations $\mathbb{E}_{\hat{I}\sim p_I(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\,\mathbb{E}_{p_{\gamma,\theta}(C_{-ij})}$.
From this expression, we can see that the gradients of $\gamma_{ij}$ are proportional to the difference between the expected negative log-likelihood of $X_j$ with an edge $X_i \to X_j$ and without it. The sparsity regularizer thereby biases the difference towards the no-edge case. The values of $\gamma_{ij}$ and $\theta_{ij}$ only scale the gradient but do not influence its direction.
In order to train this objective on a dataset of interventional data, we can use Monte-Carlo sampling to obtain an unbiased gradient estimator. Note that the adjacency matrix samples used to estimate $\mathcal{L}_{X_i\to X_j}(X_j)$ and $\mathcal{L}_{X_i\not\to X_j}(X_j)$ are not required to be the same. For efficiency, we instead sample $K$ adjacency matrices from $p_{\gamma,\theta}(C)$ and evaluate the likelihood of a batch $X$ under all of these graphs. Afterwards, we assign the evaluated samples to one of the two cases, depending on $C_{ij}$ being zero or one. This way, we can reuse the same graph samples for all edge parameters $\gamma$. We visualize the gradient calculation in Figure 5. In the cases where we perform an intervention on $X_j$, we do not optimize $\gamma_{ij}$ for this step and set the gradients to zero. The same holds for gradient steps where we do not have any samples for one of the two log-likelihood estimates.
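The following sketch illustrates this estimator in PyTorch. All names are illustrative; `nll_fn(C)` is assumed to return the per-variable negative log-likelihoods $\mathcal{L}_C(X_i)$ on the current interventional batch (in the full method, these come from the learned networks $f_{\phi_i}$):

```python
import torch

def edge_nll_estimates(gamma, theta, nll_fn, K=100):
    """Monte-Carlo estimates of L_{Xi->Xj}(Xj) and L_{Xi-/->Xj}(Xj) for all
    edges at once, by sharing K graph samples and grouping them by C_ij."""
    N = gamma.shape[0]
    edge_prob = torch.sigmoid(gamma) * torch.sigmoid(theta) * (1 - torch.eye(N))
    Cs = (torch.rand(K, N, N) < edge_prob).float()     # K adjacency samples
    nlls = torch.stack([nll_fn(C) for C in Cs])        # (K, N): L_C(X_i)
    nll_j = nlls.unsqueeze(1).expand(K, N, N)          # L_C(X_j) placed at [k, i, j]
    # eps guards against empty groups; the full method masks such entries out.
    eps = 1e-8
    L_with = (Cs * nll_j).sum(0) / (Cs.sum(0) + eps)
    L_without = ((1 - Cs) * nll_j).sum(0) / ((1 - Cs).sum(0) + eps)
    return L_with, L_without

def gamma_grad(gamma, theta, L_with, L_without, lambda_sparse=0.004):
    """Gradient of the edge parameters following Eq. 20."""
    s_g = torch.sigmoid(gamma)
    return s_g * (1 - s_g) * torch.sigmoid(theta) * (L_with - L_without + lambda_sparse)
```

Sharing the same $K$ samples across all edges is what makes the estimator cheap: each sampled graph contributes to both the with-edge and without-edge groups of every edge parameter simultaneously.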
A.1.1 COMPARISON TO PREVIOUS GRADIENT ESTIMATORS
As discussed in Section 3, previous work on similar structure learning methods (Bengio et al., 2020; Ke et al., 2019) relied on a different estimator. In terms of derivation, the main difference starts from Equation 14 onwards. In our proposed method, we write the expectation as the sum of two terms that can independently be approximated via Monte-Carlo sampling. In comparison, Bengio et al. (2020) proposed to directly apply a Monte-Carlo sampler to Equation 14 and apply an importance sampling weight to reduce the variance. This estimator is also used in the method SDI (Ke et al., 2019), to which we have experimentally compared our method. Figure 2 compares the gradient estimators in terms of standard deviation. The gradient estimator of ENCO achieves a 10 times lower standard deviation compared to Bengio et al. (2020), making it much more efficient. Since the estimator by Bengio et al. (2020) is biased and has a different mean, we have scaled both estimators to have the same mean. Specifically, we have applied ENCO to random graphs from our experiments on synthetic graphs (see Section 4.2) and evaluated 64,000 sampled adjacency matrices in terms of log-likelihood estimates. These 64,000 samples are grouped into sets of $K$ samples which we have used to estimate the gradients of $\gamma$. We evaluated different values of $K$, from $K = 20$ to $K = 4000$, and plotted the standard deviation of those estimates in Figure 2. We have also visualized three examples as violin plots in Figure 6, which demonstrate that despite both estimators having a similar mean, the variance of gradient estimates is much higher for Bengio et al. (2020).
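The variance argument can be reproduced on a toy one-parameter problem. The sketch below compares a plain REINFORCE estimator (a simplification of the importance-weighted estimator of Bengio et al. (2020), used here only for illustration) against the two-case estimator of ENCO; both have the same mean $\sigma'(\gamma)\,(f(1)-f(0))$, but the grouped version shows a clearly lower standard deviation:

```python
import torch

torch.manual_seed(0)
gamma = torch.tensor(0.5)
p = torch.sigmoid(gamma).item()
f = {0: 1.0, 1: 0.3}                 # hypothetical per-case losses
noise, K, trials = 0.5, 100, 1000    # noisy loss evaluations, sample sizes

reinforce, grouped = [], []
for _ in range(trials):
    c = torch.bernoulli(torch.full((K,), p))
    fx = torch.tensor([f[int(ci)] for ci in c]) + noise * torch.randn(K)
    # REINFORCE: f(c) * d log p(c)/d gamma, with d log p/d gamma = c - p
    reinforce.append((fx * (c - p)).mean())
    # Two-case estimator: sigma'(gamma) * (mean over c=1 minus mean over c=0)
    grouped.append(p * (1 - p) * (fx[c == 1].mean() - fx[c == 0].mean()))

print(torch.stack(reinforce).std(), torch.stack(grouped).std())
```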
To verify that the improvement of ENCO is not just because of the gradient estimators, we have performed an ablation study with ENCO deploying the gradient estimator of Bengio et al. (2020) in Appendix D.3.
A.2 LOW-VARIANCE GRADIENT ESTIMATOR FOR ORIENTATION PARAMETERS
To derive the gradients for the orientation parameters $\theta$, we can mostly follow the same approach as for the edge existence parameters $\gamma$. However, we have to keep in mind the constraint $\theta_{kl} = -\theta_{lk}$, which ensures that the orientation probabilities sum to one: $\sigma(\theta_{kl}) + \sigma(\theta_{lk}) = 1$.
To determine the gradient of the likelihood term, we can separate the two gradients of $\theta_{kl}$ and $\theta_{lk}$. This is because $\theta_{kl}$ only influences the expectation over $\mathcal{L}_C(X_l)$, while $\theta_{lk}$ concerns $\mathcal{L}_C(X_k)$. We can follow Equation 11 to Equation 20 of Section A.1 by swapping $\theta_{kl}$ and $\gamma_{kl}$. For the derivative through the expectation, we obtain the following gradient:
$$\frac{\partial}{\partial\theta_{kl}}\,\mathbb{E}_{p_{\gamma,\theta}(C)}\left[\mathcal{L}_C(X_l)\right] = \sigma'(\theta_{kl})\cdot\sigma(\gamma_{kl})\cdot\mathbb{E}_{p_{\gamma,\theta}(C_{-kl})}\left[\mathcal{L}_{X_k\to X_l}(X_l) - \mathcal{L}_{X_k\not\to X_l}(X_l)\right] \tag{21}$$
Since we have the condition $\theta_{kl} = -\theta_{lk}$, the full gradient for $\theta_{kl}$ would therefore consist of the gradient above minus the gradient of Equation 21 with respect to $\theta_{lk}$. However, as discussed in Section 3.3, the orientation of an edge cannot be learned from observational data in this framework. Hence, we only want to use the gradients of $\theta_{kl}$ if we intervene on node $X_k$, which gives us the following gradient expression:
$$\begin{aligned}
\frac{\partial}{\partial\theta_{ij}}\mathcal{L} = \sigma'(\theta_{ij})\Big(\; & p(I_{X_i})\cdot\sigma(\gamma_{ij})\cdot\mathbb{E}_{I_{X_i},X,C_{-ij}}\left[\mathcal{L}_{X_i\to X_j}(X_j)-\mathcal{L}_{X_i\not\to X_j}(X_j)\right] \\
- \; & p(I_{X_j})\cdot\sigma(\gamma_{ji})\cdot\mathbb{E}_{I_{X_j},X,C_{-ij}}\left[\mathcal{L}_{X_j\to X_i}(X_i)-\mathcal{L}_{X_j\not\to X_i}(X_i)\right]\Big)
\end{aligned} \tag{22}$$
To align the equation with the one in Section 3.3, we swap the indices $k, l$ with $i, j$ again. The first line represents cases where we have an intervention on the variable $X_i$, while the second line covers interventions on the variable $X_j$. The two terms are weighted by the edge existence likelihoods $\sigma(\gamma_{ij})$ and $\sigma(\gamma_{ji})$, respectively, and by the likelihood of performing an intervention on $X_i$ or $X_j$. In our experiments, we use a uniform probability across interventions on variables, but emphasize that this is not strictly required. Moreover, one could design a heuristic that selects the intervention on which to update the parameters, with the aim of increasing computational efficiency. The gradient estimator presented in Equation 22 would still be valid in such a case.
We clarify that we do not consider the gradients of $\theta_{ij}$ with respect to the edge regularizer. This is done for two reasons. Firstly, the orientation parameter models only the direction of the edge, not whether it exists or not. The regularizer would increase $\theta_{ij}$ if the edge existence for the opposite direction were greater than for the direction from $X_i$ to $X_j$, i.e., $\gamma_{ij} < \gamma_{ji}$, and decrease $\theta_{ij}$ if we have $\gamma_{ij} > \gamma_{ji}$. However, the orientation should only model the causal direction of an edge. Hence, we do not gain any value from a causal perspective when adding the regularizer to the gradient. Secondly, the regularizer would require us to take additional assumptions for guaranteeing the discovery of the true graph upon convergence. In experiments using a regularizer in the $\theta$-gradient, we did not observe any difference to the experiments without the regularizer.
We note that the orientation parameters are considered to be pairwise independent. In other words, $\theta_{ij}$ and $\theta_{kl}$ are considered independent parameters if $i \neq k, l$ and $j \neq k, l$. Global order distributions such as Plackett-Luce (Plackett, 1975; Luce, 1959) can be used to also incorporate transitive relations. However, those require high-variance gradient estimators and struggled with chains in early experiments. The pairwise orientation parameters provide much easier optimization while still providing convergence guarantees for the full intervention setting.
A.3 TRAINING LOOP
Finally, we give an overview of the full training loop in Algorithm 1. The distribution over interventions $p(I)$ is set to a uniform distribution in all our experiments. However, the distribution can also be replaced by a heuristic which selects interventions to increase computational efficiency.
To keep the convergence guarantees, $p(I)$ would have to guarantee a non-zero probability of picking any variable. In experiments, we found that the Adam optimizer (Kingma & Ba, 2015) speeds up the convergence of the parameters $\gamma$ and $\theta$ while not interfering with the convergence guarantees in practice.
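For concreteness, a sketch of one distribution-fitting step of Algorithm 1 below is shown here; the networks `f_phi[i]` are assumed (hypothetically) to return a torch distribution over $X_i$, so the interface is illustrative rather than the released implementation:

```python
import torch

def distribution_fitting_step(f_phi, optimizers, X, gamma, theta):
    """One step of the distribution-fitting stage of Algorithm 1: each
    network f_phi[i] predicts X_i from the other variables, masked by
    edges sampled from the current graph beliefs."""
    N = X.shape[1]
    edge_prob = torch.sigmoid(gamma) * torch.sigmoid(theta)
    for i in range(N):
        mask = torch.bernoulli(edge_prob[:, i])   # M_l ~ Ber(sigma(g_li) sigma(t_li))
        mask[i] = 0.0                             # X_i never conditions on itself
        loss = -f_phi[i](X * mask).log_prob(X[:, i]).mean()
        optimizers[i].zero_grad()
        loss.backward()
        optimizers[i].step()
```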
Algorithm 1: Learning algorithm of ENCO
Data: $N$ variables $\{X_1, ..., X_N\}$; observational dataset $D_{\text{obs}}$; interventional datasets $D_{\text{int}}(\hat{I})$ for sparse, perfect interventions on all variables; distribution over interventions $p(I)$
Result: A graph structure corresponding to the causal relations among variables
Initialize $\gamma = 0$, $\theta = 0$
for number of epochs do
    /* Distribution fitting */
    for $F$ iterations do
        $X \sim D_{\text{obs}}$
        for $i \gets 1$ to $N$ do
            $M_l \sim \text{Ber}(\sigma(\gamma_{li}) \cdot \sigma(\theta_{li}))$
            $L = -\log f_{\phi_i}(X_i; M_{-i} \odot X_{-i})$
            $\phi_i \gets \text{Adam}(\phi_i, \nabla_{\phi_i} L)$
        end
    end
    /* Graph fitting */
    for $G$ iterations do
        /* Sample an intervention */
        $\hat{I} \sim p(I)$ with intervention target $X_t$
        $X \sim D_{\text{int}}(\hat{I})$
        /* Evaluate multiple graph samples for gradient estimator */
        for $k = 1$ to $K$ do
            $C^{(k)} \sim \text{Ber}(\sigma(\gamma) \cdot \sigma(\theta))$
            for $i = 1$ to $N$ do
                $\mathcal{L}_{C^{(k)}}(X_i) \gets -\log f_{\phi_i}(X_i; C^{(k)}_{:,i} \odot X_{-i})$
            end
        end
        /* Update parameters */
        for $i = 1$ to $N$ do
            for $j = 1$ to $N$ where $j \neq i$ and $j \neq t$ do
                /* Considering edge $X_i \to X_j$ */
                $\mathcal{L}_{X_i\to X_j}(X_j) \gets \frac{\sum_{k=1}^{K} C^{(k)}_{ij} \cdot \mathcal{L}_{C^{(k)}}(X_j)}{\sum_{k=1}^{K} C^{(k)}_{ij}}$
                $\mathcal{L}_{X_i\not\to X_j}(X_j) \gets \frac{\sum_{k=1}^{K} (1 - C^{(k)}_{ij}) \cdot \mathcal{L}_{C^{(k)}}(X_j)}{\sum_{k=1}^{K} (1 - C^{(k)}_{ij})}$
                $\gamma_{ij} \gets \gamma_{ij} - \sigma'(\gamma_{ij}) \cdot \sigma(\theta_{ij}) \cdot \left(\mathcal{L}_{X_i\to X_j}(X_j) - \mathcal{L}_{X_i\not\to X_j}(X_j) + \lambda_{\text{sparse}}\right)$
                if $i == t$ then
                    $\theta_{ij} \gets \theta_{ij} - \sigma'(\theta_{ij}) \cdot \sigma(\gamma_{ij}) \cdot \left(\mathcal{L}_{X_i\to X_j}(X_j) - \mathcal{L}_{X_i\not\to X_j}(X_j)\right)$
                    $\theta_{ji} \gets -\theta_{ij}$
                end
            end
        end
    end
end
return $G = (V, E)$ with $V = X$ and $E = \{(X_i, X_j) \mid i, j \in [1, N] \text{ and } i \neq j \text{ and } \sigma(\gamma_{ij}) > 0.5 \text{ and } \sigma(\theta_{ij}) > 0.5\}$
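A compact Python sketch of the graph-fitting stage, reusing `edge_nll_estimates` and `gamma_grad` from the sketch in Appendix A.1 (all helper names are illustrative; `sample_int(t)` is assumed to yield an interventional batch with target $X_t$):

```python
import torch

def graph_fitting(gamma, theta, sample_int, nll_fn,
                  G_iters=100, K=100, lambda_sparse=0.004, lr=2e-2):
    N = gamma.shape[0]
    for _ in range(G_iters):
        t = torch.randint(N, (1,)).item()        # I ~ p(I), uniform here
        X = sample_int(t)
        L_w, L_wo = edge_nll_estimates(gamma, theta, lambda C: nll_fn(X, C), K)
        # gamma update (Eq. 20); edges into the intervened X_t are skipped
        g = gamma_grad(gamma, theta, L_w, L_wo, lambda_sparse)
        g[:, t] = 0.0
        g.fill_diagonal_(0.0)
        gamma -= lr * g
        # theta update (Eq. 22): only orientations adjacent to X_t
        s_t, s_g = torch.sigmoid(theta), torch.sigmoid(gamma)
        gt = s_t[t] * (1 - s_t[t]) * s_g[t] * (L_w[t] - L_wo[t])
        gt[t] = 0.0
        theta[t] -= lr * gt
        theta[:, t] = -theta[t]                  # enforce theta_ji = -theta_ij
    return gamma, theta
```

In the full method, Adam is used in place of plain SGD for $\gamma$ and $\theta$, and this stage alternates with the distribution-fitting stage as in Algorithm 1.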
B CONDITIONS FOR CONVERGING TO THE TRUE CAUSAL GRAPH
The following section gives an overview and proves the conditions under which ENCO converges to the correct causal graph given sufficient time and data. We emphasize that we provide conditions here for which no local optima exist, meaning that if ENCO converges, it returns the correct causal graph. This is a stronger statement than showing that the global optimum corresponds to the true graph, since a gradient-based algorithm can get stuck in a local optimum. We will discuss the conditions for the global optimum in Appendix B.2.5.
To make the proof more accessible, we will first discuss the assumptions that are needed for the guarantee, and then give a sketch of the proof. The proof will first assume that we work in the data limit, i.e., that we are given sufficient data, so that we can derive conditions that solely depend on the causal graphical model. In Appendix B.2.3, we extend the proof to the limited data setting.
B.1 ASSUMPTIONS

Assumption 1 We are given a dataset of observational data from the joint distribution $p(X)$. Additionally, we have $N$ interventional datasets for $N$ variables, where in each intervention a different node is intervened on (the intervention size for each dataset is therefore 1).
Assumption 2 A common assumption in causal structure learning is that the data distribution over all variables p(X) is Markovian and faithful with respect to the causal graph we are trying to model. This means that the graph represents the (conditional) independence relations between variables in the data, and (conditional) independence relations in the data reflect the edges in the graph. For ENCO, faithfulness is not strictly required. This is because we work with interventional data. Instead, we rely on the Markov property and assume that for all variables, the parent set pa(X i ) reflects the inputs to the causal generation mechanism of X i . This allows us to also handle deterministic variables.
Assumption 3 For this proof, we assume that all variables of the graph are known and observable, and no latent confounders exist. Latent confounders can introduce dependencies between variables which are not reflected by the ground truth graph solely on the observed variables. We discuss the extension of latent confounders in Section 3.5 and Appendix B.3.
Assumption 4 ENCO relies on neural networks to determine the conditional data distributions $p(X_i|...)$. Hence, for providing a guarantee, we assume that in the graph learning step the neural networks have been sufficiently trained such that they accurately model all possible conditional distributions $p(X_i|...)$. In practice, the neural networks might have a slight error. However, as long as enough data, network complexity, and training time are provided, it is fair to assume that the difference between the modeled distribution and the true conditional is smaller than an arbitrarily small constant $\epsilon$, based on the universal approximation theorem (Hornik et al., 1989). For the limited data setting, see Appendix B.2.3.
Assumption 5 We are given a sufficiently large interventional dataset such that sampling data points from it models the exact interventional distribution under the true causal graph. This can be achieved by, for example, sampling directly from the causal graph, or having an infinitely large dataset. For the limited data setting, see Appendix B.2.3.
B.2 CONVERGENCE CONDITIONS
The proof of the convergence conditions consists of the following three main steps:
Step 1 We show under which conditions the orientation parameters θ ij will converge to +∞, i.e. σ(θ ij ) → 1, if X i is an ancestor of X j . Similarly, if X i is a descendant of X j , the parameter θ ij will converge to −∞, i.e. σ(θ ij ) → 0.
Step 2 Under the assumption that the orientation parameters have converged as in Step 1, we show that for edges in the true graph, $\sigma(\gamma_{ij})$ will converge to 1. Note that we need to take additional assumptions/conditions with respect to $\lambda_{\text{sparse}}$ here.
Step 3 Once the parameters γ ij and θ ij have converged for the edges in the ground truth graph, we show that all other edges will be removed by the sparsity regularizer.
The following paragraphs provide more details for each step. Note that causal graphs that do not fulfill all parts of the convergence guarantee can still eventually be recovered. The reason is that the conditions listed in the theorems below ensure that there exist no local minima for $\theta$ and $\gamma$ to converge to. Even if local minima exist, the optimization process might still converge to the global minimum corresponding to the true causal graph.

Theorem B.1. Consider the edge $X_i \to X_j$ in the true causal graph. The orientation parameter $\theta_{ij}$ converges to $\sigma(\theta_{ij}) = 1$ if the following two conditions are fulfilled:
(1) for all possible sets of parents of X j excluding X i , adding X i improves the log-likelihood estimate of X j under the intervention on X i , or leaves it unchanged:
$$\forall\, \tilde{pa}(X_j) \subseteq X_{-i,j}: \quad \mathbb{E}_{I_{X_i},X}\left[\log p(X_j \,|\, \tilde{pa}(X_j), X_i) - \log p(X_j \,|\, \tilde{pa}(X_j))\right] \ge 0 \tag{23}$$
(2) there exists a set of nodes pa(X j ), for which the probability to be sampled as parents of X j is greater than 0, and the following condition holds:
$$\exists\, \tilde{pa}(X_j) \subseteq X_{-i,j}: \quad \mathbb{E}_{I_{X_i},X}\left[\log p(X_j \,|\, \tilde{pa}(X_j), X_i) - \log p(X_j \,|\, \tilde{pa}(X_j))\right] > 0 \tag{24}$$
Proof. Based on the conditions in Equations 23 and 24, we need to show that the gradient of θ ij is negative in expectation, independent of other values of γ and θ. For readability, we define the following function:
$$T(X_k, X_l) = \mathbb{E}_{I_{X_k},X,C_{-kl}}\left[\mathcal{L}_{X_k\to X_l}(X_l) - \mathcal{L}_{X_k\not\to X_l}(X_l)\right] \tag{25}$$
Hence, the gradient of θ ij can be written as:
$$\frac{\partial}{\partial\theta_{ij}}\mathcal{L} = \sigma'(\theta_{ij})\cdot\Big(p(I_{X_i})\cdot\sigma(\gamma_{ij})\cdot T(X_i,X_j) - p(I_{X_j})\cdot\sigma(\gamma_{ji})\cdot T(X_j,X_i)\Big) \tag{26}$$
Looking at the gradient of $\theta_{ij}$ in Equation 26, the conditions correspond to $T(X_i, X_j)$ being smaller than or equal to zero. Note that the sign is flipped because $T(X_i, X_j)$ contains negative log-likelihoods, represented by $\mathcal{L}_{X_k\to X_l}(X_l)$, while Equations 23 and 24 contain log-likelihoods. Further, the other factors $\sigma'(\theta_{ij})$, $\sigma(\gamma_{ij})$, and $p(I_X)$ all lie in the range $(0, 1)$, meaning that the sign of the gradient is solely determined by $T(X_i, X_j)$ and $T(X_j, X_i)$. If $T(X_i, X_j) - T(X_j, X_i)$ is smaller than zero, then the gradient of $\theta_{ij}$ is negative, i.e., $\theta_{ij}$ increases.
First, we look at when T (X i , X j ) < 0. The condition in Equation 23 ensures that conditioning X j on a true parent X i when intervening on X i does not lead to a worse log-likelihood estimate than without. While this condition might seem natural, there are special cases where this condition does not hold for all variables (see Section B.2.4). The second condition, Equation 24, guarantees that there is at least one parent set for which T (X i , X j ) is negative. Therefore, in expectation over all possible adjacency matrices p γ,θ (C), T (X i , X j ) is smaller than zero if the two conditions hold.
To guarantee that the whole gradient of θ ij is negative, we also need to show that for interventions on X j , T (X j , X i ) can only be positive. When intervening on X j , X i and X j become independent as the edge X i → X j is removed in the intervened graph. A distribution p(X i |X j , ...) relying on correlations between X i and X j from observational data cannot achieve a better estimate than the same distribution when removing X j . This is because the cross entropy is minimized when the sampled distribution, in this case p(X i ), is equal to the log-likelihood estimator (Cover & Thomas, 2005):
$$-\sum_{x_i, x_j} p(X_i = x_i)\,\log p(X_i = x_i) \le -\sum_{x_i, x_j} p(X_i = x_i)\,\log q(X_i = x_i \,|\, X_j = x_j) \tag{27}$$
The only situation where X i and X j can become conditionally dependent under interventions on X j is if X i and X j share a collider X k , and X i is being conditioned on the collider X k and X j . However, this requires that θ ki has negative gradients, i.e. θ ki increasing, when intervening on X k . This cannot be the case since under interventions on X k , X i and X k become conditionally independent, and the correlations learned from observational data cannot be transferred to the interventional setting. If X k and X i again share a collider, we can apply this argumentation recursively until a node X n does not share a collider with X i . The recursion will always come to an end as we have a finite set of nodes, and the causal graph is assumed to be acyclic.
Thus, if the conditions in Equations 23 and 24 hold for an edge X i → X j in the causal graph, we can guarantee that with sufficient time and data, the corresponding orientation parameter θ ij will converge to σ(θ ij ) = 1.
Theorem B.2. Consider a pair of variables X i , X j for which X i is an ancestor of X j without direct edge in the true causal graph. Then, the orientation parameter of the edge X i → X j converges to σ(θ ij ) = 1 if the same conditions as in Theorem B.1 hold for the pair of X i , X j .
Proof. To show this theorem, we need to consider two cases for a pair of variables X i and X j : X i and X j are conditionally independent under a sampled adjacency matrix C, or X i and X j are not independent. Both cases need to be considered for an intervention on X i with the log-likelihood estimate of X j , and an intervention on X j with the log-likelihood estimate of X i .
First, we discuss interventions on X i . If under the sampled adjacency matrix C, X j is conditionally independent of X i , the difference in the log-likelihood estimates T (X i , X j ) is zero in expectation.
The variables can be independent if, for example, the sampled parents of $X_j$ are all parents from the true causal graph. If $X_j$ is not conditionally independent of $X_i$, the conditions in Equations 23 and 24 from Theorem B.1 ensure that $X_i$ has, in expectation, a positive effect on the log-likelihood estimate of $X_j$. Thus, under interventions on $X_i$, the gradient of $\theta_{ij}$ is smaller than or equal to zero, i.e., it increases $\theta_{ij}$.
Next, we consider interventions on $X_j$. If under the sampled adjacency matrix $X_i$ is conditionally independent of $X_j$, the difference in the log-likelihood estimates $T(X_j, X_i)$ is zero. The variables can be independent if $X_i$ is conditioned on variables that d-separate $X_i$ and $X_j$ in the true causal graph. For instance, having the children of $X_i$ as parents of $X_i$ creates this scenario. However, for this scenario to take place, one or more orientation parameters of parent-child or ancestor-descendant pairs must have converged incorrectly. In case of a parent-child pair $X_i, X_k$, Theorem B.1 shows that $\sigma(\theta_{ik})$ will converge to one, removing any possibility of a reversed edge being sampled. In case of an ancestor-descendant pair $X_i, X_l$, we can apply a recursive argument: as $X_l$ d-separates $X_i$ and $X_j$, $X_l$ must come before $X_j$ in the causal order. If, for the gradient of $\theta_{il}$, we have a similar scenario with $X_i$ being conditionally independent of $X_j$, the same argument applies. This can be recursively applied until no more variables except direct children of $X_i$ can d-separate $X_i$ and $X_j$. In that case, $\sigma(\theta_{ik})$ will converge to one, which leads to all other orientation parameters converging to one as well. If $X_i$ is not conditionally independent of $X_j$, we can fall back on the argumentation of Theorem B.1 for an edge $X_i \to X_j$: as $X_i$ and $X_j$ are independent in the intervened causal graph, any correlation learned from observational data can only lead to a worse log-likelihood estimate. In case of colliders, we can rely on the recursive argument from before. Thus, under interventions on $X_j$, the gradient of $\theta_{ij}$ must be smaller than or equal to zero in expectation, i.e., it increases $\theta_{ij}$.
Therefore, we can conclude that $\sigma(\theta_{ij})$ converges to one for any ancestor-descendant pair $X_i, X_j$ under the conditions in Theorem B.1.
Theorem B.3. Consider an edge X i → X j in the true causal graph. The parameter γ ij converges to σ(γ ij ) = 1 if the following condition holds:
$$\min_{\tilde{pa} \subseteq \text{gpa}_i(X_j)} \mathbb{E}_{\hat{I}\sim p_{I_{-j}}(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\left[\log p(X_j\,|\,\tilde{pa}, X_i) - \log p(X_j\,|\,\tilde{pa})\right] > \lambda_{\text{sparse}} \tag{28}$$
where $\text{gpa}_i(X_j)$ is the set of nodes excluding $X_i$ which, according to the ground truth graph, could have an edge to $X_j$ without introducing a cycle, and $p_{I_{-j}}(I)$ refers to the distribution over interventions $p_I(I)$ excluding the intervention on variable $X_j$.
Proof. To show this convergence, we assume that the orientation parameters have converged according to Theorems B.1 and B.2. The parameter $\gamma_{ij}$ converges to $\sigma(\gamma_{ij}) = 1$ if its gradient, $\frac{\partial}{\partial\gamma_{ij}}\mathcal{L}$, is negative independent of the other values of $\gamma$ and of the orientation parameters $\theta$ that are not covered by Theorems B.1 and B.2. The gradient of $\gamma_{ij}$ includes an expectation over adjacency matrices $p_{\gamma,\theta}(C)$. Based on the converged $\theta$-values, we only need to consider sets of nodes as parents for $X_j$ that contain parents, ancestors, or (conditionally) independent nodes according to the ground truth graph. This set of candidate parents is represented by $\text{gpa}_i(X_j)$. Among the remaining parent sets, we need to ensure that for any such set, the gradient is negative. The condition in Equation 28 corresponds to the inequality $\frac{\partial}{\partial\gamma_{ij}}\mathcal{L} < 0$, since the term on the left represents the log-likelihood difference $\mathcal{L}_{X_i\to X_j}(X_j) - \mathcal{L}_{X_i\not\to X_j}(X_j)$ in the gradient of $\gamma_{ij}$ in Equation 20 with a flipped sign. For readability and better interpretation, $\lambda_{\text{sparse}}$ has been moved to the right side of the inequality. This is possible as $\lambda_{\text{sparse}}$ is independent of the two expectations in Equation 28. If the inequality holds for all parent sets $\tilde{pa}$, the gradient of $\gamma_{ij}$ can be guaranteed to be negative in expectation, independent of the other values of $\gamma$. Since the distribution over parent sets $\tilde{pa}$ depends on other values of $\gamma$, the condition in Equation 28 ensures that even for the parent set with the lowest log-likelihood difference, it is still larger than $\lambda_{\text{sparse}}$. If this condition holds, then the gradient $\frac{\partial}{\partial\gamma_{ij}}\mathcal{L}$ will be smaller than zero independent of other values of $\gamma$.
The condition in Equation 28 introduces a dependency between the convergence guarantees and the regularization weight $\lambda_{\text{sparse}}$. The lower we set $\lambda_{\text{sparse}}$, the more edges we can guarantee to recover. If the regularization weight is set too high, we can eventually obtain false negative edge predictions. If the regularization weight is set very low, convergence takes longer as it requires lower gradient variance or more update steps, and is more sensitive in a limited data regime. Nonetheless, if sufficient computational resources and data are provided, any value of $\lambda_{\text{sparse}} > 0$ can be used.

Theorem B.4. Assume that for all edges $X_i \to X_j$ in the true causal graph, $\sigma(\theta_{ij})$ and $\sigma(\gamma_{ij})$ have converged to one. Then, the likelihood of all other edges, i.e., $\sigma(\gamma_{lk})\cdot\sigma(\theta_{lk})$, will converge to zero under the condition that $\lambda_{\text{sparse}} > 0$.
Proof. If all edges in the ground truth graph have converged, all other pairs of variables $X_l, X_k$ are (conditionally) independent in the graph. This statement follows from the Markov property of the graph and excludes ancestor-descendant pairs $X_i, X_j$. The possibility of having edges from descendants to ancestors has been removed by the fact that the orientation parameters $\theta_{ij}$ have converged according to Theorem B.2. Thus, for those cases, we already have the guarantee that $\sigma(\gamma_{ji})\cdot\sigma(\theta_{ji})$ converges to zero.
For a conditionally independent pair $X_l, X_k$, the difference of the log-likelihood estimates in the gradient of $\gamma_{lk}$, i.e., $\mathcal{L}_{X_l\to X_k}(X_k) - \mathcal{L}_{X_l\not\to X_k}(X_k)$, is zero in expectation, since independent nodes do not share any information. Thus, the remaining gradient is:
$$\frac{\partial}{\partial\gamma_{lk}}\mathcal{L} = \sigma'(\gamma_{lk})\cdot\sigma(\theta_{lk})\cdot\lambda_{\text{sparse}} \tag{29}$$
Since the gradient is positive independent of the values of γ lk and θ lk , γ lk will decrease until it converges to σ(γ lk ) = 0.
Hence, if γ lk decreases for all pairs of (conditionally) independent variables X l , X k in the ground truth graph, and σ(θ lk ) converged to zero for children and descendants, the product σ(γ lk ) · σ(θ lk ) will converge to zero for all edges not existing in the ground truth graph.
For graphs that fulfill all conditions in Theorems B.1 to B.4, ENCO is guaranteed to converge given sufficient data and time. The conditions in the theorems ensure that there exist no local minima or saddle points in the loss surface of the objective in Equation 2 with respect to $\gamma$ and $\theta$.
Summary We can summarize the conditions discussed above as follows. Given a causal graph G with variables X 1 , ..., X N and sparse interventions on all variables, the proposed method ENCO will converge to the true, causal graph G, if the following three conditions hold for all edges X i → X j in the true causal graph G:
1. For all possible sets of parents of X j excluding X i , adding X i improves the log-likelihood estimate of X j under the intervention on X i , or leaves it unchanged:
$$\forall\, \tilde{pa}(X_j) \subseteq X_{-i,j}: \quad \mathbb{E}_{I_{X_i},X}\left[\log p(X_j \,|\, \tilde{pa}(X_j), X_i) - \log p(X_j \,|\, \tilde{pa}(X_j))\right] \ge 0 \tag{30}$$
2. There exists a set of nodes pa(X j ), for which the probability to be sampled as parents of X j is greater than 0, and the following condition holds:
$$\exists\, \tilde{pa}(X_j) \subseteq X_{-i,j}: \quad \mathbb{E}_{I_{X_i},X}\left[\log p(X_j \,|\, \tilde{pa}(X_j), X_i) - \log p(X_j \,|\, \tilde{pa}(X_j))\right] > 0 \tag{31}$$
3. The effect of $X_i$ on $X_j$ cannot be described by other variables up to $\lambda_{\text{sparse}}$:

$$\min_{\tilde{pa} \subseteq \text{gpa}_i(X_j)} \mathbb{E}_{\hat{I}\sim p_{I_{-j}}(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\left[\log p(X_j\,|\,\tilde{pa}, X_i) - \log p(X_j\,|\,\tilde{pa})\right] > \lambda_{\text{sparse}} \tag{32}$$
where gpa i (X j ) is the set of nodes excluding X i which, according to the ground truth graph, could have an edge to X j without introducing a cycle.
Further, for all other pairs X i , X j for which X j is a descendant of X i , conditions (1) and (2) need to hold as well.
B.2.1 EXAMPLE FOR CHECKING CONVERGENCE CONDITIONS
In the following, we will provide a walkthrough for how the conditions above can be checked on a simple example graph. For further details on the precise calculations, we provide a Jupyter Notebook that contains all calculations in this example 1 .
Suppose we have a graph with 3 binary variables, X 1 , X 2 , X 3 , with the causal graph being X 1 → X 2 → X 3 , i.e., a small chain. For simplicity, let us assume that the true, conditional distributions are the following:
$$p(X_1) = \text{Bern}(0.7) \tag{33}$$
$$p(X_2\,|\,X_1) = \begin{cases} X_1 & \text{with prob. } 0.6 \\ X_1 \oplus 1 & \text{with prob. } 0.4 \end{cases} \tag{34}$$
$$p(X_3\,|\,X_2) = \begin{cases} X_2 & \text{with prob. } 0.2 \\ X_2 \oplus 1 & \text{with prob. } 0.8 \end{cases} \tag{35}$$
In other words, $X_2$ is equal to the value of $X_1$ with a probability of 0.6, and to the opposite binary value otherwise. Similarly, $X_3$ is equal to the value of $X_2$ with a probability of 0.2, and to the opposite binary value with a probability of 0.8. Therefore, the sample with the highest probability in this joint distribution is $X_1 = 1, X_2 = 1, X_3 = 0$. Further, we assume that all interventions replace the respective conditional distribution by a uniform distribution, i.e., $p_{I_{X_i}}(X_i) = \text{Bern}(0.5)$. Next, we will check the conditions for the edges in $G$, i.e., $X_1 \to X_2$ and $X_2 \to X_3$, and for the remaining ancestor-descendant pair $X_1, X_3$.
Edge X 1 → X 2 :
• Condition 1: the possible parent sets that exclude X 1 and X 2 are X 3 and the empty set. For the empty set, we get:
$$\mathbb{E}_{I_{X_1},X}\left[\log p(X_2\,|\,X_1) - \log p(X_2)\right] \approx 0.023 \ge 0$$
For conditioning on X 3 , we obtain:
$$\mathbb{E}_{I_{X_1},X}\left[\log p(X_2\,|\,X_1, X_3) - \log p(X_2\,|\,X_3)\right] \approx 0.015 \ge 0$$
Since both values are greater than zero, condition 1 is fulfilled for X 1 → X 2 .
• Condition 2: is already fulfilled by the equations in condition 1 since all parent sets have a difference greater than zero.
• Condition 3: the set $\text{gpa}_1(X_2)$ is the empty set, since $X_3$ is a descendant of $X_2$ and no other nodes exist in the graph. Thus, the parent set minimizing the expression on the left can only be the empty set, and we can calculate it as follows:
$$\mathbb{E}_{\hat{I}\sim p_{I_{-2}}(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\left[\log p(X_2\,|\,X_1)-\log p(X_2)\right] \approx \underbrace{1/2 \cdot 0.023}_{I_{X_1}} + \underbrace{1/2 \cdot 0.017}_{I_{X_3}} = 0.020 > \lambda_{\text{sparse}}$$

assuming $p_I(I)$ to be the uniform distribution, and excluding $I_{X_2}$ since we do not update $\gamma_{12}$ in this case. Hence, as long as $\lambda_{\text{sparse}}$ is smaller than 0.02, the condition is fulfilled.
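The expectations in this walkthrough can be checked by enumerating all binary assignments. The following minimal sketch (in the spirit of the referenced notebook, but not taken from it) reproduces the condition-1 value of roughly 0.023 for the edge $X_1 \to X_2$ with the empty parent set:

```python
import itertools, math

# Conditional distributions of the chain X1 -> X2 -> X3 (Eqs. 33-35).
def p_x1(x1): return 0.7 if x1 == 1 else 0.3
def p_x2_given_x1(x2, x1): return 0.6 if x2 == x1 else 0.4

# Observational marginal p(X2), used for the empty parent set.
p_x2 = {x2: sum(p_x1(x1) * p_x2_given_x1(x2, x1) for x1 in (0, 1))
        for x2 in (0, 1)}

# Condition 1 for X1 -> X2, empty parent set, under the intervention on X1
# (which replaces p(X1) by Bern(0.5)):
val = 0.0
for x1, x2 in itertools.product((0, 1), repeat=2):
    w = 0.5 * p_x2_given_x1(x2, x1)          # interventional joint over (x1, x2)
    val += w * (math.log(p_x2_given_x1(x2, x1)) - math.log(p_x2[x2]))
print(round(val, 3))                          # 0.023, matching the value above
```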
Edge X 2 → X 3 :
• Condition 1: the possible parent sets that exclude X 2 and X 3 are X 1 and the empty set. For the empty set, we get:
$$\mathbb{E}_{I_{X_2},X}\left[\log p(X_3\,|\,X_2) - \log p(X_3)\right] \approx 0.194 \ge 0$$
For conditioning on X 1 , we obtain:
$$\mathbb{E}_{I_{X_2},X}\left[\log p(X_3\,|\,X_2, X_1) - \log p(X_3\,|\,X_1)\right] \approx 0.200 \ge 0$$
Since both values are greater than zero, condition 1 is fulfilled for $X_2 \to X_3$.
• Condition 2: is already fulfilled by the equations in condition 1 since all parent sets have a difference greater than zero.
• Condition 3: the set $\text{gpa}_2(X_3)$ contains the variable $X_1$, since we can introduce an edge $X_1 \to X_3$ without introducing a cycle in the true causal graph. Thus, we need to compare two parent sets for finding the minimum of the left-side term: $X_1$ and the empty set. First, we consider the empty set:
$$\mathbb{E}_{\hat{I}\sim p_{I_{-3}}(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\left[\log p(X_3\,|\,X_2)-\log p(X_3)\right] \approx \underbrace{1/2 \cdot 0.194}_{I_{X_1}} + \underbrace{1/2 \cdot 0.194}_{I_{X_2}} = 0.194 > \lambda_{\text{sparse}}$$
Again, we exclude $I_{X_3}$ since we do not update $\gamma_{23}$ in this case. The second case considers $X_1$ as the additional parent set $\tilde{pa}$:
$$\mathbb{E}_{\hat{I}\sim p_{I_{-3}}(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\left[\log p(X_3\,|\,X_2, X_1) - \log p(X_3\,|\,X_1)\right] \approx \underbrace{1/2 \cdot 0.186}_{I_{X_1}} + \underbrace{1/2 \cdot 0.200}_{I_{X_2}} = 0.193 > \lambda_{\text{sparse}}$$
The minimum of both values is 0.193. Hence, the edge X 2 → X 3 can be recovered if 0.193 > λ sparse .
Ancestor-descendant pair X 1 , X 3 :
• Condition 1: the possible parent sets that exclude X 1 and X 3 are X 2 and the empty set. For the empty set, we get:
$$\mathbb{E}_{I_{X_1},X}\left[\log p(X_3\,|\,X_1) - \log p(X_3)\right] \approx 0.008 \ge 0$$
For conditioning on X 2 , we obtain:
$$\mathbb{E}_{I_{X_1},X}\left[\log p(X_3\,|\,X_1, X_2) - \log p(X_3\,|\,X_2)\right] = 0 \ge 0$$
The difference is zero because $X_3$ is independent of $X_1$ when conditioned on $X_2$: $p(X_3\,|\,X_1, X_2) = p(X_3\,|\,X_2)$. Since both values are greater than or equal to zero, condition 1 is fulfilled for the pair $X_1, X_3$.
• Condition 2: from condition 1, we can see that the parent set $\tilde{pa}(X_3)$ being the empty set is the only option that fulfills the condition of being greater than zero. Since we start the optimization process with an initialization that assigns a non-zero probability to all possible parent sets, it follows that $\tilde{pa}(X_3)$ being the empty set has a probability greater than zero throughout the optimization process. Hence, condition 2 is fulfilled as well.
Summary: in conclusion, for the discussed example, we can guarantee that ENCO converges to the correct causal graph if $\lambda_{\text{sparse}} < 0.02$. To experimentally verify these results, we applied ENCO on this graph with two hyperparameter settings for the sparsity regularizer: $\lambda_{\text{sparse}} = 0.019$ and $\lambda_{\text{sparse}} = 0.021$. We considered a very large sample size, more specifically 10k samples per intervention and 100k observational samples, to simulate the data-limit regime. For $\lambda_{\text{sparse}} = 0.019$, ENCO was able to recover the graph without errors, while for $\lambda_{\text{sparse}} = 0.021$, the edge $X_1 \to X_2$ was, as expected, missed. This verifies the theoretical result above with respect to $\lambda_{\text{sparse}}$. Note that if the condition is not fulfilled because of too large a sparsity regularizer, this does not necessarily mean that ENCO will not be able to recover the graph. This is because condition 3 considers the 'worst-case' parent set, which might not occur in the true causal graph to which the other edges converge.
B.2.2 INTUITION BEHIND CONDITION 1 AND 2
As mentioned in the text, conditions 1 and 2 of Theorem 3.1 ensure that the orientation probabilities cannot converge to any local optima. Since the conditions explicitly involve the data distributions and implicitly the gradient estimators, we provide below an alternative assumption from a data-generation-mechanism perspective that ensures conditions 1 and 2 are satisfied.
Firstly, we assume that ancestors and descendants are not independent under interventions on the ancestors. Note that there can exist graphs where ancestors are independent of descendants, for instance in a linear Gaussian setting when the ancestor has a weight of zero on the descendant. However, such graphs, which violate faithfulness, are impossible to identify for any causal discovery method, since the variables are independent under any setting. In terms of conditions 1 and 2, it would imply that the expected log-likelihood difference is always zero.
Next, we show that under the previous assumption, local optima of the orientation probabilities can only occur in the following structure: for an edge $X_i \to X_j$, there exist one or more parent(s) of $X_j$ sharing a common confounder $X_k$ with $X_i$, where $X_k$ is not a direct parent of $X_j$. An example of this structure is the following: $X_1 \to X_2, X_3$; $X_2, X_3 \to X_4$, where the orientations of the edges $X_2 \to X_4$ and $X_3 \to X_4$ could have a local optimum. This statement can be proven as follows by using the do-calculus (Pearl, 2009). Suppose a graph that includes the three variables $X_1, X_2, X_3$ with $X_1 \to X_2$, $X_3 \to X_2$, and $X_2$ having no parents besides $X_1$ and $X_3$. If $X_1$ and $X_3$ do not share a confounder, then, from do-calculus, we know that $p(X_2\,|\,do(X_1 = x_1)) = p(X_2\,|\,X_1 = x_1)$ and $p(X_2\,|\,do(X_3 = x_3)) = p(X_2\,|\,X_3 = x_3)$. Furthermore, since the conditional entropy of a variable can only be smaller than or equal to the marginal, i.e., $H(X) \ge H(X|Y)$, estimating $X_2$ under interventions on $X_1$ can only be improved by conditioning on $X_1$, and similarly for $X_3$. Thus, condition 1 is strictly fulfilled when parents do not share a confounder, under the previous assumption of no independence in all possible settings. Now, consider the situation where $X_1$ and $X_3$ share a common confounder. Then, from do-calculus, we can state that there can exist a parameterization of the conditional distributions for which $p(X_2\,|\,do(X_1 = x_1)) \neq p(X_2\,|\,X_1 = x_1)$. Under this setting, we cannot guarantee that condition 1 is always fulfilled. However, whether this parent-confounder structure actually leads to a local optimum or not depends on the distributions, which condition 1 models. Intuitively, this requires the mutual information between the two or more parents to be very high, and the initial edge probabilities of those edges to be very low. Further, as the results show, this combination of events is not very common in practice, meaning that as long as the ancestor and descendant are not independent under the interventions, we usually converge to a graph with the correct orientation.
Besides, if the confounder $X_k$ of $X_1$ and $X_3$ is a parent of $X_2$, then the local optimum would disappear upon learning that edge, since $p(X_2\,|\,do(X_1 = x_1), X_k) = p(X_2\,|\,X_1 = x_1, X_k)$. In conclusion, for many of the graph structures like chain, bidiag, collider, full, and jungle, this shows that there do not exist any local optima for the orientation probabilities. Only for the certain structures of confounded parents may there exist local optima, depending on the specific distribution parameterization.
B.2.3 LIMITED DATA REGIME
Assumptions 4 and 5 are taken with respect to the data limit, such that the conditions derived above solely depend on the given causal graphical model. However, in practice, we often have a limited dataset. The proof presented for the data limit is straightforward to extend to this setting with the following modifications:
Figure 8: Causal graph structures for which, under specific parameterization of the conditional distributions, the conditions for guaranteeing convergence can be violated: (a) a fork example; (b) a fully connected example.
• The conditional distributions p(X|...) are replaced by the conditional distributions that follow from the given, observational data.
• The expectation over the interventional data $\tilde{p}_{\hat{I}}(X)$ is replaced by the joint distribution over the samples given for the intervention $\hat{I}$.
• Theorem B.1 and B.2 for the edge X i → X j are extended as follows:
(3) For all possible sets of parents of $X_i$ excluding $X_j$, adding $X_j$ does not improve the log-likelihood estimate of $X_i$ under the intervention on $X_j$:
$$\forall\, \tilde{pa}(X_i) \subseteq X_{-i,j}: \quad \mathbb{E}_{I_{X_j},X}\left[\log p(X_i \,|\, \tilde{pa}(X_i), X_j) - \log p(X_i \,|\, \tilde{pa}(X_i))\right] \le 0 \tag{36}$$
This condition is the inverse statement of Equation 23, in the sense that we consider interventions on the child/descendant $X_j$. In the data limit, this naturally follows from Equation 23.
• Finally, Theorem B.4 does not necessarily hold anymore, since noise in our data can lead to an overestimation of edges. Thus, we add the following condition:
(1) For all pairs of variables $X_i, X_j$ for which there exists no direct causal relation in the true causal graph, and $X_j$ is not an ancestor of $X_i$, the following condition has to hold:

$$\min_{\tilde{pa} \subseteq (\text{gpa}_i(X_j)\setminus pa(X_j))} \mathbb{E}_{\hat{I}\sim p_{I_{-j}}(I)}\,\mathbb{E}_{\tilde{p}_{\hat{I}}(X)}\left[\log p(X_j\,|\,pa(X_j), \tilde{pa}, X_i)-\log p(X_j\,|\,pa(X_j), \tilde{pa})\right] < \lambda_{\text{sparse}} \tag{37}$$

where $\text{gpa}_i(X_j)$ is the set of nodes excluding $X_i$ which, according to the ground truth graph, could have an edge to $X_j$ without introducing a cycle.
This condition ensures that no correlations due to sample biases introduce additional edges in the causal graphs.
If the conditions discussed above hold with respect to the given observational and interventional dataset, we can guarantee that ENCO will converge to the true causal graph given sufficient time.
B.2.4 GRAPHS WITH LIMITED GUARANTEES
Most common causal graphs fulfill the conditions mentioned above, as long as a small enough value for $\lambda_{\text{sparse}}$ is chosen. Still, there are situations where we cannot guarantee that ENCO converges to the correct causal graph, independent of the chosen value of $\lambda_{\text{sparse}}$. Here, we want to discuss two scenarios, visualized in Figure 8, under which the guarantees fail. Still, we want to emphasize that despite graphs not fulfilling the conditions, ENCO might still converge to the correct DAG for those, as the guarantee conditions assume the worst-case scenarios for $\theta$ and $\gamma$ in all situations.
The first example we discuss is based on a fork structure where we have three binary variables, $\{X_1, X_2, X_3\}$, and the edges $X_1 \to X_3$ and $X_2 \to X_3$ (see Figure 8a). The parameterization we look at is a (noisy) XOR gate for $X_3$, with its two input variables $X_1, X_2$ being independent of each other and uniformly distributed. The conditional probability distribution $p(X_3|X_1, X_2)$ can be summarized in the following probability function:
$$p(X_3 = 1\,|\,X_1, X_2) = \begin{cases} \epsilon & \text{if } X_1 = 0, X_2 = 0 \\ 1-\epsilon & \text{if } X_1 = 1, X_2 = 0 \\ 1-\epsilon & \text{if } X_1 = 0, X_2 = 1 \\ \epsilon & \text{if } X_1 = 1, X_2 = 1 \end{cases} \tag{38}$$
In other words, if $X_1 \neq X_2$, $X_3$ equals 1 with a likelihood of $1-\epsilon$; if $X_1 = X_2$, $X_3$ equals 1 with a likelihood of $\epsilon$. The issue that this probability table creates is the following. Knowing only one out of the two variables does not improve the log-likelihood estimate for the output. This is because $X_1$ and $X_2$ are independent of each other, and $p(X_3|X_1) = p(X_3)$ is a uniform distribution. Hence, the worst-case parent set in Equation 6 would be the empty set, which leads to an expected log-likelihood difference of zero. As $\lambda_{\text{sparse}}$ is required to be greater than zero for Theorem B.4, we cannot fulfill the condition for this graph. This means that an empty graph without any edges is a local minimum to which ENCO could converge. Yet, when the edge probabilities are non-zero, we will sample adjacency matrices with both input variables being parents of $X_3$ with a non-zero probability. Hence, the log-likelihood difference for $X_1$ and $X_2$ to $X_3$ is non-zero. Further, this graph is still often correctly discovered despite ENCO not having a convergence guarantee for it. We have conducted experiments on this graph with $\epsilon \in \{0.1, 0.2, 0.3, 0.4, 0.45\}$ using a sparsity regularizer of $\lambda_{\text{sparse}} = \text{1e-4}$, and in all cases, ENCO converged to the correct, acyclic graph. Note that values of $\epsilon$ close to 0.5 are most challenging, because the difference between the true conditional and the marginal distribution approaches zero.
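The key property of this counterexample, $p(X_3|X_1) = p(X_3)$, is easy to verify by marginalizing over the uniform $X_2$; a minimal sketch (illustrative only):

```python
eps = 0.2   # the epsilon of the noisy XOR gate in Eq. 38

def p_x3_is_one(x1, x2):
    """p(X3 = 1 | X1, X2): 1 - eps if the inputs differ, eps if they match."""
    return (1 - eps) if x1 != x2 else eps

# With X1 and X2 independent and uniform, X3 is uniform given X1 alone:
for x1 in (0, 1):
    p = sum(0.5 * p_x3_is_one(x1, x2) for x2 in (0, 1))
    print(x1, p)   # prints 0.5 for both values of x1, so p(X3|X1) = p(X3)
```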
The second example we want to discuss aims at graphs that violate the condition in Theorem B.1, more specifically Equation 23. The graph we consider is a fully connected graph with three variables X 1 , X 2 , X 3 (see Figure 8b). The scenario can be described as follows: if knowing X 2 informs the log-likelihood estimate of X 3 more about X 1 than about X 2 itself, an intervention on X 2 and a sampled graph with the edge X 2 → X 3 could lead to a worse likelihood estimate of X 3 than without the edge. For this scenario to happen, p(X 2 |X 1 ) must be close to deterministic. Additionally, p(X 3 |X 1 , X 2 ) must be much less reliant on X 2 than on X 1 , such as in the following probability density:
$$p(X_3 = 1\,|\,X_1, X_2) = \begin{cases} \epsilon_1 & \text{if } X_1 = 0, X_2 = 0 \\ 1-\epsilon_1 & \text{if } X_1 = 1, X_2 = 0 \\ \epsilon_2 & \text{if } X_1 = 0, X_2 = 1 \\ 1-\epsilon_2 & \text{if } X_1 = 1, X_2 = 1 \end{cases} \tag{39}$$
The two variables $\epsilon_1, \epsilon_2$ represent small constants close to zero. In this case, the graph can violate the condition in Equation 23, since intervening on $X_2$ breaks the dependency between $X_1$ and $X_2$. The conditional distribution $p(X_3|X_2)$ learned from observational data relies on the dependency between $X_1$ and $X_2$, which can make it a worse estimate than $p(X_3)$. Note that if the edge $X_1 \to X_3$ is learned by ENCO, this no longer constitutes a problem, since with conditioning on $X_1$, i.e., $p(X_3|X_2, X_1)$, the edge $X_2 \to X_3$ will gain a gradient towards the correct graph. Thus, when $\gamma$ and $\theta$ are not initialized with the worst-case values, the graph with both $X_1$ and $X_2$ as parents of $X_3$ can be sampled and provides gradients in the correct direction. Further, we did not observe any of these situations in the synthetic and real-world graphs we experimented on.
B.2.5 CONDITIONS FOR THE GLOBAL OPTIMUM
So far, the discussion focused on proving that the optimization space does not contain any local optima with respect to the graph parameters γ and θ besides the global optimum. If these conditions are violated, ENCO might still converge to the correct solution, since we are not guaranteed to find and get stuck in one of these local optima. Thus, in this section, we provide conditions under which the ground truth graph is the global optimum of the objective in Equation 2. Graphs that fulfill these conditions are very likely to be correctly identified by ENCO, but with a suboptimal choice of hyperparameters, initial starting conditions etc., we could return an incorrect graph.
The conditions and proof follow a similar structure to those as before for the local optima. We first discuss when we can guarantee that the global optima has the same orientation of edges, and then when we also find the correct parent set of the remaining variables.
Theorem B.5. For every pair of variables $X_i, X_j$ where $X_i$ is a parent of $X_j$, the graph $\hat{G}$ that optimizes the objective in Equation 2 models the orientation $X_i \to X_j$, if there exists an edge between $X_i$ and $X_j$ in $\hat{G}$, under the following conditions:
• X i and X j are not independent under observational data.
• Under interventions on X i , X i and X j are not independent given the true parent set of X j .
Proof. If X i and X j are independent under observational data, the observational distributions would not identify any correlation among those two variables. Hence, transferring them for any graph to interventional data would have p(X i |...) = p(X i |X j , ...), thus making the objective invariant to the orientation of the edge, and removing any edge between X i and X j for sparsity.
If X i and X j are dependent, we can prove the statement by showing that modeling the orientation X j → X i will strictly lead to a worse estimate under the intervention on X j since the orientation parameters are optimized by comparing the interventions of the two adjacent variables. Under interventions on X i , the causal mechanism p(X j |pa(X j )), with pa(X j ) being the parent set of the ground truth graph including X i , remains invariant under interventions on X i , and is strictly better than p(X j |pa(X j )\X i ) for estimating X j due to the direct causal relation. Under interventions on X j , the causal mechanism p(X i |X j , ...) leads to a strictly worse estimate as discussed in Theorem B.1, since the dependency between X i and X j does not exist in the interventional regime. Hence, the inverse orientation of the edge X i → X j , i.e. X j → X i cannot be part of the global optimum.
Theorem B.6. For every pair of variables $X_i, X_j$ where $X_i$ is an ancestor but not a direct parent of $X_j$, the graph $\hat{G}$ that optimizes the objective in Equation 2 does not include the edge $X_j \to X_i$ if the conditions in Theorem B.5 hold.
Proof. To show this statement, we need to consider different independence relations between X i and X j . First, if X i and X j are independent in the observational dataset given any conditional set, the edge will be removed since any edge between two independent variables is removed for any λ sparse > 0. The same holds if X i and X j are independent for interventions on X i and X j .
If they are dependent, we can follow a similar argument as in Theorem B.2. The causal mechanism p(X j |X i , ...) transfers from observational to interventional data on X i since on interventions on X i , the causal mechanism of X j is not changed. Further, when intervening on X j , X i and X j become independent such that any mechanism p(X i |X j , ...) cannot transfer except if X i and X j are independent under interventions on X j . In this case, the edge will be again removed by the sparsity regularizer. This shows that for any setting, the orientation of the edge X j → X i cannot lead to a better estimate than X i → X j , and in case of independence, the edge X j → X i will be removed as well.
Theorem B.7. The graph $\hat{G}$ that optimizes the objective in Equation 2 models the same parent sets as the true causal graph for each variable under the following conditions:
• The conditions in Theorem B.5 hold.
• For any variable $X_i$ with its true parent set $pa(X_i)$, there does not exist a smaller parent set $\tilde{pa} \subset X \setminus \{X_i, \text{descendants}(X_i)\}$ which approximates the log-likelihood of $X_i$ up to $\lambda_{\text{sparse}} \cdot (|pa(X_i)| - |\tilde{pa}|)$ on average.
• The regularization parameter λ sparse is greater than zero.
Proof. If the orientations of all edges in the global optimum are according to the ground truth graph, following Theorems B.5 and B.6, the parent set for a variable $X_i$ is limited to those variables which are not descendants of $X_i$. From the ground truth graph, we know that, conditioned on the true parent set $pa(X_i)$, $X_i$ is independent of all other non-descendant variables. Thus, the log-likelihood estimate of $X_i$, i.e., the left part of the objective in Equation 2, is optimized by $p(X_i|pa(X_i))$. To show that this is also the global optimum when combined with the regularizer, we need to consider those parent sets of $X_i$ which obtain a lower penalty, i.e., smaller parent sets. The difference between two parent sets $pa(X_i)$ and $\tilde{pa}$ in terms of the regularizer corresponds to $\lambda_{\text{sparse}} \cdot (|pa(X_i)| - |\tilde{pa}|)$.
Figure 9: Visualization of the different graph structures we need to consider in the guarantee discussion of latent confounder detection. Latent variables $X_l$ are shown in white, all other variables are observed. (a) The two children of $X_l$, $X_1$ and $X_2$, are independent of each other. (b) $X_1$ is an ancestor of $X_3$, and the two variables have a shared latent confounder $X_l$.
Thus, if there exists no parent set for which this difference is greater than the penalty for the worse log-likelihood estimate, the true parent set pa(X i ) constitutes the global optimum.
B.3 CONVERGENCE CONDITIONS FOR LATENT CONFOUNDER DETECTION
In Section 3.5, we have discussed that ENCO can be extended to graphs with latent confounders. For this, we have to record the gradients of γ_ij for the interventional data on X i and all other interventional data separately. We define γ_ij = γ_ij^(I) + γ_ij^(O), where γ_ij^(I) is only updated with gradients from Equation 3 under interventions on X i , and γ_ij^(O) on all others. The score to detect latent confounders is:

$$\text{lc}(X_i, X_j) = \sigma\big(\gamma_{ij}^{(O)}\big) \cdot \sigma\big(\gamma_{ji}^{(O)}\big) \cdot \big(1 - \sigma\big(\gamma_{ij}^{(I)}\big)\big) \cdot \big(1 - \sigma\big(\gamma_{ji}^{(I)}\big)\big) \tag{40}$$
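As a concrete illustration of how this score can be computed from the two sets of accumulated parameters, consider the following minimal NumPy sketch. This is our own illustration, not the released ENCO code, and the array names are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def latent_confounder_scores(gamma_int: np.ndarray, gamma_obs: np.ndarray) -> np.ndarray:
    """Equation 40 for all variable pairs at once.

    gamma_int[i, j] accumulates the updates of gamma_ij from interventions
    on X_i; gamma_obs[i, j] those from all other interventional regimes.
    The score is high only if both directions look like edges on
    non-adjacent interventions (sigma(gamma_obs) close to one) while both
    vanish under adjacent interventions (sigma(gamma_int) close to zero).
    """
    s_obs = sigmoid(gamma_obs)
    s_int = sigmoid(gamma_int)
    return s_obs * s_obs.T * (1.0 - s_int) * (1.0 - s_int.T)

# Pairs are flagged as confounded if the score exceeds a threshold tau:
# candidates = np.argwhere(latent_confounder_scores(g_int, g_obs) > tau)
```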
In this section, we show under which conditions the score lc(X i , X j ) converges to one if X i and X j share a latent confounder. We restrict our discussion to latent confounders between two variables that do not have any direct edge with each other, and assume that the confounder is not a child of any other observed variable. We assume that the causal graph over the observed variables fulfills all conditions of Theorems B.1 to B.4 in Section B.2, meaning that without the latent confounders, the graph could have been recovered without errors. Under those conditions, we can also show that the graph among the observed variables is correctly recovered in the presence of latent confounders. This is because the latent confounders only affect Theorem B.4: if X i and X j share a latent confounder, they are not conditionally independent given their observed parents. Thus, we can rely on the fact that all edges in the true causal graph will be found according to Theorems B.1 to B.4, and the edges with latent confounders do not fulfill Theorem B.4.
For all pairs of variables that do not share a latent confounder, lc(X i , X j ) converges to zero. The edges that are removed in Theorem B.4 converge to σ(γ_ij^(O)) = 0, which sets lc(X i , X j ) to zero. For edges that have been recovered, we state in Equation 24 that the gradient for interventional data must be negative for interventions on the parent. Hence, σ(γ_ji^(I)) converges to one, which brings lc(X i , X j ) to zero again.
For variables that share a latent confounder, we distinguish between two cases, visualized in Figure 9. In the first case, we assume that X i and X j are independent in the true causal graph excluding the latent confounder. This means that an intervention on X i does not cause any change in X j , and vice versa. The second case describes the situation where X i is an ancestor of X j . The case of X i being a parent of X j has been excluded by our earlier assumptions since, in that case, we cannot separate the causal effect of X i on X j into its direct causal relation and the effect through the latent confounder.
In the case that the two children of the latent confounder do not form an ancestor-descendant pair, we can provide a guarantee under the following conditions. Theorem B.8. Consider a pair of variables X i , X j that share a latent confounder X l . Assume that X i and X j are conditionally independent given the latent confounder and their observed parents. Further, all other edges in the causal graph have been recovered under the conditions of Theorems B.1 to B.4. The confounder score lc(X i , X j ) converges to one if the following two conditions hold:
$$\mathbb{E}_{\hat{I} \sim p_{I_{-X_i}}(I)}\left[\mathbb{E}_{p_{\hat{I}}(X)}\left[\log p(X_j \mid pa(X_j), X_i) - \log p(X_j \mid pa(X_j))\right]\right] > \lambda_{\text{sparse}} \tag{41}$$
$$\mathbb{E}_{\hat{I} \sim p_{I_{-X_j}}(I)}\left[\mathbb{E}_{p_{\hat{I}}(X)}\left[\log p(X_i \mid pa(X_i), X_j) - \log p(X_i \mid pa(X_i))\right]\right] > \lambda_{\text{sparse}} \tag{42}$$
Proof. We need to show that under the two conditions above, σ(γ_ij^(O)) and σ(γ_ji^(O)) are guaranteed to converge to one, while σ(γ_ij^(I)) and σ(γ_ji^(I)) converge to zero. The distribution p_{I_{-X_k}}(I) represents the distribution over interventions excluding the ones performed on the variable X k . The two conditions resemble Equation 28, with the difference that the intervention on the potential parent variable is excluded, and the parent set is the true parent set of the correct causal graph. This is because all other edges have been correctly recovered, and the two conditions concern σ(γ_ij^(O)) and σ(γ_ji^(O)): if they hold, both of these parameters converge to one. For the interventional parts, consider σ(γ_ij^(I)). Under the intervention on X i , X i and X j become independent, since we assume perfect interventions. In this case, the log-likelihood estimate of X j cannot be improved by conditioning on X i . Hence, the difference L_{X i → X j}(X j ) − L_{X i ↛ X j}(X j ) is greater than or equal to zero. When further considering the sparsity regularizer λ sparse , the gradient of γ_ij under interventions on X i can only be positive, i.e. decreasing γ_ij^(I), such that σ(γ_ij^(I)) converges to zero; the same argument applies to σ(γ_ji^(I)) under interventions on X j . Together, this lets lc(X i , X j ) converge to one. If, on the other hand, the two conditions are not fulfilled, σ(γ_ij^(O)) and σ(γ_ji^(O)) might converge to zero. This results in the score lc(X i , X j ) being zero, but also in σ(γ_ij) converging to zero. Hence, we do not get any false positive edge predictions, as we have seen in the experiments of Section 4.5.
For the second case, where X i is an ancestor of X j , we cannot give such a guarantee because of Theorem B.2, which states that σ(θ ij ) converges to one for ancestor-descendant pairs. However, σ(θ ji ) is a factor in the gradients of γ ji . This means that if σ(θ ji ) converges to zero according to Theorem B.2, we cannot guarantee that γ ji converges to the desired value, since its gradient becomes zero. Nevertheless, 59.2% of the latent confounders in our experiments of Section 4.5 were on ancestor-descendant pairs. ENCO detects a majority of those confounders, showing that it still works on such confounders despite the lack of guarantees. Further, we show in Section C.3 that the confounder scores lc(X i , X j ) indeed converge to one for detected confounders, and to zero for all other edges.
B.4 CONVERGENCE FOR PARTIAL INTERVENTION SETS
In Section 4.4, we have experimentally shown that ENCO works on partial intervention sets as well. Here, we discuss convergence guarantees for the situation when interventions are not provided on all variables.
We start with discussing the case where, for a graph with N variables, we are given samples of interventions on N − 1 variables. In this case, we can rely on the previous convergence guarantees discussed in Appendix B.2 with minor modifications. Specifically, for the variable X i on which we do not have interventions, the orientation parameters θ i· are only updated by interventions on other variables. Hence, for this variable X i , the following conditions need to hold instead of Theorem B.1:
• For all possible sets of parents of X i excluding X j , adding X j does not improve the log-likelihood estimate of X i under the intervention on X j , or leaves it unchanged:
$$\forall\, pa(X_i) \subseteq X_{-i,j}: \quad \mathbb{E}_{I_{X_j}, X}\left[\log p(X_i \mid pa(X_i), X_j) - \log p(X_i \mid pa(X_i))\right] \leq 0 \tag{43}$$
For at least one parent set pa(X i ), which has a probability greater than zero to be sampled, this inequality is strictly smaller than zero.
This condition ensures that θ ij converges to the correct value, i.e. representing that X i is the parent or ancestor of X j . Thus, in conclusion, we can provide convergence guarantees if interventions on N − 1 variables are provided.
Next, we consider the case of having N − 2 interventions. With the conditions above, we can ensure that all orientation parameters are learned, except for θ ij where X i and X j are the two variables for which we have not obtained interventions. In this case, we cannot give strict convergence guarantees for the edge X i ↔ X j , especially when X i and X j have a direct causal relationship. If X j is the child of X i , we might obtain the edge X j → X i , which violates the assumptions in the second and third step of the proof. Therefore, we cannot give guarantees of correctness for incoming/outgoing edges of X i and X j , and might make incorrect predictions of edges between these two variables.
When taking the next step to having only M < N − 2 interventions provided, ENCO can make more incorrect predictions. For the variables for which interventions are provided, we can use the same convergence guarantees (Theorems B.1-B.4), since all conditions are independent across variables. For variables without interventions, we cannot rely on those. While we have observed that learning the missing θ's from other interventions gives reasonable results, we see a degradation of performance the greater the distance between a node and the closest intervened variable. As an example, suppose we have a chain with 5 variables, i.e. X 1 → X 2 → X 3 → X 4 → X 5 , and we are provided with an intervention on X 1 only. This allows us to learn the orientation between X 1 and X 2 . The orientation between X 2 and X 3 is often learned correctly as well, because adding the edge X 2 → X 3 instead of X 3 → X 2 gives a greater decrease in overall log-likelihood, since part of the information from X 3 for predicting X 2 is already included in X 1 . However, the further we go away from X 1 , the less information is shared between the child and the intervened variable. Moreover, the likelihood of a mistake occurring due to limited data increases further. This is why the orientation of the edge X 4 → X 5 is not always learned correctly, which can also cause false positive edges.
Many scenarios of predicting false positive edges can, in theory, be solved by providing an undirected skeleton of the graph, for example, obtained from observational data. Still, one of the cornerstones of ENCO is that it does not assume faithfulness. Without faithfulness or any other assumption on the functional form of the causal mechanisms, the correct undirected graph cannot be recovered by any method. One of the future directions will be to include faithfulness in ENCO to solve the scenarios mentioned above, although this would imply that we might not be able to recover edges of deterministic variables anymore.
B.5 EXAMPLE FOR NON-FAITHFUL GRAPHS
Below we give an example of a distribution that is not faithful with respect to its graph structure but can still be found by ENCO. Suppose we have a chain of three variables, X 1 → X 2 → X 3 . For simplicity, we assume here that all variables are binary, but the argument holds similarly for any categorical data. The distribution p(X 1 ) is an arbitrary function with 0 < p(X 1 = 1) < 1, and the other two conditionals are deterministic functions: p(X 2 |X 1 ) = δ[X 2 = X 1 ], p(X 3 |X 2 ) = δ[X 3 = X 2 ]. This joint distribution is not faithful to the graph, since X 3 is independent of X 2 given X 1 , which is not implied by the graph. We will now focus our discussion on the edge X 1 → X 3 to show that, despite the independence, ENCO can identify the true parent set of X 3 . The first step of ENCO is to fit the neural networks to the observational distributions, which include p(X 3 ), p(X 3 |X 1 ), p(X 3 |X 2 ), and p(X 3 |X 1 , X 2 ). Now, the update of the edge parameter γ 13 under interventions on X 2 can be summarized as follows, where we marginalize over graph samples:
$$\frac{\partial \mathcal{L}}{\partial \gamma_{13}} = \sigma'(\gamma_{13}) \cdot \sigma(\theta_{13}) \cdot \mathbb{E}_{X}\Big[\underbrace{p(X_2 \to X_3) \cdot \big[-\log p(X_3 \mid X_1, X_2) + \log p(X_3 \mid X_2)\big]}_{\text{gradients for graph samples with edge } X_2 \to X_3} + \underbrace{p(X_2 \not\to X_3) \cdot \big[-\log p(X_3 \mid X_1) + \log p(X_3)\big]}_{\text{gradients for graph samples without edge } X_2 \to X_3}\Big]$$
where the samples X are sampled from the graph under interventions on X 2 . Intuitively, the gradient points towards increasing the probability of the edge X 1 → X 3 if adding X 1 to the conditionals improves the log-likelihood estimate of X 3 in both graph samples, i.e. where we have the edge X 2 → X 3 or not. Thus, to show that the gradient points towards decreasing the probability of the edge X 1 → X 3 , we need to show that both of the above log-likelihood differences are greater than zero (note that positive gradients lead to a decrease since we minimize the objective).
In the first difference, E X [− log p(X 3 |X 1 , X 2 ) + log p(X 3 |X 2 )], we know that p(X 3 |X 2 ) is the optimal estimator, since the ground-truth data is generated via this conditional. Thus, independent of what function the network has learned for p(X 3 |X 1 , X 2 ), the difference above can only be greater than or equal to zero. Note that for p(X 3 |X 1 , X 2 ), there are value combinations that have never been observed, i.e. X 1 ≠ X 2 , such that in practice we cannot guarantee a specific distribution to be learned for such conditionals. Still, this does not constitute a problem for finding the graph, as shown above.
The second difference, E X [− log p(X 3 |X 1 ) + log p(X 3 )], can be reasoned about similarly. Since X 1 is independent of X 3 under interventions on X 2 , p(X 3 |X 1 ) cannot lead to a better estimator than p(X 3 ), even when trained on the interventional data. However, since p(X 3 |X 1 ) was trained on observational data, where there exists a strong correlation between X 1 and X 3 , this estimator must be strictly worse than p(X 3 ). Hence, this difference will be strictly positive.
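Both differences can be checked empirically. The following self-contained sketch (our own illustration, not part of the ENCO code base) simulates the deterministic chain, fits tabular conditionals on observational data, and evaluates the two expectations above on data where X 2 is intervened on; both come out non-negative, so the gradient indeed pushes the edge X 1 → X 3 towards removal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Observational data from the chain X1 -> X2 -> X3 with deterministic copies.
x1 = rng.binomial(1, 0.7, size=n)
x2 = x1.copy()          # p(X2|X1) = delta[X2 = X1]
x3 = x2.copy()          # p(X3|X2) = delta[X3 = X2]

def fit_cond(target, *parents):
    """Tabular maximum-likelihood conditional with Laplace smoothing."""
    counts = np.ones((2,) * (len(parents) + 1))  # smoothing avoids log(0)
    for idx in zip(*parents, target):
        counts[idx] += 1
    return counts / counts.sum(axis=-1, keepdims=True)

p3_given_2 = fit_cond(x3, x2)
p3_given_12 = fit_cond(x3, x1, x2)
p3_given_1 = fit_cond(x3, x1)
p3_marginal = np.bincount(x3, minlength=2) / n

# Interventional data: do(X2) ~ uniform, breaking the X1 -> X2 mechanism.
x1_i = rng.binomial(1, 0.7, size=n)
x2_i = rng.binomial(1, 0.5, size=n)
x3_i = x2_i.copy()

ll = lambda p: np.log(p).mean()
# First difference: E[log p(X3|X2) - log p(X3|X1,X2)] >= 0
d1 = ll(p3_given_2[x2_i, x3_i]) - ll(p3_given_12[x1_i, x2_i, x3_i])
# Second difference: E[log p(X3) - log p(X3|X1)] > 0
d2 = ll(p3_marginal[x3_i]) - ll(p3_given_1[x1_i, x3_i])
print(d1, d2)  # both non-negative => gradient decreases the edge X1 -> X3
```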
To summarize, under interventions on X 2 , the edge X 1 → X 3 will be trained towards decreasing its probability. Further, under interventions on X 1 , the effect of X 1 on X 3 can be fully expressed by conditioning on X 2 , making this gradient go to zero as the edge probability of X 2 → X 3 goes towards one. For the edge X 2 → X 3 itself, the same reasoning can be followed, such that independent of whether the edge X 1 → X 3 is included in the graph or not, conditioning X 3 on X 2 can only lead to an improvement in its estimator. Therefore, ENCO is able to find the correct graph despite it not being faithful.

Figure 10: Visualization of the common graph structures for graphs with 8 nodes. The graphs used in the experiments had 25 nodes. Note that the graph random is more densely connected for larger graphs, as the number of possible edges scales quadratically with the number of nodes.
C EXPERIMENTAL DETAILS
The following section gives an overview of the hyperparameters used across experiments. Additionally, we discuss details of the graph generation and the learning process of different algorithms.
C.1 COMMON GRAPH STRUCTURE EXPERIMENTS
C.1.1 DATASETS
Graph generation The six common graph structures we have used for the experiments in Table 1 are visualized in Figure 10. In the graph bidiag, a variable X i has X i−2 and X i−1 as parents, and consequently X i+1 and X i+2 as children. Hence, this graph represents a chain with 2-hop connections. The graph chain is a bidiag with a single hop, meaning that X i−1 is a parent of X i , but X i−2 is not. In the graph collider, the variable X N has all other variables, X −N , as parents. In the graph full, the parents of a variable X i are all previous variables: pa(X i ) = {X 1 , X 2 , ..., X i−1 }. Hence, it is the densest connected graph possible. The graph jungle represents a binary tree where a node is also connected to its parent's parent. Finally, the graph random follows a randomly sampled adjacency matrix: for every possible pair of variables X i , X j , we sample an edge with a likelihood of 0.3. To determine the orientation of the edges, we assume the causal ordering X i ≺ X i+1 . In other words, if we have an edge between X i and X j , it is oriented as X i → X j if i < j, and X j → X i otherwise.
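The sampling procedure for the random structure can be summarized in a few lines; the following is a minimal sketch under the conventions above (our own illustration, not the exact experiment code):

```python
import numpy as np

def sample_random_dag(num_vars: int, edge_prob: float = 0.3, seed: int = 0) -> np.ndarray:
    """Sample an adjacency matrix for the 'random' graph structure.

    An edge between every pair (X_i, X_j) is drawn with probability
    edge_prob. Using the causal ordering X_1 < X_2 < ... < X_N, each
    sampled edge is oriented from the lower- to the higher-indexed
    variable, which guarantees acyclicity.
    """
    rng = np.random.default_rng(seed)
    adj = rng.random((num_vars, num_vars)) < edge_prob
    return np.triu(adj, k=1)  # keep only entries above the diagonal: i < j

adjacency = sample_random_dag(num_vars=25)
print("edges:", int(adjacency.sum()))
```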
Conditional distributions
In the generated graphs, we use categorical variables that each have 10 categories. To model the ground-truth conditional distributions, we use randomly initialized neural networks. Specifically, we use MLPs of two layers where the categorical inputs are represented by embedding vectors. For a variable X i with M parents, we stack the M embedding vectors to form the input to the following MLP. Each embedding has a dimensionality of 4, hence the input size to the first linear layer is 4M. The hidden size of the layers is 48, and we use a LeakyReLU activation function between the two linear layers. Finally, a softmax activation function is used on the output, as in Brouillard et al. (2020). For the initialization of the networks, we follow Ke et al. (2019) and use orthogonal initialization with a gain of 2.5. The biases are thereby initialized uniformly between −0.5 and 0.5. This way, we obtain non-trivial, random distributions. Experiments with different synthetic distributions are provided in Appendix D.7.
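A minimal PyTorch sketch of such a ground-truth conditional is given below; whether the embedding table is shared across parents is our own assumption, as are the module and argument names:

```python
import torch
import torch.nn as nn

class GroundTruthConditional(nn.Module):
    """Randomly initialized two-layer MLP defining p(X_i | pa(X_i)) over
    categorical variables, following the description above (a sketch; the
    exact module layout in the released code may differ)."""

    def __init__(self, num_parents: int, num_categs: int = 10,
                 embed_dim: int = 4, hidden_dim: int = 48):
        super().__init__()
        # Each categorical parent value is mapped to a 4-dim embedding; the
        # M embeddings are stacked, giving an input of size 4 * M.
        self.embed = nn.Embedding(num_categs, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(num_parents * embed_dim, hidden_dim),
            nn.LeakyReLU(),
            nn.Linear(hidden_dim, num_categs),
        )
        # Orthogonal initialization (gain 2.5) and uniform biases in
        # [-0.5, 0.5] yield non-trivial random distributions.
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.orthogonal_(m.weight, gain=2.5)
                nn.init.uniform_(m.bias, -0.5, 0.5)

    def forward(self, parent_values: torch.LongTensor) -> torch.Tensor:
        # parent_values: (batch, M) integer categories of the parents.
        emb = self.embed(parent_values).flatten(start_dim=1)
        return torch.softmax(self.net(emb), dim=-1)

# Sampling X_i given parent values:
# probs = cond(parents); x_i = torch.multinomial(probs, num_samples=1)
```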
C.1.2 METHODS AND HYPERPARAMETERS
Baseline implementation We used existing implementations to run the baselines GIES (Hauser & Bühlmann, 2012), IGSP (Wang et al., 2017), GES (Chickering, 2002) and DCDI (Brouillard et al., 2020). For GIES, we used the implementation from the R package pcalg. To run categorical data, we used the GaussL0penIntScore score function. For IGSP, we used the implementation of the python package causaldag. As IGSP uses conditional independence tests in its score function, we cast the categorical data into continuous space first and experiment with different kernel-based independence tests. Due to its long runtime for large dataset sizes, we limit the interventional and observational dataset size to 25k. Larger dataset sizes did not show any significant improvements. For details on the observational GES experiments, see Section D.6. Finally, we have used the original python code for DCDI published by the authors. We have added the same neural networks used by ENCO into the framework to perform structure learning on categorical data. Bugs in the original code were corrected to the best of our knowledge. Since SDI (Ke et al., 2019) has a similar learning structure as ENCO, we have implemented it in the same code base as ENCO. This allows us to compare the learning algorithms under the exact same prerequisites. Further, all methods with neural networks used the deep learning framework PyTorch (Paszke et al., 2019), which ensures a fair run time comparison across methods.

Hyperparameters To ensure a fair comparison, we performed a hyperparameter search for all methods. The hyperparameter search was performed on a hold-out set of graphs containing two of each graph structure.
GIES We performed a hyperparameter search over the regularizer values λ ∈ {0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200}.
The value obtaining the best results in terms of structural hamming distance (SHD) was λ = 20. The average run time of GIES was 2mins per graph.

Figure 11: Learning curves of ENCO in terms of recall and precision on edge predictions for synthetic graph structures. The orientations of the ground-truth edges are not plotted as they have usually been learned correctly after the first epoch, except for the graph full. Overall, we see that the edge recall starts very high for all graphs, and the precision catches up over the epochs. This is in line with the steps in the convergence proof in Section B.
IGSP We experimented with two different conditional independence tests, kci and hsic, and different cutoff values α ∈ {1e-5, 1e-4, 1e-3, 1e-2}. The best hyperparameter setting was kci with α = 1e-3. The average run time of IGSP was 13mins.

SDI We focused the hyperparameter search for SDI on its two regularizers, λ sparse and λ DAG , as well as its learning rate for γ. The other hyperparameters with respect to the neural networks were kept the same as for ENCO to ensure a fair comparison. We show all details of the hyperparameter search in Table 4. The best combination of regularizers found was λ sparse = 0.02 and λ DAG = 0.5. Lower values of λ sparse lead to more false positives, especially in sparse graphs, while a lower value of λ DAG caused many two-variable loops. Compared to the hyperparameters reported by Ke et al. (2019) (λ DAG = 0.5, λ sparse = 0.1), we found a lower sparsity regularizer to work better, likely because we test SDI on larger graphs. In contrast to ENCO, SDI needed a lower learning rate for γ due to its higher-variance gradient estimators. To compensate for it, we ran it for 50 instead of 30 epochs. In general, SDI achieved lower scores than in the original experiments by Ke et al. (2019), which is due to the larger graph size and smaller dataset size. The average run time of SDI was 4mins per graph.
DCDI The most crucial hyperparameter in DCDI was the initialization of the constraint factor µ 0 . We experimented with a range of µ 0 ∈ {1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5} and found µ 0 = 1e-9 to work best. This is close to the reported value of 1e-8 by Brouillard et al. (2020).
Higher values lead to empty graphs, while lower values slowed down the optimization. Additionally, we searched over the regularizer hyperparameter λ ∈ {1e-3, 1e-2, 1e-1, 1.0, 10.0}, where we found λ = 0.1, which is the same value used by Brouillard et al. (2020). We stopped the optimization once the Lagrangian constraint was below 1e-7, following Brouillard et al. (2020), or after 50k iterations, which was sufficient to converge on all graphs. We have experimented with using weight decay to prevent overfitting, but did not find it to provide any gain in terms of structure learning performance. The average run time of DCDI was 16 minutes.
ENCO We outline the hyperparameters of ENCO in Table 4. As discussed in Section 3.4, the most crucial hyperparameter in ENCO is the sparsity regularizer λ sparse . The larger it is, the faster ENCO converges, but at the same time it might miss edges in the ground-truth graph. Lower values allow the detection of more edges at the price of longer training times. We have found that for the given graph structures, only the graph full was sensitive to the value of λ sparse , where λ sparse = 0.002 and λ sparse = 0.004 performed almost equally well. In comparison to SDI, ENCO can make use of larger learning rates due to its lower-variance gradient estimators. Especially for θ, we have noticed that high learning rates are beneficial. This is in line with our theoretical guarantees, which require the orientation parameters to converge first. We use the Adam optimizer for γ and θ with the β-hyperparameters (0.9, 0.9) and (0.9, 0.999), respectively. A lower β 2 hyperparameter for γ allows the second momentum to adapt faster to a change of gradient scale, which happens for initial false positive predictions. The average run time of ENCO was 2mins per graph. The algorithm could be sped up even more by reducing the number of graph samples K and model fitting iterations. However, for graphs larger than 100 nodes, K = 100 and longer model fitting times showed to be beneficial. The learning curves in terms of recall and precision are shown in Figure 11.

Table 5: Results of the experiments in Table 1 with the metric structural intervention distance (SID) (lower is better), averaged over 25 graphs each. The conclusion is the same as for SHD, namely that ENCO outperforms all baselines, while the acyclic heuristic has an even greater impact.
Enforcing acyclicity ENCO is guaranteed to converge to acyclic graphs in the data limit, an assumption that arguably does not always hold. In the presence of cycles, which can occur especially when little data is available, a simple heuristic is to keep the graph that maximizes the orientation probabilities. Specifically, we aim to find the order Ô ∈ S N , where S N represents the set of all permutations of 1 to the number of variables N, which maximizes the following objective:
$$\hat{O} = \arg\max_{O \in S_N} \sum_{i=1}^{N} \sum_{j=i+1}^{N} \sigma(\theta_{O_i, O_j}) \tag{45}$$
For small cycles it is easy to do this exhaustively by checking all permutations. For larger cycles, we apply a simple greedy search that works just as well. Once the order Ô has been found, we remove all edges X i → X j where i comes after j in Ô. This is guaranteed to result in an acyclic graph.
The intuition behind this heuristic is the following. Cycles are often caused by a single orientation pair being incorrect due to noise in the interventional data. For example, in a chain X 1 → X 2 → X 3 → X 4 → X 5 , it can happen that the orientation parameter θ 14 incorrectly orients the edge between X 1 and X 4 as X 4 → X 1 if the interventional data on X 4 does not show the independence of X 1 and X 4 . However, most other orientation parameters, e.g. θ 12 , θ 13 , θ 24 , θ 34 , etc., have likely been learned correctly. Thus, it is easy to spot that θ 14 is an outlier, and this is what the simple heuristic above implements.
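A minimal sketch of this heuristic is given below; it is our own illustration, and the greedy insertion strategy is one plausible instantiation of the "simple greedy search" mentioned above:

```python
import numpy as np
from itertools import permutations

def acyclic_order(theta: np.ndarray, exhaustive_limit: int = 8) -> list:
    """Find an ordering maximizing the objective of Equation 45.

    theta[i, j] holds the orientation parameter of the pair (X_i, X_j); the
    score of an order sums sigma(theta) over all pairs it places as i before j.
    """
    sigma = 1.0 / (1.0 + np.exp(-theta))
    n = theta.shape[0]

    def score(order):
        return sum(sigma[a, b] for idx, a in enumerate(order) for b in order[idx + 1:])

    if n <= exhaustive_limit:
        return list(max(permutations(range(n)), key=score))

    # Greedy: insert each variable at the position that maximizes the score.
    order = []
    for v in range(n):
        candidates = [order[:pos] + [v] + order[pos:] for pos in range(len(order) + 1)]
        order = max(candidates, key=score)
    return order

# Remove every edge X_i -> X_j where i comes after j in the found order:
# pos = {v: k for k, v in enumerate(order)}
# adj[i, j] = adj[i, j] and (pos[i] < pos[j])
```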
Results Besides the structural hamming distance, a common alternative metric is structural intervention distance (SID) (Peters & Bühlmann, 2015). In contrast to SHD, SID quantifies the closeness between two DAGs in terms of their corresponding causal inference statements. Hence, it is suited for comparing causal discovery methods. The results of the experiments on the synthetic graphs in terms of SID are shown in Table 5, and show a similar trend as before, namely that ENCO is outperforming all baselines.
C.2 SCALABILITY EXPERIMENTS
Graph generation For generating the graphs, we use the same strategy as for the graph random in the previous experiments. The probability of sampling an edge is set to 8/N, meaning that on average, every node has 8 in- and outgoing edges. We limit the number of parents to 10 per node since, otherwise, we cannot guarantee that the randomly sampled distributions take all parents faithfully into account. This is also in line with the real-world inspired graphs of the BnLearn repository, which have a maximum of 6 parents. To give an intuition on the complexity of such graphs, we show an example graph of 100 nodes in Figure 12a. In accordance with the number of variables, we have increased the dataset size to 4096 samples per intervention and 100k observational samples. We did not apply the order heuristic on the predictions, since ENCO was able to recover acyclic graphs by itself with the given data.

Figure 12: (a) Example graph of 100 variables (best viewed electronically). Every node has on average 8 edges and a maximum of 10 parents. (b) Recall and precision of the edge predictions for the training on graphs with 1,000 nodes. The small standard deviation across graphs shows that ENCO can reliably recover large graphs.
γ-freezing stage For ENCO, one challenge of large graphs is that the orientation parameters θ are updated very sparsely. The gradients for θ ij require data from an intervention on one of its adjacent nodes X i or X j , which we evaluate less frequently with increasing N as we iterate over interventions on all N nodes. Hence, we require more iterations/epochs just for training the orientation parameters while wasting a lot of computational resources. To accelerate training on large graphs, we freeze γ in every second graph fitting stage. Updating only θ allows us to use the same graph sample C −ij for both L X i →X j (X j ) and L X i ↛X j (X j ), since the log-likelihood estimate of X j only needs to be evaluated for θ ij . With this gradient estimator, we find that as few as 4 graph samples are sufficient to obtain a reasonable gradient variance. Hence, it is possible to perform more gradient updates of θ in the same computation time. Note that this estimator is not efficient when training γ, as we require different C −ij samples for every i. In experiments, we alternate the standard graph fitting step with this pure θ-training stage. We want to emphasize that this approach can also be used for small graphs, obtaining similar results as in Table 1; however, it is less needed there because the orientation parameters are more frequently updated in the first place. Such an approach is not possible for the baselines SDI and DCDI, because they do not model the orientation as a separate variable.
Hyperparameters To experiment with large graphs, we mostly keep the same hyperparameters as reported in Section C.1. However, all methods benefited from a small hyperparameter search. For SDI and ENCO, we increase the number of distribution fitting iterations, as the neural networks need to model a larger set of possible parents. We also increase the learning rate of γ to 2e-2. However, while SDI reaches better performance with the increased learning rate at epoch 30, it performed worse when training for longer. This indicates that high learning rates can lead to local minima in SDI. Additionally, we noticed that a slightly higher sparsity regularizer improved convergence speed for ENCO, while SDI did not improve with a higher sparsity regularizer. Table 6 shows a hyperparameter overview of ENCO on large-scale graphs, and Figure 12b the learning curve on graphs of 1,000 nodes.
For DCDI, we noticed that the hyperparameters around the Lagrangian constraint needed to be carefully fine-tuned. The Lagrangian constraint can reach values larger than what is representable in double precision, and starts at 8e216 for graphs of 1,000 nodes. Following Brouillard et al. (2020), we normalize the constraint by its value after initialization, which gives us a more reasonable value to start learning. We performed another hyperparameter search on µ 0 , but noticed that it did not have a major impact. Within the run time of ENCO, DCDI just starts to increase the weighting factor of the augmented Lagrangian, while the DAG constraint is lower than 1e-10 for the smallest graph. The best value found was µ 0 = 1e-7.

Results For clarity, we report the results of all methods below, as the exact values might not be easily readable in Figure 3 due to large differences in performance.

Table 7: Results for graphs between 100 and 1000 nodes. We report the average and standard deviation of the structural hamming distance (SHD) over 10 randomly sampled graphs. † The maximum runtime of ENCO was measured on an NVIDIA RTX3090. Baselines were executed on the same hardware.
C.3 LATENT CONFOUNDER EXPERIMENTS
Graph generation The graphs used for testing the latent confounding strategy are based on the random graphs from Section 4.2. We use graphs of 25 nodes, and add 5 extra nodes that represent latent confounders. Each latent confounder X l is connected to two randomly sampled nodes X i , X j that do not have a direct connection. However, X i and X j can be an ancestor-descendant pair and have any other (shared) parent set (see Figure 13a). In the adjacency matrix, we add the edges X l → X i and X l → X j , and perform the data generation as for the previous graphs. After data generation, we remove the 5 latent confounders from both observational and interventional data. The task is to learn the graph structure of the remaining 25 observable variables, as well as to detect whether there exists a latent confounder between any pair of variables. We use the same setup in terms of dataset size as before for the observational samples, namely 5k, but increased the samples per intervention to 512. Little interventional data showed to cause a high variance in the interventional gradients γ_ij^(I), which is why more false positives occurred. The results for the limited data with 200 interventions, and the results in the data limit, i.e. for 10k interventional samples and 100k observational samples, are shown in Table 8.

Figure 13: Left: Example of a latent confounder scenario, where X l is not observed and introduces a dependency between X i and X j on observational data. The dots on the left and right represent eventual (observed) parents of X i and X j . Right: Plotting the average score lc(X i , X j ) for confounders X i ← X l → X j in the true causal graph (orange) and the maximum score of any other node pair (blue). The plot shows that the detection of latent confounders in ENCO is not sensitive to the specific value of τ.
Baselines None of our previous continuous optimization baselines, i.e., SDI and DCDI, are able to deal with latent confounders. To the best of our knowledge, other methods that are able to handle latent confounders commonly take assumptions that do not hold in our experimental setup. Further, most methods are able to deal with latent confounders in the sense that they obtain the correct results despite latent confounding being present. However, in our case, we explicitly predict latent confounders which is a different task by itself.
Hyperparameters To show that we can perform latent confounder detection without specific hyperparameters, we use the same hyperparameters as for the experiments on the previous graph structures (see Appendix C.1). To record γ_ij^(I) and γ_ij^(O) separately, we use separate first- and second-order momentum parameters in the Adam optimizer. We plot in Figure 13b the latent confounder scores lc(X i , X j ) calculated based on Equation 8. We see that the score converges close to 1 for pairs with a latent confounder, and for all others, it converges to 0. This verifies our motivation of the score function discussed in Section 3.5, and also shows that the method is not sensitive to the threshold hyperparameter τ. We choose τ = 0.4, which was slightly higher than the highest value recorded for any other pair at early stages of training.
C.4 INTERVENTIONS ON FEWER VARIABLES
Datasets We perform the experiments on interventions on fewer variables on the same graphs and datasets as used for the initial synthetic graphs (see Section C.1). To simulate having interventions on fewer variables, we randomly sample a subset of variables for which we include the interventional data, and remove it for all others. The sampled variables are the same for both ENCO and DCDI, and differ across graphs. The dataset size is the same as before, namely 200 samples per intervention and 5k observational samples.
ENCO for partial intervention sets While the theoretical guarantees for convergence to an acyclic graph apply when interventions on all variables are possible, it is straightforward to extend the ENCO algorithm to support partial interventions as well. Normally, in the graph fitting stage, we sample one intervention at a time. We can, thus, simply restrict the sampling only to the interventions that are possible (or provided in the dataset). In this case, we update the orientation parameters θ ij of only those edges that connect to an intervened variable, either X i or X j , as before. All other orientation parameters would remain unchanged throughout the training, since their gradients rely on interventions missing from the dataset. Instead, we extend the gradient estimator in Equation 4 to not be exclusive to adjacent interventions, but include interventions on all variables. Specifically, for the orientation parameter θ ij without any interventions on X i or X j , we use the following gradient estimator:
$$\frac{\partial \mathcal{L}}{\partial \theta_{ij}} = \sigma'(\theta_{ij}) \left( \sigma(\gamma_{ij}) \cdot \mathbb{E}_{I_{X_k}, X, C_{-ij}}\left[\mathcal{L}_{X_i \to X_j}(X_j) - \mathcal{L}_{X_i \not\to X_j}(X_j)\right] - \sigma(\gamma_{ji}) \cdot \mathbb{E}_{I_{X_k}, X, C_{-ij}}\left[\mathcal{L}_{X_j \to X_i}(X_i) - \mathcal{L}_{X_j \not\to X_i}(X_i)\right] \right) \tag{46}$$
where we have an intervention on an arbitrary variable X k with k ≠ i and k ≠ j. This still represents an unbiased gradient estimator since, in the derivation of the estimator, we excluded interventions on other variables only to reduce noise.
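To make the structure of this estimator concrete, the following is a minimal PyTorch-style sketch of a single-pair update. This is our own illustration; the argument names and the exact form of the Monte-Carlo log-likelihood estimates are assumptions, not the released implementation:

```python
import torch

def theta_grad_nonadjacent(theta_ij: torch.Tensor, gamma_ij: torch.Tensor,
                           gamma_ji: torch.Tensor,
                           L_i_to_j: torch.Tensor, L_i_not_j: torch.Tensor,
                           L_j_to_i: torch.Tensor, L_j_not_i: torch.Tensor) -> torch.Tensor:
    """Equation 46 for a pair (i, j) without adjacent interventions.

    The L_* tensors hold per-sample estimates of the log-likelihood terms
    L_{X_i -> X_j}(X_j) etc., computed on a batch from an intervention on
    some X_k with k != i, j, averaged over graph samples C_{-ij}.
    """
    sig_prime = torch.sigmoid(theta_ij) * (1 - torch.sigmoid(theta_ij))  # sigma'(theta_ij)
    grad = sig_prime * (
        torch.sigmoid(gamma_ij) * (L_i_to_j - L_i_not_j).mean()
        - torch.sigmoid(gamma_ji) * (L_j_to_i - L_j_not_i).mean()
    )
    return grad
```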
ENCO has been designed under the assumption that interventional data is provided. When we have interventional data on only a very small subset of variables, we might not optimally use the information that is provided by the observational data. To overcome this issue, we can run a causal discovery method that works solely on observational data and returns an undirected graph. This skeleton can be used as a prior, and prevents false positive edges between conditionally independent variables.
Hyperparameters We reuse the hyperparameters of the experiments on the synthetic graphs, except that we use a slightly smaller sparsity regularizer, λ sparse = 0.002, and a weight decay of 4e-5. For the orientation parameters without an adjacent intervention, we use a learning rate of 0.1 · lr θ , which is 1e-3 for this experiment. For DCDI, we observed that a higher regularization term of λ = 1.0 obtained the best performance. All other hyperparameters are the same as in Section C.1.
Results For additional experimental results, see Section D.2.
C.5 REAL-WORLD INSPIRED EXPERIMENTS
Datasets We perform experiments on a collection of causal graphs from the Bayesian Network Repository (BnLearn) (Scutari, 2010). The repository contains graphs inspired by real-world applications that are used as benchmarks in the literature. We chose the graphs to reflect a variety of sizes and different challenges (rare events, deterministic variables, etc.). The chosen graphs are cancer (Korb & Nicholson, 2010), earthquake (Korb & Nicholson, 2010), asia (Lauritzen & Spiegelhalter, 1988), sachs (Sachs et al., 2005), child (Spiegelhalter & Cowell, 1992), alarm (Beinlich et al., 1989), diabetes (Andreassen et al., 1991), and pigs (Scutari, 2010). The graphs have been downloaded from the BnLearn website. For the small graphs, we have used a dataset size of 50k observational samples and 512 samples per intervention. This is a larger dataset size than for the synthetic graphs because many edges in the real-world graphs have very small causal effects that cannot be recovered from limited data, and the goal of the experiment was to show that the convergence conditions also hold on real-world graphs. Hence, we need more observational and interventional samples. The results with a smaller dataset size, i.e. 5k observational and 200 interventional samples as before, are shown in Table 9. For the large graphs, we follow the dataset size of the scalability experiments (see Section C.2).
Hyperparameters We reuse most of the hyperparameters of the previous experiments. For all graphs with less than 100 nodes, we use the hyperparameters of Appendix C.1, i.e. those of the synthetic graphs of 25 nodes. For all graphs larger than 100 nodes, we use the hyperparameters of Appendix C.2, i.e. those of the large-scale graphs. One exception is that we allow fine-tuning of the regularizer parameter for both sets. For ENCO, we used a slightly smaller regularizer, λ sparse = 0.002, for the small graphs, and a larger one, λ sparse = 0.02, for the large graphs. Due to the large number of deterministic variables, ENCO tends to predict more false positives in the beginning before removing them one by one. For SDI, we also found a smaller regularizer, λ sparse = 0.01, to work best for the small graphs. However, in line with the results of Ke et al. (2019), SDI was not able to detect all edges. Even lower regularizers performed considerably worse on the child dataset, while only minor improvements were made on the small graphs. Hence, we settled for λ sparse = 0.01. In terms of run time, both methods used 100 epochs for the small graphs and 50 for the large graphs.
Results The results including standard deviations can be found in Table 9. The low standard deviation for ENCO shows that the approach is stable across seeds, even for large graphs. SDI has a zero standard deviation for a few graphs. In those cases, SDI converged to the same graph across seeds, but not necessarily the correct graph. We have also applied DCDI (Brouillard et al., 2020) to the real-world datasets and report the results in Tables 9 and 10. DCDI performs relatively similarly to SDI, making a few more mistakes on the very small graphs (< 10 nodes) while being slightly better on sachs and child. Nonetheless, ENCO outperforms DCDI on all graphs. We do not report results of DCDI on the largest graphs, diabetes and pigs, because it ran out of memory for diabetes (larger number of maximum categories per variable) and did not converge within the same time limitations as SDI and ENCO (see Section 4.3 for a comparison on scalability).
D ADDITIONAL EXPERIMENTS
In this section, we show additional experiments performed as ablation studies of ENCO. First, we discuss further experiments on the effect of the sample size. We then discuss the effect of using our gradient estimators proposed in Section 3.4 compared to those of Bengio et al. (2020). Next, we show experiments on synthetic graphs with deterministic variables violating faithfulness, and experiments on continuous data with Normalizing Flows. Finally, we discuss experiments with different causal mechanism functions for generating synthetic, conditional categorical distributions besides neural networks.
D.1 EFFECT OF THE SAMPLE SIZE
The number of samples provided as observational and interventional data is crucial for causal structure learning methods since the more data we have, the better we can estimate the underlying causal mechanisms. To gain further insights in the effect of the sample size on ENCO and the compared baselines, we repeat the experiments of Section 4.2 with different sample sizes.
Large sample size First, we use very large sample sizes to find the upper-bound performance level that we can expect from each method. For this, we sample 100k observational samples per graph, and 10k samples per intervention. We observed that this is sufficient to model most conditional probabilities up to a negligible error. The results are shown in Table 11. We find that, in line with the theoretical guarantees, ENCO can reliably recover most graphs, only making 0.3 mistakes on average on the full graph. Of the baselines, only DCDI is able to recover the collider graph without errors, since its edges can be oriented independently. For all other graphs, DCDI converges to acyclic graphs, but incorrectly orients some edges and predicts false positive edges, while being 8 times slower than ENCO on the same hardware. All other baselines also show improved SHD scores compared to Table 1, but are not able to match ENCO's performance. This shows that, even in the data limit, ENCO achieves notably better results than concurrent methods.
Next, we consider situations where data is very limited. Thereby, we consider two data sample axes: observational and interventional data.
Limited interventional data sample sizes We repeat the experiments of Table 1 for ENCO while limiting the sample size per intervention to 20, 50, and 100 (200 before). The observational dataset size of 5000 samples is thereby kept constant. We plot the performance for all graph structures in Figure 14. Overall, the decrease of performance with lower interventional sample size is consistent across graph structures. With only 20 samples per intervention, it becomes especially hard to reason about variables with many parents, since the variable's distribution is determined by many other parents as well. Yet, for four out of the six graphs, we obtain an SHD of less than 1 with 100 interventional samples, and less than 6 when only 20 samples are available. In conclusion, ENCO works well with little interventional data if most variables have a small parent set.
Limited observational data sample sizes Similarly as above, we repeat the experiments of Table 1 for ENCO but limit the observational sample size to 1000 and 2000 (5000 before) while keeping 200 samples per intervention. Observational data is important in ENCO for learning the conditional distributions. For variables with many parents, this becomes more difficult when fewer samples are available, because the input space grows exponentially with the number of parents. Thus, we would expect the collider and full graph to suffer the most from having less observational data, and this is indeed the case, as shown by the results in Figure 15. The results on all other graphs are less affected, although interestingly, some become even better with less observational data. For the chain X 1 → X 2 → ... → X N , for instance, we observed that the learned conditional distributions picked up spurious correlations among variables, e.g. between X 1 and X 3 when modeling p(X 3 |X 1 , X 2 ), even though these are, in the data limit, independent given X 2 . Since those correlations do not necessarily transfer to the interventional setting, it is easier to spot false positive edges, and we can obtain even better results than for the larger sample sizes. In conclusion, having sufficient observational data is crucial in ENCO for graphs with variables that have larger parent sets, while being less important for sparser graphs.
Limited interventional and observational data sample sizes Finally, we combine the smallest interventional and observational data sample sizes, and also include the results of the previously best baselines, SDI and DCDI, in Table 12. The results of ENCO show the combination of the previous two effects: graphs consisting of variables with small parent sets can still be recovered well by ENCO, while errors increase for the collider and full graph. Similar trends are observed for SDI, while DCDI showed a considerable decrease in performance for all graphs. In conclusion, ENCO still works well for graphs with smaller parent sets under a small observational and interventional data regime, and outperforms related baselines in this setting.
D.2 INTERVENTIONS ON FEWER VARIABLES
We have performed the experiments in Section 4.4 using fewer interventions for all six synthetic graph structures. The results are visualized in Figure 16, and presented in table-form in Table 13.
From the experiments, we can see that ENCO with interventions on only 4 variables matches or outperforms DCDI with 10 interventions for 4 out of the 6 graph structures (bidiag, chain, full, jungle) when enforcing acyclicity. Especially on chain-like graphs such as jungle, ENCO achieves lower SHD scores for the same number of intervened variables, while DCDI incorrectly orients many edges and predicts false positive edges between children. On the graph collider, we observed a high variance for settings with very few interventions. This is because when we intervene on the collider node itself, ENCO can deduce the orientation for all edges. Finally, on the graph random, we observe that enforcing acyclicity for ENCO reduces the error considerably. This is because incorrectly oriented edges cause more false positives in this densely connected graph, which are removed together with the cycles. We include a longer discussion of the limitations of ENCO on fewer interventions in Appendix B.4. Still, we conclude that, in practice, ENCO performs competitively with DCDI even when very few interventions are provided, and scales better to more interventions.
D.3 ABLATION STUDY ON GRADIENT ESTIMATORS
To analyze the importance of the low-variance gradient estimators in ENCO, we repeat the experiments on the synthetic graph structures from Section 4.2, where the gradient estimators of ENCO have been replaced by those from Bengio et al. (2020). The results are shown in Table 14. Overall, the scores are very similar, with minor differences for the graphs full and bidiag. In comparison to the learning curves in Figure 11, the curves with the gradient estimator of Bengio et al. (2020) are noisier, with recall and precision jumping up and down. While ENCO easily converged early to the correct graphs for all graph types, this model often required the full 30 iterations to reach the optimal recovery. The difference between the two gradient estimators becomes more apparent on large graphs. We repeated the experiments of Section 4.3 on the graphs with 100 nodes using the gradient estimator of Bengio et al. (2020). Within the 30 epochs, the model obtained an SHD of 15.4 on average over 5 experiments, which is considerably higher than ENCO with the proposed gradient estimators (0.0). Still, this is only half of the errors that SDI (Ke et al., 2019) achieved with the same gradient estimator. Hence, we can conclude that the proposed gradient estimators are beneficial for ENCO but not strictly necessary for small graphs. For large graphs, the low variance of the estimator becomes much more important.
D.4 DETERMINISTIC VARIABLES
In contrast to algorithms working on observational data, ENCO does not strictly require the faithfulness assumption. Hence, we can apply ENCO to graphs with deterministic variables. A deterministic variable has a distribution that is defined by a one-to-one mapping of its parents' inputs to an output value. In other words, we have the following distribution:
$$p(X_i \mid pa(X_i)) = \mathbb{1}\left[X_i = f(pa(X_i))\right] \tag{47}$$
where f is an arbitrary function. The difficulty of deterministic variables is that a variable X i can be fully replaced by its parents pa(X i ) in any conditional distribution. The only way we can identify deterministic variables is from interventional data, where an intervention on X i breaks the dependency on its parents.
We have already tested ENCO on deterministic variables in the context of the real-world inspired graphs of Section 4.6. For a more detailed analysis, we created synthetic graphs following the random graph setup with an edge probability of 0.1, an average of two parents, and a maximum of three parents. An example graph is shown in Figure 17. All variables except the leaf nodes have deterministic distributions, where the function f(pa(X i )) is randomly created by picking a random output category for any combination of input values. We create 10 such graphs and use the same hyperparameter setup as for the synthetic graphs, except that we increase the sparsity regularizer to λ sparse = 0.02. We report the results in Table 15. In line with the results on the real-world graphs, ENCO is able to recover most graphs with fewer than two errors. As a baseline, we apply SDI (Ke et al., 2019) and observe a significantly higher error rate. The method predicts many false positives, including two-variable loops, but also misses some true positive edges. We conclude that ENCO also works well on deterministic graphs.

Figure 17: Example graph for the deterministic setup. We use the random setup with edge probability 0.1 and limit the number of parents to 3. All variables except the leaf nodes have deterministic distributions.

Figure 18: Example graph for cancelling paths X 1 → X 2 → X 3 and X 1 → X 3 . This can, for instance, occur when the conditional distribution of X 2 is a Dirac delta around X 1 : p(X 2 |X 1 ) = δ[X 2 = X 1 ].
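Such deterministic mechanisms can be realized as random lookup tables; the following is a minimal sketch of the construction described above (function and variable names are our own, not from the released code):

```python
import numpy as np

def random_deterministic_mechanism(num_parents: int, num_categs: int = 10, seed: int = 0):
    """Create a deterministic f(pa(X_i)) as a random lookup table, mapping
    every combination of parent categories to one fixed output category."""
    rng = np.random.default_rng(seed)
    table = rng.integers(num_categs, size=(num_categs,) * num_parents)

    def f(parent_values: np.ndarray) -> np.ndarray:
        # parent_values: (batch, num_parents) integer array of categories.
        return table[tuple(parent_values.T)]

    return f

f = random_deterministic_mechanism(num_parents=2)
x_i = f(np.array([[3, 7], [0, 0]]))  # deterministic: same inputs, same output
```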
Cancellation of paths Besides deterministic nodes, a common example of a faithfulness violation is the cancellation of two paths. For instance, consider the causal graph with the three variables X 1 , X 2 , X 3 shown in Figure 18, and the conditional distribution p(X 2 |X 1 ) = δ[X 2 = X 1 ]. In this case, the two paths X 1 → X 2 → X 3 and X 1 → X 3 cancel each other, i.e. X 3 is independent of X 2 when conditioned on X 1 , and independent of X 1 when conditioned on X 2 . This implies that only one of the two paths is necessary for describing the relations. Yet, ENCO can find the edge X 1 → X 3 by observing interventions on X 2 , since in this case, X 1 ⊥⊥ X 2 while X 1 and X 3 remain dependent given X 2 . The remaining edges can be learned in the same manner. We also empirically verify this by running ENCO on the graph structure of Figure 18 with the three variables being binary. We set p(X 2 |X 1 ) = δ[X 2 = X 1 ] for canceling the two paths, and the remaining distributions are randomly initialized. ENCO reconstructs the graph without errors, showing that it also works in practice.

Table 16: Experiments on graphs with continuous data from Brouillard et al. (2020), with the columns Linear, Nonlinear with additive noise, and Nonlinear with non-additive noise. The suffix "-G" denotes that the neural networks model a Gaussian density, and "-DSF" a two-layer deep sigmoidal flow. ENCO outperforms all baselines in this scenario, verifying that ENCO also works well on continuous data.
D.5 CONTINUOUS DATA
We verify that ENCO works just as well with continuous data by performing the experiments on datasets from Brouillard et al. (2020) that contained interventions on all variables. In these datasets, the graphs consist of 10 variables with an average of one edge per variable, and deploy three different causal mechanisms: linear, nonlinear additive noise models, and nonlinear models with non-additive noise using neural networks. The datasets contain 909 observational samples and 909 samples per intervention. All results of GIES, IGSP, and DCDI have been taken from Brouillard et al. (2020) (Appendix C.7, Table 22-24). We follow the setup of Brouillard et al. (2020) and compare two different neural network setups. First, we use MLPs that model a Gaussian density by predicting a mean and variance variable (denoted by suffix G). The second setup uses normalizing flows, more specifically a two-layer deep sigmoidal flow (Huang et al., 2018), which is flexible enough to model more complex distributions (denoted by suffix DSF). The rest of the experimental setup in ENCO is identical to the categorical case.
Results are shown in Table 16, and the observations are the same as with categorical data. ENCO outperforms all other methods in all settings, especially for the more complex distributions. The higher error rate for the DSF setup is mostly due to overfitting of the flow models. We conclude that ENCO works as accurately for both continuous and categorical data.
D.6 SKELETON LEARNING WITH OBSERVATIONAL BASELINE
To show the benefit of learning a graph from observational and interventional data jointly, we compare ENCO to a simple observational baseline. This baseline first learns the skeleton of the graph by applying greedy equivalence search (GES) (Chickering, 2002) on the observational data. Then, for each interventional dataset, we apply GES as well and use those skeletons to orient the edges of the original one. This can be done by checking for each undirected edge X − Y whether X → Y is in the skeleton of interventions on X or not. As a reference implementation of GES, we have used the one provided in the Causal Discovery Toolbox (Kalainathan et al., 2020).
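A minimal sketch of this orientation step is given below; it is our own illustration with hypothetical data structures, not the Causal Discovery Toolbox API:

```python
def orient_with_interventions(skeleton, interventional_skeletons):
    """Orient the undirected observational skeleton using per-intervention
    skeletons: an undirected edge X - Y is oriented X -> Y if the edge is
    still present in the skeleton estimated from interventions on X.

    skeleton: set of frozensets {x, y}
    interventional_skeletons: dict mapping intervened variable -> set of frozensets
    """
    directed = set()
    for edge in skeleton:
        x, y = tuple(edge)
        if edge in interventional_skeletons.get(x, set()):
            directed.add((x, y))  # dependence survives do(X) => X causes Y
        if edge in interventional_skeletons.get(y, set()):
            directed.add((y, x))  # dependence survives do(Y) => Y causes X
    return directed
```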
The results on continuous data are shown in Table 16. Since GES assumes linear mechanisms and Gaussianity of the data, it is unsurprising that it performs better on the linear Gaussian dataset than on the non-linear datasets. However, on all three datasets, it shows the lowest performance compared to the other methods, including ENCO. This highlights the benefits of incorporating interventional data in the learning of the skeleton and graph structure. To gain further insights in comparison to the constraint-based baseline, we repeat the experiments with smaller sample sizes. The original dataset has 909 samples for observational data and per intervention, and we sub-sample 500 and 100 of those respectively to simulate smaller dataset sizes. The results of those experiments can be found in Table 17. It is apparent that the results of GES on the linear dataset get considerably worse with fewer data samples available, while ENCO-G is still able to reconstruct most graphs without errors. Especially for the small dataset of 100 samples, we noticed that the skeletons found by GES on observational data already contained a couple of mistakes. This shows that for small datasets, observational data alone might not be sufficient to find the correct skeleton, while by jointly learning from observational and interventional data, we can still find the graph up to minor errors.

Table 17: Experiments on graphs with continuous data from Brouillard et al. (2020) with smaller sample sizes for both observational and interventional datasets (in brackets). ENCO performs much better at smaller sample sizes than a skeleton+orientation method, underlining the benefit of learning the whole graph from observational and interventional data jointly.

Table 18: Comparing structure learning methods in terms of structural hamming distance (SHD) on common graph structures (lower is better), averaged over 25 graphs each. ENCO outperforms all baselines, and by enforcing acyclicity after training, can recover most graphs with minimal errors.
D.7 NON-NEURAL BASED DATA SIMULATORS
Using neural networks to generate the simulated data might give SDI, DCDI and ENCO an advantage in our comparisons, since they rely on similar neural networks to model the distribution. To verify that ENCO works similarly well for other simulated data, we run experiments on categorical data with other functional forms for the causal mechanisms instead of neural networks. Since there is no straightforward way of defining 'linear' mechanisms for categorical data, we instead express a conditional distribution as a product of independent, single conditionals:
$$p(X_i \mid pa(X_i)) = \frac{\prod_{X_j \in pa(X_i)} p(X_i \mid X_j)}{\sum_{\tilde{x}_i} \prod_{X_j \in pa(X_i)} p(X_i = \tilde{x}_i \mid X_j)}$$

with $p(X_i \mid X_j) = \frac{\exp(\alpha_{X_i, X_j})}{\sum_{\tilde{X}_i} \exp(\alpha_{\tilde{X}_i, X_j})}$ and $\alpha_{\cdot,\cdot} \sim \mathcal{N}(0, 2)$. Hence, the effect of each variable in the parent set is independent of all others, similar to linear functions in the continuous case. The individual probability densities represent a softmax distribution over logits sampled from a Gaussian distribution.
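A minimal NumPy sketch of this construction is given below (our own illustration; whether N(0, 2) above denotes the variance or the standard deviation is ambiguous, and we use it as the standard deviation here):

```python
import numpy as np

def sample_pairwise_mechanism(num_parents: int, num_categs: int = 10, seed: int = 0):
    """Conditional p(X_i | pa(X_i)) as a normalized product of independent
    single-parent conditionals p(X_i | X_j), each a softmax over Gaussian logits."""
    rng = np.random.default_rng(seed)
    # alphas[j, c_j, c_i] are the logits of p(X_i = c_i | X_j = c_j).
    alphas = rng.normal(0.0, 2.0, size=(num_parents, num_categs, num_categs))
    singles = np.exp(alphas)
    singles /= singles.sum(axis=-1, keepdims=True)  # softmax per parent value

    def conditional(parent_values: np.ndarray) -> np.ndarray:
        # parent_values: (num_parents,) -> distribution over num_categs values.
        probs = np.prod([singles[j, v] for j, v in enumerate(parent_values)], axis=0)
        return probs / probs.sum()  # renormalize the product

    return conditional

p = sample_pairwise_mechanism(num_parents=3)
dist = p(np.array([2, 5, 1]))  # a valid distribution over 10 categories
```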
We apply GIES, IGSP, DCDI, SDI and ENCO to the same set of synthetic graph structures with these new causal mechanisms. Similar to the previous experiments, we provide 200 samples per intervention and 5k observational samples to the algorithms, and repeat the experiments with 25 independently sampled graphs. The results in Table 19 give the same conclusion as the experiments on neural-based causal mechanisms, namely that ENCO outperforms all baselines. Most methods experience a decrease in performance, since the average causal effect of each parent is lower than in the neural case, where more complex interactions between parents can be modeled. Still, ENCO shows only minor decreases, making less than one mistake on average for every graph structure when applying the orientation heuristic.
Figure 2: ENCO estimates gradients of significantly lower variance compared to Bengio et al. (2020).
Figure 4: Experiments on graphs with interventions on fewer variables. Additional graphs are shown in Appendix D.2. ENCO outperforms DCDI on bidiag and jungle, even for very few interventions.
Figure 6: Plotting the gradient estimate variance of ENCO for three variables of γ compared to the previous REINFORCE-like approach by Bengio et al. (2020) on an example graph with K = 100.
Figure 7: Example graph for which we check the convergence conditions described in Appendix B.2. The conditional distributions are given in Equations 33 to 35.
Since SDI (Ke et al., 2019) has a similar learning structure to ENCO, we have implemented it in the same code base as ENCO. This allows us to compare the learning algorithms under exactly the same prerequisites. Further, all methods with neural networks used the deep learning framework PyTorch (Paszke et al., 2019), which ensures a fair run time comparison across methods.
[Table fragment (large-graph results; row labels partially lost in extraction): 0.0 (±60.1), 224.2 (±87.3), 83.6 (±143.5), 527.9 (±35.7), 441.4 (±26.1), 471.9 (±33.7); IGSP 423.3 (±48.2), 240.1 (±78.8), 120.7 (±51.4), 554.8 (±26.4), 394.8 (±73.5), 524 (±18, truncated).]
Figure 12: (a) Example graph of 100 variables (best viewed electronically).
[Table fragment: SDI 3.0 (±0.0), 0.4 (±0.5), 4.0 (±0.0), 7.0 (±0.0), 11.2 (±0.4), 24.4 (±1.7), 422.4 (±8.7), 18.0 (±1.6); DCDI 4.0 (±0.0), 2.0 (±0.0), 5.0 (±0.0).]
Figure 14: Results of ENCO for different graph structures under limited interventional data sample size.

[Table fragment; columns follow the six graph types (bidiag, chain, collider, full, jungle, random):
GIES | 47.4 (±5.2) | 22.3 (±3.5) | 13.3 (±3.0) | 152.7 (±12.0) | 53.9 (±8.9) | 86.1 (±12.0)
IGSP | 33.0 (±4.2) | 12.0 (±1.9) | 23.4 (±2.2) | 264.6 (±7.4) | 38.6 (±5.7) | 76.3 (±7.7)
SDI | 2.1 (±1.5) | 0.8 (±0.9) | 14.7 (±4.0) | 121.6 (±18.4) | 1.8 (±1.6) | 1.8 (±1.9)
DCDI | 3.7 (±1.5) | 4.0 (±1.3) | 0.0 (±0.0) | 2.8 (±2.1) | 1.2 (±1.5) | 2.2 (±1.5)
ENCO (Ours) | 0.0 (±0.0) | 0.0 (±0.0) | 0.0 (±0.0) | 0.3 (±0.9) | 0.0 (±0.0) | 0.0 (±0.0)]
Figure 15: Results of ENCO for different graph structures under limited observational data sample size.
Figure 16: Results of ENCO and DCDI for different graph structures under fewer interventions provided. Note the different scale of the y-axis for the six graphs. For four out of six graphs, ENCO outperforms DCDI even for as few as 4 interventions, especially when enforcing acyclicity. The detailed numbers of the results are listed in Table 13.
[Table 14 rows; columns follow the six graph types:
Bengio et al. (2020) gradients | 0.1 (±0.4) | 0.0 (±0.0) | 0.0 (±0.0) | 1.9 (±1.5) | 0.0 (±0.0) | 0.0 (±0.0)
Our gradient estimator | 0.0 (±0.0) | 0.0 (±0.0) | 0.0 (±0.0) | 0.3 (±0.9) | 0.0 (±0.0) | 0.0 (±0.0)]
[Table fragment (first row label lost): 30.7 (±3.1), 16.9 (±2.4), 18.6 (±2.5), 238.6 (±4.0), 30.1 (±3.5), 110.0 (±11.8); IGSP 27.0 (±4.2), 14.0 (±2.8), 25.0 (±1.4), 259.5 (±3.5), 24.0 (±7, truncated).]
Table 1: Comparing structure learning methods in terms of structural hamming distance (SHD) on common graph structures (lower is better), averaged over 25 graphs each. ENCO outperforms all baselines, and by enforcing acyclicity after training, can recover most graphs with minimal errors.

Graph type | bidiag | chain | collider | full | jungle | random
GIES | 33.6 (±7.5) | 17.5 (±7.3) | 24.0 (±2.9) | 216.5 (±15.2) | 33.1 (±2.9) | 57.5 (±14.2)
IGSP | 32.7 (±5.1) | 14.6 (±2.3) | 23.7 (±2.3) | 253.8 (±12.6) | 35.9 (±5.2) | 65.4 (±8.0)
GES + Orientating | 14.8 (±2.6) | 0.5 (±0.7) | 20.8 (±2.4) | 282.8 (±4.2) | 14.7 (±3.1) | 60.1 (±8.9)
SDI | 9.0 (±2.6) | 3.9 (±2.0) | 16.1 (±2.4) | 153.9 (±10.3) | 6.9 (±2.3) | 10.8 (±3.9)
DCDI | 16.9 (±2.0) | 10.1 (±1.1) | 10.9 (±3.6) | 21.0 (±4.8) | 17.9 (±4.1) | 7.7 (±3.2)
Table 3: Results on graphs from the BnLearn library measured in structural hamming distance (lower is better). Results are averaged over 5 seeds with standard deviations listed in Appendix C.5. Despite deterministic variables and rare events, ENCO can recover all graphs with almost no errors.

Dataset | cancer (5 nodes) | asia (8 nodes) | sachs (11 nodes) | child (20 nodes) | alarm (37 nodes) | diabetes (413 nodes) | pigs (441 nodes)

Table 2: Results of ENCO on detecting latent confounders. The missed confounders do not affect other edge predictions.

Metrics | ENCO
SHD | 0.0 (±0.0)
Confounder recall | 96.8% (±9.5%)
Confounder precision | 100.0% (±0.0%)
TABLE OF CONTENTS

A Gradient estimators
A.1 Low-variance gradient estimator for edge parameters
A.2 Low-variance gradient estimator for orientation parameters
A.3 Training loop
B Conditions for converging to the true causal graph
B.1 Assumptions
B.2 Convergence conditions
B.3 Convergence conditions for latent confounder detection
B.4 Convergence for partial intervention sets
B.5 Example for non-faithful graphs
C Experimental details
C.1 Common graph structure experiments
C.2 Scalability experiments
C.3 Latent confounder experiments
C.4 Interventions on fewer variables
C.5 Real-world inspired experiments
D Additional experiments
D.1 Effect of the sample size
D.2 Interventions on fewer variables
D.3 Ablation study on gradient estimators
D.4 Deterministic variables
D.5 Continuous data
D.6 Skeleton learning with observational baseline
D.7 Non-neural based data simulators
[…] and Equation 24, but in the limited data regime, we might have violations of Equation 36 due to biases in our samples. Violations of Equation 36 are the cause of ENCO predicting cyclic graphs, as seen in Section 4.2.
Table 4: Hyperparameter overview for the simulated graphs dataset experiments presented in Table 1. Single values are shared between SDI and ENCO.

Hyperparameters | SDI | ENCO
Sparsity regularizer λ_sparse | {0.01, 0.02, 0.05, 0.1, 0.2} | {0.002, 0.004, 0.01}
DAG regularizer | {0.2, 0.5, 1.0, 2.0, 5.0} | -
Distribution model | 2 layers, hidden size 64, LeakyReLU(α = 0.1)
Batch size | 128
Learning rate - model | {2e-3, 5e-3, 2e-2, 5e-2}
Weight decay - model | {1e-5, 1e-4, 1e-5}
Distribution fitting iterations F | 1000
Graph fitting iterations G | 100
Graph samples K | 100
Epochs | 50 | 30
Learning rate - γ | {5e-3, 2e-2, 5e-2} | {5e-3, 2e-2, 5e-2}
Learning rate - θ | - | {5e-3, 2e-2, 5e-2, 1e-1}
[…] the output to obtain a distribution over the 10 categories. The MLP and hyperparameters have been chosen based on the design of networks used in ENCO, SDI (Ke et al., 2019) and DCDI (Brouillard et al., 2020).
Table 5: Extension of […]
Table 6: Hyperparameter overview of ENCO for the scalability experiments presented in Table 7. Single values represent that we use the same value for all graphs.

Hyperparameters | 100 nodes | 200 nodes | 400 nodes | 1000 nodes
Distribution model | 2 layers, hidden size 64, LeakyReLU(α = 0.1)
Batch size - model | 128 | 128 | 128 | 64
Learning rate - model | 5e-3
Distribution fitting iterations F | 2000 | 2000 | 4000 | 4000
Graph fitting iterations G | 100
Graph samples | 100
Learning rate - γ | 2e-2
Learning rate - θ | 1e-1
θ-training iterations | 1000 | 1000 | 2000 | 2000
θ-training graph samples | 2 + 2
Sparsity regularizer λ_sparse | {0.002, 0.004, 0.01}
Number of epochs (max.) | 30
Table 8: Results of ENCO on detecting latent confounders averaged over 25 graphs with 25 nodes in the data limit (10k samples per intervention, 100k observational samples) and limited data (200 samples per intervention, 5k observational samples). In the data limit, only false negative predictions of latent confounders occurred, which did not affect other edge predictions. With little interventional data, more false positives occur, reducing the precision.

Metrics | ENCO (10k interv./100k observ.) | ENCO (512 interv./5k observ.) | ENCO (200 interv./5k observ.)
SHD | 0.0 (±0.0) | 1.24 (±1.33) | 4.12 (±1.86)
Confounder recall | 96.8% (±9.5%) | 93.6% (±13.8%) | 92.0% (±11.5%)
Confounder precision | 100.0% (±0.0%) | 96.5% (±7.1%) | 83.8% (±16.4%)
Table 9: Results on graphs from the BnLearn library measured in structural hamming distance (lower is better). Results are averaged over 5 seeds with standard deviations.

Dataset | cancer (5 nodes) | earthquake (5 nodes) | asia (8 nodes) | sachs (11 nodes) | child (20 nodes) | alarm (37 nodes) | diabetes (413 nodes) | pigs (441 nodes)
Table 10: Results on graphs from the BnLearn library measured in structural hamming distance (lower is better), using 5k observational and 200 interventional samples.

Dataset | cancer (5 nodes) | earthquake (5 nodes) | asia (8 nodes) | sachs (11 nodes) | child (20 nodes) | alarm (37 nodes)
Table 11: Repeating experiments of Table 1 with large sample sizes (10k samples per intervention, 100k observational samples). In line with the theoretical guarantees, ENCO can reliably recover five out of the six graph structures without errors.

Graph type | bidiag | chain | collider | full | jungle | random
Table 12: Repeating experiments of Table 1 with very small sample sizes (20 samples per intervention, 1k observational samples). Despite the limited data, ENCO can recover graphs with small parent sets reasonably well, while the graphs collider and full suffer for all methods.

Graph type | bidiag | chain | collider | full | jungle | random
SDI | 10.9 (±2.7) | 6.1 (±1.5) | 22.1 (±1.9) | 211.0 (±6.2) | 10.4 (±2.7) | 22.7 (±7.4)
DCDI | 30.0 (±4.2) | 22.0 (±1.5) | 23.2 (±1.3) | 185.2 (±4.5) | 25.8 (±2.7) | 40.2 (±8.4)
ENCO (Ours) | 9.7 (±3.6) | 5.6 (±1.7) | 22.7 (±2.1) | 132.6 (±8.0) | 8.1 (±2.3) | 18.4 (±4.8)
ENCO-acyclic (Ours) | 2.0 (±2.3) | 2.7 (±2.1) | 22.9 (±2.3) | 88.4 (±6.6) | 4.1 (±2.0) | 5.3 (±2.5)
Table 13: Detailed results of the experiments with fewer interventions. See Figure 16 for a visualization and discussion.

Graph type | bidiag | chain | collider | full | jungle | random
DCDI 4 vars | 25.8 (±2.0) | 23.6 (±11.3) | 12.5 (±1.8) | 143.8 (±10.7) | 38.5 (±3.2) | 26.3 (±6.2)
DCDI 6 vars | 24.3 (±2.2) | 14.6 (±2.7) | 12.5 (±1.9) | 142.2 (±13.5) | 32.7 (±3.8) | 23.1 (±7.2)
DCDI 8 vars | 23.5 (±1.4) | 13.3 (±2.4) | 12.3 (±2.0) | 134.8 (±8.9) | 28.8 (±7.4) | 19.1 (±5.5)
DCDI 10 vars | 22.4 (±1.1) | 13.0 (±4.1) | 10.8 (±3.6) | 126.2 (±4.2) | 28.0 (±3.2) | 14.8 (±6.3)
DCDI 15 vars | 22.0 (±1.9) | 12.5 (±2.1) | 11.5 (±3.7) | 90.2 (±7.1) | 25.8 (±2.1) | 12.2 (±5.3)
DCDI 20 vars | 20.8 (±1.4) | 11.5 (±1.3) | 12.0 (±2.9) | 62.8 (±9.8) | 19.8 (±3.3) | 10.2 (±3.0)
DCDI 25 vars | 16.9 (±2.0) | 10.1 (±1.1) | 10.9 (±3.6) | 21.0 (±4.8) | 17.9 (±4.1) | 7.7 (±3.2)
Table 14: Extension of Table 11 with an ablation study of using Bengio et al. (2020) gradients with ENCO.
Table 15: Experiments on graphs with deterministic variables. The performance over 10 experiments is reported in terms of SHD with standard deviation in brackets. ENCO can recover most of the graphs with less than two errors.

Graph type | deterministic
SDI (Ke et al., 2019) | 20.6 (±3.8)
ENCO (Ours) | 1.4 (±1.3)
[Figure: example graph over variables X1-X25.]
Table 16: Experiments on graphs with continuous data from Brouillard et al. (2020).
Table 19: Experiments with a different data simulator, introducing independence among parents for each variable. Similar to the neural-based synthetic data, ENCO recovers most graphs with a minor error rate, outperforming other baselines.
The calculations can be found in the notebook called convergence_guarantees_ENCO.ipynb, see https://github.com/phlippe/ENCO/blob/main/convergence_guarantees_ENCO.ipynb
https://cran.r-project.org/web/packages/pcalg/index.html
3 https://github.com/uhlerlab/causaldag
4 https://github.com/slachapelle/dcdi
https://www.bnlearn.com/bnrepository/
ACKNOWLEDGEMENTS

This work is financially supported by Qualcomm Technologies Inc., the University of Amsterdam and the allowance Top consortia for Knowledge and Innovation (TKIs) from the Netherlands Ministry of Economic Affairs and Climate Policy. We thank Pascal Mettes, Christina Winkler, and Sara Magliacane for their valuable comments and feedback on an initial draft of this work. We thank the anonymous reviewers for the reviews, suggestions, and engaging discussion which helped to improve this work further. Finally, we thank SURFsara for the support in using the Lisa Compute Cluster.
Steen Andreassen, Roman Hovorka, Jonathan Benn, Kristian G. Olesen, and Ewart R. Carson. A model-based approach to insulin adjustment. In M. Stefanelli, A. Hasman, M. Fieschi, and J. Talmon (eds.), Lecture Notes in Medical Informatics, volume 44, pp. 239-249. Springer, Germany, 1991.

Ingo A. Beinlich, Henri Jacques Suermondt, R. Martin Chavez, and Gregory F. Cooper. The ALARM Monitoring System: A Case Study with two Probabilistic Inference Techniques for Belief Networks. In Jim Hunter, John Cookson, and Jeremy Wyatt (eds.), AIME 89, pp. 247-256. Springer Berlin Heidelberg, 1989.

Yoshua Bengio, Tristan Deleu, Nasim Rahaman, Nan Rosemary Ke, Sébastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, and Christopher J. Pal. A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms. In 8th International Conference on Learning Representations (ICLR), 2020.

Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, and Alexandre Drouin. Differentiable Causal Discovery from Interventional Data. In Advances in Neural Information Processing Systems 33 (NeurIPS), 2020.

David Maxwell Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3(Nov):507-554, 2002.

Christina Heinze-Deml, Nicolai Meinshausen, and Jonas Peters. Invariant Causal Prediction for Nonlinear Models. Journal of Causal Inference, 6(2):1-35, 2018.

Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley and Sons, Ltd, 2005.

A. Dixit, O. Parnas, B. Li, J. Chen, C. P. Fulco, L. Jerby-Arnon, N. D. Marjanovic, D. Dionne, T. Burks, R. Raychowdhury, B. Adamson, T. M. Norman, E. S. Lander, J. S. Weissman, N. Friedman, and A. Regev. Perturb-Seq: Dissecting Molecular Circuits with Scalable Single-Cell RNA Profiling of Pooled Genetic Screens. Cell, 167(7):1853-1866, 2016.

Frederick Eberhardt. Almost Optimal Intervention Sets for Causal Discovery. In Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI), pp. 161-168, 2008.

Frederick Eberhardt, Clark Glymour, and Richard Scheines. On the Number of Experiments Sufficient and in the Worst Case Necessary to Identify All Causal Relations among N Variables. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI), pp. 178-184, 2005.

Nir Friedman, Michal Linial, Iftach Nachman, and Dana Pe'er. Using Bayesian networks to analyze expression data. Journal of Computational Biology, 7(3-4):601-620, 2000.

Clark Glymour, Kun Zhang, and Peter Spirtes. Review of Causal Discovery Methods Based on Graphical Models. Frontiers in Genetics, 10:524, 2019.

Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, and Michèle Sebag. Causal generative neural networks. arXiv preprint arXiv:1711.08936, 2017.

Ruocheng Guo, Lu Cheng, Jundong Li, P. Richard Hahn, and Huan Liu. A Survey of Learning Causality with Data: Problems and Methods. ACM Computing Surveys, 53(4), 2020.

Alain Hauser and Peter Bühlmann. Characterization and Greedy Learning of Interventional Markov Equivalence Classes of Directed Acyclic Graphs. Journal of Machine Learning Research, 13(1):2409-2464, 2012.

John Hicks et al. Causality in economics. Australian National University Press, 1980.

Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.

Chin-Wei Huang, David Krueger, Alexandre Lacoste, and Aaron C. Courville. Neural Autoregressive Flows. In Proceedings of the 35th International Conference on Machine Learning (ICML), pp. 2083-2092. PMLR, 2018.

Marco Scutari. Learning Bayesian Networks with the bnlearn R Package. Journal of Statistical Software, 35(3):1-22, 2010.

D. J. Spiegelhalter and R. G. Cowell. Learning in probabilistic expert systems. Bayesian Statistics, 4:447-466, 1992.

Peter Spirtes, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search, Second Edition. MIT Press, 2000.

Xiaohai Sun, Dominik Janzing, Bernhard Schölkopf, and Kenji Fukumizu. A kernel-based causal learning algorithm. In Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML), pp. 855-862. ACM, 2007.

Ioannis Tsamardinos, Laura E. Brown, and Constantin F. Aliferis. The Max-Min Hill-Climbing Bayesian Network Structure Learning Algorithm. Machine Learning, 65(1):31-78, 2006.

Jan P. Vandenbroucke, Alex Broadbent, and Neil Pearce. Causality and causal inference in epidemiology: the need for a pluralistic approach. International Journal of Epidemiology, 45(6):1776-1786, 2016.

Gherardo Varando. Learning DAGs without imposing acyclicity. arXiv preprint arXiv:2006.03005, 2020.

Yuhao Wang, Liam Solus, Karren D. Yang, and Caroline Uhler. Permutation-based Causal Inference Algorithms with Interventions. In Advances in Neural Information Processing Systems 30 (NIPS), pp. 5822-5831, 2017.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, pp. 229-256, 1992.

Karren D. Yang, Abigail Katoff, and Caroline Uhler. Characterizing and Learning Equivalence Classes of Causal DAGs under Interventions. In Proceedings of the 35th International Conference on Machine Learning (ICML), pp. 5537-5546. PMLR, 2018.

Yue Yu, Jie Chen, Tian Gao, and Mo Yu. DAG-GNN: DAG Structure Learning with Graph Neural Networks. In Proceedings of the 36th International Conference on Machine Learning (ICML), pp. 7154-7163. PMLR, 2019.

Xun Zheng, Bryon Aragam, Pradeep Ravikumar, and Eric P. Xing. DAGs with NO TEARS: Continuous Optimization for Structure Learning. In Advances in Neural Information Processing Systems 31 (NeurIPS), pp. 9492-9503, 2018.

Xun Zheng, Chen Dan, Bryon Aragam, Pradeep Ravikumar, and Eric P. Xing. Learning Sparse Nonparametric DAGs. In The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 3414-3425. PMLR, 2020.

Shengyu Zhu, Ignavier Ng, and Zhitang Chen. Causal Discovery with Reinforcement Learning. In 8th International Conference on Learning Representations (ICLR), 2020.
231,648,113 | ZERO-COST PROXIES FOR LIGHTWEIGHT NAS | Neural Architecture Search (NAS) is quickly becoming the standard methodology to design neural network models. However, NAS is typically compute-intensive because multiple models need to be evaluated before choosing the best one. To reduce the computational power and time needed, a proxy task is often used for evaluating each model instead of full training. In this paper, we evaluate conventional reduced-training proxies and quantify how well they preserve ranking between multiple models during search when compared with the rankings produced by final trained accuracy. We propose a series of zero-cost proxies, based on recent pruning literature, that use just a single minibatch of training data to compute a model's score. Our zero-cost proxies use 3 orders of magnitude less computation but can match and even outperform conventional proxies. For example, Spearman's rank correlation coefficient between final validation accuracy and our best zero-cost proxy on NAS-Bench-201 is 0.82, compared to 0.61 for EcoNAS (a recently proposed reduced-training proxy). Finally, we use these zerocost proxies to enhance existing NAS search algorithms such as random search, reinforcement learning, evolutionary search and predictor-based search. For all search methodologies and across three different NAS datasets, we are able to significantly improve sample efficiency, and thereby decrease computation, by using our zero-cost proxies. For example on NAS-Bench-101, we achieved the same accuracy 4× quicker than the best previous result. | [
209531937,
49411844,
52920837,
211146532,
53388625,
12713052
] | ZERO-COST PROXIES FOR LIGHTWEIGHT NAS
Mohamed S Abdelfattah
Samsung AI Center
Cambridge
Abhinav Mehrotra
Samsung AI Center
Cambridge
Łukasz Dudziak
Samsung AI Center
Cambridge
Nicholas D Lane
Samsung AI Center
Cambridge
University of Cambridge
ZERO-COST PROXIES FOR LIGHTWEIGHT NAS
Published as a conference paper at ICLR 2021
Neural Architecture Search (NAS) is quickly becoming the standard methodology to design neural network models. However, NAS is typically compute-intensive because multiple models need to be evaluated before choosing the best one. To reduce the computational power and time needed, a proxy task is often used for evaluating each model instead of full training. In this paper, we evaluate conventional reduced-training proxies and quantify how well they preserve ranking between multiple models during search when compared with the rankings produced by final trained accuracy. We propose a series of zero-cost proxies, based on recent pruning literature, that use just a single minibatch of training data to compute a model's score. Our zero-cost proxies use 3 orders of magnitude less computation but can match and even outperform conventional proxies. For example, Spearman's rank correlation coefficient between final validation accuracy and our best zero-cost proxy on NAS-Bench-201 is 0.82, compared to 0.61 for EcoNAS (a recently proposed reduced-training proxy). Finally, we use these zerocost proxies to enhance existing NAS search algorithms such as random search, reinforcement learning, evolutionary search and predictor-based search. For all search methodologies and across three different NAS datasets, we are able to significantly improve sample efficiency, and thereby decrease computation, by using our zero-cost proxies. For example on NAS-Bench-101, we achieved the same accuracy 4× quicker than the best previous result.
INTRODUCTION
Instead of manually designing neural networks, neural architecture search (NAS) algorithms are used to automatically discover the best ones (Tan & Le, 2019a; Bender et al., 2018). Early work by Zoph & Le (2017) proposed using a reinforcement learning (RL) controller that constructs candidate architectures; these are evaluated, and feedback is provided to the controller based on each candidate's performance. One major problem with this basic NAS methodology is that each evaluation is very costly, typically on the order of hours or days to train a single neural network fully. We focus on this evaluation phase: we propose using proxies that require a single minibatch of data and a single forward/backward propagation pass to score a neural network. This is inspired by recent pruning-at-initialization work by Lee et al. (2019), Wang et al. (2020) and Tanaka et al. (2020) wherein a per-parameter saliency metric is computed before training to inform parameter pruning. Can we use such saliency metrics to score an entire neural network? Furthermore, can we use these "single minibatch" metrics to rank and compare multiple neural networks for use within NAS? If so, how do we best integrate these metrics within existing NAS algorithms such as RL or evolutionary search? These are the questions that we hope to (empirically) tackle in this work, with the goal of making NAS less compute-hungry. Our contributions are:
• Zero-cost proxies We adapt pruning-at-initialization metrics for use with NAS. This requires these metrics to operate at the granularity of an entire network rather than individual parameters -we devise and validate approaches that aggregate parameter-level metrics in a manner suitable for ranking candidates during NAS search. • Comparison to conventional proxies We perform a detailed comparison between zerocost and conventional NAS proxies that use a form of reduced-computation training. First, we quantify the rank consistency of conventional proxies on large-scale datasets: 15k models vs. 50 models used in (Zhou et al., 2020). Second, we show that zero-cost proxies can match or exceed the rank consistency of conventional proxies. • Ablations on NAS benchmarks We perform ablations of our zero-cost proxies on five different NAS benchmarks (NAS-Bench-101/201/NLP/ASR and PyTorchCV) to both test the zero-cost metrics under different settings, and expose properties of successful metrics.
• Integration with NAS Finally, we propose two ways to use zero-cost metrics effectively within NAS algorithms: random search, reinforcement learning, aging evolution and predictor-based search. For all algorithms and three NAS datasets we show significant speedups, up to 4× for NAS-Bench-101 compared to current state-of-the-art.
RELATED WORK
NAS Efficiency To decrease NAS search time, various techniques were used in the literature. Pham et al. (2018) and Cai et al. (2018) use weight sharing between candidate models to decrease the training time during evaluation. Others use smaller datasets (CIFAR-10) as a proxy for the full task (ImageNet1k). In EcoNAS, Zhou et al. (2020) extensively investigated reduced-training proxies wherein input size, model size, number of training samples and number of epochs were reduced in the NAS evaluation phase. We compare to EcoNAS in this work to elucidate how well our zero-cost proxies perform compared to familiar and widely-used conventional proxies.
Pruning The goal is to reduce the number of parameters in a neural network; one way to do this is by identifying a saliency (importance) metric for each parameter, after which the less important parameters are removed. For example, Han et al. (2015), Frankle & Carbin (2019) and others use parameter magnitudes as the criterion, while LeCun et al. (1990), Hassibi & Stork (1993) and Molchanov et al. (2017) use gradients. However, the aforementioned works require training before computing the saliency criterion. A new class of pruning-at-initialization algorithms that require no training was introduced by Lee et al. (2019) and extended by Wang et al. (2020) and Tanaka et al. (2020). A single forward/backward propagation pass is used to compute a saliency criterion which is successfully used to heavily prune neural networks before training. We extend these pruning-at-initialization criteria towards scoring entire neural networks and we investigate their use with NAS algorithms.
Intersection between pruning and NAS Concepts from pruning have been used within NAS multiple times. For example, Mei et al. (2020) use channel pruning in their AtomNAS work to arrive at customized multi-kernel-size convolutions (mixconvs as introduced by Tan & Le (2019b)). In their Blockswap work, Turner et al. (2020) use Fisher information at initialization to score different lightweight primitives that are substituted into a neural network to decrease computation. This is the earliest work we could find that attempts to perform a type of NAS by scoring neural networks without training using a pruning criterion. More recently, Mellor et al. (2020) introduced a new metric for scoring neural networks at initialization based on the correlation of Jacobians with different inputs. They perform "NAS without training" by performing random search with their zero-cost metric (jacob cov) to rank neural networks instead of using accuracy. We include jacob cov in our analysis and we introduce five more zero-cost metrics in this work.
PROXIES FOR NEURAL NETWORK ACCURACY
CONVENTIONAL NAS PROXIES (ECONAS)
In conventional sample-based NAS, a proxy training regime is often used to predict a model's accuracy instead of full training. Zhou et al. (2020) investigate conventional proxies in depth by computing the Spearman rank correlation coefficient (Spearman ρ) of a proxy task to final validation accuracy. The proxy used is a reduced-computation training, wherein one of the following four variables is reduced: (1) number of epochs, (2) number of training samples, (3) input resolution, and (4) model size (controlled through the number of channels after the first convolution). Even though such proxies were used in many prior works, EcoNAS is the first systematic study of conventional proxy tasks that we found. One main finding by Zhou et al. (2020) is that using approximately 1/4 of the model size and input resolution, all training samples, and 1/10 the number of epochs was a reasonable proxy which yielded the best results in their experiments.
ZERO-COST NAS PROXIES
We present alternative proxies for network accuracy that can be used to speed up NAS. A simple proxy that we use is grad norm in which we sum the Euclidean norm of the gradients after a single minibatch of data training data. Other metrics listed below were previously introduced in the context of parameter pruning at the granularity of a single parameter -a saliency is computed to rank parameters and remove the least important ones. We adapt these metrics to score and rank entire neural network models for NAS.
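A minimal PyTorch sketch of the grad norm proxy as described; the choice of cross-entropy loss and summing per-tensor gradient norms (rather than the norm of the single concatenated gradient vector) are our assumptions:

```python
import torch
import torch.nn.functional as F

def grad_norm_score(net, inputs, targets):
    """Sum of per-parameter-tensor gradient norms after one minibatch."""
    net.zero_grad()
    loss = F.cross_entropy(net(inputs), targets)
    loss.backward()
    return sum(p.grad.norm(2).item()
               for p in net.parameters() if p.grad is not None)
```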
SNIP, GRASP AND SYNAPTIC FLOW
In their snip work, Lee et al. (2019) proposed performing parameter pruning based on a saliency metric computed at initialization using a single minibatch of data. This saliency criterion approximates the change in loss when a specific parameter is removed. Wang et al. (2020) attempted to improve on the snip metric by approximating the change in gradient norm (instead of loss) when a parameter is pruned in their grasp objective. Finally, Tanaka et al. (2020) generalized these so-called synaptic saliency scores and proposed a modified version (synflow) which avoids layer collapse when performing parameter pruning. Instead of using a minibatch of training data and cross-entropy loss (as in snip or grasp), with synflow we compute a loss which is simply the product of all parameters in the network; therefore, no data is needed to compute this loss or the synflow metric itself. These are the three metrics:
$$\textrm{snip}: S_p(\theta) = \left|\frac{\partial \mathcal{L}}{\partial \theta} \odot \theta\right|, \qquad \textrm{grasp}: S_p(\theta) = -\left(H \frac{\partial \mathcal{L}}{\partial \theta}\right) \odot \theta, \qquad \textrm{synflow}: S_p(\theta) = \frac{\partial \mathcal{L}}{\partial \theta} \odot \theta \tag{1}$$

where $\mathcal{L}$ is the loss function of a neural network with parameters $\theta$, $H$ is the Hessian, $S_p$ is the per-parameter saliency and $\odot$ is the Hadamard product. We extend these saliency metrics to score an entire neural network by summing over all $N$ parameters in the model: $S_n = \sum_{i=1}^{N} S_p(\theta)_i$.

FISHER

Theis et al. (2018) perform channel pruning by removing activation channels (and their corresponding parameters) that are estimated to have the least effect on the loss. They build on the work of Molchanov et al. (2017) and Figurnov et al. (2016). More recently, Turner et al. (2020) aggregated this fisher metric for all channels in a convolution primitive to quantify the importance of that primitive when it is replaced by a more efficient alternative. We further aggregate the fisher metric for all layers in a neural network to score an entire network as shown in the following equations:
$$\textrm{fisher}: S_z(z) = \left(\frac{\partial \mathcal{L}}{\partial z} z\right)^2, \qquad S_n = \sum_{i=1}^{M} S_z(z_i) \tag{2}$$
where S z is the saliency per activation z, and M is the length of the vectorized feature map.
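To make the aggregation $S_n = \sum_i S_p(\theta)_i$ concrete, below is a minimal PyTorch sketch of the synflow network score. It follows the common linearization trick from Tanaka et al. (2020): take absolute parameter values, propagate an all-ones input, and sum the output as the loss. Details such as batch-norm handling are simplified, and input_shape is a placeholder.

```python
import torch

def synflow_score(net, input_shape):
    """Network-level synflow score: sum over params of theta * dL/dtheta,
    with L computed from an all-ones input on the absolute-valued network,
    so no data or labels are needed."""
    # linearize: replace every parameter/buffer by its absolute value
    signs = {}
    with torch.no_grad():
        for name, p in net.state_dict().items():
            signs[name] = torch.sign(p)
            p.abs_()
    net.zero_grad()
    net(torch.ones(1, *input_shape)).sum().backward()
    score = sum((p * p.grad).sum().item()
                for p in net.parameters() if p.grad is not None)
    # undo the linearization by restoring the original signs
    with torch.no_grad():
        for name, p in net.state_dict().items():
            p.mul_(signs[name])
    return score
```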
JACOBIAN COVARIANCE
This metric was purpose-designed to score neural networks in the context of NAS; we refer the reader to the original paper for detailed reasoning and derivation of the metric, which we call jacob cov (Mellor et al., 2020). In brief, this metric captures the correlation of activations within a network when subject to different inputs within a minibatch of data: the lower the correlation, the better the network is expected to perform, as it can differentiate between different inputs well.
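For intuition, a rough sketch of a Jacobian-covariance style score is given below: it computes the input Jacobian for every example in the minibatch, forms their correlation matrix, and turns the eigenvalue spectrum into a scalar score. The exact scoring function in Mellor et al. (2020) differs; this is only our simplified variant, and it assumes a minibatch of more than one example on CPU.

```python
import numpy as np
import torch

def jacob_cov_score(net, inputs, eps=1e-5):
    """Simplified Jacobian-covariance style score (batch size > 1 assumed)."""
    inputs = inputs.clone().requires_grad_(True)
    out = net(inputs)
    out.backward(torch.ones_like(out))          # d(sum of outputs)/d(inputs)
    jac = inputs.grad.reshape(inputs.size(0), -1).detach().cpu().numpy()
    corr = np.corrcoef(jac)                     # correlation across inputs
    eigvals = np.linalg.eigvalsh(corr)
    return -np.sum(np.log(eigvals + eps) + 1.0 / (eigvals + eps))
```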
EMPIRICAL EVALUATION OF PROXY TASKS
Generally, most of the proxies presented in the previous section try to capture how trainable a neural network is by inspecting the gradients at the beginning of training. In this work, we refrain from attempting to explain precisely why each metric works (or does not work) and instead focus on the empirical evaluation of those metrics in different scenarios. We use the Spearman rank correlation coefficient (Spearman ρ) to quantify how well a proxy ranks models compared to the ground-truth ranking produced by final test accuracy (Daniel, 1990).
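For reference, computing Spearman ρ between a proxy and final accuracy is a one-liner with SciPy (the numbers below are toy values for illustration only):

```python
from scipy.stats import spearmanr

# toy example: proxy scores vs. final test accuracies for five models
proxy_scores = [0.12, 3.40, 1.10, 2.75, 0.50]
final_accs   = [88.1, 93.5, 90.2, 92.8, 89.0]
rho, _ = spearmanr(proxy_scores, final_accs)
print(f"Spearman rho = {rho:.2f}")  # 1.00 for this perfectly-ranked toy data
```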
NAS-BENCH-201
NAS-Bench-201 is a purpose-built benchmark for prototyping NAS algorithms (Dong & Yang, 2020). It contains 15,625 CNN models from a cell-based search space and corresponding training statistics. We first use NAS-Bench-201 to evaluate conventional proxies from EcoNAS, then we evaluate our zero-cost proxies and compare the two approaches. Even though Zhou et al. (2020) thoroughly investigated reduced-training proxies, they only evaluated a small model zoo consisting of 50 models. To study EcoNAS more extensively, we evaluate it on all 15,625 models in the NAS-Bench-201 search space (training details in A.1). The full-configuration training of NAS-Bench-201 on CIFAR-10 uses input resolution r=32, c=16 channels in the stem convolution and e=200 epochs, which we summarize as r32c16e200. According to the EcoNAS study, the most effective configuration divides both the input resolution and the stem channels by 4 and the number of epochs by 10, that is, r8c4e20 for NAS-Bench-201 models. Keeping that in mind, we investigate r8c4 in Fig. 1 (labeled econas); however, this proxy training seems to suffer from overfitting, as correlation to final accuracy started to drop after 20 epochs. Additionally, the Spearman ρ was a modest 0.61 when evaluated on all 15,625 models in NAS-Bench-201, a far cry from the 0.87 achieved on the 50 models in the EcoNAS paper (Zhou et al., 2020). We additionally explore r8c8, r16c4 and r16c8, and find a very good proxy with r16c8e15, labeled in Fig. 1 as econas+. From the plots in Fig. 1, we would like to highlight that:
1. A reduced-training proxy that works well on one search space may not work well on another, as highlighted by the difference in Spearman ρ between econas and econas+. This occurs even though both tasks in this case were CIFAR-10 image classification.
2. Even though econas-style proxies reduce the computational load by a large factor (as seen in the middle plot in Fig. 1), this does not translate fully into actual runtime improvement when run on a nominal desktop GPU. We therefore plot actual GPU speedup in the third subplot in Fig. 1. For example, notice that the point labeled econas (r8c4e20) has the same FLOPS as 1/10 of a full training epoch, but when measured on a GPU, takes time equivalent to 5 full training epochs, a 50× gap between theoretical and actual speedup.
ZERO-COST PROXIES ON NAS-BENCH-201
We now shift our focus towards our zero-cost NAS proxies, which rely on gradient computations using a single minibatch of data at initialization. A clear advantage of zero-cost proxies is that they take very little time to compute: a single forward/backward pass on one minibatch of data. We ran the zero-cost proxies on all 15,625 models in NAS-Bench-201 for three datasets and we summarize the results in Table 1. The synflow metric performed the best on all three datasets with a Spearman ρ consistently above 0.73; jacob cov was second best but was also very well-correlated to final accuracy. Next came grad norm and snip with a Spearman ρ close to 0.6. We add another metric, which we simply label vote, that takes a majority vote between the three metrics synflow, jacob cov and snip when ranking two models. This performed better than any single metric, with a Spearman ρ consistently above 0.8. At the cost of just 3 minibatches instead of 1000, this is already performing slightly better than econas+, and much better than econas, as shown in Fig. 2a. In Fig. 2 we also plot the rank correlation of validation accuracy (without any reduced training) over the first 10 epochs of training for the three datasets available in NAS-Bench-201. Having set a comparison point with EcoNAS and normal training proxies, we have shown that zero-cost proxies can match and outperform these conventional methods in a large-scale empirical analysis. However, different NAS search spaces may behave differently, so in the remainder of this section, we test the zero-cost proxies on different search spaces.
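A minimal sketch of the vote comparison described above, assuming the three per-model metric values have already been computed (higher is better for each):

```python
def vote_prefers_a(scores_a, scores_b):
    """scores_*: dicts with precomputed synflow/jacob_cov/snip values for a model.
    Returns True if model A wins the majority of the three pairwise comparisons."""
    metrics = ("synflow", "jacob_cov", "snip")
    wins = sum(scores_a[m] > scores_b[m] for m in metrics)
    return wins >= 2
```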
MODELS IN THE WILD (PYTORCHCV)
To study zero-cost proxies in a different setting, we scored the models in the PyTorchCV database (Sémery, 2020). PytorchCV contains common state-of-the-art neural networks such as ResNets (He et al., 2016), DenseNets (Huang et al., 2017), MobileNets (Howard et al., 2017) and EfficientNets (Tan & Le, 2019a), a representative assortment of top-performing models. We evaluated 50 models for CIFAR-10, CIFAR-100 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011), and 200 models for ImageNet1k (Deng et al., 2009). Fig. 3 shows the resulting correlation for the zero-cost metrics. synflow, snip, fisher and grad norm all perform similarly well on all datasets, with the exception of SVHN, where synflow outperforms the other metrics by a large margin. However, grasp failed in this setting completely, as indicated by its low mean Spearman ρ and high variance in Fig. 3. Curiously, jacob cov also failed in this setting even though it performed well on NAS-Bench-201. This suggests that this metric is better at scoring models from within a search space (similar topology), but becomes worse when scoring unrelated models.
OTHER SEARCH SPACES
We investigate these metrics with other NAS datasets. Our goal is to empirically find a good metric to speed up NAS algorithms reliably -we evaluate our zero-cost proxies on the following datasets.
• NAS-Bench-101: This is the first and largest NAS benchmark available with over 423k CNN models and training statistics on CIFAR-10 (Ying et al., 2019).
• NAS-Bench-NLP: Klyuchnikov et al. (2020) investigate the architectures of 14k different recurrent cells in natural language processing (NLP) tasks such as next word prediction.
• NAS-Bench-ASR: This is our in-house dataset for convolution-based automatic speech recognition models evaluated on the TIMIT dataset (Garofolo et al., 1993). The search space includes linear, convolution, zeroize and skip-connection operations, forming 8242 models.

Compared to NAS-Bench-201, these datasets are either much larger (NAS-Bench-101) or based on a different task (NAS-Bench-NLP/ASR). From Table 2 we would like to highlight that the synflow metric is the only consistent one across all analyzed benchmarks. Additionally, even for the synflow metric, rank correlation is quite a bit lower than that for NAS-Bench-201 (~0.3 vs. ~0.8). Beyond global rank correlation, we posit that the ranking of top models from a search space is also critically important for NAS algorithms, because we ultimately care about finding those top models. In Section A.3 we perform an analysis of how top models are ranked by zero-cost proxies. Top models (according to validation accuracy) need to be ranked correctly by a zero-cost metric. Additionally, local rank correlation of top models could be important for NAS algorithms when two good models are compared using their proxy metric value. Tables 10 and 9 show that the only metric that maintains correct ranking among top models consistently well across all NAS benchmarks is synflow. In Section 5 we deliberately evaluate 3 benchmarks that exhibit different levels of rank correlation, NAS-Bench-201/101/ASR, to see if we can integrate synflow within NAS and achieve consistent gains for all three search spaces.

ZERO-COST NAS

Mellor et al. (2020) proposed using jacob cov to score a set of randomly-sampled models and to greedily choose the model with the highest score. This "NAS without training" methodology is very attractive thanks to its simplicity and low computational cost. In this section, we evaluate our metrics in this setting, which we simply call "random search" (RAND). We extend this methodology slightly: instead of just training the top model, we keep training models (from best to worst as ranked by the zero-cost metric) until the desired accuracy is achieved. However, this approach can only produce results that are as good as the metric being used, and we have no guarantees (just empirical evidence) that these metrics will perform well on all datasets. Therefore, we also investigate how to integrate zero-cost metrics within existing NAS algorithms such as reinforcement learning (RL) (Zoph & Le, 2017), aging evolution (AE) search and predictor-based search (Dudziak et al., 2020). More specifically, we investigate enhancing these search algorithms through either (a) a zero-cost warmup phase or (b) zero-cost move proposal.
ZERO-COST WARMUP
Generally speaking, by warmup we mean using the zero-cost proxies at the beginning of the search process to initialize the search algorithm without training any models or using accuracy. The main parameter in zero-cost warmup is the number of models for which we compute and use the zero-cost metric (N), and the potential gain comes from the fact that this number can usually be much larger than the number of models we can afford to train (T ≪ N).
Aging Evolution
We score N random models with our proxy metric and choose the ones ranked highest as the initial population (pool) in the aging evolution (AE) algorithm.
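A sketch of this warmup step; search_space.sample() and score_fn are placeholder APIs, not the paper's actual interface:

```python
def zero_cost_warmup_pool(search_space, score_fn, n_warmup, pool_size):
    """Pick the AE starting pool as the top-scoring of N random models."""
    candidates = [search_space.sample() for _ in range(n_warmup)]  # N models
    candidates.sort(key=score_fn, reverse=True)   # rank by zero-cost score
    return candidates[:pool_size]                 # only these get trained
```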
Binary Predictor
We warm up a binary graph convolutional network (GCN) predictor (from Dudziak et al. (2020)) by training it to predict the relative performance of two models by considering their zero-cost scores instead of accuracy. For N warmup points, we use the relative rankings (according to the zero-cost metric) of all pairs of models (0.5N(N − 1) pairs) when performing warmup training for the predictor. As in Dudziak et al. (2020), models are ranked by the predictor after each training round (including the warmup phase) and the top models are evaluated.
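A sketch of constructing the 0.5N(N − 1) relative-ranking pairs from zero-cost scores (model encoding and the GCN itself are omitted; all names are our own):

```python
from itertools import combinations

def pairwise_warmup_data(models, proxy_scores):
    """Build the 0.5*N*(N-1) relative-ranking pairs for predictor warmup."""
    pairs = []
    for i, j in combinations(range(len(models)), 2):
        label = 1 if proxy_scores[i] > proxy_scores[j] else 0
        pairs.append(((models[i], models[j]), label))
    return pairs
```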
ZERO-COST MOVE PROPOSAL
Whereas warmup tries to leverage the global correlation of the proxy metrics to the accuracy of models, move proposal focuses on a local neighborhood at each step. A common parameter for move proposal algorithms is denoted as R and means the sample ratio, i.e., how many models can be checked using zero-cost metrics each time we select a model to train.
Aging Evolution The algorithm is enhanced by performing "guided" mutations. More specifically, each time a model is being mutated (in the baseline algorithm this is done randomly) we consider all possible mutations with edit distance 1 from the current model, score them using the zero-cost proxies and select the best one to add to the pool.
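A sketch of the guided mutation, assuming a neighbours_fn that enumerates all edit-distance-1 mutations and a zero-cost score_fn (both placeholders):

```python
def guided_mutation(model, neighbours_fn, score_fn):
    """Score all edit-distance-1 mutations and keep the best one."""
    candidates = neighbours_fn(model)   # every mutation at edit distance 1
    return max(candidates, key=score_fn)
```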
Reinforcement Learning In the case of REINFORCE, move proposal is similar to warmup: instead of rewarding the controller N times before the search begins, we interleave R zero-cost rewards with each accuracy reward (R ≪ N).
RESULTS
For all NAS experiments, we repeat experiments 32 times and we plot the median and shade between the lower/upper quartiles. Our baselines are already heavily tuned and achieve the same or better results than those reported in the original NAS-Bench-101/201 papers. When adding zero-cost warmup or move proposal with synflow, we leave all search hyper-parameters unchanged.
NAS-Bench-201
The global/top-10% rank correlations of synflow for this dataset are (0.76/0.42) so we expect this proxy to perform quite well. Indeed, as Figure 4 and Table 3 show, we improve search speed on all four types of searches using zero-cost warmup and move proposal. RAND and RL are both significantly improved, both in terms of sample efficiency and final achieved accuracy. But even more powerful algorithms like AE and BP exhibit 5.2× and 1.5× speedups respectively to arrive at 73% accuracy. Generally, the more zero-cost warmup, the better the results. This holds true for all algorithms except RL which degrades at 15k warmup points, suggesting that the controller is overfitting to the synflow metric instead of learning to optimize for accuracy.
NAS-Bench-101
This dataset is an order of magnitude larger than NAS-Bench-201 and has lower global/top-10% rank correlations of (0.37/0.14). In many ways, this provides a true test as to whether these lower correlations are still useful with zero-cost warmup and move proposal. Table 4 shows a summary of the results and Figure 7 (in Section A.4) shows the full plots. As the table shows, even with modest correlations, there is a major boost to all searching algorithms thus outperforming the best previously published result by a large margin and setting a new state-of-the-art result on this dataset. However, it is worth noting that the binary predictor exhibits no improvement (but also no degradation). Perhaps this is because it was already very sample-efficient and synflow warmup couldn't help further due to its relatively poor correlation on this dataset. NAS-Bench-ASR We repeat our evaluation on NAS-Bench-ASR with global/top-10% correlations (0.38/0.03). Even though this is a different task (speech recognition) and the top-10% correlation is effectively nil, synflow warmup and move proposal both yield large improvements in search speeds compared to all baselines in Figure 5 and Table 5. For example, to achieve a phoneme error rate (PER) of 21.3%, baseline RAND, AE and RL required 560, 177 and >1000 trained models; however, this is reduced to 92, 78 and 72 samples with 2000 models of zero-cost warmup.
DISCUSSION
In this section we investigate why zero-cost NAS is effective in improving the sample efficiency of NAS algorithms by looking more closely at how top models are selected by the synflow proxy. Table 6 shows the number of top-5% most-accurate models ranked within the top 64 models by the synflow metric. If we compare random warmup versus zero-cost warmup with synflow, random warmup will on average return only 5%, i.e., ~3 models out of 64, that are within the top 5% of models, whereas synflow warmup returns a higher number of top-5% models, as listed in Table 6. This is key to the improvements observed when adding zero-cost warmup to algorithms like random search or AE. For example, with AE, the numbers in Table 6 are indicative of the models that may end up in the initial AE pool. By initializing the AE pool with many good models, it becomes more likely that a random mutation will lead to an even better model, thus allowing the search to find a top model more quickly. Note that synflow is able to rank many good models in its top 64 models even when global/local correlation is low (as is the case for NAS-Bench-ASR).
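The counting behind Table 6 can be reproduced with a few lines of NumPy; the function below is our own sketch of that computation, not code from the paper:

```python
import numpy as np

def top64_hits(proxy_scores, accuracies, k=64, top_frac=0.05):
    """Count how many of the k best models by proxy are in the top-5% by accuracy."""
    proxy_scores = np.asarray(proxy_scores)
    accuracies = np.asarray(accuracies)
    top_by_proxy = np.argsort(-proxy_scores)[:k]
    cutoff = np.quantile(accuracies, 1.0 - top_frac)
    return int(np.sum(accuracies[top_by_proxy] >= cutoff))
```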
Move Proposal For a search algorithm like AE, search moves consist of random mutations (with edit distance 1 for our experiments) of a model from the AE pool. Zero-cost move proposal enhances this by trying out all possible mutations and selecting the best one according to synflow. To investigate how this improves search efficiency, we took 1000 random points and explored the local neighbourhood cluster of possible mutations around each. Table 7 shows the probability that the synflow proxy correctly identifies the top model. Indeed, synflow improves the chance of selecting the best mutation from 4% for NAS-Bench-201/101 to >30% and 12% respectively. Even for NAS-Bench-ASR, a random mutation has a 7.7% chance (= 1/13) of selecting the best mutation, but this increases to 10% with the synflow proxy, thus speeding up convergence to top models.

Table 7: For 1000 clusters of models with edit distance 1, we empirically measure the probability that the synflow proxy will select the most accurate model from each cluster.
Search space: NAS-Bench-201 (CIFAR-10, CIFAR-100, ImageNet16-120), NAS-Bench-101, NAS-Bench-ASR
CONCLUSION
In this paper, we introduced six zero-cost proxies, mainly based on recent pruning-at-initialization work, that are used to rank neural network models in NAS. First, we compared to conventional proxies (EcoNAS) that perform reduced-computation training and we found that zero-cost proxies such as synflow can outperform EcoNAS in maintaining rank consistency. Next, we verified our zero-cost metrics on four additional datasets of varying sizes and tasks and found that indeed out of the six initially-considered zero-cost metrics, only synflow was robust across all datasets for both global and top-10% rank correlation. Finally, we proposed two ways to integrate synflow within NAS algorithms: zero-cost warmup and zero-cost move proposal. Both methods demonstrated significant speedups across four search algorithms and three NAS benchmarks, setting new state-of-the-art results for both NAS-Bench-101 and NAS-Bench-201 datasets. Our strong and consistent empirical results suggest that the synflow metric, when combined with warmup and move proposal can be an effective and reliable methodology for speeding up different NAS algorithms.
A APPENDIX
A.1 EXPERIMENTAL DETAILS
In Table 8 we list the hyper-parameters used in training the EcoNAS proxies to produce Figure 1. The only difference to the standard NAS-Bench-201 training pipeline (Dong & Yang, 2020) is our use of fewer epochs for the learning rate annealing schedule: we anneal the learning rate to zero over 40 epochs instead of 200. This is a common technique used in speeding up convergence for training proxies (Zhou et al., 2020). We acknowledge that slightly better correlations could have been achieved for the econas and econas+ proxies in Figure 1 if the learning rate was annealed to zero over fewer epochs (20 and 15 epochs respectively). However, we do not anticipate the results to change significantly. Figure 6 shows the speedup of different EcoNAS proxies compared to baseline training. Even though r8c4 has 64× less computation compared to r32c16, it achieves a maximum of 4× real speedup even when the batch size is increased.
A.3 ANALYSIS OF THE TOP 10% OF MODELS
In the main text we pointed to the fact that only synflow achieves consistent rank correlation for the top-10% of models across different datasets. Here, in Table 9 we provide the full results. Additionally, we hypothesized that a successful metric will rank many of the most-accurate models in its top models. In Table 10 we enumerate the percentage of top-10% most accurate models ranked as top-10% by each proxy metric. Again, synflow is the only consistent metric for all datasets, and performs best on average. Table 11 shows the number of top-5% models ranked in the top 64 models by each metric. This is an extension to Table 6 in the main text that only shows the results for synflow.
A.5 SENSITIVITY ANALYSIS
We performed some sensitivity analysis to investigate how the zero-cost metrics perform on all points within NAS-Bench-201 with different initialization seeds, initialization methods and minibatch sizes. We comment on each table in its caption; to summarize, all metrics seem to be relatively unaffected when initialization and minibatch size are varied. The one exception can be seen in Table 15, where fisher benefits when biases are initialized with zeroes.

Figure 8: Evaluation of all zero-cost proxies on different datasets and search algorithms: random search (RAND) and aging evolution (AE). RAND benefits greatly from a strong metric (such as synflow) but may deteriorate with a weaker metric as shown in the plot. However, AE benefits when a strong metric is used and is resilient to weaker metrics as well; it is able to recover and achieve the top accuracy in most cases.
Figure 1: Evaluation of different econas proxies on NAS-Bench-201 CIFAR-10. FLOPS and runtime are normalized to the FLOPS/runtime of a single baseline (full training) epoch.

Figure 2: Correlation of validation accuracy to final test accuracy during the first 12 epochs of training for three datasets on the NAS-Bench-201 search space. Zero-cost and EcoNAS proxies are also labeled for comparison.

Figure 3: Performance of zero-cost metrics on PyTorchCV models (averaged over 5 seeds).

Figure 4: Search speedup with the synflow zero-cost proxy on NAS-Bench-201 CIFAR-100.

Reinforcement Learning. In the REINFORCE algorithm (Zoph & Le, 2017), we sample N random models and use their zero-cost scores to reward the controller, thus biasing it towards selecting architectures which are likely to have higher values of the chosen metrics. During warmup, the reward for the controller is calculated by linearly normalizing the values returned by the proxy functions to the range [−1, 1] (with online adjustment of min and max), as sketched below.
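A minimal sketch of this reward normalization, under our assumption of how the running min/max are maintained (the class name and structure are ours, not the paper's implementation):

```python
class ProxyReward:
    """Linearly normalize proxy scores to [-1, 1] as warmup rewards,
    adjusting the running min and max online as new models are sampled."""
    def __init__(self):
        self.lo, self.hi = float("inf"), float("-inf")

    def __call__(self, proxy_value):
        self.lo = min(self.lo, proxy_value)   # online min adjustment
        self.hi = max(self.hi, proxy_value)   # online max adjustment
        if self.hi == self.lo:
            return 0.0                        # degenerate range: neutral reward
        # Linear map [lo, hi] -> [-1, 1], used as the controller's reward.
        return 2.0 * (proxy_value - self.lo) / (self.hi - self.lo) - 1.0
```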
Figure 5: Search speedup with the synflow zero-cost proxy on NAS-Bench-ASR TIMIT.

Figure 6: Higher batch sizes when training econas proxies have diminishing returns in terms of measured speedup. This measurement is done for 10 randomly-sampled NAS-Bench-201 models on the CIFAR-10 dataset.

Figure 7: Search speedup with the synflow zero-cost proxy on NAS-Bench-101 CIFAR-10.
Table 1: Spearman ρ of zero-cost proxies on NAS-Bench-201.

Dataset          grad_norm  snip  grasp  fisher  synflow  jacob_cov  vote
CIFAR-10         0.58       0.58  0.48   0.36    0.74     0.73       0.82
CIFAR-100        0.64       0.63  0.54   0.39    0.76     0.71       0.83
ImageNet16-120   0.58       0.58  0.56   0.33    0.75     0.71       0.82
4.1.1 ECONAS PROXY ON NAS-BENCH-201
Table 2: Spearman ρ of zero-cost proxies on other NAS search spaces.

                grad_norm  snip   grasp  fisher  synflow  jacob_cov
NAS-Bench-101   0.20       0.16   0.45   0.26    0.37     0.38
NAS-Bench-NLP   -0.21      -0.19  0.16   -       0.34     0.56
NAS-Bench-ASR   0.06       -0.01  -      0.01    0.38     -0.38
Table 3: Zero-cost NAS comparison with baseline algorithms on NAS-Bench-201 CIFAR-100. We show accuracy after 50 trained models and the number of models to reach 73% accuracy (accuracy / # models).

       Baseline        Warmup: 1000 (BP=256)  Warmup: 3000 (BP=512)  Warmup: 15k   Move: 10      Move: 100
RAND   71.02 / 1000+   72.79 / 1000+          73.00 / 49             73.09 / 14    -             -
RL     71.00 / 722     72.55 / 134            72.88 / 64             72.52 / 137   71.39 / 309   72.72 / 141
AE     71.05 / 171     72.63 / 93             72.89 / 68             73.12 / 33    71.10 / 76    -
BP     72.72 / 65      73.02 / 48             73.19 / 44             -             -             -
Table 4: Comparison to prior work on the NAS-Bench-101 dataset.

                    Wen et al. (2019)  Wei et al. (2020)  Dudziak et al. (2020)  Ours: RL+M (100)  Ours: AE+W (15k)  Ours: RAND+W (3k)
# Trained Models    256                150                140                    51                50                34
Test Accuracy [%]   94.17              94.14              94.22                  94.22             94.22             94.22
Table 5: Zero-cost NAS comparison with baseline algorithms on NAS-Bench-ASR. We show PER after 50 trained models and the number of models to reach PER=21.3% (PER / # models).

       Baseline        Warmup: 500   Warmup: 2000   Move: 10        Move: 100
RAND   21.63 / 560     21.37 / 509   21.36 / 92     -               -
RL     21.69 / 1000+   21.52 / 349   21.45 / 78     21.60 / 1000+   21.45 / 1000+
AE     21.76 / 177     21.40 / 87    21.36 / 72     21.59 / 142     -
Table 6: Number of top-5% most-accurate models within the top 64 models returned by synflow.

NAS-Bench-201                              NAS-Bench-101  NAS-Bench-ASR
CIFAR-10   CIFAR-100   ImageNet16-120
44         54          56                  12             16
Table 8: EcoNAS training hyper-parameters for NAS-Bench-201.

optimizer:     SGD       Nesterov:      yes
initial LR:    0.1       momentum:      0.9
final LR:      0         weight decay:  0.0005
LR schedule:   cosine    random flip:   p=0.5
epochs:        40        random crop:   yes
batch size:    256
A.2 GPU RUNTIME FOR ECONAS
Table 9: Spearman ρ of zero-cost proxies for the top 10% of points on all NAS search spaces.

                     grad_norm  snip   grasp  fisher  synflow  jacob_cov
NB2-CIFAR-10         -0.38      -0.38  -0.37  -0.38   0.18     0.17
NB2-CIFAR-100        -0.09      -0.09  -0.11  -0.16   0.42     0.08
NB2-ImageNet16-120   0.13       0.13   0.10   0.02    0.55     0.05
NAS-Bench-101        0.05       -0.01  -0.01  0.07    0.14     0.08
NAS-Bench-NLP        -0.03      -0.02  0.04   -       0.10     0.04
NAS-Bench-ASR        -0.04      -0.17  -      -0.03   0.03     0.02
Table 10: Percentage of top-10% most-accurate models within the top-10% of models ranked by each zero-cost metric.

                     grad_norm  snip  grasp  fisher  synflow  jacob_cov
NB2-CIFAR-10         30%        31%   30%    5%      46%      25%
NB2-CIFAR-100        35%        36%   34%    4%      50%      24%
NB2-ImageNet16-120   31%        31%   32%    5%      44%      30%
NAS-Bench-101        2%         3%    26%    3%      23%      2%
NAS-Bench-NLP        10%        10%   4%     -       22%      38%
NAS-Bench-ASR        0%         0%    -      0%      15%      44%
Table 11: Number of top-5% most-accurate models within the top-64 models returned by each metric.

                     grad_norm  snip  grasp  fisher  synflow  jacob_cov
NB2-CIFAR-10         0          0     0      0       44       15
NB2-CIFAR-100        4          4     4      0       54       16
NB2-ImageNet16-120   13         13    14     0       56       15
NAS-Bench-101        0          0     6      0       12       0
NAS-Bench-ASR        1          0     -      1       16       13
Table 12: Rank correlation coefficient for the local neighbourhoods (edit distance = 1) of 1000 points in each search space.

                     grad_norm  snip  grasp  fisher  synflow  jacob_cov
NB2-CIFAR-10         0.51       0.51  0.37   0.37    0.66     0.62
NB2-CIFAR-100        0.58       0.58  0.44   0.41    0.69     0.61
NB2-ImageNet16-120   0.56       0.57  0.5    0.4     0.67     0.61
NAS-Bench-101        0.23       0.21  0.44   0.27    0.36     0.37
NAS-Bench-ASR        0.59       0.4   -      0.56    0.38     0.28
Table 13: For 1000 clusters of points with edit distance = 1, we count the number of times wherein the top model returned by a zero-cost metric matches the top model according to validation accuracy. This represents the probability that zero-cost move proposal will perform the best possible mutation.

                     grad_norm  snip   grasp  fisher  synflow  jacob_cov
NB2-CIFAR-10         14.8%      14.8%  12.7%  5.7%    32.2%    14.5%
NB2-CIFAR-100        19.1%      18.5%  14.2%  6.0%    35.4%    13.8%
NB2-ImageNet16-120   17.5%      18.5%  15.7%  5.5%    33.4%    16.7%
NAS-Bench-101        0.4%       0.9%   7.4%   0.5%    12.3%    0.5%
NAS-Bench-ASR        11.0%      9.8%   -      10.3%   10.3%    10.5%

A.4 NAS-BENCH-101 SEARCH PLOTS

Figure 7 shows the NAS search curves for all considered algorithms on the NAS-Bench-101 dataset. Important points from this plot are summarized in Table 4 in the main text.
Table 14: All metrics remain fairly constant when varying the initialization seed; the variations are only observed at the third significant digit. Dataload is random with 128 samples and initialization is done with the default PyTorch initialization scheme.

seed      grad_norm  snip   grasp  fisher  synflow  jacob_cov
1         0.578      0.581  0.487  0.361   0.737    0.735
2         0.580      0.583  0.488  0.354   0.740    0.728
3         0.582      0.584  0.486  0.358   0.738    0.726
4         0.581      0.584  0.491  0.356   0.738    0.73
5         0.581      0.583  0.486  0.356   0.738    0.727
Average   0.580      0.583  0.488  0.357   0.738    0.729
Table 15: fisher becomes noticeably better when biases are initialized to zero; otherwise, metrics seem to perform independently of the initialization method. Results averaged over 3 seeds.

Weights init  Bias init  grad_norm  snip   grasp  fisher  synflow  jacob_cov
default       default    0.580      0.583  0.488  0.357   0.738    0.729
kaiming       default    0.548      0.558  0.364  0.332   0.731    0.723
xavier        default    0.543      0.568  0.424  0.345   0.736    0.729
default       zero       0.581      0.583  0.488  0.509   0.738    0.729
kaiming       zero       0.542      0.551  0.370  0.479   0.730    0.723
xavier        zero       0.540      0.566  0.412  0.495   0.735    0.730

Table 16: Surprisingly, grasp becomes worse with more (random) data, while grad_norm and snip degrade very slightly. Other metrics seem to perform independently of the number of samples in the minibatch. Initialization is done with the default PyTorch initialization scheme.

Number of Samples  grad_norm  snip   grasp  fisher  synflow  jacob_cov
32                 0.595      0.596  0.511  0.362   0.737    0.732
64                 0.589      0.59   0.509  0.361   0.737    0.735
128                0.578      0.581  0.487  0.361   0.737    0.735
256                0.564      0.569  0.447  0.361   0.737    0.731
512                0.547      0.552  0.381  0.361   0.737    0.724

A.6 RESULTS FOR ALL ZERO-COST METRICS

Here we provide some NAS search results using all considered metrics for both RAND and AE searches on the NAS-Bench-101/201 datasets.
The full Hessian does not need to be explicitly constructed, as explained by Pearlmutter (1993).
We used an Nvidia GeForce GTX 1080 Ti and ran a random sample of 10 models for 10 epochs to get an average time-per-epoch for each proxy at different batch sizes. We discuss this further in Section A.2.
Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc V. Le. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning (ICML), 2018.
Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In AAAI Conference on Artificial Intelligence (AAAI), 2018.
Wayne W. Daniel. Applied Nonparametric Statistics. Boston: PWS-Kent, 1990.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
Xuanyi Dong and Yi Yang. NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search. In International Conference on Learning Representations (ICLR), 2020.
Łukasz Dudziak, Thomas Chau, Mohamed S. Abdelfattah, Royson Lee, Hyeji Kim, and Nicholas D. Lane. BRP-NAS: Prediction-based NAS using GCNs. In Neural Information Processing Systems (NeurIPS), 2020.
Mikhail Figurnov, Aizhan Ibraimova, Dmitry P. Vetrov, and Pushmeet Kohli. PerforatedCNNs: Acceleration through elimination of redundant convolutions. In Neural Information Processing Systems (NeurIPS), 2016.
Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations (ICLR), 2019.
John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L. Dahlgren, and Victor Zue. TIMIT acoustic phonetic continuous speech corpus. Linguistic Data Consortium, 1993. URL https://catalog.ldc.upenn.edu/LDC93S1.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Neural Information Processing Systems (NeurIPS), 2015.
Babak Hassibi and David G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems 5, 1993.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), June 2016.
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861, 2017.
Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Computer Vision and Pattern Recognition (CVPR), July 2017.
Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov, and Evgeny Burnaev. NAS-Bench-NLP: Neural architecture search benchmark for natural language processing. arXiv preprint arXiv:2006.07116, 2020.
Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009. URL https://www.cs.toronto.edu/~kriz/cifar.html.
Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Neural Information Processing Systems (NeurIPS), 1990.
Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. SNIP: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations (ICLR), 2019.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations (ICLR), 2019.
Jieru Mei, Yingwei Li, Xiaochen Lian, Xiaojie Jin, Linjie Yang, Alan Yuille, and Jianchao Yang. AtomNAS: Fine-grained end-to-end neural architecture search. In International Conference on Learning Representations (ICLR), 2020.
Joseph Mellor, Jack Turner, Amos Storkey, and Elliot J. Crowley. Neural architecture search without training. arXiv preprint arXiv:2006.04647, 2020.
Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. In International Conference on Learning Representations (ICLR), 2017.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading Digits in Natural Images with Unsupervised Feature Learning. In NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
Barak A. Pearlmutter. Fast Exact Multiplication by the Hessian. Neural Computation, 1993.
Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing, 2018.
Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized Evolution for Image Classifier Architecture Search. In AAAI Conference on Artificial Intelligence (AAAI), 2019.
Oleg Sémery. PyTorchCV: Convolutional neural networks for computer vision, August 2020. URL https://github.com/osmr/imgclsmob.
Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning (ICML), 2019a.
Mingxing Tan and Quoc V. Le. MixConv: Mixed depthwise convolutional kernels, 2019b.
Hidenori Tanaka, Daniel Kunin, Daniel L. K. Yamins, and Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. arXiv preprint arXiv:2006.05467, 2020.
Lucas Theis, Iryna Korshunova, Alykhan Tejani, and Ferenc Huszár. Faster gaze prediction with dense networks and fisher pruning. arXiv preprint arXiv:1801.05787, 2018.
Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey, and Gavin Gray. BlockSwap: Fisher-guided block substitution for network compression on a budget. In International Conference on Learning Representations (ICLR), 2020.
Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before training by preserving gradient flow. In International Conference on Learning Representations (ICLR), 2020.
Chen Wei, Chuang Niu, Yiping Tang, and Jimin Liang. NPENAS: Neural predictor guided evolution for neural architecture search. arXiv preprint arXiv:2003.12857, 2020.
Wei Wen, Hanxiao Liu, Hai Li, Yiran Chen, Gabriel Bender, and Pieter-Jan Kindermans. Neural predictor for neural architecture search. arXiv preprint arXiv:1912.00848, 2019.
Chris Ying, Aaron Klein, Eric Christiansen, Esteban Real, Kevin Murphy, and Frank Hutter. NAS-Bench-101: Towards reproducible neural architecture search. In International Conference on Machine Learning (ICML), 2019.
Dongzhan Zhou, Xinchi Zhou, Wenwei Zhang, Chen Change Loy, Shuai Yi, Xuesen Zhang, and Wanli Ouyang. EcoNAS: Finding proxies for economical neural architecture search. In Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations (ICLR), 2017. |
47,012,356 | SLALOM: FAST, VERIFIABLE AND PRIVATE EXECUTION OF NEURAL NETWORKS IN TRUSTED HARDWARE | As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations. A pragmatic solution comes from Trusted Execution Environments (TEEs), which use hardware and software protections to isolate sensitive computations from the untrusted software stack. However, these isolation guarantees come at a price in performance, compared to untrusted alternatives. This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices. Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX or Sanctum) to a faster, yet untrusted, co-located processor. We evaluate Slalom by running DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU. For canonical DNNs (VGG16, MobileNet and ResNet variants) we obtain 6× to 20× increases in throughput for verifiable inference, and 4× to 11× for verifiable and private inference. | [] | SLALOM: FAST, VERIFIABLE AND PRIVATE EXECUTION OF NEURAL NETWORKS IN TRUSTED HARDWARE
Florian Tramèr, Stanford University ([email protected])
Dan Boneh, Stanford University
As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations. A pragmatic solution comes from Trusted Execution Environments (TEEs), which use hardware and software protections to isolate sensitive computations from the untrusted software stack. However, these isolation guarantees come at a price in performance, compared to untrusted alternatives. This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices. Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX or Sanctum) to a faster, yet untrusted, co-located processor. We evaluate Slalom by running DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU. For canonical DNNs (VGG16, MobileNet and ResNet variants) we obtain 6× to 20× increases in throughput for verifiable inference, and 4× to 11× for verifiable and private inference.
INTRODUCTION
Machine learning is increasingly used in sensitive decision making and security-critical settings. At the same time, the growth in both cloud offerings and software stack complexity widens the attack surface for ML applications. This raises the question of integrity and privacy guarantees for ML computations in untrusted environments, in particular for ML tasks outsourced by a client to a remote server. Prominent examples include cloud-based ML APIs (e.g., a speech-to-text application that consumes user-provided data) or general ML-as-a-Service platforms.
Trusted Execution Environments (TEEs), e.g., Intel SGX (McKeen et al., 2013), ARM TrustZone (Alves & Felton, 2004) or Sanctum, offer a pragmatic solution to this problem. TEEs use hardware and software protections to isolate sensitive code from other applications, while attesting to its correct execution. Running outsourced ML computations in TEEs provides remote clients with strong privacy and integrity guarantees.

For outsourced ML computations, TEEs outperform pure cryptographic approaches (e.g., Gilad-Bachrach et al., 2016; Mohassel & Zhang, 2017; Ghodsi et al., 2017; Juvekar et al., 2018) by multiple orders of magnitude. At the same time, the isolation guarantees of TEEs still come at a steep price in performance, compared to untrusted alternatives (i.e., running ML models on contemporary hardware with no security guarantees). For instance, Intel SGX (Intel Corp., 2015) incurs significant overhead for memory intensive tasks (Orenbach et al., 2017; Harnik & Tsfadia, 2017), has difficulties exploiting multi-threading, and is currently limited to desktop CPUs that are outmatched by untrusted alternatives (e.g., GPUs or server CPUs). Thus, our thesis is that for modern ML workloads, TEEs will be at least an order of magnitude less efficient than the best available untrusted hardware.
This leads us to the main question of this paper: How can we most efficiently leverage TEEs for secure machine learning? This was posed by Stoica et al. (2017) as one of nine open research problems for system challenges in AI. A specific challenge they raised is that of appropriately splitting ML computations between trusted and untrusted components, to increase efficiency as well as security by minimizing the Trusted Computing Base. This paper explores a novel approach to this challenge, wherein a Deep Neural Network (DNN) execution is partially outsourced from a TEE to a co-located, untrusted but faster device. Our approach, inspired by the verifiable ASICs of Wahby et al. (2016), differs from cryptographic ML outsourcing. In our case, work is delegated between two co-located parties, thus allowing for highly interactive, yet conceptually simpler, outsourcing protocols with orders-of-magnitude better efficiency. Our work also departs from prior systems that execute DNNs fully in a TEE (Ohrimenko et al., 2016; Hunt et al., 2018; Cheng et al., 2018; Hanzlik et al., 2018).

The main observation that guides our approach is that matrix multiplication, the main bottleneck in DNNs, admits a concretely efficient verifiable outsourcing scheme known as Freivalds' algorithm (Freivalds, 1977), which can also be turned private in our setting. Our TEE selectively outsources these CPU intensive steps to a fast untrusted co-processor (and runs the remaining steps itself), therefore achieving much better performance than running the entire computation in the enclave, without compromising security.

Contributions. We propose Slalom, a framework for efficient DNN inference in any trusted execution environment (e.g., SGX or Sanctum). To evaluate Slalom, we build a lightweight DNN library for Intel SGX, which may be of independent interest. Our library allows for outsourcing all linear layers to an untrusted GPU without compromising integrity or privacy. Our code is available at https://github.com/ftramer/slalom.
We formally prove Slalom's security, and evaluate it on multiple canonical DNNs with a variety of computational costs-VGG16 (Simonyan & Zisserman, 2014), MobileNet (Howard et al., 2017), and ResNets (He et al., 2016). Compared to running all computations in SGX, outsourcing linear layers to an untrusted GPU increases throughput (as well as energy efficiency) by 6× to 20× for verifiable inference, and by 4× to 11× for verifiable and private inference. Finally, we discuss open challenges towards efficient verifiable training of DNNs in TEEs.
BACKGROUND
PROBLEM SETTING
We consider an outsourcing scheme between a client C and a server S, where S executes a DNN F (x) : X → Y on data provided by C. The DNN can either belong to the user (e.g., as in some ML-as-a-service platforms), or to the server (e.g., as in a cloud-based ML API). Depending on the application, this scheme should satisfy one or more of the following security properties (see Appendix B for formal definitions):
• t-Integrity: For any S and input x, the probability that a user interacting with S does not abort (i.e., output ⊥) and outputs an incorrect value ỹ ≠ F(x) is less than t.
• Privacy: The server S learns no information about the user's input x.
• Model privacy: If the model F is provided by the user, S learns no information about F (beyond e.g., its approximate size). If F belongs to the server, C learns no more about F than what is revealed by y = F (x). 1
TRUSTED EXECUTION ENVIRONMENTS (TEES), INTEL SGX, AND A STRONG BASELINE
Trusted Execution Environments (TEEs) such as Intel SGX, ARM TrustZone or Sanctum enable execution of programs in secure enclaves. Hardware protections isolate computations in enclaves from all programs on the same host, including the operating system. Enclaves can produce remote attestations, digital signatures over an enclave's code, that a remote party can verify using the manufacturer's public key. Our experiments with Slalom use hardware enclaves provided by Intel SGX (see Appendix A for details).2

TEEs offer an efficient solution for ML outsourcing: The server runs an enclave that initiates a secure communication with C and evaluates a model F on C's input data. This simple scheme (which we implemented in SGX, see Section 4) outperforms cryptographic ML outsourcing protocols by 2-3 orders of magnitude (albeit under a different trust model). See Table 1 and Appendix C for a comparison to two representative works.

Table 1: Security guarantees and performance (relative to baseline) of different ML outsourcing schemes. The columns are: Approach, TEE, Integrity, Privacy w.r.t. Server, Model Privacy w.r.t. Client, and Throughput (relative). The rows compare SafetyNets (Ghodsi et al., 2017), with relative throughput ≤ 1/200×, and Gazelle (Juvekar et al., 2018); the remaining cell entries (checkmarks) did not survive extraction.
Yet, SGX's security comes at a performance cost, and there remains a large gap between TEEs and untrusted devices. For example, current SGX CPUs are limited to 128 MB of Processor Reserved Memory (PRM) and incur severe paging overheads when exceeding this allowance (Orenbach et al., 2017). We also failed to achieve noticeable speed ups for multi-threaded DNN evaluations in SGX enclaves (see Appendix H). For DNN computations, current SGX enclaves thus cannot compete-in terms of performance or energy efficiency (see Appendix C)-with contemporary untrusted hardware, such as a GPU or server CPU.
In this work, we treat the above simple (yet powerful) TEE scheme as a baseline, and identify settings where we can still improve upon it. We will show that our system, Slalom, substantially outperforms this baseline when the server has access to the model F (e.g., F belongs to S as in cloud ML APIs, or F is public). Slalom performs best for verifiable inference (the setting considered in SafetyNets (Ghodsi et al., 2017)). If the TEE can run some offline data-independent preprocessing (e.g., as in Gazelle (Juvekar et al., 2018)), Slalom also outperforms the baseline for private (and verifiable) outsourced computations in a later online phase. Such a two-stage approach is viable if user data is sent at irregular intervals yet has to be processed with high throughput when available.
OUTSOURCING OUTSOURCED DNNS AND FREIVALDS' ALGORITHM
Our idea for speeding up DNN inference in TEEs is to further outsource work from the TEE to a co-located faster untrusted processor. Improving upon the above baseline thus requires that the combined cost of doing work on the untrusted device and verifying it in the TEE be cheaper than evaluating the full DNN in the TEE. Wahby et al. (2016;2017) aim at this goal for arbitrary computations outsourced between co-located ASICs. The generic non-interactive proofs they use for integrity are similar to those used in SafetyNets (Ghodsi et al., 2017), which incur overheads that are too large to warrant outsourcing in our setting (e.g., Wahby et al. (2016) find that the technology gap between trusted and untrusted devices needs to be of over two decades for their scheme to break even). Similarly for privacy, standard cryptographic outsourcing protocols (e.g., (Juvekar et al., 2018)) are unusable in our setting as simply running the computation in the TEE is much more efficient (see Table 1).
To overcome this barrier, we design outsourcing protocols tailored to DNNs, leveraging two insights:
1. In our setting, the TEE is co-located with the server's faster untrusted processors, thus widening the design space to interactive outsourcing protocols with high communication but better efficiency.
2. The TEE always has knowledge of the model and can selectively outsource part of the DNN evaluation and compute the others, for which outsourcing is harder, itself.
DNNs are a class of functions that are particularly well suited for selective outsourcing. Indeed, non-linearities, which are hard to securely outsource (with integrity or privacy), represent a small fraction of the computation in a DNN, so we can evaluate these in the TEE (e.g., for VGG16 inference on a single CPU thread, about 1.5% of the computation is spent on non-linearities). In contrast, linear operators, the main computational bottleneck in DNNs, admit a conceptually simple yet concretely efficient secure delegation scheme, described below.
Integrity. We verify integrity of outsourced linear layers using variants of an algorithm by Freivalds (1977).
Lemma 2.1 (Freivalds). Let A, B and C be n × n matrices over a field F and let s be a uniformly random vector in S^n, for S ⊆ F. Then,

Pr[Cs = A(Bs) | C ≠ AB] = Pr[(C − AB)s = 0 | (C − AB) ≠ 0] ≤ 1/|S|.
The randomized check requires 3n^2 multiplications, a significant reduction (both in concrete terms and asymptotically) over evaluating the product directly. The algorithm has no false negatives and trivially extends to rectangular matrices. Independently repeating the check k times yields soundness error 1/|S|^k.
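For concreteness, a minimal sketch of the repeated check in Python/NumPy (our illustration, not Slalom's actual C++ SGX code; the sampling range for s is simplified to [0, S) here):

```python
import numpy as np

def freivalds(A, B, C, p, S, k=2, rng=None):
    """Accept C as the product AB (mod p); a wrong C passes with
    probability at most 1/|S|^k. Never rejects a correct product."""
    rng = rng or np.random.default_rng()
    n = B.shape[1]
    for _ in range(k):
        s = rng.integers(0, S, size=n)   # random vector from S^n
        lhs = C.dot(s) % p               # n^2 multiplications
        rhs = A.dot(B.dot(s) % p) % p    # 2n^2 multiplications
        if not np.array_equal(lhs, rhs):
            return False
    return True
```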
Privacy. Input privacy for outsourced linear operators could be achieved with linearly homomorphic encryption, but the overhead (see the micro-benchmarks in (Juvekar et al., 2018)) is too high to compete with our baseline (i.e., computing the function directly in the TEE would be faster than outsourcing it over encrypted data).
We instead propose a very efficient two-stage approach based on symmetric cryptography, i.e., an additive stream cipher. Let f: F^m → F^n be a linear function over a field F. In an offline phase, the TEE generates a stream of one-time-use pseudorandom elements r ∈ F^m, and pre-computes u = f(r). Then, in the online phase when the remote client sends an input x, the TEE computes Enc(x) = x + r over F^m (i.e., a secure encryption of x with a stream cipher), and outsources the computation of f(Enc(x)) to the faster processor. Given the result f(Enc(x)) = f(x + r) = f(x) + f(r) = f(x) + u, the TEE recovers f(x) using the pre-computed u.
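A minimal sketch of this two-stage blinding for f(x) = xW over Z_p; the PRNG seeding and storage of the unblinding factors are simplified assumptions (the real system uses a cryptographic PRNG and authenticated storage):

```python
import numpy as np

p = 2**24 - 3  # Slalom's prime

def offline_phase(W, seed):
    """TEE, offline: derive a one-time blinding factor r and u = f(r)."""
    rng = np.random.default_rng(seed)        # stand-in for a secure PRNG
    r = rng.integers(0, p, size=W.shape[0])
    return r.dot(W) % p                      # precomputed unblinding factor u

def online_phase(x, W, u, seed, untrusted_matmul):
    """TEE, online: blind x, outsource the product, then unblind the result."""
    rng = np.random.default_rng(seed)        # regenerates the same r
    r = rng.integers(0, p, size=W.shape[0])
    x_enc = (x + r) % p                      # Enc(x) = x + r, a one-time pad in Z_p
    y_enc = untrusted_matmul(x_enc, W) % p   # outsourced: (x + r) W mod p
    return (y_enc - u) % p                   # f(x) = f(x + r) - f(r)
```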
Communication. Using Freivalds' algorithm and symmetric encryption for each linear layer in a DNN incurs high interaction and communication between the TEE and untrusted co-processor (e.g., over 50MB per inference for VGG16, see Table 3). This would be prohibitive if they were not co-located. There are protocols with lower communication than repeatedly using Freivalds' algorithm (Fiore & Gennaro, 2012; Thaler, 2013; Ghodsi et al., 2017). Yet, these incur a high overhead on the prover in practice and are thus not suitable in our setting.
SLALOM
We introduce Slalom, a three-step approach for outsourcing DNNs from a TEE to an untrusted but faster device:
(1) Inputs and weights are quantized and embedded in a field F; (2) Linear layers are outsourced and verified using Freivalds' algorithm;
(3) Inputs of linear layers are encrypted with a pre-computed pseudorandom stream to guarantee privacy. Figure 1 shows two Slalom variants, one to achieve integrity, and one to also achieve privacy.
We focus on feed-forward networks with fully connected layers, convolutions, separable convolutions, pooling layers and activations. Slalom can be extended to other architectures (e.g., residual networks, see Section 4.3).
Slalom with integrity:
  TEE(F, x_1) sends x_1 to S(F).
  S: for i ∈ [1, n]: y_i = x_i W_i; x_{i+1} = σ(y_i); send y_1, ..., y_n to the TEE.
  TEE: for i ∈ [1, n]: assert Freivalds(y_i, x_i, W_i); x_{i+1} = σ(y_i).
  TEE returns y_n.

Slalom with integrity & privacy:
  TEE preprocessing: for i ∈ [1, n]: sample r_i uniformly from F^{m_i}; u_i = r_i W_i.
  Online, for i ∈ [1, n]:
    TEE: x̃_i = x_i + r_i; send x̃_i to S(F).
    S: ỹ_i = x̃_i W_i; send ỹ_i to the TEE.
    TEE: y_i = ỹ_i − u_i; assert Freivalds(y_i, x_i, W_i); x_{i+1} = σ(y_i).
  TEE returns y_n.

Figure 1: The Slalom algorithms for verifiable and private DNN inference. The TEE outsources computation of the n linear layers of a model F to the untrusted host server S. Each linear layer is defined by a matrix W_i of size m_i × n_i and followed by an activation σ. All operations are over a field F. The Freivalds(y_i, x_i, W_i) subroutine performs k repetitions of Freivalds' check (possibly using precomputed values as in Section 3.2). The pseudorandom elements r_i (we omit the PRNG for simplicity) and precomputed values u_i are used only once.
QUANTIZATION
The techniques we use for integrity and privacy (Freivalds' algorithm and stream ciphers) work over a field F. We thus quantize all inputs and weights of a DNN to integers, and embed these integers in the field Z p of integers modulo a prime p (where p is larger than all values computed in a DNN evaluation, so as to avoid wrap-around).
As in (Gupta et al., 2015), we convert floating point numbers x to a fixed-point representation as x̃ = FP(x; l) := round(2^l · x). For a linear layer with kernel W and bias b, we define integer parameters W̃ = FP(W; l) and b̃ = FP(b; 2l). After applying the layer to a quantized input x̃, we scale the output by 2^−l and re-round to an integer.

For efficiency reasons, we perform integer arithmetic using floats (so-called fake quantization), and choose p < 2^24 to avoid loss of precision (we use p = 2^24 − 3). For the models we evaluate, setting l = 8 for all weights and inputs ensures that all DNN values are bounded by 2^24, with less than a 0.5% drop in accuracy (see Table 3). When performing arithmetic modulo p (e.g., for Freivalds' algorithm or when computing on encrypted data), we use double-precision floats, to reduce the number of modular reductions required (details are in Appendix F).
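A small sketch of this fake quantization for a dense layer, under the conventions above (the helper names are ours):

```python
import numpy as np

def fp(x, l):
    return np.round((2.0 ** l) * x)          # x_tilde = round(2^l * x)

def quantized_dense(x, W, b, l=8):
    """Apply a dense layer in fixed point: inputs/weights at scale 2^l,
    bias at scale 2^(2l); rescale the output back to scale 2^l."""
    x_q, W_q, b_q = fp(x, l), fp(W, l), fp(b, 2 * l)
    y = x_q.dot(W_q) + b_q                   # integer arithmetic done in floats
    return np.round(y / (2.0 ** l))          # scale by 2^-l and re-round
```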
VERIFYING COMMON LINEAR OPERATORS
We now describe Slalom's approach to verifying the integrity of outsourced linear layers. We describe these layers in detail in Appendix D and summarize this section's results in Table 2.
Freivalds' Algorithm for Batches. The most direct way of applying Freivalds' algorithm to arbitrary linear layers of a DNN is by exploiting batching. Any linear layer f(x) from inputs of size m to outputs of size n can be represented (with appropriate reshaping) as f(x) = xW for an (often sparse and implicit) m × n matrix W.
For a batch X of size B, we can outsource f(X) and check that the output Y satisfies f(sᵀX) = sᵀY, for a random vector s (we are implicitly applying Freivalds' check to the matrix product XW = Y). As the batch size B grows, the cost of evaluating f is amortized and the total verification cost is |X| + |Y| + cost_f multiplications (i.e., we approach one operation per input and output). Yet, as we show in Section 4.3, while batched verification is worthwhile for processors with larger memory, it is prohibitive in SGX enclaves due to the limited PRM.
For full convolutions (and pointwise convolutions), a direct application of Freivalds' check is worthwhile even for single-element batches. For f(x) = Conv(x, W) and purported output y, we can sample a random vector s of dimension c_out (the number of output channels), and check that Conv(x, Ws) = ys (with appropriate reshaping). For a batch of inputs X, we can also apply Freivalds' algorithm twice to reduce both W and X.

Table 2: Complexity (number of multiplications) for evaluating and verifying linear functions. The layers are "Fully Connected", "Convolution", "Depthwise Convolution" and "Pointwise Convolution", defined in Appendix D. Each layer f has an input x, output y and kernel W. We assume a batch size of B ≥ 1.

Layer        |x|, |y|                       |W|                 cost_f (B = 1)     Batched verification                         With preproc.
FC           h_in, h_out                    h_in · h_out        |x| · |y|          B · (|x| + |y|) + cost_f                     B · (|x| + |y|)
Conv         h · w · c_in, h · w · c_out    k^2 · c_in · c_out  |x| · k^2 · c_out  B · (|x| + |y|) + c_in · c_out + |x| · k^2   B · (|x| + |y|)
Depth. Conv  h · w · c_in, h · w · c_in     k^2 · c_in          |x| · k^2          B · (|x| + |y|) + cost_f                     B · (|x| + |y|)
Point. Conv  h · w · c_in, h · w · c_out    c_in · c_out        |x| · c_out        B · (|x| + |y|) + c_in · c_out               B · (|x| + |y|)
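A sketch of the single-sample convolution check described above; `conv2d` is an assumed primitive (e.g., backed by an untrusted GPU library), and the tensor layout (HWC inputs, k × k × c_in × c_out kernels) is a convention of ours:

```python
import numpy as np

def check_conv(conv2d, x, W, y, p, S, rng=None):
    """x: (h, w, c_in); W: (k, k, c_in, c_out); y: purported (h, w, c_out).
    Collapsing the c_out output channels with a random vector s turns
    verification into one cheap single-channel convolution."""
    rng = rng or np.random.default_rng()
    s = rng.integers(0, S, size=W.shape[-1])
    W_s = W.dot(s) % p                 # kernel reduced to one output channel
    y_s = y.dot(s) % p                 # purported output reduced the same way
    return np.array_equal(conv2d(x, W_s[..., None]) % p, y_s[..., None])
```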
Preprocessing. We now show how to obtain an outsourcing scheme for linear layers that has optimal verification complexity (i.e., |x| + |y| operations) for single-element batches and arbitrary linear operators, while at the same time compressing the DNN's weights (a welcome property in our memory-limited TEE model).
We leverage two facts: (1) DNN weights are fixed at inference time, so part of Freivalds' check can be precomputed;
(2) the TEE can keep secrets from the host S, so the random values s can be re-used across layers or inputs (if we run Freivalds' check n times with the same secret randomness, the soundness errors grows at most by a factor n). Our verification scheme with preprocessing follows from a reformulation of Lemma (2.1):
Lemma 3.1. Let f: F^m → F^n be a linear operator, f(x) := xW. Let s be uniformly random in S^n, for S ⊆ F, and let s̃ := Ws. For any x ∈ F^m, y ∈ F^n, we have Pr[⟨y, s⟩ = ⟨x, s̃⟩ | y ≠ f(x)] ≤ 1/|S|.
The check requires |x| + |y| multiplications, and storage for s and s̃ := Ws (of size |y| and |x|, respectively). To save space, we can reuse the same random s for every layer. The memory footprint of a model is then equal to the size of the inputs of all its linear layers (e.g., for VGG16 the footprint is reduced from 550MB to 36MB, see Table 3).
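A minimal sketch of the preprocessed check of Lemma 3.1 (names are ours):

```python
import numpy as np

def preprocess(W, s, p):
    """Offline, inside the TEE: fold the fixed weights into s_tilde = W s."""
    return W.dot(s) % p

def verify(x, y, s, s_tilde, p):
    """Online check: accept iff <y, s> == <x, s_tilde> (mod p).
    Costs |x| + |y| multiplications; a wrong y passes w.p. at most 1/|S|."""
    return y.dot(s) % p == x.dot(s_tilde) % p
```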
INPUT PRIVACY
To guarantee privacy of the client's inputs, we use precomputed blinding factors for each outsourced computation, as described in Section 2.3. The TEE uses a cryptographic Pseudo Random Number Generator (PRNG) to generate blinding factors. The precomputed "unblinding factors" are encrypted and stored in untrusted memory or disk. In the online phase, the TEE regenerates the blinding factors using the same PRNG seed, and uses the precomputed unblinding factors to decrypt the output of the outsourced linear layer.
This blinding process incurs several overheads: (1) the computations on the untrusted device have to be performed over Z_p, so we use double-precision arithmetic.
(2) The trusted and untrusted processors exchange data in-between each layer, rather than at the end of a full inference pass. (3) The TEE has to efficiently load precomputed unblinding factors, which requires either a large amount of RAM, or a fast access to disk (e.g., a PCIe SSD).
Slalom's security is given by the following results. Formal definitions and proofs are in Appendix B. Let negl be a negligible function (for any integer c > 0 there exists an integer N_c such that for all x > N_c, |negl(x)| < 1/x^c).

Theorem 3.2. Let Slalom be the protocol from Figure 1 (right), where F is an n-layer DNN, and Freivalds' algorithm is repeated k times per layer with random vectors drawn from S ⊆ F. Assume all random values are generated using a secure PRNG with security parameter λ. Then, Slalom is a secure outsourcing scheme for F between a TEE and an untrusted co-processor S with privacy and t-integrity for t = n/|S|^k − negl(λ).

Corollary 3.3. Assuming the TEE is secure (i.e., it acts as a trusted third party hosted by S), Slalom is a secure outsourcing scheme between a remote client C and server S with privacy and t-integrity for t = n/|S|^k − negl(λ).
If the model F is the property of S, the scheme further satisfies model privacy.
EMPIRICAL EVALUATION
We evaluate Slalom on real Intel SGX hardware, on micro-benchmarks and a sample application (ImageNet inference with VGG16, MobileNet and ResNet models). Our aim is to show that, compared to a baseline that runs inference fully in the TEE, outsourcing linear layers increases performance without sacrificing security.
IMPLEMENTATION
As enclaves cannot access most OS features (e.g., multi-threading, disk and driver IO), porting a large framework such as TensorFlow or Intel's MKL-DNN to SGX is hard. Instead, we designed a lightweight C++ library for feed-forward networks based on Eigen, a linear-algebra library which TensorFlow uses as a CPU backend. Our library implements the forward pass of DNNs, with support for dense layers, standard and separable convolutions, pooling, and activations. When run on a native CPU (without SGX), its performance is comparable to TensorFlow on CPU (compiled with AVX). Our code is available at https://github.com/ftramer/slalom.
Slalom performs arithmetic over Z_p, for p = 2^24 − 3. For integrity, we apply Freivalds' check twice to each layer (k = 2), with random values from S = [−2^19, 2^19], to achieve 40 bits of statistical soundness per layer (|S| ≈ 2^20, so two independent repetitions give soundness error ≈ 2^−40; see Appendix F for details on the selection of these parameters). For a 50-layer DNN, S has a chance of less than 1 in 22 billion of fooling the TEE on any incorrect DNN evaluation (a slightly better guarantee than in SafetyNets). For privacy, we use AES-CTR and AES-GCM to generate, encrypt and authenticate blinding factors.
SETUP
We use an Intel Core i7-6700 Skylake 3.40GHz processor with 8GB of RAM, a desktop processor with SGX support. The outsourced computations are performed on a co-located Nvidia TITAN XP GPU. Due to a lack of native internal multi-threading in SGX, we run our TEE in a single CPU thread. We discuss challenges for efficient parallelization in Appendix H. We evaluate Slalom on the following workloads:
• Synthetic benchmarks for matrix products, convolutions and separable convolutions, where we compare the enclave's running time for computing a linear operation to that of solely verifying the result.
• ImageNet (Deng et al., 2009) classification with the VGG16, MobileNet and ResNet models.

MobileNet, a model tailored for low compute devices, serves as a worst-case benchmark for Slalom, as the model's design aggressively minimizes the amount of computation performed per layer. We also consider a "fused" variant of MobileNet with no activation between depthwise and pointwise convolutions. Removing these activations improves convergence and accuracy (Chollet, 2017; Sheng et al., 2018), while also making the network more outsourcing-friendly (i.e., it is possible to verify a separable convolution in a single step).
Our evaluation focuses on throughput (number of forward passes per second). We also discuss energy efficiency in Appendix C to account for hardware differences between our baseline (TEE only) and Slalom (TEE + GPU).
RESULTS
Micro-Benchmarks. Our micro-benchmark suite consists of square matrix products of increasing dimensions, convolutional operations performed by VGG16, and separable convolutions performed by MobileNet. In all cases, the data is pre-loaded inside an enclave, so we only measure the in-enclave execution time. Figure 2 plots the relative speedups of various verification strategies over the cost of computing the linear operation directly. In all cases, the baseline computation is performed in single-precision floating point, and the verification algorithms repeat Freivalds' check so as to attain at least 40 bits of statistical soundness. For square matrices of dimensions up to 2048, verifying an outsourced result is 4× to 8× faster than computing it. For larger matrices, we exceed the limit of SGX's DRAM, so the enclave resorts to expensive paging which drastically reduces performance both for computation and verification.
For convolutions (standard or separable), we achieve large savings with outsourcing if Freivalds' algorithm is applied with preprocessing. The savings get higher as the number of channels increases. Without preprocessing, Freivalds' algorithm results in savings when c_out is large. Due to SGX's small PRM, batched verification is only effective for operators with small memory footprints. As expected, "truly" separable convolutions (with no intermediate non-linearity) are much faster to verify, as they can be viewed as a single linear operator.
Verifiable Inference. Figure 3 shows the throughput of end-to-end forward passes in two neural networks, VGG16 and MobileNet. For integrity, we compare the secure baseline (executing the DNN fully in the enclave) to two variants of the Slalom algorithm in Figure 1. The first (in red) applies Freivalds' algorithm "on-the-fly", while the second, more efficient variant (in orange) pre-computes part of Freivalds' check as described in Section 3.2.
The VGG16 network is much larger (500MB) than SGX's PRM. As a result, there is a large overhead on the forward pass and verification without preprocessing. If the enclave securely stores preprocessed products Wr for all network weights, we drastically reduce the memory footprint and achieve up to a 20.3× increase in throughput. We also ran the lower half of the VGG16 network (without the fully connected layers), a common approach for extracting features for transfer learning or object recognition (Liu et al., 2016). This part fits in the PRM, and we thus achieve higher throughput for in-enclave forward passes and on-the-fly verification.
For MobileNet, we achieve between 3.6× and 6.4× speedups when using Slalom for verifiable inference (for the standard or "fused" model, respectively). The speedups are smaller than for VGG16, as MobileNet performs much fewer operations per layer (verifying a linear layer requires computing at least two multiplications for each input and output; the closer the forward pass gets to that lower bound, the less we can save by outsourcing).
Private Inference. We further benchmark the cost of private DNN inference, where inputs of outsourced linear layers are additionally blinded. Blinding and unblinding each layer's inputs and outputs is costly, especially in SGX due to the extra in-enclave memory reads and writes. Nevertheless, for VGG16 and the fused MobileNet variant without intermediate activations, we achieve respective speedups of 13.0× and 5.0× for private outsourcing (in black in Figure 3), and speedups of 10.7× and 4.1× when also ensuring integrity (in purple). For this benchmark, the precomputed unblinding factors are stored in untrusted memory.
We performed the same experiments on a standard CPU (i.e., without SGX) and find that Slalom's improvements are even higher in non-resource-constrained or multi-threaded environments (see Appendix G-H). Slalom's improvements over the baseline also hold when accounting for energy efficiency (see Section C). fully in enclave (baseline) verify w. preproc integrity + privacy Figure 4: Secure outsourcing of ResNet models with Intel SGX. We compare the baseline of fully executing the DNN in the enclave (blue) to secure outsourcing with integrity (yellow) and privacy and integrity (purple).
Extending Slalom to Deep Residual Networks. The Slalom algorithm in Figure 1 and our evaluations above focus on feed-forward architectures. Extending Slalom to more complex DNNs is quite simple. To illustrate, we consider the family of ResNet models (He et al., 2016), which use residual blocks f(x) = σ(f_1(x) + f_2(x)) that merge two feed-forward "paths" f_1 and f_2 into a final activation σ. To verify integrity of f(x), the TEE simply verifies all linear layers in f_1 and f_2 and computes σ directly. For privacy, the TEE applies the interactive Slalom protocol in Figure 1 (right) in turn to f_1 and f_2, and then computes σ. The results for the privacy-preserving Slalom variant in Figure 4 use a preliminary implementation that performs all required operations, and thus provides meaningful performance numbers, but without properly constructed unblinding factors.
We use the ResNet implementation from Keras (Chollet et al., 2015), which contains a pre-trained 50-layer variant. For this model, we find that our quantization scheme results in less than a 0.5% decrease in accuracy (see Table 3). For other variants (i.e., with 18, 34, 101 and 152 layers) we compute throughput on untrained models. Figure 4 shows benchmarks for different ResNet variants when executed fully in the enclave (our baseline) as well as secure outsourcing with integrity or privacy and integrity. For all models, we achieve 6.6× to 14.4× speedups for verifiable inference and 4.4× to 9.0× speedups when adding privacy.
Comparing results for different models is illustrative of how Slalom's savings scale with model size and architectural design choices. The 18- and 34-layer ResNets use convolutions with 3 × 3 kernels, whereas the larger models mainly use pointwise convolutions. As shown in Table 2, verifying a convolution is about a factor k² · c_out cheaper than computing it, which explains the higher savings for models that use convolutions with large kernel windows. When adding more layers to a model, we expect Slalom's speedup over the baseline to remain constant (e.g., if we duplicate each layer, the baseline computation and the verification should both take twice as long). Yet we find that Slalom's speedups usually increase as layers get added to the ResNet architecture. This is because the deeper ResNet variants are obtained by duplicating layers towards the end of the pipeline, which have the largest number of channels and for which Slalom achieves the highest savings.
CHALLENGES FOR VERIFIABLE AND PRIVATE TRAINING
Our techniques for secure outsourcing of DNN inference might also apply to DNN training. Indeed, a backward pass consists of similar linear operators as a forward pass, and can thus be verified with Freivalds' algorithm. Yet, applying Slalom to DNN training is challenging, as described below, and we leave this problem open.
• Quantizing DNNs for training is harder than for inference, due to large changes in weight magnitudes (Micikevicius et al., 2018). Thus, a more flexible quantization scheme than the one we used would be necessary.
• Because the DNN's weights change during training, the same preprocessed random vectors for Freivalds' check cannot be re-used indefinitely. The most efficient approach would presumably be to train with very large batches that can then be verified simultaneously.
• Finally, the pre-computation techniques we employ for protecting input privacy do not apply for training, as the weights change after every processed batch. Moreover, Slalom does not try to hide the model weights from the untrusted processor, which might be a requirement for private training.
CONCLUSION
This paper has studied the efficiency of evaluating a DNN in a Trusted Execution Environment (TEE) to provide strong integrity and privacy guarantees. We explored new approaches for segmenting a DNN evaluation to securely outsource work from a trusted environment to a faster co-located but untrusted processor.
We designed Slalom, a framework for efficient DNN evaluation that outsources all linear layers from a TEE to a GPU. Slalom leverages Freivalds' algorithm for verifying the correctness of linear operators, and additionally encrypts inputs with precomputed blinding factors to preserve privacy. Slalom can work with any TEE, and we evaluated its performance using Intel SGX on various workloads. For canonical DNNs (VGG16, MobileNet, and ResNet variants), we have shown that Slalom boosts inference throughput without compromising security.
Securely outsourcing matrix products from a TEE has applications in ML beyond DNNs (e.g., non-negative matrix factorization, dimensionality reduction, etc.). We have also explored avenues and challenges towards applying similar techniques to DNN training, an interesting direction for future work. Finally, our general approach of outsourcing work from a TEE to a faster co-processor could be applied to other problems which have fast verification algorithms, e.g., those considered in (McConnell et al., 2011; Zhang et al., 2014).
A DETAILS ON INTEL SGX SECURITY
SGX enclaves isolate execution of a program from all other processes on the same host, including a potentially malicious OS. In particular, enclave memory is fully encrypted and authenticated. When a word is read from memory into a CPU register, a Memory Management Engine handles the decryption.
While SGX covers many software and hardware attack vectors, there is a large and prominent class of side-channel attacks that it explicitly does not address. In the past years, many attacks have been proposed, with the goal of undermining the privacy of enclave computations (Xu et al., 2015; Brasser et al., 2017; Moghimi et al., 2017; Götzfried et al., 2017; Van Bulck et al., 2017). Most of these attacks rely on data-dependent code behavior in an enclave (e.g., branching or memory access) that can be partially observed by other processes running on the same host. These side-channels are a minor concern for the DNN computations considered in this paper, as the standard computations in a DNN are data-oblivious (i.e., the same operations are applied regardless of the input data) (Ohrimenko et al., 2016).
The recent Spectre attacks on speculative execution (Kocher et al., 2018) also prove damaging to SGX (as well as to most other processors), as recently shown (Chen et al., 2018; Dall et al., 2018; Van Bulck et al., 2018). Mitigations for these side-channel attacks are being developed (Shinde et al., 2016; Intel Corp., 2018), but a truly secure solution might require some architectural changes, e.g., as in the proposed Sanctum processor (Costan et al., 2016).
We refrain from formally modeling SGX's (or other TEE's) security in this paper, as Slalom is mostly concerned with outsourcing protocols wherein the TEE acts as a client. We refer the interested reader to (Pass et al., 2017;Fisch et al., 2017;Subramanyan et al., 2017) for different attempts at such formalisms.
B FORMAL SECURITY DEFINITIONS AND PROOFS
We define a secure outsourcing scheme, between a client C and a server S, for a DNN F: X → Y from some family F (e.g., all DNNs of a given size). We first assume that the model F is known to both C and S:
Definition B.1 (Secure Outsourcing Schemes). A secure outsourcing scheme consists of an offline preprocessing algorithm Preproc, as well as an interactive online protocol Outsource⟨C, S⟩, defined as follows:
• st ← Preproc(F, 1^λ): The preprocessing algorithm is run by C and generates some data-independent state st (e.g., cryptographic keys or precomputed values to accelerate the online outsourcing protocol).
• y ∈ Y ∪ {⊥} ← Outsource⟨C(F, x, st), S(F)⟩: The online outsourcing protocol is initiated by C with inputs (F, x, st). At the end of the protocol, C either outputs a value y ∈ Y or aborts (i.e., C outputs ⊥).
The properties that we may require from a secure outsourcing scheme are:
• Correctness: For any F ∈ F and x ∈ X, running st ← Preproc(F, 1^λ) and y ← Outsource⟨C(F, x, st), S(F)⟩ yields y = F(x).
• t-Integrity: For any F ∈ F, input x ∈ X and probabilistic polynomial-time adversary S*, the probability that ỹ ← Outsource⟨C(F, x, st), S*(F)⟩ and ỹ ∉ {F(x), ⊥} is less than t.
• Input privacy: For any F ∈ F, inputs x, x′ ∈ X and probabilistic poly-time adversary S*, the views of S* in Outsource⟨C(F, x, st), S*(F)⟩ and Outsource⟨C(F, x′, st), S*(F)⟩ are computationally indistinguishable.
• Efficiency: The online computation of C in Outsource should be less than the cost for C to evaluate F ∈ F.
Model Privacy. In some applications, a secure outsourcing scheme may also need to hide the model F from either S or C (in which case that party would obviously not take F as input in the above scheme).
Privacy with respect to an adversarial server S* (which Slalom does not provide) is defined as the indistinguishability of S*'s views in Outsource⟨C(F, x, st), S*⟩ and Outsource⟨C(F′, x, st), S*⟩ for any F, F′ ∈ F.
As noted in Section 2.1, a meaningful model-privacy guarantee with respect to C requires that S first commit to a specific DNN F, and then convince C that her outputs were produced with the same model as all other clients'. We refer the reader to Canetti et al. (2002) for formal definitions of such commit-and-prove schemes, and to Tramèr et al. (2017), who show how to trivially instantiate them using a TEE.
Proof of Theorem 3.2. Let st ← Preproc and Outsource⟨TEE(F, x, st), S⟩ be the outsourcing scheme defined in Figure 1 (right). We assume that all random values sampled by the TEE are produced by a cryptographically secure pseudorandom number generator (PRNG) (with elements in S ⊆ F for the integrity-check vectors s used in Freivalds' algorithm, and in F for the blinding vectors r_i).
We first consider integrity. Assume that the scheme is run with input x_1 and that the TEE outputs y_n. We will bound Pr[y_n ≠ F(x_1) | y_n ≠ ⊥]. By the security of the PRNG, we can replace the vectors s used in Freivalds' algorithm by truly uniformly random values in S ⊆ F, via a simple hybrid argument. For the i-th linear layer, with operator W_i, input x_i and purported output y_i, we then have that y_i ≠ x_i W_i yet the check passes with probability at most 1/|S|^k. By a simple union bound, we thus have that Pr[y_n ≠ F(x_1)] ≤ n/|S|^k + negl(λ). Note that this bound holds even if the same (secret) random values s are re-used across layers.
For privacy, consider the views of an adversary S* when Slalom is run with inputs x_1 and x′_1. Again, by the security of the PRNG, we consider a hybrid protocol where we replace the pre-computed blinding vectors r_i by truly uniformly random values in F. In this hybrid protocol, x̃_i = x_i + r_i is simply a "one-time-pad" encryption of x_i over the field F, so S*'s views in both executions of the hybrid protocol are equal (information theoretically). Thus, S*'s views in both executions of the original protocol are computationally indistinguishable.
Proof of Corollary 3.3. The outsourcing protocol between the remote client C and the server S hosting the TEE is simply defined as follows (we assume the model belongs to S):
• st ← Preproc(): C and the TEE set up a secure authenticated communication channel, using the TEE's remote attestation property. The TEE receives the model F from S and initializes the Slalom protocol.
• Outsource⟨C(x, st), S(F)⟩:
– C sends x to the TEE over the secure channel.
– The TEE securely computes y = F(x) using Slalom.
– The TEE sends y (and a publicly verifiable commitment to F) to C over the secure channel.
If the TEE is secure (i.e., it acts as a trusted third party hosted by S), then the result follows.
C PERFORMANCE COMPARISON OF DNN OUTSOURCING SCHEMES
We provide a brief overview of the outsourcing approaches compared in Table 1. Our baseline runs a DNN in a TEE (a single-threaded Intel SGX enclave) and can provide all the security guarantees of an ML outsourcing scheme. On a high-end GPU (an Nvidia TITAN XP), we achieve over 50× higher throughput but no security. For example, for MobileNet, the enclave evaluates 16 images/sec and the GPU 900 images/sec (56× higher).
SafetyNets (Ghodsi et al., 2017) and Gazelle (Juvekar et al., 2018) are two representative works that achieve respectively integrity and privacy using purely cryptographic approaches (without a TEE). SafetyNets does not hide the model from either party, while Gazelle leaks some architectural details to the client. The cryptographic techniques used by these systems incur large computation and communication overheads in practice. The largest model evaluated by SafetyNets is a 4-layer TIMIT model with quadratic activations which runs at about 13 images/sec (on a notebook CPU). In our baseline enclave, the same model runs at over 3,500 images/sec. The largest model evaluated by Gazelle is an 8-layer CIFAR10 model. In the enclave, we can evaluate 450 images/sec whereas Gazelle evaluates a single image in 3.5 sec with 300MB of communication between client and server.
A Note on Energy Efficiency. When comparing approaches with different hardware (e.g., our single-core CPU baseline versus Slalom which also uses a GPU), throughput alone is not the fairest metric. E.g., the baseline's throughput could also be increased by adding more SGX CPUs. A more accurate comparison considers the energy efficiency of a particular approach, a more direct measure of the recurrent costs to the server S.
For example, when evaluating MobileNet or VGG16, our GPU draws 85W of power, whereas our baseline SGX CPU draws 30W. As noted above, the GPU also achieves more than 50× higher throughput, and thus is at least 18× more energy efficient (e.g., measured in Joules per image) than the enclave.
For Slalom, we must consider the cost of running both the enclave and the GPU. In our evaluations, the outsourced computations on the GPU account for at most 10% of the total running time of Slalom (i.e., the integrity checks and data encryption/decryption in the enclave are the main bottleneck). Thus, the power consumption attributed to Slalom is roughly 10% · 85W + 90% · 30W = 35.5W. Note that when not in use by Slalom, the trusted CPU or untrusted GPU can be used by other tasks running on the server. As Slalom achieves 4×-20× higher throughput than our baseline for the tasks we evaluate, it is also about 3.4×-17.1× more energy efficient.
D NOTATION FOR STANDARD LINEAR OPERATORS
Below we describe some common linear operators used in deep neural networks. For simplicity, we omit additive bias terms, and assume that convolutional operators preserve the spatial height and width of their inputs. Our techniques easily extend to convolutions with arbitrary strides, paddings, and window sizes.
For a fully-connected layer f_FC, the kernel W has dimension (h_in × h_out). For an input x of dimension h_in, we have f_FC(x) = x W. The cost of the layer is h_in · h_out multiplications.
A convolutional layer has a kernel W of size (k × k × c_in × c_out). On an input x of size (h × w × c_in), f_conv(x) = Conv(x; W) produces an output of size (h × w × c_out). A convolution can be seen as the combination of two linear operators: a "patch-extraction" process that transforms the input x into an intermediate input x′ of dimension (h · w, k² · c_in) by extracting k × k patches, followed by a matrix multiplication with W. The cost of this layer is thus k² · h · w · c_in · c_out multiplications.
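For reference, the multiplication counts above are easy to tabulate; the helpers below are a hypothetical illustration of these formulas (the example layer dimensions are assumed for concreteness).

```python
def fc_cost(h_in, h_out):
    """Multiplications in a fully-connected layer f_FC(x) = x W."""
    return h_in * h_out

def conv_cost(h, w, k, c_in, c_out):
    """Multiplications in a k x k convolution, via the patch-extraction view:
    an (h*w, k^2*c_in) intermediate input times the (k^2*c_in, c_out) kernel."""
    return k * k * h * w * c_in * c_out

# e.g., a 3x3 convolution over a 224x224x3 input with 64 output channels
print(conv_cost(224, 224, 3, 3, 64))  # 86,704,128 multiplications
```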
A separable convolution has two kernels, W_1 of size (k × k × c_in) and W_2 of size (c_in × c_out). On an input x of size (h × w × c_in), f_sep-conv(x) produces an output of size (h × w × c_out), by applying a depthwise convolution f_dp-conv(x) with kernel W_1 followed by a pointwise convolution f_pt-conv(x) with kernel W_2. The depthwise convolution consists of c_in independent convolutions with filters of size k × k × 1 × 1, applied to a single input channel, which requires k² · h · w · c_in multiplications. A pointwise convolution is simply a matrix product with an input of size (h · w) × c_in, and thus requires h · w · c_in · c_out multiplications.

E NEURAL NETWORK DETAILS

Table 3 provides details about the DNNs we use in our evaluation (all pre-trained models are taken from Keras (Chollet et al., 2015)). We report top-1 and top-5 accuracy on ImageNet with and without the simple quantization scheme described in Section 3.1. Quantization results in at most a 0.5% drop in top-1 and top-5 accuracy. More elaborate quantization schemes exist (e.g., Micikevicius et al. (2018)) that we have not experimented with in this work.

We report the number of model parameters, which is relevant to the memory constraints of TEEs such as Intel SGX. We also list the total size of the inputs and outputs of all the model's linear layers, which impact the amount of communication between trusted and untrusted co-processors in Slalom, as well as the amount of data stored in the TEE when using Freivalds' algorithm with preprocessing.

F MODULAR ARITHMETIC WITH FLOATING-POINT OPERATIONS

In this section, we briefly describe how Slalom performs modular arithmetic over a field Z_p in the TEE, while leveraging standard floating-point operations to maximize computational efficiency. The main computations in the TEE are inner products over Z_p for Freivalds' check (a matrix product is itself a set of inner products).
Our quantization scheme (see Section 3.1) ensures that all DNN values can be represented in Z_p, for p < 2^24, which fits in a standard float. To compute inner products, we first cast elements to doubles (as a single multiplication in Z_p would exceed the range of integers exactly representable as floats). Single- or double-precision floats are preferable to integer types on Intel architectures due to the availability of much more efficient SIMD instructions, at a minor reduction in the range of exactly representable integers.
In our evaluation, we target a soundness error of 2^-40 for each layer. This leads to a tradeoff between the number of repetitions k of Freivalds' check and the size of the set S from which we draw random values. One check with |S| = 2^40 is problematic, as multiplying elements in Z_p and S can exceed the range of integers exactly representable as doubles (2^53). With k = 2 repetitions, we can set S = [−2^19, 2^19]. Multiplications are then bounded by 2^(24+19) = 2^43, and we can accumulate 2^10 terms in the inner product before needing a modular reduction. In practice, we find that increasing k further (and thus reducing |S|) is not worthwhile, as the cost of performing more inner products trumps the savings from reducing the number of modular reductions.
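The sketch below illustrates this arithmetic in NumPy: two repetitions of Freivalds' check over Z_p, with all inner products carried out in double precision and reduced before the 2^53 limit is reached. The modulus and chunk sizes are assumptions consistent with the bounds above, not the exact constants of our implementation.

```python
import numpy as np

P = 2**24 - 3   # an assumed modulus below 2^24 (illustrative)

def mod_dot(a, b, chunk, p=P):
    """Inner product over Z_p using double-precision floats; `chunk` bounds how many
    terms are accumulated before a modular reduction (so partial sums stay < 2^53)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    acc = 0.0
    for i in range(0, a.size, chunk):
        acc = (acc + (float(np.dot(a[i:i + chunk], b[i:i + chunk])) % p)) % p
    return acc

def freivalds_check(x, W, y, k=2, p=P):
    """k = 2 repetitions of Freivalds' check that y = x W (mod p), drawing random
    vectors from S = [-2^19, 2^19] for a per-layer soundness error of about 2^-40."""
    for _ in range(k):
        s = np.random.randint(-2**19, 2**19 + 1, size=W.shape[1])
        # entries of W are < 2^24 and of s at most 2^19: 2^10 terms fit below 2^53
        Ws = np.array([mod_dot(row, s, chunk=2**10, p=p) for row in W])
        for xi, yi in zip(x, y):
            lhs = mod_dot(yi, s, chunk=2**10, p=p)
            # x and (W s mod p) are both < 2^24, so reduce every 2^5 terms instead
            rhs = mod_dot(xi, Ws, chunk=2**5, p=p)
            if lhs != rhs:
                return False
    return True

rng = np.random.default_rng(1)
x = rng.integers(0, P, (2, 512)); W = rng.integers(0, P, (512, 64))
assert freivalds_check(x, W, (x @ W) % P)
```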
G RESULTS ON A STANDARD CPU
For completeness, and to assess how our outsourcing scheme fares in an environment devoid of Intel SGX's performance quirks, we rerun the evaluations in Section 4 on the same CPU but outside of SGX's enclave mode. Figure 5 shows the results of the micro-benchmarks for matrix multiplication, convolutions, and separable convolutions. In all cases, verifying a computation becomes 1-2 orders of magnitude faster than computing it as the outer dimension grows. Compared to the SGX benchmarks, we also see a much better viability of batched verification (we have not optimized batched verification much, as it is inherently slow on SGX; it is likely that these numbers could be improved significantly, to approach those of verification with preprocessing).

Figure 6 shows benchmarks for VGG16 and MobileNet on a single core with either direct computation or various secure outsourcing strategies. For integrity alone, we achieve savings up to 8.9× and 19.5× for MobileNet and VGG16, respectively. Even without storing any secrets in the enclave, we obtain good speedups using batched verification. As noted above, it is likely that the batched results could be further improved. With additional blinding to preserve privacy, we achieve speedups of 3.9× and 8.1× for MobileNet and VGG16, respectively.

Figure 6: Inference with integrity and/or privacy on an untrusted CPU. We compare the baseline inference throughput (blue) to that obtained with "on-the-fly" integrity checks (red); batched integrity checks (green); integrity checks with precomputed secrets (yellow); privacy only (black); and privacy and integrity (purple). The fused MobileNet model has no intermediate activation for separable convolutions.
H PARALLELIZATION
Our experiments on SGX in Section 4 were performed using a single execution thread, as SGX enclaves do not have the ability to create threads. We have also experimented with techniques for achieving parallelism in SGX, both for standard computations and outsourced ones, but with little success.
To optimize for throughput, a simple approach is to run multiple forward passes simultaneously. On a standard CPU, this form of "outer-parallelism" achieves close to linear scaling as we increase the number of threads from 1 to 4 on our quad-core machine. With SGX, however, we did not manage to achieve any parallel speedup for VGG16, whether for direct computation or for verifying outsourced results, presumably because each independent thread requires extra memory that quickly exceeds the PRM limit. For the smaller MobileNet model, we get less than a 1.5× speedup using up to 4 threads, for direct computation and outsourced verification alike.
DNNs typically also make use of intra-operation parallelism, i.e., computing the output of a given layer using multiple threads. Our DNN library currently does not support intra-operation parallelism, but implementing a dedicated thread pool for SGX could be an interesting extension for future work. Instead, we evaluate the potential benefits of intra-op parallelism on a standard untrusted CPU, for our matrix-product and convolution benchmarks. We make use of Eigen's internal multi-threading support to speed up these operations, and custom OpenMP code to parallelize dot products, as Eigen does not do this on its own. Figure 7 shows the results using 4 threads. For convolutions, we have currently only implemented multi-threading for the verification with preprocessing (which requires only standard dot products). Perhaps surprisingly, we find that multi-threading increases the gap between direct and verified computations of matrix products, probably because dot products are extremely easy to parallelize efficiently (compared to full convolutions). We also obtain close to linear speedups for verifiable separable convolutions, but omit the results as we currently do not have an implementation of multi-threaded direct computation for depthwise convolutions, which renders the comparison unfair. Due to the various memory-access overheads in SGX, it is unclear whether similar speedups could be obtained by using intra-op parallelism in an enclave, but this is an avenue worth exploring.
* With an offline preprocessing phase.
Classification with VGG16 (Simonyan & Zisserman, 2014), MobileNet (Howard et al., 2017), and ResNet (He et al., 2016) models (with fused Batch Normalization layers when applicable).
Figure 2: Micro benchmarks on Intel SGX. We plot the relative speedup of verifying the result of a linear operator compared to computing it entirely in the enclave. The dotted line shows the throughput obtained for a direct computation. "Fused" separable convolutions contain no intermediate activation.
Figure 5: Micro benchmarks on an untrusted CPU. For three different linear operators, we plot the relative speedup of verifying a result compared to computing it. The dotted line in each plot shows the throughput obtained for computing the operation.
Figure 7: Multi-threaded micro benchmarks on an untrusted CPU.
Table 3: Details of models used in our evaluation. Accuracies are computed on the ImageNet validation set. Pre-trained models are from Keras (Chollet et al., 2015).

| Model | Top 1 (Acc.) | Top 5 (Acc.) | Top 1 (Quantized) | Top 5 (Quantized) | Layers | Parameters (M) | Size of layer inputs/outputs (M) |
|---|---|---|---|---|---|---|---|
| VGG16 | 71.0 | 90.0 | 70.6 | 89.5 | 16 | 138.4 | 9.1 / 13.6 |
| VGG16 (no top) | - | - | - | - | 13 | 14.7 | 9.1 / 13.5 |
| MobileNet | 70.7 | 89.6 | 70.5 | 89.5 | 28 | 4.2 | 5.5 / 5.0 |
| MobileNet (fused) | - | - | - | - | 15 | 4.2 | 3.6 / 3.1 |
| ResNet 50 | 76.9 | 92.4 | 76.4 | 92.2 | 50 | 25.5 | 10.0 / 10.4 |
For this zero-knowledge guarantee to be meaningful in our context, S would first commit to a specific DNN, and then convince C that this DNN was correctly evaluated on her input, without revealing anything else about the DNN.
SGX has recently come under several side-channel attacks (Van Bulck et al., 2018). Intel is making firmware and hardware updates to SGX with the goal of preventing these attacks. In time, it is likely that SGX can be made sufficiently secure to satisfy the requirements needed for Slalom. Even if not, other enclave architectures are available, such as Sanctum for RISC-V or possibly a separate co-processor for security operations.
Tiago Alves and Don Felton. TrustZone: Integrated hardware and software security-enabling trusted computing in embedded systems. Technical report, ARM, 2004.

Ferdinand Brasser, Urs Müller, Alexandra Dmitrienko, Kari Kostiainen, Srdjan Capkun, and Ahmad-Reza Sadeghi. Software grand exposure: SGX cache attacks are practical. In USENIX Workshop on Offensive Technologies, 2017.

Ran Canetti, Yehuda Lindell, Rafail Ostrovsky, and Amit Sahai. Universally composable two-party and multi-party secure computation. In Proceedings of the thirty-fourth annual ACM Symposium on Theory of Computing, pp. 494-503. ACM, 2002.

Guoxing Chen, Sanchuan Chen, Yuan Xiao, Yinqian Zhang, Zhiqiang Lin, and Ten H. Lai. SGXPECTRE attacks: Leaking enclave secrets via speculative execution. arXiv preprint arXiv:1802.09085, 2018.

Sanchuan Chen, Xiaokuan Zhang, Michael K. Reiter, and Yinqian Zhang. Detecting privileged side-channel attacks in shielded execution with Déjà Vu. In ACM Asia Conference on Computer and Communications Security (ASIACCS), pp. 7-18. ACM, 2017.

Raymond Cheng, Fan Zhang, Jernej Kos, Warren He, Nicholas Hynes, Noah Johnson, Ari Juels, Andrew Miller, and Dawn Song. Ekiden: A platform for confidentiality-preserving, trustworthy, and performant smart contract execution. arXiv preprint arXiv:1804.05141, 2018.

François Chollet et al. Keras. https://keras.io, 2015.

François Chollet. Xception: Deep learning with depthwise separable convolutions. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Victor Costan and Srinivas Devadas. Intel SGX explained. https://eprint.iacr.org/2016/086, 2016.

Victor Costan, Ilia Lebedev, and Srinivas Devadas. Sanctum: Minimal hardware extensions for strong software isolation. In USENIX Security Symposium, 2016.

Fergus Dall, Gabrielle De Micheli, Thomas Eisenbarth, Daniel Genkin, Nadia Heninger, Ahmad Moghimi, and Yuval Yarom. CacheQuote: Efficiently recovering long-term secrets of SGX EPID via cache attacks. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2018(2):171-191, 2018.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248-255. IEEE, 2009.

Dario Fiore and Rosario Gennaro. Publicly verifiable delegation of large polynomials and matrix computations, with applications. In Proceedings of the 2012 ACM Conference on Computer and Communications Security, pp. 501-512. ACM, 2012.

Ben Fisch, Dhinakaran Vinayagamurthy, Dan Boneh, and Sergey Gorbunov. Iron: Functional encryption using Intel SGX. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 765-782. ACM, 2017.

Rusins Freivalds. Probabilistic machines can use less running time. In IFIP Congress, volume 839, pp. 842, 1977.

Zahra Ghodsi, Tianyu Gu, and Siddharth Garg. SafetyNets: Verifiable execution of deep neural networks on an untrusted cloud. In Advances in Neural Information Processing Systems (NIPS), pp. 4675-4684, 2017.

Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin Lauter, Michael Naehrig, and John Wernsing. CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy. In International Conference on Machine Learning (ICML), pp. 201-210, 2016.

Johannes Götzfried, Moritz Eckert, Sebastian Schinzel, and Tilo Müller. Cache attacks on Intel SGX. In European Workshop on Systems Security, pp. 2. ACM, 2017.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning (ICML), pp. 1737-1746, 2015.

Lucjan Hanzlik, Yang Zhang, Kathrin Grosse, Ahmed Salem, Max Augustin, Michael Backes, and Mario Fritz. MLCapsule: Guarded offline deployment of machine learning as a service. arXiv preprint arXiv:1808.00590, 2018.

Danny Harnik and Eliad Tsfadia. Impressions of Intel SGX performance. https://medium.com/@danny_harnik/impressions-of-intel-sgx-performance-22442093595a, 2017. Accessed on May 17, 2018.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.

Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

Tyler Hunt, Congzheng Song, Reza Shokri, Vitaly Shmatikov, and Emmett Witchel. Chiron: Privacy-preserving machine learning as a service. arXiv preprint arXiv:1803.05961, 2018.

Intel Corp. Intel Software Guard Extensions Evaluation SDK. https://software.intel.com/en-us/sgx-sdk, 2015.

Intel Corp. Intel Software Guard Extensions (SGX) SW development guidance for potential bounds check bypass (CVE-2017-5753) side channel exploits. https://software.intel.com/sites/default/files/180204_SGX_SDK_Developer_Guidance_v1.0.pdf, 2018.

Chiraag Juvekar, Vinod Vaikuntanathan, and Anantha Chandrakasan. Gazelle: A low latency framework for secure neural network inference. arXiv preprint arXiv:1801.05507, 2018.

Paul Kocher, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, Moritz Lipp, Stefan Mangard, Thomas Prescher, Michael Schwarz, and Yuval Yarom. Spectre attacks: Exploiting speculative execution. arXiv preprint arXiv:1801.01203, 2018.

Sangho Lee, Ming-Wei Shih, Prasun Gera, Taesoo Kim, Hyesoon Kim, and Marcus Peinado. Inferring fine-grained control flow inside SGX enclaves with branch shadowing. In USENIX Security Symposium, pp. 16-18, 2017.

Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision (ECCV), pp. 21-37. Springer, 2016.

Ross M. McConnell, Kurt Mehlhorn, Stefan Näher, and Pascal Schweitzer. Certifying algorithms. Computer Science Review, 5(2):119-161, 2011.

Frank McKeen, Ilya Alex, Alex Berenzon, Carlos Rozas, Hisham Shafi, Vedvyas Shanbhogue, and Uday Savagaonkar. Innovative instructions and software model for isolated execution. In International Workshop on Hardware and Architectural Support for Security and Privacy (HASP), 2013.

Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaev, Ganesh Venkatesh, et al. Mixed precision training. In International Conference on Learning Representations (ICLR), 2018.

Ahmad Moghimi, Gorka Irazoqui, and Thomas Eisenbarth. CacheZoom: How SGX amplifies the power of cache attacks. In International Conference on Cryptographic Hardware and Embedded Systems, pp. 69-90. Springer, 2017.

Payman Mohassel and Yupeng Zhang. SecureML: A system for scalable privacy-preserving machine learning. In IEEE Symposium on Security and Privacy, pp. 19-38. IEEE, 2017.

Olga Ohrimenko, Felix Schuster, Cédric Fournet, Aastha Mehta, Sebastian Nowozin, Kapil Vaswani, and Manuel Costa. Oblivious multi-party machine learning on trusted processors. In USENIX Security Symposium, 2016.

Meni Orenbach, Pavel Lifshits, Marina Minkin, and Mark Silberstein. Eleos: Exitless OS services for SGX enclaves. In Proceedings of the Twelfth European Conference on Computer Systems, pp. 238-253. ACM, 2017.

Rafael Pass, Elaine Shi, and Florian Tramèr. Formal abstractions for attested execution secure processors. In EUROCRYPT'17, 2017.

Tao Sheng, Chen Feng, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, and Mickey Aleksic. A quantization-friendly separable convolution for MobileNets. arXiv preprint arXiv:1803.08607, 2018.

Ming-Wei Shih, Sangho Lee, Taesoo Kim, and Marcus Peinado. T-SGX: Eradicating controlled-channel attacks against enclave programs. In Network and Distributed System Security Symposium (NDSS), 2017.

Shweta Shinde, Zheng Leong Chua, Viswesh Narayanan, and Prateek Saxena. Preventing page faults from telling your secrets. In ACM Asia Conference on Computer and Communications Security (ASIACCS), pp. 317-328. ACM, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Ion Stoica, Dawn Song, Raluca Ada Popa, David Patterson, Michael W. Mahoney, Randy Katz, Anthony D. Joseph, Michael Jordan, Joseph M. Hellerstein, Joseph E. Gonzalez, et al. A Berkeley view of systems challenges for AI. arXiv preprint arXiv:1712.05855, 2017.

Pramod Subramanyan, Rohit Sinha, Ilia Lebedev, Srinivas Devadas, and Sanjit A. Seshia. A formal foundation for secure remote execution of enclaves. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 2435-2450. ACM, 2017.

Justin Thaler. Time-optimal interactive proofs for circuit evaluation. In Advances in Cryptology (CRYPTO 2013), pp. 71-89. Springer, 2013.

Florian Tramèr, Fan Zhang, Huang Lin, Jean-Pierre Hubaux, Ari Juels, and Elaine Shi. Sealed-Glass Proofs: Using transparent enclaves to prove and sell knowledge. In IEEE European Symposium on Security and Privacy, 2017.

Jo Van Bulck, Nico Weichbrodt, Rüdiger Kapitza, Frank Piessens, and Raoul Strackx. Telling your secrets without page faults: Stealthy page table-based attacks on enclaved execution. In USENIX Security Symposium, 2017.

Jo Van Bulck, Marina Minkin, Ofir Weisse, Daniel Genkin, Baris Kasikci, Frank Piessens, Mark Silberstein, Thomas F. Wenisch, Yuval Yarom, and Raoul Strackx. Foreshadow: Extracting the keys to the Intel SGX kingdom with transient out-of-order execution. In Proceedings of the 27th USENIX Security Symposium, 2018.

Riad S. Wahby, Max Howald, Siddharth Garg, Abhi Shelat, and Michael Walfish. Verifiable ASICs. In IEEE Symposium on Security and Privacy, pp. 759-778. IEEE, 2016.

Riad S. Wahby, Ye Ji, Andrew J. Blumberg, Abhi Shelat, Justin Thaler, Michael Walfish, and Thomas Wies. Full accounting for verifiable outsourcing. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 2071-2086. ACM, 2017.

Yuanzhong Xu, Weidong Cui, and Marcus Peinado. Controlled-channel attacks: Deterministic side channels for untrusted operating systems. In S&P'15, pp. 640-656. IEEE, 2015.

Yupeng Zhang, Charalampos Papamanthou, and Jonathan Katz. Alitheia: Towards practical verifiable graph processing. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pp. 856-867. ACM, 2014. |
202,573,030 | NEURAL OBLIVIOUS DECISION ENSEMBLES FOR DEEP LEARNING ON TABULAR DATA | Nowadays, deep neural networks (DNNs) have become the main instrument for machine learning tasks within a wide range of domains, including vision, NLP, and speech. Meanwhile, in an important case of heterogenous tabular data, the advantage of DNNs over shallow counterparts remains questionable. In particular, there is no sufficient evidence that deep learning machinery allows constructing methods that outperform gradient boosting decision trees (GBDT), which are often the top choice for tabular problems. In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture, designed to work with any tabular data. In a nutshell, the proposed NODE architecture generalizes ensembles of oblivious decision trees, but benefits from both end-to-end gradient-based optimization and the power of multi-layer hierarchical representation learning. With an extensive experimental comparison to the leading GBDT packages on a large number of tabular datasets, we demonstrate the advantage of the proposed NODE architecture, which outperforms the competitors on most of the tasks. We open-source the PyTorch implementation of NODE and believe that it will become a universal framework for machine learning on tabular data. | [
2780493,
153313159
] | NEURAL OBLIVIOUS DECISION ENSEMBLES FOR DEEP LEARNING ON TABULAR DATA
Sergei Popov
Yandex National Research University Higher School of Economics
Yandex Lomonosov Moscow State University
Stanislav Morozov [email protected]
Yandex National Research University Higher School of Economics
Yandex Lomonosov Moscow State University
Artem Babenko [email protected]
Yandex National Research University Higher School of Economics
Yandex Lomonosov Moscow State University
NEURAL OBLIVIOUS DECISION ENSEMBLES FOR DEEP LEARNING ON TABULAR DATA
Nowadays, deep neural networks (DNNs) have become the main instrument for machine learning tasks within a wide range of domains, including vision, NLP, and speech. Meanwhile, in an important case of heterogenous tabular data, the advantage of DNNs over shallow counterparts remains questionable. In particular, there is no sufficient evidence that deep learning machinery allows constructing methods that outperform gradient boosting decision trees (GBDT), which are often the top choice for tabular problems. In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture, designed to work with any tabular data. In a nutshell, the proposed NODE architecture generalizes ensembles of oblivious decision trees, but benefits from both end-to-end gradient-based optimization and the power of multi-layer hierarchical representation learning. With an extensive experimental comparison to the leading GBDT packages on a large number of tabular datasets, we demonstrate the advantage of the proposed NODE architecture, which outperforms the competitors on most of the tasks. We open-source the PyTorch implementation of NODE and believe that it will become a universal framework for machine learning on tabular data.
INTRODUCTION
The recent rise of deep neural networks (DNN) resulted in a substantial breakthrough for a large number of machine learning tasks in computer vision, natural language processing, speech recognition, reinforcement learning (Goodfellow et al., 2016). Both gradient-based optimization via backpropagation (Rumelhart et al., 1985) and hierarchical representation learning appear to be crucial in increasing the performance of machine learning for these problems by a large margin.
While the superiority of deep architectures in these domains is undoubtful, machine learning for tabular data still did not fully benefit from the DNN power. Namely, the state-of-the-art performance in problems with tabular heterogeneous data is often achieved by "shallow" models, such as gradient boosted decision trees (GBDT) (Friedman, 2001;Chen & Guestrin, 2016;Ke et al., 2017;Prokhorenkova et al., 2018). While the importance of deep learning on tabular data is recognized by the ML community, and many works address this problem (Zhou & Feng, 2017;Miller et al., 2017;Lay et al., 2018;Feng et al., 2018;Ke et al., 2018), the proposed DNN approaches do not consistently outperform the state-of-the-art shallow models by a notable margin. In particular, to the best of our knowledge, there is still no universal DNN approach that was shown to systematically outperform the leading GBDT packages (e.g., XGBoost (Chen & Guestrin, 2016)). As additional evidence, a large number of Kaggle ML competitions with tabular data are still won by the shallow GBDT methods (Harasymiv, 2015). Overall, at the moment, there is no dominant deep learning solution for tabular data problems, and we aim to reduce this gap by our paper.
We introduce Neural Oblivious Decision Ensembles (NODE), a new DNN architecture, designed to work with tabular problems. The NODE architecture is partially inspired by the recent CatBoost package (Prokhorenkova et al., 2018), which was shown to provide state-of-the-art performance on a large number of tabular datasets. In a nutshell, CatBoost performs gradient boosting on oblivious decision trees (decision tables) (Kohavi, 1994; Lou & Obukhov, 2017), which makes inference very efficient, and the method is quite resistant to overfitting. In its essence, the proposed NODE architecture generalizes CatBoost, making the splitting feature choice and decision tree routing differentiable. As a result, the NODE architecture is fully differentiable and could be incorporated into any computational graph of existing DL packages, such as TensorFlow or PyTorch. Furthermore, NODE allows constructing multi-layer architectures, which resemble a "deep" GBDT trained end-to-end, which has not been proposed before. Besides the usage of oblivious decision tables, another important design choice is the recent entmax transformation (Peters et al., 2019), which effectively performs a "soft" splitting feature choice in decision trees inside the NODE architecture. As discussed in the following sections, these design choices are critical to obtain state-of-the-art performance. In a large number of experiments, we compare the proposed approach with the leading GBDT implementations with tuned hyperparameters and demonstrate that NODE outperforms competitors consistently on most of the datasets.
Overall, the main contributions of our paper can be summarized as follows:
1. We introduce a new DNN architecture for machine learning on tabular data. To the best of our knowledge, our method is the first successful example of deep architectures that substantially outperforms leading GBDT packages on tabular data.
2. Via an extensive experimental evaluation on a large number of datasets, we show that the proposed NODE architecture outperforms existing GBDT implementations.
3. The PyTorch implementation of NODE is available online 1.
The rest of the paper is organized as follows. In Section 2 we review prior work relevant to our method. The proposed Neural Oblivious Decision Ensembles architecture is described in Section 3 and experimentally evaluated in Section 4. Section 5 concludes the paper.
RELATED WORK
In this section, we briefly review the main ideas from prior work that are relevant to our method.
The state-of-the-art for tabular data. Ensembles of decision trees, such as GBDT (Friedman, 2001) or random forests (Barandiaran, 1998), are currently the top choice for tabular data problems. Currently, there are several leading GBDT packages, such as XGBoost (Chen & Guestrin, 2016), LightGBM (Ke et al., 2017), CatBoost (Prokhorenkova et al., 2018), which are widely used by both academicians and ML practitioners. While these implementations vary in details, on most of the tasks their performances do not differ much (Prokhorenkova et al., 2018;Anghel et al.). The most important distinction of CatBoost is that it uses oblivious decision trees (ODTs) as weak learners. As ODTs are also an important ingredient of our NODE architecture, we discuss them below.
Oblivious Decision Trees. An oblivious decision tree is a regular tree of depth d that is constrained to use the same splitting feature and splitting threshold in all internal nodes of the same depth. This constraint essentially allows representing an ODT as a table with 2^d entries, corresponding to all possible combinations of d splits (Lou & Obukhov, 2017). Of course, due to the constraints above, ODTs are significantly weaker learners compared to unconstrained decision trees. However, when used in an ensemble, such trees are less prone to overfitting, which was shown to synergize well with gradient boosting (Prokhorenkova et al., 2018). Furthermore, the inference in ODTs is very efficient: one can compute d independent binary splits in parallel and return the appropriate table entry. In contrast, non-oblivious decision trees require evaluating d splits sequentially.
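As a concrete illustration of this table view (names and shapes are hypothetical), inference in a classic ODT reduces to computing d bits in parallel and a single lookup:

```python
import numpy as np

def odt_predict(x, features, thresholds, responses):
    """Classic ODT of depth d as a decision table: evaluate the d shared splits
    in parallel, interpret them as a binary code, and index the 2^d-entry table."""
    bits = (x[features] >= thresholds).astype(int)            # d parallel splits
    index = int(bits @ (2 ** np.arange(len(bits))[::-1]))     # binary code -> entry
    return responses[index]

# depth-3 tree over a 5-feature input: 3 split features, 3 thresholds, 2^3 = 8 leaves
x = np.array([0.3, 1.2, -0.5, 2.0, 0.1])
print(odt_predict(x, features=np.array([1, 3, 0]),
                  thresholds=np.array([1.0, 1.5, 0.0]),
                  responses=np.arange(8.0)))                  # -> 7.0
```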
Differentiable trees. The significant drawback of tree-based approaches is that they usually do not allow end-to-end optimization and employ greedy, local optimization procedures for tree construction. Thus, they cannot be used as a component for pipelines, trained in an end-to-end fashion. To address this issue, several works (Kontschieder et al., 2015;Lay et al., 2018) propose to "soften" decision functions in the internal tree nodes to make the overall tree function and tree routing differentiable. In our work, we advocate the usage of the recent entmax transformation (Peters et al., 2019) to "soften" decision trees. We confirm its advantages over the previously proposed approaches in the experimental section.
Entmax. The key building block of our model is the entmax transformation (Peters et al., 2019), which maps a vector of real-valued scores to a discrete probability distribution. This transformation generalizes the traditional softmax and its sparsity-enforcing alternative sparsemax (Martins & Astudillo, 2016), which has already received significant attention in a wide range of applications: probabilistic inference, topic modeling, neural attention (Niculae & Blondel, 2017; Niculae et al., 2018; Lin et al., 2019). Entmax is capable of producing sparse probability distributions, where the majority of probabilities are exactly equal to zero. In this work, we argue that entmax is also an appropriate inductive bias in our model, which allows differentiable split decision construction in the internal tree nodes. Intuitively, entmax can learn splitting decisions based on a small subset of data features (down to one, as in classical decision trees), avoiding undesired influence from others. As an additional advantage, using entmax for feature selection allows for computationally efficient inference using the sparse pre-computed choice vectors, as described below in Section 3.
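To make the sparsity property tangible, the snippet below implements sparsemax, the α = 2 member of the entmax family, which has a simple closed form (general α-entmax requires the algorithms of Peters et al. (2019) and is omitted here):

```python
import numpy as np

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of the scores z
    onto the probability simplex; low-scoring entries get exactly zero probability."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, z.size + 1)
    k = ks[1 + ks * z_sorted > cumsum][-1]   # size of the support
    tau = (cumsum[k - 1] - 1) / k            # threshold so the output sums to one
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([1.0, 0.8, -1.0])))  # -> [0.6, 0.4, 0.0]: an exact zero
```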
Multi-layer non-differentiable architectures. Another line of work (Miller et al., 2017;Zhou & Feng, 2017;Feng et al., 2018) promotes the construction of multi-layer architectures from nondifferentiable blocks, such as random forests or GBDT ensembles. For instance, (Zhou & Feng, 2017;Miller et al., 2017) propose to use stacking of several random forests, which are trained separately. In recent work, (Feng et al., 2018) introduces the multi-layer GBDTs and proposes a training procedure that does not require each layer component to be differentiable. While these works report marginal improvements over shallow counterparts, they lack the capability for end-to-end training, which could result in inferior performance. In contrast, we argue that end-to-end training is crucial and confirm this claim in the experimental section.
Specific DNN for tabular data. While a number of prior works propose architectures designed for tabular data (Ke et al., 2018;Shavitt & Segal, 2018), they mostly do not compare with the properly tuned GBDT implementations, which are the most appropriate baselines. The recent preprint (Ke et al., 2018) reports the marginal improvement over GBDT with default parameters, but in our experiments, the baseline performance is much higher. To the best of our knowledge, our approach is the first to consistently outperform the tuned GBDTs over a large number of datasets.
NEURAL OBLIVIOUS DECISION ENSEMBLES
We introduce the Neural Oblivious Decision Ensemble (NODE) architecture with a layer-wise structure similar to existing deep learning models. In a nutshell, our architecture consists of differentiable oblivious decision trees (ODTs) that are trained end-to-end by backpropagation. We describe our implementation of the differentiable NODE layer in Section 3.1, the full model architecture in Section 3.2, and the training and inference procedures in Section 3.3.
DIFFERENTIABLE OBLIVIOUS DECISION TREES
The core building block of our model is a Neural Oblivious Decision Ensemble (NODE) layer. The layer is composed of m differentiable oblivious decision trees (ODTs) of equal depth d. As an input, all m trees get a common vector x ∈ R n , containing n numeric features. Below we describe a design of a single differentiable ODT.
In its essence, an ODT is a decision table that splits the data along d splitting features and compares each feature to a learned threshold. Then, the tree returns one of the 2^d possible responses, corresponding to the comparison results. Therefore, each ODT is completely determined by its splitting features f ∈ R^d, splitting thresholds b ∈ R^d and a d-dimensional tensor of responses R ∈ R^(2 × 2 × ··· × 2) (d times).
In this notation, the tree output is defined as:
$$h(x) = R\left[\mathbb{1}(f_1(x) - b_1), \ldots, \mathbb{1}(f_d(x) - b_d)\right], \qquad (1)$$
where 1(·) denotes the Heaviside function.
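To make the lookup in (1) concrete, here is a minimal sketch of the classic, non-differentiable ODT in NumPy; this is our own illustration, and the names (odt_predict, feat_idx) do not come from the paper's released code.

```python
import numpy as np

def odt_predict(x, feat_idx, b, R):
    """Classic (non-differentiable) oblivious decision tree lookup, eq. (1).

    x        : (n,) input feature vector
    feat_idx : (d,) indices of the d splitting features
    b        : (d,) splitting thresholds
    R        : response tensor with shape (2,) * d
    """
    # Heaviside comparisons: 1 if the feature exceeds its threshold, else 0.
    bits = (x[feat_idx] - b > 0).astype(int)   # (d,) vector of {0, 1}
    # The d bits jointly index one of the 2**d leaf responses.
    return R[tuple(bits)]

# toy usage: a depth-2 tree over a 4-dimensional input
R = np.arange(4.0).reshape(2, 2)
print(odt_predict(np.array([0.3, -1.0, 2.0, 0.5]),
                  feat_idx=np.array([0, 2]), b=np.array([0.0, 1.0]), R=R))
```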
Figure 1: The single ODT inside the NODE layer. The splitting features and the splitting thresholds are shared across all the internal nodes of the same depth. The output is a sum of leaf responses scaled by the choice weights.
To make the tree output (1) differentiable, we replace the splitting feature choice f_i and the comparison operator 1(f_i(x) − b_i) by their continuous counterparts. There are several existing approaches that can be used for modelling differentiable choice functions in decision trees, for instance, REINFORCE (Williams, 1992) or Gumbel-softmax (Jang et al., 2016). However, these approaches typically require long training time, which can be crucial in practice.
Instead, we propose to use the α-entmax transformation (Peters et al., 2019), as it is able to learn sparse choices, depending only on a few features, via standard gradient descent. The feature choice is hence replaced by a weighted sum of features, with weights computed as entmax over a learnable feature selection matrix F ∈ R^{d×n}:
$$\hat{f}_i(x) = \sum_{j=1}^{n} x_j \cdot \operatorname{entmax}_\alpha(F_i)_j, \qquad (2)$$

where the entmax is applied over the i-th row of F.
Similarly, we relax the Heaviside function 1(f_i(x) − b_i) as a two-class entmax, which we denote as σ_α(x) = entmax_α([x, 0])_1. As different features can have different characteristic scales, we use the scaled version c_i(x) = σ_α((f_i(x) − b_i)/τ_i), where b_i and τ_i are learnable parameters for the thresholds and scales, respectively. Based on the c_i(x) values, we define a "choice" tensor $C \in \mathbb{R}^{\underbrace{2 \times 2 \times \cdots \times 2}_{d}}$ of the same size as the response tensor R by computing the outer product of all c_i:

$$C(x) = \begin{bmatrix} c_1(x) \\ 1 - c_1(x) \end{bmatrix} \otimes \begin{bmatrix} c_2(x) \\ 1 - c_2(x) \end{bmatrix} \otimes \cdots \otimes \begin{bmatrix} c_d(x) \\ 1 - c_d(x) \end{bmatrix} \qquad (3)$$
The final prediction is then computed as a weighted linear combination of response tensor entries R with weights from the entries of choice tensor C:
$$\hat{h}(x) = \sum_{i_1,\ldots,i_d \in \{0,1\}^d} R_{i_1,\ldots,i_d} \cdot C_{i_1,\ldots,i_d}(x) \qquad (4)$$
Note that this relaxation equals the classic non-differentiable ODT output h(x) from (1) iff both the feature selection and threshold functions reach a one-hot state, i.e., entmax always returns nonzero weight for a single feature and each c_i always returns exactly zero or one.
Finally, the output of the NODE layer is composed as a concatenation of the outputs of the m individual trees: [ĥ_1(x), …, ĥ_m(x)].
Multidimensional tree outputs. In the description above, we assumed that tree outputs are one-dimensional, ĥ(x) ∈ R. For classification problems, where NODE predicts probabilities of each class, we use multidimensional tree outputs ĥ(x) ∈ R^{|C|}, where |C| is the number of classes.
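Putting eqs. (2)-(4) together, a single differentiable ODT can be sketched in PyTorch as below. We assume the open-source entmax package for the entmax_1.5 mapping; the class and parameter names are ours, and the reference implementation may differ in details (e.g., it can vectorise over all trees of a layer).

```python
import torch
import torch.nn as nn
from entmax import entmax15  # assumed available: `pip install entmax`

class DifferentiableODT(nn.Module):
    """A single differentiable ODT implementing eqs. (2)-(4); a sketch, not the reference code."""

    def __init__(self, n_features, depth):
        super().__init__()
        self.depth = depth
        self.F = nn.Parameter(torch.rand(depth, n_features))   # feature selection logits
        self.b = nn.Parameter(torch.zeros(depth))              # thresholds
        self.log_tau = nn.Parameter(torch.zeros(depth))        # per-depth scales tau_i (log-space)
        self.R = nn.Parameter(torch.randn(*([2] * depth)))     # response tensor, shape (2,)*depth

    def forward(self, x):                                      # x: (batch, n_features)
        w = entmax15(self.F, dim=-1)                           # sparse feature weights, eq. (2)
        f = x @ w.t()                                          # (batch, depth) soft feature choices
        z = (f - self.b) / self.log_tau.exp()
        # two-class entmax over [z, 0]; the first component plays the role of sigma_alpha(z)
        c = entmax15(torch.stack([z, torch.zeros_like(z)], dim=-1), dim=-1)[..., 0]
        # outer product of [c_i, 1 - c_i] over the depth levels builds the choice tensor, eq. (3)
        C = torch.ones(x.shape[0], 1, device=x.device)
        for i in range(self.depth):
            gate = torch.stack([c[:, i], 1 - c[:, i]], dim=-1)     # (batch, 2)
            C = (C.unsqueeze(-1) * gate.unsqueeze(1)).flatten(1)   # (batch, 2**(i+1))
        return C @ self.R.flatten()                            # weighted leaf responses, eq. (4)
```

For multidimensional outputs, R would simply gain a trailing dimension of size l and the last line would become C @ self.R.reshape(2 ** self.depth, -1).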
3.2 GOING DEEPER WITH THE NODE ARCHITECTURE
The NODE layer, described above, can be trained alone or within a complex structure, like fully-connected layers that can be organized into multi-layer architectures. In this work, we introduce a new architecture, following the popular DenseNet (Huang et al., 2017) model, and train it end-to-end via backpropagation.
Similar to DenseNet, our architecture is a sequence of k NODE layers (see Section 3.1), where each layer uses a concatenation of all previous layers as its input. The input layer 0 of this architecture corresponds to the input features x, accessible by all successor layers. Due to such a design, our architecture is capable of learning both shallow and deep decision rules. A single tree on the i-th layer can rely on chains of up to i − 1 layer outputs as features, allowing it to capture complex dependencies. The resulting prediction is a simple average of all decision trees from all layers.
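A minimal sketch of this densely connected stack, reusing the DifferentiableODT class from above (again our own illustration; a practical implementation would batch all trees of a layer into one tensor operation rather than loop in Python):

```python
class DenseNODE(nn.Module):
    """DenseNet-style stack of NODE layers; a sketch under the same assumptions as above."""

    def __init__(self, n_features, num_layers=4, trees_per_layer=256, depth=6):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = n_features
        for _ in range(num_layers):
            self.layers.append(
                nn.ModuleList([DifferentiableODT(in_dim, depth) for _ in range(trees_per_layer)])
            )
            in_dim += trees_per_layer          # dense connectivity: tree outputs become extra inputs

    def forward(self, x):
        all_outputs = []
        h = x
        for trees in self.layers:
            out = torch.stack([t(h) for t in trees], dim=-1)   # (batch, trees_per_layer)
            all_outputs.append(out)
            h = torch.cat([h, out], dim=-1)                    # concatenation feeds the next layer
        # final prediction: average over all trees from all layers
        return torch.cat(all_outputs, dim=-1).mean(dim=-1)
```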
Note that in the multi-layer architecture described above, tree outputs ĥ(x) from early layers are used as inputs for subsequent layers. Therefore, we do not restrict the dimensionality of ĥ(x) to be equal to the number of classes, and instead allow it to have an arbitrary dimensionality l, which corresponds to a (d + 1)-dimensional response tensor $R \in \mathbb{R}^{\underbrace{2 \times 2 \times \cdots \times 2}_{d} \times l}$.
When averaging the predictions from all layers, only the first |C| coordinates of ĥ(x) are used for classification problems, and only the first one for regression problems. Overall, l is an additional hyperparameter with typical values in [1, 3].
3.3 TRAINING
Here we summarize the details of our training protocol.
Data preprocessing. First, we transform each data feature to follow a normal distribution via a quantile transform (sklearn.preprocessing.QuantileTransformer). In our experiments, we observed that this step was important for stable training and faster convergence.
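In scikit-learn this preprocessing step is a one-liner; X_train_raw and X_test_raw below are placeholder arrays:

```python
from sklearn.preprocessing import QuantileTransformer

qt = QuantileTransformer(output_distribution='normal')
X_train = qt.fit_transform(X_train_raw)   # fit the quantiles on training data only
X_test = qt.transform(X_test_raw)         # reuse the same transform at test time
```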
Initialization. Before training, we perform a data-aware initialization (Mishkin & Matas, 2016) to obtain good initial parameter values. In particular, we initialize the feature selection matrix uniformly, F_ij ∼ U(0, 1), while the thresholds b are initialized with random feature values f_i(x) observed in the first data batch. The scales τ_i are initialized in such a way that all the samples in the first batch belong to the linear region of σ_α, and hence receive nonzero gradients. Finally, the response tensor entries are initialized with the standard normal distribution, R[i_1, …, i_d] ∼ N(0, 1).
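A sketch of the threshold part of this initialization, for the DifferentiableODT class above (the scale initialization for τ_i is omitted here; data_aware_init is our own name):

```python
import torch
from entmax import entmax15

@torch.no_grad()
def data_aware_init(tree, first_batch):
    """Set thresholds b to random feature values observed in the first batch (a sketch)."""
    w = entmax15(tree.F, dim=-1)                     # current (randomly initialized) selectors
    f = first_batch @ w.t()                          # (batch, depth) selected feature values
    rows = torch.randint(0, f.shape[0], (tree.depth,))
    tree.b.copy_(f[rows, torch.arange(tree.depth)])  # one random observed value per depth level
```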
Training. As for existing DNN architectures, NODE is trained end-to-end via mini-batch SGD.
We jointly optimize all model parameters: F, b, R. In this work, we experimented with traditional objective functions (cross-entropy for classification and mean squared error for regression), but any differentiable objective can be used as well. As an optimization method, we use the recent Quasi-Hyperbolic Adam with parameters recommended in the original paper (Ma & Yarats, 2018). We also average the model parameters over c = 5 consecutive checkpoints (Izmailov et al., 2018) and pick the optimal stopping point on the hold-out validation dataset.
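A compressed sketch of this protocol; we assume the qhoptim package for QHAdam, and loader and eval_every are placeholders. The optimizer hyperparameters below are illustrative, not necessarily the ones used in the paper:

```python
import copy
import torch
from qhoptim.pyt import QHAdam   # assumed available: `pip install qhoptim`

model = DenseNODE(n_features=X_train.shape[1])       # the sketch class from Section 3.2
opt = QHAdam(model.parameters(), lr=1e-3, nus=(0.7, 1.0), betas=(0.995, 0.999))

checkpoints = []
for step, (xb, yb) in enumerate(loader):             # `loader` yields mini-batches
    loss = torch.nn.functional.mse_loss(model(xb), yb)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % eval_every == 0:
        checkpoints.append(copy.deepcopy(model.state_dict()))
        checkpoints = checkpoints[-5:]               # keep c = 5 consecutive checkpoints
        avg = {k: sum(sd[k] for sd in checkpoints) / len(checkpoints)
               for k in checkpoints[0]}
        # evaluate a model loaded with `avg` on the hold-out validation set
        # and keep the best point for early stopping
```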
Inference. During training, a significant fraction of time is spent computing the entmax function and multiplying the choice tensor. Once the model is trained, one can pre-compute entmax feature selectors and store them as a sparse vector (e.g., in coordinate (coo) format), making inference more efficient.
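For example, with the DifferentiableODT sketch above, the trained entmax selectors can be frozen into a sparse matrix once and reused at test time:

```python
import torch
from entmax import entmax15

@torch.no_grad()
def precompute_sparse_selectors(tree):
    """Freeze the (mostly zero) entmax feature weights into a sparse COO matrix."""
    return entmax15(tree.F, dim=-1).to_sparse()

sel = precompute_sparse_selectors(tree)      # `tree` is a trained DifferentiableODT
f = torch.sparse.mm(sel, x.t()).t()          # replaces the dense projection x @ w.t()
```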
4 EXPERIMENTS
In this section, we report the results of a comparison between our approach and the leading GBDT packages. We also provide several ablation studies that demonstrate the influence of each design choice in the proposed NODE architecture.
4.1 COMPARISON TO THE STATE-OF-THE-ART
As our main experiments, we compare the proposed NODE architecture with two state-of-the-art GBDT implementations on a large number of datasets. In all the experiments we set the α parameter of the entmax transformation to 1.5. All other details of the comparison protocol are described below.

Datasets. We perform most of the experiments on six open-source tabular datasets from different domains: Epsilon, YearPrediction, Higgs, Microsoft, Yahoo, and Click. A detailed description of the datasets is available in the appendix. All the datasets provide train/test splits, and we used 20% of the samples from the train set as a validation set to tune the hyperparameters. For each dataset, we fix the train/val/test splits for a fair comparison. For the classification datasets (Epsilon, Higgs, Click), we minimize cross-entropy loss and report the classification error. For the regression and ranking datasets (YearPrediction, Microsoft, Yahoo), we minimize and report mean squared error (which corresponds to the pointwise approach to learning-to-rank).
Methods. We compare the proposed NODE architecture to the following baselines:
• CatBoost. The recent GBDT implementation (Prokhorenkova et al., 2018) that uses oblivious decision trees as weak learners. We use the open-source implementation provided by the authors.
• XGBoost. The most popular GBDT implementation, widely used in machine learning competitions (Chen & Guestrin, 2016). We use the open-source implementation provided by the authors.
• FCNN. A deep neural network consisting of several fully-connected layers with ReLU nonlinearities (Nair & Hinton, 2010).
Regimes. We perform the comparison in the two following regimes, which are the most important in practice:
• Default hyperparameters. In this regime, we compare the methods as easy-to-use toolkits that could be used by a non-professional audience. Namely, here we do not tune hyperparameters and use the default ones provided by the GBDT packages. The only tunable parameter here is the number of trees (up to 2048) in CatBoost/XGBoost, which is set based on the validation set. We do not compare with FCNN in this regime, as it typically requires much tuning, and we did not find a set of parameters appropriate for all datasets. The default architecture in our model contains only a single layer with 2048 decision trees of depth six. Both of these hyperparameters were inherited from the CatBoost package settings for oblivious decision trees. With these parameters, the NODE architecture is shallow, but it still benefits from end-to-end training via back-propagation.
• Tuned hyperparameters. In this regime, we tune the hyperparameters for both NODE and the competitors on the validation subsets. The optimal configuration for NODE contains between two and eight NODE layers, while the total number of trees across all the layers does not exceed 2048. The details of hyperparameter optimization are provided in the appendix.
The results of the comparison are summarized in Table 1 and Table 2. For all methods, we report mean performance and standard deviations computed over ten runs with different random seeds.

Table 2: The comparison of NODE with both shallow and deep counterparts with hyperparameters tuned for optimal performance. The results are computed over ten runs with different random seeds.

Several key observations are highlighted below:
1. With default hyperparameters, the proposed NODE architecture consistently outperforms both CatBoost and XGBoost on all datasets. The results advocate the usage of NODE as a handy tool for machine learning on tabular problems.
2. With tuned hyperparameters, NODE also outperforms the competitors on most of the tasks. Two exceptions are the Yahoo and Microsoft datasets, where tuned XGBoost provides the highest performance. Given the large advantage of XGBoost over CatBoost on Yahoo, we speculate that the usage of oblivious decision trees is an inappropriate inductive bias for this dataset. This implies that NODE should be extended to non-oblivious trees, which we leave for future work.
3. In the regime with tuned hyperparameters, FCNN outperforms GBDT on some datasets, while on others GBDT is superior. Meanwhile, the proposed NODE architecture appears to be a universal instrument, providing the highest performance on most of the tasks.
For completeness, we also aimed to compare to previously proposed architectures for deep learning on tabular data. Unfortunately, many works did not publish their source code. We were only able to perform a partial comparison with mGBDT (Feng et al., 2018) and DeepForest (Zhou & Feng, 2017), whose source code is available. For both baselines, we use the implementations provided by the authors and tune the parameters on the validation set. Note that the DeepForest implementation is available only for classification problems. Moreover, both implementations do not scale well, and for many datasets we obtained an Out-Of-Memory error (OOM). On the datasets in our experiments, it turns out that properly tuned GBDTs outperform both (Feng et al., 2018) and (Zhou & Feng, 2017).
4.2 ABLATIVE ANALYSIS
In this section, we analyze the key architecture components that define our model.
Choice functions. Constructing differentiable decision trees requires a function that selects items from a set. Such a function is required for both splitting feature selection and decision tree routing. We experimented with four possible options, each having different implications:
• Softmax learns dense decision rules where all items have nonzero weights;
• Gumbel-Softmax (Jang et al., 2016) learns to stochastically sample a single element from a set;
• Sparsemax (Martins & Astudillo, 2016) learns sparse decision rules, where only a few items have nonzero weights;
• Entmax (Peters et al., 2019) generalizes both sparsemax and softmax; it is able to learn sparse decision rules, but is smoother than sparsemax, making it more appropriate for gradient-based optimization. In this comparison, the α parameter was set to 1.5.
We experimentally compare the four options above with both shallow and deep architectures in Table 3. We use the same choice function for both feature selection and tree routing across all experiments. For Gumbel-Softmax, we replaced the stochastic choice with a hard argmax one-hot vector during inference. The results clearly show that entmax with α=1.5 outperforms the competitors across all experiments. First, Table 3 demonstrates that sparsemax and softmax are not universal choice functions. For instance, on the YearPrediction dataset sparsemax outperforms softmax, while on the Epsilon dataset softmax is superior. In turn, entmax provides strong empirical performance across all datasets. Another observation is that Gumbel-Softmax is unable to learn deep architectures with either constant or annealed temperature schedules. This behavior is probably caused by the stochasticity of Gumbel-Softmax: the responses of the earlier layers are too noisy to produce useful features for the later layers.
Feature importance. In this series of experiments, we analyze the internal representations learned by the NODE architecture. We begin by estimating the feature importances from different layers of a multi-layer ensemble via permutation feature importance, initially introduced in (Breiman, 2001). Namely, for 10,000 objects from the Higgs dataset we randomly shuffle the values of each feature (original or learned on some NODE layer) and compute the increase in the classification error. Then, for each layer, we split the feature importance values into seven equal bins and calculate the total feature importance of each bin, shown in Figure 3 (left-top). We discovered that the features from the first layer are used the most, with feature importances decreasing with depth. This figure shows that deep layers are able to produce important features, even though the earlier layers have an advantage because of the DenseNet architecture. Next, we estimated the mean absolute contribution of individual trees to the final response, reported in Figure 3 (left-bottom). One can see the reverse trend: deep trees tend to contribute more to the final response. Figure 3 (right) clearly shows that feature importances and contributions to the final response are anticorrelated, which implies that the main role of the earlier layers is to produce informative features, while the later layers mostly use them for accurate prediction.
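A minimal sketch of the permutation importance estimate used here (our own helper; predict_fn and metric are placeholders, e.g. a trained model and the 0-1 error):

```python
import numpy as np

def permutation_importance(predict_fn, X, y, metric):
    """Permutation feature importance (Breiman, 2001); a minimal sketch."""
    base = metric(y, predict_fn(X))
    importances = np.zeros(X.shape[1])
    rng = np.random.default_rng(0)
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])                               # destroy the j-th feature
        importances[j] = metric(y, predict_fn(Xp)) - base   # resulting error increase
    return importances
```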
Training/Inference runtime. Finally, we compare the NODE runtime to the timings of the state-of-the-art GBDT implementations. In Table 4 we report the training and inference time for one million objects from the YearPrediction dataset. In this experiment, we evaluate ensembles of 1024 trees of depth six with all other parameters set to their default values. Our GPU setup has a single 1080Ti GPU and 2 CPU cores. In turn, our CPU setup has a 28-core Xeon E5-2660 v4 processor (which costs almost twice as much as the GPU). We use CatBoost v0.15 and XGBoost v0.90 as baselines, while NODE inference runs on PyTorch v1.1.0. Overall, NODE inference time is on par with the heavily optimized GBDT libraries despite being implemented in pure PyTorch (i.e., no custom kernels).

5 CONCLUSION

In this paper, we introduce a new DNN architecture for deep learning on heterogeneous tabular data. The architecture consists of differentiable deep GBDTs, trained end-to-end via backpropagation. In extensive experiments, we demonstrate the advantages of our architecture over existing competitors with both default and tuned hyperparameters. A promising research direction is incorporating the NODE layer into complex pipelines trained via back-propagation. For instance, in multi-modal problems, the NODE layer could be employed as a way to incorporate the tabular data, as CNNs are currently used for images, or RNNs are used for sequences.

A APPENDIX

A.1 DESCRIPTION OF THE DATASETS

In our experiments, we used six tabular datasets, described in Table 5. (1) Epsilon is a high-dimensional dataset from the PASCAL Large Scale Learning Challenge 2008. The problem is binary classification. (2) YearPrediction is a subset of the Million Song Dataset. It is a regression dataset, and the task is to predict the release year of a song from its audio features. It contains tracks from 1922 to 2011. (3) Higgs is a dataset from the UCI ML Repository. The problem is to predict whether the given event produces Higgs bosons or not. (4) Microsoft is a learning-to-rank dataset. It consists of 136-dimensional feature vectors extracted from query-url pairs. Each pair has relevance judgment labels, which take values from 0 (irrelevant) to 4 (perfectly relevant). (5) Yahoo is a very similar ranking dataset with query-url pairs labeled from 0 to 4. We treat both ranking problems as regression (which corresponds to the pointwise approach to learning-to-rank). (6) Click is a subset of data from the 2012 KDD Cup (http://www.kdd.org/kdd-cup/view/kdd-cup-2012-track-2). For the subset construction, we randomly sample 500,000 objects of a positive class and 500,000 objects of a negative class. The categorical features were converted to numerical ones via the Leave-One-Out encoder from the category_encoders package of the scikit-learn library.
A.2 OPTIMIZATION OF HYPERPARAMETERS
In order to tune the hyperparameters, we performed a random stratified split of the full training data into a train set (80%) and a validation set (20%) for the Epsilon, YearPrediction, Higgs, Microsoft, and Click datasets. For Yahoo, we use the train/val/test split provided by the dataset authors. We use the Hyperopt library to optimize the Catboost, XGBoost, and FCNN hyperparameters. For each method, we perform 50 steps of the Tree-structured Parzen Estimator (TPE) optimization algorithm. As the final configuration, we choose the set of hyperparameters corresponding to the smallest loss on the validation set.
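For illustration, a Hyperopt TPE loop of this kind looks as follows; train_catboost and validation_loss are placeholder routines, and the space shown mirrors only part of the Catboost ranges listed in Section A.2.1:

```python
import numpy as np
from hyperopt import fmin, tpe, hp

# illustrative search space (log-uniform means exp(uniform(low, high)))
space = {
    'learning_rate': hp.loguniform('learning_rate', -5, 0),          # [e^-5, 1]
    'l2_leaf_reg': hp.loguniform('l2_leaf_reg', 0, np.log(10)),      # [1, 10]
    'random_strength': hp.quniform('random_strength', 1, 20, 1),     # discrete [1, 20]
}

def objective(params):
    model = train_catboost(params)     # placeholder training routine
    return validation_loss(model)      # smaller is better

best = fmin(objective, space, algo=tpe.suggest, max_evals=50)
```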
A.2.1 CATBOOST AND XGBOOST
On each iteration of Hyperopt, the number of trees was set based on the validation set, with the maximal tree count set to 2048. Below is the list of tuned hyperparameters and their search spaces for Catboost:
• learning rate: Log-Uniform distribution [e^−5, 1]
• random strength: Discrete uniform distribution [1, 20]
• one hot max size: Discrete uniform distribution [0, 25]
• l2 leaf reg: Log-Uniform distribution [1, 10]
• bagging temperature: Uniform distribution [0, 1]
• leaf estimation iterations: Discrete uniform distribution [1, 10]

XGBoost tuned parameters and their search spaces:
• eta: Log-Uniform distribution [e^−7, 1]
• max depth: Discrete uniform distribution [2, 10]
• subsample: Uniform distribution [0.5, 1]
• colsample bytree: Uniform distribution [0.5, 1]
• colsample bylevel: Uniform distribution [0.5, 1]
• min child weight: Log-Uniform distribution [e^−16, e^5]
• alpha: Uniform choice {0, Log-Uniform distribution [e^−16, e^2]}
• lambda: Uniform choice {0, Log-Uniform distribution [e^−16, e^2]}
• gamma: Uniform choice {0, Log-Uniform distribution [e^−16, e^2]}

A.2.2 FCNN

Fully connected neural networks were tuned using the Hyperas library (https://github.com/maxpumperla/hyperas), which is a Keras wrapper for Hyperopt. We consider FCNNs constructed from the following blocks: Dense-ReLU-Dropout. The number of units in each layer is independent of the others, and the dropout value is the same for the whole network. The networks are trained with the Adam optimizer, with averaging of the model parameters over c = 5 consecutive checkpoints (Izmailov et al., 2018) and early stopping on the validation set. The batch size is fixed to 1024 for all datasets. Below is the list of tuned hyperparameters:
• Number of layers: Discrete uniform distribution [2, 7]
• Number of units: Discrete uniform distribution over the set {128, 256, 512, 1024}
• Learning rate: Uniform distribution [1e−4, 1e−2]
• Dropout: Uniform distribution [0, 0.5]

A.2.3 NODE

Neural Oblivious Decision Ensembles were tuned by grid search over the following hyperparameter values. In the multi-layer NODE, we use the same architecture for all layers, i.e., the same number of trees of the same depth; total tree count here denotes the total number of trees across all layers. For each dataset, we use the maximal batch size which fits in the GPU memory. We always use learning rate 10^−3.
• num layers: {2, 4, 8}
• total tree count: {1024, 2048}
• tree depth: {6, 8}
• tree output dim: {2, 3}
Figure 2: The NODE architecture, consisting of densely connected NODE layers. Each layer contains several trees whose outputs are concatenated and serve as input for the subsequent layer. The final prediction is obtained by averaging the outputs of all trees from all the layers.
Figure 3: NODE on the UCI Higgs dataset. Left-top: individual feature importance distributions for both original and learned features. Left-bottom: mean absolute contribution of individual trees to the final response. Right: dependence of the responses on feature importances. See details in the text.
Table 1: The comparison of NODE with the shallow state-of-the-art counterparts with default hyperparameters. The results are computed over ten runs with different random seeds (columns: Epsilon, YearPrediction, Higgs, Microsoft, Yahoo, Click).
Table 3: The experimental comparison of various choice functions and architecture depths. The values represent mean squared error for YearPrediction and classification error rate for Epsilon.
Table 4: Training and inference runtime for models with 1024 trees of depth six on the YearPrediction dataset, averaged over five runs. Both timings of the eight-layer NODE architecture are on par with the timings of shallow counterparts with the same total number of trees in the ensemble.

Method       NODE 8 layers (1080Ti)   XGBoost (Xeon)   XGBoost (1080Ti)   CatBoost (Xeon)
Training     7min 42s                 5min 39s         1min 13s           41s
Inference    8.56s                    5.94s            4.45s              4.62s
Table 5: The datasets used in our experiments.

Dataset          Train    Test     Features   Task             Metric   Description
Epsilon          400K     100K     2000       Classification   Error    PASCAL Challenge 2008
YearPrediction   463K     51.6K    90         Regression       MSE      Million Song Dataset
Higgs            10.5M    500K     28         Classification   Error    UCI ML Higgs
Microsoft        723K     241K     136        Regression       MSE      MSLR-WEB10K
Yahoo            544K     165K     699        Regression       MSE      Yahoo LETOR dataset
Click            800K     200K     11         Classification   Error    2012 KDD Cup
Source code: https://github.com/Qwicen/node
REFERENCES

Andreea Anghel, Nikolaos Papandreou, Thomas Parnell, Alessandro de Palma, and Haralampos Pozidis. Benchmarking and optimization of gradient boosting decision tree algorithms. arXiv preprint.

Iñigo Barandiaran. The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998.

Leo Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.

Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785-794, 2016.

Ji Feng, Yang Yu, and Zhi-Hua Zhou. Multi-layered gradient boosting decision trees. In Advances in Neural Information Processing Systems 31 (NeurIPS), 2018.

Jerome H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pp. 1189-1232, 2001.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.

Vasyl Harasymiv. Lessons from 2 million machine learning models on Kaggle, 2015.

Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407, 2018.

Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-softmax. CoRR, abs/1611.01144, 2016.

Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems, pp. 3146-3154, 2017.

Guolin Ke, Jia Zhang, Zhenhui Xu, Jiang Bian, and Tie-Yan Liu. TabNN: A universal neural network solution for tabular data. 2018.

Ron Kohavi. Bottom-up induction of oblivious read-once decision graphs: strengths and limitations. In AAAI, 1994.

Peter Kontschieder, Madalina Fiterau, Antonio Criminisi, and Samuel Rota Bulo. Deep neural decision forests. In Proceedings of the IEEE International Conference on Computer Vision, 2015.

Nathan Lay, Adam P. Harrison, Sharon Schreiber, Gitesh Dawer, and Adrian Barbu. Random hinge forest for differentiable learning. arXiv preprint arXiv:1802.03882, 2018.

Tianyi Lin, Zhiyue Hu, and Xin Guo. Sparsemax and relaxed Wasserstein for topic sparsity. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, 2019.

Yin Lou and Mikhail Obukhov. BDT: Gradient boosted decision tables for high accuracy and scoring efficiency. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017.

Jerry Ma and Denis Yarats. Quasi-hyperbolic momentum and Adam for deep learning. arXiv preprint arXiv:1810.06801, 2018.

Andre Martins and Ramon Astudillo. From softmax to sparsemax: A sparse model of attention and multi-label classification. In International Conference on Machine Learning, 2016.

Kevin Miller, Chris Hettinger, Jeffrey Humpherys, Tyler Jarvis, and David Kartchner. Forward thinking: building deep random forests. arXiv preprint arXiv:1705.07366, 2017.

Dmytro Mishkin and Jiri Matas. All you need is a good init. In 4th International Conference on Learning Representations (ICLR), 2016.

Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML), 2010.

Vlad Niculae and Mathieu Blondel. A regularized framework for sparse and structured neural attention. In Advances in Neural Information Processing Systems, 2017.

Vlad Niculae, André F. T. Martins, Mathieu Blondel, and Claire Cardie. SparseMAP: Differentiable sparse structured inference. arXiv preprint arXiv:1802.04223, 2018.

Ben Peters, Vlad Niculae, and André F. T. Martins. Sparse sequence-to-sequence models. In ACL, pp. 1504-1519, 2019.

Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, and Andrey Gulin. CatBoost: unbiased boosting with categorical features. In Advances in Neural Information Processing Systems, pp. 6638-6648, 2018.

David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.

Ira Shavitt and Eran Segal. Regularization learning networks: Deep learning for tabular datasets. In Advances in Neural Information Processing Systems, pp. 1379-1389, 2018.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.

Yongxin Yang, Irene Garcia Morillo, and Timothy M. Hospedales. Deep neural decision trees. arXiv preprint arXiv:1806.06988, 2018.
Zhi-Hua Zhou and Ji Feng. Deep forest: Towards an alternative to deep neural networks. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), 2017.
|
56,475,997 | UNIVERSAL SUCCESSOR FEATURES APPROXIMATORS | The ability of a reinforcement learning (RL) agent to learn about many reward functions at the same time has many potential benefits, such as the decomposition of complex tasks into simpler ones, the exchange of information between tasks, and the reuse of skills. We focus on one aspect in particular, namely the ability to generalise to unseen tasks. Parametric generalisation relies on the interpolation power of a function approximator that is given the task description as input; one of its most common form are universal value function approximators (UVFAs). Another way to generalise to new tasks is to exploit structure in the RL problem itself. Generalised policy improvement (GPI) combines solutions of previous tasks into a policy for the unseen task; this relies on instantaneous policy evaluation of old policies under the new reward function, which is made possible through successor features (SFs). Our proposed universal successor features approximators (USFAs) combine the advantages of all of these, namely the scalability of UVFAs, the instant inference of SFs, and the strong generalisation of GPI. We discuss the challenges involved in training a USFA, its generalisation properties and demonstrate its practical benefits and transfer abilities on a large-scale domain in which the agent has to navigate in a first-person perspective three-dimensional environment. | [] | UNIVERSAL SUCCESSOR FEATURES APPROXIMATORS
Diana Borsa [email protected]
André Barreto [email protected]
John Quan [email protected]
Daniel Mankowitz [email protected]
Rémi Munos [email protected]
Hado Van Hasselt
David Silver [email protected]
Tom Schaul [email protected]
DeepMind, London, UK
UNIVERSAL SUCCESSOR FEATURES APPROXIMATORS
Submitted as a conference paper to ICLR 2019
The ability of a reinforcement learning (RL) agent to learn about many reward functions at the same time has many potential benefits, such as the decomposition of complex tasks into simpler ones, the exchange of information between tasks, and the reuse of skills. We focus on one aspect in particular, namely the ability to generalise to unseen tasks. Parametric generalisation relies on the interpolation power of a function approximator that is given the task description as input; one of its most common form are universal value function approximators (UVFAs). Another way to generalise to new tasks is to exploit structure in the RL problem itself. Generalised policy improvement (GPI) combines solutions of previous tasks into a policy for the unseen task; this relies on instantaneous policy evaluation of old policies under the new reward function, which is made possible through successor features (SFs). Our proposed universal successor features approximators (USFAs) combine the advantages of all of these, namely the scalability of UVFAs, the instant inference of SFs, and the strong generalisation of GPI. We discuss the challenges involved in training a USFA, its generalisation properties and demonstrate its practical benefits and transfer abilities on a large-scale domain in which the agent has to navigate in a first-person perspective three-dimensional environment.
1 INTRODUCTION
Reinforcement learning (RL) provides a general framework to model sequential decision-making problems with sparse evaluative feedback in the form of rewards. The recent successes in deep RL have prompted interest in increasingly more complex tasks and a shift in focus towards scenarios in which a single agent must solve multiple related problems, either simultaneously or sequentially. This paradigm is formally known as multitask RL (Taylor and Stone, 2009;Teh et al., 2017). One of the benefits of learning about multiple tasks at the same time is the possibility of transferring knowledge across tasks; in essence, this means that by jointly learning about a set of tasks one should be able to exploit their common structure to speed up learning and induce better generalisation (Taylor and Stone, 2009;Lazaric, 2012).
A particularly interesting instance of transfer is the generalisation to new, unseen tasks. This potentially allows an agent to perform a task with little or no learning by leveraging knowledge from previously learned tasks. In this paper we will be exploring this scenario.
Consider an RL agent in a persistent environment trying to master a number of tasks. In order to generalise to unseen tasks, the agent needs to be able to identify and exploit some common structure underlying the tasks. Two possible sources of structure in this scenario are: i) some similarity between the solutions of the tasks, either in the policy or in the associated value-function space, and ii) the shared dynamics of the environment (e.g., physics). In this paper we will attempt to build an agent that can make use of both types of structure. For this, we will build on two frameworks that exploit these structures in isolation.
The first one is Schaul et al.'s (2015) universal value function approximators (UVFAs). UVFAs extend the notion of value functions to also include the description of a task, thus directly exploiting the common structure in the associated optimal value functions. The second framework we build upon exploits the common structure in the environment and capitalises on the power of dynamic programming. Barreto et al.'s (2017) framework is based on two core concepts: successor features (SFs), a representation scheme that allows a policy to be evaluated on any task of a given format, and generalised policy improvement (GPI), a generalisation of dynamic programming's classic operator that uses a set of policies instead of a single one.
UVFAs and SF&GPI generalise to new tasks in quite different, and potentially complementary, ways. UVFAs aim to generalise across the space of tasks by exploiting structure in the underlying space of value functions. In contrast, the SF&GPI strategy is to exploit the structure of the RL problem itself. In this paper we propose a model that exhibits both types of generalisation. The basic insight is to note that SFs are multi-dimensional value functions, so we can extend them in the same way a universal value function extends their unidimensional counterparts. We call the resulting model universal successor features approximators, or USFAs for short. USFA is a strict generalisation of its precursors. Specifically, we show that by combining USFAs and GPI we can recover both UVFAs and SF&GPI as particular cases. This opens up a new spectrum of possible approaches in between these two extreme cases. We discuss the challenges involved in training a USFA and demonstrate the practical benefits of doing so on a large-scale domain in which the agent has to navigate in a three-dimensional environment using only images as observations.
2 BACKGROUND
In this section we present some background material, formalise the scenario we are interested in, and briefly describe the methods we build upon.
2.1 MULTITASK REINFORCEMENT LEARNING
We consider the usual RL framework: an agent interacts with an environment and selects actions in order to maximise the expected amount of reward received in the long run (Sutton and Barto, 1998). As usual, we assume that such an interaction can be modeled as a Markov decision process (MDP, Puterman, 1994). An MDP is defined as a tuple M ≡ (S, A, p, R, γ), where S and A are the state and action spaces, p(·|s, a) gives the next-state distribution upon taking action a in state s, R(s, a, s′) is a random variable representing the reward received at the transition from s to s′ under action a, and γ ∈ [0, 1) is a discount factor that gives smaller weights to future rewards.
As mentioned in the introduction, in this paper we are interested in the multitask RL scenario, where the agent has to solve multiple tasks. Each task is defined by a reward function R_w; thus, instead of a single MDP M, our environment is a set of MDPs that share the same structure except for the reward function. Following Barreto et al. (2017), we assume that the expected one-step reward associated with the transition from s to s′ under action a is given by

$$\mathbb{E}\left[R_{\mathbf{w}}(s, a, s')\right] = r_{\mathbf{w}}(s, a, s') = \boldsymbol{\phi}(s, a, s')^\top \mathbf{w}, \qquad (1)$$

where φ(s, a, s′) ∈ R^d are features of (s, a, s′) and w ∈ R^d are weights. The features φ(s, a, s′) can be thought of as salient events that may be desirable or undesirable to the agent, such as for example picking up an object, going through a door, or knocking into something. In this paper we assume that the agent is able to recognise such events, that is, φ is observable, but the solution we propose can be easily extended to the case where φ must be learned (Barreto et al., 2017). Our solution also applies to the case where (1) is only approximately satisfied, as discussed by Barreto et al. (2018).
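For intuition, under (1) every task is just a vector of weights over the shared features; a toy sketch (the particular φ and w values are made up):

```python
import numpy as np

# eq. (1): any task is a weight vector w over d shared features phi
phi = np.array([1.0, 0.0, 0.5])       # phi(s, a, s'): e.g. "picked object A / picked B / hit wall"
w_task = np.array([1.0, -1.0, 0.0])   # a task that rewards A and penalises B
reward = phi @ w_task                 # r_w(s, a, s') = phi(s, a, s')^T w
```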
Given w ∈ R^d representing a task, the goal of the agent is to find a policy π_w : S → A that maximises the expected discounted sum of rewards, also called the return G_w^{(t)} = Σ_{i=0}^∞ γ^i R_w^{(t+i)}, where R_w^{(t)} = R_w(S_t, A_t, S_{t+1}) is the reward received at the t-th time step. A principled way to address this problem is to use methods derived from dynamic programming (DP), which heavily rely on the concept of a value function (Puterman, 1994). The action-value function of a policy π on task w is defined as Q_w^π(s, a) ≡ E^π[G_w^{(t)} | S_t = s, A_t = a], where E^π[·] denotes the expected value when following policy π. Based on Q_w^π we can compute a greedy policy π′(s) ∈ argmax_a Q_w^π(s, a); one of the fundamental results in DP guarantees that Q_w^{π′}(s, a) ≥ Q_w^π(s, a) for all (s, a) ∈ S × A. The computation of Q_w^π(s, a) and π′ are called policy evaluation and policy improvement; under certain conditions their successive application leads to the optimal value function Q_w^*, from which one can derive an optimal policy for task w as π_w^*(s) ∈ argmax_a Q_w^*(s, a) (Sutton and Barto, 1998).
As a convention, in this paper we will add a tilde to a symbol to indicate that the associated quantity is an approximation; we will then refer to the respective tunable parameters as θ. For example, the agent computes an approximation Q̃_w^π ≈ Q_w^π by tuning θ_Q.
2.2 TRANSFER LEARNING
Here we focus on one aspect of multitask RL: how to transfer knowledge to unseen tasks (Taylor and Stone, 2009; Lazaric, 2012). Specifically, we ask the following question: how can an agent leverage knowledge accumulated on a set of tasks M ⊂ R^d to speed up the solution of a new task w′ ∉ M?

In order to investigate the question above we recast it using the formalism commonly adopted in learning. Specifically, we define a distribution D_w over R^d and assume the goal is for the agent to perform as well as possible under this distribution. As usual, we assume a fixed budget of sample transitions and define a training set M ∼ D_w that is used by the agent to learn about the tasks of interest. We also define a test set M′ ∼ D_w and use it to assess the agent's generalisation, that is, how well it performs on the distribution of MDPs induced by D_w.
A natural way to address the learning problem above is to use Schaul et al.'s (2015) universal value-function approximators (UVFAs). The basic insight behind UVFAs is to note that the concept of optimal value function can be extended to include as one of its arguments a description of the task; an obvious way to do so in the current context is to define the function Q*(s, a, w) : S × A × R^d → R as the optimal value function associated with task w. The function Q*(s, a, w) is called a universal value function (UVF); a UVFA is then the corresponding approximation, Q̃(s, a, w). When we define transfer as above it becomes clear that in principle a sufficiently expressive UVFA can identify and exploit structure across the joint space S × A × R^d. In other words, a properly trained UVFA should be able to generalise across the space of tasks.
A different way of generalising across tasks is to use Barreto et al.'s (2017) framework, which builds on assumption (1) and two core concepts: successor features (SFs) and generalised policy improvement (GPI). The SFs ψ ∈ R^d of a state-action pair (s, a) under policy π are given by

$$\boldsymbol{\psi}^{\pi}(s, a) \equiv \mathbb{E}^{\pi}\left[\sum_{i=t}^{\infty} \gamma^{i-t} \boldsymbol{\phi}_{i+1} \,\middle|\, S_t = s, A_t = a\right]. \qquad (2)$$
SFs allow one to immediately compute the value of a policy π on any task w: it is easy to show that, when (1) holds, Q_w^π(s, a) = ψ^π(s, a)^⊤ w. It is also easy to see that SFs satisfy a Bellman equation in which φ play the role of rewards, so ψ can be learned using any RL method (Szepesvári, 2010).
GPI is a generalisation of the policy improvement step described in Section 2.1. The difference is that in GPI the improved policy is computed based on a set of value functions rather than on a single one. Suppose that the agent has learned the SFs ψ^{π_i} of policies π_1, π_2, ..., π_n. When exposed to a new task defined by w, the agent can immediately compute Q_w^{π_i}(s, a) = ψ^{π_i}(s, a)^⊤ w. Let the GPI policy be defined as π(s) ∈ argmax_a Q^{max}(s, a), where Q^{max} = max_i Q^{π_i}. The GPI theorem states that Q^π(s, a) ≥ Q^{max}(s, a) for all (s, a) ∈ S × A. The result also extends to the scenario where we replace Q^{π_i} with approximations Q̃^{π_i} (Barreto et al., 2017).
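A minimal sketch of SF&GPI action selection at a single state (our own illustration; psi stacks the learned SFs of the n policies):

```python
import numpy as np

def gpi_action(psi, w):
    """Generalised policy improvement over a set of successor features (a sketch).

    psi : (n_policies, n_actions, d) array of SFs psi^{pi_i}(s, a) at the current state
    w   : (d,) weight vector describing the new task
    """
    q = psi @ w                          # (n_policies, n_actions): Q^{pi_i}_w = psi^T w
    return int(q.max(axis=0).argmax())   # greedy action w.r.t. Q_max = max_i Q^{pi_i}
```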
3 UNIVERSAL SUCCESSOR FEATURES APPROXIMATORS
UVFAs and SF&GPI address the transfer problem described in Section 2.2 in quite different ways. With UVFAs, one trains an approximator Q̃(s, a, w) by solving the training tasks w ∈ M using any RL algorithm of choice. One can then generalise to a new task by plugging its description w′ into Q̃ and then acting according to the policy π(s) ∈ argmax_a Q̃(s, a, w′). With SF&GPI one solves each task w ∈ M and computes an approximation of the SFs of the resulting policies π_w, ψ̃^{π_w}(s, a) ≈ ψ^{π_w}(s, a). The way to generalise to a new task w′ is to use the GPI policy defined as π(s) ∈ argmax_a max_{w∈M} ψ̃^{π_w}(s, a)^⊤ w′.
The algorithmic differences between UVFAs and SF&GPI reflect the fact that these approaches exploit distinct properties of the transfer problem. UVFAs aim at generalising across the space of tasks by exploiting structure in the function Q*(s, a, w). In practice, such a strategy materialises in the choice of function approximator, which carries assumptions about the shape of Q*(s, a, w). For example, by using a neural network to represent Q̃(s, a, w) one is implicitly assuming that Q*(s, a, w) is smooth in the space of tasks; roughly speaking, this means that small perturbations to w will result in small changes in Q*(s, a, w).
In contrast, SF&GPI's strategy to generalise across tasks is to exploit structure in the RL problem itself. GPI builds on the general fact that a greedy policy with respect to a value function will in general perform better than the policy that originated the value function. SFs, in turn, exploit the structure (1) to make it possible to quickly evaluate policies across tasks, and thus to apply GPI in an efficient way. The difference between the types of generalisation promoted by UVFAs and SF&GPI is perhaps even clearer when we note that the latter is completely agnostic to the way the approximations ψ̃^{π_w} are represented, and in fact it can be applied even with a tabular representation.
Obviously, both UVFAs and GPI have advantages and limitations. In order to illustrate this point, consider two tasks w and w′ that are "similar", in the sense that ||w − w′|| is small (|| · || is a norm in R^d). Suppose that we have trained a UVFA on task w and as a result we obtained a good approximation Q̃(s, a, w) ≈ Q*(s, a, w). If the structural assumptions underlying Q̃(s, a, w) hold, for example, if Q̃(s, a, w) is smooth with respect to w, it is likely that Q̃(s, a, w′) will be a good approximation of Q*(s, a, w′). On the other hand, if such assumptions do not hold, we should not expect UVFA to perform well. A sufficient condition for SF&GPI to generalise well from task w to task w′ is that the policy π(s) ← argmax_a ψ̃^{π_w}(s, a)^⊤ w′ does well on task w′, where π_w is a solution for task w. On the downside, SF&GPI will not exploit functional regularities at all, even if they do exist. Let policy π_{w′} be a solution for task w′. In principle we cannot say anything about ψ^{π_{w′}}(s, a), the SFs of π_{w′}, even if we have a good approximation ψ̃^{π_w}(s, a) ≈ ψ^{π_w}(s, a).
As one can see, the types of generalisation provided by UVFAs and SF&GPI are in some sense complementary. It is then natural to ask if we can simultaneously have the two types of generalisation. In this paper we propose a model that provides exactly that. The main insight is actually simple: since SFs are multi-dimensional value functions, we can extend them in the same way as universal value functions extend regular value functions. In the next section we elaborate on how exactly to do so.
3.1 UNIVERSAL SUCCESSOR FEATURES
As discussed in Section 2.2, UVFs are an extension of standard value functions defined as Q*(s, a, w). If π_w is one of the optimal policies of task w, we can rewrite the definition as Q^{π_w}(s, a, w). This makes it clear that the argument w plays two roles in the definition of a UVF: it determines both the task w and the policy π_w (which will be optimal with respect to w). This does not have to be the case, though. Similarly to Sutton et al.'s (2011) general value functions (GVFs), we could in principle define a function Q(s, a, w, π) that "disentangles" the task from the policy. This would provide a model that is even more general than UVFs. In this section we show one way to construct such a model when assumption (1) (approximately) holds.
Note that, when (1) is true, we can revisit the definition of SFs and write Q(s, a, w, π) = ψ^π(s, a)^⊤ w. If we want to be able to compute Q(s, a, w, π) for any π, we need SFs to span the space of policies π. Thus, we define universal successor features as ψ(s, a, π) ≡ ψ^π(s, a). Based on such a definition, we call ψ̃(s, a, π) ≈ ψ(s, a, π) a universal successor features approximator (USFA).
In practice, when defining a USFA we need to define a representation for the policies π. A natural choice is to embed π onto R^k. Let e : (S → A) → R^k be a policy-encoding mapping, that is, a function that turns policies π into vectors in R^k. We can then see USFs as a function of e(π): ψ(s, a, e(π)). The definition of the policy-encoding mapping e(π) can have a strong impact on the structure of the resulting USF. We now point out a general equivalence between policies and reward functions that will provide a practical way of defining e(π). It is well known that any reward function induces a set of optimal policies (Puterman, 1994). A point that is perhaps less immediate is that the converse is also true. Given a deterministic policy π, one can easily define a set of reward functions that induce this policy: for example, we can have r_π(s, π(s), ·) = 0 and r_π(s, a, ·) = c, with c < 0, for any a ≠ π(s). Therefore, we can use rewards to refer to deterministic policies and vice-versa (as long as potential ambiguities are removed when relevant).
Since here we are interested in reward functions of the form (1), if we restrict our attention to policies induced by tasks z ∈ R^d we end up with a conveniently simple encoding function e(π_z) = z. From this encoding function it follows that Q(s, a, w, π_z) = Q(s, a, w, z). It should be clear that UVFs are a particular case of this definition when w = z. Going back to the definition of USFs, we can finally write Q(s, a, w, z) = ψ(s, a, z)^⊤ w. Thus, if we learn a USF ψ(s, a, z), we have a value function that generalises over both tasks and policies, as promised.
3.2 USFA GENERALISATION
We now revisit the question as to why USFAs should provide the benefits associated with both UVFAs and SF&GPI. We will discuss how exactly to train a USFA in the next section, but for now suppose that we have trained one such model ψ̃(s, a, z) using the training tasks in M. It is then not difficult to see that we can recover the solutions provided by both UVFAs and SF&GPI. Given an unseen task w′, let π be the GPI policy defined as

$$\pi(s) \in \operatorname*{argmax}_{a} \max_{\mathbf{z} \in \mathcal{C}} \tilde{Q}(s, a, \mathbf{w}', \mathbf{z}) = \operatorname*{argmax}_{a} \max_{\mathbf{z} \in \mathcal{C}} \tilde{\boldsymbol{\psi}}(s, a, \mathbf{z})^\top \mathbf{w}', \qquad (3)$$

where C ⊂ R^d. Clearly, if we make C = {w′}, we get exactly the sort of generalisation associated with UVFAs. On the other hand, setting C = M essentially recovers SF&GPI.
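With a USFA, the same GPI computation runs over an arbitrary candidate set C; a sketch, where usfa_net(state, z) -> (n_actions, d) is an assumed interface rather than the paper's actual network:

```python
import torch

def usfa_gpi_action(usfa_net, state, w_new, candidates):
    """Eq. (3): GPI over any candidate policy set C using a USFA (a sketch).

    usfa_net   : maps (state, z) to SFs of shape (n_actions, d); assumed interface
    candidates : iterable of policy embeddings z in R^d, e.g. training tasks plus w_new
    """
    with torch.no_grad():
        q = torch.stack([usfa_net(state, z) @ w_new for z in candidates])  # (|C|, n_actions)
    return int(q.max(dim=0).values.argmax())
```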
The fact that we can recover both UVFAs and SF&GPI opens up a spectrum of possibilities in between the two. For example, we could apply GPI over the training set augmented with the current task, C = M ∪ {w′}. In fact, USFAs allow us to apply GPI over any set of tasks C ⊂ R^d. The benefits of this flexibility are clear when we look at the theory supporting SF&GPI, as we do next. Barreto et al. (2017) provide theoretical guarantees on the performance of SF&GPI applied to any task w′ ∈ M′ based on a fixed set of SFs. Below we state a slightly more general version of this result that highlights the two types of generalisation promoted by USFAs (the proof follows from Barreto et al.'s (2017) Theorem 2).
Proposition 1 Let w′ ∈ M′ and let Q^π_{w′} be the action-value function of executing policy π on task w′. Given approximations {Q̂^{π_z}_{w′} = ψ̂(s, a, z)^⊤ w′}_{z∈C}, let π be the GPI policy defined in (3). Then,
$$\|Q^{*}_{w'} - Q^{\pi}_{w'}\|_{\infty} \le \frac{2}{1-\gamma}\bigg(\min_{z\in\mathcal{C}} \underbrace{\|\phi\|_{\infty}\,\|w' - z\|}_{\delta_d(z)} \; + \; \max_{z\in\mathcal{C}} \underbrace{\|w'\| \cdot \|\psi^{\pi_z} - \hat{\psi}(s, a, z)\|_{\infty}}_{\delta_\psi(z)}\bigg), \tag{4}$$
where Q*_{w′} is the optimal value of task w′, ψ^{π_z} are the SFs corresponding to the optimal policy for task z, and ‖f − g‖_∞ = max_{s,a} |f(s, a) − g(s, a)|.
When we write the result in this form, it becomes clear that, for each policy π_z, the right-hand side of (4) involves two terms: i) δ_d(z), the distance between the task of interest w′ and the task that induced π_z, and ii) δ_ψ(z), the quality of the approximation of the SFs associated with π_z.
In order to get the tightest possible bound in (4) we want to include in C the z that minimises δ_d(z) + δ_ψ(z). This is exactly where the flexibility of choosing C provided by USFAs can come in handy. Note that, if we choose C = M, we recover Barreto et al.'s (2017) bound, which may have an irreducible min_{z∈C} δ_d(z) even with a perfect approximation of the SFs in M. On the other extreme, we could query our USFA at the test point, C = {w′}. This would result in δ_d(w′) = 0, but can potentially incur a high cost due to the associated approximation error δ_ψ(w′).
HOW TO TRAIN A USFA
Now that we have an approximation Q̂(s, a, w, z), a natural question is how to train this model. In this section we show that the decoupled nature of Q̂ is reflected in the training process, which assigns clearly distinct roles to tasks w and policies π_z.
In our scenario, a transition at time t will be (s_t, a_t, φ_{t+1}, s_{t+1}). Note that φ allows us to compute the reward for any task w, and since our policies are encoded by z, transitions of this form allow us to learn the value function of any policy π_z on any task w. To see why this is so, let us define the temporal-difference (TD) error (Sutton and Barto, 1998) used in learning these value functions.
Given transitions in the form above, the n-step TD error associated with policy π_z on task w will be
$$\delta^{t,n}_{wz} = \sum_{i=t}^{t+n-1} \gamma^{i-t}\, r_w(s_i, a_i, s_{i+1}) + \gamma^{n}\, \hat{Q}(s_{t+n}, \pi_z(s_{t+n}), w, z) - \hat{Q}(s_t, a_t, w, z)$$
$$= \bigg(\sum_{i=t}^{t+n-1} \gamma^{i-t}\, \phi(s_i, a_i, s_{i+1}) + \gamma^{n}\, \hat{\psi}(s_{t+n}, a_{t+n}, z) - \hat{\psi}(s_t, a_t, z)\bigg)^{\!\top} w = (\delta^{t,n}_{z})^{\top} w, \tag{5}$$
where a_{t+n} = argmax_b Q̂(s_{t+n}, b, z, z) = argmax_b ψ̂(s_{t+n}, b, z)^⊤ z. As is well known, the TD error δ^{t,n}_{wz} allows us to learn the value of policy π_z on task w; since here δ^{t,n}_{wz} is a function of z and w only, we can learn about any policy π_z on any task w by just plugging in the appropriate vectors.
Equation (5) highlights some interesting (and subtle) aspects involved in training a USFA. Since the value function Q̂(s, a, w, z) can be decoupled into two components, ψ̂(s, a, z) and w, the process of evaluating a policy on a task reduces to learning ψ̂(s, a, z) using the vector-based TD error δ^{t,n}_{z} showing up in (5).
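A sketch of this computation is given below. It builds the vector-valued n-step error δ^{t,n}_z of (5) from a short trajectory and then projects it onto a task w; psi_hat is a random stub, and γ, n, and all arrays are illustrative assumptions.

import numpy as np

gamma, n, d, n_actions = 0.9, 2, 3, 4
rng = np.random.default_rng(2)

def psi_hat(s, a, z):
    return rng.normal(size=d)  # stub for the learned SFs of pi_z

# phis[i] = phi(s_i, a_i, s_{i+1}) along the sampled trajectory.
phis = [rng.normal(size=d) for _ in range(n)]
z = np.array([1.0, 0.0, 0.0])

# Greedy action of pi_z at s_{t+n}: argmax_b psi_hat(s_{t+n}, b, z)^T z.
a_next = max(range(n_actions), key=lambda b: psi_hat("s_tn", b, z) @ z)

delta_z = sum(gamma**i * phis[i] for i in range(n)) \
    + gamma**n * psi_hat("s_tn", a_next, z) - psi_hat("s_t", 0, z)

for w in (np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0])):
    print(delta_z @ w)  # scalar TD error (delta_z)^T w for task w

The point of the sketch is the final loop: one vector delta_z serves every task w.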
Since δ^{t,n}_{z} is a function of z only, the updates to Q̂(s, a, w, z) will not depend on w. How do the tasks w influence the training of a USFA, then? If sample transitions are collected by a behaviour policy, as is usually the case in online RL, a natural choice is to have this policy be induced by a task w. When this is the case, the training tasks w ∈ M will define the distribution used to collect sample transitions. Whenever we want to update ψ̂(s, a, z) for a different z than the one used in generating the data, we find ourselves under the off-policy regime (Sutton and Barto, 1998).
Assuming that the behaviour policy is induced by the tasks w ∈ M, training a USFA involves two main decisions: how to sample tasks from M and how to sample policies π_z to be trained through (5) or some variant. As alluded to before, these decisions may have a big impact on the performance of the resulting USFA, and in particular on the trade-offs involved in the choice of the set of policies C used by the GPI policy (3). As a form of illustration, Algorithm 1 shows a possible regime to train a USFA based on particularly simple strategies to select tasks w ∈ M and to sample policies z ∈ R^d. One aspect of Algorithm 1 worth calling attention to is the fact that the distribution D_z used to select policies can depend on the current task w. This allows one to focus on specific regions of the policy space; for example, one can sample policies using a Gaussian distribution centred around w.
Algorithm 1 Learn USFA with ε-greedy Q-learning
Require: ε, training tasks M, distribution D_z over R^d, number of policies n_z
 1: select initial state s ∈ S
 2: for n_s steps do
 5:   for i ← 1, 2, ..., n_z do z_i ∼ D_z(·|w)
 6:   if Bernoulli(ε) = 1 then a ← Uniform(A)
 7:   else a ← argmax_b max_i ψ̂(s, b, z_i)^⊤ w   {GPI}
 8:   execute action a and observe φ and s′
 9:   for i ← 1, 2, ..., n_z do   {update ψ̂}
10:     a′ ← argmax_b ψ̂(s′, b, z_i)^⊤ z_i   {a′ ≡ π_i(s′)}
11:     θ ← θ + α [φ + γ ψ̂(s′, a′, z_i) − ψ̂(s, a, z_i)] ∇_θ ψ̂
12:   s ← s′
13: return θ

Figure 1: USFA architecture
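Rendered in Python, the loop of Algorithm 1 might look as follows. This is a schematic one-step version: env, psi_hat, grad_step and sample_z are stand-ins for an environment, the USF approximator, its parameter update and the distribution D_z, and the task w is drawn uniformly from M here as an assumption, since the task-sampling step is left open above.

import numpy as np

def train_usfa(env, psi_hat, grad_step, sample_z, tasks_M, n_steps,
               n_z=5, eps=0.1, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    s = env.reset()
    w = tasks_M[rng.integers(len(tasks_M))]      # assumed task sampling
    for _ in range(n_steps):
        zs = [sample_z(w) for _ in range(n_z)]   # z_i ~ D_z(.|w)
        if rng.random() < eps:                   # eps-greedy exploration
            a = int(rng.integers(env.n_actions))
        else:                                    # GPI over the sampled z_i
            q = np.stack([psi_hat(s, z) @ w for z in zs])
            a = int(q.max(axis=0).argmax())
        s_next, phi, done = env.step(a)
        for z in zs:                             # one TD step per policy
            a_next = int((psi_hat(s_next, z) @ z).argmax())
            target = phi + gamma * psi_hat(s_next, z)[a_next]
            grad_step(z, s, a, target)           # move psi_hat(s, a, z) to target
        if done:
            s = env.reset()
            w = tasks_M[rng.integers(len(tasks_M))]
        else:
            s = s_next

Here psi_hat(s, z) is assumed to return an (n_actions, d) array of successor features.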
EXPERIMENTS
In this section we describe the experiments conducted to test the proposed architecture in a multitask setting and assess its ability to generalise to unseen tasks.
ILLUSTRATIVE EXAMPLE: TRIP MDP
We start with a simple illustrative example to provide intuition on the kinds of generalisation provided by UVFAs and SF&GPI. We also show how, in this example, USFAs can effectively leverage both types of generalisation and outperform their precursors. For this we will consider the simple two-state MDP depicted in Figure 2. To motivate the example, suppose that state s_1 of our MDP represents the arrival of a traveler to a new city. The traveler likes coffee and food and wants to try what the new city has to offer. In order to model that, we will use features φ ∈ R^2, with φ_1 representing the quality of the coffee and φ_2 representing the quality of the food, both ranging from 0 to 1. The traveler has done some research and identified the places that serve the best coffee and the best food; in our MDP these places are modelled by terminal states associated with actions 'C' and 'F', whose respective feature vectors are φ(·, C) = φ(C) = [1, 0] and φ(F) = [0, 1]. As one can infer from these feature vectors, the best coffee place does not serve food and the best restaurant does not serve coffee (at least not a very good one). Nevertheless, there are other places in town that serve both; as before, we will model these places by actions P_i associated with features φ(P_i). We assume that ‖φ(P_i)‖_2 = 1 and consider N = 5 alternative places P_i evenly spaced on the preference spectrum.

We model how much the traveler wants coffee and food on a given day by w ∈ R^2. If the traveler happens to want only one of these (i.e. w ∈ {[1, 0], [0, 1]}), she can simply choose action 'C' or 'F' and get a reward r = φ(·)^⊤ w = 1. If instead she wants both coffee and food (i.e. if w is not a "one-hot" vector), it may actually be best to venture out to one of the other places. Unfortunately, this requires the traveler to spend some time researching the area, which we model by an action 'E' associated with feature φ(E) = [−ε, −ε]. After choosing 'E' the traveler lands on state s_2 and can now reach any place in town: C, F, P_1, ..., P_N. Note that, depending on the vector of preferences w, it may be worth paying the cost φ(E)^⊤ w to subsequently get a reward of φ(P_i)^⊤ w (here γ = 1).
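The MDP just described is small enough to write down directly. The following sketch builds the feature vectors and compares the return of going straight to C or F with the return of exploring first; the parametrisation of the P_i on the unit circle follows the description in Section B, and ε = 0.05 is the value used there, so both are assumptions at this point in the text.

import numpy as np

N, eps = 5, 0.05
phi = {"C": np.array([1.0, 0.0]),        # best coffee, no food
       "F": np.array([0.0, 1.0]),        # best food, no coffee
       "E": np.array([-eps, -eps])}      # cost of exploring the area
for k in range(1, N + 1):                # N mixed places with ||phi||_2 = 1
    theta = np.pi * k / (2 * (N + 1))
    phi[f"P{k}"] = np.array([np.cos(theta), np.sin(theta)])

w = np.array([0.6, 0.8])                 # today's preferences
direct = max(phi["C"] @ w, phi["F"] @ w)
explore = phi["E"] @ w + max(v @ w for a, v in phi.items() if a != "E")
print(direct, explore)                   # for mixed w, exploring pays off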
In order to assess the transfer ability of UVFAs, SF&GPI and USFAs, we define a training set M = {[1, 0], [0, 1]} and K = 50 test tasks corresponding to directions in the two-dimensional w-space:
M′ = {w′ | w′ = [cos(πk/(2K)), sin(πk/(2K))], k = 0, 1, ..., K}.
We start by analysing what SF&GPI would do in this scenario. We focus on training task w_C = [1, 0], but an analogous reasoning applies to task w_F = [0, 1]. Let π_C be the optimal policy associated with task w_C. It is easy to see that π_C(s_1) = π_C(s_2) = C. Thus, under π_C it should be clear that Q^{π_C}_{w′}(s_1, C) > Q^{π_C}_{w′}(s_1, E) for all test tasks w′. Since the exact same reasoning applies to task w_F if we replace action C with action F, the GPI policy (3) computed over {ψ^{π_C}, ψ^{π_F}} will be suboptimal for most test tasks in M′. Training a UVFA on the same set M will not be perfect either, due to the very limited number of training tasks. Nonetheless, the smoothness of the approximation allows for a slightly better generalisation on M′.
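This failure of GPI restricted to the training policies can be checked numerically. The sketch below computes the exact SFs of π_C and π_F at s_1 (with γ = 1, as above) and the GPI values they induce on a mixed test task; all variable names are ours.

import numpy as np

eps = 0.05
phi_C, phi_F = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi_E = np.array([-eps, -eps])

# SFs at s_1 per action; after E, pi_C heads to C and pi_F heads to F.
psi_C = {"C": phi_C, "F": phi_F, "E": phi_E + phi_C}
psi_F = {"C": phi_C, "F": phi_F, "E": phi_E + phi_F}

w_test = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])  # wants both
gpi_q = {a: max(psi_C[a] @ w_test, psi_F[a] @ w_test)
         for a in ("C", "F", "E")}
print(gpi_q)  # 'E' never attains the maximum: this GPI policy never explores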
Alternatively, we can use Algorithm 1 to train a USFA on the training set M. In order to do so we sampled n_z = 5 policies z ∈ R^2 using a uniformly random distribution D_z(·|w) = U([0, 1]^2) (see line 5 in Algorithm 1). When acting on the test tasks w′ we considered two choices of the candidate set: C = {w′} and C = {z_i | z_i ∼ U([0, 1]^2), i = 1, 2, ..., 5}. Empirical results are provided in Figure 2. As a reference we report the performance of SF&GPI using the true {ψ^{π_C}, ψ^{π_F}} (no approximation). We also show the learning curve of a UVFA. As shown in the figure, USFA clearly outperforms its precursors and quickly achieves near-optimal performance. This is due to two factors. First, contrary to vanilla SF&GPI, USFA can discover and exploit the rich structure in the policy space, enjoying the same generalisation properties as UVFAs, but now enhanced by the combination of the off-policy and off-task training regime. Second, the ability to sample a candidate set C that induces some diversity in the policies considered by GPI overcomes the suboptimality associated with the training SFs ψ^{π_C} and ψ^{π_F}. We explore this effect in a bit more detail in the supplementary material.

LARGE-SCALE EXPERIMENTS

Environment and tasks. We used the DeepMind Lab platform to design a 3D environment consisting of one large room containing four types of objects: TVs, balls, hats, and balloons (Beattie et al., 2016; Barreto et al., 2018). A depiction of the environment through the eyes of the agent can be seen in Fig. 3. Features φ_i are indicator functions associated with object types, i.e., φ_i(s, a, s′) = 1 if and only if the agent collects an object of type i (say, a TV) on the transition from s to s′. A task is defined by four real numbers w ∈ R^4 indicating the rewards attached to each object type. Note that these numbers can be negative, in which case the agent has to avoid the corresponding object type. For instance, in task w = [1, -1, 0, 0] (shorthand 1-100) the agent is interested in objects of the first type and should avoid objects of the second type.
Agent architecture. A depiction of the architecture used for the USFA agent is illustrated in Fig. 1. The architecture has three main modules: i) a fairly standard input processing unit composed of three convolution layers and an LSTM followed by a non-linearity (Schmidhuber, 1996); ii) a policy conditioning module that combines the state embedding coming from the first module, s ≡ f(h), and the policy embedding, z, and produces |A| · d outputs corresponding to the SFs of policy π_z, ψ̂(s, a, z); and iii) the evaluation module, which, given a task w and the SFs ψ̂(s, a, z), constructs the evaluation of policy π_z on w, Q̂(s, a, w, z) = ψ̂(s, a, z)^⊤ w.
Training and baselines. We trained the above architecture end-to-end using a variation of Alg. 1 that uses Watkins's (1989) Q(λ) to apply Q-learning with eligibility traces. For the distribution D_z used in line 5 of Alg. 1 we adopted a Gaussian centred at w: z ∼ N(w, 0.1 I), where I is the identity matrix. We used the canonical vectors of R^4 as the training set, M = {1000, 0100, 0010, 0001}.
Once an agent was trained on M we evaluated it on a separate set of unseen tasks, M′, using the GPI policy (3) over different sets of policies C. Specifically, we used: C = {w′}, which corresponds to a UVFA with an architecture specialised to (1); C = M, which corresponds to doing GPI on the SFs of the training policies (similar to Barreto et al. (2017)); and C = M ∪ {w′}, which is a combination of the previous two. We also included as baselines two standard UVFAs that do not take advantage of the structure (1); one of them was trained on-policy and the other one was trained off-policy (see supplement). The evaluation on the test tasks M′ was done by "freezing" the agent at different stages of the learning process and using the GPI policy (3) to select actions. To collect and process the data we used an asynchronous scheme similar to IMPALA (Espeholt et al., 2018).

RESULTS AND DISCUSSION

Fig. 4 shows the results of the agents after being trained on M. One thing that immediately stands out in the figure is the fact that all architectures generalise quite well to the test tasks. This is a surprisingly good result when we consider the difficulty of the scenario considered: recall that the agents are solving the test tasks without any learning taking place. This performance is even more impressive when we note that some test tasks contain negative rewards, something never experienced by the agents during training. When we look at the relative performance of the agents, it is clear that USFAs perform considerably better than the unstructured UVFAs. This is true even for the case C = {w′}, in which USFAs essentially reduce to a structured UVFA that was trained by decoupling tasks and policies. The fact that USFAs outperform UVFAs in the scenario considered here is not particularly surprising, since the former exploit the structure (1) while the latter cannot. In any case, it is reassuring to see that our model can indeed exploit such a structure effectively. This result also illustrates a particular way of turning prior knowledge about a problem into a favourable inductive bias in the UVFA architecture.

It is also interesting to see how the different instantiations of USFAs compare against each other. As shown in Fig. 4, there is a clear advantage in including M in the set of policies C used in GPI (3). This suggests that, in the specific instantiation of this problem, the type of generalisation provided by SF&GPI is more effective than that associated with UVFAs. One result that may seem counter-intuitive at first is the fact that USFAs with C = M ∪ {w′} sometimes perform worse than their counterparts using C = M, especially on tasks with negative rewards. Here we note two points. First, although including more tasks in C results in stronger guarantees for the GPI policy, strictly speaking there are no guarantees that the resulting policy will perform better (see Theorem 1 of Barreto et al. (2017)). Another explanation, very likely in the current scenario, is that errors in the approximations ψ̂(s, a, z) may have a negative impact on the resulting GPI policy (3). As argued previously, adding a point to C can sometimes increase the upper bound in (4), if the approximation at this point is not reliable. On the other hand, comparing USFA's results using C = M ∪ {w′} and C = {w′}, we see that by combining the generalisation of UVFAs and GPI we can boost the performance of a model that relies on only one of them. This highlights the fine balance between the two error terms in (4) and emphasizes how critical selecting low-error candidates in C can be.

In the above scenario, SF&GPI on the training set M seems to provide a more effective way of generalising, as compared to UVFAs, even when the latter has a structure specialised to (1). Nevertheless, with less conservative choices of D_z that provide a greater coverage of the z space, we expect the structured UVFA (C = {w′}) to generalise better. Note that this can be done without changing M, and is not possible with conventional UVFAs. One of the strengths of USFAs is exactly that: by disentangling tasks and policies, one can learn about the latter without ever having to actually try them out in the environment. We exploit this possibility and repeat our experiments, now using D_z = N(w, 0.5 I). Results are shown in Fig. 5. As expected, the generalisation of the structured UVFA improves considerably, almost matching that of GPI. This shows that USFAs can operate in two regimes: i) with limited coverage of the policy space, GPI over M will provide a reliable generalisation; ii) with a broader coverage of the space, structured UVFAs will do increasingly better.¹
RELATED WORK
Multitask RL is an important topic that has generated a large body of literature. Solutions to this problem can result in better performance on the training set (Espeholt et al., 2018), can improve data efficiency (Teh et al., 2017) and enable generalisation to new tasks. For a comprehensive presentation of the subject please see Taylor and Stone (2009) and Lazaric (2012) and references therein.
There exist various techniques that incorporate tasks directly into the definition of the value function for multitask learning (Kaelbling, 1993; Ashar, 1994; Sutton et al., 2011). UVFAs have been used for zero-shot generalisation to combinations of tasks (Mankowitz et al., 2018; Hermann et al., 2017), or to learn a set of fictitious goals previously encountered by the agent (Andrychowicz et al., 2017).
Many recent multitask methods have been developed for learning subtasks or skills for a hierarchical controller (Vezhnevets et al., 2017; Andreas et al., 2016; Oh et al., 2017). In this context, Devin et al. (2017) and Heess et al. (2016) proposed reusing and composing sub-networks that are shared across tasks and agents in order to achieve generalisation to unseen configurations. Finn et al. (2017) use meta-learning to acquire skills that can be fine-tuned effectively. Sequential learning and how to retain previously learned skills have been the focus of a number of investigations (Kirkpatrick et al., 2016; Rusu et al., 2016). All of these works aim to train an agent (or a sub-module) to generalise across many subtasks, and all of them could be natural use cases for USFAs.
USFAs use a UVFA to estimate SFs over multiple policies. The main reason to do so is to apply GPI, which provides a superior zero-shot policy in an unseen task. There have been previous attempts to combine SFs and neural networks, but none of them used GPI (Kulkarni et al., 2016; Zhang et al., 2016). Recently, Ma et al. (2018) have also considered combining SFs and UVFAs. They propose building a goal-conditioned policy that aims to generalise over a collection of goals. In their work, the SFs are trained to track this policy and are only used to build the critic employed in training the goal-conditioned policy. Thus, they consider extrapolation in π(s, g) and use the SFs as an aid in training. Moreover, as the training of both the SFs and π(s, g) is done on-policy, the proposed system has only seen instances where the SFs, critic and policy are all conditioned on the same goal. In contrast, in this work we argue and show the benefits of decoupling the task and the policy to enable generalisation via GPI when appropriate, while preserving the ability to exploit the structure in the policy space. We use the SFs as a way to factorise and effectively exploit the structure in the value-function space, and we use these evaluations directly to inform our action selection.
CONCLUSION
In this paper we presented USFAs, a generalisation of UVFAs through SFs. The combination of USFAs and GPI results in a powerful model capable of exploiting the same types of regularity exploited by its precursors: structure in the value function, like UVFAs, and structure in the problem itself, like SF&GPI. This means that USFAs can not only recover their precursors but also provide a whole new spectrum of possible models in between them. We described the choices involved in training and evaluating a USFA and discussed the trade-offs associated with these alternatives.

To make the discussion concrete, we presented two examples aimed at illustrating different regimes of operation. The first example embodies an MDP where generalisation in the optimal-policy space is fairly easy but the number of optimal policies we would want to represent can be large. This is a scenario where UVFAs would thrive, while vanilla SF&GPI will struggle due to the large number of policies needed to build a good GPI policy. In this case, we showed that USFAs can leverage the sort of parametric generalisation provided by UVFAs and even improve on it, due to their decoupled training regime and the use of GPI in areas where the approximation is not quite perfect. Our second example is in some sense a reciprocal one, where we know from previous work that the generalisation provided via GPI can be very effective even on a small set of policies, while generalising in the space of optimal policies, like UVFAs do, seems to require a lot more data. Here we showed that USFAs can recover the type of generalisation provided by SFs when appropriate. This example also highlights some of the complexities involved in training at scale and shows how USFAs are readily applicable to this scenario.

Overall, we believe USFAs are a powerful model that can effectively exploit the available structure: i) the structure induced by the shared dynamics (via SFs), ii) the structure in the policy space (like UVFAs) and finally iii) the structure in the RL problem itself (via GPI), and could potentially be useful across a wide range of RL applications that exhibit these properties.
Universal Successor Features Approximators - Supplementary Material

A TWO TYPES OF GENERALISATION: INTUITION
In this paper we argue that one of the main benefits provided by USFAs is the ability to combine the types of generalisation associated with UVFAs and GPI. In this section, we take a close look at a very simple example to illustrate the two types of generalisation we are considering and how they differ. This is a very small example where the number of optimal policies is very limited and the induced tasks are not that interesting, but we chose it solely to illustrate the decision process induced by GPI and how it differs from parametric generalisation in w via a functional approximator (FA).
Let us consider an MDP with a single state s and two actions. Upon executing action a_1 the agent gets a reward of 0 and remains in state s; the execution of action a_2 leads to a potentially non-zero reward followed by termination. We define unidimensional features φ ∈ R as φ(s, a_1) = 0 and φ(s, a_2) = 1. A task is thus induced by a scalar w ∈ R, which essentially re-scales φ(s, a_2) and defines the reward r_w = w the agent receives before termination. In this space of tasks, one can easily see that there are only two optimal policies: taking action a_2 and receiving the reward r_w = w if w ≥ 0, or taking action a_1 and remaining in s indefinitely if w < 0. Thus the space of optimal value functions is very simple. For convenience, we include a depiction of this space in Figure 6.
Figure 6: Optimal value space as a function of a scalar task description w.

Suppose now we are in the scenario studied in the paper, where after training on a set of tasks M the agent should generalise to a test task w′. Specifically, let us consider three points in this space, M = {w_1, w_2, w_3}: three tasks for which we learn and approximate the optimal value functions {Q̂*_{w_1}, Q̂*_{w_2}, Q̂*_{w_3}}. Given these three points we are going to fit a parametric function that aims to generalise in the space of w. A depiction of this is included in Figure 7(a). Now, given a new point w′, we can obtain a zero-shot estimate Q̂*_{w′} of Q*_{w′} (see Figure 7(b)). Due to approximation error under a very limited number of training points, this estimate will typically not recover Q*_{w′} = 0 perfectly. In the case of a UVFA (and other FAs trying to generalise in task space), we are going to get a guess based on the optimal value functions we have built, and we are going to take decisions based on this estimate Q̂*_{w′}.

Given the same base tasks M = {w_1, w_2, w_3}, we can now look at what the other method of generalisation would be doing. We are going to denote by π_{z_i} the (inferred) optimal policy of task w_i. Since we have learnt the SFs corresponding to all of these policies, ψ^{π_{z_i}}, we can now evaluate how well each of these policies would do on the current test task w′: Q^{π_z}_{w′}(s, a) = ψ^{π_z}(s, a)^⊤ w′ for all z ∈ M. A depiction of this step is included in Figure 8(a). Given these evaluations of previous behaviours, GPI takes the maximum of these values, "trusting", in a sense, the most promising value. In our case this corresponds to the behaviour associated with task w_3, which here happens to have the same optimal policy as our test task w′. Thus, in this particular example, the evaluation of a previously learned behaviour gives us a much better basis for inducing a behaviour in our test task w′. Moreover, if the SFs are perfect we would automatically get the optimal value function for w′.

It is worth noting that, in this case, by learning a USFA we can recover both scenarios described above, based on our choice of the candidate set C for GPI. In particular, if C = {w′} we recover the mechanism in Figure 7, while for C = M we recover the generalisation in Figure 8. Furthermore, for any choice of training points that includes one positive and one negative value of w, by relying on GPI we can generalise perfectly to the whole space of tasks, while an approach based exclusively on the sort of generalisation provided by UVFAs may struggle to fit the full function. Analogously, in scenarios where the structure of the optimal-value space favours (UV)FAs, we expect USFAs to leverage this type of generalisation. An example of such a scenario is given in the first part of the experimental section (Section 4.1), with further details in Section B.
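The two mechanisms can be contrasted numerically on this MDP. The sketch below uses the exact SFs of the two optimal policies (pi_plus takes a_2, pi_zero takes a_1 forever); a discount γ = 0.9 is introduced to keep the values finite, which is our assumption, not part of the example above.

import numpy as np

gamma = 0.9
# psi[pi][a]: SFs of taking action a once and then following pi.
psi = {"pi_plus": {"a1": gamma * 1.0, "a2": 1.0},  # a1, then terminate via a2
       "pi_zero": {"a1": 0.0, "a2": 1.0}}          # a2 still terminates

for w_test in (-0.5, 0.3):
    q = {a: max(psi[p][a] * w_test for p in psi) for a in ("a1", "a2")}
    print(w_test, max(q, key=q.get))  # GPI: a1 for w' < 0, a2 for w' > 0

A parametric fit through three sampled optimal values, by contrast, carries no such guarantee away from the training points.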
B ILLUSTRATIVE EXAMPLE: TRIP MDP
In this section we provide some additional analysis and results omitted from the main text. As a reminder, this is a two-state MDP, where the first state is a root state, the transition from s_1 to s_2 comes at a cost r_w(s_1, E) = φ(s_1, E)^⊤ w = −ε(w_1 + w_2), and all other actions lead to a final positive reward corresponding to how much the resulting state/restaurant aligns with our current preferences (our task). For convenience, we provide below the depiction of the Trip MDP introduced in Section 4.1.
(Depiction of the Trip MDP: the traveler has just arrived in a new city; states S₁ and S₂, actions E, C, F.)
In the experiments we considered a fixed set of training tasks M = {[0, 1], [1, 0]} for all methods. The set of outcomes from the exploratory state s_2 is defined as φ(s_2, a) = [cos(θ), sin(θ)] for θ ∈ {kπ/(2N)}_{k=0,...,N}. Note that this includes the binary outcomes, for k = 0 and k = N respectively. We ran this MDP with N = 6 and ε = 0.05. Thus, outside the binary outcomes, the agent can select N − 1 = 5 other mixed outcomes and, as argued in the main text, under these conditions there will be a region of the w-space in which each of these outcomes is optimal. Thus the space of optimal policies we hope to recover contains, in general, N + 1 policies. Nevertheless, there is a lot of structure in this space that the function approximators can uncover and employ in their generalisation.
B.1 ADDITIONAL RESULTS
In the main paper, we reported the zero-shot performance aggregated over all directions M′ = {w′ | w′ = [cos(πk/(2K)), sin(πk/(2K))], k = 0, 1, ..., K}. This should cover most of the space of tasks/trade-offs we would be interested in. In this section we include the generalisation results for other sets M′. First, in Fig. 9 we depict the performance of the algorithms considered across the whole w-space, M′ = [0, 1]^2. Fig. 10 is a different visualisation of the same data, where we focus on how far these algorithms are from recovering the optimal performance. This also shows the subtle effect, mentioned in the discussion in the main text, induced by the choice of C in the USFA evaluation.
A particularly adversarial choice of test tasks for vanilla SF&GPI would be the diagonal of the [0, 1]^2 quadrant depicted in the plots above: M′ = {w′ | w′_1 = w′_2, w′_1 ∈ [0, 1]}. This is, in a sense, maximally far away from the training tasks, and both of the precursor models are bound to struggle in this portion of the space. This intuition was indeed empirically validated. Results are provided in Fig. 11. As mentioned above, this is an adversarial evaluation, mainly meant to point out that, in general, there might be regions of the space where the generalisation of the previous models can be very bad, but where the combination of them can still recover close to optimal performance.

The remainder of this supplement contains a detailed description of the USFA agent used in our experimental section. As a reminder, we include the agent's architecture below (Figure 1 in the main text).

C.1 AGENT'S ARCHITECTURE

As highlighted in Section 4.2, our agent comprises three main modules (a minimal code sketch follows the list):
• Input processing module: computes a state representation f(h_t) from observation o_t. This module is made up of three convolutional layers (structure identical to the one used in Mnih et al. (2015)), the output of which then serves as input to an LSTM with 256 units. This LSTM also takes as input the previously executed action a_{t−1}. The output of the LSTM is passed through a non-linearity f (chosen here to be a ReLU) to produce a vector of 128 units, f(h_t).
• Policy conditioning module: computes the SFs ψ̂(s, a, z), given a (sampled) policy embedding z and the state representation f(h_t). This module first produces n_z samples z ∼ D_z (n_z = 30 in our experiments). Each of these is then transformed via a 2-layer MLP(32, 32) to produce a vector g(z) of size 32 for each sample z. This vector g(z) gets concatenated with the state representation f(h_t), and the resulting vector is further processed by a 2-layer MLP that produces a tensor of dimensions d × |A| for each z, where d = dim(φ). These correspond to the SFs ψ̂(s, a, z) for policy π_z. Note that this computation can be done quite efficiently by reusing the state embedding f(h_t) and carrying out the downstream computation in parallel for each policy embedding z.
• Task evaluation module: computes the value function Q̂(s, a, z, w) = ψ̂(s, a, z)^⊤ w for a given task description w. This module does not have any parameters, as the value functions are simply composable from ψ̂(s, a, z) and the task description w via assumption (1). It outputs n_z value functions that will be used to produce a behaviour via GPI.
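A schematic PyTorch rendering of these three modules is given below. The sizes stated in the text (32-unit policy embedding, 128-unit state embedding, d × |A| SF head) are kept; the convolution/LSTM stack is abbreviated to a single linear layer, and all remaining sizes are illustrative assumptions.

import torch
import torch.nn as nn

class USFA(nn.Module):
    def __init__(self, d=4, n_actions=8, state_dim=128, z_embed=32):
        super().__init__()
        self.d, self.n_actions = d, n_actions
        # Input processing: conv + LSTM stack abbreviated to a linear stub.
        self.state_net = nn.Sequential(nn.LazyLinear(state_dim), nn.ReLU())
        # Policy conditioning: embed z, then map [f(h); g(z)] to SFs.
        self.z_net = nn.Sequential(nn.Linear(d, z_embed), nn.ReLU(),
                                   nn.Linear(z_embed, z_embed))
        self.sf_net = nn.Sequential(nn.Linear(state_dim + z_embed, 64),
                                    nn.ReLU(),
                                    nn.Linear(64, n_actions * d))

    def forward(self, obs, zs, w):
        f_h = self.state_net(obs)                    # shared state embedding
        qs = []
        for z in zs:                                 # reuse f_h for each z
            g_z = self.z_net(z)
            psi = self.sf_net(torch.cat([f_h, g_z], dim=-1))
            psi = psi.view(-1, self.n_actions, self.d)
            qs.append(psi @ w)                       # task evaluation module
        return torch.stack(qs)                       # (n_z, batch, |A|)

net = USFA()
zs = [torch.rand(1, 4) for _ in range(3)]
q = net(torch.randn(1, 64), zs, torch.rand(4))
print(q.shape)  # torch.Size([3, 1, 8])

The loop over z in forward reflects the design choice discussed next: the state embedding f_h is computed once and reused for every sampled policy embedding.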
An important decision in this design was how and where to introduce the conditioning on the policy. In all experiments shown here the conditioning was done simply by concatenating the two embeddings, although stronger conditioning via an inner product was tried, yielding similar performance.

The 'where', on the other hand, is much more important. As the conditioning on the policy happens quite late in the network, most of the processing (up to f(h_t)) needs to be done only once, and we can sample multiple z and compute the corresponding ψ̂(s, a, z) at a fairly low computational cost. Note that this helps both in training and in acting, as otherwise the unroll of the LSTM would be policy-conditioned, making the computation of the SFs and the off-policy n-step learning quite expensive. Furthermore, if we look at the learning step, we see that it can also benefit from this structure, as the gradient computation of f can be reused. We will only have a linear dependence on n_z in the update of the parameters and in the computations of the red blocks in Figure 12.
UVFA baseline agents have a similar architecture, but now the task description w is fed in as an input to the network. The conditioning on the task in UVFAs is done in a similar fashion to the conditioning on the policies in USFAs, to make the computational power and capacity comparable. The input processing module is the same, and downstream, instead of conditioning on the policy embedding z, we condition on the task description w. This conditioning is followed by a 2-layer MLP that computes the value functions Q̂*(s, a, w), which induce the greedy policy π^{(UVFA)}_w(s) = argmax_a Q̂*(s, a, w).
C.2 AGENT'S TRAINING
The agents' training was carried out using the IMPALA architecture (Espeholt et al., 2018). On the learner side, we adopted a simplified version of IMPALA that uses Q(λ) as the RL algorithm. In our experiments, for all agents we used λ = 0.9. Depending on the sampling distribution D_z, in learning we will often be off-policy. That is, most of the time we are going to learn about a policy π_{z_1}, and update its corresponding SF approximation ψ̂(s, a, z_1), using data generated by acting in the environment according to some other policy π_{z_2}. In order to account for this off-policiness, whenever computing the n-step return required in Eq. (5), we cut the traces whenever the policies start to disagree and bootstrap from this step on (Sutton and Barto, 1998). Here we can see how the data distribution induced by the choice of training tasks M can influence the training process.
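The trace-cutting rule can be sketched as follows: feature vectors φ are accumulated only while π_z agrees with the behaviour actions, after which we bootstrap with ψ̂. The function signature and inputs are illustrative stand-ins, not the learner-side implementation.

import numpy as np

def cut_nstep_target(phis, actions, greedy_actions, boot_psi, gamma):
    """phis[i] = phi(s_i, a_i, s_{i+1}); actions[i] = behaviour action;
    greedy_actions[i] = pi_z(s_i); boot_psi[k] = psi_hat(s_k, pi_z(s_k), z)
    for k = 0, ..., len(phis). All inputs are assumed precomputed stubs."""
    target = np.zeros_like(phis[0])
    k = len(phis)
    for i, (a, a_z) in enumerate(zip(actions, greedy_actions)):
        if a != a_z:          # behaviour and pi_z disagree at s_i:
            k = i             # cut the trace and bootstrap at s_i
            break
        target = target + gamma**i * phis[i]
    return target + gamma**k * boot_psi[k]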
If the data distribution D_z is very close to the set M, as in our first experiment, most of the policies we sample will be close to the policies that generated the data. This means that we might be able to make use of longer trajectories in this data, as the policies will rarely disagree. On the other hand, by staying close to the training tasks, we might hurt our ability to generalise in the policy space, as our first experiment suggests (see Figure 4). By having a broader distribution D_z = N(w, 0.5 I), we can learn about more diverse policies in this space, but we will also increase our off-policiness. We can see from Figure 5 that our algorithm can successfully learn and operate in both of these regimes.
For the distributed collection of data we used 50 actors per task. Each actor gathered trajectories of length 32 that were then added to the common queue. The collection of data followed an ε-greedy policy with a fixed ε = 0.1. The training curves shown in the paper correspond to the performance of the ε-greedy policy (that is, they include the exploratory actions of the agents).
C.3 AGENT'S EVALUATION
All agents were evaluated in the same fashion. Periodically during the training process (every 20M frames) we evaluate the agents' performance on a set of held-out test tasks. We take these intermediate snapshots of our agents and 'freeze' their parameters to assess zero-shot generalisation. Once a test task w′ is provided, the agent interacts with the environment for 20 episodes, one minute each, and the average (undiscounted) reward is recorded. These produce the evaluation curves in Figure 4. Evaluations are done with a small ε = 0.001, following a GPI policy with different instantiations of C. For the pure UVFA agents, the evaluation is similar: ε-greedy on the produced value functions Q̂*(s, a, w′), with the same evaluation ε = 0.001.
D ADDITIONAL RESULTS
In our experiments we defined a set of easy test tasks (close to M) and a set of harder tasks, in order to cover reasonably well a few distinct scenarios:
• Testing generalisation to tasks very similar to the training set, e.g. w = [0, 0.9, 0, 0.1];
• Testing generalisation to harder tasks with different reward profiles: only positive rewards, only negative rewards, and mixed rewards.
In the main text, we included only a selection of these for illustrative purposes. Here we present the full results.
D.1 CANONICAL BASIS: ZERO-SHOT GENERALISATION
This section contains the complete results of the first experiment conducted. As a reminder, in this experiment we trained a USFA agent on M = {1000, 0100, 0010, 0001}, with D_z = N(w, 0.1 I), and compared its performance with that of two conventional UVFA agents (one trained on-policy and the other one using all the generated data to learn off-policy) on a range of unseen test tasks. The complete set of results is included below, as follows: Figure 13 includes results on easy tasks, close to the tasks contained in the training set M (generalisation to those should be fairly straightforward); Figure 14 and Figure 15 present results on more challenging tasks, quite far away from the training set, testing our agents' ability to generalise to the whole 4D hypercube.

D.2 CANONICAL BASIS: USFAS IN DIFFERENT TRAINING REGIMES

In this section, we include the omitted results from our second experiment. As a reminder, in this experiment we trained two USFA agents on the same set of canonical tasks, but employing different distributions D_z: one with low variance, σ = 0.1, focusing on learning policies around the training set M, and another one with larger variance, σ = 0.5, which will try to learn about a lot more policies away from the training set, thus potentially facilitating the generalisation provided by the UVFA component. Results on all tasks in the hard evaluation set are displayed in Figures 16-17.
D.3 LARGER COLLECTION OF TRAINING TASKS
We also trained our USFA agent on a larger set of training tasks, including the previous canonical tasks as well as four other tasks that contain both positive and negative rewards: M = {1000, 0100, 0010, 0001, 1-100, 01-10, 001-1, -1000}. Thus we expect this agent to generalise better as a result of its training. A selection of these results and sample performance in training are included in Fig. 18.
Figure 2: Trip MDP. [Left] Depiction of the MDP. [Right] Optimality gap (difference between the optimal return and the return obtained by the different models) at different times in the training process.

Figure 3: Environment.

Figure 4: Zero-shot generalisation performance, across different models, on a sample of test tasks w′ ∈ M′ after training on M. Shaded areas represent one standard deviation over 10 runs.

Figure 5: Generalisation performance on sample test tasks w′ ∈ M′ after training on M, with D_z = N(w, σI), for σ = 0.1 and σ = 0.5 (larger coverage of the z space). Curves: USFA (GPI over C = {w′}) with σ = 0.1 and σ = 0.5; USFA (GPI over C = M) with σ = 0.1 and σ = 0.5. Average over 3 runs.

Figure 7: UVFA-like generalisation.

Figure 8: GPI generalisation.

Figure 9: [Sample run] Performance of the different methods (in this order, starting with the second subplot): UVFA, SF&GPI on the perfect SFs induced by M, USFA with C = random(5), and USFA with C = {w′}, as compared to the optimal performance one could get in this MDP (first plot). These correspond to one sample run, where we trained the UVFA and USFA for 1000 episodes. The optimal performance and the SF&GPI solution were computed exactly.

Figure 10: [Sample run] Optimality gap over the whole task space. These correspond to the same sample run as above, where we trained the UVFA and USFA for 1000 episodes. We can now see more clearly that USFAs manage to recover better policies and optimality across a much greater portion of the task space. The last two plots correspond to the same USFA, just using different choices of the candidate set C. Something to note here is that, with a more diverse choice of C, we can recover an optimal policy even in areas of the space where our approximation has not yet optimally generalised (like the upper-left corner of the w-space in the figures above).

Figure 11: Zero-shot performance on the diagonal: optimality gap for M′ = {w′ | w′_1 = w′_2, w′_1 ∈ [0, 1]}. These results were averaged over 10 runs.

Figure 12: USFA architecture.

Figure 13: Zero-shot performance on the easy evaluation set: average reward per episode on test tasks not shown in the main paper. This compares a USFA agent trained on the canonical training set M = {1000, 0100, 0010, 0001}, with D_z = N(w, 0.1 I), and the two UVFA agents: one trained on-policy, one employing off-policy training.

Figure 14: Zero-shot performance on harder tasks: average reward per episode on test tasks not shown in the main paper. This compares a USFA agent trained on the canonical training set M = {1000, 0100, 0010, 0001}, with D_z = N(w, 0.1 I), and the two UVFA agents: one trained on-policy, one employing off-policy training.

Figure 15: Zero-shot performance on harder tasks (Part 2): same comparison as in Figure 14.

Figure 16: Different D_z. Zero-shot performance on harder tasks: average reward per episode on test tasks not shown in the main paper. This compares the generalisation of two USFA agents trained on the canonical training set M = {1000, 0100, 0010, 0001}, with D_z = N(w, 0.1 I) and D_z = N(w, 0.5 I).

Figure 17: Different D_z. Zero-shot performance on harder tasks (Part 2): same comparison as in Figure 16.

Figure 18: Large M. Learning curves for training task [1000] ∈ M and generalisation performance on a sample of test tasks w′ ∈ M′ after training on all the tasks in M. This is a selection of the hard evaluation tasks. Results are averaged over 10 training runs.
¹ Videos of USFAs in action are available at https://youtu.be/Pn76cfXbf2Y and https://youtu.be/0afwHJofbB0.
J. Andreas, D. Klein, and S. Levine. Modular multitask reinforcement learning with policy sketches. arXiv preprint arXiv:1611.01796, 2016.
M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. P. Abbeel, and W. Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pages 5048-5058, 2017.
R. Ashar. Hierarchical learning in stochastic domains. PhD thesis, Citeseer, 1994.
A. Barreto, W. Dabney, R. Munos, J. Hunt, T. Schaul, H. van Hasselt, and D. Silver. Successor features for transfer in reinforcement learning. In Advances in Neural Information Processing Systems (NIPS), 2017.
A. Barreto, D. Borsa, J. Quan, T. Schaul, D. Silver, M. Hessel, D. Mankowitz, A. Zidek, and R. Munos. Transfer in deep reinforcement learning using successor features and generalised policy improvement. In Proceedings of the International Conference on Machine Learning (ICML), pages 501-510, 2018.
C. Beattie, J. Z. Leibo, D. Teplyashin, T. Ward, M. Wainwright, H. Küttler, A. Lefrancq, S. Green, V. Valdés, A. Sadik, et al. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.
C. Devin, A. Gupta, T. Darrell, P. Abbeel, and S. Levine. Learning modular neural network policies for multi-task and multi-robot transfer. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 2169-2176. IEEE, 2017.
L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, et al. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. CoRR, abs/1703.03400, 2017. URL http://arxiv.org/abs/1703.03400.
N. Heess, G. Wayne, Y. Tassa, T. Lillicrap, M. Riedmiller, and D. Silver. Learning and transfer of modulated locomotor controllers. arXiv preprint arXiv:1610.05182, 2016.
K. M. Hermann, F. Hill, S. Green, F. Wang, R. Faulkner, H. Soyer, D. Szepesvari, W. Czarnecki, M. Jaderberg, D. Teplyashin, et al. Grounded language learning in a simulated 3D world. arXiv preprint arXiv:1706.06551, 2017.
L. P. Kaelbling. Learning to achieve goals. In IJCAI, pages 1094-1099. Citeseer, 1993.
J. Kirkpatrick, R. Pascanu, N. C. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell. Overcoming catastrophic forgetting in neural networks. CoRR, abs/1612.00796, 2016. URL http://arxiv.org/abs/1612.00796.
T. D. Kulkarni, A. Saeedi, S. Gautam, and S. J. Gershman. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396, 2016.
A. Lazaric. Transfer in Reinforcement Learning: A Framework and a Survey, pages 143-173. 2012.
C. Ma, J. Wen, and Y. Bengio. Universal successor representations for transfer reinforcement learning. arXiv preprint arXiv:1804.03758, 2018.
D. J. Mankowitz, A. Žídek, A. Barreto, D. Horgan, M. Hessel, J. Quan, J. Oh, H. van Hasselt, D. Silver, and T. Schaul. Unicorn: Continual learning with a universal, off-policy agent. arXiv preprint arXiv:1802.08294, 2018.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
J. Oh, S. P. Singh, H. Lee, and P. Kohli. Zero-shot task generalization with multi-task deep reinforcement learning. CoRR, abs/1706.05064, 2017. URL http://arxiv.org/abs/1706.05064.
M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., 1994.
A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks. CoRR, abs/1606.04671, 2016. URL http://arxiv.org/abs/1606.04671.
T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In International Conference on Machine Learning (ICML), pages 1312-1320, 2015.
J. Schmidhuber. A general method for incremental self-improvement and multi-agent learning in unrestricted environments. In Evolutionary Computation: Theory and Applications. Scientific Publishing Company, 1996.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. URL http://www-anw.cs.umass.edu/~rich/book/the-book.html.
R. S. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, and D. Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In International Conference on Autonomous Agents and Multiagent Systems, pages 761-768, 2011.
C. Szepesvári. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010.
M. E. Taylor and P. Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(1):1633-1685, 2009.
Y. W. Teh, V. Bapst, W. M. Czarnecki, J. Quan, J. Kirkpatrick, R. Hadsell, N. Heess, and R. Pascanu. Distral: Robust multitask reinforcement learning. In Advances in Neural Information Processing Systems (NIPS), pages 4499-4509, 2017.
A. S. Vezhnevets, S. Osindero, T. Schaul, N. Heess, M. Jaderberg, D. Silver, and K. Kavukcuoglu. FeUdal networks for hierarchical reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML), pages 3540-3549, 2017.
C. Watkins. Learning from Delayed Rewards. PhD thesis, University of Cambridge, England, 1989.
J. Zhang, J. T. Springenberg, J. Boedecker, and W. Burgard. Deep reinforcement learning with successor features for navigation across similar environments. CoRR, abs/1612.05533, 2016. |
227,337,121 | SPATIO-TEMPORAL GRAPH SCATTERING TRANSFORM | Although spatio-temporal graph neural networks have achieved great empirical success in handling multiple correlated time series, they may be impractical in some real-world scenarios due to a lack of sufficient high-quality training data. Furthermore, spatio-temporal graph neural networks lack theoretical interpretation. To address these issues, we put forth a novel mathematically designed framework to analyze spatio-temporal data. Our proposed spatio-temporal graph scattering transform (ST-GST) extends traditional scattering transforms to the spatiotemporal domain. It performs iterative applications of spatio-temporal graph wavelets and nonlinear activation functions, which can be viewed as a forward pass of spatio-temporal graph convolutional networks without training. Since all the filter coefficients in ST-GST are mathematically designed, it is promising for the real-world scenarios with limited training data, and also allows for a theoretical analysis, which shows that the proposed ST-GST is stable to small perturbations of input signals and structures. Finally, our experiments show that i) ST-GST outperforms spatio-temporal graph convolutional networks by an increase of 35% in accuracy for MSR Action3D dataset; ii) it is better and computationally more efficient to design the transform based on separable spatio-temporal graphs than the joint ones; and iii) the nonlinearity in ST-GST is critical to empirical performance. | [
49416020
] | SPATIO-TEMPORAL GRAPH SCATTERING TRANSFORM
Chao Pan [email protected]
Siheng Chen [email protected]
Antonio Ortega [email protected]
University of Illinois at Urbana-Champaign, Champaign, IL, USA
Shanghai Jiao Tong University, Shanghai, China
University of Southern California, Los Angeles, CA, USA
SPATIO-TEMPORAL GRAPH SCATTERING TRANSFORM
Although spatio-temporal graph neural networks have achieved great empirical success in handling multiple correlated time series, they may be impractical in some real-world scenarios due to a lack of sufficient high-quality training data. Furthermore, spatio-temporal graph neural networks lack theoretical interpretation. To address these issues, we put forth a novel mathematically designed framework to analyze spatio-temporal data. Our proposed spatio-temporal graph scattering transform (ST-GST) extends traditional scattering transforms to the spatiotemporal domain. It performs iterative applications of spatio-temporal graph wavelets and nonlinear activation functions, which can be viewed as a forward pass of spatio-temporal graph convolutional networks without training. Since all the filter coefficients in ST-GST are mathematically designed, it is promising for the real-world scenarios with limited training data, and also allows for a theoretical analysis, which shows that the proposed ST-GST is stable to small perturbations of input signals and structures. Finally, our experiments show that i) ST-GST outperforms spatio-temporal graph convolutional networks by an increase of 35% in accuracy for MSR Action3D dataset; ii) it is better and computationally more efficient to design the transform based on separable spatio-temporal graphs than the joint ones; and iii) the nonlinearity in ST-GST is critical to empirical performance.
INTRODUCTION
Processing and learning from spatio-temporal data have received increasing attention recently. Examples include: i) skeleton-based human action recognition based on a sequence of human poses (Liu et al. (2019)), which is critical to human behavior understanding (Borges et al. (2013)), and ii) multi-agent trajectory prediction (Hu et al. (2020)), which is critical to robotics and autonomous driving (Shalev-Shwartz et al. (2016)). A common pattern across these applications is that data evolves in both spatial and temporal domains. This paper aims to analyze this type of data by developing novel spatio-temporal graph-based data modeling and operations.
Spatio-temporal graph-based data modeling. Graphs are often used to model data where irregularly spaced samples are observed over time. Good spatio-temporal graphs can provide informative priors that reflect the internal relationships within data. For example, in skeleton-based human action recognition, we can model a sequence of 3D joint locations as data supported on skeleton graphs across time, which reflects both the human physical constraints and temporal consistency (Yan et al. (2018)). Recent studies on modeling spatio-temporal graphs have followed either joint or separable processing frameworks. Joint processing is based on constructing a single spatio-temporal graph and processing (e.g., filtering) via operations on this graph (Kao et al. (2019); Liu et al. (2020)). In contrast, a separable processing approach works separately, and possibly with different operators, across the space and time dimension. In this case, independent graphs are used for space and time (Yan et al. (2018); Cheng et al. (2020)). However, no previous work thoroughly analyzes and compares these two constructions. In this work, we mathematically study these two types of graphs and justify the benefits of separable processing from both theoretical and empirical aspects.
Spatio-temporal graph-based operations. Graph operations can be performed once the graph structure is given. Some commonly used graph operations include the graph Fourier transform (Shuman et al. (2013)) and graph wavelets (Hammond et al. (2011)). It is possible to extend these operations to the spatio-temporal graph domain. For example, Grassi et al. (2017) developed the short time-vertex Fourier transform and a spectrum-based time-vertex wavelet transform. However, those mathematically designed, linear operations show some limitations in terms of empirical performance. In comparison, many recent deep neural networks adopt trainable graph convolution operations to analyze spatio-temporal data (Yan et al. (2018); Liu et al. (2020)). However, most networks are designed through trial and error. It is thus hard to explain the rationale behind their empirical success and further improve the designs (Monga et al. (2019)). In this work, to fill the gap between mathematically designed linear transforms and trainable spatio-temporal graph neural networks, we propose a novel spatio-temporal graph scattering transform (ST-GST), which is a mathematically designed, nonlinear operation.
Specifically, to characterize the spatial and temporal dependencies, we present two types of graphs corresponding to joint and separable designs. We then construct spatio-temporal graph wavelets based on each of these types of graphs. We next propose the framework of ST-GST, which adopts spatio-temporal graph wavelets followed by a nonlinear activation function as a single scattering layer. All the filter coefficients in ST-GST are mathematically designed beforehand and no training is required. We further show that i) a design based on separable spatio-temporal graphs is more flexible and computationally efficient than a joint design; and ii) ST-GST is stable to small perturbations of both input spatio-temporal graph signals and structures. Finally, our experiments on skeleton-based human action recognition show that the proposed ST-GST outperforms spatio-temporal graph convolutional networks by 35% in accuracy on the MSR Action3D dataset.
We summarize the main contributions of this work as follows:
• We propose wavelets for both separable and joint spatio-temporal graphs. We show that it is more flexible and computationally efficient to design wavelets based on separable spatio-temporal graphs;
• We propose a novel spatio-temporal graph scattering transform (ST-GST), which is a non-trainable counterpart of spatio-temporal graph convolutional networks and a nonlinear version of spatio-temporal graph wavelets. We also theoretically show that ST-GST is robust and stable in the presence of small perturbations on both input spatio-temporal graph signals and structures;
• For skeleton-based human action recognition, our experiments show that: i) ST-GST can achieve similar or better performance than spatio-temporal graph convolutional networks and other non-deep-learning approaches on small-scale datasets; ii) separable spatio-temporal scattering works significantly better than joint spatio-temporal scattering; and iii) ST-GST significantly outperforms spatio-temporal graph wavelets because of the nonlinear activation function.
RELATED WORK
Scattering transforms. Convolutional neural networks (CNNs) use nonlinearities coupled with trained filter coefficients and are well known to be hard to analyze theoretically (Anthony & Bartlett (2009)). As an alternative, Mallat (2012); Bruna & Mallat (2013) propose scattering transforms, which are non-trainable versions of CNNs. Under admissible conditions, the resulting transform enjoys both great performance in image classification and appealing theoretical properties. These ideas have been extended to the graph domain (Gama et al. (2019a); Zou & Lerman (2020); Gao et al. (2019); Ioannidis et al. (2020)). Specifically, the graph scattering transform (GST) proposed in Gama et al. (2019a) iteratively applies predefined graph filter banks and an element-wise nonlinear activation function. In this work, we extend the classical scattering transform to the spatio-temporal domain and provide a new mathematically designed transform to handle spatio-temporal data. The key difference between GST and our proposed ST-GST lies in the graph filter bank design, where ST-GST needs to handle both the spatial and temporal domains.
Spatio-temporal neural networks. Deep neural networks have been adapted to operate in the spatio-temporal domain. For example, Liu et al. (2019) uses an LSTM to process time-series information, while ST-GCN (Yan et al. (2018)) combines a graph convolution layer and a temporal convolution layer as a unit computational block in the network architecture. However, those networks all require a huge amount of high-quality labeled data, and training them is computationally expensive, which may make them impractical for many real-world scenarios. Furthermore, many architectures are designed through trial and error, making it hard to justify the design choices and further improve them. In this work, the proposed ST-GST is a nonlinear transform with a forward procedure similar to that of ST-GCN. However, ST-GST does not require any training, which is useful in many applications where only limited training data is available. Furthermore, since all filter coefficients in ST-GST are predefined, it allows us to perform theoretical analysis, and the related conclusions potentially shed some light on the design of spatio-temporal networks.
Skeleton-based human action recognition. Conventional skeleton-based action recognition models learn semantics based on hand-crafted features (Wang et al. (2012)). To handle time series information, some recurrent-neural-network-based models are proposed to capture the temporal dependencies between consecutive frames (Kim & Reiter (2017)). Recently, graph-based approaches have gained in popularity while achieving excellent performance (Yan et al., 2018;Li et al., 2019). In this work, our experiments focus on this task and show that ST-GST outperforms the state-of-the-art spatio-temporal graph neural networks, like MS-G3D (Liu et al., 2020), on small-scale datasets.
SPATIO-TEMPORAL GRAPH SCATTERING TRANSFORM
In this section, we first define spatio-temporal graph structures and signals. We next design our spatio-temporal graph wavelets. Finally, we present ST-GST.

Figure 1: Visualization of spatial graph, temporal graph and three commonly used product graphs.
SPATIO-TEMPORAL GRAPH STRUCTURES AND SIGNALS
Spatio-temporal data can be represented as a matrix $X \in \mathbb{R}^{N \times T}$, where $N$ is the number of spatial positions and $T$ is the number of time stamps. In this matrix, each row is a time series for a spatial node, and each column is a spatial signal at a certain time stamp. Note that the indexing of the spatial information can be arbitrary: we will associate each spatial location with a vertex on the spatial graph, and the edges will provide information about the relative positions of the nodes. We can reshape the matrix to form a vector $x$ of length $NT$, where the element $x_{(s,t)} := x_{(s-1)T+t}$ is the feature value corresponding to the $s$-th vertex at time $t$. To construct a spatio-temporal graph, we create connections based on physical constraints. For example, for skeleton-based action recognition, the spatial graph is the human skeleton graph, reflecting bone connections; see Fig. 1(a); and the temporal graph is a line graph connecting consecutive time stamps; see Fig. 1(b).
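To make the indexing convention concrete, here is a minimal NumPy sketch (Python is our assumed language; the paper specifies no implementation) of the row-major vectorization $x_{(s,t)} := x_{(s-1)T+t}$. The sizes and variable names are illustrative only.

```python
import numpy as np

# Toy sizes (assumed for illustration; the paper fixes no values here).
N, T = 4, 6
rng = np.random.default_rng(0)

# Row s of X is the time series of spatial node s; column t is the
# spatial signal at time stamp t.
X = rng.standard_normal((N, T))

# Row-major vectorization used in the text: x_{(s,t)} := x_{(s-1)T+t}
# (1-indexed); NumPy's default C-order reshape matches this convention.
x = X.reshape(N * T)

s, t = 2, 3                     # 0-indexed sample entry
assert x[s * T + t] == X[s, t]  # the indexing convention holds
```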
As a starting point, we choose a spatial graph G s = (V s , E s , A s ) with |V s | = N , reflecting the graph structure of each column in X and a temporal graph G t = (V t , E t , A t ) with |V t | = T , reflecting the graph structure of each row in X. The separable spatio-temporal design is achieved by processing the columns and rows of X separately based on their respective graphs.
As an alternative, a product graph, denoted as $G = G_s \diamond G_t = (V, E, A)$, can be constructed to unify the relations in both the spatial and temporal domains, allowing us to process data jointly across space and time. The product graph $G_s \diamond G_t$ has $|V| = NT$ nodes and an appropriately defined $NT \times NT$ adjacency matrix $A$. The operation $\diamond$ interweaves two graphs to form a unifying graph structure. The edge weight $A_{(s_1,t_1),(s_2,t_2)} := A_{(s_1-1)T+t_1,\,(s_2-1)T+t_2}$ characterizes the relation, such as similarity or dependency, between the $s_1$-th spatial node at the $t_1$-th time stamp and the $s_2$-th spatial node at the $t_2$-th time stamp. There are three commonly used product graphs (Sandryhaila & Moura, 2014): i) Kronecker product: $G = G_s \otimes G_t$ with graph adjacency matrix $A = A_s \otimes A_t$, where $\otimes$ represents the Kronecker product of matrices; see Fig. 1(c); ii) Cartesian product: $G = G_s \times G_t$ with $A = A_s \otimes I_T + I_N \otimes A_t$; see Fig. 1(d); and iii) strong product: $G = G_s \boxtimes G_t$ with $A = A_s \otimes A_t + A_s \otimes I_T + I_N \otimes A_t$, which can be viewed as a combination of the Kronecker and Cartesian products; see Fig. 1(e). The joint spatio-temporal design is achieved based on a product graph.
In this paper, we consider designs based on both separable graphs and product graphs.
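The three product adjacency matrices above are straightforward to materialize. Below is a minimal NumPy sketch (an assumption; the paper ships no code), where `product_adjacencies` is a hypothetical helper name:

```python
import numpy as np

def product_adjacencies(A_s, A_t):
    """Adjacency matrices of the three product graphs defined in the text.

    A_s: N x N spatial adjacency; A_t: T x T temporal adjacency.
    Returns (Kronecker, Cartesian, strong) products, each NT x NT.
    """
    N, T = A_s.shape[0], A_t.shape[0]
    I_N, I_T = np.eye(N), np.eye(T)
    A_kron = np.kron(A_s, A_t)                        # A_s (x) A_t
    A_cart = np.kron(A_s, I_T) + np.kron(I_N, A_t)    # A_s (x) I_T + I_N (x) A_t
    A_strong = A_kron + A_cart                        # combination of the two
    return A_kron, A_cart, A_strong

# Example: 3-node path graph in space, 4-stamp line graph in time.
A_s = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A_t = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)   # 4 x 4 line graph
A_kron, A_cart, A_strong = product_adjacencies(A_s, A_t)
print(A_kron.shape)  # (12, 12)
```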
SPATIO-TEMPORAL GRAPH FILTERING
We now show two graph filter designs, separable and joint filtering, based on the corresponding spatio-temporal graphs we just described. For each design, we first define the spatio-temporal graph shift, which is the most elementary graph filter and defines how information should propagate in a spatio-temporal graph. We then propose spatio-temporal graph filtering in both graph vertex and graph spectral domains.
Separable graph filters. Given the spatial graph $G_s = (V_s, E_s, A_s)$ and the temporal graph $G_t = (V_t, E_t, A_t)$, let the spatial graph shift be $S_s = A_s$ and the temporal graph shift be $S_t = A_t$.¹ For simplicity, we focus on symmetric graph shifts. For a spatio-temporal graph signal, spatial and temporal graph filtering work as $H(S_s)X = \sum_{p=0}^{P-1} h_p S_s^p X$ and $X G^\top(S_t) = X \left(\sum_{q=0}^{Q-1} g_q S_t^q\right)^\top$, where $h_p$ and $g_q$ are spatial and temporal filter coefficients, respectively. In each modality, the graph filter is a polynomial of the graph shift. The polynomial orders $P$ and $Q$ control the lengths of the filters in the spatial and temporal modalities, respectively. Note that these two values can be chosen to be different, which provides additional design flexibility. Then, a separable spatio-temporal graph filtering operation can be defined as
$$H(S_s)\, X\, G^\top(S_t) := \left(\sum_{p=0}^{P-1} h_p S_s^p\right) X \left(\sum_{q=0}^{Q-1} g_q S_t^q\right)^{\!\top} = \left(H(S_s) \otimes G(S_t)\right) x, \tag{1}$$

where the second equality follows from the property $M_1 X M_2^\top = (M_1 \otimes M_2)\, x$.
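A quick numerical check of Eq. (1) and the vectorization property it relies on, in NumPy (an assumed language; sizes and seeds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, P, Q = 3, 4, 3, 2
S_s = rng.standard_normal((N, N)); S_s = (S_s + S_s.T) / 2   # symmetric spatial shift
S_t = rng.standard_normal((T, T)); S_t = (S_t + S_t.T) / 2   # symmetric temporal shift
h = rng.standard_normal(P)                                    # spatial filter taps h_p
g = rng.standard_normal(Q)                                    # temporal filter taps g_q

# Polynomial graph filters H(S_s) and G(S_t).
H = sum(h[p] * np.linalg.matrix_power(S_s, p) for p in range(P))
G = sum(g[q] * np.linalg.matrix_power(S_t, q) for q in range(Q))

X = rng.standard_normal((N, T))
x = X.reshape(N * T)                  # row-major vectorization, as in the text

lhs = (H @ X @ G.T).reshape(N * T)    # separable filtering, left side of Eq. (1)
rhs = np.kron(H, G) @ x               # (H (x) G) x, right side of Eq. (1)
assert np.allclose(lhs, rhs)          # M1 X M2^T = (M1 (x) M2) x
```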
We can also represent the filtering process in the graph spectral domain. Let the eigendecompositions of the spatial and temporal graph shifts be $S_s = V_s \Lambda_s V_s^\top$ and $S_t = V_t \Lambda_t V_t^\top$, respectively, where $V_s \in \mathbb{R}^{N \times N}$ and $V_t \in \mathbb{R}^{T \times T}$ form the spatial and temporal graph Fourier bases. The elements along the diagonals of $\Lambda_s, \Lambda_t$ represent the spatial and temporal graph frequencies. We have
$$H(S_s)X = V_s \left(\sum_{p=0}^{P-1} h_p \Lambda_s^p\right) V_s^\top X = V_s H(\Lambda_s) V_s^\top X, \qquad X G^\top(S_t) = X V_t \left(\sum_{q=0}^{Q-1} g_q \Lambda_t^q\right) V_t^\top = X V_t G(\Lambda_t) V_t^\top.$$
Letting $V = V_s \otimes V_t$, the spectral representation of the separable spatio-temporal graph filtering is then $\left(V_s H(\Lambda_s) V_s^\top\right) X \left(V_t G(\Lambda_t) V_t^\top\right) = V \left(H(\Lambda_s) \otimes G(\Lambda_t)\right) V^\top x$.
Joint graph filters. Given the joint graph structure $G = G_s \diamond G_t = (V, E, A)$, let the spatio-temporal graph shift be $S = A$. Then, a joint spatio-temporal graph filtering operation can be defined as:
$$H(S)x = \sum_{k=0}^{K-1} h_k S^k x = V \left(\sum_{k=0}^{K-1} h_k \Lambda^k\right) V^\top x = V H(\Lambda) V^\top x, \tag{2}$$

where $h_k$ is the filter coefficient. The kernel function $h(\lambda) = \sum_{k=0}^{K-1} h_k \lambda^k$ is applied to each diagonal element of $\Lambda$ to obtain $H(\Lambda)$.
Here $h(\lambda)$ is independent of any specific graph structure and characterizes the filter response in the graph spectral domain. Note that $h = (h_0, \cdots, h_{K-1})$, $h(\lambda)$ and $H(\cdot)$ are essentially the same object and are used interchangeably in this paper.
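For illustration, a minimal sketch of the joint polynomial filtering in Eq. (2), in NumPy (an assumed language; the function name `joint_filter` is hypothetical). It avoids forming matrix powers explicitly by reusing shifted signals:

```python
import numpy as np

def joint_filter(S, h, x):
    """Apply H(S)x = sum_k h_k S^k x for a joint shift S of size NT x NT.

    h: filter taps (h_0, ..., h_{K-1}); x: signal of length NT.
    Uses the recursion S^k x = S (S^{k-1} x), i.e. K-1 mat-vec products.
    """
    out = h[0] * x
    Skx = x
    for hk in h[1:]:
        Skx = S @ Skx          # one more application of the graph shift
        out = out + hk * Skx
    return out
```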
It is worth pointing out that these three product graphs share the same form of joint spatio-temporal graph filtering (2) as well as the same graph Fourier basis $V = V_s \otimes V_t$. Following from (2), the spectral representation of the joint spatio-temporal graph filtering can be formulated as
$$\text{Kronecker product: } H(S) = V\, H(\Lambda_s \otimes \Lambda_t)\, V^\top, \quad \text{Cartesian product: } H(S) = V\, H(\Lambda_s \otimes I_T + I_N \otimes \Lambda_t)\, V^\top, \quad \text{strong product: } H(S) = V\, H(\Lambda_s \otimes \Lambda_t + \Lambda_s \otimes I_T + I_N \otimes \Lambda_t)\, V^\top.$$
Comparison between separable and joint graph filters. First of all, we stress that both separable and joint spatio-temporal graph filtering share the same Fourier bases, meaning that they share the same frequency space and their difference only comes from the frequency responses.
Second, designing filters based on separable spatio-temporal graphs provides additional flexibility. Although it is conceptually simple to design graph filters directly on product graphs, the eigenvalues along the spatial and temporal domains are tied together, making it difficult to adjust the frequency responses independently for the two modalities. Moreover, two domains are forced to share the same set of filter coefficients and length. Take a filter defined on Kronecker product graph as an example.
By expanding the term $H(\Lambda_s \otimes \Lambda_t)$ we have $H(\Lambda_s \otimes \Lambda_t) = \sum_{k=0}^{K-1} h_k (\Lambda_s \otimes \Lambda_t)^k = \sum_{k=0}^{K-1} h_k (\Lambda_s^k \otimes \Lambda_t^k)$. This shows that the filter coefficients are applied to the product of spatial and temporal eigenvalues, making it hard to decompose and interpret the functionality of the filter in a single modality. Such limitations make joint filters less practical for spatio-temporal signals, which might have distinct patterns in each of the two modalities. This problem is overcome by separable graph filtering, where different filters are applied to each modality. The flexibility of separable graph filters means that one can design different filters ($h$ and $g$) with independent filter lengths ($P$ and $Q$) in the spatial and temporal domains. However, it is worth pointing out that the representation powers of these two formulations do not have a clear relationship in which one is a subset of the other. Consider a joint graph filter designed on a strong product graph with length $K = 3$. The filter kernel is defined as
$$H(\Lambda_s \otimes \Lambda_t + \Lambda_s \otimes I_T + I_N \otimes \Lambda_t) = \sum_{k=0}^{2} h_k \left(\Lambda_s \otimes \Lambda_t + \Lambda_s \otimes I_T + I_N \otimes \Lambda_t\right)^k.$$
Similarly, the kernel of a separable graph filter with $P = Q = 3$ can be written as $H(\Lambda_s) \otimes G(\Lambda_t) = \left(\sum_{p=0}^{2} h_p \Lambda_s^p\right) \otimes \left(\sum_{q=0}^{2} g_q \Lambda_t^q\right)$. By expanding the expressions and rearranging the coefficients, one can obtain the coefficient matrices for the joint graph filter and the separable graph filter, $C_1$ and $C_2$, respectively; that is,
$$C_1 = \begin{bmatrix} h_0 & h_1 & h_2 \\ h_1 & h_1 + 2h_2 & 2h_2 \\ h_2 & 2h_2 & h_2 \end{bmatrix}, \qquad C_2 = \begin{bmatrix} h_0 g_0 & h_0 g_1 & h_0 g_2 \\ h_1 g_0 & h_1 g_1 & h_1 g_2 \\ h_2 g_0 & h_2 g_1 & h_2 g_2 \end{bmatrix} = \begin{bmatrix} h_0 \\ h_1 \\ h_2 \end{bmatrix} \begin{bmatrix} g_0 & g_1 & g_2 \end{bmatrix},$$
where the $(i, j)$-th element is the coefficient of the term $\Lambda_s^{i-1} \otimes \Lambda_t^{j-1}$.
On the one hand, it is obvious that $C_2$ is always a rank-1 matrix, while $C_1$ can have rank 1, 2, or 3, so $C_1$ is not a special case of $C_2$. On the other hand, $C_1$ is always a symmetric matrix, but $C_2$ can be either symmetric or non-symmetric, depending on the choices of $h$ and $g$, so $C_2$ is also not a special case of $C_1$. Therefore, the families spanned by the two designs have no simple relationship in which one is a subset of the other. Similar conclusions hold for the Kronecker and Cartesian products.
Third, designing based on separable spatio-temporal graphs is computationally more efficient. In a separable graph filtering process, we only need two small matrix multiplications (1) instead of one large matrix-vector multiplication (2), reducing the computational cost from $O(N^2T^2)$ to $O(NT(N+T))$; the sketch below illustrates the gap.
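A minimal timing sketch of this cost difference, assuming NumPy (all sizes and names are illustrative; actual speedups depend on hardware and BLAS):

```python
import numpy as np, time

# Assumed toy sizes; the gap widens quickly as N and T grow.
N, T = 40, 60
rng = np.random.default_rng(2)
H = rng.standard_normal((N, N))   # spatial filter matrix H(S_s)
G = rng.standard_normal((T, T))   # temporal filter matrix G(S_t)
X = rng.standard_normal((N, T))
x = X.reshape(N * T)

t0 = time.perf_counter()
Y_sep = H @ X @ G.T                   # separable filtering: O(NT(N+T))
t1 = time.perf_counter()
HG = np.kron(H, G)                    # NT x NT operator (also memory-hungry)
Y_joint = (HG @ x).reshape(N, T)      # joint filtering: O(N^2 T^2)
t2 = time.perf_counter()

assert np.allclose(Y_sep, Y_joint)    # both compute the same result
print(f"separable: {t1 - t0:.5f}s, joint (incl. Kron build): {t2 - t1:.5f}s")
```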
In short, the joint and separable graph filters are two different design methods for spatio-temporal graphs. Though the representation power of separable graph filters is not necessarily stronger than that of joint ones, the separable design enjoys flexibility, computational efficiency and straightforward interpretation. Empirical performance also shows that the separable design outperforms the joint one; see Section 5. Note that this separable design coincides with the basic module widely used in spatio-temporal graph convolutional networks (Li et al., 2019), which consists of one graph convolution layer followed by one temporal convolution layer.
SPATIO-TEMPORAL GRAPH WAVELETS
In time-series analysis and image processing, wavelets are one of the best tools for designing filter banks, allowing us to trade off time-frequency resolutions and touch the lower bound of the uncertainty principle of time-frequency representations (Akansu & Haddad, 2000). Inspired by this, we propose spatio-temporal graph wavelets, which include a series of mathematically designed graph filters to provide multiresolution analysis for spatio-temporal graph signals. The proposed spatio-temporal graph wavelets are later used at each layer of the proposed ST-GST framework. Based on the two types of graph structures, we consider two designs: separable and joint wavelets.
Separable graph wavelets. Based on separable spatio-temporal graph filtering (1), we are able to design spatial graph wavelets, $\{H_{j_1}(S_s) = \sum_{p=0}^{P-1} h_p^{(j_1)} S_s^p\}_{j_1=1}^{J_s}$, and temporal graph wavelets, $\{G_{j_2}(S_t) = \sum_{q=0}^{Q-1} g_q^{(j_2)} S_t^q\}_{j_2=1}^{J_t}$, separately.
For each modality, the filter at scale $j$ is defined as $H_j(S) = S^{2^{j-1}} - S^{2^j} = S^{2^{j-1}}\left(I - S^{2^{j-1}}\right)$. There are also many other off-the-shelf graph wavelets we can choose from. More discussion about wavelets and their properties can be found in Appendix A. Since the two modalities are designed individually, the number of wavelet scales for each modality can be different. This is important in practice because the number of time samples $T$ is normally larger than the number of spatial nodes $N$. For each node in the spatio-temporal graph, using different wavelet scales in the two domains allows for more flexibility in diffusing the signal with its neighbors. Based on this construction, when we choose $J_s$ and $J_t$ scales for the spatial and temporal domains, respectively, the overall number of scales for the spatio-temporal wavelets is $J = J_s \times J_t$.
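A minimal sketch of this wavelet bank, assuming NumPy and the lazy random walk shift of Appendix A; `geometric_wavelets` is a hypothetical helper name. Repeated squaring yields the dyadic powers $S^{2^{j-1}}$ cheaply:

```python
import numpy as np

def geometric_wavelets(A, J):
    """Diffusion wavelet bank H_j = S^(2^(j-1)) - S^(2^j) = S^(2^(j-1)) (I - S^(2^(j-1))),
    with S = (I + A D^{-1}) / 2 the lazy random walk matrix (Gao et al., 2019)."""
    n = A.shape[0]
    deg = A.sum(axis=0)
    S = 0.5 * (np.eye(n) + A / deg)      # A D^{-1}: column j of A divided by degree d_j
    bank, S_pow = [], S                  # S_pow holds S^(2^(j-1))
    for _ in range(J):
        bank.append(S_pow @ (np.eye(n) - S_pow))   # filter at the current scale
        S_pow = S_pow @ S_pow                       # squaring: S^(2^(j-1)) -> S^(2^j)
    return bank

# Example on a 4-node line graph (sizes are illustrative).
A = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
H_bank = geometric_wavelets(A, J=3)
print(len(H_bank), H_bank[0].shape)      # 3 (4, 4)
```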
Joint graph wavelets. When the joint filtering (2) is chosen, we can directly apply existing graph wavelet designs, such as the spectral graph wavelet transform (Hammond et al., 2011).
SPATIO-TEMPORAL GRAPH SCATTERING TRANSFORM
The proposed ST-GST is a nonlinear version of spatio-temporal graph wavelets, which iteratively applies wavelets followed by a nonlinear activation function. ST-GST includes three components: (i) spatio-temporal graph wavelets, (ii) a pointwise nonlinear activation function $\sigma(\cdot)$, and (iii) a low-pass pooling operator $U$. These operations are performed sequentially to extract representative features from the input spatio-temporal graph signal $X$. The main difference between ST-GST and spatio-temporal graph wavelets is the application of the nonlinear activation at each layer. The nonlinear transformation disperses signals through the graph spectrum, producing more patterns in the spectrum.
Separable ST-GST. Let $Z \in \mathbb{R}^{N \times T}$ be a spatio-temporal graph signal. At each scattering layer, we sequentially use the spatial graph wavelets $\{H_{j_1}\}_{j_1=1}^{J_s}$ and the temporal wavelets $\{G_{j_2}\}_{j_2=1}^{J_t}$ to convolve with $Z$. Since each graph filter generates a new spatio-temporal graph signal, separable spatio-temporal graph filtering generates $J = J_s \times J_t$ spatio-temporal graph signals. The nonlinear activation is then applied to each spatio-temporal graph signal; for example, the $(j_1, j_2)$-th signal is $Z^{(j_1, j_2)} = \sigma\!\left(H_{j_1}(S_s)\, Z\, G_{j_2}^\top(S_t)\right)$. We can treat each filtered spatio-temporal graph signal as one tree node. Given $Z$ as the parent node, a scattering layer produces $J$ children nodes.
To construct ST-GST, we first initialize the input data $Z^{(0)} = X$ as the root of the scattering tree; we then recursively apply scattering layers at each node to produce children nodes, growing a scattering tree; see Fig. 2. We can index all the nodes in this scattering tree by the unique path from the root to each node. For example, $p^{(\ell)} = \left((j_1^{(1)}, j_2^{(1)}), \ldots, (j_1^{(\ell)}, j_2^{(\ell)})\right)$ is the path from the root to one tree node in the $\ell$-th layer, and the signal associated with it is $Z^{(p^{(\ell)})}$. The data matrix $Z^{(p^{(\ell)})}$ is then summarized by a pooling operator $U(\cdot)$ to obtain a lower-dimensional vector $\varphi^{(p^{(\ell)})} = U\!\left(Z^{(p^{(\ell)})}\right)$. Various pooling methods lead to different dimensions of scattering features. Common choices for $U(\cdot)$ include averaging in the spatial domain ($U = \frac{1}{N}\mathbf{1}_{1 \times N}$, $\varphi = UZ \in \mathbb{R}^T$), averaging in the temporal domain ($U = \frac{1}{T}\mathbf{1}_{T \times 1}$, $\varphi = ZU \in \mathbb{R}^N$), and averaging in both modalities ($U = \frac{1}{NT}\mathbf{1}_{N \times T}$, $\varphi$ is the sum of the entries of $U \circ Z \in \mathbb{R}$, where $\circ$ represents the Hadamard product). Finally, all scattering features $\varphi^{(p^{(\ell)})}$ are concatenated to construct the scattering feature map $\Phi(S_s, S_t, X) := \left\{\{\varphi^{(p^{(\ell)})}\}_{\text{all } p^{(\ell)}}\right\}_{\ell=0}^{L-1}$.
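A minimal sketch of the separable ST-GST forward pass just described, assuming NumPy; the function name `st_gst`, the toy filter banks, and the choice of spatial averaging for pooling are our assumptions, not the authors' code:

```python
import numpy as np

def st_gst(X, H_bank, G_bank, L):
    # Separable ST-GST forward pass: at each layer, pool every tree node with
    # spatial averaging, then branch into J = |H_bank| * |G_bank| children via
    # sigma(H_j1 Z G_j2^T) with sigma = |.| (absolute value).
    features = []
    layer = [X]                                         # root: Z^(0) = X
    for _ in range(L):
        features.extend(Z.mean(axis=0) for Z in layer)  # phi = (1/N) 1 Z  in R^T
        layer = [np.abs(H @ Z @ G.T)
                 for Z in layer for H in H_bank for G in G_bank]
    # (Children built on the final pass are never pooled, matching L layers.)
    return np.concatenate(features)

# Toy usage with random (hypothetical) filter banks.
rng = np.random.default_rng(3)
N, T = 5, 8
H_bank = [rng.standard_normal((N, N)) for _ in range(2)]   # J_s = 2
G_bank = [rng.standard_normal((T, T)) for _ in range(2)]   # J_t = 2
phi = st_gst(rng.standard_normal((N, T)), H_bank, G_bank, L=2)
print(phi.shape)   # (T * (1 + J),) = (8 * 5,) = (40,)
```

The output dimension matches the text: with $L$ layers and $J$ scales, the feature map has $T \sum_{\ell=0}^{L-1} J^{\ell}$ entries under spatial-average pooling.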
Joint ST-GST. Since we deal with a unifying graph, we can use the spatio-temporal product graph directly in combination with the ordinary graph scattering transform (Gama et al. (2019b)).
Comparison with ST-GCNs. One distinct difference between ST-GST and ST-GCNs lies in the fact that the trainable graph convolution in each layer of an ST-GCN performs the multiplication between a spatial graph shift and the feature matrix, which only extracts low-frequency information over the graph, while ST-GST leverages multiple spatio-temporal graph filters to cover multiple frequency bands. Furthermore, the predefined filter coefficients form a frame (3) in each layer of ST-GST, which is crucial for establishing the stability of ST-GST, as shown in the next section.
THEORETICAL ANALYSIS
Stability is the key to designing robust and reliable algorithms. However, since the training process of ST-GCNs is data-driven, it might be vulnerable to small perturbations added to the training data, which may lead to significant degradation in practice. Trainable parameters make it hard to develop a theoretical analysis for ST-GCNs. In contrast, here we show that the proposed separable ST-GST is stable to perturbations of both spatio-temporal graph signals and structures. All proofs of the statements in this section are explained thoroughly in Appendix B. Unless specified otherwise, $\|x\|$ is the $\ell_2$ norm of a vector $x$, while $\|X\|$ and $\|X\|_2$ are the Frobenius and spectral norms of a matrix $X$, respectively.
Here we show the results for the separable spatio-temporal graph scattering transform, but all the results can be extended to the joint version. Before introducing perturbations, we first show that separable spatio-temporal graph wavelets also satisfy certain frame bounds. Thus, with a separable construction, we can control the bound constants for spatio-temporal wavelets and build tight frames.
Lemma 1. Let $\{H_{j_1}\}_{j_1=1}^{J_s}$ and $\{G_{j_2}\}_{j_2=1}^{J_t}$ be the wavelet filter banks for the spatial and temporal domains, respectively. Both satisfy frame properties such that for any $x \in \mathbb{R}^N$ and $y \in \mathbb{R}^T$,
$$A_1^2 \|x\|^2 \le \sum_{j_1=1}^{J_s} \|H_{j_1}(S_s)x\|^2 \le B_1^2 \|x\|^2, \qquad A_2^2 \|y\|^2 \le \sum_{j_2=1}^{J_t} \|G_{j_2}(S_t)y\|^2 \le B_2^2 \|y\|^2. \tag{3}$$
Then, for any $Z \in \mathbb{R}^{N \times T}$ and its corresponding reshaped vector $z \in \mathbb{R}^{NT}$, it holds that
$$A_1^2 A_2^2 \|Z\|^2 \le \sum_{j_1=1}^{J_s} \sum_{j_2=1}^{J_t} \left\|\left(H_{j_1}(S_s) \otimes G_{j_2}(S_t)\right)z\right\|^2 = \sum_{j_1=1}^{J_s} \sum_{j_2=1}^{J_t} \left\|H_{j_1}(S_s)\, Z\, G_{j_2}^\top(S_t)\right\|^2 \le B_1^2 B_2^2 \|Z\|^2.$$
Lemma 1 guarantees that the separable design also leads to valid wavelets. Furthermore, when we choose both the spatial $\{H_{j_1}\}_{j_1=1}^{J_s}$ and temporal $\{G_{j_2}\}_{j_2=1}^{J_t}$ banks to be tight frames with $A_1 = B_1 = A_2 = B_2 = 1$ (Shuman et al., 2015), the resulting separable wavelet also forms a tight frame. In what follows, denote by $B = B_1 \times B_2$ the frame bound constant for the separable spatio-temporal graph wavelets, and assume separable ST-GST is configured with $L$ layers and $J = J_s \times J_t$ scales at each layer.
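Lemma 1 is easy to verify numerically. In the NumPy sketch below (an assumption; names are illustrative), the tightest constants $A^2, B^2$ of each bank are the extreme eigenvalues of the frame operator $\sum_j M_j^\top M_j$, and the separable energy is sandwiched by their products:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, Js, Jt = 4, 5, 3, 3
H_bank = [rng.standard_normal((N, N)) for _ in range(Js)]
G_bank = [rng.standard_normal((T, T)) for _ in range(Jt)]

def frame_consts(bank):
    # sum_j ||M_j x||^2 = x^T (sum_j M_j^T M_j) x, so A^2 and B^2 are the
    # smallest / largest eigenvalues of the frame operator.
    ev = np.linalg.eigvalsh(sum(M.T @ M for M in bank))
    return ev[0], ev[-1]

A1sq, B1sq = frame_consts(H_bank)
A2sq, B2sq = frame_consts(G_bank)

Z = rng.standard_normal((N, T))
energy = sum(np.linalg.norm(H @ Z @ G.T) ** 2 for H in H_bank for G in G_bank)
assert A1sq * A2sq * np.linalg.norm(Z) ** 2 <= energy + 1e-9   # Lemma 1, lower bound
assert energy <= B1sq * B2sq * np.linalg.norm(Z) ** 2 + 1e-9   # Lemma 1, upper bound
```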
STABILITY TO PERTURBATION OF SPATIO-TEMPORAL GRAPH SIGNALS
Consider the perturbed spatio-temporal graph signal $\widehat{X} = X + \Delta \in \mathbb{R}^{N \times T}$, where $\Delta \in \mathbb{R}^{N \times T}$ is the perturbation matrix. Such an additive model can represent measurement noise caused by devices or adversarial perturbations added manually. Theorem 1 shows that the feature map for the perturbed signal does not deviate much from the original feature map under small input perturbations.

Theorem 1. Consider the additive noise model for the input data $X$. It then holds that
$$\frac{\left\|\Phi(S_s, S_t, X) - \Phi(S_s, S_t, \widehat{X})\right\|}{\sqrt{T \sum_{\ell=0}^{L-1} J^{\ell}}} \le \frac{1}{\sqrt{NT}} \sqrt{\frac{\sum_{\ell=0}^{L-1} B^{2\ell}}{\sum_{\ell=0}^{L-1} J^{\ell}}}\; \|\Delta\|. \tag{4}$$
The difference between the outputs is normalized by the square root of the dimension of the final feature map. Note that we can easily construct spatio-temporal wavelets with $B = 1$ when the spatial and temporal wavelets are both tight frames; the normalized bound in (4) then indicates that the transform is insensitive to perturbations of the input signals, as the factor is much smaller than 1.
STABILITY TO PERTURBATION OF SPATIO-TEMPORAL GRAPH STRUCTURES
Perturbations of the underlying graph usually happen when the graph is unknown or when the graph changes over time (Segarra et al., 2017). Since this kind of perturbation usually happens in the spatial domain, here we only consider structure perturbations of the spatial graph. Specifically, we consider the perturbed spatial graph $\widehat{S}_s = S_s + E^\top S_s + S_s E$, where $E$ is the perturbation matrix and the temporal graph $S_t$ is unchanged. For detailed descriptions see Appendix B.

Theorem 2. Suppose the eigenvalues $\{m_i\}_{i=1}^N$ of $E \in \mathbb{R}^{N \times N}$ are organized in order such that $|m_1| \le |m_2| \le \cdots \le |m_N|$, satisfying $|m_N| \le \epsilon/2$ and $|m_i/m_N - 1| \le \epsilon$ for $\epsilon > 0$ and all $i$'s. Suppose the spatial filter bank $\{H_{j_1}\}_{j_1=1}^{J_s}$ satisfies $\max_i |\lambda h_i'(\lambda)| \le C$ and the temporal filter bank $\{G_{j_2}\}_{j_2=1}^{J_t}$ has limited spectral response $\max_i |g_i(\lambda)| \le D$. It then holds that
$$\frac{\left\|\Phi(S_s, S_t, X) - \Phi(\widehat{S}_s, S_t, X)\right\|}{\sqrt{T \sum_{\ell=0}^{L-1} J^{\ell}}} \le \epsilon\, \frac{CD}{B\sqrt{NT}} \sqrt{\frac{\sum_{\ell=0}^{L-1} \ell^2 \left(B^2 J\right)^{\ell}}{\sum_{\ell=0}^{L-1} J^{\ell}}}\; \|X\|, \tag{5}$$
where $\epsilon$ characterizes the perturbation level. Theorem 2 shows that ST-GST is a stable transform also under structure deformations, as the norm of the change of the feature map is linear in $\epsilon$. It is worth pointing out that the upper bounds in both Theorems 1 and 2 depend only on the choice of filter banks and the structure of the scattering tree, rather than on quantities related to the specific graph supports $S_s$ and $S_t$, as in previous works (Gama et al., 2019b; Zou & Lerman, 2020; Levie et al., 2019).
EXPERIMENTAL RESULTS
We now evaluate the performance of the proposed ST-GST on the skeleton-based action recognition task.

Comparison with state-of-the-art methods. We consider two datasets, MSR Action3D and NTU-RGB+D (cross-subject). For MSR Action3D, the proposed ST-GST is compared with GFT facilitated by temporal pyramid matching (GFT+TPM) (Kao et al., 2019), the Bayesian hierarchical dynamic model (HDM) (Zhao et al., 2019), and a few deep learning approaches, including temporal convolutional neural networks (Kim & Reiter, 2017), ST-GCN (Yan et al., 2018), and MS-G3D (Liu et al., 2020). For NTU-RGB+D, Deep LSTM (Liu et al., 2019), part-aware LSTM (PA-LSTM) (Liu et al., 2019) and spatio-temporal LSTM with trust gates (ST-LSTM+TG) (Liu et al., 2016) are included in the comparison. Methods labeled "fixed topology" are modified so as not to use adaptive training of the adjacency matrix, in order for the comparison with ST-GST to be fair. Tables 1 and 2 compare the classification accuracies on MSR Action3D and NTU-RGB+D, respectively. We see that even without any training, the performance of ST-GST is better than other non-deep-learning and LSTM-based methods, and is comparable with state-of-the-art GCN-based methods on the large-scale dataset. Further, ST-GST outperforms all other methods when the training set is small. Fig. 3(a) shows the classification accuracy as a function of the training ratio. When the training ratio is less than 20%, ST-GST significantly outperforms ST-GCN. Fig. 3(b) shows the accuracy-running time plot, reflecting that ST-GST is much faster than ST-GCN with similar classification performance.
ST-GST works well in small-scale-data regime. Table 1 and Fig. 3(a) show that ST-GST outperforms other deep learning methods in the small-scale-data regime, which can be explained as follows. The good performance of spatio-temporal graph neural networks highly relies on the assumption that the training data is abundant. When the size of training set is limited, most of them can be easily trapped into bad local optima due to overfitting, resulting in a significant drop of classification accuracy. But in practice, obtaining a huge amount of training data with high-quality labels could be extremely expensive. On the other hand, since ST-GST is a non-trainable framework, filter coefficients in ST-GST are mathematically designed rather than trained by data, which avoids the problem of overfitting when the training ratio is low. Another advantage of ST-GST compared to ST-GCN is that it requires less computation because no training process is involved in ST-GST.
Separable design is better than joint design. Tables 1 and 2 also show that separable spatio-temporal graph wavelets work much better than joint ones, achieving a 25% increase in classification accuracy on the MSR Action3D dataset. This result is consistent with our analysis in Section 3.2. The intuition is that when dealing with spatio-temporal data generated from complex structures like skeleton sequences, the fixed dependencies generated by product graphs highly restrict how signals can be diffused in spatio-temporal graphs and thus limit the efficiency of feature extraction.
Nonlinearity is beneficial. Fig. 3(c) compares ST-GST with and without nonlinearity and shows that the nonlinearity is critical to ST-GST, also reflecting the potential effect of nonlinearity in ST-GCNs.
CONCLUSIONS
In this work we propose a novel spatio-temporal graph scattering transform (ST-GST), which can be viewed as one forward pass of spatio-temporal graph convolutional networks (ST-GCNs) without any training. ST-GST is stable to small perturbations of both input signals and structures. Our experiments show that: i) ST-GST can achieve better performance than both non-deep-learning and ST-GCN-based methods when the number of training samples is limited; ii) designing spatial and temporal graph filters separately is more flexible and computationally efficient than designing them jointly; and iii) the nonlinearity is critical to the performance.
ACKNOWLEDGEMENT
This work is fully supported by Mitsubishi Electric Research Laboratories (MERL), where Chao Pan was a research intern, Siheng Chen was a research scientist and Antonio Ortega is a consultant.
A DIFFERENT DESIGN OF GRAPH WAVELETS
There are many off-the-shelf, well-developed graph wavelets we can choose from. They mainly focus on extracting features from multiple frequency bands of the input signal spectrum. Some of them are shown as follows.
Monic Cubic wavelets. Monic Cubic wavelets (Hammond et al., 2011) define the kernel function $h(\lambda)$ as
$$h(\lambda) = \begin{cases} \lambda & \text{for } \lambda < 1; \\ -5 + 11\lambda - 6\lambda^2 + \lambda^3 & \text{for } 1 \le \lambda \le 2; \\ 2/\lambda & \text{for } \lambda > 2. \end{cases}$$
Different scales of filters are implemented by scaling and translation of above kernel function.
Itersine wavelets. Itersine wavelets define the kernel function at scale $j$ as
$$h_j(\lambda) = \sin\!\left(\frac{\pi}{2} \cos^2\!\left(\pi\left(\lambda - \frac{j-1}{2}\right)\right)\right) \mathbb{1}\!\left\{\frac{j}{2} - 1 \le \lambda \le \frac{j}{2}\right\}.$$
Itersine wavelets form tight frames.
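For reference, a minimal NumPy sketch of the two spectral kernels above and of how a kernel is realized as a graph filter through the eigendecomposition (an assumption; function names are hypothetical):

```python
import numpy as np

def monic_cubic(lam):
    """Monic Cubic mother kernel from Hammond et al. (2011); different scales
    are obtained by scaling and translating its argument."""
    lam = np.asarray(lam, dtype=float)
    out = np.empty_like(lam)
    lo, mid, hi = lam < 1, (lam >= 1) & (lam <= 2), lam > 2
    out[lo] = lam[lo]
    out[mid] = -5 + 11 * lam[mid] - 6 * lam[mid] ** 2 + lam[mid] ** 3
    out[hi] = 2 / lam[hi]
    return out

def itersine(lam, j):
    """Itersine kernel at scale j, restricted to [j/2 - 1, j/2] by the indicator."""
    lam = np.asarray(lam, dtype=float)
    ind = (lam >= j / 2 - 1) & (lam <= j / 2)
    return np.sin(0.5 * np.pi * np.cos(np.pi * (lam - (j - 1) / 2)) ** 2) * ind

def spectral_filter(S, kernel):
    """Realize H(S) = V diag(kernel(lambda)) V^T for a symmetric graph shift S."""
    lam, V = np.linalg.eigh(S)
    return V @ np.diag(kernel(lam)) @ V.T
```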
Geometric scattering wavelets. The geometric scattering wavelet filter bank (Gao et al., 2019) contains a set of filters based on the lazy random walk matrix. The filter at scale $j$ is defined as $H_j(S) = S^{2^{j-1}} - S^{2^j} = S^{2^{j-1}}\left(I - S^{2^{j-1}}\right)$, where $S = \frac{1}{2}\left(I + AD^{-1}\right)$ is the lazy random walk matrix and $D$ is the degree matrix.
Note that one is also allowed to customize either the spatial or the temporal graph wavelets, as long as they form a frame and satisfy the integral Lipschitz constraint shown below:
$$A^2 \|x\|^2 \le \sum_{j=1}^{J} \|H_j x\|^2 \le B^2 \|x\|^2, \qquad |\lambda h'(\lambda)| \le \text{const} \quad \forall \lambda,$$
where $A, B$ are scalar constants and $h'(\cdot)$ is the gradient of the kernel function.
B PROOFS
B.1 PROOF OF LEMMA 1
By reshaping the signal from $Z$ to $z$ with $Z_{s,t} = z_{(s-1)T+t}$, we have
$$\sum_{j_1=1}^{J_s} \sum_{j_2=1}^{J_t} \left\|\left(H_{j_1}(S_s) \otimes G_{j_2}(S_t)\right)z\right\|^2 = \sum_{j_1=1}^{J_s} \sum_{j_2=1}^{J_t} \left\|H_{j_1}(S_s)\, Z\, G_{j_2}^\top(S_t)\right\|^2.$$
Since $S_s$ and $S_t$ do not change over the computation, we simply write $H_{j_1}$ and $G_{j_2}$ for $H_{j_1}(S_s)$ and $G_{j_2}(S_t)$, respectively. Suppose
$$H_{j_1} = \begin{bmatrix} h_{11} & \cdots & h_{1N} \\ \vdots & \ddots & \vdots \\ h_{N1} & \cdots & h_{NN} \end{bmatrix} \in \mathbb{R}^{N \times N}; \quad \text{then the Kronecker product is} \quad H_{j_1} \otimes G_{j_2} = \begin{bmatrix} h_{11} G_{j_2} & \cdots & h_{1N} G_{j_2} \\ \vdots & \ddots & \vdots \\ h_{N1} G_{j_2} & \cdots & h_{NN} G_{j_2} \end{bmatrix}.$$
Applying it to the vector $z$ gives the filtered signal $y_{j_1,j_2} = (H_{j_1} \otimes G_{j_2})z \in \mathbb{R}^{NT}$. The first $T$ elements of $y_{j_1,j_2}$ can be written as
$$y_{j_1,j_2}(1{:}T) = \sum_{i=1}^{N} h_{1i}\, G_{j_2} \begin{bmatrix} Z_{i,1} \\ Z_{i,2} \\ \vdots \\ Z_{i,T} \end{bmatrix} = G_{j_2} \sum_{i=1}^{N} h_{1i} \begin{bmatrix} Z_{i,1} \\ Z_{i,2} \\ \vdots \\ Z_{i,T} \end{bmatrix}.$$
Therefore we have
$$A_2^2 \left\|\sum_{i=1}^{N} h_{1i} \begin{bmatrix} Z_{i,1} \\ \vdots \\ Z_{i,T} \end{bmatrix}\right\|^2 \le \sum_{j_2} \|y_{j_1,j_2}(1{:}T)\|^2 \le B_2^2 \left\|\sum_{i=1}^{N} h_{1i} \begin{bmatrix} Z_{i,1} \\ \vdots \\ Z_{i,T} \end{bmatrix}\right\|^2.$$
Thus $\sum_{j_2} \|y_{j_1,j_2}\|^2$ can be sandwiched as
$$A_2^2 \sum_{k=1}^{N} \left\|\sum_{i=1}^{N} h_{ki} \begin{bmatrix} Z_{i,1} \\ \vdots \\ Z_{i,T} \end{bmatrix}\right\|^2 \le \sum_{j_2} \|y_{j_1,j_2}\|^2 \le B_2^2 \sum_{k=1}^{N} \left\|\sum_{i=1}^{N} h_{ki} \begin{bmatrix} Z_{i,1} \\ \vdots \\ Z_{i,T} \end{bmatrix}\right\|^2.$$
By the definition of the vector $\ell_2$ norm, we can rewrite the upper and lower bounds above as
$$A_2^2 \sum_{i=1}^{T} \left\|H_{j_1} \begin{bmatrix} Z_{1,i} \\ Z_{2,i} \\ \vdots \\ Z_{N,i} \end{bmatrix}\right\|^2 \le \sum_{j_2} \|y_{j_1,j_2}\|^2 \le B_2^2 \sum_{i=1}^{T} \left\|H_{j_1} \begin{bmatrix} Z_{1,i} \\ Z_{2,i} \\ \vdots \\ Z_{N,i} \end{bmatrix}\right\|^2.$$
Summing the above quantity over $j_1$ gives
$$A_1^2 A_2^2 \|Z\|^2 = A_1^2 A_2^2 \sum_{i=1}^{T} \left\|\begin{bmatrix} Z_{1,i} \\ \vdots \\ Z_{N,i} \end{bmatrix}\right\|^2 \le \sum_{j_1, j_2} \|y_{j_1,j_2}\|^2 \le B_1^2 B_2^2 \sum_{i=1}^{T} \left\|\begin{bmatrix} Z_{1,i} \\ \vdots \\ Z_{N,i} \end{bmatrix}\right\|^2 = B_1^2 B_2^2 \|Z\|^2,$$
which completes the proof. Lemma 1 is a very handy result. It shows that we can easily construct new spatio-temporal wavelets simply by combining spatial and temporal ones. Moreover, the constants for the new frame bound can be easily obtained once we know the characteristics of the wavelets in each domain. In particular, it also provides a convenient way to build tight frames for spatio-temporal data analysis with $A = B$, because we just need to choose tight frames for the spatial and temporal domains separately, without considering possible correlations.
B.2 PROOF OF THEOREM 1
We consider the pooling operator $U(\cdot)$ to be the average in the spatial domain in this proof, so $U = \frac{1}{N}\mathbf{1}_{1 \times N}$ and $\varphi = UZ \in \mathbb{R}^T$. The proof technique easily generalizes to any form of $U(\cdot)$. When reshaping $Z \in \mathbb{R}^{N \times T}$ to $z \in \mathbb{R}^{NT}$, the pooling operator can be simply represented as
$$U = \frac{1}{N}\left(I_T, I_T, \cdots, I_T\right) \in \mathbb{R}^{T \times NT}, \qquad \varphi = Uz.$$
Note that $\|U\|_2 = \frac{1}{\sqrt{N}}$. Consider the scattering tree nodes at the last layer $L-1$. Suppose they are indexed from $1$ to $J^{L-1}$ and associated with signals $a_1, \cdots, a_{J^{L-1}}$, and their parent nodes are indexed from $1$ to $J^{L-2}$ and associated with signals $b_1, \cdots, b_{J^{L-2}}$. When the input data $X$ is perturbed, all signals in the scattering tree change correspondingly; we denote the perturbed versions by $\widehat{a}, \widehat{b}$. Then, for the change of the feature vector located at the node with $a_1$, it holds that
$$\|\varphi_{a_1} - \varphi_{\widehat{a}_1}\|^2 = \|U(a_1 - \widehat{a}_1)\|^2 \le \|U\|_2^2\, \|a_1 - \widehat{a}_1\|^2 \le \frac{1}{N} \left\|\sigma\!\left((H_{j_1} \otimes G_{j_2})(b_1 - \widehat{b}_1)\right)\right\|^2, \tag{6}$$
where $j_1 = j_2 = 1$. The last inequality holds because we use the absolute value function as the nonlinear activation, which is non-expansive. Summing the above quantity over $j_1, j_2$ and applying the frame bound proved in Lemma 1, we have
$$\sum_{i=1}^{J^{L-1}} \|\varphi_{a_i} - \varphi_{\widehat{a}_i}\|^2 \le \frac{B^2}{N} \sum_{i=1}^{J^{L-2}} \|b_i - \widehat{b}_i\|^2. \tag{7}$$
Note that the sum of the squared norms of the changes at layer $L-2$ satisfies
$$\sum_{i=1}^{J^{L-2}} \|\varphi_{b_i} - \varphi_{\widehat{b}_i}\|^2 \le \frac{1}{N} \sum_{i=1}^{J^{L-2}} \|b_i - \widehat{b}_i\|^2. \tag{8}$$
Comparing Eq. (7) and (8), the upper bounds only differ by a factor of $B^2$. Then by induction we have
$$\left\|\Phi(S_s, S_t, X) - \Phi(S_s, S_t, \widehat{X})\right\|^2 \le \frac{1}{N} \sum_{\ell=0}^{L-1} B^{2\ell}\, \|x - \widehat{x}\|^2 = \frac{1}{N} \sum_{\ell=0}^{L-1} B^{2\ell}\, \|\Delta\|^2.$$
Normalizing by the dimension of the final feature map, we have
$$\frac{\left\|\Phi(S_s, S_t, X) - \Phi(S_s, S_t, \widehat{X})\right\|}{\sqrt{T \sum_{\ell=0}^{L-1} J^{\ell}}} \le \frac{1}{\sqrt{NT}} \sqrt{\frac{\sum_{\ell=0}^{L-1} B^{2\ell}}{\sum_{\ell=0}^{L-1} J^{\ell}}}\; \|\Delta\|. \tag{9}$$
B.3 PROOF OF THEOREM 2
Perturbations of the underlying graph usually happen when the graph is unknown or when the graph changes over time (Segarra et al., 2017). Take skeleton-based action recognition as an example. Some joints may be misrecognized as others due to measurement noise of the devices during certain frames, so the location signals of those joints are interchanged. This leads to different spatial graph structures at those time stamps. Since this kind of perturbation usually happens in the spatial domain, here we only consider structure perturbations of the spatial graph, but the results can be extended to more general cases.
Consider the original spatio-temporal graph $G$ with spatial graph shift matrix $S_s$ and temporal shift $S_t$, and the perturbed graph $\widehat{G}$ with $\widehat{S}_s$ and $S_t$. We first show that ST-GST is invariant to node permutations in the spatial domain, where the set of permutation matrices is defined as $\mathcal{P} = \{P \in \{0,1\}^{N \times N} : P\mathbf{1} = \mathbf{1},\; P^\top\mathbf{1} = \mathbf{1},\; PP^\top = I_N\}$. Note that we are considering the average in the spatial domain for $U(\cdot)$, so $U = \frac{1}{N}\mathbf{1}_{1 \times N}$ and $\varphi = UZ \in \mathbb{R}^T$, and hence $UP^\top = U$ for any $P \in \mathcal{P}$.

Lemma 2. Consider the spatial permutation $\widehat{S}_s = P^\top S_s P$, where the input data $\widehat{X} = P^\top X$ is also permuted correspondingly in the spatial domain. Then, it holds that
$$\Phi(S_s, S_t, X) = \Phi(\widehat{S}_s, S_t, \widehat{X}). \tag{10}$$
Proof. Note that the permutation carries over to all signals computed in the scattering tree; that is, $\widehat{Z}^{(p^{(\ell)})} = P^\top Z^{(p^{(\ell)})}$. Suppose for path $p^{(\ell)}$ the last two filters are $H(\widehat{S}_s)$ and $G(S_t)$; then the feature vector after pooling with respect to the new graph support and data can be written as
$$\varphi^{(p^{(\ell)})}(\widehat{S}_s, S_t, \widehat{Z}^{(p^{(\ell)})}) = U\!\left(\sigma\!\left(H(\widehat{S}_s)\, \widehat{Z}^{(p^{(\ell)})}\, G^\top(S_t)\right)\right) = U\sigma\!\left(P^\top H(S_s) P P^\top Z^{(p^{(\ell)})} G^\top(S_t)\right).$$
The last equation holds by the definition of $H(S)$. Since the nonlinear activation is applied elementwise, we can rewrite it as
$$\varphi^{(p^{(\ell)})}(\widehat{S}_s, S_t, \widehat{Z}^{(p^{(\ell)})}) = UP^\top\sigma\!\left(H(S_s) P P^\top Z^{(p^{(\ell)})} G^\top(S_t)\right) = U\sigma\!\left(H(S_s)\, Z^{(p^{(\ell)})}\, G^\top(S_t)\right) = \varphi^{(p^{(\ell)})}(S_s, S_t, Z^{(p^{(\ell)})}).$$
This conclusion holds independently of the specific path $p^{(\ell)}$, so it holds for every feature vector after pooling in the scattering tree. Since the final feature map is just a concatenation of all feature vectors, the proof is complete.
Lemma 2 shows that the output of ST-GST is essentially independent of the node ordering in spatial domain, as long as the permutation is consistent across all time stamps. This result is intuitive because the output of graph convolution should only depend on relative neighborhood structure of each node. Since node reordering will not alter neighborhood topology, the output should remain unchanged.
Based on Lemma 2, we use a relative perturbation model for structure modifications (Gama et al., 2019b), which focuses more on the change of neighborhood topology compared to the absolute perturbations adopted in Levie et al. (2019). Define the set of permutations that make $S_s$ and $\widehat{S}_s$ the closest as $\mathcal{P}_s := \arg\min_{P \in \mathcal{P}} \|P^\top \widehat{S}_s P - S_s\|_2$. Consider the set of perturbation matrices $\mathcal{E}(S_s, \widehat{S}_s) = \{E \mid P^\top \widehat{S}_s P = S_s + E^\top S_s + S_s E,\; P \in \mathcal{P}_s,\; E \in \mathbb{R}^{N \times N}\}$. Then the relative distance measuring structure perturbations can be defined as $d(S_s, \widehat{S}_s) = \min_{E \in \mathcal{E}(S_s, \widehat{S}_s)} \|E\|_2$.
Note that if $\widehat{S}_s = P^\top S_s P$, meaning the structure perturbation is purely a permutation, then the relative distance $d(S_s, \widehat{S}_s) = 0$, which is consistent with the result shown in Lemma 2. Therefore, without loss of generality, we can assume that $P = I_N$ and $\widehat{S}_s = S_s + E^\top S_s + S_s E$ in what follows. A small numerical illustration of this perturbation model is sketched below; with this formulation, we are then ready to prove Lemma 3.
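The sketch below (NumPy assumed; all names and values are illustrative) builds a perturbed shift with a diagonal $E$, as in the stability experiments of this appendix. A diagonal $E = (\epsilon/2) I$ trivially satisfies the eigenvalue conditions of Theorem 2:

```python
import numpy as np

rng = np.random.default_rng(5)
N, eps = 6, 0.1

# A small symmetric spatial shift (random adjacency; values are illustrative).
A = np.triu((rng.random((N, N)) < 0.4).astype(float), 1)
S_s = A + A.T

# Diagonal E: all eigenvalues m_i = eps/2, so |m_N| <= eps/2 and |m_i/m_N - 1| = 0.
E = (eps / 2) * np.eye(N)
S_hat = S_s + E.T @ S_s + S_s @ E        # here this is simply (1 + eps) * S_s

# With P = I_N, the relative distance d(S_s, S_hat) is at most ||E||_2 = eps/2.
print(np.linalg.norm(E, 2), np.allclose(S_hat, (1 + eps) * S_s))
```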
Lemma 3. Suppose the eigenvalues $\{m_i\}_{i=1}^N$ of $E$ are organized in order such that $|m_1| \le |m_2| \le \cdots \le |m_N|$, satisfying $|m_N| \le \epsilon/2$ and $|m_i/m_N - 1| \le \epsilon$ for $\epsilon > 0$. For the spatial graph filter $H(S_s)$ and the temporal graph filter $G(S_t)$, denote their kernel functions by $h(\lambda)$ and $g(\lambda)$, respectively. If for all $\lambda$, $h(\lambda)$ is chosen to satisfy the integral Lipschitz constraint $|\lambda h'(\lambda)| \le C$ and $g(\lambda)$ has bounded spectral response $|g(\lambda)| \le D$, then it holds that
$$\left\|H(S_s) \otimes G(S_t) - H(\widehat{S}_s) \otimes G(S_t)\right\|_2 \le \epsilon CD + O(\epsilon^2). \tag{11}$$
The bound shown in Lemma 3 indicates that the difference in the output caused by changing the spatial graph support from $S_s$ to $\widehat{S}_s$ is proportional to $\epsilon$, a scalar characterizing the level of the perturbation. The constraints on the eigenvalues of $E$ limit the change of the graph structure; a more detailed description explaining the necessity of such constraints can be found in Gama et al. (2019b). With Lemma 3 in hand, we are ready to show the change of the feature vector after pooling at each node in the scattering tree under such structure perturbations.
Lemma 4. Consider an ST-GST with $L$ layers and $J = J_s \times J_t$ scales at each layer. Suppose that the graph filter bank forms a frame with upper bound $B = B_1 \times B_2$, where $B_1, B_2$ are the frame bounds for the spatial and temporal domains, respectively. Suppose that for all $\lambda$, the spatial wavelet filter bank $\{H_{j_1}\}_{j_1=1}^{J_s}$ satisfies $\max_i |\lambda h_i'(\lambda)| \le C$ and the temporal wavelet filter bank $\{G_{j_2}\}_{j_2=1}^{J_t}$ satisfies $\max_i |g_i(\lambda)| \le D$, with the other conditions the same as in Lemma 3. Then for the change of the feature vector $\varphi_{p^{(\ell)}}$ associated with path $p^{(\ell)}$ it holds that
$$\left\|\varphi_{p^{(\ell)}}(S_s, S_t, X) - \varphi_{p^{(\ell)}}(\widehat{S}_s, S_t, X)\right\| \le \epsilon\, \ell\, \frac{1}{\sqrt{N}}\, CD\, B^{\ell-1}\, \|X\|.$$
Figure 2: Scattering tree of separable ST-GST with $L = 3$, $J_s = J_t = 2$.
Figure 3: Performance comparisons with various settings of training ratio, time and nonlinearity.
Experimental setup. The number of layers, $L$, the number of spatial wavelet scales, $J_s$, and the number of temporal wavelet scales, $J_t$, are represented by $(J_s, J_t, L)$ for separable ST-GST, and by $(J, L)$ for joint ST-GST. Training ratio means the fraction of data used for training within the training set. For the spatial domain, we use the skeleton graph; and for the temporal domain, we use a line graph connecting consecutive time stamps; see Fig. 1(a)(b). Geometric scattering wavelets are used in both domains, and the nonlinear activation $\sigma(\cdot)$ is the absolute value function, which has the energy-preserving property. Features output by ST-GST are then fed to a random forest classifier with 300 decision trees for classification.
Proof. From Proposition 2 in Gama et al. (2019b) we have that when $E$ satisfies the above conditions, $\|H(S_s) - H(\widehat{S}_s)\|_2 \le \epsilon C + O(\epsilon^2)$. So
$$\left\|H(S_s) \otimes G(S_t) - H(\widehat{S}_s) \otimes G(S_t)\right\|_2 = \left\|\left(H(S_s) - H(\widehat{S}_s)\right) \otimes G(S_t)\right\|_2 \le \left\|H(S_s) - H(\widehat{S}_s)\right\|_2 \left\|G(S_t)\right\|_2 \le \epsilon CD + O(\epsilon^2).$$
The second inequality holds because $H(S_s) - H(\widehat{S}_s)$ is a symmetric matrix, which admits an eigendecomposition $F\Omega F^\top$, and $(F\Omega F^\top) \otimes (V\Lambda V^\top) = (F \otimes V)(\Omega \otimes \Lambda)(F \otimes V)^\top$ holds, which finishes the proof. As for general structural perturbations, where we want to bound $\|H(S_s) \otimes G(S_t) - H(\widehat{S}_s) \otimes G(\widehat{S}_t)\|_2$, we can add and subtract the term $H(\widehat{S}_s) \otimes G(S_t)$, use the triangle inequality, and further bound the two resulting terms with more assumptions on $h(\lambda)$ and $g(\lambda)$.
Figure 4: Comparisons of performance under different levels of perturbations.
Table 1: Classification accuracy (MSR Action3D with 288 training and 269 testing samples).

Method | Accuracy (%)
GNNs:
Deep LSTM | 60.7
PA-LSTM | 62.9
ST-LSTM+TG | 69.2
Temporal Conv. | 74.3
ST-GCN (fixed topology) | 75.8
Scattering:
Separable ST-GST (5, 20, 2) | 68.7
Separable ST-GST (5, 20, 3) | 73.1
Joint Kronecker ST-GST (15, 3) | 55.7
Joint Cartesian ST-GST (15, 3) | 56.2
Joint Strong ST-GST (15, 3) | 57.1

Table 2: Classification accuracy (NTU-RGB+D with 40,320 training and 16,560 testing samples).
Table 5: Performance for different choices of spatial and temporal wavelets (MSR Action3D) with setting (5, 15, 3).

Spatial wavelet | Temporal wavelet | Accuracy (%)
Geometric | Geometric | 85.9
Geometric | MonicCubic | 76.6
Geometric | Itersine | 73.6
MonicCubic | Geometric | 82.9
Itersine | Geometric | 82.5
MonicCubic | MonicCubic | 80.7
MonicCubic | Itersine | 78.4
Itersine | MonicCubic | 76.2
Itersine | Itersine | 80.7
¹ Some other choices of a graph shift include the graph Laplacian matrix, the graph transition matrix, and their normalized counterparts. The adjacency matrix is considered here for notational simplicity.
Proof. Expand $\|\varphi_{p^{(\ell)}}(S_s, S_t, X) - \varphi_{p^{(\ell)}}(\widehat{S}_s, S_t, X)\|$, where repeatedly applying $\sigma(H_{j}(\cdot)\,\cdot\,G^\top(\cdot))$ according to the path $p^{(\ell)}$ is a shorthand for applying the spatio-temporal filters and the nonlinear activation, in order, $\ell$ times to the input data. Add and subtract the term $\sigma(H_j(\widehat{S}_s)\,\cdot\,G^\top(S_t))$ and apply the triangle inequality; recursive quantities can then be observed, and the bound can be solved explicitly (Gama et al., 2019b). By induction and the conclusion from Lemma 3, we obtain the claimed bound.

NTU-RGB+D (Liu et al., 2019) is currently the largest dataset with 3D joint annotations for the human action recognition task. It covers 60 action types and 40 subjects. The dataset contains 56,880 action clips with a maximum of 300 frames, and there are 25 joints for each subject in one clip. Each clip is guaranteed to have at most 2 subjects. The cross-subject benchmark of NTU-RGB+D includes 40,320 clips for training and 16,560 for testing.

Full table of performance on the MSR Action3D dataset. The table contains a performance comparison for different algorithms with different sets of parameters on the MSR Action3D dataset. Note that the triple shown after ST-GST represents the values of $(J_s, J_t, L)$. Methods labeled "fixed topology" are modified so as not to use adaptive training of the adjacency matrix, in order for the comparison with ST-GST to be fair. Methods labeled "learnable topology" use adaptive training of the adjacency matrix to further validate our claim. Other configurations of the compared methods are set to their defaults. From the table we can see that ST-GST outperforms all other methods even when the graph topology can be learned by neural networks. The intuition behind this is that deep learning methods need a large amount of training data due to their complex structures, and they can easily be trapped into bad local optima due to overfitting when the size of the training set is limited, which is common in practice. Also, the good performance of ST-GST in the sparse-label regime could potentially inspire active learning for processing spatio-temporal data (Bilgic et al., 2010).
Ali N. Akansu and Richard A. Haddad. Multiresolution signal decomposition: transforms, subbands, and wavelets. Academic Press, 2nd edition, 2000.

Martin Anthony and Peter L. Bartlett. Neural network learning: Theoretical foundations. Cambridge University Press, 2009.

Mustafa Bilgic, Lilyana Mihalkova, and Lise Getoor. Active learning for networked data. In Proceedings of the 27th International Conference on Machine Learning, pp. 79-86, 2010.

Paulo Vinicius Koerich Borges, Nicola Conci, and Andrea Cavallaro. Video-based human behavior understanding: A survey. IEEE Transactions on Circuits and Systems for Video Technology, 23(11):1993-2008, 2013.

Joan Bruna and Stéphane Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872-1886, 2013.

Ke Cheng, Yifan Zhang, Xiangyu He, Weihan Chen, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 183-192, 2020.

Fernando Gama, Alejandro Ribeiro, and Joan Bruna. Diffusion scattering transforms on graphs. In International Conference on Learning Representations, 2019a.

Fernando Gama, Alejandro Ribeiro, and Joan Bruna. Stability of graph scattering transforms. In Advances in Neural Information Processing Systems, pp. 8038-8048, 2019b.

Feng Gao, Guy Wolf, and Matthew Hirn. Geometric scattering for graph data analysis. In International Conference on Machine Learning, pp. 2122-2131, 2019.

Francesco Grassi, Andreas Loukas, Nathanaël Perraudin, and Benjamin Ricaud. A time-vertex signal processing framework: Scalable processing and meaningful representations for time-series on graphs. IEEE Transactions on Signal Processing, 66(3):817-829, 2017.

David K. Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011.

Yue Hu, Siheng Chen, Ya Zhang, and Xiao Gu. Collaborative motion prediction via neural motion message passing. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6318-6327. IEEE, 2020.

Vassilis N. Ioannidis, Siheng Chen, and Georgios B. Giannakis. Pruned graph scattering transforms. In International Conference on Learning Representations, 2020.

Jiun-Yu Kao, Antonio Ortega, Dong Tian, Hassan Mansour, and Anthony Vetro. Graph based skeleton modeling for human activity analysis. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 2025-2029. IEEE, 2019.

Tae Soo Kim and Austin Reiter. Interpretable 3D human action analysis with temporal convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1623-1631. IEEE, 2017.

Ron Levie, Elvin Isufi, and Gitta Kutyniok. On the transferability of spectral graph filters. In 2019 13th International Conference on Sampling Theory and Applications (SampTA), pp. 1-5. IEEE, 2019.

Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian. Actional-structural graph convolutional networks for skeleton-based action recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3595-3603, 2019.

Wanqing Li, Zhengyou Zhang, and Zicheng Liu. Action recognition based on a bag of 3D points. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 9-14. IEEE, 2010.

Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang. Spatio-temporal LSTM with trust gates for 3D human action recognition. In European Conference on Computer Vision, pp. 816-833. Springer, 2016.
Method | Accuracy (%)
GNNs:
ST-GCN (fixed topology) |
ST-GCN (learnable topology) | 56.0
Temporal Conv. (resnet) | 67.3
Temporal Conv. (resnet-v3-gap) | 69.9
Temporal Conv. (resnet-v4-gap) | 72.1
MS-G3D (GCN scales=10, G3D scales=6) | 80.3
MS-G3D (GCN scales=5, G3D scales=5) | 81.4
MS-G3D (GCN scales=8, G3D scales=5) | 82.2
Scattering:
Separable ST-GST (5, 5, 3) | 73.6
Separable ST-GST (5, 5, 4) | 72.9
Separable ST-GST (5, 10, 3) | 81.4
Separable ST-GST (5, 15, 3) | 85.9
Separable ST-GST (5, 20, 3) | 87.0
Joint Kronecker ST-GST (15, 3) | 61.0
Joint Cartesian ST-GST (15, 3) | 59.1
Joint Strong ST-GST (15, 3) | 61.7

Table 3: Full comparison of classification accuracy (MSR Action3D with 288 training and 269 testing samples).
Table 4: Performance for different methods on MSR Action3D with standard deviations.
Performance on the MSR Action3D dataset with standard deviations. We repeat part of our experiments 20 times on the MSR Action3D dataset, especially for the joint approaches, to obtain the standard deviations of the classification accuracy. The results are shown in Table 4. Note that since ST-GST is a mathematically designed transform, the output features are identical across trials, and the randomness comes from the classifier used afterwards (a random forest in this case). It can be seen that the standard deviations are comparable across all these methods, and therefore the conclusion that separable ST-GST consistently outperforms joint ST-GST still holds.
Comparison between different choices of wavelets. In practice we find that using graph geometric scattering wavelets (Gao et al., 2019) for both the spatial and temporal domains achieves the best performance, which is reported in the main text. Classification accuracy using other types of wavelets is shown here. All experiments performed here use separable ST-GST with $J_s = 5$, $J_t = 15$, $L = 3$ on the MSR Action3D dataset. An interesting observation is that there is a significant reduction in accuracy when we change the temporal wavelet from a diffusion-based one (Geometric) to a spectrum-based one (MonicCubic or Itersine). This may be caused by the design of the different wavelets.
Stability of ST-GST. We also show the classification accuracy under different levels of perturbations on spatio-temporal signals and spatial graph structures in Fig. 4. The experiments are con- |
13,206,339 | GO FOR A WALK AND ARRIVE AT THE ANSWER: REASONING OVER PATHS IN KNOWLEDGE BASES USING REINFORCEMENT LEARNING | Knowledge bases (KB), both automatically and manually constructed, are often incomplete -many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities. Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple. Additionally, these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them. We propose a new algorithm, MINERVA 1 , which addresses the much more difficult and practical task of answering questions where the relation is known, but only one entity. Since random walks are impractical in a setting with combinatorially many destinations from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths. Empirically, this approach obtains state-of-the-art results on several datasets, significantly outperforming prior methods. | [
1619841,
9206785,
11171811,
14170854,
9676646,
2768038,
2687019,
6018348,
2127100,
2711679
] | GO FOR A WALK AND ARRIVE AT THE ANSWER: REASONING OVER PATHS IN KNOWLEDGE BASES USING REINFORCEMENT LEARNING
Rajarshi Das [email protected]
University of Massachusetts
Amherst
Shehzaad Dhuliawala [email protected]
University of Massachusetts
Amherst
Manzil Zaheer [email protected]
Carnegie Mellon University
Luke Vilnis
University of Massachusetts
Amherst
Ishan Durugkar [email protected]
University of Texas at Austin
Akshay Krishnamurthy [email protected]
University of Massachusetts
Amherst
Alex Smola
Amazon Web Services
Andrew Mccallum [email protected]
University of Massachusetts
Amherst
GO FOR A WALK AND ARRIVE AT THE ANSWER: REASONING OVER PATHS IN KNOWLEDGE BASES USING REINFORCEMENT LEARNING
Knowledge bases (KB), both automatically and manually constructed, are often incomplete -many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities. Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple. Additionally, these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them. We propose a new algorithm, MINERVA 1 , which addresses the much more difficult and practical task of answering questions where the relation is known, but only one entity. Since random walks are impractical in a setting with combinatorially many destinations from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths. Empirically, this approach obtains state-of-the-art results on several datasets, significantly outperforming prior methods.
INTRODUCTION
Automated reasoning, the ability of computing systems to make new inferences from observed evidence, has been a long-standing goal of artificial intelligence. We are interested in automated reasoning on large knowledge bases (KB) with rich and diverse semantics (Suchanek et al., 2007; Bollacker et al., 2008; Carlson et al., 2010). KBs are highly incomplete (Min et al., 2013), and facts not directly stored in a KB can often be inferred from those that are, creating exciting opportunities and challenges for automated reasoning. For example, consider the small knowledge graph in Figure 1. We can infer the (unobserved) fact home stadium of Colin Kaepernick from the following reasoning path: Colin Kaepernick → PlaysInTeam → 49ers → TeamHomeStadium → Levi's Stadium. Our goal is to automatically learn such reasoning paths in KBs. We frame the learning problem as one of query answering, that is to say, answering questions of the form (Colin Kaepernick, PlaysInLeague, ?).

From its early days, the focus of automated reasoning approaches has been to build systems which can learn crisp symbolic logical rules (McCarthy, 1960; Nilsson, 1991). Symbolic representations have also been integrated with machine learning, especially in statistical relational learning (Muggleton et al., 1992; Getoor & Taskar, 2007; Kok & Domingos, 2007; Lao et al., 2011), but due to poor generalization performance, these approaches have largely been superseded by distributed vector representations. Learning embeddings of entities and relations using tensor factorization or neural methods has been a popular approach (Nickel et al., 2011; Bordes et al., 2013; Socher et al., 2013; inter alia), but these methods cannot capture chains of reasoning expressed by KB paths. Neural multi-hop models (Neelakantan et al., 2015; Guu et al., 2015; Toutanova et al., 2016) address the aforementioned problems to some extent by operating on KB paths in vector space. However, these models take as input a set of paths which are gathered by performing random walks independent of the query relation. Additionally, models such as Neelakantan et al. (2015) and Das et al. (2017) use the same set of initially collected paths to answer a diverse set of query types (e.g. MarriedTo, Nationality, WorksIn etc.).

This paper presents a method for efficiently searching the graph for answer-providing paths using reinforcement learning (RL) conditioned on the input question, eliminating any need for precomputed paths. Given a massive knowledge graph, we learn a policy, which, given the query (entity_1, relation, ?), starts from entity_1 and learns to walk to the answer node by choosing to take a labeled relation edge at each step, conditioning on the query relation and the entire path history. This formulates the query-answering task as a reinforcement learning problem where the goal is to take an optimal sequence of decisions (choices of relation edges) to maximize the expected reward (reaching the correct answer node). We call the RL agent MINERVA for "Meandering In Networks of Entities to Reach Verisimilar Answers."
Our RL-based formulation has many desirable properties. First, MINERVA has the built-in flexibility to take paths of variable length, which is important for answering harder questions that require complex chains of reasoning (Shen et al., 2017). Second, MINERVA needs no pretraining and trains on the knowledge graph from scratch with reinforcement learning; no other supervision or fine-tuning is required, representing a significant advance over prior applications of RL in NLP. Third, our path-based approach is computationally efficient, since by searching in a small neighborhood around the query entity it avoids ranking all entities in the KB as in prior work. Finally, the reasoning paths found by our agent automatically form an interpretable provenance for its predictions.
The main contributions of the paper are: (a) We present the agent MINERVA, which learns to do query answering by walking on a knowledge graph conditioned on an input query, stopping when it reaches the answer node. The agent is trained using reinforcement learning, specifically policy gradients (§ 2). (b) We evaluate MINERVA on several benchmark datasets and compare favorably to Neural Theorem Provers (NTP) (Rocktäschel & Riedel, 2017) and Neural LP (Yang et al., 2017), which do logical rule learning in KBs, and also to state-of-the-art embedding based methods such as DistMult (Yang et al., 2015) and ComplEx (Trouillon et al., 2016). (c) We also extend MINERVA to handle partially structured natural language queries and test it on the WikiMovies dataset (§ 4.3) (Miller et al., 2016).
We also compare to DeepPath (Xiong et al., 2017), which uses reinforcement learning to pick paths between entity pairs. The main difference is that the state of their RL agent includes the answer entity, since it is designed for the simpler task of predicting if a fact is true or not. As such, their method cannot be applied directly to our more challenging query answering task, where the second entity is unknown and must be inferred. Nevertheless, MINERVA outperforms DeepPath on their benchmark NELL-995 dataset when compared in their experimental setting (§ 4.1).
TASK AND MODEL
We formally define the task of query answering in a KB. Let E denote the set of entities and R the set of binary relations. A KB is then a collection of facts stored as triples (e_1, r, e_2) where e_1, e_2 ∈ E and r ∈ R. Query answering seeks to answer questions of the form (e_1, r, ?), e.g. (Toronto, locatedIn, ?). We would also like to clearly point out the difference between query answering and the task of fact prediction. Fact prediction involves predicting if a fact is true or not, e.g. (Toronto, locatedIn, Canada)?. This task is easier than query answering, since the latter requires finding the answer entity among many possible entities.
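To make the distinction concrete, here is a minimal toy illustration in Python (our own, with made-up entities; not code from the paper) of the two tasks over a KB stored as a set of triples:

```python
# Toy KB: a set of (e1, r, e2) triples.
kb = {
    ("ColinKaepernick", "PlaysInTeam", "49ers"),
    ("49ers", "TeamHomeStadium", "LevisStadium"),
    ("Toronto", "locatedIn", "Canada"),
}

def query_answering(e1, r):
    """Answer the query (e1, r, ?): find all candidate answer entities."""
    return {t[2] for t in kb if t[0] == e1 and t[1] == r}

def fact_prediction(e1, r, e2):
    """Decide whether a fully specified triple is true (is in the KB)."""
    return (e1, r, e2) in kb

print(query_answering("Toronto", "locatedIn"))            # {'Canada'}
print(fact_prediction("Toronto", "locatedIn", "Canada"))  # True
```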
Next we describe how we reduce the problem of query answering in a KB to a finite horizon sequential decision making problem and solve it using reinforcement learning. We begin by representing the environment as a deterministic Markov decision process on a knowledge graph G derived from the KB (§ 2.1). Our RL agent is given an input query of the form (e_{1q}, r_q, ?). Starting from the vertex corresponding to e_{1q} in the knowledge graph G, the agent learns to traverse the environment/graph to mine the answer and stop when it determines the answer (§ 2.2). The agent is trained using policy gradients, more specifically REINFORCE (Williams, 1992) with control variates (§ 2.3). Let us begin by describing the environment.
ENVIRONMENT -STATES, ACTIONS, TRANSITIONS AND REWARDS
Our environment is a finite horizon, deterministic and partially observed Markov decision process that lies on a knowledge graph derived from the KB. Recall that a KB is a collection of facts stored as triples (e_1, r, e_2) where e_1, e_2 ∈ E and r ∈ R. From the KB, a knowledge graph G can be constructed where the entities e_1, e_2 are represented as nodes and the relation r as a labeled edge between them. Formally, a knowledge graph is a directed labeled multigraph G = (V, E, R), where V and E denote the vertices and edges of the graph respectively. Note that the vertex set V coincides with the set of entities, and E ⊆ V × R × V. Also, following previous approaches (Bordes et al., 2013; Neelakantan et al., 2015; Xiong et al., 2017), we add the inverse relation of every edge, i.e. for an edge (e_1, r, e_2) ∈ E, we add the edge (e_2, r^{-1}, e_1) to the graph. (If the set of binary relations R does not contain the inverse relation r^{-1}, it is added to R as well.) On this graph we will now specify a deterministic partially observable Markov decision process, which is a 5-tuple (S, O, A, δ, R), each of whose components we elaborate below.
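The graph construction with inverse edges can be sketched as follows; the adjacency-list representation and the "_inv" suffix denoting r^{-1} are our own illustrative choices:

```python
from collections import defaultdict

def build_graph(triples):
    """Build an adjacency-list view of the knowledge graph, adding an
    inverse edge (e2, r^-1, e1) for every observed triple (e1, r, e2)."""
    adj = defaultdict(list)  # vertex -> list of (relation, destination)
    for e1, r, e2 in triples:
        adj[e1].append((r, e2))
        adj[e2].append((r + "_inv", e1))  # the inverse relation r^-1
    return adj

graph = build_graph([("Toronto", "locatedIn", "Canada")])
# graph["Canada"] == [("locatedIn_inv", "Toronto")]
```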
States. The state space S consists of the Cartesian product of all possible query-answer pairs with the set of entities. Intuitively, we want a state to encode the query (e_{1q}, r_q), the answer (e_{2q}), and a location of exploration e_t (the current entity node). Thus, overall, a state S ∈ S is represented by S = (e_t, e_{1q}, r_q, e_{2q}) and the state space consists of all valid combinations.
Observations. The complete state of the environment is not observable: only the current location of exploration and the query can be observed, but not the answer, i.e. only (e_t, e_{1q}, r_q) is observed. Formally, the observation function O : S → V × V × R is defined as O(s = (e_t, e_{1q}, r_q, e_{2q})) = (e_t, e_{1q}, r_q).
Actions. The set of possible actions A_S from a state S = (e_t, e_{1q}, r_q, e_{2q}) consists of all outgoing edges of the vertex e_t in G. Formally, A_S = {(e_t, r, v) ∈ E : S = (e_t, e_{1q}, r_q, e_{2q}), r ∈ R, v ∈ V} ∪ {(s, ∅, s)}. Basically, this means that an agent at each state has the option to select which outgoing edge it wishes to take, given knowledge of the label of the edge r and the destination vertex v.
During implementation, we unroll the computation graph up to a fixed number of time steps T. We augment each node with a special action called 'NO OP' which goes from a node to itself. Some questions are easier to answer and need fewer steps of reasoning than others. This design decision allows the agent to remain at a node for any number of time steps. This is especially helpful when the agent has managed to reach a correct answer at a time step t < T and can continue to stay at the 'answer node' for the rest of the time steps. Alternatively, we could have allowed the agent to take a special 'STOP' action, but we found the current setup to work sufficiently well. As mentioned before, we also add the inverse relation of a triple, i.e. for the triple (e_1, r, e_2), we add the triple (e_2, r^{-1}, e_1) to the graph. We found this important because it equips our agent to undo a potentially wrong decision, as it can retract back to the previous node in the next step.
Transition. The environment evolves deterministically by simply updating the state to the new vertex pointed to by the edge selected by the agent through its action. The query and answer remain the same. Formally, the transition function is δ : S × A → S defined by δ(S, A) = (v, e_{1q}, r_q, e_{2q}), where S = (e_t, e_{1q}, r_q, e_{2q}) and A = (e_t, r, v).
Rewards. We only have a terminal reward of +1 if the current location is the correct answer at the end, and 0 otherwise. To elaborate, if S_T = (e_T, e_{1q}, r_q, e_{2q}) is the final state, then we receive a reward of +1 if e_T = e_{2q} and 0 otherwise, i.e. R(S_T) = I{e_T = e_{2q}}.
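Putting the pieces together, a minimal sketch of this environment (our own simplification, not the authors' implementation) might look like:

```python
class KGEnvironment:
    """Minimal sketch of the deterministic, partially observed MDP above.
    A state is (e_t, e_1q, r_q, e_2q); only (e_t, e_1q, r_q) is observed."""

    NO_OP = "NO_OP"  # special self-loop relation augmenting every node

    def __init__(self, adj, horizon):
        self.adj = adj    # vertex -> list of (relation, destination)
        self.T = horizon  # fixed unroll length T

    def actions(self, e_t):
        # All outgoing labeled edges of e_t, plus the NO_OP self-loop.
        return list(self.adj.get(e_t, [])) + [(self.NO_OP, e_t)]

    def step(self, action):
        # Deterministic transition: move to the destination vertex; the
        # query (e_1q, r_q) and answer e_2q stay fixed.
        _, v = action
        return v

    def reward(self, e_T, e_2q):
        # Terminal reward R(S_T) = I{e_T == e_2q}; 0 at all other steps.
        return 1.0 if e_T == e_2q else 0.0
```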
POLICY NETWORK
To solve the finite horizon deterministic partially observable Markov decision process described above, we aim to design a randomized history-dependent policy π = (d_1, d_2, ..., d_{T-1}), where d_t : H_t → P(A_{S_t}) and the history H_t = (H_{t-1}, A_{t-1}, O_t) is just the sequence of observations and actions taken. We restrict ourselves to the function class expressed by long short-term memory networks (LSTM) (Hochreiter & Schmidhuber, 1997) for learning the randomized history-dependent policy.
An agent based on an LSTM encodes the history H_t as a continuous vector h_t ∈ R^{2d}. We also have embedding matrices r ∈ R^{|R|×d} and e ∈ R^{|E|×d} for the binary relations and entities respectively. The history embedding for H_t = (H_{t-1}, A_{t-1}, O_t) is updated according to the LSTM dynamics:
h_t = LSTM(h_{t-1}, [a_{t-1}; o_t])    (1)
where a_{t-1} ∈ R^d and o_t ∈ R^d denote the vector representations of the action/relation at time t−1 and the observation/entity at time t respectively, and [;] denotes vector concatenation. To elucidate, a_{t-1} = r_{A_{t-1}}, i.e. the embedding of the relation corresponding to the label of the edge the agent chose at time t−1, and o_t = e_{e_t} if O_t = (e_t, e_{1q}, r_q), i.e. the embedding of the entity corresponding to the vertex the agent is at at time t.
Based on the history embedding h_t, the policy network decides which action to choose from all available actions (A_{S_t}), conditioned on the query relation. Recall that each possible action represents an outgoing edge with the edge relation label l and destination vertex/entity d. So the embedding for each A ∈ A_{S_t} is [r_l; e_d], and stacking the embeddings for all the outgoing edges we obtain the matrix A_t. The network taking these as inputs is parameterized as a two-layer feedforward network with ReLU nonlinearity which takes in the current history representation h_t and the embedding of the query relation r_q, and outputs a probability distribution over the possible actions, from which a discrete action is sampled. In other words,
d_t = softmax(A_t (W_2 ReLU(W_1 [h_t; o_t; r_q])))
A_t ∼ Categorical(d_t)
Note that the nodes in G do not have a fixed ordering or number of outgoing edges. The size of the matrix A_t is |A_{S_t}| × 2d, so the decision probabilities d_t lie on a simplex of size |A_{S_t}|. Also, the procedure above is invariant to the order in which edges are presented, as desired, and falls within the purview of neural networks designed to be permutation invariant (Zaheer et al., 2017). Finally, to summarize, the parameters of the LSTM, the weights W_1, W_2, the corresponding biases (not shown above for brevity), and the embedding matrices form the parameters θ of the policy network.
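A hedged sketch of this policy step in PyTorch is given below. It simplifies several details (a single LSTM cell instead of the 3-layer LSTM used in the experiments, a hypothetical START relation id for a_0, and batch size 1), so it should be read as an illustration of Eq. (1) and the action distribution d_t, not as the authors' code:

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class PolicyNetwork(nn.Module):
    """Illustrative sketch of the policy described above."""

    def __init__(self, n_entities, n_relations, d=200, hidden=400):
        super().__init__()
        self.ent = nn.Embedding(n_entities, d)   # e in R^{|E| x d}
        self.rel = nn.Embedding(n_relations, d)  # r in R^{|R| x d}
        self.lstm = nn.LSTMCell(2 * d, hidden)   # input is [a_{t-1}; o_t]
        self.W1 = nn.Linear(hidden + 2 * d, hidden)
        self.W2 = nn.Linear(hidden, 2 * d)

    def step(self, state, a_prev, e_t, r_q, edges):
        """state: (h, c) LSTM state, or None at t = 1; a_prev, e_t, r_q:
        LongTensors of shape [1] (a_prev is a hypothetical START relation
        id at t = 1); edges: LongTensor [K, 2] holding (relation id,
        destination entity id) for the K outgoing edges of e_t."""
        o_t = self.ent(e_t)                                   # [1, d]
        x = torch.cat([self.rel(a_prev), o_t], dim=-1)        # [a_{t-1}; o_t]
        h, c = self.lstm(x, state)                            # Eq. (1)
        # A_t stacks [r_l; e_d] for every available action.
        A_t = torch.cat([self.rel(edges[:, 0]),
                         self.ent(edges[:, 1])], dim=-1)      # [K, 2d]
        q = self.W2(torch.relu(
            self.W1(torch.cat([h, o_t, self.rel(r_q)], dim=-1))))
        dist = Categorical(logits=A_t @ q.squeeze(0))         # d_t over K actions
        action = dist.sample()                                # A_t ~ Categorical(d_t)
        return action, dist, (h, c)
```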
TRAINING
For the policy network (π_θ) described above, we want to find parameters θ that maximize the expected reward:

J(θ) = E_{(e_1, r, e_2) ∼ D} E_{A_1, ..., A_{T-1} ∼ π_θ} [R(S_T) | S_1 = (e_1, e_1, r, e_2)]
where we assume there is a true underlying distribution (e_1, r, e_2) ∼ D. To solve this optimization problem, we employ REINFORCE (Williams, 1992) as follows:
• The first expectation is replaced with an empirical average over the training dataset.
• For the second expectation, we approximate by running multiple rollouts for each training example. The number of rollouts is fixed, and for all our experiments we set this number to 20.
• For variance reduction, a common strategy is to use an additive control variate baseline (Hammersley, 2013; Fishman, 2013; Evans & Swartz, 2000). We use a moving average of the cumulative discounted reward as the baseline, and tune the weight of this moving average as a hyperparameter. Note that in our experiments we found that a learnt baseline performed similarly, but we finally settled for the cumulative discounted reward as the baseline owing to its simplicity.
• To encourage the policy to sample more diverse paths rather than sticking with a few, we add an entropy regularization term to our cost function after multiplying it by a constant (β). We treat β as a hyperparameter to control the exploration-exploitation trade-off. (A minimal sketch of the resulting update appears after this list.)
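Here is that sketch (our own; the value of β and the baseline update rule shown are illustrative assumptions, not the paper's tuned values):

```python
import torch

def reinforce_loss(log_probs, entropies, rewards, baseline, beta=0.02):
    """One REINFORCE update over a batch of rollouts.
    log_probs: [B, T-1] log pi(A_t | H_t) of the sampled actions
    entropies: [B, T-1] per-step policy entropies
    rewards:   [B]      terminal reward R(S_T) of each rollout
    baseline:  scalar moving average of past rewards (control variate)"""
    advantage = rewards - baseline                   # variance reduction
    pg = -(log_probs.sum(dim=1) * advantage).mean()  # policy-gradient term
    return pg - beta * entropies.mean()              # entropy regularization

def update_baseline(baseline, rewards, lam=0.9):
    # Moving average of observed rewards, used as the additive baseline.
    return lam * baseline + (1 - lam) * rewards.mean().item()
```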
Experimental Details. We choose the relation and entity embedding dimension size as 200. The action embedding is formed by concatenating the entity and relation embeddings. We use a 3-layer LSTM with a hidden dimension size of 400. The hidden layer size of the MLP (weights W_1 and W_2) is set to 400. We use Adam (Kingma & Ba, 2014)
DATA
We test our model on the following query answering datasets: (a) COUNTRIES (Bouchard et al., 2015), (b) Alyawarra kinship (KINSHIP), (c) Unified Medical Language Systems (UMLS) (Kok & Domingos, 2007), (d) WN18RR (Dettmers et al., 2017), (e) NELL-995, (f) FB15k-237, and (g) WikiMovies (Miller et al., 2016). We also test on a synthetic grid world dataset released by Yang et al. (2017) to test the ability of the model to learn rules of long length.
The COUNTRIES dataset is carefully designed to explicitly test the logical rule learning and reasoning capabilities of link prediction models. The dataset has 3 tasks (S1-3 in Table 2), each requiring reasoning steps of increasing length and difficulty (see Rocktäschel & Riedel (2017) for more details about the tasks). We also test our model on existing large and challenging KG datasets ((d)-(f)). WN18RR is created from the original WORDNET18 dataset by removing test triples which can be answered trivially, making the dataset more realistic and challenging. Additionally, we test our model on a question answering dataset, WikiMovies (Miller et al., 2016), where the query is in natural language but the answers can be found in an accompanying KB. In the COUNTRIES dataset, queries are of the form LocatedIn(c, ?) and the answer is a region; for example, LocatedIn(Egypt, ?) with the answer Africa. Our experimental settings and scores are directly comparable to NTP and ComplEx (Trouillon et al., 2016). NTP-λ is an NTP model trained with an additional objective function of ComplEx. We also compare MINERVA against Neural LP on the UMLS and KINSHIP datasets.
The evaluation metric we report is HITS@k, which is the percentage of correct entities ranked in the top k. For the COUNTRIES dataset, we report the area under the precision-recall curve for comparison with the baselines. Following the design of the COUNTRIES dataset, for tasks S1 and S2 we set the maximum path length T = 2, and for S3 we set T = 3. We significantly outperform NeuralLP for longer path lengths. Table 2 shows that MINERVA outperforms all the baseline models except on task S2 of COUNTRIES, where the ensemble model NTP-λ outperforms it, albeit with a higher variance across runs. Our gains are much more prominent in task S3, which is the hardest among all the tasks. We similarly outperform NeuralLP on the UMLS and KINSHIP datasets.
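For concreteness, HITS@k can be computed as below (a trivial sketch; `ranks` is assumed to hold the 1-based rank of the correct entity for each test query):

```python
def hits_at_k(ranks, k):
    """Fraction of queries whose correct entity is ranked in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

# e.g. hits_at_k([1, 3, 12, 2], k=10) == 0.75
```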
NELL-995
We also compare MINERVA to DeepPath. For a fair comparison, we only rank the answer entities against the negative examples in the dataset used in their experiments 2 and report the mean average precision (MAP) scores for each query relation. DeepPath feeds the paths its agent gathers as input features to the path ranking algorithm (PRA) (Lao et al., 2011), which trains a per-relation classifier. Unlike them, we train one model which learns for all query relations. If our agent is not able to reach the correct entity or one of the negative entities, the corresponding query gets a score of negative infinity. As shown in Table 4, we outperform them or achieve comparable performance for all the query relations. For this experiment, we set the maximum path length T = 3.

Upon delving deeper into the structure of the knowledge graph derived from FB15k-237, we found a few interesting characteristics of the dataset. As a prelude, we would like to describe a long-existing concept in graph theory: the clustering coefficient (Holland & Leinhardt, 1971; Watts & Strogatz, 1998). The clustering coefficient (τ) of a graph measures whether groups of nodes form 'tightly knit' communities, i.e. whether groups of nodes tend to cluster together. A high τ implies the presence of a higher number of densely connected groups of nodes. For instance, if we consider three nodes A, B and C, a high τ means that with high probability, whenever three nodes are connected as A - B - C, nodes A - C are also connected, forming a triangle. Intuitively, MINERVA can use such closed shapes to learn paths such as (A - B - C) to predict the answer of the query, i.e. the third node (C). The clustering coefficient also extends from triangles to cliques of arbitrary size (Watts & Strogatz, 1998). Figure 3 plots τ for various datasets. We find that FB15k-237 has the lowest clustering coefficient (0.19) among all datasets. This means that the dataset has sparse neighborhoods and hence MINERVA finds it difficult to learn logical rules. We also check the frequency of occurrence of various unique paths (types). We define a path type as the sequence of relations (ignoring the entities) in a path. Intuitively, a predictive path which generalizes across queries will occur many times in the graph. Figure 4 shows the plot. As we can see, the characteristics of FB15k-237 are quite different from the other datasets: path types do not repeat that often, making it hard for MINERVA to learn paths which generalize. We also provide further analysis of the types of various query relations in FB15k-237 in the appendix.
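The two graph statistics used in this analysis can be reproduced with a short script; the sketch below (function names our own) uses networkx for the clustering coefficient and exhaustively enumerates paths for the type counts, which is only feasible on small graphs:

```python
import networkx as nx
from collections import Counter

def clustering_coefficient(triples):
    """Average clustering coefficient (tau) of the undirected graph
    induced by the KB triples."""
    G = nx.Graph((e1, e2) for e1, _, e2 in triples)
    return nx.average_clustering(G)

def path_type_counts(adj, length=3):
    """Count path types (relation sequences, entities ignored) by
    exhaustive enumeration over the adjacency lists."""
    counts = Counter()

    def walk(v, rels):
        if len(rels) == length:
            counts[tuple(rels)] += 1
            return
        for r, u in adj.get(v, []):
            walk(u, rels + [r])

    for v in adj:
        walk(v, [])
    return counts
```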
GRID WORLD PATH FINDING
As we empirically find, and as also noted by previous work (Rocktäschel & Riedel, 2017; Das et al., 2017), the reasoning chains required to answer queries in a KB are often not too long (restricted to 3 or 4 hops). To test if our model can learn long reasoning paths, we test it on a synthetic 16-by-16 grid world dataset created by Yang et al. (2017), where the task is to navigate to a particular cell (answer entity) starting from a random cell (start entity) by following a set of directions (query relation). The KB consists of atomic triples of the form ((2,1), North, (1,1)): entity (1,1) is north of entity (2,1). The queries consist of a sequence of directions (e.g. North, SouthWest, East) and are classified into classes based on the path lengths. Figure 2 shows the accuracy for varying path lengths. Compared to Neural LP, MINERVA is much more robust for queries which require longer path lengths, showing very little degradation in performance for even the longest path length in the dataset.

PARTIALLY STRUCTURED QUERIES

Queries in KB datasets are structured in the form of triples. However, this is unsatisfactory since for most real applications, queries appear in natural language. As a first step in this direction, we extend MINERVA to take in "partially structured" queries. We use the WikiMovies dataset (Miller et al., 2016), which contains questions in natural language, albeit generated by templates created by human annotators. An example question from the dataset is "Which is a film written by Herb Freed?". WikiMovies also has an accompanying KB which can be used to answer all the questions.
We link the entity occurring in the question to the KB via simple string matching. To form the vector representation of the query relation, we design a simple question encoder which computes the average of the embeddings of the question words. The word embeddings are learned from scratch, and we do not use any pretrained embeddings. We compare our results with those reported by Miller et al. (2016) (Table 7). We got the best result using T = 1, suggesting that WikiMovies is not the best testbed for multi-hop reasoning, but this experiment is a promising first step towards the realistic setup of having textual queries and knowledge bases.
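A sketch of such a bag-of-words question encoder (our own minimal PyTorch version) is:

```python
import torch
import torch.nn as nn

class BagOfWordsQuestionEncoder(nn.Module):
    """Averages learned word embeddings to produce the query-relation
    vector r_q for a partially structured question."""

    def __init__(self, vocab_size, d):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)  # learned from scratch

    def forward(self, token_ids):  # token_ids: LongTensor [num_words]
        return self.emb(token_ids).mean(dim=0)  # [d]
```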
ANALYSIS
Effectiveness of Remembering Path History. MINERVA encodes the history of decisions it has taken in the past using an LSTM. To test the importance of remembering the sequence of decisions, we did an ablation study in which the agent chose the next action based only on local information, i.e. the current entity and query, and did not have access to the history h_t. For the KINSHIP dataset, we observe a 27-point decrease in HITS@1 (0.184 vs. 0.46) and a 13-point decrease in HITS@10 (0.63 vs. 0.76). For grid-world, it is also not surprising that we see a big drop in performance: the final accuracy is 0.23 for path lengths 2-4 and 0.04 for lengths 8-10. For NELL, the performance dropped from 0.576 to 0.564, and for FB15k-237 the HITS@10 performance dropped from 0.456 to 0.408.
NO-OP and Inverse Relations. At each step, MINERVA can choose to take a NO-OP edge and remain at the same node. This gives the agent the flexibility of taking paths of variable lengths. Some questions are easier to answer than others and require fewer steps of reasoning, and if the agent reaches the answer early, it can choose to remain there. Example (ii) in Table 8 shows such a case. Similarly, inverse relations give the agent the ability to recover from a potentially wrong decision it has taken before. Example (iii) shows such a case, where the agent took an incorrect decision at the first step but was able to revert the decision because of the presence of inverted edges.
Query-based Decision Making. At each step, before making a decision, our agent conditions on the query relation. Figure 5 shows examples where, based on the query relation, the probabilities are peaked on different actions. For example, when the query relation is WorksFor, MINERVA assigns a much higher probability to taking the edge CoachesTeam than AthletePlaysInLeague. We also see similar behavior on the WikiMovies dataset, where the query consists of words instead of fixed schema relations.
Inference Time. MINERVA is efficient at inference time since it essentially has to search for answer entities only in its local neighborhood, whereas previous methods rank all the entities in the dataset. For instance, on the test set of WN18RR, the wall clock running time of MINERVA is 63 seconds, whereas that of a GPU implementation of DistMult is 211 seconds (with the maximum batch size).
RELATED WORK
Learning vector representations of entities and relations using tensor factorization (Nickel et al., 2011; 2012; Bordes et al., 2013; Riedel et al., 2013; Nickel et al., 2014; Yang et al., 2015) or neural methods (Socher et al., 2013; Toutanova et al., 2015; Verga et al., 2016) has been a popular approach to reasoning with a knowledge base. However, these methods cannot capture more complex reasoning patterns such as those found by following inference paths in KBs. Multi-hop link prediction approaches (Lao et al., 2011; Neelakantan et al., 2015; Guu et al., 2015; Toutanova et al., 2016; Das et al., 2017) address the problems above, but the reasoning paths that they operate on are gathered by performing random walks independent of the type of query relation. Lao et al. (2011) further filter paths from the set of sampled paths based on the restriction that the path must end at one of the target entities in the training set and be within a maximum length. These constraints make them query dependent, but they are heuristic in nature. Our approach eliminates any need to pre-compute paths and learns to efficiently search the graph conditioned on the input query relation.
(S1) LocatedIn(X, Y) ← LocatedIn(X, Z) & LocatedIn(Z, Y)
(S2) LocatedIn(X, Y) ← NeighborOf(X, Z) & LocatedIn(Z, Y)
(S3) LocatedIn(X, Y) ← NeighborOf(X, Z) & NeighborOf(Z, W) & LocatedIn(W, Y)
Inductive Logic Programming (ILP) (Muggleton et al., 1992) aims to learn general purpose predicate rules from examples and background knowledge. Early work in ILP such as FOIL (Quinlan, 1990) and PROGOL (Muggleton, 1995) is either rule-based or requires negative examples, which are often hard to find in KBs (by design, KBs store true facts). Statistical relational learning methods (Getoor & Taskar, 2007; Kok & Domingos, 2007; Schoenmackers et al., 2010), along with probabilistic logic (Richardson & Domingos, 2006; Broecheler et al., 2010; Wang et al., 2013), combine machine learning and logic, but these approaches operate on symbols rather than vectors and hence do not enjoy the generalization properties of embedding based approaches.
Neural Theorem Provers (NTP) (Rocktäschel & Riedel, 2017) and Neural LP (Yang et al., 2017) are two recent methods for learning logical rules that can be trained end-to-end with gradient based learning. NTPs are constructed from Prolog's backward chaining inference method. They operate on vectors rather than symbols, thereby providing a success score for each proof path. However, since a score can be computed between any two vectors, the computation graph becomes quite large because of such soft-matching during the substitution step of backward chaining. For tractability, NTP resorts to heuristics such as only keeping the top-K scoring proof paths, but it then loses any guarantee of computing exact gradients. Also, the efficacy of NTPs has yet to be shown on large KBs. Neural LP introduces a differentiable rule learning system using operators defined in TensorLog (Cohen, 2016), has an LSTM based controller and a differentiable memory component (Graves et al., 2014; Sukhbaatar et al., 2015), and calculates rule scores via attention. Even though differentiable memory allows the network to be trained end to end, it necessitates accessing the entire memory, which can be computationally expensive. RL approaches which can make hard selections of memory (Zaremba & Sutskever, 2015) are computationally attractive. MINERVA uses a similar hard selection of relation edges to walk on the graph. More importantly, MINERVA outperforms both these methods on their respective benchmark datasets.
DeepPath (Xiong et al., 2017) uses RL-based approaches to find paths in KBs. However, the state of their MDP requires the target entity to be known in advance, and hence their path finding strategy is dependent on knowing the answer entity. MINERVA does not need any knowledge of the target entity and instead learns to find the answer entity among all entities. DeepPath additionally feeds its gathered paths to the Path Ranking Algorithm (Lao et al., 2011), whereas MINERVA is a complete system trained to do query answering. DeepPath also uses fixed pretrained embeddings for its entities and relations. Lastly, on comparing MINERVA with DeepPath in their experimental setting on the NELL dataset, we match their performance or outperform them. MINERVA is also similar to methods for learning to search for structured prediction (Collins & Roark, 2004; Daumé III & Marcu, 2005; Daumé III et al., 2009; Ross et al., 2011; Chang et al., 2015). These methods are based on imitating a reference policy (oracle) which makes near-optimal decisions at every step. In our problem setting, it is unclear what a good reference policy would be. For example, a shortest-path oracle between two entities would be bad, since the answer-providing path should depend on the query relation.
CONCLUSION
We explored a new way of automated reasoning on large knowledge bases in which we use the knowledge graph representation of the knowledge base and train an agent to walk to the answer node conditioned on the input query. We achieve state-of-the-art results on multiple benchmark knowledge base completion tasks and also show that our model is robust and can learn long chains of reasoning. Moreover, it needs no pretraining or initial supervision. Future research directions include applying more sophisticated RL techniques and working directly on textual queries and documents.

We further did some query analysis on the FB15k-237 dataset. Following Bordes et al. (2013), we categorized the query relations into (M)any-to-1, 1-to-M and 1-to-1 relations. An example of an M-to-1 relation would be '/people/profession' (What is the profession of 'X'?). An example of a 1-to-M relation would be '/people/cause of death/people'; an example query of that relation would be (Traffic collision, /people/cause of death/people, ?): 'Who were killed in traffic collision accidents?'. Another example would be /music/instrument/instrumentalists (Who plays the music instrument 'X'?). From a query answering point of view, the answers to such a question form a list of entities. However, during evaluation, the model is evaluated based on whether it is able to predict the one target entity which is in the query triple. Also, since MINERVA outputs the end points of the paths as target entities, it is sometimes possible that the particular target entity of the triple does not have a path from the source entity (although there are paths to other 'correct' answer entities). Table 9 shows a few other examples of relations belonging to different classes.
Following Bordes et al. (2013), we classify a relation as 1-to-M if the ratio of the cardinality of tail to head entities is greater than 1.5 and as M-to-1 if it is less than 0.67. In the validation set of FB15k-237, 54% of the queries are 1-to-M, whereas only 26% are M-to-1. Contrast this with NELL-995, where 27% are 1-to-M and 36% are M-to-1, or with UMLS and KINSHIP, where only 18% and 32% of the relations, respectively, are 1-to-M. Table 10 shows relations from the FB15k-237 dataset which have a high tail-to-head ratio. The average ratio for 1-to-M relations in FB15k-237 is 13.39 (substantially higher than 1.5). As explained before, the current evaluation scheme is unfair when it comes to 1-to-M relations, and the high percentage of 1-to-M relations in FB15k-237 also explains the suboptimal performance of MINERVA.
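This classification is easy to reproduce; the sketch below (our own, with the 1.5 and 0.67 thresholds from the text) labels each relation by its tail-to-head cardinality ratio:

```python
from collections import defaultdict

def classify_relations(triples, hi=1.5, lo=0.67):
    """Label each relation 1-to-M, M-to-1, or 1-to-1 by the ratio of the
    number of distinct tail entities to distinct head entities."""
    heads, tails = defaultdict(set), defaultdict(set)
    for e1, r, e2 in triples:
        heads[r].add(e1)
        tails[r].add(e2)
    labels = {}
    for r in heads:
        ratio = len(tails[r]) / len(heads[r])
        labels[r] = ("1-to-M" if ratio > hi
                     else "M-to-1" if ratio < lo
                     else "1-to-1")
    return labels
```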
Figure 2: Grid world experiment.
Figure 3: Network clustering coefficient (τ) for various datasets.

Figure 4: Count of the number of unique path types of length 3 which occur more than x times in various datasets. In Kinship and NELL-995, there are more than 10^3 path types which occur more than 10^3 times; for FB15k-237, we see a sharp decrease as x becomes higher.
Figure 5: Based on the query relation, our agent assigns different probabilities to different actions. The dashed edges in the top row denote the query relation. Examples in the bottom row are from the WikiMovies dataset ("Who directed the movie 'Pennies from Heaven'?", "Who starred in the movie 'Pennies from Heaven'?") and hence the questions are partially structured.
(ii) Can learn shorter path: Richard F
with the default parameters in REINFORCE for the update. The best hyperparameter values can be found in the appendix.

Table 1: Statistics of various datasets used in experiments.

Dataset     #entities  #relations  #facts    #queries
COUNTRIES   272        2           1,158     24
UMLS        135        49          5,216     661
KINSHIP     104        26          10,686    1,074
WN18RR      40,945     15          86,835    3,134
NELL-995    75,492     200         154,213   3,992
FB15K-237   14,505     237         272,115   20,466
WikiMovies  43,230     9           196,453   9,952
Task  Metric   ComplEx     NTP          NTP-λ        MINERVA
S1    AUC-PR   99.37±0.4   90.83±15.4   100.0±0.0    100.0±0.0
S2    AUC-PR   87.95±2.8   87.4±11.7    93.04±0.4    91±0.01
S3    AUC-PR   48.44±6.3   56.68±17.6   77.26±17.0   93±0.01

Table 2: Performance on the COUNTRIES dataset. MINERVA significantly outperforms baselines in the challenging S3 task.
Table 1 reports the various dataset statistics.
Table 3: HITS@10 on UMLS and KINSHIP.
Table 4: MAP scores for different query relations on the NELL-995 dataset. Note that in this comparison, MINERVA refers to only a single learnt model for all query relations, which is competitive with individual DeepPath models trained separately for each query relation.
WN18RR. Next we test MINERVA on another large KB dataset, WN18RR. On this dataset, we compare with three recently proposed latent factorization models: (a) ConvE (Dettmers et al., 2017), (b) DistMult (Yang et al., 2015), and (c) ComplEx (Trouillon et al., 2016). We report HITS at various k and we compare favorably with the state-of-the-art results of ComplEx in all settings (Table 5). For this experiment, we also set the maximum path length T = 3.

Model      HITS@10
ConvE      0.458
DistMult   0.568
ComplEx    0.419
MINERVA    0.456

Table 5: Performance on WN18RR.
Table 6: Performance on FB15K-237.
Table 7: Performance on WikiMovies.
Table 8: A few examples of paths found by MINERVA on the COUNTRIES and NELL datasets. MINERVA can learn general rules as required by the COUNTRIES dataset (example (i)), can learn shorter paths if necessary (example (ii)), and has the ability to correct a previously taken decision (example (iii)).
Table 9: A few example facts belonging to M-to-1 and 1-to-M relations in FB15k-237.

Relation                                                                                          tail/head
/people/marriage union type/unions of this type./people/marriage/location of ceremony             129.75
/organization/role/leaders./organization/leadership/organization                                  65.15
/location/country/second level divisions                                                          49.18
/user/ktrueman/default domain/international organization/member states                            36.5
/base/marchmadness/ncaa basketball tournament/seeds./base/marchmadness/ncaa tournament seed/team  33.6

Table 10: A few example 1-to-M relations from FB15k-237 with a high cardinality ratio of tail to head entities.

8 APPENDIX

8.1 ANALYSIS OF QUERY RELATIONS OF FB15K-237
8.2 HYPERPARAMETERS

In our experiments, we tune our model over two hyperparameters, viz., β, the entropy regularization constant, and λ, the moving average constant for the REINFORCE baseline. Table 11 lists the best hyperparameters for all the datasets.

Table 11: Best hyperparameters.
2 We are grateful to Xiong et al. (2017) for releasing the negative examples used in their experiments.
3 We are aware of the high variance of DistMult scores reported on FB15k-237 by several papers, but to ensure fairness we report the high scores our in-house implementation achieved.
ACKNOWLEDGEMENTS

This work was supported in part by the Center for Data Science and the Center for Intelligent Information Retrieval, in part by DARPA under agreement number FA8750-13-2-0020, in part by Defense Advanced Research Projects Agency (DARPA) contract number HR0011-15-2-0036, in part by the National Science Foundation (NSF) grant numbers DMR-1534431 and IIS-1514053, and in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: A collaboratively created graph database for structuring human knowledge. In ICDM, 2008.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In NIPS, 2013.
Guillaume Bouchard, Sameer Singh, and Theo Trouillon. On approximate reasoning capabilities of low-rank vector spaces. In AAAI Spring Symposium, 2015.
Matthias Broecheler, Lilyana Mihalkova, and Lise Getoor. Probabilistic similarity logic. In UAI, 2010.
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka, Jr., and Tom M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, 2010.
Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daume, and John Langford. Learning to search better than your teacher. In ICML, 2015.
William Cohen. Tensorlog: A differentiable deductive database. arXiv:1605.06523, 2016.
Michael Collins and Brian Roark. Incremental parsing with the perceptron algorithm. In ACL, 2004.
Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. Chains of reasoning over entities, relations, and text using recurrent neural networks. In EACL, 2017.
Hal Daumé III and Daniel Marcu. Learning as search optimization: Approximate large margin methods for structured prediction. In ICML, 2005.
Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 2009.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. arXiv:1707.01476, 2017.
Michael Evans and Timothy Swartz. Approximating integrals via Monte Carlo and deterministic methods. OUP Oxford, 2000.
George Fishman. Monte Carlo: concepts, algorithms, and applications. Springer Science & Business Media, 2013.
Lise Getoor and Ben Taskar. Introduction to statistical relational learning. MIT Press, 2007.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv:1410.5401, 2014.
Kelvin Guu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. In EMNLP, 2015.
John Hammersley. Monte Carlo methods. Springer Science & Business Media, 2013.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Paul W Holland and Samuel Leinhardt. Transitivity in structural models of small groups. Comparative Group Studies, 1971.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
Stanley Kok and Pedro Domingos. Statistical predicate invention. In ICML, 2007.
Ni Lao, Tom Mitchell, and William Cohen. Random walk inference and learning in a large scale knowledge base. In EMNLP, 2011.
John McCarthy. Programs with common sense. RLE and MIT Computation Center, 1960.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. In EMNLP, 2016.
Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. Distant supervision for relation extraction with an incomplete knowledge base. In HLT-NAACL, 2013.
Stephen Muggleton. Inverse entailment and progol. New Generation Computing, 1995.
Stephen Muggleton, Ramon Otero, and Alireza Tamaddoni-Nezhad. Inductive logic programming. Springer, 1992.
Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. Compositional vector space models for knowledge base completion. In ACL, 2015.
Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In ICML, 2011.
Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. Factorizing yago: scalable machine learning for linked data. In WWW, 2012.
Maximilian Nickel, Xueyan Jiang, and Volker Tresp. Reducing the rank in relational factorization models by including observable patterns. In NIPS, 2014.
Nils J Nilsson. Logic and artificial intelligence. Artificial Intelligence, 1991.
J Ross Quinlan. Learning logical definitions from relations. Machine Learning, 1990.
Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 2006.
Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. Relation extraction with matrix factorization and universal schemas. In NAACL, 2013.
Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. In NIPS, 2017.
Stéphane Ross, Geoffrey J Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
Stefan Schoenmackers, Oren Etzioni, Daniel Weld, and Jesse Davis. Learning first-order horn clauses from web text. In EMNLP, 2010.
Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. Reasonet: Learning to stop reading in machine comprehension. In KDD, 2017.
Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. In NIPS, 2013.
Fabian Suchanek, Gjergji Kasneci, and Gerhard Weikum. Yago: A core of semantic knowledge. In WWW, 2007.
Sainbayar Sukhbaatar, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, 2015.
Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing text for joint embedding of text and knowledge bases. In EMNLP, 2015.
Kristina Toutanova, Victoria Lin, Wen-tau Yih, Hoifung Poon, and Chris Quirk. Compositional learning of embeddings for relation paths in knowledge base and text. In ACL, 2016.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In ICML, 2016.
Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. Multilingual relation extraction using compositional universal schema. In NAACL, 2016.
William Yang Wang, Kathryn Mazaitis, and William W Cohen. Programming with personalized pagerank: a locally groundable first-order probabilistic logic. In CIKM, 2013.
Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 1998.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.
Wenhan Xiong, Thien Hoang, and William Yang Wang. Deeppath: A reinforcement learning method for knowledge graph reasoning. In EMNLP, 2017.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In ICLR, 2015.
Fan Yang, Zhilin Yang, and William W Cohen. Differentiable learning of logical rules for knowledge base reasoning. In NIPS, 2017.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, and Alexander Smola. Deep sets. In NIPS, 2017.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv:1505.00521, 2015. |
247,058,691 | ENABLING ARBITRARY TRANSLATION OBJECTIVES WITH ADAPTIVE TREE SEARCH | We introduce an adaptive tree search algorithm, that can find high-scoring outputs under translation models that make no assumptions about the form or structure of the search objective. This algorithm -a deterministic variant of Monte Carlo tree search -enables the exploration of new kinds of models that are unencumbered by constraints imposed to make decoding tractable, such as autoregressivity or conditional independence assumptions. When applied to autoregressive models, our algorithm has different biases than beam search has, which enables a new analysis of the role of decoding bias in autoregressive models. Empirically, we show that our adaptive tree search algorithm finds outputs with substantially better model scores compared to beam search in autoregressive models, and compared to reranking techniques in models whose scores do not decompose additively with respect to the words in the output. We also characterise the correlation of several translation model objectives with respect to BLEU. We find that while some standard models are poorly calibrated and benefit from the beam search bias, other often more robust models (autoregressive models tuned to maximize expected automatic metric scores, the noisy channel model and a newly proposed objective) benefit from increasing amounts of search using our proposed decoder, whereas the beam search bias limits the improvements obtained from such objectives. Thus, we argue that as models improve, the improvements may be masked by over-reliance on beam search or reranking based methods. | [
3438497,
3913537,
233210339,
229365698,
15816492,
8884845,
15535376,
15349458,
201058550,
8822680,
12639289,
13751870
] | ENABLING ARBITRARY TRANSLATION OBJECTIVES WITH ADAPTIVE TREE SEARCH
Wang Ling [email protected]
Talka, Inc
Ltd
Wojciech Stokowiec [email protected]
Talka, Inc
Ltd
Domenic Donato [email protected]
Talka, Inc
Ltd
Laurent Sartran [email protected]
Talka, Inc
Ltd
Yu Lei [email protected]
Talka, Inc
Ltd
Chris Dyer [email protected]
Talka, Inc
Ltd
Austin Matthews
Talka, Inc
Ltd
ENABLING ARBITRARY TRANSLATION OBJECTIVES WITH ADAPTIVE TREE SEARCH
We introduce an adaptive tree search algorithm, that can find high-scoring outputs under translation models that make no assumptions about the form or structure of the search objective. This algorithm -a deterministic variant of Monte Carlo tree search -enables the exploration of new kinds of models that are unencumbered by constraints imposed to make decoding tractable, such as autoregressivity or conditional independence assumptions. When applied to autoregressive models, our algorithm has different biases than beam search has, which enables a new analysis of the role of decoding bias in autoregressive models. Empirically, we show that our adaptive tree search algorithm finds outputs with substantially better model scores compared to beam search in autoregressive models, and compared to reranking techniques in models whose scores do not decompose additively with respect to the words in the output. We also characterise the correlation of several translation model objectives with respect to BLEU. We find that while some standard models are poorly calibrated and benefit from the beam search bias, other often more robust models (autoregressive models tuned to maximize expected automatic metric scores, the noisy channel model and a newly proposed objective) benefit from increasing amounts of search using our proposed decoder, whereas the beam search bias limits the improvements obtained from such objectives. Thus, we argue that as models improve, the improvements may be masked by over-reliance on beam search or reranking based methods.
INTRODUCTION
Conditional text generation tasks, such as machine translation, consist of two parts: a model that assigns scores to candidate outputs, and a search component that interacts with the model in order to find an output that maximizes the score assigned by the model. This search problem is a hard combinatorial optimization problem, and as a result, constraints are frequently imposed on the structure of the model to make solving or approximating the search problem easier. In neural machine translation, an autoregressive factorization of the output probability distribution is widely used (Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014; Vaswani et al., 2017), and a variety of conditional independence assumptions are made in other model classes, from statistical translation models (Brown et al., 1993; Koehn et al., 2003) to non-autoregressive neural models (Lee et al., 2018). Although these assumptions enable fast and accurate approximations to the search problem with simple and efficient algorithms (e.g., beam search), which can be crucial for efficient production applications, they limit the form of the models, thereby restricting the kinds of architectures that can be used to address observed model failures.
Despite the algorithmic benefits that beam search provides, we argue that it is a poor foundation for long-term scientific progress toward an accurate and reliable translation model whose scores adequately predict translation quality. First, it can only be applied to autoregressive models, which have well-known length calibration problems due to a tendency to drop or "hallucinate" content, and this tendency has been remarkably resistant to remedy across a variety of different autoregressive architectures (Koehn & Knowles, 2017; Lin et al., 2020). The existing heuristic solutions (e.g., non-local corrections to the search objective at decoding time, or global statistics about the population of per-word probabilities (Meister et al., 2020)) point to non-autoregressive components in the search objective as necessary parts of the solution. 1 We therefore would like to work directly with model classes that contain these solutions, rather than depending on limited heuristics that are imposed after the fact. Second, beam search is strongly biased towards translations that yield high initial scores, due to the heuristic used to score partial translations (Stahlberg & Byrne, 2019; Meister et al., 2020). While this feature provides short-term translation quality gains, as it addresses many of the calibration issues in autoregressive models, it is undesirable in the longer term, as it masks many modeling issues that need addressing. More importantly, as scientific progress drives better correlations between model scores and translation quality, search errors caused by this bias will inevitably have a negative impact on both model scores and translation quality.
To address these shortcomings, we introduce the beam adaptive tree search (BATS) algorithm, which is based on Monte Carlo tree search (Coulom, 2006; Browne et al., 2012). Like MCTS, BATS estimates the values of internal nodes (i.e., partial translations) in the search tree with an expected-outcome model based on playouts from an auxiliary model, and these estimates are refined as the search progresses. Because BATS is guided from the start by the true objective (whether autoregressive or not), and because the initial score estimates are refined, the BATS decoder exhibits fewer biases than beam search or reranking algorithms as the search budget increases.
Our experimental section aims to characterise the impact of decoding mechanisms on both non-autoregressive and autoregressive models. For autoregressive models, we show that the calibration issues can be addressed by adding a non-autoregressive component that augments the autoregressive sequence score with the lowest-scoring produced token in the autoregressive decomposition of the sequence, which we name max rank. We further show that existing non-autoregressive approaches, such as the "noisy channel" model (Yu et al., 2017; Ng et al., 2019; Yu et al., 2020a;b; Liu et al., 2021), which factorize the translation probability according to Bayes' rule and have been argued to be better calibrated, are (1) poorly optimized by standard reranking-based approaches, and (2) ultimately exhibit calibration failures similar to those of neural autoregressive models. Incorporating the max rank component into existing objectives benefits both beam search and BATS, but the latter yields both the best translation and model scores. Crucially, we show that our decoder can search substantially longer and achieve higher model scores before BLEU starts to deteriorate, which suggests a negative impact of the search bias in beam search. Finally, we show that once autoregressive models become robust enough to address the calibration issues, this bias has an equally negative impact on translation quality. By fine-tuning an autoregressive model to better correlate with BLEU using minimum risk training (Shen et al., 2016), we show that BATS can achieve both higher translation quality and a better model score compared to beam search.
BACKGROUND
BEAM SEARCH
Translation tasks operate on the space of possible translations Y = Σ* · {EOS}, where Σ is a finite vocabulary and EOS is a symbol that marks the end of a sequence. Elements of Y correspond to full sentences y = y_1, ..., y_n with n − 1 tokens followed by y_n = EOS (end of sentence). Each translation y is conditioned on a source sentence x and is assigned a score s(x, y), which measures how well x translates to y. Decoding algorithms aim to find the best hypothesis y* ∈ Y under the search objective s(x, y). While autoregressive models may assign scores to prefixes of translations, we only assume that s is well defined for full translations y ∈ Y. In the particular case of machine translation under standard autoregressive models, the search objective is defined as the conditional log probability:
s_AR(x, y) = log p(y | x) = Σ_{i=1}^{|y|} log p(y_i | y_1, ..., y_{i−1}, x).
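For concreteness, this sum can be computed with a short sketch like the one below, assuming a hypothetical `model(x, prefix)` that returns next-token logits; this is an illustration of the definition, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def autoregressive_score(model, x, y):
    """Sum of token-level log-probabilities, s_AR(x, y).

    `model(x, prefix)` is an assumed interface returning a logits vector
    over the vocabulary for the next token, given source x and prefix.
    """
    score = 0.0
    for i in range(len(y)):
        logits = model(x, y[:i])                   # next-token logits
        log_probs = F.log_softmax(logits, dim=-1)  # log p(. | y_<i, x)
        score += log_probs[y[i]].item()            # add log p(y_i | y_<i, x)
    return score
```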
As autoregressive models emit probabilities at the token level, the search problem can be cast as a shortest-path problem on a weighted graph, where each node p is identified by a sequence y^(p). Nodes define an additional space Y* that includes Y as well as partial translations in Σ* that do not terminate with an EOS token. Each node has |Σ| + 1 edges, each appending a different word y_i ∈ Σ ∪ {EOS} to y^(p) and generating a new node p·y_i. Edge weights are given by the emission log probability log p(y_i | y^(p), x) according to the autoregressive model. Search starts from the root node ε with an empty sequence, and nodes that generate the EOS token have a single edge with weight 0 that leads to a single shared terminal node. One solution to this problem is A* search (Hart et al., 1968), a best-first algorithm that iteratively expands the node p with the highest value v̂(p) = c(p) + f(p), where c(p) is the sum of the weights of all edges from ε to p, and f(p) is a heuristic function that estimates the sum of the edge weights on the highest-scoring path from p to the terminal node.
Although theoretically appealing, good A* heuristics are difficult to obtain, and search can be extremely expensive with poor heuristics. To remedy this, beam search introduces approximations. Rather than a best-first traversal, beam search proceeds iteratively along a frontier at a certain depth. Exploration is limited to b nodes at each depth, where b is a hyperparameter that trades search accuracy (in terms of model score) for speed. Starting from depth 0, composed of just the root node ε, until a maximum depth limit Y or another stopping criterion is reached (Klein et al., 2017), beam search progressively scores all children of the nodes at the current depth and prunes all generated nodes except the top-scoring b nodes. Beam search scores each node using only the current cost, v̂(p) = c(p), in effect setting f(p) = 0 in the case of neural machine translation. Discarding the future cost biases search towards nodes with high scores, without regard to whether they lead to a good path to the terminal state. Thus, a large space of potentially good translations with low initial scores is never explored. Examples include re-orderings that place high-entropy words at the start of the sentence, or shorter sentence constructions (e.g., "Help me" vs. "Lend me a hand").
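The pruning behaviour described above can be made concrete with a minimal sketch. Here `step_log_probs(prefix)` is an assumed callback returning next-token log-probabilities; real systems batch this computation on accelerators.

```python
import heapq

def beam_search(step_log_probs, eos, b, max_len):
    """Minimal beam search sketch over an autoregressive model.

    Partial hypotheses are scored by the current cost c(p) only,
    i.e. the future cost f(p) is set to 0, as described above.
    `step_log_probs(prefix)` is assumed to return a dict
    {token: log p(token | prefix, x)}.
    """
    beams = [(0.0, ())]                 # (cumulative log-prob, prefix)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, prefix in beams:
            for tok, lp in step_log_probs(prefix).items():
                hyp = (score + lp, prefix + (tok,))
                (finished if tok == eos else candidates).append(hyp)
        if not candidates:
            break
        # prune everything except the top-b partial hypotheses
        beams = heapq.nlargest(b, candidates, key=lambda h: h[0])
    return max(finished + beams, key=lambda h: h[0])
```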
Interestingly, autoregressive models tend to overestimate the probability of short and ungrammatical translations that do not translate the entirety of the source sentence, and these are pruned by this scoring heuristic (Stahlberg & Byrne, 2019; Holtzman et al., 2019). Thus, while b may be set low to increase speed, it is often set low to improve translation quality. However, we believe that the effects of model changes may be obscured when they are evaluated under beam search's biases. We seek to propose modeling improvements that mitigate degenerate solutions, such that search quality and model quality are aligned.
MONTE CARLO TREE SEARCH
In Monte Carlo tree search (MCTS; Coulom, 2006), the Monte Carlo method replaces the heuristically driven value v̂(p) of a node p with an expected-outcome model based on random game playouts. For instance, given a node p representing a state of a game, one can assign a value to p by randomly playing from that state a certain number of times and computing the average score obtained from the playouts. Thus, no burden is placed on the form of the objective function, allowing the definition of arbitrarily complex objectives (e.g., the winner of a chess game). Additionally, as each playout yields a possible terminal state, both the current cost c(p) and the future cost f(p) are naturally embedded within the obtained estimate. Unlike beam search and A* search, where the search direction is determined by the value v̂(p), MCTS diversifies the search space by allocating budget to less explored areas of the search space (Kocsis & Szepesvári, 2006), and continually refines its value estimates.
ADAPTIVE TREE SEARCH FOR TEXT GENERATION
While MCTS (Coulom, 2006) can be directly applied to decode arbitrary translation objectives, the heuristics defined in MCTS are optimised for environments where the computational cost of the scoring function s(x, y) is low. For instance, the playout heuristic v̂ in the game of Go (Silver et al., 2016) runs hundreds of thousands of playouts, which can be computed in less than a second. In neural text generation, the scoring function frequently requires the computation of a neural network forward step, such as a log-probability computed using an autoregressive model, rendering these practices prohibitive. One option is to rely on a heuristic to generate samples to train a neural network that estimates v̂ (Leblond et al., 2021). However, this has been shown to be challenging, as model scores are difficult to estimate. Instead, we describe a variant of MCTS optimised for decoding in text generation.
DETERMINISTIC PLAYOUT HEURISTIC
We start by establishing our playout function v̂(p), which is used as an initializer for node values. While the Monte Carlo method is effective at accurately estimating the value of a given sequence by performing multiple random playouts, such a practice is infeasible here, as each of the playouts needs to be scored using the scoring function s(x, y), which we expect to be prohibitively expensive. Furthermore, the chance that grammatical translations are sampled by this process is extremely low, due to the sparsity of high-quality translations in Y. Thus, rather than multiple random playouts, we estimate the value of a given node using a single informed playout, guided by greedy decoding under an autoregressive model. Therefore, given a node p with prefix y^(p), we compute v̂(p) by recursively selecting the highest-probability token y according to an autoregressive model p(y | y^(p)_1, ..., y^(p)_i, x), and scoring the resulting translation using the objective function s(x, y). This also implies that our approach does not employ Monte Carlo estimates and that the decoding method is fully deterministic: the progression of the value estimates relies only on the refinement of the initial value estimates performed as the tree expands, as introduced in MCTS. Therefore, we will refer to our algorithm as an adaptive tree search (ATS) algorithm.
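A minimal sketch of this playout heuristic, assuming a hypothetical `greedy_next(prefix)` that returns the argmax token under the autoregressive proposal and a `score_fn` that evaluates the arbitrary objective s(x, y) on full sentences:

```python
def playout_value(prefix, greedy_next, score_fn, eos, max_len):
    """Deterministic playout heuristic v-hat(p): complete the prefix
    with greedy decoding under the autoregressive proposal, then score
    the full translation with the arbitrary objective s(x, y).
    """
    y = list(prefix)
    while (not y or y[-1] != eos) and len(y) < max_len:
        y.append(greedy_next(tuple(y)))   # argmax next token
    return score_fn(tuple(y))             # arbitrary objective s(x, y)
```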
ADAPTIVE TREE SEARCH WITH A MODIFIED UCT CRITERION
ATS operates on search trees instead of weighted graphs. A search tree covers the space of all possible full and partial translations Y*, and each node encodes a particular sequence y^(p). Nodes have |Σ| + 1 children, each appending a new word y ∈ Σ ∪ {EOS} to the sequence y^(p). We denote the child resulting from concatenating y to p as p·y. The child of a node that selects the EOS symbol is a terminal node, which is associated with a complete translation and can therefore be scored using the objective s(x, y). Each node stores its number of visits n^(p) and its current value estimate v^(p), which can be reassigned during search. Nodes that have not been inserted in the tree have n^(p) = 0 and no estimate for v^(p).
Search starts with the root node ε, with visit count n^(ε) = 1 and value v^(ε) = v̂(ε), which corresponds to the score obtained by translating x with greedy decoding. Afterwards, the tree expands in an iterative manner, where each iteration expands the search tree and updates its statistics.
As in the selection and expansion steps of MCTS, we traverse the instantiated tree, starting from ε, on the basis of the current estimated values v, together with confidence about the quality of the estimates. We recursively traverse the tree and select the child p·y with the highest score according to the continuous upper confidence tree criterion (Auger et al., 2013):
UCT(p, y) = v̂(p·y) + C · (√(n^(p)) / (1 + n^(p·y))) · π(y | y^(p)),    (1)
where C is a hyperparameter weighting the two terms. The first term, v̂(p·y), encourages exploitation of nodes with known high value. The second term, (√(n^(p)) / (1 + n^(p·y))) · π(y | y^(p)), encourages the algorithm to explore nodes with low visit counts more thoroughly. Here, we specify the policy as the probability obtained from an autoregressive model, π(y | y^(p)) = p(y | y^(p), x). The value of a node is its current estimate, v̂(p·y) = v^(p·y), if n^(p·y) > 0. For nodes not yet inserted in the tree, which have no value estimates, we compute an estimated value as:
y* = argmax_{y ∈ Σ : n^(p·y) > 0} v^(p·y),    v̂^(p·y) = v^(p·y*) · π(y | y^(p)) / π(y* | y^(p)),    (2)

where we estimate v̂^(p·y) by assuming that the ratio between the policies π at p·y and at the highest-value node p·y* is the same as the ratio between their values v̂.
The traversal terminates when a node with n^(p) = 0 or a terminal node is reached. In the former case, the node p is inserted into the tree, setting its visit count n^(p) = 1 and estimating its value v^(p) = v̂(p), as done in the simulation step of MCTS. We note here an important difference from many other formulations of MCTS, where selection terminates at leaf nodes (nodes where all children have n > 0) and is followed by an expansion step that inserts a new child prior to simulation. Expanding all children of a node is generally considered efficient in domains with a small Σ and a low playout cost v̂(p), and the standard MCTS algorithm does not attempt to optimise the subset that needs to be expanded. In the text domain, most words in the vocabulary are not applicable, as they neither fit the context y^(p) nor correspond to content in the source sentence x, and they can be excluded using the value estimate described in Equation 2.
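The selection rule of Equations 1 and 2 can be sketched as follows; `tree`, `vocab`, and `policy` are illustrative stand-ins (the tree maps nodes to [visit count, value], and `policy(y, p)` returns π(y | y^(p))), not the authors' data structures.

```python
import math

def uct_select(tree, p, vocab, policy, C=1.0):
    """One selection step of ATS (illustrative sketch).

    `tree[node] = [n, v]` stores visit counts and value estimates.
    Assumes p has at least one expanded child to anchor Eq. (2).
    """
    n_p = tree[p][0]
    expanded = [y for y in vocab if (p, y) in tree]
    # Eq. (2): the best expanded child anchors estimates for unseen children
    y_star = max(expanded, key=lambda y: tree[(p, y)][1])
    v_star, pi_star = tree[(p, y_star)][1], policy(y_star, p)

    def value(y):
        if (p, y) in tree:
            return tree[(p, y)][1]
        return v_star * policy(y, p) / pi_star  # estimated value, n = 0

    def uct(y):  # Eq. (1)
        n_child = tree[(p, y)][0] if (p, y) in tree else 0
        return value(y) + C * math.sqrt(n_p) / (1 + n_child) * policy(y, p)

    return max(vocab, key=uct)
```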
Next, we ascend from the selected node p·y, updating visit counts and value estimates:

n^(p) ← n^(p) + 1,    v^(p) ← max{v^(p), v^(p·y)},

where each parent p increases its visit count n^(p) and updates its value estimate v^(p) to the child's value if a new best translation is found. Thus, v^(p) represents the best translation obtained in the subtree rooted at p. Starting from the score obtained using greedy decoding when v^(p) is initialised, each new traversal that passes through p has a chance to refine this initial estimate with the newly found translation.
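A sketch of this backup step, assuming the traversal records the root-to-leaf path:

```python
def backup(tree, path, leaf_value):
    """Ascend from the newly expanded node, incrementing visit counts
    and propagating the best value found so far:
    n^(p) <- n^(p) + 1,  v^(p) <- max{v^(p), v^(p.y)}.
    """
    for node in reversed(path):   # path runs from the root to the leaf
        n, v = tree[node]
        tree[node] = [n + 1, max(v, leaf_value)]
```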
BEAM ADAPTIVE TREE SEARCH
A standard way to guarantee progression in MCTS is to run an instance of MCTS per word. Here, we would run an ATS instance ATS(ε, k) with k iterations starting from the root ε. Then, we set ε ← ε·y*_1, where y*_1 = argmax_{y∈Σ} v^(ε·y) is the child with the highest value estimate, and repeat this process until y*_i = EOS. However, it has been found that in text generation tasks, restricting search to a set of high-value nodes rather than a single one allows such games to be solved at a faster rate (Baier & Winands, 2012). Thus, we modify our selection step as follows:
UCT_constrained(p, y, d_min) = UCT(p, y) if d^(p·y) > d_min, and −∞ otherwise,    (3)
where d^(p·y) is initialised to the depth of p·y, i.e., the number of edges required to reach p·y from the root node. Then, the following update rule is added to ensure that d^(p) stores the depth of the deepest node achievable from p:
d^(p) ← max{d^(p), d^(p·y)},
where we update each node so that d^(p) stores the depth of the deepest node accessible from p. Thus, in Equation 3, the condition d^(p·y) > d_min tests whether the subtree rooted at p·y contains a node deeper than d_min.
We define BATS(ε, k) as a Beam ATS instance that runs ATS starting from the root node ε with the selection criterion UCT_constrained, initialising d_min = 0 and increasing d_min by 1 every k iterations. Search stops when no node satisfies d^(p·y) > d_min, or when a maximum depth d_max is reached.
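The following sketch outlines the constrained criterion and the d_min schedule; `run_ats_iteration` is a hypothetical routine performing one selection/expansion/backup pass and reporting the deepest depth reached from the root.

```python
def uct_constrained(uct_score, d_child, d_min):
    """Eq. (3): mask children whose subtrees cannot reach beyond d_min."""
    return uct_score if d_child > d_min else float("-inf")

def bats(root, k, d_max, run_ats_iteration):
    """BATS driver sketch: run ATS with the constrained criterion,
    raising d_min by one every k iterations until d_max is reached or
    no node satisfies d^(p.y) > d_min.
    """
    d_min = 0
    while d_min < d_max:
        deepest = 0
        for _ in range(k):
            deepest = max(deepest, run_ats_iteration(root, d_min))
        if deepest <= d_min:   # no eligible node remains
            break
        d_min += 1
```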
OBJECTIVES
As decoding vanilla autoregressive models with our decoder is unlikely to yield translations of quality superior to beam search, since the beam search bias is essential to overcoming the calibration issues in these models, we propose modeling improvements in order to address these shortcomings.
MAX RANK
Decoding in autoregressive models generally optimises a normalised log probability, log p(y | x) / ((5 + |y|)/6)^α, which combines the sum of the token-level log-probabilities (when α = 0) with a length-based adjustment that approximates the mean of the log-probabilities as the length |y| grows (when α = 1).
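As a small worked example, the normalisation can be written as below (this matches the GNMT-style length penalty given above; the function name is ours).

```python
def normalised_log_prob(sum_log_probs, length, alpha=0.8):
    """Length-normalised autoregressive score: the sum of token
    log-probabilities divided by ((5 + |y|) / 6) ** alpha.
    alpha = 0 recovers the plain sum; alpha = 1 approximates the mean.
    """
    return sum_log_probs / (((5.0 + length) / 6.0) ** alpha)
```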
Similar to normalised log-probabilities, we consider a metric that characterizes a translation by its minimum token-level log-probability. The intuition here is that the quality of the translation is represented by the worst decision made in the sequence. In practice, many degenerate cases in autoregressive models are created by a single bad decision, such as generating an EOS token prematurely or omitting translations, which can be understood in terms of uniform information density (Meister et al., 2020).
However, the issue with the minimum token-level log-probability is that log-probability ranges tend to vary depending on the context and the number of translation options available. Thus, they are not very reflective of the quality of the choice made, as even the best available choice at a given timestep could have the lowest log-probability in the sequence. Instead, we optimise the normalised rank:
r(y_i | y_1, ..., y_{i−1}, x) = (1/|Σ|) · Σ_{y∈Σ} δ(p(y_i | y_1, ..., y_{i−1}, x) > p(y | y_1, ..., y_{i−1}, x)),

where we count the number of actions in Σ with lower probability than y_i. By using the rank r instead of the log-probability p, we can compare values within the same range, at the cost of a loss in relative precision. Thus, we name our metric max rank (MR), which is computed as follows:

MR(x, y) = max_{i=1}^{|y|} log r(y_i | y_1, ..., y_{i−1}, x).
Finally, unlike the mean and the sum of log-probabilities, the max over a sequence of per-token scores is not autoregressive, so beam search is not applicable.
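A sketch of the metric, computed from the per-step distributions of an already decoded sentence; the clamp guarding log(0) is our addition for numerical safety, not part of the definition.

```python
import numpy as np

def max_rank(token_log_probs, chosen_ids):
    """Max rank objective from per-step distributions.

    `token_log_probs`: (|y|, |V|) array of log p(. | y_<i, x);
    `chosen_ids`: indices of the generated tokens y_i.
    """
    T, V = token_log_probs.shape
    log_ranks = []
    for i, y_i in enumerate(chosen_ids):
        # normalised rank: fraction of the vocabulary scored below y_i
        r = np.sum(token_log_probs[i] < token_log_probs[i, y_i]) / V
        log_ranks.append(np.log(max(r, 1.0 / V)))  # guard against log(0)
    return max(log_ranks)
```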
NOISY CHANNEL MODEL
The noisy channel model uses the Bayes' rule decomposition of the probability of a sentence, p(y | x) = p(x | y) p(y) / p(x), where the channel model p(x | y) can be trained as a translation model in the reverse direction and p(y) is a language model. The prior p(x) can be ignored in the context of a maximization problem. Since the reverse model is not autoregressive in the space Σ*, it can bypass many of the degenerate cases of autoregressive models.
MINIMUM RISK TRAINED AUTOREGRESSIVE MODELS
In order to show that BATS can be an attractive alternative to beam search under autoregressive models, we need to improve the model so that the search bias is no longer crucial to preserving the translation quality of the generated text. To this end, we fine-tune our models using minimum risk training (MRT; Shen et al., 2016). The MRT objective minimizes the empirical risk r(y, y′) over a subset of Σ* obtained by sampling n translations. This allows the model to mitigate degenerate cases caused by optimising the likelihood objective, by fine-tuning the model on downstream metrics such as BLEU (Papineni et al., 2002).
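A sketch of the standard MRT objective of Shen et al. (2016), restricted to a set of sampled translations; the sharpness hyperparameter `alpha` is illustrative and not necessarily the value used here.

```python
import torch

def mrt_loss(sample_log_probs, risks, alpha=1.0):
    """Minimum risk training loss (sketch): expected risk under the
    renormalised distribution q(y'|x) proportional to p(y'|x)^alpha,
    restricted to the n sampled translations.

    `sample_log_probs`: tensor of log p(y'|x) for the n samples;
    `risks`: tensor of r(y, y') for the same samples (e.g. -BLEU).
    """
    q = torch.softmax(alpha * sample_log_probs, dim=-1)  # over samples
    return (q * risks).sum()                             # E_q[r(y, y')]
```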
EXPERIMENTS
SETUP
We conduct our experiments on the Chinese-English and Pashto-English tasks from WMT2020 (Barrault et al., 2020) and German-English from WMT2014 (Bojar et al., 2014), following the same training, development, and test splits. Our autoregressive transformer baseline uses the multi-query attention model (Shazeer, 2019). It uses the standard architecture with 6 encoder and decoder layers with 512 hidden units, 2048-sized tied embeddings for both source and target word projections, and 8 attention heads. We tokenize the data with byte-pair encoding (Sennrich et al., 2016) with 32K merges and set a maximum sentence length of Y = 128. Translation quality is evaluated using sacreBLEU (Post, 2018). We choose the checkpoint that yields the highest BLEU on the validation set using beam search with the normalisation constant α = 0.8 and beam size 6. We also compare a variant fine-tuned using MRT according to the procedure described in Appendix A.3.
For the noisy channel model, we train the channel model by simply swapping the translation direction of the autoregressive model, with the same hyperparameters. For the language model prior, we employ the Transformer-XL architecture (Dai et al., 2019) trained on 1 billion words (Chelba et al., 2013). For non-autoregressive objectives, we use beam search as a proxy to generate translation candidates, which are rescored using the non-autoregressive metric (Yu et al., 2020a). For BATS, we simply set the hyperparameter C = 1. While optimising C could lead to more efficient optimisation of the model score, our goal is to study how model scores correlate with translation quality under different objectives.
The translation budget (beam size for beam search and number of iterations for BATS) is swept by doubling its value from 1 to 256. We combine different objective components under a log-linear model, where the weights of the components are tuned with MERT (Och, 2003). In order for model scores to be comparable, all weights are tuned on a pool of 256 translations generated using beam search. 2

RESULTS

Table 1 illustrates the translation quality results using BATS and beam search on the normalised autoregressive model baseline and on the optimal model (Column "System") for each language pair (Column "Language Pair"). We perform a grid search over the following decisions: (1) whether to use Max Rank (Column "MR"); (2) whether to use the Noisy Channel model (Column "NC"); (3) whether to tune the autoregressive model using MRT (Column "MRT"); (4) whether to use beam search or BATS; and (5) the translation budget of each decoder. The combination with the highest BLEU on the validation set is used to decode the test set, and the BLEU scores obtained using beam search and BATS are reported (Columns "Beam Search" and "BATS"). 3 We observe that for all pairs, decoding with BATS yields gains over decoding with beam search when using the combination of objectives with the highest BLEU on the validation set. As expected, due to the search bias in beam search, it does comparatively better on some language pairs, such as Chinese-English and Pashto-English.
In terms of modeling, we note that our proposed metric, Max Rank, and the MRT method combined yield the best results for Mandarin and German. The noisy channel model only yields improvements on Pashto-English, when used with Max Rank and MRT, as the training data is small (500k parallel sentences) and the model relies on the large language model to provide accurate predictions.

Table 1: Comparison between the BLEU scores obtained using beam search and BATS. Pairs of rows describe the results obtained using the vanilla autoregressive model with normalisation and the best combination of models (Max Rank, Noisy Channel, and Minimum Risk Training), tuned on the validation set. The translation budget is also tuned for each method for maximum BLEU.
BEAM SEARCH AND BATS
We now provide a more in-depth analysis of the Chinese-English language pair, where we believe the results are most informative. Table 2 illustrates the results obtained using some of the models explored in our grid search (Column s(x, y)). For each model, we show the best BLEU obtained on the test set (Column "BLEU"), the beam size (Column "Beam") or number of iterations (Column "Iter") at which the best BLEU was obtained on the validation set, and the percentage improvement between the results obtained using beam search and BATS (Column "Delta"). Autoregressive and non-autoregressive models are marked with AR and NAR, respectively. It is also important to note that the number of iterations in BATS and the beam size in beam search are not comparable point-wise; instead, we analyse the overall behavior of the model score and BLEU curves.
When to use beam search? In the models where beam search outperforms BATS (Rows "Log Probability AR (α = 0)" and "Log Probability AR (α = 0.8)"), we notice that the search budget in both cases is always low. Figure 1 plots the evolution of BLEU (top) and model score (bottom) for both decoders as the translation budget is increased. For the normalised log-probability (Column "Log Probability (α = 0.8)"), the model scores obtained by beam search and BATS are very similar. However, we can observe a large gap between the BLEU scores obtained at the same model score. This shows that the beam search bias is filtering the set of candidates in beam search so that they are of higher quality, even though the model score fails to discriminate them. Additionally, we observe the standard BLEU curve (Stahlberg & Byrne, 2019), where after a few initial iterations BLEU deteriorates rapidly, which renders elaborate decoding mechanisms unneeded. Furthermore, BATS and other MCTS-based methods are not ideal for low search budget scenarios, as they depend on high node visit counts to accumulate enough statistics to make informed decisions. We see for all objectives that model scores for BATS do not outperform beam search until a large budget is allocated. In conclusion, if the model cannot support high search budgets, beam search is the preferred alternative, especially if the modeling issues can be addressed by the search bias.
When to use BATS? With the addition of the noisy channel model (Row "Log Probability (α = 0.8) + NC (NAR)") and Max Rank (Row "Log Probability (α = 0.8) + MR (NAR)"), the search budget before BLEU deteriorates is increased for both decoders. Here, we observe that BATS is the decoding option that yields higher BLEU, and that the delta is larger when BATS can employ more iterations. Figure 1 shows that the evolution of BLEU is considerably more stable with the noisy channel model (Column "Log Probability (α = 0.8) + NC") and when Max Rank is applied (Column "Log Probability (α = 0.8) + MR"). For these non-autoregressive models, BATS yields significantly better model scores than reranking. More interestingly, comparing the behavior of beam search and BATS, similar model scores between the two methods do not yield similar BLEU scores. In the noisy channel model, we observe that at 16 iterations and beam size 16, both decoders yield similar model scores and BLEU scores. However, as the model score increases beyond that point, the degeneration in beam search is significant, while BATS observes almost no deterioration. This suggests that the beam search bias has a negative impact on translation quality at high values of b, by filtering good translations from the search space. Using Max Rank, we observe that not only can BATS achieve considerably higher model scores, as it optimises the non-autoregressive model directly, but it also yields considerably higher BLEU scores. In beam search, BLEU stops improving at a beam size of 32, even though the model score keeps increasing due to the search bias.
Finally, we observe that the search bias issue is also present when using beam search to decode from an autoregressive model fine-tuned with MRT (Table 2, Row "Log Probability MRT (α = 0.8) (AR)"). Figure 1 shows that once MRT is applied (Column "Log Probability MRT (α = 0.8)"), not only can BATS find significantly better model scores, but these are also associated with better BLEU scores. Additionally, beam search stagnates after a beam size of 8.
In conclusion, as translation models become more robust, there is a growing need for better decoding mechanisms, such as BATS, in order to maximize translation quality. We observe that in both autoregressive and non-autoregressive models there is a limit to the model and BLEU scores that can be obtained using beam search, which is partially attributed to the fact that its search is strongly biased due to the lack of future costs.
We believe that future research will drive models to the point where translation quality nearly perfectly matches model scores. In an oracle setup, where the objective is sentence-level BLEU obtained by peeking at the reference (Row "Oracle BLEU"), we hypothesised that the delta between beam search and BATS would grow vastly, and indeed we observe a delta of 39.19%.
CONCLUSION
This paper proposes an adaptive tree search algorithm designed to optimise arbitrary metrics. It uses rollouts from an auxiliary autoregressive model to obtain value estimates for internal nodes. This allows the decoder to optimise arbitrary objectives and to avoid the search biases of manually defined heuristics, such as the partial translation probability in autoregressive models. BATS is particularly useful when models are robust enough to allow for a higher search budget. As many existing objectives are found to be poorly correlated with translation quality, we propose a new metric named max rank and use existing methods, such as the noisy channel model and minimum risk training, to address the failure modes of vanilla autoregressive models. Results on three language pairs are favourable to BATS when using our proposed augmentations of the autoregressive model. Additionally, we find that the gap in translation quality between beam search and BATS increases as more robust models are employed. More importantly, we observe that the search bias prevents beam search from achieving high-quality translations, as it filters good translations that are unfavoured by the search heuristic: the model score increases, but BLEU decreases. This shows that beam search limits the potential of many models by establishing translation quality ceilings related not to the robustness of the model, but to the topology of the search space it establishes. This suggests that as scientific progress drives more robust models, exploring more robust decoding methods, such as BATS, is fundamental for advancing the field of text generation.
ACKNOWLEDGMENTS
We thank Rémi Leblond, Geoffrey Irving and Phil Blunsom for useful feedback throughout the different stages of this project.
A APPENDIX
A.1 BATS VS. ATS
The advantage of the beam variant of MCTS (Baier & Winands, 2012) is that the search algorithm does not have to commit to a single branch every k iterations. As often occurs in translation, multiple valid translations exist and correspond to different branches in the tree, and only after extending the search tree further can the optimal translation be identified. In ATS, once EOS is chosen, search ends immediately, with no chance for the decoder to explore other branches, which accentuates issues where degenerate solutions are chosen (e.g., prematurely ending a translation). A comparison between ATS and BATS on our WMT2020 Chinese-English validation set using autoregressive models is shown in Table 3, where we observe that ATS both has a bias towards degenerate cases and yields worse model scores and BLEU.
A.2 MIN PROB VS. MAX RANK
The most straightforward approach to selecting the worst decision in a translation is to select the lowest log-probability in the sentence. However, log-probabilities are not a good indicator of whether a decision is good or bad, as some word translations are inherently low-probability (words with many valid translations). Thus, we use the maximum rank instead. Table 4 provides a comparison between the Min Prob (Row "Log Probability (α = 0.8) + MP") and Max Rank objectives (Row "Log Probability (α = 0.8) + MR"). We observe that while Min Prob yields a small improvement over the baseline (Row "Log Probability (α = 0.8)"), it is significantly smaller than Max Rank's improvement. Additionally, Min Prob clearly does not address the degenerate-solutions problem in autoregressive models.

A.3 MINIMUM RISK TRAINING WITH CHRF AND BLEU

While BLEU (Papineni et al., 2002) is the downstream translation metric used in most MT evaluations, its sentence-level predictions tend to be sparse and inaccurate. Thereby, training quickly overfits before optimal translation quality is reached. With sentence-level ChrF (Popović, 2015), training is more stable, and a better optimal translation quality can be obtained.
We use a sample size of 8 translations per sentence, generated via temperature sampling with temperature 0.8. Finally, we set the risk r(y, y′) = −(1/2) BLEU(y, y′) − (1/2) ChrF(y, y′), where BLEU(·) is the sentence-level sacreBLEU (Post, 2018) and ChrF(·) is the sentence-level ChrF (Popović, 2015). We used the average of BLEU and ChrF because BLEU was designed as a corpus-level metric, and ChrF provides a better estimate of translation quality with sparser matches against a reference for a single sentence. Table 5 compares the results obtained using only BLEU(·), only ChrF(·), and their combination, using beam search with beam 6, on the validation set of the WMT2020 Chinese-English dataset. Here, we observe that a combination of both scores yields the optimal translation quality.

Table 5: BLEU obtained on the WMT2020 Chinese-English validation set using different metrics as the risk r(y, y′). Columns "BLEU(·)" and "ChrF(·)" denote the weights applied, and Column "BLEU" denotes the BLEU obtained. The first row illustrates the BLEU score obtained prior to MRT.
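Returning to the risk definition above, it can be illustrated using sacreBLEU's sentence-level helpers (which return scores on a 0-100 scale; the exact scaling used in the experiments is not specified):

```python
import sacrebleu

def risk(hyp, ref):
    """Sentence-level risk r(y, y') as the negated average of sentence
    BLEU and ChrF, per the definition above. Illustrative sketch only.
    """
    bleu = sacrebleu.sentence_bleu(hyp, [ref]).score
    chrf = sacrebleu.sentence_chrf(hyp, [ref]).score
    return -0.5 * bleu - 0.5 * chrf
```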
A.4 COMPUTATIONAL COST OF BATS
For beam search with beam size b and a maximum sentence length Y, decoding requires b × Y × A operations, where A is the cost of one transformer+softmax block. Additionally, for reranking, the model score s(x, y) needs to be computed for each of the b translations, at a cost of B each. Thus, the total cost is b × (Y × A + B). In BATS, computation is proportional to the number of expanded nodes on which the playout heuristic is applied. Each playout requires greedy decoding followed by the scoring function s(x, y), i.e., Y × A + B computations. Finally, all nodes along the path used in greedy decoding share the same value v, which means that the first expanded node on this path incurs no extra cost, with the exception of the root node. Thus, the cost of BATS is (1 + z) × (Y × A + B), where z is the number of non-root nodes expanded more than once. As Y × A + B is a common factor for both methods, we define it as a computational unit. Table 6 shows the results obtained for the max rank objective (Row "Log Probability (α = 0.8) + MR" in Table 2). The "Cost" column reports the number of computational units, and we observe that BATS is considerably more expensive to run than beam search. However, observing the model scores (Column "Score"), we notice that beam search gradually decreases the rate at which model score gains are obtained (Column "Gain"), even though the cost doubles at each row. For BATS, we observe that while the cost is high initially (73.049 at 2 iterations), it increases at a linear rate with the number of iterations. More importantly, we notice considerable gains even at high numbers of iterations (11.82 at 128 iterations). Finally, at beam 256, the cost of beam search is comparable to the cost of running 16 BATS iterations, which achieves similar model scores.
It is also important to note that cost efficiency is not the goal of this work; rather, we aim to expose the need for better decoders that are devoid of the beam search bias. Many improvements to MCTS-based methods can be made to improve efficiency, such as training value and policy networks iteratively (Silver et al., 2016).

Table 6: Comparison between beam search and BATS in terms of computational cost, using the normalised autoregressive model on the WMT2020 Chinese-English validation set. Cells denote the computational cost, the model score obtained, and the gain in score relative to the row above.
A.5 EXAMPLE TRANSLATIONS AND SEARCH ERRORS
We compare the translated sentences using an MRT-tuned autoregressive model for Chinese-English, where BATS and beam search yield similar BLEU but where BATS achieves significantly better model scores. Table 7 provides three example translations from the test set, obtained using beam 64 for beam search (Row "Beam Search") and 256 iterations for BATS (Row "BATS"), which is the setup that obtained optimal results on the validation set. The first example shows that beam search tends to prolong sentences by using longer expressions ("many" vs. "there are also many") in order to obtain short-term value gains but a lower overall score, also slightly shifting the tone of the sentence. Figure 2 illustrates this issue: we observe that by using the expression "there are also many", the decoder delays the generation of the word "technological" for three timesteps, leading to the higher score of −0.78729 (left path) compared to the alternative −1.20077 (right path) at the same timestep. While this score regularizes to −1.58682 once the word "technological" is generated, it is likely that the alternative translation is pruned by beam search. In the second example, we observe that beam search prefers to reorder the original sentence, so that higher-scoring terms ("U.S. destroyer USS Decatur") are inserted first. However, as one can observe from the final score, this decision is only favorable in the short term, as the final score of the sentence is substantially lower than that of the translation found using BATS, which respects the order of the original sentence. In the last sentence, we observe that in the final portion of the translation, the decoder makes a set of individually high-scoring decisions that lead to an ungrammatical translation, as all grammatical options have been filtered from the beam.
In the second example, we observe that beam search prefers to reorder the original sentence, so that higher scoring terms ("U.S. destroyer USS Decatur") in the sentence are inserted first. However, as one can observe from the final score, this decision is only favorable in the short term as the final score of the sentence is substantially lower than the translation found using BATS, which respects the order of the original sentence. In the last sentence, we observe that in the final portion of the translation, the decoder makes a set of individually high scoring decisions that lead to an ungrammatical translation as all grammatical options have been filtered from the beam. : Illustration of the issue with the search bias in beam search where the decoder can delay the generation of low probability words, in this case the word "technological" in order to generate high initial scores. Edges scores correspond to the token level log probability log p(y i | y 1 , . . . , y i−1 , x) and nodes scores correspond to the value that is assigned to the state using the normalised partial sum of probabilities j≤i log p(yj |y1,...,yj−1,x) ( 5+i 6 ) α with α = 0.8. This example is obtained from the WMT2020 test set. The aim of Italian Eurosceptic government was that the budget deficit was equivalent to 2.4% of gross domestic product (GDP) in the next three years, suggesting that there was no debt reduction despite deficit reduction requirements. Beam search The Italian Euroskeptic government's goal is to have a budget deficit equivalent to 2.4% of gross domestic product over the next three years, suggesting that only facing deficit reduction requirements has not yet been debt reduction.
-1.647
BATS
The Italian Euroskeptic government's goal is to have a budget deficit equivalent to 2.4% of gross domestic product over the next three years, indicating only that there is no debt reduction required to reduce the deficit.
-1.542 Table 7: Examples of translation obtained using beam search and BATS on the Chinese-English test set using MRT tuned autoregressive models.
Figure 1: Comparison of BATS and beam search over different translation budgets under different translation objectives on the WMT2020 Chinese-English test set. Each column illustrates the BLEU (top) and the model score (bottom) obtained using the two decoders on a different objective.
Table 2: Comparison between beam search and BATS on the WMT2020 Chinese-English test set. Pairs of cells denote BLEU scores and the iteration budget achieving the best BLEU on the validation set.
Table 3: Comparison between BATS and ATS on our WMT2020 Chinese-English validation set, with Log Probability (α = 0.8) as the search objective.

             BATS               ATS
Iter    s(x, y)   BLEU     s(x, y)   BLEU
1       -5.2      21.9     -5.23     21.9
2       -5.00     22.9     -4.92     21.3
4       -4.81     23.1     -4.90     20.9
8       -4.74     23.1     -4.80     20.8
16      -4.69     22.9     -4.78     20.7
32      -4.65     22.9     -4.75     20.7
64      -4.61     22.5     -4.73     20.4
128     -4.59     22.2     -4.72     20.2
256     -4.57     21.7     -4.72     20.0
[Plot: BLEU (top) and model score s(x, y) (bottom) versus the number of iterations (1-256) for BATS and ATS, under the Log Probability (α = 0.8) objective.]
Table 4: Comparison between Min Prob and Max Rank on our WMT2020 Chinese-English test set. The number of iterations is tuned on the validation set.
Table 7: Examples of translations obtained using beam search and BATS on the Chinese-English test set, using MRT-tuned autoregressive models. s(x, y) is given for each system output.

Example 1
Source: 同样,也有不少科技企业在专利技术授权、专利技术使用等方面遭遇了不小的侵权风波。
Reference: Similarly, many technology companies have encountered numerous infringements crisis in patent technology licensing and patent technology use.
Beam Search (s(x, y) = -2.097): Similarly, there are also many technological enterprises that have encountered great infringing waves in the licensing of patented technology and the use of patented technology.
BATS (s(x, y) = -1.740): Similarly, many technological enterprises have encountered great infringing waves in the licensing of patented technology and the use of patented technology.

Example 2
Source: 据中央社30日综合外电报道,据要求匿名的美国官员透露,美国驱逐舰USS Decatur驶入了南沙群岛南薰礁(Gaven Reef)和赤瓜礁(Johnson Reef)12海里范围内。
Reference: According to the comprehensive foreign reports of the Central News Agency, the U.S. official who requested anonymity revealed that the United States Navy destroyer, USS Decatur, cruised into the 12 nautical mile territorial limit of Gaven Reef and Johnson Reef of the Nansha Islands.
Beam Search (s(x, y) = -2.722): U.S. destroyer USS Decatur sailed into the Gaven Reef and Johnson Reef 12 nautical miles (12 nautical miles) of the Southern Sand Islands, according to Central Intelligence Agency's Comprehensive Outreach News on 30.
BATS (s(x, y) = -2.288): According to Central News Agency's comprehensive external telecommunications report on 30 June, U.S. officials who requested anonymity, the USS Decatur sailed into the Gaven Reef and Johnson Reef of the Southern Sand Islands within 12 nautical miles.

Example 3
Source: 意大利疑欧派政府的目标是未来三年预算赤字相当于国内生产总值(GDP)的2.4%,这表明仅管面临减赤要求仍未有债务削减
Reference: The aim of Italian Eurosceptic government was that the budget deficit was equivalent to 2.4% of gross domestic product (GDP) in the next three years, suggesting that there was no debt reduction despite deficit reduction requirements.
Beam Search (s(x, y) = -1.647): The Italian Euroskeptic government's goal is to have a budget deficit equivalent to 2.4% of gross domestic product over the next three years, suggesting that only facing deficit reduction requirements has not yet been debt reduction.
BATS (s(x, y) = -1.542): The Italian Euroskeptic government's goal is to have a budget deficit equivalent to 2.4% of gross domestic product over the next three years, indicating only that there is no debt reduction required to reduce the deficit.
1 In this paper, we will use the term non-autoregressive to refer to any model whose scores do not decompose additively with the words in the output sequence. These include models that make conditional independence assumptions and generate each word independently of the others (Lee et al., 2018), but also energy-based models that require a complete translation hypothesis to compute a score, and models that make a Bayes' rule decomposition of the translation probability.
2 Tuning λ using BATS to generate candidates yields similar weights. 3 The optimal models with the highest BLEU on the validation set for beam search and BATS are the same.
David Auger, Adrien Couëtoux, and Olivier Teytaud. Continuous upper confidence trees with polynomial exploration-consistency. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2013, Proceedings, Part I, volume 8188 of Lecture Notes in Computer Science, pp. 194-209. Springer, 2013.
Hendrik Baier and Mark H. M. Winands. Beam Monte-Carlo tree search. In 2012 IEEE Conference on Computational Intelligence and Games (CIG), pp. 227-233. IEEE, 2012.
Loïc Barrault, Ondřej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Alexander Fraser, Yvette Graham, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Makoto Morishita, Christof Monz, Masaaki Nagata, Toshiaki Nakazawa, and Matteo Negri (eds.). Proceedings of the Fifth Conference on Machine Translation. Association for Computational Linguistics, Online, November 2020.
Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pp. 12-58, Baltimore, Maryland, USA, June 2014. Association for Computational Linguistics.
Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311, 1993.
Cameron Browne, Edward Powley, Daniel Whitehouse, Simon Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1), 2012.
Ciprian Chelba, Tomás Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. One billion word benchmark for measuring progress in statistical language modeling. CoRR, abs/1312.3005, 2013.
Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In 5th International Conference on Computers and Games, Turin, Italy, May 2006.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. CoRR, abs/1901.02860, 2019.
Peter Hart, Nils Nilsson, and Bertram Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100-107, 1968.
Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. CoRR, abs/1904.09751, 2019.
Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1700-1709, Seattle, Washington, USA, October 2013. Association for Computational Linguistics.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. OpenNMT: Open-source toolkit for neural machine translation. CoRR, abs/1701.02810, 2017.
Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In Proceedings of the Seventeenth European Conference on Machine Learning (ECML 2006), volume 4212 of Lecture Notes in Computer Science, pp. 282-293. Springer, 2006.
Philipp Koehn and Rebecca Knowles. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pp. 28-39, Vancouver, August 2017. Association for Computational Linguistics.
Philipp Koehn, Franz J. Och, and Daniel Marcu. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pp. 127-133, 2003.
Rémi Leblond, Jean-Baptiste Alayrac, Laurent Sifre, Miruna Pislar, Jean-Baptiste Lespiau, Ioannis Antonoglou, Karen Simonyan, and Oriol Vinyals. Machine translation decoding beyond beam search. CoRR, abs/2104.05336, 2021.
Jason Lee, Elman Mansimov, and Kyunghyun Cho. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1173-1182, Brussels, Belgium, October-November 2018. Association for Computational Linguistics.
Chu-Cheng Lin, Aaron Jaech, Xin Li, Matthew R. Gormley, and Jason Eisner. Autoregressive modeling is misspecified for some sequence distributions. CoRR, abs/2010.11939, 2020.
Qi Liu, Lei Yu, Laura Rimell, and Phil Blunsom. Pretraining the noisy channel model for task-oriented dialogue. Transactions of the Association for Computational Linguistics, 2021.
Clara Meister, Tim Vieira, and Ryan Cotterell. If beam search is the answer, what was the question? CoRR, abs/2010.02650, 2020.
Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. Facebook FAIR's WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (WMT), 2019.
Franz Josef Och. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 160-167, Sapporo, Japan, July 2003.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318, Philadelphia, Pennsylvania, USA, July 2002.
Maja Popović. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pp. 392-395, Lisbon, Portugal, September 2015.
Matt Post. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 186-191, Brussels, Belgium, October 2018.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725, Berlin, Germany, August 2016.
Noam Shazeer. Fast transformer decoding: One write-head is all you need. CoRR, abs/1911.02150, 2019.
Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1683-1692, Berlin, Germany, August 2016.
Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, and Chris Dyer. Better document-level machine translation with Bayes' rule. Transactions of the Association for Computational Linguistics, 8:346-360, 2020b.
Mastering the game of go with deep neural networks and tree search. David Silver, Aja Huang, Christopher J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 529David Silver, Aja Huang, Christopher J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529:484-503, 2016. URL http: //www.nature.com/nature/journal/v529/n7587/full/nature16961.html.
On NMT search errors and model errors: Cat got your tongue?. Felix Stahlberg, Bill Byrne, 10.18653/v1/D19-1331Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsFelix Stahlberg and Bill Byrne. On NMT search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3356-3362, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1331. URL https://www.aclweb.org/anthology/D19-1331.
Sequence to sequence learning with neural networks. Ilya Sutskever, Oriol Vinyals, Quoc V Le, Advances in Neural Information Processing Systems. Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. WeinbergerCurran Associates, Inc27Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neu- ral networks. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. Wein- berger (eds.), Advances in Neural Information Processing Systems, volume 27. Curran Asso- ciates, Inc., 2014. URL https://proceedings.neurips.cc/paper/2014/file/ a14ac55a4f27472c5d894ec1c3c743d2-Paper.pdf.
Attention is all you need. CoRR, abs/1706.03762. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.
Google's neural machine translation system: Bridging the gap between human and machine translation. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, abs/1609.08144Oriol Vinyals. Greg Corrado, Macduff Hughes, and Jeffrey DeanCoRRYonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144.
Simple and effective noisy channel modeling for neural machine translation. Kyra Yee, Yann N Dauphin, Michael Auli, Proceedings of EMNLP. Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun WanEMNLPKyra Yee, Yann N. Dauphin, and Michael Auli. Simple and effective noisy channel modeling for neural machine translation. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan (eds.), Proceedings of EMNLP, 2019.
The neural noisy channel. Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, Tomás Kociský, Proceedings of ICLR. ICLRLei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomás Kociský. The neural noisy channel. In Proceedings of ICLR, 2017.
The DeepMind Chinese-English document translation system at WMT2020. Lei Yu, Laurent Sartran, Po-Sen Huang, Wojciech Stokowiec, Domenic Donato, Srivatsan Srinivasan, Alek Andreev, Wang Ling, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationSona Mokra, Agustin Dal Lago, Yotam Doron, Susannah Young, Phil Blunsom, and Chris DyerLei Yu, Laurent Sartran, Po-Sen Huang, Wojciech Stokowiec, Domenic Donato, Srivatsan Srinivasan, Alek Andreev, Wang Ling, Sona Mokra, Agustin Dal Lago, Yotam Doron, Susannah Young, Phil Blunsom, and Chris Dyer. The DeepMind Chinese-English document translation system at WMT2020. In Proceedings of the Fifth Conference on Machine Translation, 2020a. |
208,527,270 | Deep Neural Network Fingerprinting by Conferrable Adversarial Examples | In Machine Learning as a Service, a provider trains a deep neural network and provides many users access to it. However, the hosted (source) model is susceptible to model stealing attacks where an adversary derives a surrogate model from API access to the source model. For post hoc detection of such attacks, the provider needs a robust method to determine whether a suspect model is a surrogate of their model or not. We propose a fingerprinting method for deep neural networks that extracts a set of inputs from the source model so that only surrogates agree with the source model on the classification of such inputs. These inputs are a specifically crafted subclass of targeted transferable adversarial examples which we call conferrable adversarial examples that transfer exclusively from a source model to its surrogates. We propose new methods to generate these conferrable adversarial examples and use them as our fingerprint. Our fingerprint is the first to be successfully tested as robust against distillation attacks, and our experiments show that this robustness extends to robustness against weaker removal attacks such as fine-tuning, ensemble attacks, and adversarial retraining. We even protect against a powerful adversary with white-box access to the source model, whereas the defender only needs black-box access to the surrogate model. We conduct our experiments on the CINIC dataset and a subset of ImageNet32 with 100 classes. | [] | Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas [email protected]
University of Waterloo
WaterlooOntarioCanada
Yuxuan Zhang [email protected]
University of Waterloo
WaterlooOntarioCanada
Florian Kerschbaum [email protected]
University of Waterloo
WaterlooOntarioCanada
Abstract-In Machine Learning as a Service, a provider trains a deep neural network and provides many users access to it. However, the hosted (source) model is susceptible to model stealing attacks where an adversary derives a surrogate model from API access to the source model. For post hoc detection of such attacks, the provider needs a robust method to determine whether a suspect model is a surrogate of their model or not. We propose a fingerprinting method for deep neural networks that extracts a set of inputs from the source model so that only surrogates agree with the source model on the classification of such inputs. These inputs are a specifically crafted subclass of targeted transferable adversarial examples which we call conferrable adversarial examples that transfer exclusively from a source model to its surrogates. We propose new methods to generate these conferrable adversarial examples and use them as our fingerprint. Our fingerprint is the first to be successfully tested as robust against distillation attacks, and our experiments show that this robustness extends to robustness against weaker removal attacks such as fine-tuning, ensemble attacks, and adversarial retraining. We even protect against a powerful adversary with white-box access to the source model, whereas the defender only needs black-box access to the surrogate model. We conduct our experiments on the CINIC dataset and a subset of ImageNet32 with 100 classes.
Index Terms-Fingerprint, Conferrable, Targeted Transferable Adversarial Examples, Knowledge Distillation, Deep Learning
I. INTRODUCTION
Deep neural networks (DNN) are powerful classifiers deployed for a wide range of tasks, e.g., image segmentation [1], in autonomous vehicles [2], natural language processing [3] and health care predictions [4]. Developing a DNN for a specific task is costly because of the labor and computational resources required for data collection, data cleaning, and training of the model. For this reason, models are often provided by a single entity and consumed by many, for example, in the context of Machine Learning as a Service (MLaaS). A threat to the provider is model stealing, in which an adversary derives a surrogate model from access to a source model, but without access to data with ground truth labels.
In this paper we study linkability of DNN models. A link is a relation between a target model and a source model. A target model is linked to a source model if the target model is derived from the source model. Methods of derivation include, but are not limited to, distillation [5], fine-tuning [6], adversarial training [7] and model extraction [8]. A target model is not linked to a source model when it is trained independently of the source model from scratch, possibly on the same data set as the source model. We call a derived target model a surrogate model and an independently trained target model a reference model. Linkability is the ability of an algorithm to decide whether a target model is a surrogate or a reference model for a given source model, i.e., whether there is a link between the target and the source model or not.
Linkability has several applications. Assume a publicly available model has a known vulnerability, e.g., a backdoor [9], or bias [10]. Linkability can determine whether another model has been derived from that model and likely carries over the vulnerability or bias before the use of the model may have caused harm. Assume an MLaaS provider wants to prevent its users from redistributing the source model, e.g., through a contractual usage agreement. Since the provider has to grant access, it cannot prevent users from extracting models. Using linkability, the provider can determine whether another model has been derived from its model.
We propose fingerprinting as a method that provides linkability. Watermarking of DNNs [11] also captures a notion of linkability. Watermarking embeds a secret message into a model that is later extractable using a secret key. A (target) model is linked to the marked (source) model if its extracted message matches the embedded one. Fingerprinting does not embed a secret message during training (which would modify the model and potentially impact its accuracy) but extracts an identifying code (fingerprint) from an already trained model. Different from watermarking schemes, our fingerprint is specifically designed to withstand distillation (and related model extraction) attacks. Distillation [5] is a very powerful method to derive a target model, since the only information reused are the classification labels of the source model. This implies that the transfer of a fingerprint (or watermark) can only be achieved via those classification labels. Claimed security properties of existing watermarking schemes [12], [13] have been broken by distillation attacks [14]. Other watermarking schemes [15] make weaker claims, but explicitly limit the number of queries an adversary can make to exclude distillation attacks. Hence, there exists no scheme that provides linkability that has been successfully tested against distillation attacks. For more details on related work, we refer the reader to Section VII.
We exploit the transferability of adversarial examples [16] to address this problem. Adversarial examples [7] are inputs with small modifications that cause misclassification of the input. Given two models for the same classification task, an adversarial example can be found in one model and tested on the other. An adversarial example is called transferable if it is misclassified in both. Targeted adversarial examples are adversarial examples where the target class of the misclassification is fixed.
In this paper, we hypothesize that there exists a subclass of targeted, transferable, adversarial examples that transfer to surrogate models, but not to reference models. We call these adversarial examples conferrable. Conferrable adversarial examples can be used to provide linkability. Any conferrable example found in the source model should produce the same misclassification in a surrogate model, but a different one in reference models. Linkability via conferrable adversarial examples can withstand a very powerful attacker. The attacker may have white-box access to the source model, i.e., all parameters and its architecture, but the verifier of linkability only needs black-box access to the target model. That is, linkability is still feasible even if the attacker only deploys the target model to a remote server with API access.
We empirically study the existence of conferrable adversarial examples for DNNs trained on the CINIC [17] and ImageNet32 [18] datasets. Our study shows that known adversarial attacks such as Projected Gradient Descent (PGD) [19], DeepFool [20], Iterative Gradient Method (IGM) [21] and Carlini-Wagner (CW-L2) [22] have low success rates for generating conferrable adversarial examples given only the source model. We propose a new ensemble-based approach called C-IGM to generate specifically conferrable adversarial examples with an improved success rate, which can be used as fingerprints. For CINIC, our fingerprints are retained at a rate of at least 92% in surrogate models of similar accuracy. For ImageNet32, the fingerprint retention is at worst 76%, for a model that has 2.6% less test accuracy than our source model.
A. Contributions
This work contributes conferrable adversarial examples, a new subclass of targeted transferable adversarial examples, together with an ensemble-based method to generate them and a fingerprinting method for deep neural networks built on them. We show that our fingerprint is robust to distillation attacks. We share all models and fingerprints created for this project and give full access to the source code for research use.
II. BACKGROUND
The background comprises deep learning and distillation, model extraction and adversarial attacks. Then we define the problem we address.
A. Neural Networks
A neural network classifier is a function M(·) : X → Y that assigns to each input from X ⊆ R^d a likelihood of belonging to each of K ∈ N classes, with outputs Y ⊆ R^K. It is a sequence of layers f_i (i ∈ {1, .., L}) in which each layer implements a linear function followed by a non-linear function called the activation function. A neural network is called deep if it has more than one hidden layer. Hidden layers have weights and bias parameters used to compute that layer's neuron activations. The output layer f_L(·) typically implements a softmax activation function σ(·) that outputs confidence scores for all classes normalized as probabilities.
$$\sigma(f_L(x))_i = \frac{\exp(f_L(x)_i)}{\sum_j \exp(f_L(x)_j)} \quad (1)$$
Training a neural network requires the specification of a differentiable loss function that is optimized by gradient descent on all trainable weights and biases. One such loss function is the cross-entropy loss H for some ground truth y ∈ Y with respect to the model's prediction.
$$H(y, f_L(x)) = -\sum_{0 \le k < K} y_k \cdot \log(f_L(x)_k) \quad (2)$$
A popular choice to implement the gradient descent is the Adam [23] optimizer. We define two functions, one for training a classifier given an oracle O and one for assigning labels y ∈ Y to some input D ⊆ X by a model.
• Classify(M, D) returns a vector denoting the confidence score per class of a classifier M on a set of inputs D ⊆ X. We abuse notation and write M(D) instead of Classify(M, D) in inline paragraphs.
• Train(O, D) returns a classifier M trained on the dataset D ⊆ X and labels O(D) ⊆ Y. In practice, Train(O, D) is almost guaranteed to output two different models for the same dataset D even when all hyperparameters are the same, due to randomness in the training function, e.g., the random initialization of weights.
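To make Eqs. (1) and (2) concrete, here is a minimal NumPy sketch; the function names and toy values are our own, not from the released implementation:

```python
import numpy as np

def softmax(logits):
    """Softmax activation of Eq. (1): turns logits f_L(x) into class probabilities."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(y, probs, eps=1e-12):
    """Cross-entropy loss of Eq. (2) for a one-hot ground truth y."""
    return -np.sum(y * np.log(probs + eps), axis=-1)

# Classify(M, x) at the softmax level for a toy three-class model output
logits = np.array([2.0, 0.5, -1.0])      # hypothetical f_L(x)
probs = softmax(logits)
y = np.array([1.0, 0.0, 0.0])            # one-hot label from the oracle O
print(probs, cross_entropy(y, probs))
```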
B. Distillation
Distillation has been proposed by Bucilu et al. [24] and was generalized by Goodfellow et al. [5] as a way to compress knowledge from a source classifier into a less complex target classifier. The problem when training only a target classifier is that hard labels capture no information about class similarities beyond the ground truth class for each input. When the target classifier is trained on hard labels, it has been found to generalize worse to unseen examples [5]. The idea in distillation is to use a complex model trained on the hard labels to create soft labels that also assign probabilities to other classes than the maximum class prediction, which enhances knowledge transfer between the models. For deep neural networks, generating soft labels is done by incorporating a distillation temperature T into the softmax layer that re-scales the logits of the source and target model during training. Higher temperatures T shrink the absolute distance between the logits, which leads to a reduced difference in the softmax activation between classes. For T → ∞ the softmax output for each class converges to 1/K for K classes. The softmax of the source and target model are changed as follows.
$$\sigma'(f_L(x))_i = \frac{\exp(f_L(x)_i / T)}{\sum_j \exp(f_L(x)_j / T)} \quad (3)$$
We refer to a target model as a surrogate model when it has been derived from a source model through knowledge distillation with any distillation temperature T. Any other model trained independently of the source model is called a reference model. We are particularly interested in distillation attacks with a distillation temperature of T = 1, because an adversary cannot control the temperature T for an already trained source model. Unlike the original form of distillation, we do not require a lower complexity surrogate model, and it has been shown that surrogate models perform better with higher complexity [25]. As confirmed by related work [14], distillation attacks are a powerful class of removal attacks against linkability for deep neural networks, and we evaluate our fingerprinting method against them.
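To illustrate Eq. (3), a small NumPy sketch of the temperature-scaled softmax shows how higher temperatures soften the label distribution (names and values are illustrative):

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax of Eq. (3); larger T gives softer labels."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([4.0, 1.0, -2.0])      # hypothetical logits of the source model
for T in (1.0, 5.0, 20.0):
    print(T, softmax_T(logits, T))       # probabilities approach 1/K as T grows
```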
C. Model Extraction
Model extraction attacks are a form of distillation, but the source model is not controlled by the adversary and can only be accessed as a black-box, i.e. by its input-output behavior. The attack output is a surrogate model with white-box access for the adversary, meaning that all parameters of the surrogate model are accessible. A challenge in model extraction is that training data may be unavailable, or the number of queries made to the source model is limited. Like distillation, model extraction attacks are a threat to linkability. In this paper, we assume a powerful adversary with white-box access to the source model and access to potentially unlimited unlabeled data from the same data distribution on which the source model has been trained.
There are two types of model extraction: recovery- and learning-based approaches. Recovery-based extraction aims to reconstruct a surrogate model highly similar to the source model [26], while learning-based extraction aims at creating a high-accuracy model regardless of the similarity to the source model [8], [27], [28]. We focus only on learning-based extraction because our adversary wants to defeat linkability, for which high similarity to the source model is a disadvantage.
D. Distillation Attacks
We evaluate four distillation attacks for removing linkability: Retraining, Jagielski, Knockoff, and Papernot. All methods are given white-box access to the source model and access to a substitute dataset D2 labeled by the source model that is drawn from the same distribution as the source model's training data D1. The main advantage of white-box access to the source model is that no cost is associated with querying the source model for labels.
• Retraining is distillation with a temperature of T = 1, where a target model is trained on the substitute dataset labeled with the unmodified softmax activations of the source model.
• Jagielski et al. [26] propose a simple distillation attack by treating the softmax output obtained from the source model as the model's logits and post-processing them with a distillation parameter T to obtain soft labels. For an input x ∈ D and a source model M, the soft labels M'(x) can be computed as follows.
$$M'(x)_i = \frac{\exp(M(x)_i^{1/T})}{\sum_j \exp(M(x)_j^{1/T})} \quad (4)$$
• Knockoff Nets as proposed by Orekondy et al. [28] leverage a transfer set to train the surrogate model. A transfer set may have more and different classes than the training set of the source model. We use the random selection approach presented by the authors and retrain on the transfer set with nine times more classes than the source model's training data.
• Papernot [27] proposes training a surrogate model on a substitute dataset labeled by the source model and then using the Jacobian to extend the dataset by more data points close to the decision boundary of the surrogate model. The source model labels these new inputs and the surrogate model is trained on the extended dataset.

Distillation attacks are a threat to the model provider even though the adversary has to invest additional computational resources to retrain another model. Data collection and cleaning, hyperparameter tuning, and model testing are associated with much higher costs than just model training. Specifically, small to medium-sized models are generally at risk of model extraction, while large models such as GPT-2 [29] may be distilled only by sufficiently motivated and funded adversaries. Our experiments confirm that distillation attacks are the most powerful known attacks to break linkability.
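The soft-label post-processing of Eq. (4) above is easy to sketch; the snippet below follows the equation as reconstructed and is illustrative only, not the attack implementation used in our experiments:

```python
import numpy as np

def jagielski_soft_labels(probs, T=4.0):
    """Soft labels from source-model softmax outputs, following Eq. (4)."""
    scaled = np.exp(probs ** (1.0 / T))  # treat the outputs as logits raised to 1/T
    return scaled / scaled.sum(axis=-1, keepdims=True)

probs = np.array([[0.90, 0.08, 0.02]])   # hypothetical prediction of the source model
print(jagielski_soft_labels(probs))      # noticeably softer label distribution
```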
E. Adversarial Examples
Deep neural networks are vulnerable to adversarial attacks, where a correctly classified input is perturbed to force a misclassification. These perturbed inputs are called adversarial examples [30] and they have been shown effective in practice against systems based on deep neural networks, like traffic sign recognition systems [31], malware detection [32] or speech recognition [33]. The perturbation is typically constrained so that the similarity difference between the original image and the perturbed image is bounded. This allows creating imperceptibly small perturbations and, in practice, ensures that the ground truth of the adversarial example, given by an oracle's classification, does not change from the original image.
A targeted adversarial example is crafted for a target classifier M̂ and a target class t not equal to the ground truth y. For some similarity measure d : X² → R and original input x ∈ X, the task is to find a perturbation δ where d(x, x+δ) ≤ ε for some threshold ε ∈ R. Formally, a targeted adversarial attack succeeds if the following condition holds.
$$\hat{M}(x + \delta) = t \quad \text{s.t.} \quad d(x, x + \delta) \le \epsilon \quad (5)$$
Adversarial attacks in the literature often use the q-norm for q = 0, 1, 2, ∞ so that d(x, x + δ) = ||δ|| q , but other similarity measures are explored at least in the image domain, such as structural similarity [34], DSSIM [35] and a discriminator deep neural network [36]. Unless otherwise specified, we generate adversarial examples with the L ∞ -norm as our similarity measure, as Warde-Farley and Goodfellow argue for the infinity norm being the optimal distance metric [37]. Our work is not limited to this norm and we also show results for Carlini-Wagner [22] (CW-L 2 ) and Projected Gradient Descent (PGD) [19] in the L 2 -norm. Adversarial examples that are adversarial to more than one model have a property that is called transferability [30], [38], [39]. There is strong evidence that adversarial examples are not singular points scattered randomly in the decision space, but instead, they span high-dimensional subspaces [38], [39]. Also, it has been found that neural networks learn similar decision boundaries and thus are vulnerable to the same adversarial examples [30]. In that sense, adversarial examples that exist in the intersection between these high-dimensional subspaces of adversarial examples for many different models are transferable. It was shown that adversarial examples misclassified with high confidence are more likely to be transferable [39]. Targeted transferability has been studied by Liu et al. [16], who devised an ensemble-based adversarial attack for improved transferability.
Our work proposes a new property for adversarial examples called conferrability, which is a subclass of targeted transferable adversarial examples. Conferrable adversarial examples transfer from a source model only to its surrogates obtained through knowledge distillation, but not to any reference models trained independently of the source model. We compare our approach that generates specifically conferrable adversarial examples to known adversarial attacks.
F. Adversarial Attacks
Adversarial attacks are algorithms that generate adversarial examples for a given target model M̂. An adversarial attack is typically optimized to generate adversarial examples with certain desired properties, such as enhanced transferability, a high success rate for the attack, or low computation time. We focus on untargeted adversarial attacks for which we measure the targeted transferability of the generated adversarial examples. Let x and x + δ denote the original input and the adversarial example respectively, J the loss function for the target model M̂, and ε a maximum perturbation threshold so that ||δ||_q ≤ ε for some q.
• Fast Gradient Method (FGM) [30]: A one-step adversarial attack originally proposed under the name "Fast Gradient Sign Method" for the infinity norm L∞. For a norm q = 1, 2, the update rule can be written as follows.

$$x' = x - \epsilon \cdot \frac{\nabla J(x)}{\|\nabla J(x)\|_q} \quad (6)$$
• Iterative Gradient Method (IGM) [21]: An extension of FGM applied iteratively with step size α that clips the perturbation back to the ε-ball around the original input x = x0 when it grows too large (a code sketch of this update follows at the end of this subsection).

$$x_i = x_{i-1} - \mathrm{clip}_{\epsilon}\big(\alpha \cdot \nabla J(x_{i-1})\big) \quad (7)$$
• DeepFool [20]: DeepFool iteratively finds adversarial examples by linearizing the decision space of the target classifier M̂ around the original input x and greedily selecting the closest hyperplane to any other class than the ground truth class. An orthogonal projection of the intermediate input x_i onto that hyperplane is computed and the algorithm repeats until a misclassification occurs. For the class l with the closest hyperplane and J(x, t) representing the loss of classifier M̂ on x with target label t, the update rule can be written as follows for any q-norm with q ≥ 1 and ⊙ representing element-wise multiplication.

$$w_l = \nabla J(x_{i-1}, y) - \nabla J(x_{i-1}, l) \quad (8)$$

$$x_i = \frac{|\hat{M}(x_{i-1})|}{\|w_l\|_q^q} \, |w_l|^{q-1} \odot \mathrm{sign}(w_l) \quad (9)$$
• Projected Gradient Descent (PGD) [19]: The same as IGM [21] but with random starting points around the original images. PGD is executed several times with different starting points to avoid getting stuck in local minima.
• Carlini-Wagner (CW-L2) [22]: An iterative approach that operates on the logits f̂_L of a target classifier M̂. They optimize the following function.

$$\text{Minimize} \quad \|\delta\|_q + c \cdot \hat{f}_L(x + \delta) \quad (10)$$
A constant parameter c is used to balance the gradient magnitude for both summands and is selected with a binary search. The first modification for efficient optimization is to change the variables for the perturbation δ with a hyperbolic tangent. The second modification is to add an objective function onto the logits that is easier to optimize. They use the Adam [23] optimizer to find adversarial examples. Liu et al. [16] devised an optimization problem for an ensemble method that enhances transferability. Given ensemble weights α, models with softmax outputs M̂_1, ..., M̂_n, a target label t ∈ Y, and some similarity measure d, their optimization target can be written as follows.

$$\arg\min_{\delta} \; -\log\Big(\sum_{i=1}^{n} \alpha_i \hat{M}_i(x + \delta) \cdot t\Big) + \lambda \, d(x, x + \delta) \quad (11)$$
There is a considerable amount of research on achieving adversarial robustness, e.g. [40]- [43], both from the machine learning and security community. We are not aware of any method to fully protect against adversarial examples. In our work, we show that the presented adversarial attacks alone are insufficient to generate conferrable adversarial examples. We construct an ensemble-based method that uses these adversarial attacks to generate specifically conferrable adversarial examples.
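To illustrate the iterative update of Eq. (7), here is a minimal TensorFlow sketch of a targeted, projected IGM-style attack using the common sign-of-gradient variant for the infinity norm; it is a simplification under our own naming, not the Adversarial Robustness Toolbox implementation used in our experiments:

```python
import tensorflow as tf

def igm_attack(model, x, t_onehot, eps=0.25, alpha=0.01, steps=100):
    """Targeted, projected IGM-style attack in the spirit of Eq. (7).

    model: Keras classifier with softmax outputs; x: batch of inputs in [0, 1];
    t_onehot: one-hot target labels. Uses the sign of the gradient (L-infinity
    flavor) and clips the total perturbation back into the eps-ball around x.
    """
    x0 = tf.identity(x)
    x_adv = tf.identity(x)
    loss_fn = tf.keras.losses.CategoricalCrossentropy()
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = loss_fn(t_onehot, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        x_adv = x_adv - alpha * tf.sign(grad)                  # step towards t
        x_adv = x0 + tf.clip_by_value(x_adv - x0, -eps, eps)   # stay in the eps-ball
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)              # keep pixels valid
    return x_adv
```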
III. PROBLEM STATEMENT
In this section, we define the problem and the capabilities we assign to the adversary and defender. Our defender has exclusive access to the training dataset D1 and an oracle O that provides the ground truth labels. The adversary has exclusive access to an unlabeled substitute dataset D2 that does not overlap with the defender's dataset D1. We define the following three types of models:
• Source model M ← Train(O, D1), trained by the defender on ground truth labels.
• Surrogate model S ← A(M, D), derived from the source model M by an adversary A on some dataset D, e.g., through a distillation attack.
• Reference model R ← Train(O, D), trained on ground truth labels independently of the source model M on some dataset D.
The goal of proving linkability for a source model M is to assess whether a suspect model M_S belongs to the set of reference or surrogate models for a given source model. To this end, we define two roles, namely the defender and the adversary. The defender trains and deploys a source model that is given to an adversary with white-box access. The adversary performs a removal attack against linkability on the source model using the adversary's substitute dataset D2, which outputs a surrogate model of the source model. This surrogate model is deployed by the adversary with black-box access to any user. The defender is made aware of the suspect model and starts the verification procedure. This involves querying the suspect model for classifications on a set of inputs. The target model's predictions are compared by the defender with the source model's predictions, and if the matching rate is above a threshold, the suspect model is classified as a surrogate of the source model. The defender wins if the matching rate of the suspect model with the source model on the inputs is larger than the threshold if and only if the suspect model truly is a surrogate of the source model. This is captured by the following security game for verification thresholds θ1, θ2 ∈ [0, 1]:
1) The defender trains a source model M ← Train(O, D1).
2) The defender selects a set of inputs F ⊆ X.
3) Obtain S0 ← A(M, D2) and R0 ← Train(O, D2).
4) The defender wins if:
Pr_{x∈F}[Classify(S0, x) = Classify(M, x)] ≥ θ1 and Pr_{x∈F}[Classify(R0, x) = Classify(M, x)] ≤ θ2
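The winning condition in step 4) reduces to a simple matching-rate computation over the chosen inputs F. A minimal NumPy sketch (with our own function names and hypothetical prediction arrays) could look like this:

```python
import numpy as np

def matching_rate(suspect_preds, source_preds):
    """Pr_{x in F}[Classify(S, x) = Classify(M, x)]: fraction of inputs in F
    on which the suspect model's top-1 class agrees with the source model."""
    return float(np.mean(np.argmax(suspect_preds, axis=-1) ==
                         np.argmax(source_preds, axis=-1)))

theta_1, theta_2 = 0.9, 0.7  # thresholds derived later in Section VI
# The defender wins if matching_rate(surrogate_preds, source_preds) >= theta_1
# and matching_rate(reference_preds, source_preds) <= theta_2.
```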
As demonstrated by Hitaj et al. [44], the adversary may try to evade the verification and return random labels if he detects the classification requests for verification. We address this problem in our solution by showing that it is feasible to generate conferrable adversarial examples with perturbations as small as ε = 0.05 in the infinity norm L∞ for input values ranging from 0 to 1. The system overview and the interactions between the defender and adversary are illustrated in Figure 2.
A. Adversary's Capabilities
Our adversary has the goal to create a high-accuracy surrogate model from white-box access to the source model M that can not be linked to the source model by the defender. He has access to potentially unlimited substitute data D 2 without labels that may come from the same distribution as the defender's training data D 1 . Unless otherwise specified, there is no overlap between the two datasets, i.e. D 1 ∩ D 2 = ∅.
In accordance with Kerckhoffs' principle, the adversary has full knowledge of the algorithms used by the defender to choose the inputs F. We model our adversary as a probabilistic polynomial-time (PPT) algorithm. The adversary does not have access to the specific inputs F chosen by the defender, the oracle O, or any other reference model for the same task. In our evaluation, we include the case when the adversary has access to a small portion of clean labeled data. The adversary cannot differentiate between a verifier and any other benign user based on identity.
B. Defender's Capabilities
The defender has white-box access to the source model and has access to labeled training data D 1 . Surrogate models can be created by distilling the source model with the Retraining attack outlined in Section II-B, i.e. the defender creates surrogates S = {S i ← Train(M, D 1 )}. None of the other described distillation attacks are known to the defender. The defender has no access to the substitute dataset used by the adversary D 2 or the adversary's model architecture and parameters. The defender has to decide the linkability of a suspect model with a limited number of queries through a black-box API. The output granularity of the suspect model available to the defender during the verification may be truncated to only the class with the highest likelihood.
IV. CONFERRABLE ADVERSARIAL EXAMPLES
In this section, we motivate and define conferrable adversarial examples, present formulas to calculate conferrability scores, and present an optimization function for adversarial attacks that leads to conferrable adversarial examples upon minimization.
A. Motivation
Conferrable adversarial examples are a subclass of targeted transferable adversarial examples that exclusively transfer to the set of surrogate models for a source model, but not to any reference model. Surrogate models are all those models that can be derived from the source model through knowledge distillation, as explained in Section III. The intuition for the existence of conferrable adversarial examples is that misclassifications exhibited by a source model are carried over to a surrogate model with higher probability than when a new reference model is trained independently from the source model. Our conjecture for the existence of conferrable adversarial examples is based on the following two premises.
1) Targeted transferable adversarial examples exist.
2) Independently trained models learn sufficiently different decision boundaries around some inputs, so that misclassifications of the source model confer to its surrogates but not to reference models.
Related work has empirically shown the existence of targeted transferable adversarial examples [16] for a set of reference models. In our experiments, we show that targeted transferable adversarial examples also exist for the source model and its surrogate models. We motivate the second premise as follows. Assume a binary classification problem with two input variables that separates the classes cats and dogs. Given the vast amount of different states a model can reach after training, it is highly likely that models learn sufficiently different decision boundaries around some inputs. Then the surrogate model produces different predictions for these inputs compared to other reference models. The source model cannot teach any surrogate model the correct class for these examples, and the source model's errors are conferred on the surrogate with high probability. A surrogate model that is highly similar to the source model is likely to misclassify these inputs, and an adversarial perturbation into this decision space leads to higher misclassification probability for surrogate models compared to reference models. The concept of conferrability is illustrated in Figure 3 for the decision boundaries of one representative surrogate and one reference model.
B. Definitions
We focus solely on targeted transferable adversarial examples, but we do not put constraints on the chosen target class, i.e. the specific target class for which the input is transferable does not matter. For this reason, we also evaluate untargeted adversarial attacks to generate targeted transferable adversarial examples. The goal is to produce targeted transferable adversarial examples that transfer to the surrogates but not to the reference models, to separate surrogate from reference models. For targeted adversarial examples, we require a specific wrong label for each conferrable adversarial example by the surrogate models, which does not match with a random label by the reference models with high probability. Targeted transferability is important because only targeted adversarial examples can separate whether a target model is a low-accuracy reference model or a high-accuracy surrogate model. In the non-targeted case, a reference model that randomly guesses classes is going to assign wrong classes relative to the ground truth for most of the conferrable adversarial examples, which would match the expected predictions of a surrogate model.
For a given adversarial attack, we can compute the success rates for the three measures adversarial, transferable and conferrable, as described in Section II-E. A perturbed input is adversarial with some target label t, when a target model M classifies the example with the target label.
$$\mathrm{Adv}(M, x) = \mathbb{1}_{M(x) = t} \quad (12)$$
The transferability score for a single perturbed input is computed as an average over the classifications of a set of models M on that perturbed input for which this perturbed input is adversarial. Throughout the paper, we compute the transferability score only over the reference models because the surrogate models are dependent on each other and this dependence impedes comparison to related work. Transferability is maximized when all models classify the perturbed input with the target label.
$$\mathrm{Transfer}(\mathcal{M}, x) = \frac{1}{|\mathcal{M}|} \sum_{M' \in \mathcal{M}} \mathrm{Adv}(M', x) \quad (13)$$
We calculate conferrability scores over the set of surrogate S and reference models R. Conferrability is maximized when the perturbed input is transferable to the surrogate models but not transferable to the reference models.
$$\mathrm{Confer}(\mathcal{S}, \mathcal{R}, x) = \mathrm{Transfer}(\mathcal{S}, x) \cdot (1 - \mathrm{Transfer}(\mathcal{R}, x)) \quad (14)$$
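Eqs. (12)-(14) translate directly into code. The following NumPy sketch (with toy three-class predictions of our own) computes all three scores for a single perturbed input and target label t:

```python
import numpy as np

def adv(pred, t):
    """Eq. (12): indicator that a model's prediction equals the target label t."""
    return float(np.argmax(pred) == t)

def transfer(preds, t):
    """Eq. (13): average of Eq. (12) over a set of models' predictions."""
    return float(np.mean([adv(p, t) for p in preds]))

def confer(surrogate_preds, reference_preds, t):
    """Eq. (14): high transfer to surrogates and low transfer to references."""
    return transfer(surrogate_preds, t) * (1.0 - transfer(reference_preds, t))

# Toy three-class example with target label t = 2 (all predictions hypothetical)
t = 2
s_preds = [np.array([0.1, 0.2, 0.7]), np.array([0.0, 0.3, 0.7])]
r_preds = [np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.7, 0.1])]
print(confer(s_preds, r_preds, t))  # 1.0: fully conferrable in this toy case
```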
Generating conferrable adversarial examples requires finding a good approximation to the following non-convex optimization problem. For a source model M , an input from the dataset x ∈ D, a set of surrogate S and reference models R and a target label t ∈ Y, the goal is to find a valid perturbation δ ∈ ∆ that produces an adversarial example for the source model and its surrogates, but not for any reference models.
1) M(x + δ) = t
2) Pr_{S∈S}[S(x + δ) = t] ≈ 1
3) Pr_{R∈R}[R(x + δ) ≠ t] ≈ 1
subject to d(x, x + δ) ≤ ε. Any adversarial example that satisfies these requirements is conferrable.
C. Success Rates of Known Adversarial Attacks
In this section, we evaluate the success rate of known adversarial attacks for generating adversarial examples that are transferable and conferrable. We evaluate the algorithms summarized in Section II-F, namely IGM [21], PGD [19], DeepFool [20], and CW-L2 [22]. Our ensemble-based approach is optimized with IGM, and throughout the paper we refer to our attack as conferrable IGM (C-IGM). We use the parameters summarized in Table I for all adversarial attacks. The source model is trained on CINIC [17] with a ResNet20 [45] architecture and reaches 76.96% validation accuracy. All surrogate and reference models are trained on the adversary's dataset D2, which is a subset of the 85.000 examples from CINIC that does not overlap with the training data D1 of the defender. Throughout the paper, these models are referred to as testing models, which we use to compute unbiased success rates for transferability and conferrability. More details about the testing models are given in Table III in Section VI. We use Keras [46] for machine learning and the Adversarial Robustness Toolbox [47] to generate adversarial examples. We randomly select 300 images correctly classified by the source model as inputs for the attack and select only successful adversarial examples from the attack outputs for varying perturbation distances. We measure transferability only on the testing reference models, whereas conferrability is measured also on the surrogate models.

We select one conferrable adversarial example generated with IGM for ε = 0.25 and evaluate the robustness of that adversarial example towards random perturbation. Our goal is to assess whether the source model's prediction of the conferrable adversarial example changes upon modifications and how the decision space of the reference models compares to the decision space of the surrogate models. We compute a random orthogonal matrix Q using QR-decomposition of the perturbation δ and add it to the original input as follows.
$$f(a, b) = x + (a \cdot \delta) + (b \cdot Q) \quad (15)$$
We take the maximal class prediction of the source model M on the input f(a, b) and plot the function f for a, b ∈ [−1.25, 1.25] in Figure 5, using one color per class, with the original input in the center. The conferrable adversarial example is robust for both parameters a, b, and the decision space around the example looks different between the surrogate and reference models, even though two reference models agree with the surrogate models on the classification of the original input x.

[Figure 5: Decision space around a conferrable adversarial example generated with IGM for ε = 0.25, for the surrogates (top row) and the reference models (bottom row). The center is the original input; the x-axis denotes the value by which the image is modified towards the conferrable adversarial example, and the y-axis is a random orthogonal projection of that perturbation.]
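The sampling behind Figure 5 can be sketched as follows, assuming model is a Keras classifier returning softmax outputs; we derive an orthogonal direction from a QR-decomposition as described above (all function and variable names are ours):

```python
import numpy as np

def decision_space_grid(model, x, delta, steps=51, lim=1.25):
    """Sample max-class predictions on the plane of Eq. (15), spanned by the
    adversarial perturbation delta and a random direction q orthogonal to it."""
    d = delta.flatten()
    rnd = np.random.randn(d.size)
    # QR-decomposition of [delta, random] yields q, a direction orthogonal to delta
    q_mat, _ = np.linalg.qr(np.stack([d, rnd], axis=1))
    q = q_mat[:, 1].reshape(delta.shape)
    grid = np.zeros((steps, steps), dtype=int)
    for i, a in enumerate(np.linspace(-lim, lim, steps)):
        for j, b in enumerate(np.linspace(-lim, lim, steps)):
            probe = x + a * delta + b * q           # Eq. (15)
            pred = model(probe[None, ...])          # batch of one input
            grid[i, j] = int(np.argmax(pred))
    return grid  # one class id per grid cell, e.g., for plt.imshow
```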
D. Generating Conferrable Adversarial Examples
In this section, we propose our method for generating specifically conferrable adversarial examples leveraging known adversarial attacks. The idea is to create an ensemble of surrogate and reference models with one shared input and one output, which computes conferrability scores per class. We define a loss function that is minimized when the ensemble predicts a maximum conferrability score. For optimization,
we use the known adversarial attack IGM to update only the input layer while the ensemble is frozen. We refer to this optimization approach as C-IGM.
We compose an ensemble model M E that computes average voting on the union of the predictions from the source model and its surrogates and on the predictions from the reference models.
$$\mathrm{Surr}(M, \mathcal{S}, x) = \frac{1}{|\{M\} \cup \mathcal{S}|} \sum_{S \in \{M\} \cup \mathcal{S}} \mathrm{Classify}(S, x) \quad (16)$$

$$\mathrm{Ref}(\mathcal{R}, x) = \frac{1}{|\mathcal{R}|} \sum_{R \in \mathcal{R}} \mathrm{Classify}(R, x) \quad (17)$$
As a next step, we compute the conferrability score to obtain the output of M E . We define the vector of all ones as 1 ∈ Y.
$$M_E(M, \mathcal{S}, \mathcal{R}, x) = (\mathbb{1} - \mathrm{Ref}(\mathcal{R}, x)) \odot \mathrm{Surr}(M, \mathcal{S}, x) \quad (18)$$
Then we define a linear loss for the prediction of the ensemble on input x with target label t.
$$\mathcal{L}_E = -M_E(x) \cdot t \quad (19)$$
Note that the output over all classes of our ensemble is not a probability distribution anymore, but the confidence score for only the target label remains a probability. Our loss function sets all values, but the target confidence score to zero and considers only the prediction of the ensemble for the target class. The total loss is simply the sum of all losses. Depending on the adversarial attack, we use the same constraints to limit the perturbation magnitude to compare our ensemble approach with an adversarial attack that is executed on a single model.
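Putting Eqs. (16)-(19) together, a simplified Keras reconstruction of the frozen ensemble and its loss could look as follows; this is our own sketch under assumed names, not the released implementation (the dropout used against overfitting in Section VI is omitted):

```python
import tensorflow as tf

def compose_ensemble(source, surrogates, references):
    """Eqs. (16)-(18): frozen ensemble that outputs per-class conferrability.

    source / surrogates / references are trained Keras models with softmax
    outputs; only the input (the adversarial example) is optimized later.
    """
    inp = tf.keras.Input(shape=source.input_shape[1:])
    surr = tf.keras.layers.Average()([m(inp) for m in [source] + surrogates])
    ref = tf.keras.layers.Average()([m(inp) for m in references])
    out = (1.0 - ref) * surr                       # Eq. (18), element-wise
    model = tf.keras.Model(inp, out)
    model.trainable = False                        # freeze all ensemble weights
    return model

def ensemble_loss(ensemble, x, t_onehot):
    """Eq. (19): minimized when conferrability for the target class is maximal."""
    return -tf.reduce_sum(ensemble(x) * t_onehot, axis=-1)
```

An adversarial attack such as IGM can then be run against this composed model, updating only the input while the ensemble stays fixed.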
V. FINGERPRINTING
A black-box verifiable fingerprint for a source model M is a set of inputs that surrogates of M label similarly as the source model while all other reference models assign different labels. In our threat model, we consider surrogates that have been obtained by knowledge distillation of the source model. The existence of such a fingerprint is based on the assumption that there is some unique bias in the source model that leads to different predictions than the majority of reference models on some inputs and that this bias transfers over to surrogate models. We specify requirements for fingerprints and present our fingerprinting method that is based on conferrable adversarial examples.
A. Definitions
A fingerprinting algorithm for deep neural networks consists of two steps: extraction and verification. The extraction step has access to a source model M, some training dataset, and an oracle to label that data. The output of the extraction is the fingerprints F ⊆ X and their verification keys vk ⊆ Y for the source model. In our approach, the verification keys are the classifications of the fingerprints by the source model. The verification step determines, for a given model and key, whether that model belongs to the set of surrogates for the source model. Hitaj et al. [44] show that an adversary can evade the black-box verification process by returning random labels when the fingerprints are easily separable from benign inputs. We specify a non-evasiveness property, which ensures that it is not possible to train a classifier that separates benign data samples from fingerprints. Such a blind fingerprint is desirable in a public verification setting because the fingerprint can be reused multiple times. We say fingerprinting satisfies non-evasiveness if the defender has a very high probability of winning a game in which the adversary tries to separate the fingerprints F from a dataset of benign samples D. In conclusion, we require correctness and non-evasiveness for linkability. In the next section, we present our fingerprinting method.
B. Fingerprinting by Conferrable Adversarial Examples
Our fingerprinting algorithm uses our ensemble method to generate conferrable adversarial examples on the defender's training dataset D1 that are used as the fingerprint. In the extraction step, we create the training models, build the ensemble as described in Section IV-D by invoking the function Compose(M, S, R), and run an adversarial attack A. We chose to use IGM as the adversarial attack because it is relatively fast and leads to the best results out of all methods tested in the case where only the source model is attacked, as illustrated by Figure 4. After filtering only the examples with a conferrability score above some threshold parameter τ (τ = 1 in our case), we return the conferrable adversarial examples as fingerprints and the predictions of the source model on the fingerprints as verification keys. The extraction procedure is summarized by Algorithm 2.
Our verification sends the fingerprints to the suspect model and gets back a set of verification keys vk . We access only the maximum class prediction of the verification keys to compute the fingerprint retention in the suspect model relative to the original verification keys vk. If and only if the retention is higher than a threshold θ, the verification returns 1, meaning that the suspect model has been verified as a surrogate. Our verification procedure is summarized by Algorithm 1.
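The verification step amounts to a few lines of code. Below is a sketch in the spirit of Algorithm 1, assuming the suspect model returns softmax outputs for the batch of fingerprints F and that vk holds the source model's predictions; the threshold default follows the value θ = 0.9 derived in Section VI:

```python
import numpy as np

def verify(suspect_model, fingerprints, vk, theta=0.9):
    """Query the suspect model on the fingerprints and compare its top-1
    predictions with the verification keys vk of the source model."""
    preds = suspect_model(fingerprints)                 # black-box queries
    vk_suspect = np.argmax(preds, axis=-1)
    retention = float(np.mean(vk_suspect == np.argmax(vk, axis=-1)))
    return int(retention >= theta), retention          # 1 iff verified surrogate
```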
VI. EXPERIMENTS
Our experiments are split into two parts. First, we evaluate the success rates of our ensemble-based algorithm C-IGM for generating conferrable adversarial examples. Then, we show that our proposed fingerprint meets the correctness and non-evasiveness requirements we defined earlier.
A. Experiments Setup
We perform our experiments on the CINIC [17] dataset with 10 classes and a subset of 100 classes from the ImageNet32 [18] dataset. We choose these datasets over CIFAR-10 [48] and CIFAR-100 [49] because they contain more images per class, which allows evaluating the case when the adversary and defender have non-overlapping training data. For both datasets, we train a ResNet20 [45] source model and derive the surrogate models through the Retraining attack and the reference models by training on the ground truth data, as described in Section II-C. Due to limited access to computational resources, we are restricted to datasets with input sizes of 32 × 32 pixels and three color channels. All inputs are normalized to have 0 mean and 1 standard deviation per channel. We implemented the machine learning in Keras [46] and the adversarial attacks are reused from the Adversarial Robustness Toolbox [47]. Training of the models is done on one Tesla K10 GPU.

[Algorithm 2 (extraction), excerpt:
5: D_C ← {x ∈ D | arg max_i M(x)_i = arg max_j O(x)_j}  ▷ Filter correctly predicted inputs of the source model
6: M_E ← Compose(M, S, R)  ▷ Build the ensemble model
7: F ← A(M_E, D_C)  ▷ Perform adversarial attack
8: F ← {x ∈ F | Confer(S, R, x) ≥ τ}  ▷ Filter examples by their conferrability score
9: return F, M(F)]

The CINIC and ImageNet32 datasets can be described as follows.
• CINIC [17]: A resized subset of ImageNet [50] with 10 classes and inputs of size 32 × 32 pixels with three color channels. There are 180.000 training images and 90.000 images for validation. We split the training set in two parts with no overlap and assign the defender and adversary 85.000 and 95.000 images, respectively.
• ImageNet32 [18]: The whole ImageNet dataset with 1000 classes resized to inputs of size 32 × 32 pixels with three color channels. ImageNet32 has 1.28 million training and 50.000 validation images. We manually selected a subset of 100 classes with a total of 128.000 images and assigned both the defender and adversary 64.000 images with no overlap. The selected classes are summarized in the appendix.

In total, we train four different types of models: training surrogates, training references, testing surrogates, and testing references. For generating conferrable adversarial examples, we exclusively use training surrogate and training reference models, which we refer to as training models. For evaluating conferrability, we exclusively use testing surrogate and testing reference models, which we refer to as testing models. The testing surrogates have been trained on the adversary's substitute dataset labeled by the source model, whereas the testing reference models have been trained on the substitute data with ground truth labels. There is no overlap between the training and testing models. A summary of the architecture and accuracy of these models is given in Table III for CINIC and Table IV for ImageNet32.
B. Generating Conferrable Adversarial Examples
A summary of the training models used to build the ensemble model M_E is presented in Table III for CINIC and in Table IV for ImageNet32. For the training models, we create 14 surrogates and 15 reference models for ImageNet32 and 44 surrogates and 28 reference models for CINIC. For the testing models, we trained two models per architecture for each dataset. The tables capture the minimum and maximum values for the validation accuracy, fidelity, and fingerprint retention for all models. We use IGM to create adversarial examples for the ensemble model and refer to the whole generation approach as C-IGM.
The success rate for an example being adversarial is computed on the source model, transferability is computed on the testing reference models and conferrability is computed on all testing models. Initially, we observe an overfitting effect of C-IGM on the ensemble model M E , which results in a conferrability score of 1.00 for all generated examples on the training models, but only 0.243 on the testing models. To counteract this kind of overfitting, we set the dropout rate to 0.5, which increases the conferrability score for the testing models to 0.469. The average success rate for generating conferrable adversarial examples for all attacks is summarized in Table II. Our C-IGM algorithm significantly outperforms all other approaches in producing conferrable examples on CINIC. On ImageNet32, our approach improves over CW-L 2 by a conferrability score of 0.067. The main problem with the other approaches for CINIC is that specifically IGM and CW-L 2 produce adversarial examples that are highly transferable, which reduces the conferrability score. For ImageNet32, it seems that the surrogate and reference models are more different from each other so that a reduced transferability from CW-L 2 and IGM to the reference models can be observed.
The conferrability scores of the individual generated examples are illustrated in Figure 6 for CINIC and in Figure 7 for ImageNet32. The figures show that C-IGM produces conferrable adversarial examples with a higher maximum conferrability score than the other attacks.
C. Fingerprinting Evaluation
We evaluate the adversarial attacks that generate specifically conferrable adversarial examples separately from our fingerprinting method. For the fingerprinting, we additionally test the fingerprint retention against other types of removal attacks.

For the adversarial attacks, we derive the set of training reference models by retraining models on the defender's data D1. For evaluating conferrability, we obtain testing surrogate models by training on the adversary's substitute dataset D2 labeled by the source model. The testing reference models are obtained by training on the substitute dataset with ground truth labels.

As removal attacks, we evaluate the distillation attacks outlined in Section II-D: Retraining, and Distillation as proposed by Jagielski et al. [26], Papernot [27] and Knockoff [28]. We additionally evaluate an Ensemble Attack: the adversary splits the substitute dataset into n parts, trains one surrogate model on each non-overlapping subset of the training dataset, and outputs an ensemble of all surrogate models with average voting.

We perform all removal attacks on the source model. For fine-tuning and adversarial training we use 10,000 inputs labeled by the source model and create 1,000 adversarial examples; the fine-tuning variants are sketched below. Model extraction uses all the data available to the adversary. We find that our fingerprint withstands all attacks with almost 100% fingerprint retention in all cases, given that the surrogate is a high-accuracy model. The results are summarized in Table V for CINIC and in Table VI for ImageNet32.

For CINIC, the highest fingerprint retention measured among the testing reference models is 65%, for a Densenet. The lowest fingerprint retention among the testing surrogate models is 91%, measured for a ResNet20. The three model extraction attacks, Jagielski, Papernot and Knockoff described in Section II-D, achieve close to 100% fingerprint retention for high-accuracy surrogate models. From these values we derive our verification thresholds, leaving small error margins, so that θ1 = 0.9 and θ2 = 0.7, as described in Section III. A certain amount of fingerprint retention in the reference models may be unavoidable when, as in our case, the reference is trained with exactly the same model architecture, learning objective and optimizer on a dataset that comes from the same distribution as the defender's training set.

We demonstrated that it is possible to craft conferrable adversarial examples that are highly transferable specifically to surrogate models. We also showed that it is feasible to create fingerprints with small perturbations of ε = 0.05 in the infinity norm. Our fingerprint is non-evadable as long as small-perturbation adversarial examples are non-evadable. If that is given, our fingerprint fulfills the correctness and non-evasiveness requirements specified in Section V and is a possible solution for deciding linkability.
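The four fine-tuning removal attacks (FTLL, FTAL, RTLL, RTAL) can be sketched in Keras as follows. This sketch assumes the model's final layer is a Dense classifier with a bias; the optimizer, learning rate and batch size are assumptions, not our exact settings.

```python
import numpy as np
from tensorflow import keras

def fine_tune(model, x, y, variant="FTAL", epochs=5):
    """Fine-tuning removal attacks on the source model.

    x, y: substitute inputs with labels predicted by the source model.
    FTLL/FTAL keep the trained last layer; RTLL/RTAL re-initialize it.
    """
    last = model.layers[-1]
    if variant in ("RTLL", "RTAL"):
        # Retrain variants: re-initialize the final layer's parameters.
        kernel, bias = last.get_weights()
        last.set_weights([
            keras.initializers.GlorotUniform()(kernel.shape).numpy(),
            np.zeros_like(bias),
        ])
    # The *LL variants freeze everything except the last layer.
    train_all = variant in ("FTAL", "RTAL")
    for layer in model.layers[:-1]:
        layer.trainable = train_all
    last.trainable = True
    model.compile(optimizer=keras.optimizers.Adam(1e-4),  # assumed settings
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=epochs, batch_size=64, verbose=0)
    return model
```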
VII. RELATED WORK
Uchida et al. [11] proposed the first watermarking scheme for neural networks. They embed the secret message into the weights during training and implement a white-box verification. Their watermark is evaluated against fine-tuning and pruning as removal attacks. Adi et al. [12] propose overfitting the source model on abstract images to provide watermarking. Zhang et al. [13] additionally propose modifying benign inputs by adding Gaussian noise, labels, or small patches on top of the image and train the neural network to identify these as a backdoor-based watermark. Guo et al. [51] also use perturbations as watermarks; they additionally allow encoding the identity of the data owner into the watermark.

Hitaj et al. [44] show that backdoor-based, black-box verifiable watermarking schemes are vulnerable to evasion: an adversary can deploy the watermarked model, but detect out-of-distribution (backdoor) queries and return random labels. To prevent this attack, Li et al. [52] created a blind watermarking scheme that creates watermark images close to the original distribution using a GAN, so that the attacker (and the discriminator in the GAN) cannot distinguish them from regular queries. DeepSigns is a black-box verifiable watermarking framework by Rouhani et al. [53]. Their framework specifically selects rare inputs as watermarks, on which the deep neural network does not produce high-confidence classifications. The framework assigns each watermark a random class and embeds the watermark by fine-tuning the model on the watermarks with a decreased learning rate to limit the utility loss of the model.

All these schemes have in common that the model is overfitted on some uncommon inputs that can be used to identify the model during the verification phase. It has been shown by Shafieinejad et al. [14] that the backdoor-based watermarking schemes of Adi et al. [12] and Zhang et al. [13] are not robust to distillation attacks. None of the proposed schemes evaluate whether their watermark is secure against distillation attacks.
Frontier-Stitching [54] is the first watermarking scheme that uses adversarial examples close to the decision boundary as watermarks. The idea relies on the fact that some adversarial attacks, such as the Fast Gradient Sign Method (FGSM), sometimes produce false adversarial examples that do not lead to misclassification in the model but are nonetheless close to the decision boundary. They perform adversarial retraining with true and false adversarial examples to ensure that the decision boundary changes only minimally, maintaining the model's utility.
The first watermarking scheme that provides some defense against model extraction is DAWN [15], which uses an active defense and assumes the adversary has only black-box access to the watermarked model. DAWN intercepts 0.03-0.05% of all queries and returns false labels to embed a watermark in the surrogate. However, their scheme is explicitly not secure against extraction attacks that use as many queries as required to train a fresh model, e.g., by querying several close images of which only one is associated with a false label. In particular, no security is claimed against distillation attacks. Our work is secure against an attacker with white-box access to the source model, i.e., against extraction attacks using any number of queries, and does not impact the model's performance.
Cao et al. [55] recently proposed a framework for intellectual property protection using fingerprinting similar to our work. Their main idea is to generate inputs close to the decision boundary, which are shown to be robust against fine-tuning and compression. They have not been tested for robustness against distillation attacks. Liu et al. [16] evaluate the targeted transferability of adversarial examples in depth. Our study extends their results by a new subclass of targeted transferable adversarial examples: conferrable adversarial examples.
A. Other Types of Linkability
There are other lines of research for proving linkability, which deviate from our definition of linkability in various ways. The work by Yu et al. [56], [57] studies fingerprints for GANs, where the goal is to link a given image to the generator that created it, given black-box access to all generators. Their work links outputs, while our work links models. The work by Wang et al. [35] links models for transfer learning that share the same base model. We do not consider transfer learning in this work, but focus on the linkability of models for the same task. DeepMarks [58] uses the term "fingerprinting" with a different meaning than our work, referring to a watermark that is robust specifically against collusion attacks, among other attacks.
VIII. CONCLUSION
Fig. 1: Conferrable adversarial examples used as a fingerprint to link surrogate models with their source model.
Fig. 2: A system overview: a defender trains a model and extracts its fingerprint, while an adversary deploys a surrogate model that is verified by the defender.
We consider the following types of models:
• The source model M ← Train(O, D1), trained on the defender's dataset, for which the adversary is given white-box access.
• Surrogate models S = {S_i ← A(M, D)}, which are models distilled from the source model M using a distillation attack A and some dataset D. The defender and adversary train surrogates on their respective datasets; the Retraining attack is sketched below.
• Reference models R = {R_i ← Train(O, D)}, which are models trained from scratch on ground truth labels.
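A minimal Keras sketch of the Retraining attack used to obtain surrogates, in which substitute data are labeled by the source model and a fresh model is trained on those labels. The constructor `build_fn`, the training schedule and the batch sizes are assumptions for illustration.

```python
from tensorflow import keras

def retraining_attack(source_model, x_substitute, build_fn, epochs=30):
    """Train a surrogate from scratch on source-model labels (Retraining).

    build_fn: callable returning a freshly initialized classifier, e.g.
    a ResNet20 as used in our experiments (assumed signature).
    """
    # Label the adversary's substitute data with the source model.
    soft_labels = source_model.predict(x_substitute, batch_size=128)
    surrogate = build_fn()
    surrogate.compile(optimizer="adam", loss="categorical_crossentropy",
                      metrics=["accuracy"])
    surrogate.fit(x_substitute, soft_labels, epochs=epochs,
                  batch_size=64, verbose=0)
    return surrogate
```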
Fig. 3: A summary of the difference between transferability and conferrability. Figure (a) summarizes known properties of adversarial examples and their relation towards one another. Figure (b) visualizes the decision boundaries relative to the ground truth for a representative reference and surrogate model and shows the difference between conferrable and transferable adversarial examples.
Fig. 4: Conferrability scores for different perturbation magnitudes on CINIC.

Our experiments depicted in Figure 4 show that, out of all the known attacks, CW-L2 and IGM have the highest success rates for generating conferrable adversarial examples. Their average conferrability scores are 0.234 and 0.214 for ε = 0.03, and 0.219 and 0.194 for ε = 0.3 on CINIC. We observe that both attacks also have the highest transferability scores: 0.680 and 0.883 for ε = 0.03, and 0.680 and 0.821 for ε = 0.3. Our experiments show that larger values of ε result in a higher success rate for generating conferrable adversarial examples.
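The transferability and conferrability scores reported above can be computed, per adversarial example, from the fraction of testing surrogates and testing references that output the adversarial target label. The product form Confer = Transfer_S · (1 − Transfer_R) used in this sketch is one natural instantiation consistent with the text and should be read as an assumption.

```python
import numpy as np

def transfer_rate(models, x_adv, target):
    """Fraction of models whose prediction on x_adv equals the target label."""
    hits = [int(np.argmax(m.predict(x_adv[None], verbose=0)) == target)
            for m in models]
    return float(np.mean(hits))

def conferrability(surrogates, references, x_adv, target):
    # High when the example transfers to surrogate models but
    # not to reference models trained on ground truth labels.
    t_s = transfer_rate(surrogates, x_adv, target)
    t_r = transfer_rate(references, x_adv, target)
    return t_s * (1.0 - t_r)
```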
Fig. 5: The decision space around a conferrable adversarial example.
• Extract(M, D, O): Has access to a source model M, some training data D and an oracle O that provides labels for D. Extraction outputs a fingerprint F and the labels predicted by the source model on the fingerprint, vk = {M(x) | x ∈ F}.
• Verify(M_S, F, vk): On input of a suspect model M_S, a fingerprint F and a verification key vk, Verify checks whether M_S is a surrogate of the source model and outputs 1 if and only if M_S(F) ≈ vk.

Given an oracle O, the source model M and the distillation attack A_0 described in Section II-C, we obtain all surrogate models by distilling the source model and all reference models by training clean models. We define an auxiliary method FModel() to extract a fingerprint from a model:
1) Train the source model M ← Train(O, D)
2) Train surrogate models S = {S_i | S_i ← A(M, vk)}
3) Train reference models R = {R_i | R_i ← Train(O, D)}
4) Compute (F, vk) ← Extract(R, M, S)
5) Output (R, M, S, F, vk)

A fingerprint is correct if the verification method verifies a model as a surrogate if and only if it truly is a surrogate. Only with very small probability can a distilled version of the source model be generated that has similar validation accuracy yet is not identified by the verification as a surrogate. The corresponding game proceeds as follows; a schematic harness is sketched below.
1) The defender computes (R, M, S, F, vk) ← FModel()
2) Obtain M0 ← Train(O, D) and M1 ← A(M)
3) Sample b ←$ {0, 1} and send Mb to the defender
4) The defender wins if: Pr[Verify(Mb, F, vk) = b] ≈ 1
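The following schematic Python harness mirrors the FModel procedure and the correctness game above. The training, attack, extraction and verification routines are passed in as callables, so every concrete name here is a placeholder rather than our implementation.

```python
import random

def f_model(train, attack, extract, oracle, data,
            n_surrogates=4, n_references=4):
    """FModel(): build source, surrogates and references, then extract F, vk."""
    source = train(oracle, data)
    surrogates = [attack(source, data) for _ in range(n_surrogates)]
    references = [train(oracle, data) for _ in range(n_references)]
    fingerprint, vk = extract(references, source, surrogates)
    return references, source, surrogates, fingerprint, vk

def correctness_game(train, attack, extract, verify, oracle, data):
    """Defender wins if Verify identifies the coin flip b."""
    _, source, _, fingerprint, vk = f_model(train, attack, extract,
                                            oracle, data)
    b = random.randint(0, 1)
    # b = 1: suspect is a true surrogate; b = 0: an independent clean model.
    suspect = attack(source, data) if b == 1 else train(oracle, data)
    return int(verify(suspect, fingerprint, vk)) == b
```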
1) Compute (R, M, S, F, vk)
Algorithm 1: Our verification algorithm.
Input: Fingerprint F, verification keys vk, black-box access to the suspect model M_S, verification threshold θ
Output: 1 if and only if M_S is a surrogate of M
1: vk′ ← M_S(F)
2: p_ret ← (1/|F|) Σ_i 1[arg max_j vk_ij = arg max_j vk′_ij]
3: if (p_ret ≥ θ) then return 1, else return 0

Algorithm 2: Our extraction algorithm.
Input: Training data D, oracle O, source model M, number of training models s, minimal conferrability score τ, adversarial attack A
Output: Fingerprint F, verification keys vk
1: for i = s to 0 do
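A direct Python reading of Algorithm 1: query the suspect on the fingerprint, compute the retention p_ret as the fraction of matching argmax labels, and compare against the threshold θ. The default threshold follows the θ1 = 0.9 derived in Section VI-C; the function and argument names are ours.

```python
import numpy as np

def verify(suspect_model, fingerprint, vk, theta=0.9):
    """Return 1 iff the suspect retains at least a fraction theta of the
    fingerprint labels vk recorded from the source model."""
    vk_suspect = suspect_model.predict(fingerprint, verbose=0)
    matches = (np.argmax(vk_suspect, axis=1)
               == np.argmax(np.asarray(vk), axis=1))
    p_ret = matches.mean()
    return int(p_ret >= theta)
```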
• Fine-Tuning: Fine-tuning continues training the surrogate model on more data labeled by the source model.
  1) Fine-tune last layer (FTLL): Freeze all layers except for the final layer and fine-tune the model on the substitute data.
  2) Fine-tune all layers (FTAL): Fine-tune the whole model.
  3) Retrain last layer (RTLL): Re-initialize the last layer and train the model with all layers frozen except for the last one.
  4) Retrain all layers (RTAL): The last layer is re-initialized, but all layers are trained.
• Adversarial Training: The adversary fine-tunes the whole model on adversarial examples generated from the substitute dataset. We evaluate FGM, PGD and IGM.
• Distillation: We evaluate the distillation attacks outlined in Section II-D as removal attacks.
Fig. 9: Fingerprints obtained from C-IGM with ε = 0.1 on ImageNet32.
Fig. 10: Fingerprints obtained from C-IGM with ε = 0.15 on CINIC.
Our contributions are as follows:
• A new subclass of targeted transferable adversarial examples, called conferrable adversarial examples. Conferrable adversarial examples transfer more likely from a source model to target models derived by knowledge distillation of the source model, but not to target models trained on ground truth labels.
• An ensemble-based method that generates specifically conferrable adversarial examples with improved success rates over known targeted adversarial attacks.
• Game-based definitions of fingerprinting for deep neural network classifiers.
• A thorough evaluation of our fingerprinting method based on conferrable adversarial examples for the CINIC dataset and a subset with 100 classes from the ImageNet32 dataset. Among other derivation attacks, we are the first to evaluate robustness against model extraction attacks.
1) The set of targeted transferable adversarial examples between the source model and its surrogates is not empty.
2) The set of targeted transferable adversarial examples for all models, i.e. the source model, its surrogates and the reference models, is not equal to the set of targeted transferable adversarial examples between just the source model and its surrogate models.

If these two premises hold, it follows that conferrable adversarial examples must exist. Targeted transferable adversarial examples lie in the set intersection of the adversarial examples of all models. Conferrable adversarial examples lie in the intersection of the adversarial examples of the source model and all surrogate models, excluding those transferable adversarial examples that also transfer to all reference models, as depicted in Figure 3. One way to state this in symbols is given below.
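The following formalization is ours: writing $\mathrm{Adv}_t(\cdot)$ for the set of targeted adversarial examples of a model, the two premises and the resulting non-empty conferrable set can be stated as

```latex
\begin{align*}
&\textbf{Premise 1:}\quad
  \mathrm{Adv}_t(M) \cap \bigcap_{i} \mathrm{Adv}_t(S_i) \neq \emptyset,\\
&\textbf{Premise 2:}\quad
  \mathrm{Adv}_t(M) \cap \bigcap_{i} \mathrm{Adv}_t(S_i)
    \cap \bigcap_{j} \mathrm{Adv}_t(R_j)
  \;\subsetneq\;
  \mathrm{Adv}_t(M) \cap \bigcap_{i} \mathrm{Adv}_t(S_i),\\
&\textbf{Hence:}\quad
  \Big(\mathrm{Adv}_t(M) \cap \bigcap_{i} \mathrm{Adv}_t(S_i)\Big)
  \setminus \bigcap_{j} \mathrm{Adv}_t(R_j) \;\neq\; \emptyset.
\end{align*}
```

The conclusion follows because Premise 2 says the intersection with the reference models' adversarial sets is a strict subset, so at least one example transfers to the source and all surrogates without transferring to all references.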
TABLE I: Parameters for the adversarial attacks. Iter is the maximum number of iterations, α is the step size, Grads is the number of gradients evaluated in parallel for DeepFool, κ and lr are the confidence and the learning rate for the CW-L2 attack, and q is the norm.

For the known attacks DeepFool [20], PGD [19], IGM [21] and CW-L2 [22], we empirically show low success rates for generating specifically conferrable adversarial examples when executing the attacks only on the source model. This serves as motivation to create our ensemble-based approach with improved success rates for generating conferrable adversarial examples.
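A sketch of how these baseline attacks could be configured with the Adversarial Robustness Toolbox used in our implementation. The module paths and argument names follow recent ART releases and should be treated as assumptions, since they differ from the v0.4.0 release cited in [47]; IGM is instantiated here via ART's BasicIterativeMethod (iterative FGM).

```python
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import (BasicIterativeMethod, CarliniL2Method,
                                 DeepFool, ProjectedGradientDescent)

def make_attacks(source_model, eps=0.15):
    # Wrap the compiled Keras source model for ART.
    clf = KerasClassifier(model=source_model, clip_values=(0.0, 1.0))
    return {
        # IGM: iterative gradient method, assumed step size and iterations.
        "IGM": BasicIterativeMethod(clf, eps=eps, eps_step=eps / 10,
                                    max_iter=50),
        "PGD": ProjectedGradientDescent(clf, eps=eps, eps_step=eps / 10,
                                        max_iter=50),
        "DeepFool": DeepFool(clf, max_iter=50, nb_grads=10),
        "CW-L2": CarliniL2Method(clf, confidence=0.5, learning_rate=0.01),
    }
```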
TABLE II: Success rates for all adversarial attacks (ε = 0.15).

Dataset      Method     Transferable   Conferrable
CINIC        C-IGM      0.492          0.469
CINIC        IGM        0.752          0.197
CINIC        PGD        0.111          0.101
CINIC        DeepFool   0.033          0.036
CINIC        CW-L2      0.864          0.115
ImageNet32   C-IGM      0.508          0.419
ImageNet32   IGM        0.357          0.250
ImageNet32   PGD        0.135          0.155
ImageNet32   DeepFool   0.045          0.029
ImageNet32   CW-L2      0.438          0.352
Fig. 6: Conferrability scores per generated adversarial example for the CINIC dataset with ε = 0.15.
Fig. 7: Conferrability scores per generated adversarial example for a subset of ImageNet32 and ε = 0.15.
TABLE III: An overview of the models and their testing accuracies, fidelity and fingerprint retention for ε = 0.15 on CINIC. Fidelity is computed as the accuracy of a target model on the training set D1 when the source model M predicts all labels. Brackets denote the minimum and maximum measured value.

CINIC (D1):
Training Surrogate                                     Training Reference
Label         Val Acc         Fidelity        FPRet    Label         Val Acc         Fidelity        FPRet
ResNet20      0.770           1.00            1.00     -
ResNet20      [0.746, 0.764]  [0.842, 0.880]  1.00     ResNet20      [0.660, 0.777]  [0.850, 0.856]  0.00
ResNet56      [0.762, 0.774]  [0.872, 0.877]  1.00     ResNet56      [0.788, 0.807]  [0.839, 0.847]  0.00
Densenet      [0.749, 0.763]  [0.826, 0.844]  1.00     Densenet      [0.734, 0.780]  [0.802, 0.826]  0.00
VGG16         [0.716, 0.747]  [0.769, 0.815]  1.00     VGG16         [0.750, 0.765]  [0.803, 0.814]  0.00
VGG19         [0.790, 0.792]  [0.920, 0.924]  1.00     VGG19         [0.779, 0.787]  [0.818, 0.822]  0.00
MobileNetV2   [0.769, 0.783]  [0.875, 0.876]  1.00     MobileNetV2   [0.677, 0.797]  [0.753, 0.840]  0.00

CINIC (D2): Testing Surrogate | Testing Reference
TABLE IV: An overview of the models and their testing accuracies, fidelity and fingerprint retention on a subset of 100 classes from ImageNet32. Fidelity is computed as the accuracy of a target model on the training set D1 when the source model M predicts all labels. Brackets denote the minimum and maximum measured value.
TABLE V: Removal attacks for CINIC.

Dataset: CINIC (D2)
Attack type            Label      Attack A    Val Acc   Fidelity   FPRet
Model Extraction       ResNet20   Jagielski   0.756     0.840      0.995
Model Extraction       ResNet20   Papernot    0.735     0.810      0.97
Model Extraction       ResNet20   Knockoff    0.730     0.815      0.99
Fine-Tuning            ResNet20   FTLL        0.775     0.935      1.00
Fine-Tuning            ResNet20   FTAL        0.773     0.922      1.00
Fine-Tuning            ResNet20   RTLL        0.720     0.799      0.985
Fine-Tuning            ResNet20   RTAL        0.668     0.65       0.98
Adversarial Training   ResNet20   FGM         0.757     0.842      1.00
Adversarial Training   ResNet20   PGD         0.768     0.920      1.00
Ensemble Attack        ResNet20   n=3         0.742     0.8287     0.970
TABLE VI: Removal attacks for ImageNet32.

Dataset: ImageNet32 (D2)
Attack type            Label      Attack A    Val Acc   Fidelity   FPRet
Model Extraction       ResNet20   Jagielski   0.532     0.714      0.98
Model Extraction       ResNet20   Papernot    0.509     0.670      0.90
Model Extraction       ResNet20   Knockoff    0.474     0.622      0.98
Fine-Tuning            ResNet20   FTLL        0.573     0.886      1.00
Fine-Tuning            ResNet20   FTAL        0.613     0.881      1.00
Fine-Tuning            ResNet20   RTLL        0.590     0.853      1.00
Fine-Tuning            ResNet20   RTAL        0.45      0.5696     0.82
Adversarial Training   ResNet20   FGM         0.554     0.862      1.00
Adversarial Training   ResNet20   PGD         0.556     0.853      1.00
Ensemble Attack        ResNet20   n=3         0.505     0.672      0.80
Although these are, strictly speaking, not adversarial examples, but rather adversarially embedded commands in speech that a human does not perceive as such.
We formally define fingerprinting for deep neural networks that provides linkability in the presence of an adversary performing model derivation, including model extraction attacks. We introduce conferrable adversarial examples as a means for fingerprinting that can withstand model extraction attacks, and experimentally verify their existence for deep neural networks on CINIC and a subset of the ImageNet32 dataset with 100 classes.

We show that known adversarial attacks such as DeepFool, Projected Gradient Descent (PGD), the Iterative Gradient Method (IGM) and Carlini-Wagner (CW-L2) have a relatively low success rate for producing conferrable adversarial examples. In response, we design and evaluate a new ensemble-based generation method for conferrable adversarial examples, called C-IGM. We evaluate fingerprinting with our conferrable adversarial examples against the following model extraction attacks from the literature: Retraining, Papernot [27], Knockoff [28] and the learning-based extraction proposed by Jagielski et al. [26]. We show that robustness against model extraction extends to robustness against other types of known attacks, such as fine-tuning, adversarial training and ensemble attacks. We also demonstrate that highly conferrable adversarial examples can be found using a relatively small perturbation of ε = 0.05 in the infinity norm, so that an adversary cannot easily evade the verification procedure.
REFERENCES

[1] C. Liu, L.-C. Chen, F. Schroff, H. Adam, W. Hua, A. L. Yuille, and L. Fei-Fei, "Auto-DeepLab: Hierarchical neural architecture search for semantic image segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 82-92.
[2] Y. Tian, K. Pei, S. Jana, and B. Ray, "DeepTest: Automated testing of deep-neural-network-driven autonomous cars," in Proceedings of the 40th International Conference on Software Engineering. ACM, 2018, pp. 303-314.
[3] T. Young, D. Hazarika, S. Poria, and E. Cambria, "Recent trends in deep learning based natural language processing," IEEE Computational Intelligence Magazine, vol. 13, no. 3, pp. 55-75, 2018.
[4] A. Esteva, A. Robicquet, B. Ramsundar, V. Kuleshov, M. DePristo, K. Chou, C. Cui, G. Corrado, S. Thrun, and J. Dean, "A guide to deep learning in healthcare," Nature Medicine, vol. 25, no. 1, pp. 24-29, 2019.
[5] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
[6] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang, "Convolutional neural networks for medical image analysis: Full training or fine tuning?" IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1299-1312, 2016.
[7] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[8] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, "Stealing machine learning models via prediction APIs," in 25th USENIX Security Symposium (USENIX Security 16), 2016, pp. 601-618.
[9] T. Gu, B. Dolan-Gavitt, and S. Garg, "BadNets: Identifying vulnerabilities in the machine learning model supply chain," arXiv preprint arXiv:1708.06733, 2017.
[10] P. Stock and M. Cisse, "ConvNets and ImageNet beyond accuracy: Understanding mistakes and uncovering biases," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 498-512.
[11] Y. Uchida, Y. Nagai, S. Sakazawa, and S. Satoh, "Embedding watermarks into deep neural networks," in Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval. ACM, 2017, pp. 269-277.
[12] Y. Adi, C. Baum, M. Cisse, B. Pinkas, and J. Keshet, "Turning your weakness into a strength: Watermarking deep neural networks by backdooring," in 27th USENIX Security Symposium (USENIX Security 18), 2018, pp. 1615-1631.
[13] J. Zhang, Z. Gu, J. Jang, H. Wu, M. P. Stoecklin, H. Huang, and I. Molloy, "Protecting intellectual property of deep neural networks with watermarking," in Proceedings of the 2018 on Asia Conference on Computer and Communications Security. ACM, 2018, pp. 159-172.
[14] M. Shafieinejad, J. Wang, N. Lukas, and F. Kerschbaum, "On the robustness of the backdoor-based watermarking in deep neural networks," arXiv preprint arXiv:1906.07745, 2019.
[15] S. Szyller, B. G. Atli, S. Marchal, and N. Asokan, "DAWN: Dynamic adversarial watermarking of neural networks," arXiv preprint arXiv:1906.00830, 2019.
[16] Y. Liu, X. Chen, C. Liu, and D. Song, "Delving into transferable adversarial examples and black-box attacks," arXiv preprint arXiv:1611.02770, 2016.
[17] L. N. Darlow, E. J. Crowley, A. Antoniou, and A. J. Storkey, "CINIC-10 is not ImageNet or CIFAR-10," arXiv preprint arXiv:1810.03505, 2018.
[18] P. Chrabaszcz, I. Loshchilov, and F. Hutter, "A downsampled variant of ImageNet as an alternative to the CIFAR datasets," arXiv preprint arXiv:1707.08819, 2017.
[19] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, "Towards deep learning models resistant to adversarial attacks," arXiv preprint arXiv:1706.06083, 2017.
[20] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: A simple and accurate method to fool deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2574-2582.
[21] A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," arXiv preprint arXiv:1607.02533, 2016.
[22] N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks," in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 39-57.
[23] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[24] C. Bucilu, R. Caruana, and A. Niculescu-Mizil, "Model compression," in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2006, pp. 535-541.
[25] M. Juuti, S. Szyller, S. Marchal, and N. Asokan, "PRADA: Protecting against DNN model stealing attacks," in 2019 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2019, pp. 512-527.
[26] M. Jagielski, N. Carlini, D. Berthelot, A. Kurakin, and N. Papernot, "High-fidelity extraction of neural network models," arXiv preprint arXiv:1909.01838, 2019.
[27] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against machine learning," in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ACM, 2017, pp. 506-519.
[28] T. Orekondy, B. Schiele, and M. Fritz, "Knockoff nets: Stealing functionality of black-box models," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4954-4963.
[29] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners," OpenAI Blog, vol. 1, no. 8, 2019.
[30] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[31] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, "Robust physical-world attacks on deep learning visual classification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1625-1634.
[32] K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel, "Adversarial examples for malware detection," in European Symposium on Research in Computer Security. Springer, 2017, pp. 62-79.
[33] N. Carlini and D. Wagner, "Audio adversarial examples: Targeted attacks on speech-to-text," in 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018, pp. 1-7.
[34] D. Peng, Z. Zheng, L. Luo, and X. Zhang, "Structure matters: Towards generating transferable adversarial images," arXiv preprint arXiv:1910.09821, 2019.
[35] B. Wang, Y. Yao, B. Viswanath, H. Zheng, and B. Y. Zhao, "With great training comes great vulnerability: Practical attacks against transfer learning," in 27th USENIX Security Symposium (USENIX Security 18), 2018, pp. 1281-1297.
[36] C. Xiao, B. Li, J.-Y. Zhu, W. He, M. Liu, and D. Song, "Generating adversarial examples with adversarial networks," arXiv preprint arXiv:1801.02610, 2018.
[37] D. Warde-Farley and I. Goodfellow, "Adversarial perturbations of deep neural networks," Perturbations, Optimization, and Statistics, vol. 311, 2016.
[38] F. Tramèr, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, "The space of transferable adversarial examples," arXiv preprint arXiv:1704.03453, 2017.
[39] A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, and F. Roli, "Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks," in 28th USENIX Security Symposium (USENIX Security 19), 2019, pp. 321-338.
[40] D. Stutz, M. Hein, and B. Schiele, "Disentangling adversarial robustness and generalization," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 6976-6987.
[41] Y. Carmon, A. Raghunathan, L. Schmidt, P. Liang, and J. C. Duchi, "Unlabeled data improves adversarial robustness," arXiv preprint arXiv:1905.13736, 2019.
[42] N. Carlini, A. Athalye, N. Papernot, W. Brendel, J. Rauber, D. Tsipras, I. Goodfellow, and A. Madry, "On evaluating adversarial robustness," arXiv preprint arXiv:1902.06705, 2019.
[43] C. Xie, Y. Wu, L. v. d. Maaten, A. L. Yuille, and K. He, "Feature denoising for improving adversarial robustness," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 501-509.
[44] D. Hitaj and L. V. Mancini, "Have you stolen my model? Evasion attacks against deep neural network watermarking techniques," arXiv preprint arXiv:1809.00615, 2018.
[45] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[46] F. Chollet et al., "Keras," 2015.
[47] M.-I. Nicolae, M. Sinn, M. N. Tran, A. Rawat, M. Wistuba, V. Zantedeschi, N. Baracaldo, B. Chen, H. Ludwig, I. M. Molloy et al., "Adversarial Robustness Toolbox v0.4.0," arXiv preprint arXiv:1807.01069, 2018.
[48] A. Krizhevsky, V. Nair, and G. Hinton, "CIFAR-10 (Canadian Institute for Advanced Research)." [Online]. Available: http://www.cs.toronto.edu/~kriz/cifar.html
[49] A. Krizhevsky, V. Nair, and G. Hinton, "CIFAR-100 (Canadian Institute for Advanced Research)." [Online]. Available: http://www.cs.toronto.edu/~kriz/cifar.html
[50] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009, pp. 248-255.
[51] J. Guo and M. Potkonjak, "Watermarking deep neural networks for embedded systems," in 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). IEEE, 2018, pp. 1-8.
[52] Z. Li, C. Hu, Y. Zhang, and S. Guo, "How to prove your model belongs to you: A blind-watermark based framework to protect intellectual property of DNN," 2019.
[53] B. Darvish Rouhani, H. Chen, and F. Koushanfar, "DeepSigns: An end-to-end watermarking framework for ownership protection of deep neural networks," in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems. ACM, 2019, pp. 485-497.
[54] E. L. Merrer, P. Perez, and G. Trédan, "Adversarial frontier stitching for remote neural network watermarking," arXiv preprint arXiv:1711.01894, 2017.
[55] X. Cao, J. Jia, and N. Z. Gong, "IPGuard: Protecting the intellectual property of deep neural networks via fingerprinting the classification boundary," arXiv preprint arXiv:1910.12903, 2019.
[56] N. Yu, L. Davis, and M. Fritz, "Attributing fake images to GANs: Analyzing fingerprints in generated images," arXiv preprint arXiv:1811.08180, 2018.
[57] N. Yu, L. Davis, and M. Fritz, "Learning GAN fingerprints towards image attribution," arXiv preprint arXiv:1811.08180, 2019.
[58] H. Chen, B. D. Rohani, and F. Koushanfar, "DeepMarks: A digital fingerprinting framework for deep neural networks," arXiv preprint arXiv:1804.03648, 2018.
IX. APPENDIX
Fig. 8: Fingerprints obtained from C-IGM with ε = 0.25 on ImageNet32.

We selected the following 100 classes from ImageNet32: ['kit fox', 'Persian cat', 'gazelle', 'porcupine', 'sea lion', 'killer whale', 'African elephant', 'jaguar', 'otterhound', 'hyena', 'sorrel', 'dalmatian', 'fox squirrel', 'tiger', 'zebra', 'ram', 'orangutan', 'squirrel monkey', 'komondor', 'guinea pig', 'golden retriever', 'macaque', 'pug', 'water buffalo', 'American black bear', 'giant panda', 'armadillo', 'gibbon', 'German shepherd', 'koala', 'umbrella', 'soccer ball', 'starfish', 'grand piano', 'laptop', 'strawberry', 'airliner', 'balloon', 'space shuttle', 'aircraft carrier', 'tank', 'missile', 'mountain bike', 'steam locomotive', 'cab', 'snowplow', 'bookcase', 'toilet seat', 'pool table', 'orange', 'lemon', 'violin', 'sax', 'volcano', 'coral reef', 'lakeside', 'hammer', 'vulture', 'hummingbird', 'flamingo', 'great white shark', 'hammerhead', 'stingray', 'barracouta', 'goldfish', 'American chameleon', 'green snake', 'European fire salamander', 'loudspeaker', 'microphone', 'digital clock', 'sunglass', 'combination lock', 'nail', 'altar', 'mountain tent', 'scoreboard', 'mashed potato', 'head cabbage', 'cucumber', 'plate', 'necklace', 'sandal', 'ski mask', 'teddy', 'golf ball', 'red wine', 'sunscreen', 'beer glass', 'cup', 'traffic light', 'lipstick', 'hotdog', 'toilet tissue', 'cassette', 'lotion', 'barrel', 'basketball', 'barbell', 'pole']
233,254,411 | DEBIASING CONCEPT-BASED EXPLANATIONS WITH CAUSAL ANALYSIS | Concept-based explanation approach is a popular model interpertability tool because it expresses the reasons for a model's predictions in terms of concepts that are meaningful for the domain experts. In this work, we study the problem of the concepts being correlated with confounding information in the features. We propose a new causal prior graph for modeling the impacts of unobserved variables and a method to remove the impact of confounding information and noise using a two-stage regression technique borrowed from the instrumental variable literature. We also model the completeness of the concepts set and show that our debiasing method works when the concepts are not complete. Our synthetic and real-world experiments demonstrate the success of our method in removing biases and improving the ranking of the concepts in terms of their contribution to the explanation of the predictions. | [] | DEBIASING CONCEPT-BASED EXPLANATIONS WITH CAUSAL ANALYSIS
Mohammad Taha Bahadori [email protected]
David E Heckerman [email protected]
DEBIASING CONCEPT-BASED EXPLANATIONS WITH CAUSAL ANALYSIS
Published as a conference paper at ICLR 2021
Concept-based explanation approach is a popular model interpertability tool because it expresses the reasons for a model's predictions in terms of concepts that are meaningful for the domain experts. In this work, we study the problem of the concepts being correlated with confounding information in the features. We propose a new causal prior graph for modeling the impacts of unobserved variables and a method to remove the impact of confounding information and noise using a two-stage regression technique borrowed from the instrumental variable literature. We also model the completeness of the concepts set and show that our debiasing method works when the concepts are not complete. Our synthetic and real-world experiments demonstrate the success of our method in removing biases and improving the ranking of the concepts in terms of their contribution to the explanation of the predictions.
INTRODUCTION
Explaining the predictions of neural networks through higher level concepts (Kim et al., 2018;Ghorbani et al., 2019;Brocki & Chung, 2019;Hamidi-Haines et al., 2018) enables model interpretation on data with complex manifold structure such as images. It also allows the use of domain knowledge during the explanation process. The concept-based explanation has been used for medical imaging (Cai et al., 2019), breast cancer histopathology (Graziani et al., 2018), cardiac MRIs (Clough et al., 2019), and meteorology (Sprague et al., 2019).
When the set of concepts is carefully selected, we can estimate a model in which the discriminative information flow from the feature vectors x through the concept vectors c and reach the labels y. To this end, we train two models for prediction of the concept vectors from the features denoted by c(x) and the labels from the predicted concept vector y( c). This estimation process ensures that for each prediction we have the reasons for the prediction stated in terms of the predicted concept vector c(x).
However, in reality, noise and confounding information (due to e.g. non-discriminative context) can influence both of the feature and concept vectors, resulting in confounded correlations between them. Figure 1 provides an evidence for noise and confounding in the CUB-200-2011 dataset (Wah et al., 2011). We train two predictors for the concepts vectors based on features c(x) and labels c(y) and compare the Spearman correlation coefficients between their predictions and the true ordinal value of the concepts. Having concepts for which c(x) is more accurate than c(y) could be due to noise, or due to hidden variables independent of the labels that spuriously correlated c and x, leading to undesirable explanations that include confounding or noise.
In this work, using the Concept Bottleneck Models (CBM) (Koh et al., 2020;Losch et al., 2019) we demonstrate a method for removing the counfounding and noise (debiasing) the explanation with concept vectors and extend the results to Testing with Concept Activation Vectors (TCAV) (Kim et al., 2018) technique. We provide a new causal prior graph to account for the confounding information and concept completeness (Yeh et al., 2020). We describe the challenges in estimation of our causal prior graph and propose a new learning procedure. Our estimation technique defines and predicts debiased concepts such that the predictive information of the features maximally flow through them.
We show that using a two-stage regression technique from the instrumental variables literature, we can successfully remove the impact of the confounding and noise from the predicted concept vectors. Our proposed procedure has three steps: (1) debias the concept vectors using the labels, (2) predict (Wah et al., 2011). 112 concepts can be predicted more accurately with the features rather than the labels. Concept ids in the x-axis are sorted in the increasing ρ( c(y), c) order. We provide the detailed steps to obtain the figure in Section 4.2.
the debiased concept vectors using the features, and (3) use the predict concept vectors in the second step to predict the labels. Optionally, we can also find the residual predictive information in the features that are not in the concepts.
We validate the proposed method using a synthetic dataset and the CUB-200-2011 dataset. On the synthetic data, we have access to the ground truth and show that in the presence of confounding and noise, our debiasing procedure improves the accuracy of recovering the true concepts. On the CUB-200-2011 dataset, we use the RemOve And Retrain (ROAR) framework (Hooker et al., 2019) to show that our debiasing procedure ranks the concepts in the order of their explanation more accurately than the regular concept bottleneck models. We also show that we improve the accuracy of CBNs in the prediction of labels using our debiasing technique. Finally, using several examples, we also qualitatively show when the debasing helps improve the quality of concept-based explanations.
METHODOLOGY
Notations. We follow the notation of Goodfellow et al. (2016) and denote random vectors by bold font letters x and their values by bold symbols x. The notation p(x) is a probability measure on x and dp(x = x) is the infinitesimal probability mass at x = x. We use y(x) to denote the the prediction of y given x. In the graphical models, we show the observed and unobserved variables using filled and hollow circles, respectively.
Problem Statement. We assume that during the training phase, we are given triplets (x i , c i , y i ) for i = 1, . . . , n data points. In addition to the regular features x and labels y, we are given a human interpretable concepts vector c for each data point. Each element of the concept vector measures the degree of existence of the corresponding concept in the features. Thus, the concept vector typically have binary or ordinal values. Our goal is to learn to predict y as a function of x and use c for explaining the predictions. Performing in two steps, we first learn a function c(x) and then learn another function y( c(x)). The prediction c(x) is the explanation for our prediction y. During the test time, only the features are given and the prediction+explanation algorithm predicts both y and c.
In this paper, we aim to remove the bias and noise components from the estimated concept vector c such that it explains the reasons for prediction of the labels more accurately. To this end, we first need to propose a new causal prior graph that includes the potential unobserved confounders. Figure 2a shows the ideal situation in explanation via high-level concepts. The generative model corresponding to Figure 2a states that for generating each feature x i we first randomly draw the label y i . Given the label, we draw the concepts c i . Given the concepts, we generate the features. The hierarchy in this graph is from nodes with less detailed information (labels) to more detailed ones (features, images).
A NEW CAUSAL PRIOR GRAPH FOR CBMS
Ideal Graphical Model
Estimation ! " = $ ! % & = ' " ! " ! " #(
This model in Figure 2a is an explanation for the phenomenon in Figure 1, because the noise in generation of the concepts allows the x-c edge to be stronger than the c-y edge. However, another (non-mutually exclusive) explanation for this phenomenon is the existence of hidden confounders u shown in Figure 2b. In this graphical model, u represents the confounders and d represents the unconfounded concepts. Note that we assume that the confounders u and labels y are independent when x and c are not observed.
Another phenomenon captured in Figure 2b is the lack of concept completeness (Yeh et al., 2020). It describes the situation when the features, compared to the concepts, have additional predictive information about the labels.
The non-linear structural equations corresponding to the causal prior graph in Figure 2b are as follows
d = f 1 (y) + ε d ,(1)c = d + h(u), (2) x = f 2 (u, d) + f 3 (y) + ε x ,(3)
for some vector functions h, f 1 , f 2 , and f 3 . We have ε d ⊥ ⊥ y and u ⊥ ⊥ y. Our definition of d in Eq.
(2) does not restrict u, because we simply attribute the difference between c and f 1 (y) to a function of the latent confounder u and noise.
Our causal prior graph in Figure 2b corresponds to a generative process in which to generate an observed triplet (x i , c i , y i ) we first draw a label y i and a confounder u i vector independently. Then we draw the discriminative concepts d i based on the label and generate the features x i jointly based on the concepts, label, and the confounder. Finally, we draw the observed concept vector c i based on the drawn concept and confounder vectors.
Both causal graphs reflect our assumption that the direction of causality is from the labels to concepts and then to the features, y → d → x, to ensure that u and y are marginally independent in Figure 2b. This direction also correspond to moving from more abstract class labels to concepts to detailed features. During estimation, we fit the functions in the x → d → y direction, because finding the statistical strength of an edge does not depend on its direction.
Estimation of the model in Figure 2b is challenging because there are two distinct paths for the information from the labels y to reach the features x. Our solution is to prioritize the bottleneck path and estimate the y → d → x, then estimate the residuals of the regression using the y → x direct path. Our two-stage estimation technique ensures that the predictive information of the features maximally flow through the concepts. In the next sections, we focus on the first phase and using a two-stage regression technique borrowed from the instrumental variables literature to eliminate the noise and confounding in estimation of the d → x link.
Algorithm 1 Debiased CBMs
Require: Data tuples (x i , c i , y i ) for i = 1, . . . , n. 1: Estimate a model d(y) = E[c|y] using (c i , y i ) pairs. 2: Train a neural network as an estimator for
p φ (d|x) using (x i , d i )) pairs. 3: Use pairs (x i , y i ) to estimate function g θ by fitting g θ (d)dp φ (d = d|x i ) to y i . 4: Compute the debiased explanations E[d|x i ] − 1 n n i=1 E[d|x i ] for i = 1, .
. . , n. 5: return The CBM defined by (p φ , g θ ) and the debiased explanations.
INSTRUMENTAL VARIABLES
Background on two-stage regression. In causal inference, instrumental variables (Stock, 2015;Pearl, 2009) denoted by z are commonly used to find the causal impact of a variable x on y when x and y are jointly influenced by an unobserved confounder u (i.e., x ← u → y). The key requirement is that z should be correlated with x but independent of the confounding variable u (i.e. z → x → y and z ⊥ ⊥ u). The commonly used 2-stage least squares first regresses x in terms of z to obtain x followed by regression of y in terms of x. Because of independence between z and u, x is also independent of u. Thus, in the second regression the confounding impact of u is eliminated. Our goal is to use the two-stage regression trick again to remove the confounding factors impacting features and concept vectors. The instrumental variable technique can be used for eliminating the biases due to the measurement errors (Carroll et al., 2006).
Two-Stage Regression for CBMs. In our causal graph in Figure 2b, the label y can be used for the study of the relationship between concepts d and features x. We predict d as a function of y and use it in place of the concepts in the concept bottleneck models. The graphical model corresponding to this procedure is shown in Figure 2c, where the link u → c is eliminated. In particular, given the independence relationship y ⊥ ⊥ u, we have d(y) = E[c|y] ⊥ ⊥ h(u). This is the basis for our debiasing method in the next section.
THE ESTIMATION METHOD
Our estimation uses the observation that in graph 2b the label vector y is a valid instrument for removing the correlations due to u. Combining Eqs. (1) and (2) we have c = f 1 (y) + h(u) + ε d . Taking expectation with respect to p(c|y), we have
E[c|y] = E[f 1 (y) + h(u) + ε d |y] = f 1 (y) + E[h(u)] + E[ε d ].(4)
The last step is because both u and ε d are independent of y. Thus, two terms are constant in terms of x and y and can be eliminated after estimation. Eq. (4) allows us to remove the impact of u and ε d and estimate the denoised and debiased d(y) = E[c|y]. We find E[c|y] using a neural network trained on (c i , y i ) pairs and use them as pseudo-observations in place of d i . Given our debiased prediction for the discriminative concepts d i , we can perform the CBMs' two-steps of x → d and d → y estimation.
Because we use expected values of c in place of d during the learning process (i.e., d(y) = E[c|y]), the debiased concept vectors have values within the ranges of original concept vectors c. Thus, we do not lose the human readability with the debiased concept vectors.
Incorporating Uncertainty in Prediction of Concepts. Our empirical observations show that prediction of the concepts from the features can be highly uncertain. Hence, we present a CBM estimator that takes into account the uncertainties in prediction of the concepts. We take the conditional expectation of the labels y given features x as follows
E[y|x] = E[g θ ( d)|x] = g θ (d)dp φ (d = d|x),(5)
where p φ (d|x) is the probability function, parameterized by φ, that captures the uncertainty in prediction of labels from features. The g θ (·) function predicts labels from the debiased concepts.
In summary, we perform the steps in Algorithm 1 to learn debiased CBMs. In Line 3, we approximate the integral using Monte Carlo approach by drawing from the distribution p φ (d|x) estimated in Line 2, see (Hartford et al., 2017). Given the labels' data type, we can use the appropriate loss functions and are not limited to the least squares loss. Line 4 removes the constant mean that can be due to noise or confounding, as we discussed after Eq. (4). In the common case of binary concept vectors, we use the debiased concepts estimated in Line 4 to rank the concepts in the order of their contribution to explanation of the predictions.
DISCUSSIONS
Debiasing TCAVs. While we presented our debiasing algorithm for the CBMs, we can easily use it to debias the TCAV (Kim et al., 2018) explanations too. We can use the first step to remove the bias due to the confounding and perform TCAV using d vectors, instead of c vectors. The TCAV method is attractive, because unlike CBMs, it analyzes the existing neural networks and does not need to define a new model.
Measuring Concept Completeness.
Our primary objective in modeling the concept incompleteness phenomenon in our causal prior graph is to show that our debiasing method does not need the concept completeness assumption to work properly. If we are interested in measuring the degree of completeness of the concepts, we can do so based on the definition by Yeh et al. (2020). To this end, we fit an unrestricted neural network q(x) = E[y|x] to the residuals in Step 3 of our debiasing algorithm. The function q(·) captures the residual information in x. We compare the improvement in prediction accuracy over the accuracy in Step 3 to quantify the degree of concept incompleteness. Because we first predict the labels y using the CBM and then fit q(x) to the residuals, we ensure that the predictive information maximally goes through the CBM link x ← d ← y.
Prior Work on Causal Concept-Based Explanation. Among the existing works on causal concept-based explanation, Goyal et al. (2019) propose a different causal prior graph to model the spurious correlations among the concepts and remove them using conditional variational autoencoders. In contrast, we aim at handling noise and spurious correlations between the features and concepts using the labels as instruments. Which approach is more appropriate for a given problem depends on the assumptions underlying that problem.
The Special Linear Gaussian Case. When the concepts have real continuous values, we might use a simple multivariate Gaussian distribution to model p(d|x) = N(x, σI), σ > 0. If we further use a linear regression to predict labels from the concepts, we can show that the steps simplify as follows (a code sketch follows the list):
1. Learn d(y) by predicting y_i → c_i.
2. Learn d̂(x) by predicting x_i → d_i.
3. Learn ŷ(d) by predicting d̂_i → y_i.

The above steps show that, given the linear Gaussian assumptions, steps 2 and 3 coincide with the sequential bottleneck training of CBMs (Koh et al., 2020). We only need to change the concepts from c_i to d_i to eliminate the noise and confounding bias. Note that we can similarly debias the other CBM training methods discussed in (Koh et al., 2020) by using d_i in place of c_i.
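Under these assumptions, the three steps reduce to a chain of linear regressions; the sketch below uses scikit-learn's Ridge as an illustrative estimator, not the paper's exact implementation.

```python
# Linear-Gaussian debiasing: 1) d(y) from y -> c, 2) d_hat(x) from x -> d,
# 3) labels from the predicted concepts d_hat.
import numpy as np
from sklearn.linear_model import Ridge

def debiased_cbm_linear(x, c, y, alpha=1e-3):
    step1 = Ridge(alpha=alpha).fit(y, c)                  # 1. regress concepts on labels
    d = step1.predict(y)                                  #    debiased pseudo-concepts
    step2 = Ridge(alpha=alpha).fit(x, d)                  # 2. regress pseudo-concepts on features
    step3 = Ridge(alpha=alpha).fit(step2.predict(x), y)   # 3. labels from predicted concepts
    return step2, step3, d

# toy usage with random arrays, just to show the shapes involved
rng = np.random.default_rng(0)
x, c, y = rng.normal(size=(500, 20)), rng.normal(size=(500, 10)), rng.normal(size=(500, 5))
concept_model, label_model, d = debiased_cbm_linear(x, c, y)
```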
SYNTHETIC DATA EXPERIMENTS
We create a synthetic dataset according to the following steps (a numpy sketch follows the list):
1. Generate n vectors y i ∈ R 100 with elements distributed according to unit normal distribution N (0, 1).
2. Generate n vectors u i ∈ R 100 with elements distributed according to unit normal distribution N (0, 1).
3. Generate n vectors ε d,i ∈ R 100 with elements distributed according to scaled normal distribution N (0, σ = 0.02).
4. Generate n vectors ε x,i ∈ R 100 with elements distributed according to scaled normal distribution N (0, σ = 0.02).
5. Generate matrices W 1 , W 2 , W 3 , W 4 ∈ R 100×100 with elements distributed according to scaled normal distribution N (0, σ = 0.1).
6. Compute d i = W 1 y i + ε d,i for i = 1, . . . , n.
7. Compute c i = d i + W 2 u i for i = 1, . . . , n.
8. Compute x i = W 3 d i + W 4 u i + ε x,i for i = 1, . . . , n.
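The generation steps above translate directly into numpy; the sample size below is illustrative.

```python
# Synthetic data following steps 1-8: labels y, confounder u, discriminative
# concepts d, observed (confounded) concepts c, and features x.
import numpy as np

rng = np.random.default_rng(0)
n, dim = 10000, 100
y = rng.normal(size=(n, dim))                              # step 1
u = rng.normal(size=(n, dim))                              # step 2
eps_d = rng.normal(scale=0.02, size=(n, dim))              # step 3
eps_x = rng.normal(scale=0.02, size=(n, dim))              # step 4
W1, W2, W3, W4 = (rng.normal(scale=0.1, size=(dim, dim)) for _ in range(4))  # step 5

d = y @ W1.T + eps_d                                       # step 6
c = d + u @ W2.T                                           # step 7
x = d @ W3.T + u @ W4.T + eps_x                            # step 8
```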
In Figure 3a, we plot the correlation between the true unconfounded and noiseless concepts W_1 y and the concept vectors estimated with the regular two-step procedure (without debiasing) and with our debiasing method, as a function of the sample size n. The results show that the bias due to confounding does not vanish as we increase the sample size, and that our debiasing technique makes the results closer to the true discriminative concepts.
In the ideal case, when the confounding impact is orthogonal to the impact of the labels, our debiasing algorithm recovers the true concepts more accurately. Figure 3b demonstrates the scenario where W_1 ⊥ W_2, W_4 in the synthetic data generation. To generate matrices with this constraint, we first generate a unitary matrix Q and four diagonal matrices Λ_1, ..., Λ_4 with diagonal elements drawn from Uniform(0.1, 1). This choice of the distribution for the diagonal elements caps the condition number of the matrices at 10. To satisfy the orthogonality constraint, we set the first 50 diagonal elements of Λ_2 and Λ_4 and the last 50 diagonal elements of Λ_1 to zero. We compute the matrices as W_j = QΛ_jQ^⊤ for j = 1, ..., 4. The orthogonality allows perfect separation of the u and y impacts and perfect debiasing by our method as the sample size grows.
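A sketch of this orthogonal construction; the assertion checks that the zeroed diagonal blocks indeed make W_1 orthogonal to W_2.

```python
# Construct W_j = Q Lambda_j Q^T with disjoint diagonal supports so that the
# column spaces of W1 and of W2, W4 do not overlap.
import numpy as np

rng = np.random.default_rng(0)
dim = 100
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))      # random orthogonal (unitary) matrix
lams = [rng.uniform(0.1, 1.0, size=dim) for _ in range(4)]
lams[0][dim // 2:] = 0.0      # last 50 diagonal elements of Lambda_1 set to zero
lams[1][:dim // 2] = 0.0      # first 50 diagonal elements of Lambda_2 set to zero
lams[3][:dim // 2] = 0.0      # first 50 diagonal elements of Lambda_4 set to zero
W1, W2, W3, W4 = (Q @ np.diag(l) @ Q.T for l in lams)
assert np.allclose(W1 @ W2, 0)   # orthogonality of the u- and y-impact subspaces
```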
In Figure 6, we repeat the synthetic experiments in a higher-noise regime where the standard deviation of the noise variables ε_d and ε_x is set to σ = 0.1. The results confirm our main claim about our debiasing algorithm.
CUB-200-2011 EXPERIMENTS
Dataset and preprocessing. We evaluate the performance of the proposed approach on the CUB-200-2011 dataset (Wah et al., 2011). The dataset includes 11788 pictures (in 5994/5794 train/test partitions) of 200 different types of birds, annotated both for the bird type and for 312 different concepts about each picture. The concept annotations are binary: whether the concept exists or not. However, for each statement, a four-level certainty score has also been assigned: 1: not visible, 2: guessing, 3: probably, and 4: definitely. We combine the binary annotation and the certainty score to create a 7-level ordinal variable as the annotation for each image, as summarized in Table 1. For simplicity, we map the 7-level ordinal values to uniformly spaced values in the [0, 1] interval. We randomly choose 15% of the training set and hold it out as the validation set.
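A sketch of the Table 1 mapping, assuming the dataset's 1-4 certainty codes; the helper name is ours.

```python
# Combine (existence, certainty) annotations into a 7-level ordinal score and
# map it uniformly into [0, 1], matching Table 1.
def concept_score(exists: int, certainty: int) -> float:
    if exists:                  # Exists: not visible=3 ... definitely=6
        ordinal = {1: 3, 2: 4, 3: 5, 4: 6}[certainty]
    else:                       # Doesn't exist: definitely=0 ... not visible=3
        ordinal = {4: 0, 3: 1, 2: 2, 1: 3}[certainty]
    return ordinal / 6.0

assert concept_score(1, 4) == 1.0 and concept_score(0, 4) == 0.0
```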
The results in Figure 1. To compare the association strength between y and c with the association strength between x and c, we train two predictors of the concepts, ĉ(x) and ĉ(y). We use TorchVision's pre-trained ResNet152 network (He et al., 2016) for prediction of the concepts from the images. Because the labels y are categorical, ĉ(y) is simply the average concept annotation score per class. We use the Spearman correlation to find the association strengths in the pairs (ĉ(x), c) and (ĉ(y), c) because the concept annotations are ordinal numbers. The concept ids on the x-axis are sorted in terms of increasing values of ρ(ĉ(y), c).
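The per-concept association strengths can be computed with scipy's Spearman correlation; the helper below is an illustrative sketch.

```python
# Spearman correlation of each concept's annotations with its predictions.
import numpy as np
from scipy.stats import spearmanr

def per_concept_spearman(pred, annot):
    """pred, annot: (n_images, n_concepts) arrays; returns rho per concept."""
    return np.array([spearmanr(pred[:, j], annot[:, j]).correlation
                     for j in range(annot.shape[1])])
```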
The top ten concepts with the largest values of ρ(ĉ(x), c) − ρ(ĉ(y), c) are 'has back color::green', 'has upper tail color::green', 'has upper tail color::orange', 'has upper tail color::pink', 'has back color::rufous', 'has upper tail color::purple', 'has back color::pink', 'has upper tail color::iridescent', 'has back color::purple', and 'has back color::iridescent'. These concepts are all related to color and can easily be confounded by the context of the images.
Training details for Algorithm 1. We model the distribution of the concept logits as Gaussians with means equal to ResNet152's logit outputs and a diagonal covariance matrix. We estimate the variance for each concept using the logits of the true concept annotation scores, clamped into [0.05, 0.95] to avoid large logit values. In each iteration of the training loop for Line 3, we draw 25 samples from the estimated p(d|x). The predictor of labels from concepts (the function g(·) in Eq. (5)) is a three-layer feed-forward neural network with layer sizes (312, 312, 200). There is a skip connection from the input to the penultimate layer. All algorithms are trained with the Adam optimization algorithm (Kingma & Ba, 2014).
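A sketch of the g_θ(·) predictor described above; the layer sizes follow the text, while the exact placement of the skip connection and the activations are our assumptions.

```python
# Three-layer MLP over the 312 concepts with a skip connection from the input
# to the penultimate layer, as described in the training details.
import torch
import torch.nn as nn

class ConceptToLabel(nn.Module):
    def __init__(self, n_concepts=312, n_classes=200):
        super().__init__()
        self.fc1 = nn.Linear(n_concepts, 312)
        self.fc2 = nn.Linear(312, 312)
        self.out = nn.Linear(312 + n_concepts, n_classes)   # penultimate layer sees the skip

    def forward(self, d):
        h = torch.relu(self.fc1(d))
        h = torch.relu(self.fc2(h))
        return self.out(torch.cat([h, d], dim=-1))          # skip connection from the input
```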
Quantitative Results. Compared to the baseline algorithm, our debiasing technique increases the average Spearman correlation between ĉ(x) and ĉ(y) from 0.406 to 0.508. For the ten concepts listed above, our algorithm increases the average Spearman correlation from 0.283 to 0.389. Our debiasing also improves generalization in predicting the image labels: the top-5 accuracy increases from 39.5% to 49.3%.
To show that our proposed debiasing accurately ranks the concepts in terms of their explanation of the predictions, we use the RemOve And Retrain (ROAR) framework (Hooker et al., 2019). In the ROAR framework, we sort the concepts in ascending order of the scores E[d|x_i] − (1/n) Σ_{i=1}^{n} E[d|x_i]. Then, we mask (set to zero) the least explanatory x% of the concepts using the scores and retrain the g_θ(d) function. We perform the procedure for x ∈ {0, 10, 20, ..., 80, 90, 95} and record the testing top-5 accuracy of predicting the labels y. We repeat the ROAR experiments 3 times and report the average accuracy as we vary the masking percentage. Figure 4 shows the ROAR evaluation of the regular and debiased algorithms. Because the debiased algorithm is more accurate, for easier comparison, we normalize both curves by dividing them by their accuracy at masking percentage x = 0%. An immediate observation is that the plot for the debiased algorithm stays above the regular one, which is a clear indication of its superior performance in identifying the least explanatory concepts. The results show several additional interesting insights too. First, the prediction of the bird species largely relies on a sparse set of concepts, as we can mask 95% of the concepts and still obtain a decent accuracy. Second, masking a small percentage of irrelevant concepts reduces the noise in the features and improves the generalization performance of both algorithms. Our debiased algorithm is more successful by being faster at finding the noisy features before x = 20% masking. Finally, the debiased accuracy curve is less steep after x = 80%, which again indicates its success in finding the most explanatory concepts.

Figure 4: The ROAR evaluation: We mask x% of the concepts that are identified by the methods as less explanatory of the labels and retrain the g_θ(·) function. We measure the change in the accuracy of predicting the labels y as we increase the masking percentage. For better comparison of the trends, we have normalized them by their first data point (x = 0%).

Figure 5: Twelve example images where the debiasing helps. A common pattern is that the image context has either prevented or misled the annotator from accurately annotating the concepts. From left to right, the birds are 'Brandt Cormorant', 'Pelagic Cormorant', 'Fish Crow', 'Fish Crow', 'Fish Crow', 'Ivory Gull', 'Ivory Gull', 'Green Violetear', 'Green Violetear', 'Cape Glossy Starling', 'Northern Waterthrush', 'Northern Waterthrush'.
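A sketch of the ROAR loop described above; `train_g` and `top5_accuracy` are assumed helpers, and the aggregation of the per-sample scores into a single per-concept ranking score is our simplification.

```python
# Mask the least explanatory x% of the concepts and retrain the label
# predictor, recording accuracy as the masking fraction grows.
import numpy as np

def roar_curve(d_hat, y, concept_scores, train_g, top5_accuracy,
               fractions=(0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95)):
    order = np.argsort(concept_scores)          # least explanatory concepts first
    accs = []
    for frac in fractions:
        masked = d_hat.copy()
        k = int(len(order) * frac / 100)
        masked[:, order[:k]] = 0.0               # zero out the bottom-ranked concepts
        g = train_g(masked, y)                   # retrain g_theta on the masked concepts
        accs.append(top5_accuracy(g, masked, y))
    return accs
```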
Qualitative analysis of the results. In Figure 5, we show 12 images for which the debiased concepts d and the annotated concepts c differ significantly. A common pattern among the examples is that the context of the image does not allow accurate annotation by the annotators. In images 3, 4, 5, 6, 7, 11, and 12 in Figure 5, the ten color-related concepts listed at the beginning are all set to 0.5, indicating that the annotators have failed in annotation. However, our algorithm correctly identifies that, for example, Ivory Gulls do not have green-colored backs, predicting 0.08, which is closer to the per-class average ĉ(y) = 0.06 than the annotated c = 0.5.
Another pattern is the impact of the color of the environment on the accuracy of the annotations. For example, the second image from the left is an image of a Pelagic Cormorant, whose back and upper tail colors are unlikely to be green, with per-class averages of 0.12 and 0.07, respectively. However, because of the color of the image and the reflections, the annotator has assigned 1.0 to both the 'has back color::green' and 'has upper tail color::green' concepts. Our algorithm predicts 0.11 and 0.16 for these two features, respectively, which are closer to the per-class averages. In Table 2, we list six examples to show the superior accuracy of the debiased CBM in ranking the concepts in terms of their explanation power. Moreover, in Table 3 in the appendix, we list the top and bottom concepts in terms of their predictive uncertainty in our debiased CBM.
CONCLUSIONS AND FUTURE WORKS
Studying concept-based explanation techniques, we provided evidence for the potential existence of spurious associations between the features and concepts due to unobserved latent variables or noise. We proposed a new causal prior graph that models the impact of noise and latent confounding on the estimated concepts. We showed that, using the labels as instruments, we can remove the impact of the context from the explanations. Our experiments showed that our debiasing technique not only improves the quality of the explanations, but also improves the accuracy of predicting labels through the concepts. As future work, we will investigate other two-stage-regression techniques to find the most accurate debiasing method.
A HIGH-NOISE SYNTHETIC EXPERIMENTS
Table 3: The top 10 most and least uncertain concepts identified by the debiased algorithm.

Certainty Level | Concepts
Most Uncertain | (194) has nape color::black, (260) has primary color::black, (164) has forehead color::black, (7) has bill shape::all-purpose, (132) has throat color::black, (305) has crown color::black, (133) has throat color::white, (150) has bill length::about the same as head, (8) has bill shape::cone, (152) has bill length::shorter than head
Least Uncertain | (216) has wing shape::tapered-wings, (215) has wing shape::broad-wings, (217) has wing shape::long-wings, (214) has wing shape::pointed-wings, (77) has tail shape::fan-shaped tail, (213) has wing shape::rounded-wings, (79) has tail shape::squared tail, (83) has upper tail color::purple, (82) has upper tail color::iridescent, (75) has tail shape::rounded tail
Figure 1: Spearman correlation coefficients (ρ) of the predictors of the concepts given features ĉ(x) and labels ĉ(y) for the 312 concepts in the test partition of the CUB-200-2011 dataset.

Figure 2: (a) The ideal view of the causal relationships between the features x, concepts c, and labels y. (b) In a more realistic setting, the unobserved confounding variable u impacts both x and c. The shared information between x and y goes through the discriminative part of the concepts d. We also model the completeness of the concepts via a direct edge from the features x to the labels y. (c) When we use d(y) = E[c|y] in place of d and c, we eliminate the confounding link u → c.

Figure 3: Correlation between the estimated concept vectors and the true discriminative concept vectors as the number of data points grows. Notice the different ranges of the y-axes and the logarithmic scale of the x-axes.

Figure 6: Correlation between the estimated concept vectors and the true discriminative concept vectors as the number of data points grows. Notice the different ranges of the y-axes and the logarithmic scale of the x-axes.
EXPERIMENTS
Evaluation of explanation algorithms is notoriously difficult. Thus, we first present experiments with synthetic data to show that our debiasing technique improves the explanation accuracy when we know the true explanations. Then, on the CUB-200-2011 dataset, we use the ROAR (Hooker et al., 2019) framework to show that the debiasing improves the explanation accuracy. Finally, using several examples, we identify the circumstances in which the debiasing helps.
Table 1: Mapping the concept annotations to real values.

Annotation | Certainty | Ordinal Score | Numeric Map
Doesn't Exist | definitely | 0 | 0
Doesn't Exist | probably | 1 | 1/6
Doesn't Exist | guessing | 2 | 2/6
Doesn't Exist | not visible | 3 | 3/6
Exists | not visible | 3 | 3/6
Exists | guessing | 4 | 4/6
Exists | probably | 5 | 5/6
Exists | definitely | 6 | 1
Table 2: Examples of differences between regular and debiased algorithms in ranking the concepts (top 15 concepts per image).

Image 1 — Debiased: has throat color::black, has head pattern::plain, has forehead color::black, has breast color::black, has underparts color::black, has nape color::black, has crown color::black, has primary color::black, has bill color::black, has belly color::black, has wing color::orange, has breast pattern::solid, has upperparts color::orange, has wing pattern::multi-colored, has bill length::about the same as head. Regular: has primary color::black, has wing color::black, has throat color::black, has upperparts color::black, has breast color::black, has primary color::blue, has underparts color::black, has belly color::black, has back color::black, has nape color::black, has upperparts color::blue, has tail pattern::solid, has crown color::blue, has under tail color::black, has forehead color::blue.

Image 2 — Debiased: has primary color::red, has crown color::red, has forehead color::red, has throat color::red, has nape color::red, has breast color::red, has underparts color::red, has belly color::red, has forehead color::rufous, has upperparts color::red, has crown color::rufous, has nape color::rufous, has wing pattern::multi-colored, has primary color::rufous, has throat color::rufous. Regular: has underparts color::grey, has breast color::grey, has belly color::grey, has belly pattern::multi-colored, has breast pattern::multi-colored, has nape color::grey, has bill length::shorter than head, has breast color::red, has throat color::grey, has upperparts color::red, has back pattern::multi-colored, has underparts color::red, has primary color::grey, has belly color::red, has throat color::red.

Image 3 — Debiased: has bill length::about the same as head, has belly pattern::spotted, has underparts color::black, has bill shape::dagger, has breast color::black, has belly color::black, has breast pattern::spotted, has back pattern::spotted, has wing pattern::spotted, has nape color::red, has back color::black, has tail pattern::spotted, has under tail color::black, has upper tail color::black, has primary color::buff. Regular: has primary color::brown, has wing color::brown, has upperparts color::brown, has crown color::brown, has back color::brown, has forehead color::brown, has nape color::brown, has throat color::white, has breast pattern::spotted, has under tail color::brown, has breast color::white, has belly color::white, has underparts color::white, has upper tail color::brown, has breast color::brown.

Image 4 — Debiased: has shape::duck-like, has bill shape::spatulate, has size::medium (9 - 16 in), has bill length::about the same as head, has throat color::buff, has underparts color::brown, has breast color::brown, has belly pattern::spotted, has crown color::brown, has primary color::brown, has belly color::brown, has nape color::brown, has forehead color::brown, has upperparts color::brown, has belly color::purple. Regular: has belly color::grey, has underparts color::grey, has breast color::grey, has belly color::pink, has belly color::rufous, has underparts color::purple, has belly color::purple, has underparts color::pink, has throat color::grey, has primary color::grey, has belly color::green, has underparts color::rufous, has underparts color::green, has belly color::iridescent, has forehead color::grey.

Image 5 — Debiased: has throat color::black, has forehead color::black, has crown color::black, has primary color::black, has nape color::black, has breast color::red, has belly color::white, has underparts color::white, has underparts color::red, has back color::black, has primary color::white, has bill shape::cone, has breast pattern::multi-colored, has primary color::red, has upperparts color::black. Regular: has nape color::black, has primary color::black, has nape color::rufous, has primary color::red, has wing color::orange, has nape color::red, has breast color::red, has crown color::black, has upperparts color::orange, has crown color::red, has underparts color::white, has upperparts color::rufous, has forehead color::black, has throat color::red, has back color::purple.

Image 6 — Debiased: has wing color::blue, has upperparts color::blue, has crown color::rufous, has primary color::blue, has bill color::rufous, has back color::blue, has under tail color::blue, has bill color::red, has upper tail color::blue, has wing pattern::multi-colored, has nape color::rufous, has crown color::brown, has belly color::brown, has nape color::brown, has underparts color::brown. Regular: has primary color::red, has throat color::red, has underparts color::red, has breast color::red, has forehead color::red, has crown color::rufous, has crown color::red, has nape color::red, has belly color::red, has primary color::rufous, has throat color::rufous, has forehead color::rufous, has wing color::red, has nape color::rufous, has underparts color::rufous.

REFERENCES

Lennart Brocki and Neo Christopher Chung. Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models. In ICMLA, 2019.
Carrie J Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viegas, Greg S Corrado, Martin C Stumpe, et al. Human-centered tools for coping with imperfect algorithms during medical decision-making. In CHI, 2019.

Raymond J Carroll, David Ruppert, Leonard A Stefanski, and Ciprian M Crainiceanu. Measurement error in nonlinear models: a modern perspective. CRC press, 2006.

James R Clough, Ilkay Oksuz, Esther Puyol-Antón, Bram Ruijsink, Andrew P King, and Julia A Schnabel. Global and local interpretability for cardiac MRI classification. In MICCAI, 2019.

Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. In NeurIPS, pp. 9273-9282, 2019.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.

Yash Goyal, Uri Shalit, and Been Kim. Explaining Classifiers with Causal Concept Effect (CaCE). arXiv:1907.07165, 2019.

Mara Graziani, Vincent Andrearczyk, and Henning Müller. Regression concept vectors for bidirectional explanations in histopathology. In Understanding and Interpreting Machine Learning in Medical Image Computing Applications, pp. 124-132. Springer, 2018.

Mandana Hamidi-Haines, Zhongang Qi, Alan Fern, Fuxin Li, and Prasad Tadepalli. Interactive Naming for Explaining Deep Neural Networks: A Formative Study. arXiv:1812.07150, 2018.

Jason Hartford, Greg Lewis, Kevin Leyton-Brown, and Matt Taddy. Deep IV: A flexible approach for counterfactual prediction. In ICML, pp. 1414-1423, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.

Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. A benchmark for interpretability methods in deep neural networks. In NeurIPS, 2019.

Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In ICML, pp. 2668-2677, 2018.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.

Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. Concept Bottleneck Models. In ICML, 2020.

Max Losch, Mario Fritz, and Bernt Schiele. Interpretability beyond classification output: Semantic bottleneck networks. arXiv:1907.10882, 2019.

Judea Pearl. Causality. Cambridge university press, 2009.

Conner Sprague, Eric B Wendoloski, and Ingrid Guch. Interpretable AI for Deep Learning-Based Meteorological Applications. In American Meteorological Society Annual Meeting. AMS, 2019.

James H Stock. Instrumental variables in statistics and econometrics. International Encyclopedia of the Social & Behavioral Sciences, 2015.

C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.

Chih-Kuan Yeh, Been Kim, Sercan O Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On Completeness-aware Concept-Based Explanations in Deep Neural Networks. In NeurIPS, 2020. |
3,481,593 | FIDELITY-WEIGHTED LEARNING | Training deep neural networks requires many training samples, but in practice training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label-quality into account when learning the data representation, we could get the best of both worlds. To this end, we propose "fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on two tasks in information retrieval and natural language processing where we outperform state-of-the-art alternative semi-supervised methods, indicating that our approach makes better use of strong and weak labels, and leads to better task-dependent data representations. | [
15108580, 10898149, 13886408, 3179848, 6628106, 15371885, 8696462, 15766287, 8125776, 2778800, 17175925, 6458072, 6751421, 17597823 ] | FIDELITY-WEIGHTED LEARNING
Mostafa Dehghani
University of Amsterdam
Arash Mehrjou
MPI for Intelligent Systems
Stephan Gouws
Google Brain
Jaap Kamps
University of Amsterdam
Bernhard Schölkopf
MPI for Intelligent Systems
FIDELITY-WEIGHTED LEARNING
Training deep neural networks requires many training samples, but in practice training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label-quality into account when learning the data representation, we could get the best of both worlds. To this end, we propose "fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on two tasks in information retrieval and natural language processing where we outperform state-of-the-art alternative semi-supervised methods, indicating that our approach makes better use of strong and weak labels, and leads to better task-dependent data representations.
INTRODUCTION
The success of deep neural networks to date depends strongly on the availability of labeled data which is costly and not always easy to obtain. Usually it is much easier to obtain small quantities of high-quality labeled data and large quantities of unlabeled data. The problem of how to best integrate these two different sources of information during training is an active pursuit in the field of semi-supervised learning (Chapelle et al., 2006). However, for a large class of tasks it is also easy to define one or more so-called "weak annotators", additional (albeit noisy) sources of weak supervision based on heuristics or "weaker", biased classifiers trained on e.g. non-expert crowd-sourced data or data from different domains that are related. While easy and cheap to generate, it is not immediately clear if and how these additional weakly-labeled data can be used to train a stronger classifier for the task we care about. More generally, in almost all practical applications machine learning systems have to deal with data samples of variable quality. For example, in a large dataset of images only a small fraction of samples may be labeled by experts and the rest may be crowd-sourced using e.g. Amazon Mechanical Turk (Veit et al., 2017). In addition, in some applications, labels are intentionally perturbed due to privacy issues (Wainwright et al., 2012;Papernot et al., 2017).
Assuming we can obtain a large set of weakly-labeled data in addition to a much smaller training set of "strong" labels, the simplest approach is to expand the training set by including the weakly-supervised samples (all samples are equal). Alternatively, one may pretrain on the weak data and then fine-tune on observations from the true function or distribution (which we call strong data). Indeed, it has recently been shown that a small amount of expert-labeled data can be augmented in such a way by a large set of raw data, with labels coming from a heuristic function, to train a more accurate neural ranking model (Dehghani et al., 2017b). The downside is that such approaches are oblivious to the amount or source of noise in the labels.

Figure 1: Step 1: Pre-train student on weak data; Step 2: Fit teacher to observations from the true function; and Step 3: Fine-tune student on labels generated by teacher, taking the confidence into account. Red dotted borders and blue solid borders depict components with trainable and non-trainable parameters, respectively.
In this paper, we argue that treating weakly-labeled samples uniformly (i.e. each weak sample contributes equally to the final classifier) ignores potentially valuable information of the label quality. Instead, we propose Fidelity-Weighted Learning (FWL), a Bayesian semi-supervised approach that leverages a small amount of data with true labels to generate a larger training set with confidence-weighted weakly-labeled samples, which can then be used to modulate the fine-tuning process based on the fidelity (or quality) of each weak sample. By directly modeling the inaccuracies introduced by the weak annotator in this way, we can control the extent to which we make use of this additional source of weak supervision: more for confidently-labeled weak samples close to the true observed data, and less for uncertain samples further away from the observed data.
We propose a setting consisting of two main modules. One is called the student and is in charge of learning a suitable data representation and performing the main prediction task, the other is the teacher which modulates the learning process by modeling the inaccuracies in the labels. We explain our approach in much more detail in Section 2, but at a high level it works as follows (see Figure 1): We pretrain the student network on weak data to learn an initial task-dependent data representation which we pass to the teacher along with the strong data. The teacher then learns to predict the strong data, but crucially, based on the student's learned representation. This then allows the teacher to generate new labeled training data from unlabeled data, and in the process correct the student's mistakes, leading to a better final data representation and better final predictor.
We introduce the proposed FWL approach in more detail in Section 2. We then present our experimental setup in Section 3 where we evaluate FWL on a toy task and two real-world tasks, namely document ranking and sentence sentiment classification. In all cases, FWL outperforms competitive baselines and yields state-of-the-art results, indicating that FWL makes better use of the limited true labeled data and is thereby able to learn a better and more meaningful task-specific representation of the data. Section 4 provides analysis of the bias-variance trade-off and the learning rate, suggesting also to view FWL from the perspective of Vapnik's learning with privileged information (LUPI) framework (Vapnik & Izmailov, 2015). Section 5 situates FWL relative to related work, and we end the paper by drawing the main conclusions in Section 6.
FIDELITY-WEIGHTED LEARNING (FWL)
In this section, we describe our proposed FWL approach for semi-supervised learning when we have access to weak supervision (e.g. heuristics or weak annotators). We assume we are given a large set of unlabeled data samples, a heuristic labeling function called the weak annotator, and a small set of high-quality samples labeled by experts, called the strong dataset, consisting of tuples of training samples x_i and their true labels y_i, i.e. D_s = {(x_i, y_i)}. We consider the latter to be observations from the true target function that we are trying to learn. We use the weak annotator to generate labels for the unlabeled samples. Generated labels are noisy due to the limited accuracy of the weak annotator. This gives us the weak dataset consisting of tuples of training samples x_i and their weak labels ỹ_i, i.e. D_w = {(x_i, ỹ_i)}. Note that we can generate a large amount of weak training data D_w at almost no cost using the weak annotator. In contrast, we have only a limited amount of observations from the true function, i.e. |D_s| ≪ |D_w|.
Algorithm 1 Fidelity-Weighted Learning.
1: Train the student on samples from the weakly-annotated data D_w.
2: Freeze the representation-learning component ψ(.) of the student and train the teacher on the strong data D_s = {(ψ(x_j), y_j)}. Apply the teacher to unlabeled samples x_t to obtain the soft dataset D_sw = {(x_t, ȳ_t)}, where ȳ_t = T(x_t) is the soft label and, for each instance x_t, the uncertainty of its label, Σ(x_t), is provided by the teacher.
3: Train the student on samples from D_sw with SGD and modulate the step-size η_t according to the per-sample quality estimated using the teacher (Equation 1).
Our proposed setup comprises a neural network called the student and a Bayesian function approximator called the teacher. The training process consists of three phases which we summarize in Algorithm 1 and Figure 1.
Step 1 Pre-train the student on D w using weak labels generated with the weak annotator.
The goal of this stage is to learn a reasonably good representation of the data for the given task. The student function is a neural network consisting of two parts. The first part ψ(.) learns the data representation and the second part φ(.) performs the prediction task (e.g. classification). Therefore the overall function isŷ = φ(ψ(x i )). The student is trained on all samples of the weak dataset
D w = {(x i ,ỹ i )}.
For brevity, in the following, we will refer to both data sample x i and its representation ψ(x i ) by x i when it is obvious from the context.
Step 2 Train the teacher on the strong data (ψ(x j ), y j ) ∈ D s represented in terms of the student representation ψ(.) and then use the teacher to generate a soft dataset D sw consisting of sample, predicted label, confidence pairs for all data samples.
We use a Gaussian process as the teacher to capture the label uncertainty in terms of the student representation, estimated w.r.t. the strong data. We explain the finer details of the GP in Appendix C, and just present the overall description here. A prior mean and co-variance function is chosen for the GP. The learned embedding function ψ(·) from Step 1 is then used to map the data samples to dense vectors as input to the GP. The GP is trained on this representation of the strong dataset to learn the posterior mean m_post (used to generate soft labels) and the posterior co-variance K_post(., .) (which represents label uncertainty). We then create the soft dataset D_sw = {(x_t, ȳ_t)} using the posterior GP, input samples x_t from D_w ∪ D_s, and predicted labels ȳ_t with their associated uncertainties as computed by T(x_t) and Σ(x_t):
T(x_t) = g(m_post(x_t)),    Σ(x_t) = h(K_post(x_t, x_t))
The generated labels are called soft labels. Therefore, we refer to D_sw as a soft dataset. g(.) transforms the output of the GP to the suitable output space. For example, in classification tasks, g(.) would be the softmax function to produce probabilities that sum up to one. For multidimensional-output tasks where a vector of variances is provided by the GP, the vector K_post(x_t, x_t) is passed through an aggregating function h(.) to generate a scalar value for the uncertainty of each sample. Note that we train the GP only on the strong dataset D_s but then use it to generate soft labels ȳ_t = T(x_t) and uncertainty Σ(x_t) for samples belonging to D_sw = D_w ∪ D_s.
In practice, we furthermore divide the space of data into several regions and assign each region a separate GP trained on samples from that region. By this division of space, we take advantage of the knowledge learned by several teachers, each an expert on its specific region of data space. As a nice side-effect, this also solves the scalability issues of GPs in that we can increase the number of regions until the number of points in each region is tractable with a single GP, and train these models in parallel. See Algorithm 2 in Appendix A for the detailed description.
Step 3 Fine-tune the weights of the student network on the soft dataset, while modulating the magnitude of each parameter update by the corresponding teacher-confidence in its label.
The student network of Step 1 is fine-tuned using samples from the soft dataset D_sw = {(x_t, ȳ_t)}, where ȳ_t = T(x_t). The corresponding uncertainty Σ(x_t) of each sample is mapped to a confidence value according to Equation 1 below, and this is then used to determine the step size for each iteration of stochastic gradient descent (SGD). Intuitively, for data points where we have true labels, the uncertainty of the teacher is almost zero, which means we have high confidence and a large step-size for updating the parameters. However, for data points where the teacher is not confident, we down-weight the training steps of the student. This means that at these points, we keep the student function as it was trained on the weak data in Step 1.
More specifically, we update the parameters of the student by training on D sw using SGD:
w* = argmin_{w ∈ W} (1/N) Σ_{(x_t, ȳ_t) ∈ D_sw} l(w, x_t, ȳ_t) + R(w),    w_{t+1} = w_t − η_t (∇l(w, x_t, ȳ_t) + ∇R(w))
where l(·) is the per-example loss, η_t is the total learning rate, N is the size of the soft dataset D_sw, w denotes the parameters of the student network, and R(.) is the regularization term.
We define the total learning rate as η t = η 1 (t)η 2 (x t ), where η 1 (t) is the usual learning rate of our chosen optimization algorithm that anneals over training iterations, and η 2 (x t ) is a function of the label uncertainty Σ(x t ) that is computed by the teacher for each data point. Multiplying these two terms gives us the total learning rate. In other words, η 2 represents the fidelity (quality) of the current sample, and is used to multiplicatively modulate η 1 . Note that the first term does not necessarily depend on each data point, whereas the second term does. We propose
η_2(x_t) = exp[−β Σ(x_t)],    (1)
to exponentially decrease the learning rate for data point x_t if its corresponding soft label ȳ_t is unreliable (far from a true sample). In Equation 1, β is a positive scalar hyper-parameter. Intuitively, a small β results in a student that listens more carefully to the teacher and copies its knowledge, while a large β makes the student pay less attention to the teacher, staying with its initial weak knowledge. More concretely, as β → 0 the student places more trust in the labels ȳ_t estimated by the teacher and copies the teacher's knowledge. On the other hand, as β → ∞, the student puts less weight on the extrapolation ability of the GP, and the parameters of the student are not affected by the correcting information from the teacher.
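A minimal sketch of the resulting update rule: for plain SGD, scaling each per-sample loss by η_2(x_t) = exp(−β Σ(x_t)) is equivalent to scaling its step size (for adaptive optimizers this holds only approximately). Framework details are ours.

```python
# One fidelity-weighted fine-tuning step on a batch of soft-labeled samples.
import torch

def fwl_step(student, opt, x_batch, y_bar, sigma, beta=1.0):
    """sigma: per-sample teacher uncertainties Sigma(x_t) for the batch."""
    opt.zero_grad()
    per_sample = torch.nn.functional.mse_loss(student(x_batch), y_bar, reduction="none")
    eta2 = torch.exp(-beta * sigma)                      # per-sample fidelity weights (Eq. 1)
    (eta2 * per_sample.view(len(x_batch), -1).mean(dim=1)).mean().backward()
    opt.step()                                           # eta1 comes from the optimizer's schedule
```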
EXPERIMENTS
In this section, we apply FWL first to a toy problem and then to two different real tasks: document ranking and sentiment classification. The neural networks are implemented in TensorFlow (Abadi et al., 2015;Tang, 2016). GPflow (Matthews et al., 2017) is employed for developing the GP modules. For both tasks, we evaluate the performance of our method compared to the following baselines:
1. WA. The weak annotator, i.e. the unsupervised method used for annotating the unlabeled data.
2. NN_W. The student trained only on weak data.
3. NN_S. The student trained only on strong data.
4. NN_S+/W. The student trained on samples that are alternately drawn from D_w without replacement and D_s with replacement. Since |D_s| ≪ |D_w|, it oversamples the strong data.
5. NN_W→S. The student trained on the weak dataset D_w and fine-tuned on the strong dataset D_s.
6. NN_Wω→S. The student trained on the weak data, but the step-size of each weak sample is weighted by a fixed value 0 ≤ ω ≤ 1, and fine-tuned on strong data. As an approximation for the optimal value for ω, we have used the mean of η_2 of our model (below).
7. FWL \Σ. The student trained on the weakly labeled data and fine-tuned on examples labeled by the teacher without taking the confidence into account. This baseline is similar to (Veit et al., 2017).
8. FWL. Our FWL model, i.e. the student trained on the weakly labeled data and fine-tuned on examples labeled by the teacher using the confidence scores.
In the following, we introduce each task and the results produced for it, more detail about the exact student network and teacher GP for each task are in the appendix.
TOY PROBLEM
We first apply FWL to a one-dimensional toy problem to illustrate the various steps. Let f_t(x) = sin(x) be the true function (red dotted line in Figure 2a) from which a small set of observations D_s = {x_j, y_j} is provided (red points in Figure 2b). These observations might be noisy, in the same way that labels obtained from a human labeler could be. A weak annotator function f_w(x) = 2 sinc(x) (magenta line in Figure 2a) is provided as an approximation to f_t(.). The task is to obtain a good estimate of f_t(.) given the set D_s of strong observations and the weak annotator function f_w(.). We can easily obtain a large set of observations D_w = {x_i, ỹ_i} from f_w(.) at almost no cost (magenta points in Figure 2a).
We consider two experiments:
1. A neural network trained on weak data and then fine-tuned on strong data from the true function, which is the most common semi-supervised approach (Figure 2c).
2. A teacher-student framework trained with the proposed FWL approach.
As can be seen in Figure 2d, FWL, by taking label confidence into account, gives a better approximation of the true hidden function. We repeated the above experiment 10 times. The average RMSE of the student with respect to the true function on a set of test points over those 10 experiments was as follows (a sketch of the full pipeline follows the list):
1. Student trained on weak data (blue line in Figure 2a): 0.8406.
2. Student trained on weak data, then fine-tuned on true observations (blue line in Figure 2c): 0.5451.
3. Student trained on weak data, then fine-tuned on soft labels and confidence information provided by the teacher (blue line in Figure 2d): 0.4143 (best).
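The whole toy pipeline fits in a short script; the network size, learning rate, and scikit-learn GP (in place of GPflow) are illustrative choices, and numpy's sinc follows the normalized convention, which may differ from the paper's.

```python
# Toy FWL: pretrain on weak labels from 2*sinc(x), fit a GP teacher on a few
# strong points in the frozen learned representation, then fine-tune with
# confidence-weighted steps (eta_2 = exp(-beta * std)).
import numpy as np
import torch
import torch.nn as nn
from sklearn.gaussian_process import GaussianProcessRegressor

f_true, f_weak = np.sin, lambda x: 2 * np.sinc(x)
x_w = np.random.uniform(-4, 4, 2000)     # weak observations
x_s = np.random.uniform(-4, 4, 10)       # strong observations

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
psi = lambda x: net[:-1](torch.tensor(x, dtype=torch.float32).reshape(-1, 1))  # representation

def fit(xs, ys, weights=None, steps=2000):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    xs = torch.tensor(xs, dtype=torch.float32).reshape(-1, 1)
    ys = torch.tensor(ys, dtype=torch.float32).reshape(-1, 1)
    w = torch.ones_like(ys) if weights is None else torch.tensor(weights, dtype=torch.float32).reshape(-1, 1)
    for _ in range(steps):
        opt.zero_grad(); (w * (net(xs) - ys) ** 2).mean().backward(); opt.step()

fit(x_w, f_weak(x_w))                                                      # Step 1: pretrain on weak labels
gp = GaussianProcessRegressor().fit(psi(x_s).detach().numpy(), f_true(x_s))  # Step 2: teacher on strong data
y_bar, std = gp.predict(psi(x_w).detach().numpy(), return_std=True)          # soft labels + uncertainty
fit(x_w, y_bar, weights=np.exp(-1.0 * std))                                 # Step 3: fidelity-weighted fine-tune
```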
More details of the neural network and GP, along with the specification of the data used in the above experiment, are presented in Appendices C and E.1.

DOCUMENT RANKING

This task is the core information retrieval problem and is challenging as the ranking model needs to learn a representation for long documents and capture the notion of relevance between queries and documents. Furthermore, the size of publicly available datasets with query-document relevance judgments is unfortunately quite small (∼ 250 queries). We employ a state-of-the-art pairwise neural ranker architecture as the student (Dehghani et al., 2017b). In this model, ranking is cast as a regression task. Given each training sample x as a triple of query q and two documents d+ and d−, the goal is to learn a function F : {<q, d+, d−>} → R, which maps each data sample x to a scalar output value y indicating the probability of d+ being ranked higher than d− with respect to q.
The student follows the architecture proposed in (Dehghani et al., 2017b). The first layer of the network, i.e. the representation learning layer ψ : {<q, d+, d−>} → R^m, maps each input sample to an m-dimensional real-valued vector. In general, besides learning embeddings for words, the function ψ learns to compose word embeddings based on their global importance in order to generate query/document embeddings. The representation layer is followed by a simple fully-connected feed-forward network with a sigmoidal output unit to predict the probability of ranking d+ higher than d−. The general schema of the student is illustrated in Figure 3. More details are provided in Appendix B.1.

Table 1: Performance on the document ranking task (MAP and nDCG@20 on the two collections; superscripts mark statistically significant improvements over the correspondingly numbered baselines):
4 NN_S+/W | 0.2763^(123) | 0.4330^(123) | 0.1354^(123) | 0.2319^(123)
5 NN_W→S | 0.2810^(123) | 0.4372^(123) | 0.1346^(123) | 0.2317^(123)
6 NN_Wω→S | 0.2899^(12345) | 0.4431^(12345) | 0.1320^(1234) | 0.2309^(1234)
7 FWL \Σ | 0.2980^(12345) | 0.4516^(12345) | 0.1386^(12345) | 0.2340^(12345)
8 FWL | 0.3124^(1234567) | 0.4607^(1234567) | 0.1472^(1234567) | 0.2453^(1234567)
The teacher is implemented by the clustered GP algorithm. See Appendix C for more details.
The weak annotator is BM25 (Robertson & Zaragoza, 2009), a well-known unsupervised method for scoring query-document pairs based on statistics of the matched terms. More details are provided in Appendix D.1.
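For reference, a minimal BM25 scorer of the standard form; k1 and b take textbook defaults, not necessarily the configuration used in the experiments.

```python
# Score a query-document pair with BM25 using collection statistics.
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, n_docs, avg_len, k1=1.2, b=0.75):
    """doc_freq: term -> number of documents containing the term."""
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        if t not in tf:
            continue
        idf = math.log(1 + (n_docs - doc_freq[t] + 0.5) / (doc_freq[t] + 0.5))
        norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc_terms) / avg_len))
        score += idf * norm
    return score
```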
Descriptions of the data with weak labels and the data with true labels, as well as the setup of the document-ranking experiments, are presented in more detail in Appendix E.2.
Results and Discussions
We conducted k-fold cross validation on D s (the strong data) and report two standard evaluation metrics for ranking: mean average precision (MAP) of the top-ranked 1,000 documents and normalized discounted cumulative gain calculated for the top 20 retrieved documents (nDCG@20). Table 1 shows the performance on both datasets. As can be seen, FWL provides a significant boost on the performance over all datasets. In the ranking task, the student is designed in particular to be trained on weak annotations (Dehghani et al., 2017b), hence training the network only on weak supervision, i.e. NN W performs better than NN S . This can be due to the fact that ranking is a complex task requiring many training samples, while relatively few data with true labels are available.
Alternating between strong and weak data during training, i.e. NN_S+/W, seems to bring little (but statistically significant) improvement. However, we can obtain better results with the typical fine-tuning strategy, NN_W→S. We gain a further improvement by fine-tuning NN_W using labels generated by the teacher without considering their confidence scores, i.e. FWL \Σ. This means we merely augmented the fine-tuning process by generating a fine-tuning set using the teacher, which is better than D_s in terms of quantity and better than D_w in terms of quality. This baseline is equivalent to setting β = 0 in Equation 1. However, we see a big jump in performance when we use FWL to include the estimated label quality from the teacher, leading to the best overall results.
SENTIMENT CLASSIFICATION
In sentiment classification, the goal is to predict the sentiment (e.g., positive, negative, or neutral) of a sentence. Each training sample x consists of a sentence s and its sentiment labelỹ.
The student for the sentiment classification task is a convolutional model which has been shown to perform best on the dataset we used (Deriu et al., 2017;Severyn & Moschitti, 2015a;b;Deriu et al., 2016). The first layer of the network learns the function ψ(.) which maps input sentence s to a dense vector as its representation. The inputs are first passed through an embedding layer mapping the sentence to a matrix S ∈ R m×|s| , followed by a series of 1d convolutional layers with max-pooling. The representation layer is followed by feed-forward layers and a softmax output layer which returns the probability distribution over all three classes. Figure 4 presents the general schema of the architecture of the student. See Appendix B.2 for more details.
The teacher for this task is modeled by a GP. See Appendix C for more details.
The weak annotator is a simple unsupervised lexicon-based method (Hamdan et al., 2013; Kiritchenko et al., 2014), which estimates a distribution over sentiments for each sentence based on the sentiment labels of its terms. More details are provided in Appendix D.2. Specifications of the data with weak labels and the data with true labels, along with the detailed experimental setup, are given in Appendix E.3.
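A sketch of such a lexicon-based annotator; the toy lexicon, the neutral offset, and the softmax normalization are our assumptions, not the exact method used in the experiments.

```python
# Turn per-term polarities from a sentiment lexicon into a distribution over
# (negative, neutral, positive) for a sentence.
import numpy as np

LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "awful": -1.5}   # toy lexicon

def weak_sentiment(sentence, tau=0.25):
    scores = [LEXICON[w] for w in sentence.lower().split() if w in LEXICON]
    s = float(np.mean(scores)) if scores else 0.0
    logits = np.array([-s, tau - abs(s), s])      # negative, neutral, positive
    e = np.exp(logits - logits.max())
    return e / e.sum()
```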
Results and Discussion
We report Macro-F1, the official SemEval metric, in Table 2. We see that the proposed FWL is the best performing approach.
For this task, since the amount of data with true labels is larger compared to the ranking task, the performance of NN_S is acceptable. Alternately sampling from weak and strong data gives better results. Pretraining on weak labels and then fine-tuning the network on true labels further improves the performance. Weighting the gradient updates from weak labels during pretraining and fine-tuning the network with true labels, i.e. NN_Wω→S, seems to work quite well in this task. Similar to the ranking task, fine-tuning NN_W based on labels generated by the GP instead of data with true labels, regardless of the confidence scores, works better than standard fine-tuning.
Besides the baselines, we also report the best performing systems, which are also convolution-based models.
ANALYSIS
In this section, we provide further analysis of FWL by investigating the bias-variance trade-off and the learning rate.

HANDLING THE BIAS-VARIANCE TRADE-OFF

As mentioned in Section 2, β is a hyperparameter that controls the contribution of weak and strong data to the training procedure. In order to investigate its influence, we fixed everything in the model and ran the fine-tuning stage with different values of β ∈ {0.0, 0.1, 1.0, 2.0, 5.0} in all the experiments. Figure 5 illustrates the performance on the ranking (on the Robust04 dataset) and sentiment classification (on the SemEval14 dataset) tasks. For both sentiment classification and ranking, β = 1 gives the best results (higher scores are better). We also experimented on the toy problem with different values of β in three cases: 1) having 10 observations from the true function (same setup as Section 3.1), marked as "Toy Data" in the plot, 2) having only 5 observations from the true function, marked as "Toy Data *" in the plot, and 3) having f(x) = x + 1 as the weak function, which is an extremely bad approximator of the true function, marked as "Toy Data **" in the plot. For the "Toy Data" experiment, β = 1 turned out to be optimal (here, lower scores are better). However, for "Toy Data *", where we have an extremely small number of observations from the true function, setting β to a higher value acts as a regularizer by relying more on weak signals, and eventually leads to better generalization. On the other hand, for "Toy Data **", where the quality of the weak annotator is extremely low, lower values of β put more focus on the true observations. Therefore, β lets us control the bias-variance trade-off in these extreme cases.

Figure 6: (a) Models trained on different amounts of weak data. (b) Models trained on different amounts of strong data.
A GOOD TEACHER IS BETTER THAN MANY OBSERVATIONS
We now look at the rate of learning for the student as the amount of training data is varied. We performed two types of experiments for all tasks: In the first experiment, we use all the available strong data but consider different percentages of the entire weak dataset. In the second experiment, we fix the amount of weak data and provide the model with varying amounts of strong data. We use standard fine-tuning with similar setups as for the baseline models. Details on the experiments for the toy problem are provided in Appendix E.1. Figure 6 presents the results of these experiments. In general, for all tasks and both setups, the student learns faster when there is a teacher. One caveat is in the case where we have a very small amount of weak data. In this case the student cannot learn a suitable representation in the first step, and hence the performance of FWL is pretty low, as expected. It is highly unlikely that this situation occurs in reality as obtaining weakly labeled data is much easier than strong data.
The empirical observation of Figure 6 that our model learns more with less data can also be seen as evidence in support of another perspective to FWL, called learning using privileged information (Vapnik & Izmailov, 2015). We elaborate more on this connection in Appendix F.
RELATED WORK
In this section, we position our FWL approach relative to related work on semi-supervised learning.
Learning from imperfect labels has been thoroughly studied in the literature (Frénay & Verleysen, 2014). In the semi-supervised setup, several ideas have been developed to utilize weakly labeled or even unlabeled data. For instance, self-training (Rosenberg et al., 2005), pseudo-labeling (Lee, 2013), and co-training (Blum & Mitchell, 1998) were introduced to augment the training set with unlabeled data and predicted labels. As a common approach in semi-supervised learning, the unlabeled set can be used to learn the distribution of the data. In particular for neural networks, greedy layer-wise pre-training of weights using unlabeled data is followed by supervised fine-tuning (Hinton et al., 2006; Deriu et al., 2017; Severyn & Moschitti, 2015b;a; Go et al., 2009). Other methods learn unsupervised encodings at multiple levels of the architecture jointly with a supervised signal (Ororbia II et al., 2015; Weston et al., 2012).
Alternatively, some noise cleansing methods were proposed to remove or correct mislabeled samples (Brodley & Friedl, 1999). There are some studies showing that weak or noisy labels can be leveraged by employing a particular architecture or defining a proper loss function to avoid over-fitting to imperfections of the training data (Dehghani et al., 2017b;Vahdat, 2017;Patrini et al., 2017;Beigman & Klebanov, 2009;Zeng et al., 2015;Bunescu & Mooney, 2007).
One direction of research focuses on modeling the pattern of the noise or weakness in the labels. For instance, some methods use a generative model to correct weak labels such that a discriminative model can be trained more effectively (Ratner et al., 2016; Rekatsinas et al., 2017; Varma et al., 2017). Furthermore, methods that aim to capture the pattern of the noise by inserting an extra layer or a separate module try to infer better labels from noisy ones and use them to supervise the training of the network (Sukhbaatar et al., 2015; Dehghani et al., 2017a; Veit et al., 2017). Our proposed method belongs to this class.
CONCLUSION
Training neural networks using large amounts of weakly annotated data is an attractive approach in scenarios where an adequate amount of data with true labels is not available, a situation which often arises in practice. In this paper, we introduced fidelity-weighted learning (FWL), a new student-teacher framework for semi-supervised learning in the presence of weakly labeled data. We applied FWL to document ranking and sentiment classification, and empirically verified that FWL speeds up the training process and improves over state-of-the-art semi-supervised alternatives.
Algorithm 2 Clustered Gaussian processes.
1: Let N be the sample size, n the sample size of each cluster, K the number of clusters, and $c_i$ the center of cluster i.
2: Run K-means with K clusters over all samples with true labels $D_s = \{x_i, y_i\}$:
$$\text{K-means}(x_i) \to c_1, c_2, \ldots, c_K,$$
where $c_i$ represents the center of cluster $C_i$ containing samples $D_s^{c_i} = \{x_{i,1}, x_{i,2}, \ldots, x_{i,n}\}$.
3: Assign each of the K clusters a Gaussian process and train them in parallel to approximate the label of each sample:
$$\mathcal{GP}^{c_i}(m_{post}^{c_i}, K_{post}^{c_i}) = \mathcal{GP}(m_{prior}, K_{prior}) \mid D_s^{c_i} = \{(\psi(x_{s,c_i}), y_{s,c_i})\},$$
$$T_{c_i}(x_t) = g(m_{post}^{c_i}(x_t)), \qquad \Sigma_{c_i}(x_t) = h(K_{post}^{c_i}(x_t, x_t)),$$
where $\mathcal{GP}^{c_i}$ is trained on $D_s^{c_i}$ containing the samples belonging to cluster $c_i$. The other elements are defined in Section 2.
4: Use each trained teacher $T_{c_i}(\cdot)$ to evaluate the soft label and uncertainty for samples from $D_{sw}$ to compute $\eta_2(x_t)$ required for step 3 of Algorithm 1. We use $T(\cdot)$ as a wrapper for all teachers $\{T_{c_i}\}$.
APPENDICES
We moved additional details to the appendices in order to keep the main text focused on the overall idea of the Fidelity-Weighted Learning approach. Specifically, we include further details on the clustered Gaussian process approach (Appendix A); on the student network architectures (Appendix B); on the teacher Gaussian process model (Appendix C); on the weak annotators (Appendix D); on the experimental data and setup (Appendix E); and on the connection to "learning with privileged information" (Appendix F).
A DETAILED DESCRIPTION OF CLUSTERED GP
We suggest using several GPs $\{\mathcal{GP}_{c_i}\}$ to cover the entire data space. This results in a better specialization of each teacher. In addition, it solves the scalability issue that arises when the strong dataset $D_s$ on which we train the GP is too large. Here we propose a method called clustered Gaussian process, inspired by Shen et al. (2006), to alleviate the issue of large sample size.

Clustered GP: Let N be the size of the dataset on which we train the teacher. Assume we allocate K teachers to the entire data space; each GP then sees a dataset of size n = N/K. We use a simple clustering method (e.g., k-means) to find the centroids of K clusters $C_1, C_2, \ldots, C_K$, where $C_i$ consists of samples $\{x_{i,1}, x_{i,2}, \ldots, x_{i,n}\}$. We take the centroid $c_i$ of cluster $C_i$ as the representative sample for all its contents. Note that $c_i$ does not necessarily belong to $\{x_{i,1}, x_{i,2}, \ldots, x_{i,n}\}$. We assign each cluster a GP trained on the samples belonging to that cluster; more precisely, cluster $C_i$ is assigned a GP whose data points are $\{x_{i,1}, x_{i,2}, \ldots, x_{i,n}\}$. Because there is no dependency among different clusters, we train them in parallel to further speed up the procedure.

The pseudo-code of the clustered GP is presented in Algorithm 2. When the main issue is computational resources, we can first choose the number n, which is the maximum dataset size on which our resources allow us to train a GP, and then set the number of clusters K = N/n accordingly. The rest of the algorithm remains unchanged.
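A minimal sketch of the clustered-GP teacher, assuming scikit-learn's K-means and exact GP regressor as stand-ins for the sparse variational GPs used in our experiments; the function names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_clustered_gp(X_strong, y_strong, n_clusters):
    """Partition the strong set with K-means and fit one GP per cluster."""
    km = KMeans(n_clusters=n_clusters).fit(X_strong)
    teachers = []
    for k in range(n_clusters):
        mask = km.labels_ == k
        gp = GaussianProcessRegressor().fit(X_strong[mask], y_strong[mask])
        teachers.append(gp)
    return km, teachers

def teacher_predict(km, teachers, X):
    """Route each sample to the GP of its nearest centroid and return the
    posterior mean (soft label) and std (confidence) per sample."""
    idx = km.predict(X)
    mean, std = np.empty(len(X)), np.empty(len(X))
    for k, gp in enumerate(teachers):
        mask = idx == k
        if mask.any():
            mean[mask], std[mask] = gp.predict(X[mask], return_std=True)
    return mean, std
```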
B DETAILED ARCHITECTURE OF THE STUDENTS

B.1 RANKING TASK
For the ranking task, the employed student is the one proposed in Dehghani et al. (2017b). The first layer of the network models the function ψ that learns the representation of the input data samples, i.e., $(q, d^+, d^-)$, and consists of three components: (1) an embedding function $\varepsilon: V \to \mathbb{R}^m$ (where V denotes the vocabulary set and m is the number of embedding dimensions), (2) a weighting function $\omega: V \to \mathbb{R}$, and (3) a compositionality function $\bigodot: (\mathbb{R}^m, \mathbb{R})^n \to \mathbb{R}^m$. More formally, the function ψ is defined as:
$$\psi(q, d^+, d^-) = \Big[\, \bigodot_{i=1}^{|q|}\big(\varepsilon(t_i^q), \omega(t_i^q)\big) \;\big\|\; \bigodot_{i=1}^{|d^+|}\big(\varepsilon(t_i^{d^+}), \omega(t_i^{d^+})\big) \;\big\|\; \bigodot_{i=1}^{|d^-|}\big(\varepsilon(t_i^{d^-}), \omega(t_i^{d^-})\big) \,\Big], \quad (2)$$
where $t_i^q$ and $t_i^d$ denote the i-th term in query q and document d, respectively. The embedding function ε maps each term to a dense m-dimensional real-valued vector, which is learned during the training phase. The weighting function ω assigns a weight to each term in the vocabulary. It has been shown that ω simulates the effect of inverse document frequency (IDF), which is an important feature in information retrieval (Dehghani et al., 2017b).

The compositionality function $\bigodot$ projects a set of n embedding-weighting pairs to an m-dimensional representation, independent of the value of n:
$$\bigodot_{i=1}^{n}\big(\varepsilon(t_i), \omega(t_i)\big) = \sum_{i=1}^{n} \frac{\exp(\omega(t_i)) \cdot \varepsilon(t_i)}{\sum_{j=1}^{n} \exp(\omega(t_j))}, \quad (3)$$
which is in fact the normalized weighted element-wise summation of the terms' embedding vectors. Again, it has been shown that having a global term weighting function along with the embedding function improves ranking performance, as it simulates the effect of inverse document frequency (IDF). In our experiments, we initialize the embedding function ε with word2vec embeddings (Mikolov et al., 2013) pre-trained on Google News and the weighting function ω with IDF.
The representation layer is followed by a simple fully connected feed-forward network with l hidden layers and a sigmoid output, which receives the vector representation of the inputs produced by the representation learning layer and outputs a prediction $\hat{y}$. Each hidden layer $z_k$ in this network computes $z_k = \alpha(W_k z_{k-1} + b_k)$, where $W_k$ and $b_k$ denote the weight matrix and the bias term of the k-th hidden layer and α(·) is the non-linearity. We employ the cross-entropy loss:
$$\mathcal{L}_t = \sum_{i \in B} \big[-y_i \log(\hat{y}_i) - (1 - y_i)\log(1 - \hat{y}_i)\big], \quad (4)$$
where B is a batch of data samples.
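A minimal PyTorch sketch of this student is given below: the softmax-weighted averaging of Eq. 3 applied to the query and documents, concatenated as in Eq. 2, followed by a feed-forward scorer with a sigmoid output. Sizes and names are illustrative, and padding handling is omitted for brevity.

```python
import torch
import torch.nn as nn

class WeightedBagEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)      # ε: V -> R^m
        self.weight = nn.Embedding(vocab_size, 1)         # ω: V -> R

    def forward(self, token_ids):                         # (batch, seq_len)
        e = self.emb(token_ids)                           # (batch, seq, m)
        w = torch.softmax(self.weight(token_ids), dim=1)  # Eq. 3 normalizer
        return (w * e).sum(dim=1)                         # (batch, m)

class RankingStudent(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=512):
        super().__init__()
        self.encoder = WeightedBagEncoder(vocab_size, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, q, d_pos, d_neg):
        psi = torch.cat([self.encoder(q), self.encoder(d_pos),
                         self.encoder(d_neg)], dim=-1)    # Eq. 2
        return torch.sigmoid(self.mlp(psi)).squeeze(-1)   # P(d+ above d-)
```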
B.2 SENTIMENT CLASSIFICATION TASK
The student for the sentiment classification task is a convolutional model, which has been shown to perform best on the datasets we used (Deriu et al., 2017; Severyn & Moschitti, 2015a;b; Deriu et al., 2016). The first layer of the network learns the function ψ, which maps an input sentence s to a vector representation, and consists of an embedding function $\varepsilon: V \to \mathbb{R}^m$, where V denotes the vocabulary set and m is the number of embedding dimensions.

This function maps the sentence to a matrix $S \in \mathbb{R}^{m \times |s|}$, where each column represents the embedding of the word at the corresponding position in the sentence. The matrix S is passed through a convolution layer. In this layer, a set of f filters is applied to a sliding window of length h over S to generate a feature map matrix C. Each feature map $c_i$ for a given filter F is generated by $c_i = \sum_{k,j} S[i : i+h]_{k,j}\, F_{k,j}$, where $S[i : i+h]$ denotes the concatenation of word vectors from position i to i+h. The concatenation of all $c_i$ produces a feature vector $c \in \mathbb{R}^{|s|-h+1}$. The vectors c are then aggregated over all f filters into a feature map matrix $C \in \mathbb{R}^{f \times (|s|-h+1)}$.

We also add a bias vector $b \in \mathbb{R}^f$ to the result of a convolution. Each convolutional layer is followed by a non-linear activation function (we use ReLU (Nair & Hinton, 2010)), which is applied element-wise. Afterward, the output is passed to the max pooling layer, which operates on the columns of the feature map matrix C, returning the largest value: $\text{pool}(c_i): \mathbb{R}^{1 \times (|s|-h+1)} \to \mathbb{R}$ (see Figure 4). This architecture is similar to the state-of-the-art model for Twitter sentiment classification from SemEval 2015 and 2016 (Severyn & Moschitti, 2015b; Deriu et al., 2016).
We initialize the embedding matrix with word2vec embeddings (Mikolov et al., 2013) pretrained on a collection of 50M tweets. 1
The representation layer is then followed by a feed-forward layer similar to the ranking task (with a different width and depth) but with a softmax instead of a sigmoid as the output layer, which returns $\hat{y}_i$, the probability distribution over all three classes. We employ the cross-entropy loss:
$$\mathcal{L}_t = \sum_{i \in B}\sum_{k \in K} -y_i^k \log(\hat{y}_i^k), \quad (5)$$
where B is a batch of data samples, and K is a set of classes.
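Below is a minimal PyTorch sketch of this student with a single convolutional layer; the hyperparameters are illustrative placeholders within the search ranges reported in Appendix E.3.

```python
import torch
import torch.nn as nn

class SentimentStudent(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=200, width=5,
                 n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=width)
        self.out = nn.Linear(n_filters, n_classes)

    def forward(self, token_ids):                     # (batch, |s|)
        x = self.emb(token_ids).transpose(1, 2)       # (batch, m, |s|)
        c = torch.relu(self.conv(x))                  # feature map C
        pooled = c.max(dim=-1).values                 # max pooling per filter
        return torch.log_softmax(self.out(pooled), dim=-1)

# Cross-entropy loss of Eq. 5, with y as one-hot targets:
# loss = -(y * log_probs).sum(dim=-1).mean()
```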
1 Collected using the Twitter API. We plan to release the list of tweet ids along with the code for replicating our experiments.
C DETAILED ARCHITECTURE OF THE TEACHERS
We use a Gaussian process as the teacher in all the experiments. For each task, either regression or (multi-class) classification, in order to generate soft labels, we pass the mean of the GP through the same function g(·) that is applied to the output of the student network for that task, e.g., softmax or sigmoid. For binary classification or one-dimensional regression, Σ(x_t) is a scalar and h(·) is the identity. For multi-class classification or multi-dimensional regression tasks, h(·) is an aggregation function that takes the variance over several dimensions and outputs a single measure of variance. As a reasonable choice, the aggregating function h(·) in our sentiment classification task (three classes) is the mean of the variances over dimensions.
In the teacher, linear combinations of different kernels are used for different tasks in our experiments.
Toy Problem: We use standard Gaussian process regression 2 with the kernel:
$$k(x_i, x_j) = k_{\mathrm{RBF}}(x_i, x_j) + k_{\mathrm{White}}(x_i, x_j). \quad (6)$$
Document Ranking: We use sparse variational GP regression 3 (Titsias, 2009) with the kernel:
$$k(x_i, x_j) = k_{\mathrm{Matern3/2}}(x_i, x_j) + k_{\mathrm{Linear}}(x_i, x_j) + k_{\mathrm{White}}(x_i, x_j). \quad (7)$$
Sentiment Classification: We use sparse variational GP for multiclass classification 4 (Hensman et al., 2015) with the following kernel:
$$k(x_i, x_j) = k_{\mathrm{RBF}}(x_i, x_j) + k_{\mathrm{Linear}}(x_i, x_j) + k_{\mathrm{White}}(x_i, x_j), \quad (8)$$
where
$$k_{\mathrm{RBF}}(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2l^2}\right),$$
$$k_{\mathrm{Matern3/2}}(x_i, x_j) = \left(1 + \frac{\sqrt{3}\,\|x_i - x_j\|}{l}\right)\exp\left(-\frac{\sqrt{3}\,\|x_i - x_j\|}{l}\right),$$
$$k_{\mathrm{Linear}}(x_i, x_j) = \sigma_0^2 + x_i \cdot x_j,$$
$$k_{\mathrm{White}}(x_i, x_j) = \begin{cases} \text{constant value} & \text{if } x_i = x_j, \\ 0 & \text{otherwise.}\end{cases}$$
We empirically found l = 1 to be a satisfactory value for the length scale of the RBF and Matern3/2 kernels. We also set $\sigma_0 = 0$ to obtain a homogeneous linear kernel. The constant value of $k_{\mathrm{White}}(\cdot,\cdot)$ determines the level of noise in the labels. This is different from the noise in weak labels; it accounts for the fact that even true labels might contain a trace of noise due to the inaccuracy of human labelers.
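The kernel sums above can be sketched, for instance, with scikit-learn's kernel algebra; note that this exact-GP stand-in differs from the sparse variational GPflow models referenced in the footnotes.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (
    RBF, Matern, DotProduct, WhiteKernel)

toy_kernel = RBF(length_scale=1.0) + WhiteKernel()              # Eq. 6
ranking_kernel = (Matern(length_scale=1.0, nu=1.5)              # Matern 3/2
                  + DotProduct(sigma_0=0.0) + WhiteKernel())    # Eq. 7
sentiment_kernel = (RBF(length_scale=1.0)
                    + DotProduct(sigma_0=0.0) + WhiteKernel())  # Eq. 8

gp = GaussianProcessRegressor(kernel=toy_kernel)
```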
We set the number of clusters in the clustered GP algorithm for the ranking task to 50 and for the sentiment classification task to 30.
D WEAK ANNOTATORS

D.1 DOCUMENT RANKING

The weak annotator in the document ranking task is BM25 (Robertson & Zaragoza, 2009), a well-known unsupervised retrieval method, which heuristically scores a given query-document pair based on the statistics of their matched terms. In the pairwise document ranking setup, the weak label $\tilde{y}_i$ for a given sample $x_i = (q, d^+, d^-)$ is the probability of document $d^+$ being ranked higher than $d^-$: $\tilde{y}_i = P_{q,d^+,d^-} = s_{q,d^+}/(s_{q,d^+} + s_{q,d^-})$, where $s_{q,d}$ is the score obtained from the weak annotator.
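A minimal sketch of this pairwise weak annotator, assuming the rank_bm25 package as a stand-in for the Indri BM25 implementation used in our experiments (the eps guard is an implementation detail added here):

```python
from rank_bm25 import BM25Okapi

corpus = [doc.split() for doc in
          ["news article about sports", "weather report for today",
           "sports scores and results"]]
bm25 = BM25Okapi(corpus)

def weak_label(query, idx_pos, idx_neg, eps=1e-8):
    """P(d+ ranked above d-) from BM25 scores; eps guards against the
    degenerate case where both scores are zero."""
    scores = bm25.get_scores(query.split())
    s_pos, s_neg = scores[idx_pos], scores[idx_neg]
    return (s_pos + eps) / (s_pos + s_neg + 2 * eps)
```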
D.2 SENTIMENT CLASSIFICATION
The weak annotator for the sentiment classification task is a simple lexicon-based method (Hamdan et al., 2013; Kiritchenko et al., 2014). We use SentiWordNet03 (Baccianella et al., 2010) to assign probabilities (positive, negative, and neutral) to each token in the set Dw. We use a bag-of-words model for the sentence-level probabilities (i.e., simply averaging the distributions of the terms), yielding a noisy label $\tilde{y}_i \in \mathbb{R}^{|K|}$, where |K| = 3 is the number of classes. We found empirically that using soft labels from the weak annotator works better than assigning a single hard label.

E DATA COLLECTION, PARAMETERS AND SETUP

E.1 TOY PROBLEM

Weak/True Data In all the experiments with the toy problem, we randomly sampled 100 data points from the weak function and 10 data points from the true function. We introduce a small amount of noise to the observations of the true function to model the noise in human-labeled data.
Setup The neural network employed in the toy problem experiments is a simple feed-forward network with a depth of 3 layers and a width of 128 neurons per layer. We used tanh as the non-linearity for the intermediate layers and a linear output layer. As the optimizer, we used Adam (Kingma & Ba, 2015) with the initial learning rate set to 0.001. For the teacher in the toy problem, we fit only one GP on all the data points (i.e., no clustering). Also, during fine-tuning, we set β = 1.
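For illustration, the following sketch shows one fidelity-weighted fine-tuning step, assuming the per-sample step size decays exponentially with the teacher's predictive variance, i.e., η2(x) = η1 · exp(−βΣ(x)); for plain SGD, scaling the per-sample loss by this factor is equivalent to scaling the learning rate, and the exact functional form here is an assumption consistent with the role of β described above.

```python
import torch

def fidelity_weighted_step(student, optimizer, x, soft_label, sigma, beta=1.0):
    """One update on teacher-labeled data; samples with uncertain labels
    (large sigma) contribute exponentially less to the update."""
    optimizer.zero_grad()
    per_sample = (student(x).squeeze(-1) - soft_label) ** 2
    weights = torch.exp(-beta * sigma)   # confident teacher -> larger step
    (weights * per_sample).mean().backward()
    optimizer.step()

# Example: optimizer = torch.optim.SGD(student.parameters(), lr=1e-3)
```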
Setup of experiments in Section 4.2 We fixed everything in the model and ran the fine-tuning stage with different values of β ∈ {0.0, 0.1, 1.0, 2.0, 5.0} in all the experiments. For the experiments on the toy problem in Section 4.2, the reported numbers are averaged over 10 trials. In the first experiment (i.e., Figure 6a), the size of the sampled data is |Ds| = 50 and |Dw| = 100 (fixed), and for the second one (i.e., Figure 6b), |Dw| = 100 and |Ds| = 10 (fixed).
E.2 RANKING TASK
Collections We use two standard TREC collections for the task of ad-hoc retrieval: The first collection (Robust04) consists of 500k news articles from different news agencies as a homogeneous collection. The second collection (ClueWeb) is ClueWeb09 Category B, a large-scale web collection with over 50 million English documents, which is considered as a heterogeneous collection. Spam documents were filtered out using the Waterloo spam scorer 5 (Cormack et al., 2011) with the default threshold 70%.
Data with true labels We take query sets that contain human-labeled judgments: a set of 250 queries (TREC topics 301-450 and 601-700) for the Robust04 collection and a set of 200 queries (topics 1-200) for the experiments on the ClueWeb collection. For each query, we take all documents judged as relevant plus the same number of documents judged as non-relevant and form pairwise combinations among them.
Data with weak labels We create a query set Q using the unique queries appearing in the AOL query logs (Pass et al., 2006). This query set contains web queries initiated by real users of the AOL search engine, sampled from a three-month period from March 2006 to May 2006. We filtered out a large volume of navigational queries containing URL substrings ("http", "www.", ".com", ".net", ".org", ".edu") and removed all non-alphanumeric characters from the queries, as sketched below. For each dataset, we took the queries that have at least ten hits in the target corpus using our weak annotator method. Applying all these steps, we collected 6.15 million queries for Robust04 and 6.87 million queries for ClueWeb. To prepare the weakly labeled training set Dw, we take the top 1,000 retrieved documents using BM25 for each query from the training query set Q, which in total leads to $\sim |Q| \times 10^6$ training samples.
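A sketch of this query filtering; the substrings are those listed above, while the regular expression and the decision to keep whitespace are implementation choices.

```python
import re

NAV_SUBSTRINGS = ("http", "www.", ".com", ".net", ".org", ".edu")

def keep_query(q):
    """Drop navigational queries containing the URL substrings above."""
    return not any(s in q for s in NAV_SUBSTRINGS)

def normalize(q):
    """Remove all non-alphanumeric characters (spaces are kept here)."""
    return re.sub(r"[^0-9a-zA-Z ]+", "", q)
```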
Setup For the evaluation of the whole model, we conducted a 3-fold cross-validation. However, for each dataset, we first tuned all the hyper-parameters of the student in the first step on the set with true labels, using batched GP bandits with an expected improvement acquisition function (Desautels et al., 2014), and kept the optimal parameters of the student fixed for all the other experiments. The size and number of hidden layers for the student are selected from {64, 128, 256, 512}. The initial learning rate and the dropout parameter were selected from $\{10^{-3}, 10^{-5}\}$ and {0.0, 0.2, 0.5}, respectively. We considered embedding sizes of {300, 500}. The batch size in our experiments was set to 128. We use ReLU (Nair & Hinton, 2010) as the non-linear activation function α in the student. We use the Adam optimizer (Kingma & Ba, 2015) for training, and dropout (Srivastava et al., 2014) as a regularization technique.
At inference time, for each query, we take the top 2,000 retrieved documents using BM25 as candidate documents and re-rank them using the trained models. We use the Indri 6 implementation of BM25 with default parameters (i.e., k1 = 1.2, b = 0.75, and k3 = 1,000).
E.3 SENTIMENT CLASSIFICATION TASK
Collections We test our model on the Twitter message-level sentiment classification task of SemEval-15 Task 10B (Rosenthal et al., 2015). The SemEval-15 datasets subsume the test sets from previous editions of SemEval, i.e., SemEval-13 and SemEval-14. Each tweet was preprocessed so that URLs and usernames are masked, as sketched below.
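A minimal sketch of this preprocessing; the placeholder tokens <url> and <user> are illustrative choices, not necessarily the ones used in our pipeline.

```python
import re

def mask_tweet(tweet):
    tweet = re.sub(r"https?://\S+", "<url>", tweet)   # mask URLs
    return re.sub(r"@\w+", "<user>", tweet)           # mask usernames
```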
Data with true labels We use the train (9,728 tweets) and development (1,654 tweets) data from SemEval-13 for training and SemEval-13-test (3,813 tweets) for validation. To make our results comparable to the official runs on SemEval, we use SemEval-14 (1,853 tweets) and SemEval-15 (2,390 tweets) as test sets (Rosenthal et al., 2015; Nakov et al., 2016).

Data with weak labels We use a large corpus containing 50M tweets collected during two months, both for training the word embeddings and for creating the weakly annotated set Dw using the lexicon-based method explained in Section 3.3.
Setup Similar to the document ranking task, we tuned the hyper-parameters of the student in the first step with respect to the true labels of the validation set, using batched GP bandits with an expected improvement acquisition function (Desautels et al., 2014), and kept the optimal parameters fixed for all the other experiments. The size and number of hidden layers for the classifier are selected from {32, 64, 128}. We tested the model with both 1 and 2 convolutional layers. The number of convolutional feature maps and the filter width are selected from {200, 300} and {3, 4, 5}, respectively. The initial learning rate and the dropout parameter were selected from $\{10^{-3}, 10^{-5}\}$ and {0.0, 0.2, 0.5}, respectively. We considered embedding sizes of {100, 200}, and the batch size in these experiments was set to 64. ReLU (Nair & Hinton, 2010) is used as the non-linear activation function in the student. The Adam optimizer (Kingma & Ba, 2015) is used for training, and dropout (Srivastava et al., 2014) as a regularizer.
F CONNECTION WITH VAPNIK'S LEARNING USING PRIVILEGED INFORMATION
In this section, we highlight the connections of our work with Vapnik's learning using privileged information (LUPI) (Vapnik & Vashist, 2009; Vapnik & Izmailov, 2015). FWL makes use of information from a small set of correctly labeled data to improve the performance of a semi-supervised learning algorithm. The main idea behind LUPI comes from the fact that humans learn much faster than machines, which can be attributed to the role that an intelligent teacher plays in human learning. In this framework, the training data is a collection of triplets
$$\{(x_1, y_1, x_1^*), \ldots, (x_n, y_n, x_n^*)\} \sim P^n(x, y, x^*),$$
where each $(x_i, y_i)$ is a feature-label pair and $x_i^*$ is additional information provided by an intelligent teacher to ease the learning process for the student. The additional information for each $(x_i, y_i)$ is available only at training time, and the learning machine must rely only on $x_i$ at test time. The theory of LUPI studies how to leverage such a teaching signal $x_i^*$ to outperform learning algorithms that utilize only the regular features $x_i$. For example, MRI brain images can be augmented with high-level medical or even psychological descriptions of Alzheimer's disease to build a classifier that predicts the probability of Alzheimer's disease from an MRI image alone at test time. It is known from statistical learning theory (Vapnik, 1998) that the following bound on the test error holds with probability 1 − δ:
$$R(f) \le R_n(f) + O\left(\left(\frac{|\mathcal{F}|_{VC} - \log\delta}{n}\right)^{\alpha}\right), \quad (10)$$
where $R_n(f)$ denotes the training error over n samples, $|\mathcal{F}|_{VC}$ is the VC dimension of the space of functions from which f is chosen, and $\alpha \in [0.5, 1]$. When the classes are not separable, α = 0.5, i.e., the machine learns at a slow rate of $O(n^{-1/2})$. For easier problems where the classes are separable, α = 1, resulting in a learning rate of $O(n^{-1})$. The difference between these two cases is severe: the same error bound achieved for a separable problem with 10 thousand data points is only obtainable for a non-separable problem when 100 million data points are provided. This is prohibitive even when obtaining large datasets is not so costly. The theory of LUPI shows that an intelligent teacher can reduce α, resulting in a faster learning process for the student. In this paper, we proposed a teacher-student framework for semi-supervised learning. Similar to LUPI, in FWL a student is supposed to solve the main prediction task while an intelligent teacher provides additional information to improve its learning. In addition, we first train the student network so that it obtains initial knowledge of the weakly labeled data and learns a good data representation. Then the teacher is trained on truly labeled data, enjoying the representation learned by the student. This extends LUPI in the sense that the teacher provides the privileged information that is most useful for the current state of the student's knowledge. FWL also extends LUPI by introducing several teachers, each of which is specialized to correct the student's knowledge in a specific region of the data space. Figure 6(a) provides evidence for the assumption that privileged information in our task can accelerate the learning process of the student: it shows how the privileged information from an intelligent teacher affects the exponent α of the error bound in Equation 10. Figure 6(b) shows the test error for various numbers of samples $|D_s|$ with true labels. As expected, in both extremes, where $|D_s|$ is too small or too large, the performance of our model becomes close to that of the models without a teacher: in the former case, the teacher cannot learn enough about the true function, while in the latter the student already has enough strong samples to learn a good model of the true function. In more realistic cases where $|D_s| \ll |D_w|$ but $|D_s|$ is still large enough to be informative about $|D_w|$, our model gives a lower test error than models without the intelligent teacher.
The theory of LUPI was first developed and proved for support vector machines by Vapnik as a method for knowledge transfer. Hinton introduced dark knowledge as a spiritually close idea in the context of neural networks (Hinton et al., 2014). He proposed to use a large network or an ensemble of networks for training and a smaller network at test time; it turned out that compressing the knowledge of a large system into a smaller system can improve the generalization ability. It was shown in (Lopez-Paz et al., 2016) that dark knowledge and LUPI can be unified under a single umbrella, called generalized distillation. The core idea of these models is machines-teaching-machines: as the name suggests, a machine learns the knowledge embedded in another machine. In our case, the student corrects its knowledge by receiving privileged information about label uncertainty from the teacher.
Our framework extends the core idea of LUPI in the following directions:
• Trainable teacher: It is often assumed that the teacher in the LUPI framework has some additional true information. We show that when this extra information is not available, one can still use the LUPI setup and define an implicit teacher whose knowledge is learned from the true data. In this approach, the performance of the final student-teacher system depends on a clever answer to the following question: which information should be considered as the privileged knowledge of the teacher?
• Bayesian teacher: The proposed teacher is Bayesian. It provides the posterior uncertainty of the label of each sample.
• Mutual representation: We introduced the module ψ(·), which learns a mutual embedding (representation) for both student and teacher. This is particularly interesting because it defines a two-way channel between teacher and student.
• Multiple teachers: We proposed a scalable method to introduce several teachers such that each teacher is specialized in a particular region of the data space.
[Figure 1: Illustration of Fidelity-Weighted Learning. (a) Training the student on 100 examples from the weak function. (b) Fitting the teacher based on 10 observations from the true function. (c) Fine-tuning the student based on observations from the true function. (d) Fine-tuning the student based on labels/confidence from the teacher.]
[Figure 2: Toy example. The true function we want to learn is y = sin(x) and the weak function is y = 2 sinc(x).]
[Figure 3: The student for the document ranking task.]
[Figure 4: The student for the sentiment classification task.]
[Figure 5: Effect of different values for β.]
[Figure 6: Performance of FWL and the baseline model trained on different amounts of data. (a) Models trained on different amounts of weak data. (b) Models trained on different amounts of strong data.]
Table 1: Performance of the FWL approach and baseline methods for the ranking task. ↑i indicates that the improvements with respect to baseline i are statistically significant at the 0.05 level using the paired two-tailed t-test with Bonferroni correction.

      Method                            Robust04                ClueWeb
                                        MAP        nDCG@20      MAP        nDCG@20
  1   WA_BM25                           0.2503↑3   0.4102↑3     0.1021↑3   0.2070↑3
  2   NN_W (Dehghani et al., 2017b)     0.2702↑13  0.4290↑13    0.1297↑13  0.2201↑13
  3   NN_S                              0.1790     0.3519       0.0782     0.1730
  4
Table 2: Performance of the proposed FWL approach and baseline methods for the sentiment classification task. ↑i indicates that the improvements with respect to baseline i are statistically significant at the 0.05 level using the paired two-tailed t-test with Bonferroni correction.

      Method          SemEval-14        SemEval-15
  1   WA_Lexicon      0.5141            0.4471
  2   NN_W            0.6719↑13         0.5606↑1
  3   NN_S            0.6307↑1          0.5811↑12
  4   NN_{S+/W}       0.7032↑123        0.6319↑123
  5   NN_{W→S}        0.7080↑123        0.6441↑123
  6   NN_{Wω→S}       0.7166↑1234       0.6603↑12345
  7   FWL \Σ          0.7202↑12345      0.6590↑12345
  8   FWL             0.7470↑1234567    0.6830↑1234567
  9   SemEval Best    0.7162 (Rouvier & Favre, 2016)    0.6618 (Deriu et al., 2016)
2 http://gpflow.readthedocs.io/en/latest/notebooks/regression.html
3 http://gpflow.readthedocs.io/en/latest/notebooks/SGPR_notes.html
4 http://gpflow.readthedocs.io/en/latest/notebooks/multiclass.html
5 http://plg.uwaterloo.ca/~gvcormac/clueweb09spam/
6 https://www.lemurproject.org/indri.php
Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In LREC, volume 10, pp. 2200-2204, 2010.
Eyal Beigman and Beata Beigman Klebanov. Learning with annotation noise. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1, pp. 280-287. Association for Computational Linguistics, 2009.
Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT '98, pp. 92-100, 1998.
Carla E. Brodley and Mark A. Friedl. Identifying mislabeled training data. Journal of Artificial Intelligence Research, 11:131-167, 1999.
Razvan Bunescu and Raymond Mooney. Learning to extract relations from the web using minimal supervision. In ACL, 2007.
Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien. Semi-Supervised Learning. The MIT Press, 1st edition, 2006.
Gordon V. Cormack, Mark D. Smucker, and Charles L. Clarke. Efficient and effective spam filtering and re-ranking for large web datasets. Information Retrieval, 14(5):441-465, 2011.
Mostafa Dehghani, Aliaksei Severyn, Sascha Rothe, and Jaap Kamps. Avoiding your teacher's mistakes: Training neural networks with controlled weak supervision. arXiv preprint arXiv:1711.00313, 2017a.
Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. Neural ranking models with weak supervision. In SIGIR '17, 2017b.
Jan Deriu, Maurice Gonzenbach, Fatih Uzdilli, Aurelien Lucchi, Valeria De Luca, and Martin Jaggi. SwissCheese at SemEval-2016 Task 4: Sentiment classification using an ensemble of convolutional neural networks with distant supervision. In Proceedings of SemEval, pp. 1124-1128, 2016.
Jan Deriu, Aurelien Lucchi, Valeria De Luca, Aliaksei Severyn, Simon Müller, Mark Cieliebak, Thomas Hofmann, and Martin Jaggi. Leveraging large amounts of weakly supervised data for multi-language sentiment classification. In Proceedings of the 26th International World Wide Web Conference (WWW '17), pp. 1045-1052, 2017.
Thomas Desautels, Andreas Krause, and Joel W. Burdick. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. Journal of Machine Learning Research, 15(1):3873-3923, 2014.
Benoît Frénay and Michel Verleysen. Classification in the presence of label noise: A survey. IEEE Transactions on Neural Networks and Learning Systems, 25(5):845-869, 2014.
Alec Go, Richa Bhayani, and Lei Huang. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1(12), 2009.
Hussam Hamdan, Frederic Béchet, and Patrice Bellot. Experiments with DBpedia, WordNet and SentiWordNet as resources for sentiment analysis in micro-blogging. In Second Joint Conference on Lexical and Computational Semantics (*SEM), volume 2, pp. 455-459, 2013.
James Hensman, Alexander G. de G. Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classification. In Proceedings of AISTATS, 2015.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS 2014 Deep Learning Workshop, 2014. arXiv preprint arXiv:1503.02531.
Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. arXiv preprint arXiv:1412.6980.
Svetlana Kiritchenko, Xiaodan Zhu, and Saif M. Mohammad. Sentiment analysis of short informal texts. Journal of Artificial Intelligence Research, 50:723-762, 2014.
Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, 2013.
David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, and Vladimir Vapnik. Unifying distillation and privileged information. In ICLR '16, 2016. arXiv preprint arXiv:1511.03643.
Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research, 18(40):1-6, 2017.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS '13, pp. 3111-3119, 2013.
Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807-814, 2010.
Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin Stoyanov. SemEval-2016 Task 4: Sentiment analysis in Twitter. In Proceedings of SemEval, pp. 1-18, 2016.
Alexander G. Ororbia II, C. Lee Giles, and David Reitter. Learning a deep hybrid model for semi-supervised text classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015.
Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In ICLR, 2017. arXiv preprint arXiv:1610.05755.
Greg Pass, Abdur Chowdhury, and Cayley Torgeson. A picture of search. In InfoScale '06, 2006.
Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. Making neural networks robust to label noise: A loss correction approach. In CVPR, 2017. arXiv preprint arXiv:1609.03683.
Alexander J. Ratner, Christopher M. De Sa, Sen Wu, Daniel Selsam, and Christopher Ré. Data programming: Creating large training sets, quickly. In Advances in Neural Information Processing Systems, pp. 3567-3575, 2016.
Theodoros Rekatsinas, Xu Chu, Ihab F. Ilyas, and Christopher Ré. HoloClean: Holistic data repairs with probabilistic inference. PVLDB, 10(11):1190-1201, 2017.
Stephen Robertson and Hugo Zaragoza. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333-389, 2009.
Chuck Rosenberg, Martial Hebert, and Henry Schneiderman. Semi-supervised self-training of object detection models. In Seventh IEEE Workshop on Applications of Computer Vision, 2005.
Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif M. Mohammad, Alan Ritter, and Veselin Stoyanov. SemEval-2015 Task 10: Sentiment analysis in Twitter. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pp. 451-463, 2015.
Mickael Rouvier and Benoit Favre. SENSEI-LIF at SemEval-2016 Task 4: Polarity embedding fusion for robust sentiment analysis. In Proceedings of SemEval, pp. 202-208, 2016.
Aliaksei Severyn and Alessandro Moschitti. Twitter sentiment analysis with deep convolutional neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 959-962. ACM, 2015a.
Aliaksei Severyn and Alessandro Moschitti. UNITN: Training deep convolutional neural network for Twitter sentiment classification. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pp. 464-469. Association for Computational Linguistics, Denver, Colorado, 2015b.
Yirong Shen, Matthias Seeger, and Andrew Y. Ng. Fast Gaussian process regression using kd-trees. In Advances in Neural Information Processing Systems, pp. 1225-1232, 2006.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.
Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. In Workshop contribution at ICLR 2015, 2015.
Yuan Tang. TF.Learn: TensorFlow's high-level module for distributed machine learning. arXiv preprint arXiv:1612.04251, 2016.
Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In International Conference on Artificial Intelligence and Statistics, pp. 567-574, 2009.
Arash Vahdat. Toward robustness against label noise in training deep discriminative neural networks. In NIPS '17, 2017.
Vladimir Vapnik and Rauf Izmailov. Learning using privileged information: Similarity control and knowledge transfer. Journal of Machine Learning Research, 16:2023-2049, 2015.
Vladimir Vapnik and Akshay Vashist. A new learning paradigm: Learning using privileged information. Neural Networks, 22(5):544-557, 2009.
Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
Paroma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu, Christopher De Sa, and Christopher Ré. Socratic learning: Correcting misspecified generative models using discriminative models. arXiv preprint arXiv:1610.08123, 2017.
Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, and Serge Belongie. Learning from noisy large-scale datasets with minimal supervision. In The Conference on Computer Vision and Pattern Recognition, 2017.
Martin J. Wainwright, Michael I. Jordan, and John C. Duchi. Privacy aware learning. In Advances in Neural Information Processing Systems, pp. 1430-1438, 2012.
Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639-655. Springer, 2012.
Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. Distant supervision for relation extraction via piecewise convolutional neural networks. In EMNLP, pp. 1753-1762, 2015.
256,627,383 | POPULATION-SIZE-AWARE POLICY OPTIMIZATION FOR MEAN-FIELD GAMES | In this work, we attempt to bridge the two fields of finite-agent and infinite-agent games, by studying how the optimal policies of agents evolve with the number of agents (population size) in mean-field games, an agent-centric perspective in contrast to the existing works focusing typically on the convergence of the empirical distribution of the population. To this end, the premise is to obtain the optimal policies of a set of finite-agent games with different population sizes. However, either deriving the closed-form solution for each game is theoretically intractable, training a distinct policy for each game is computationally intensive, or directly applying the policy trained in a game to other games is sub-optimal. We address these challenges through the Population-size-Aware Policy Optimization (PAPO). Our contributions are three-fold. First, to efficiently generate efficient policies for games with different population sizes, we propose PAPO, which unifies two natural options (augmentation and hypernetwork) and achieves significantly better performance. PAPO consists of three components: i) the population-size encoding which transforms the original value of population size to an equivalent encoding to avoid training collapse, ii) a hypernetwork to generate a distinct policy for each game conditioned on the population size, and iii) the population size as an additional input to the generated policy. Next, we construct a multi-task-based training procedure to efficiently train the neural networks of PAPO by sampling data from multiple games with different population sizes. Finally, extensive experiments on multiple environments show the significant superiority of PAPO over baselines, and the analysis of the evolution of the generated policies further deepens our understanding of the two fields of finite-agent and infinite-agent games. . Ondemand high-capacity ride-sharing via dynamic trip-vehicle assignment. PNAS, 114(3):462-467, 2017. | [
235313715,
209532006
] | POPULATION-SIZE-AWARE POLICY OPTIMIZATION FOR MEAN-FIELD GAMES
Pengdeng Li
Nanyang Technological University
Singapore
Xinrun Wang
Nanyang Technological University
Singapore
Shuxin Li
Hau Chan [email protected]
Nanyang Technological University
Singapore
University of Nebraska
LincolnUSA
Bo An [email protected]
Nanyang Technological University
Singapore
POPULATION-SIZE-AWARE POLICY OPTIMIZATION FOR MEAN-FIELD GAMES
Published as a conference paper at ICLR 2023
In this work, we attempt to bridge the two fields of finite-agent and infinite-agent games, by studying how the optimal policies of agents evolve with the number of agents (population size) in mean-field games, an agent-centric perspective in contrast to the existing works focusing typically on the convergence of the empirical distribution of the population. To this end, the premise is to obtain the optimal policies of a set of finite-agent games with different population sizes. However, either deriving the closed-form solution for each game is theoretically intractable, training a distinct policy for each game is computationally intensive, or directly applying the policy trained in a game to other games is sub-optimal. We address these challenges through the Population-size-Aware Policy Optimization (PAPO). Our contributions are three-fold. First, to efficiently generate efficient policies for games with different population sizes, we propose PAPO, which unifies two natural options (augmentation and hypernetwork) and achieves significantly better performance. PAPO consists of three components: i) the population-size encoding which transforms the original value of population size to an equivalent encoding to avoid training collapse, ii) a hypernetwork to generate a distinct policy for each game conditioned on the population size, and iii) the population size as an additional input to the generated policy. Next, we construct a multi-task-based training procedure to efficiently train the neural networks of PAPO by sampling data from multiple games with different population sizes. Finally, extensive experiments on multiple environments show the significant superiority of PAPO over baselines, and the analysis of the evolution of the generated policies further deepens our understanding of the two fields of finite-agent and infinite-agent games.
INTRODUCTION
Games involving a finite number of agents have been extensively investigated, ranging from board games such as Go (Silver et al., 2016), Poker (Brown & Sandholm, 2018; Moravčík et al., 2017), and Chess (Campbell et al., 2002) to real-time strategy games such as StarCraft II (Vinyals et al., 2019) and Dota 2 (Berner et al., 2019). However, existing works are typically limited to a handful of agents, which hinders them from broader applications. To break the curse of many agents, mean-field game (MFG) (Huang et al., 2006; Lasry & Lions, 2007) was introduced to study games that involve an infinite number of agents. Recently, benefiting from reinforcement learning (RL) (Sutton & Barto, 2018) and deep RL (Lillicrap et al., 2016; Mnih et al., 2015), MFG provides a versatile framework for modeling games with large populations of agents (Cui & Koeppl, 2022; Fu et al., 2019; Guo et al., 2019; Laurière et al., 2022; Perolat et al., 2021; Perrin et al., 2022; Yang et al., 2018).
Despite the successes of finite-agent games and infinite-agent games, the two fields are largely evolving independently. Establishing the connection between an MFG and the corresponding finite-agent Markov (stochastic) games 1 has been a research hotspot, and it is typically done by showing the convergence of the empirical distribution of the population to the mean-field (Saldi et al., 2018; Cui & Koeppl, 2022; Cui et al., 2022; Fabian et al., 2022). However, few results have been achieved from an agent-centric perspective. Specifically, a fundamental question is: how do the optimal policies of agents evolve with the population size? As the population size increases, finite-agent games approximate, though never exactly match, their infinite-agent counterparts (Cui & Koeppl, 2021; Mguni et al., 2018). Therefore, the solutions returned by methods in finite-agent games should be consistent with those returned by methods in infinite-agent games. As we can never generate finite-agent games with an infinite number of agents, we need to investigate the evolution of the optimal policies of agents, i.e., scaling laws 2, to check the consistency of the methods. However, theoretically investigating the scaling laws is infeasible, as obtaining the closed-form solutions of a set of finite-agent games is typically intractable except for some special cases (Guo & Xu, 2019). Hence, another natural question is: how to efficiently generate efficient policies for a set of finite-agent games with different population sizes? Most methods in finite-agent games can only return the solution of a game with a given number of agents (Bai & Jin, 2020; Jia et al., 2019; Littman et al., 2001). Unfortunately, the number of agents varies dramatically and rapidly in many real-world scenarios. For example, in the Taxi Matching environment (Nguyen et al., 2018; Alonso-Mora et al., 2017), the number of taxis could be several hundred in rush hour while it could be a handful at midnight. Fig. 1 demonstrates the failure of two naive options in this environment: i) directly applying the policy trained for a given population size to other population sizes (PPO-Naive), and ii) training a policy by using the data sampled from multiple games with different population sizes (PPO). Furthermore, computing the optimal policies for games with different population sizes is computationally intensive.
In this work, we propose a novel approach to efficiently generate efficient policies for games with different population sizes, and then investigate the scaling laws of the generated policies. Our main contributions are three-fold. First, we propose PAPO, which unifies two natural methods, augmentation and hypernetwork, and thus achieves better performance. Specifically, PAPO consists of three components: i) the population-size encoding which transforms the original value of population size to an equivalent encoding to avoid training collapse, ii) a hypernetwork to generate a distinct policy for each game conditioned on the population size, and iii) the population size as an additional input to the generated policy. Next, to efficiently train the neural networks of PAPO, we construct a multi-task-based training procedure where the networks are trained by using the data sampled from games with different population sizes. Finally, extensive experiments on multiple widely used game environments demonstrate the superiority of PAPO over several naive and strong baselines. Furthermore, with a proper similarity measure (centered kernel alignment (Kornblith et al., 2019)), we show the scaling laws of the policies generated by PAPO, which deepens our understanding of the two fields of finite-agent and infinite-agent games. By establishing the state-of-the-art for bridging the two research fields, we believe that this work contributes to accelerating the research in both fields from a new and unified perspective.
RELATED WORKS
Our work lies in the intersection of two research fields: learning in Markov games (MGs) and learning in mean-field games (MFGs). Numerous works have tried to study the connection between an MFG and the corresponding finite-agent MGs from a theoretical or computational viewpoint, such as (Saldi et al., 2018;Doncel et al., 2019;Cabannes et al., 2021), to name a few. The general result achieved is either that the empirical distribution of the population converges to the mean-field as the number of players goes to infinity, or that the Nash equilibrium (NE) of an MFG is an approximate NE of the finite-agent game for a sufficiently large number of players, under different conditions such as the Lipschitz continuity of reward/cost and transition functions (Saldi et al., 2018;Cui & Koeppl, 2021;2022;Cui et al., 2022) and/or the convergence of the sequence of step-graphons (Cui & Koeppl, 2022;Cui et al., 2022). Though the advancements in these works provide a theoretical or computational understanding of the connection between the two fields, very few results have been achieved from an agent-centric perspective. More precisely, we aim to answer a fundamental question: how do the optimal policies of agents evolve with the population size? In this direction, the most related work is (Guo & Xu, 2019), which proves the convergence of the NE of the finite-agent game to that of the MFG by directly comparing the NEs. However, it requires the closed-form solutions of both the finite-agent game and the MFG to be computable, where extra conditions such as a convex and symmetric instantaneous cost function and càdlàg (bang-bang) controls are needed. In a more general sense, deep neural networks have been widely adopted to represent the policies of agents due to their powerful expressiveness (Laurière et al., 2022;Perolat et al., 2021;Perrin et al., 2022;Yang et al., 2018). In this sense, to the best of our knowledge, our work is the first attempt to bridge the two research fields of finite-agent MGs and MFGs. Specifically, we identify the critical challenges (theoretical intractability, computational difficulty, and sub-optimality of direct policy transfer) and propose novel algorithms and training regimes to efficiently generate efficient policies for finite-agent games with different population sizes, and then investigate the scaling laws of the policies. For more discussion and related works on the two fields, see Appendices A.2 and A.3.
Our work is also related to hypernetworks (Ha et al., 2016). A hypernetwork is a neural network designed to produce the parameters of another network. It has recently gained popularity in a variety of fields such as computer vision (Brock et al., 2018;Jia et al., 2016;Littwin & Wolf, 2019;Potapov et al., 2018) and RL (Huang et al., 2021b;Rashid et al., 2018). In (Sarafian et al., 2021), a hypernetwork was used to map the state or task context to the Q or policy network parameters. In contrast, in addition to generating a distinct efficient policy for each population size by using a hypernetwork, we also investigate the scaling laws of the generated policies, which has not been done in the literature.
PRELIMINARIES
In this section, we present the game models (finite-agent Markov game and infinite-agent mean-field game) and problem statement of this work.
Finite-agent Markov Game. A Markov Game (MG) with $N < \infty$ homogeneous agents is denoted as $G(N) = (\mathcal{N}, S, A, p, r, T)$. $\mathcal{N} = \{1, \cdots, N\}$ is the set of agents. $S$ and $A$ are respectively the shared state and action spaces. At each time step $t \in \mathcal{T} = \{0, 1, \cdots, T\}$, an agent $i$ in state $s^i_t$ takes an action $a^i_t$. Let $z^N_{s_t}$ denote the empirical distribution of the agent states $s_t = (s^1_t, \cdots, s^N_t)$, with $z^N_{s_t}(s) = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\{s^i_t = s\}$ for all $s \in S$. Then, $z^N_{s_t} \in \Delta(S)$, the probability distribution over $S$. For agent $i$, given the state $s^i_t$, action $a^i_t$, and state distribution $z^N_{s_t}$, its dynamical behavior is described by the state transition function $p: S \times A \times \Delta(S) \to \Delta(S)$, and it receives a reward of $r(s^i_t, a^i_t, z^N_{s_t})$. Let $\pi^i: S \times \mathcal{T} \to \Delta(A)$ be the policy of agent $i$. Accordingly, $\pi = (\pi^i)_{i \in \mathcal{N}}$ is the joint policy of all agents and $\pi^{-i} = (\pi^j)_{j \in \mathcal{N}, j \neq i}$ is the joint policy of all agents except $i$. Given the initial joint state $s$ and joint policy $\pi$, the value function for agent $1 \le i \le N$ is given by
$$V^i(s^i, \pi^i, \pi^{-i}) = \mathbb{E}\left[\sum_{t=0}^{T} r(s^i_t, a^i_t, z^N_{s_t}) \,\middle|\, s^i_0 = s^i,\ s^i_{t+1} \sim p,\ a^i_t \sim \pi^i\right]. \tag{1}$$
A joint policy $\pi^* = (\pi^{i,*})_{i \in \mathcal{N}}$ is called a Nash policy if no agent has an incentive to unilaterally deviate from the others: for any $i \in \mathcal{N}$, $V^i(s^i, \pi^*) \ge \max_{\pi^i} V^i(s^i, \pi^i, \pi^{-i,*})$. Given a joint policy $\pi$, $\text{NASHCONV}(\pi) = \sum_{i=1}^{N} \left[\max_{\hat\pi^i} V^i(s^i, \hat\pi^i, \pi^{-i}) - V^i(s^i, \pi)\right]$ measures the "distance" of $\pi$ to the Nash policy. That is, $\pi$ is a Nash policy if $\text{NASHCONV}(\pi) = 0$.
Mean-Field Game. A Mean-Field Game (MFG) consists of the same elements as the finite-agent MG except that $N = \infty$, which is denoted as $G(\infty) = (S, A, p, r, T)$. In this case, instead of modeling $N$ separate agents, it models a single representative agent and collapses all other (infinitely many) agents into the mean-field, denoted by $\mu_t \in \Delta(S)$. As the agents are homogeneous, the index $i$ is suppressed. Consider a policy $\pi$ as before. Given some fixed mean-field $(\mu_t)_{t\in\mathcal{T}}$ of the population and initial state $s \sim \mu_0$ of the representative agent, the value function for the agent is
$$V(s, \pi, (\mu_t)_{t\in\mathcal{T}}) = \mathbb{E}\left[\sum_{t=0}^{T} r(s_t, a_t, \mu_t) \,\middle|\, s_0 = s,\ s_{t+1} \sim p,\ a_t \sim \pi\right]. \tag{2}$$
$\pi^*$ is called a Nash policy if the representative agent has no incentive to unilaterally deviate from the population (Perolat et al., 2021;Perrin et al., 2020): $V(s, \pi^*, (\mu^*_t)_{t\in\mathcal{T}}) \ge \max_{\hat\pi} V(s, \hat\pi, (\mu^*_t)_{t\in\mathcal{T}})$, where $(\mu^*_t)_{t\in\mathcal{T}}$ is the mean-field of the population following $\pi^*$. Given a policy $\pi$, the NashConv is defined as $\text{NASHCONV}(\pi) = \max_{\hat\pi} V(s, \hat\pi, (\mu_t)_{t\in\mathcal{T}}) - V(s, \pi, (\mu_t)_{t\in\mathcal{T}})$ with $(\mu_t)_{t\in\mathcal{T}}$ being the mean-field of the population following $\pi$. Then, $\pi$ is a Nash policy if $\text{NASHCONV}(\pi) = 0$.
Problem Statement. Though numerous advancements have been achieved for finite-agent MGs and infinite-agent MFGs, the two fields are largely evolving independently (we provide more detailed discussion on the connection between MGs and MFGs in Appendix A.2). Bridging the two fields can contribute to accelerating the research in both. There are two closely coupled questions: how do the optimal policies of agents evolve with the population size, i.e., scaling laws, and how can we efficiently generate efficient policies for finite-agent MGs with different population sizes? Formally, let $\mathcal{G} = \{G(\underline{N}), \cdots, G(N), \cdots, G(\overline{N})\}$ denote a set of MGs, where $\underline{N}$ and $\overline{N}$ denote the minimum and maximum number of agents. Let $\pi^*_N$ denote the Nash policy of a game $G(N)$. Let $\rho(N) = \rho(\pi^*_N, \pi^*_{N+1})$ denote some measure capturing the difference between the Nash policy of $G(N)$ and that of $G(N+1)$ (see Appendix A.5 for more discussion on $\rho$). Then, one question is how $\rho(N)$ changes with $N$. However, directly investigating the evolution of $\rho(N)$ is infeasible, as we would need to obtain the Nash policy $\pi^*_N$ for every game $G(N)$. In addition, directly applying the Nash policy $\pi^*_N$ to a game $G(N')$ with $N' \neq N$ could result in worse (or arbitrarily worse) performance, i.e., a large (or arbitrarily large) $\text{NASHCONV}(\pi^*_N)$ for $G(N')$, as shown in Fig. 1. Thus, another question is how to efficiently generate a policy $\pi_N$ that works well for each game $G(N) \in \mathcal{G}$, i.e., with small $\text{NASHCONV}(\pi_N)$, though it may not be the exact Nash policy, i.e., possibly $\pi_N \neq \pi^*_N$. Unfortunately, most existing methods can only return the Nash policy for the game with a given $N$, which hinders them from many real-world applications where the number of agents varies dramatically and rapidly, e.g., the number of taxis could be several hundred in rush hour while it could be a handful at midnight (Alonso-Mora et al., 2017). Furthermore, learning the Nash policies of all the games in $\mathcal{G}$ is computationally intensive. Therefore, our objective is to develop novel methods that can efficiently generate efficient policies for the games in $\mathcal{G}$ and enable us to investigate the scaling laws.
POPULATION-SIZE-AWARE POLICY OPTIMIZATION
We start by introducing the basic structure of the policy network of a representative agent. Let $\pi_\theta$ denote the agent's policy parameterized by $\theta \in \Theta$. Note that $\theta$ can be any type of parameterization, such as direct policy parameterizations (Leonardos et al., 2022) or neural networks (Agarwal et al., 2021). In this work, we use a neural network with three layers to represent the policy, i.e., $\theta = (\theta_1, \theta_2, \theta_3)$ with $\theta_l = (w_l, b_l, g_l)$ denoting the vectors of weights ($w_l$), biases ($b_l$), and scaling factors ($g_l$) of layer $1 \le l \le 3$. In practice, $\theta$ is typically trained by using RL algorithms (see Sec. 4.3 for the practical implementation). However, as mentioned in the problem statement, training a distinct policy for each game is computationally intractable, and naively applying a policy trained under a given $N$ to other games is sub-optimal (as shown in Fig. 1). To address these challenges, we propose a new policy optimization approach, Population-size-Aware Policy Optimization (PAPO), which is shown in Fig. 2.
POPULATION-SIZE ENCODING
As PAPO is population-size dependent, one may choose to directly take $N$ as the input (we call this the raw encoding (RE, for short)). However, this can work poorly in practice. In many real-world applications, the population size can be large and vary dramatically and rapidly, e.g., the total number of taxis in Manhattan is around 13,000, and in one day it can fluctuate from a handful at midnight to several hundred in rush hour (Alonso-Mora et al., 2017). When directly taking population sizes as inputs, larger values of $N$ may contribute more to the network's output error and dominate the update of the network. This could degrade the performance or, more severely, collapse the training (as observed in our experiments, where the training of PAPO with RE collapses).
To address the aforementioned issue, we propose the population-size encoding to pre-process $N$. We use binary encoding (BE, for short) to equivalently transform the original decimal representation of $N$ into a binary vector of size $k > 0$. Formally, given $N$, we obtain a vector $e(N) = [e_1, \cdots, e_k]$ such that $N = \sum_{j=1}^{k} 2^{k-j} e_j$, where $e_j \in \{0, 1\}$. After that, the encoding is further mapped to an embedding (through an embedding layer parameterized with $\eta$) which is fed to the policy network and hypernetwork as shown in Fig. 2. With a slight abuse of notation, throughout this work, we simply use $N$ to represent the embedding of $e(N)$ if it is clear from the context.
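A minimal sketch of BE (ours, for illustration; the vector size $k$ is an assumption and only needs to satisfy $2^k$ greater than the maximum population size):

```python
# A minimal sketch of the population-size encoding (BE): the decimal
# population size N is transformed into a k-bit binary vector e(N) such
# that N = sum_j 2^{k-j} * e_j. The choice k = 8 is illustrative and
# covers N <= 255.
def binary_encode(N: int, k: int = 8) -> list:
    assert 0 <= N < 2 ** k, "k too small for this population size"
    return [(N >> (k - 1 - j)) & 1 for j in range(k)]

# Example: binary_encode(13) == [0, 0, 0, 0, 1, 1, 0, 1], i.e., 8 + 4 + 1.
```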
POPULATION-SIZE-AWARE ARCHITECTURE
The critical insight of our approach is to leverage the population-size information to generate efficient policies for the games in G. In view of the fact that none of the existing works has addressed a similar problem as ours, we first propose two natural methods: augmentation and hypernetwork.
Augmentation. That is, we concatenate N with s t and then pass the resulting input to the policy. A similar idea has been adopted in the literature to address the problem of policy generalization. For example, in (Perrin et al., 2022), the state is concatenated with the initial mean-field to endow the policy with the capability of generalizing across the mean-field space. In our context, by augmenting the state with N , the policy can work well across games with different N 's.
Hypernetwork. That is, we generate a distinct policy for each game by mapping N to the policy network parameters through a hypernetwork (Ha et al., 2016;Sarafian et al., 2021). This choice is completely different from the augmentation as the game-level information (population size in our context) is fully disentangled from the policy input. More specifically, let h β denote a hypernetwork parameterized with β. h β takes N as input and outputs a set of parameters (weights, biases, and scaling factors) which will be reshaped to form the structure of the policy network. Formally, we have θ = h β (N ). In the hypernetwork, three independent heads are used to generate the parameters of the three layers of the policy network (more details of the network architecture such as the number of neurons of each layer can be found in Appendix B.1).
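To make the construction concrete, below is a minimal PyTorch sketch (our reading of the description above, not the paper's released code) of one hypernetwork head that maps the population-size embedding to the weights, biases, and scaling factors of a single policy layer; all layer sizes are illustrative assumptions, and the full model uses three such heads sharing one trunk:

```python
import torch
import torch.nn as nn

# A minimal PyTorch sketch (our reading, not the paper's code) of one
# hypernetwork head: it maps the population-size embedding to the weights,
# biases, and scaling factors of a single policy layer.
class HyperHead(nn.Module):
    def __init__(self, emb_dim=128, in_dim=128, out_dim=128):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # shared trunk of the hypernetwork (two FC layers per Appendix B.1)
        self.trunk = nn.Sequential(nn.Linear(emb_dim, 128), nn.ReLU(),
                                   nn.Linear(128, 128), nn.ReLU())
        self.w_head = nn.Linear(128, in_dim * out_dim)  # generated weights
        self.b_head = nn.Linear(128, out_dim)           # generated biases
        self.g_head = nn.Linear(128, out_dim)           # generated scaling factors

    def forward(self, pop_emb):
        z = self.trunk(pop_emb)
        w = self.w_head(z).view(self.in_dim, self.out_dim)
        return w, self.b_head(z), self.g_head(z)

# Usage: the generated parameters are applied functionally to the layer input x:
#   w, b, g = head(pop_emb); x_next = torch.relu(x @ w * (1 + g) + b)
```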
Although the above two methods can obtain efficient policies for the games in $\mathcal{G}$, there is no affirmative answer to the question of which one is better, as they achieve the goal from different angles: augmentation uses a fixed set of policy network parameters and takes elements from the Cartesian product of two domains ($S$ and $\{2, 3, \cdots\}$) as inputs, while the hypernetwork generates a distinct set of policy network parameters for each game. In this work, instead of trying to answer this question, we propose a unified framework, PAPO, which encompasses the two options as special cases. Specifically, as shown in Fig. 2, PAPO uses a hypernetwork to generate the parameters of the policy network conditioned on $N$ and also takes $N$ as an additional input to the generated policy.
Intuitively, PAPO preserves the merits of the two special cases and possibly achieves better performance. On one hand, it induces a hierarchical information structure. To a certain extent, it decouples the game-level information from the policy by introducing a hypernetwork to encode the varying elements across different games. In the context of our work, the only difference between the games in G is the population size while other game elements such as the state and action spaces are identical. Thus, PAPO can generate a distinct policy (or in other words, a specialized policy) for each game conditioned on the population size. On the other hand, once the policy is generated, PAPO becomes the augmentation, which, as mentioned before, can work well in the corresponding game.
Furthermore, we can investigate the scaling laws of the generated policies by studying the difference between them. Formally, let $\pi_{\theta = h_\beta(N)}$ and $\pi_{\theta = h_\beta(N+1)}$ be two policies generated by PAPO for the two games with $N$ and $N+1$ agents, respectively. Then, the difference, denoted as $\rho(N) = \rho(\pi_{\theta=h_\beta(N)}, \pi_{\theta=h_\beta(N+1)})$, as a function of $N$, characterizes the evolution of the policies with the population size. In practice, one can choose any form of $\rho(N)$. In experiments, we identify a proper measure (a similarity measure) that can be used to achieve this goal.
PRACTICAL IMPLEMENTATION
Notice that PAPO is a general approach. In this section, we specify the practical implementations of the modules in PAPO and construct a training procedure to efficiently train the neural networks.
Algorithmic Framework. We implement PAPO following the framework of PPO (Schulman et al., 2017) as it is one of the most popular algorithms for solving decision-making problems. That is, in addition to the policy network θ (i.e., actor), a value network φ (i.e., critic) is used to estimate the value of a state and also generated by a hypernetwork. Then, the networks (hypernetworks and embedding layers) are trained with the following loss function (see Appendix B.2 for details):
$$L_t(\beta_A, \beta_C) = \mathbb{E}\left[L^1_t(\theta = h_{\beta_A}(N)) - c_1 L^2_t(\phi = h_{\beta_C}(N)) + c_2 H(\pi_{\theta = h_{\beta_A}(N)})\right]. \tag{3}$$
As the agents learn their policies independently, the policy learning in this work falls in the category of using independent RL to find Nash equilibrium (Ozdaglar et al., 2021). Recent works (Ding et al., 2022;Fox et al., 2021;Leonardos et al., 2022) have shown that independent policy gradient can converge to the Nash equilibrium under some conditions, which, to some extent, provides support to our work. More discussion can be found in Appendix A.1.
Training Procedure. To make PAPO capable of generating efficient policies for the games in the set G, a trivial idea is to train the networks sequentially over G, i.e., train the networks until they converge in one game and then proceed to the next. Unfortunately, such a method is computationally intensive and could suffer catastrophic forgetting, i.e., the networks focus on the learning in the current game. A more reasonable approach is to train the networks by using the data sampled from all the games (a way similar to multi-task learning (Zhang & Yang, 2021)). Inspired by this idea, we construct a multi-task-based training process to efficiently train the networks. At the beginning of an episode, a game G(N ) is uniformly sampled from G. Then, PAPO takes N as input and generates a policy which is used by the agents to interact with the environment. Finally, the experience tuples are used to train the networks. More details can be found in Appendix B.3.
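A schematic sketch of this procedure (our paraphrase of Algorithm 1 in Appendix B.3; `papo`, `buffer`, `make_env`, and `rollout` are hypothetical interfaces, not the paper's code):

```python
import random

# A schematic sketch of the multi-task-based training procedure: games are
# sampled uniformly from the target set G, and PAPO's hypernetworks and
# embedding layers are updated on the pooled experience.
def train_papo(papo, buffer, make_env, rollout, N_min=2, N_max=200,
               episodes=20_000_000, update_every=10):
    for ep in range(episodes):
        N = random.randint(N_min, N_max)        # sample a game G(N) uniformly
        env = make_env(N)                       # instantiate the N-agent game
        policy = papo.generate_policy(N)        # theta = h_beta(N)
        buffer.extend(rollout(env, policy, N))  # T experience tuples per episode
        if (ep + 1) % update_every == 0:        # every E episodes
            papo.update(buffer)                 # K epochs on L_t(beta_A, beta_C)
            buffer.clear()
```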
EXPERIMENTS
We first discuss the environments, baselines, metric used to assess the quality of learning, similarity measure used to investigate the scaling laws, and then present the results and ablation studies.
Environments. We consider the following environments, which have been widely used in previous works: Exploration (Laurière et al., 2022), Taxi Matching (Nguyen et al., 2018), and Crowd in Circle (Perrin et al., 2020). Details of these environments can be found in Appendix C.1.
Baselines.
(1) PPO: the standard deep RL method.
(2) AugPPO: augments the input of PPO with the population size.
(3) HyperPPO: PAPO without the additional population-size input to the generated policy. (4) PPO-Large: PPO with the number of neurons of each layer increased such that the total number of learnable parameters is similar to that of PAPO. (5) AugPPO-Large: AugPPO enlarged in the same way as PPO-Large. The last two baselines are necessary to show that any performance gain is not due to the additional number of learnable parameters, but due to the architecture of PAPO. In addition, all the baselines are trained with the same training procedure as PAPO, which ensures a fair comparison with PAPO. More details on baselines and hyperparameters are given in Appendix C.2 and Appendix C.3, respectively.
Approximate NashConv. Computing the exact NashConv is typically intractable in complex games, as it requires obtaining the exact best response (BR) policy of the representative agent. Instead, we obtain an approximate BR by training it for a large enough number of episodes ($10^6$) and thus obtain an approximate NashConv, which is the common choice in the literature (Laurière et al., 2022;Perrin et al., 2020). More details are given in Appendix C.4. Furthermore, in Appendix D.2, we show the BR training curves to demonstrate that the BR has approximately converged, ensuring that the approximate NashConv is a reasonable metric to assess the quality of learning.
Similarity Measure. Intuitively, two policies are similar means that their output features (representations) of the corresponding layers (not their parameter vectors) are similar, given the same input data. Particularly, the outputs of the final layer are closely related to an agent's decision-making (behavior). Therefore, we use the centered kernel alignment (CKA) (Kornblith et al., 2019) to measure the similarity between two policies. More details can be found in Appendix C.5.
[Figure 3: (A) Approximate NashConv of PPO, PPO-Large, AugPPO, AugPPO-Large, HyperPPO, and PAPO (Ours) against the population size in Exploration, Taxi Matching, and Crowd in Circle. (B) CKA similarity of the input, hidden, and output layers between policies generated for consecutive population sizes, with fitted curves and the fitted parameters $a$ and $b$ together with their p-values.]
RESULTS
In Fig. 1, we show the results obtained in a small-scale setting (2 ≤ N ≤ 50) in Taxi Matching environment to illustrate our motivation. There are two most straightforward options to obtain policies for different games: i) directly apply the policy trained in a given game (N = 20, marked by star) to other games, i.e., PPO-Naive, and ii) train a single policy by using the training procedure in Sec. 4.3 and then apply it to the evaluated games, i.e., PPO. At first glance, it might be natural to expect PPO to work well across different games when it is trained by using the data sampled from different games. However, as shown in the results, though it outperforms PPO-Naive, it still has a large approximate NashConv when evaluating in different games. In striking contrast, PAPO works well across different games. In addition, given the same training budget, PAPO still preserves a similar performance in the game (N = 20) where PPO-Naive was trained.
In Fig. 3, we show the results obtained in a larger-scale setting (2 ≤ N ≤ 200) in different environments. From Panel A, we can draw the following conclusions.
(1) PAPO significantly outperforms other naive and strong baselines in terms of approximate NashConv across different games, which demonstrates that PAPO has achieved one of our goals -efficiently generating efficient policies for a set of finite-agent games.
(2) As two special cases of PAPO, AugPPO and HyperPPO can work well across different games. However, they are competitive, which verifies that there is no affirmative answer to the question of which one is better (Sec. 4.2 and Appendix D.1).
(3) Given the same training budget, simply increasing the number of learnable parameters of the policy (PPO-Large and AugPPO-Large) or relying only on the hypernetwork to generate the policies (HyperPPO), though it can outperform PPO, still struggles to achieve better performance. This shows the superiority of PAPO as a unified framework which inherits the merits of the two special cases (AugPPO/AugPPO-Large and HyperPPO) and thus achieves better performance.
In Panel B, we show how the policies generated by PAPO change with the population size. From the results, we can see that the similarity between two policies increases with the population size. In other words, as the population size grows, the policies in games with different population sizes tend to become similar to each other, which shows a convergent evolution of the policies in terms of population size. Though it is impractical to experimentally set $N = \infty$, from the results we hypothesize that the similarity measure increases to 1 as $N \to \infty$ (the finite-agent game would become an infinite-agent MFG when $N = \infty$). To quantitatively describe the scaling laws, we show the curves fitting the similarity, obtained by employing the tools for curve fitting in NumPy (Harris et al., 2020). We consider a function of the form $\rho(N) = 1 - \frac{a}{N^b} \le 1$, which maps $N$ to the similarity measure. The fitted curves again verify the aforementioned conclusions. Furthermore, we compute the p-value of each parameter of the function ($a$ and $b$), which shows that the curves fit the original CKA values well ($p < 0.05$). Notice that there could be different forms of $\rho(N)$. For example, a more general case is $\rho(N) = 1 - \frac{a}{cN^b + d} \le 1$. However, by computing the p-values, we found that some parameters ($c$ and $d$) are less significant and hence can be removed from $\rho(N)$. We give a more detailed analysis in Appendix D.3.
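As an illustration, the fit can be reproduced along the following lines (a minimal sketch; we use scipy.optimize.curve_fit here for concreteness, and the data arrays are illustrative placeholders, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# A minimal sketch of the scaling-law fit: fit rho(N) = 1 - a / N^b to the
# measured CKA similarities. `pop_sizes` and `cka_values` are placeholders.
def rho(N, a, b):
    return 1.0 - a / np.power(N, b)

pop_sizes = np.array([10, 20, 50, 100, 150, 200], dtype=float)
cka_values = np.array([0.62, 0.75, 0.88, 0.93, 0.95, 0.96])
(a, b), cov = curve_fit(rho, pop_sizes, cka_values, p0=(1.0, 0.5))
stderr = np.sqrt(np.diag(cov))  # standard errors, from which p-values follow
```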
In addition, we observed that the variance of the similarity of the input layer is small while that of the output layer is large (especially when N is small). This coincides with the well-known conclusion in deep learning (Alzubaidi et al., 2021;LeCun et al., 2015): the first layer of a neural network extracts the lowest-level features while the last layer extracts the highest-level features. In our context, the only difference between two games is the number of agents while the underlying environmental setup (e.g., grid size) is the same. Hence, the low-level features extracted by the first layers of the policies are similar. In contrast, as the output layers capture the features of agents' decision makings which are largely impacted by the presence of other agents (more precisely, the number of agents), the extracted high-level features could be very different in different games (especially for small N 's).
To gain more intuition on how the generated policies are different from each other, in Panel C, we perform an analysis of the policies' output representations similar to (Jaderberg et al., 2019). We manually construct some high-level game states such as "agent on the upper-right/bottom-left of the map" and "agent on the center of the circle" and obtain the output representations of the policy of each game G(N ) by using UMAP (McInnes et al., 2018). We colored the representations of some policies: N ∈ {5, 10, 15, 20} and N ∈ {185, 190, 195, 200} to represent small-scale and large-scale settings, respectively. From the results, we can see that when N is large, the policies of different games tend to encode the same game states in similar ways (i.e., the UMAP embeddings of the same game state of different policies tend to be centered), which is aligned with the findings revealed by the CKA values. We give more analysis in Appendix D.4.
ABLATIONS
To gain a deeper understanding of the proposed approach and further support the results obtained in the previous section, we provide more discussion in this section and Appendix D.
Effect of Population-size Encoding. In Fig. 4, we show the training curves of PAPO when using RE (i.e., directly using $N$ as the input). We observe that using RE can result in training collapse. In the Taxi Matching environment, though PAPO with RE still obtains positive rewards, they are lower than those of PAPO with BE. The results demonstrate the importance of the population-size encoding for training the neural networks of PAPO. A more detailed analysis can be found in Appendix D.5.
Figure 4: Collapsed training of PAPO when using RE.
Generalization to Unseen Games. In Fig. 5, we show the performance of applying PAPO and the other baselines to unseen games. We found that, though the population-size-aware methods (PAPO, HyperPPO, AugPPO, and AugPPO-Large) can perform better than the population-size-unaware methods (PPO and PPO-Large) in some of the unseen games, their performance fluctuates and can be much worse. Therefore, an important future direction is to improve the generalizability of PAPO (as well as the other population-size-aware methods), where more advanced techniques such as adversarial training could be explored.
CONCLUSIONS AND FUTURE WORKS
In this work, we attempt to bridge the two research fields of finite-agent and infinite-agent games from an agent-centric perspective with three main contributions: i) a novel policy optimization approach, PAPO, to efficiently generate efficient policies for a set of games with different population sizes, ii) a multi-task-based training procedure to efficiently train the neural networks of PAPO, and iii) extensive experiments to demonstrate the significant superiority of our approach over several naive and strong baselines, and the analysis of the scaling laws of the policies to further deepen our understanding of the two research fields of finite-agent and infinite-agent games.
There are several future directions. (1) Beyond a single type of agents: in real-world scenarios, the agents can be divided into multiple types (Ganapathi Subramanian et al., 2020;Yang et al., 2020a), e.g., in Taxi Matching.
A LEARNING IN MARKOV GAMES
In this section, we provide more discussion on learning in Markov games. First, we present some discussion and analysis on independent learning for finding Nash equilibria of Markov games. Then, we provide more discussion on the connection between MGs and MFGs presented in this work to further facilitate the understanding of our proposal.
A.1 CONVERGENCE ANALYSIS OF INDEPENDENT POLICY GRADIENT
In a given game, as all the agents learn their policies independently, the policy learning in this work falls in the category of using independent RL to find Nash equilibria (Ozdaglar et al., 2021). Recent works (Ding et al., 2022;Fox et al., 2021;Leonardos et al., 2022) have shown that independent policy gradient can converge to the Nash equilibrium under some conditions. In this section, we provide more analysis on the convergence of independent policy gradient in MGs, which, to some extent, provides support to our approach.
A.1.1 MARKOV POTENTIAL GAMES
In the standard definition of the Markov game (Littman, 1994;Shapley, 1953;Ozdaglar et al., 2021;Ding et al., 2022;Fox et al., 2021;Leonardos et al., 2022), agents are typically assumed to have access to the global state. In the context of our work, the global state is the joint state of all agents $s_t = (s^1_t, \cdots, s^N_t) \in S^N$. In this sense, a Markov game $G(N)$ is called a Markov potential game (MPG) if there exists a (global) state-dependent potential function $\Phi_s: \Pi \to \mathbb{R}$ such that
$$\Phi_s(\pi^i, \pi^{-i}) - \Phi_s(\hat\pi^i, \pi^{-i}) = V^i(s, \pi^i, \pi^{-i}) - V^i(s, \hat\pi^i, \pi^{-i}), \tag{4}$$
for all agents $i \in \mathcal{N}$, states $s \in S^N$, and policies $\pi^i, \hat\pi^i \in \Pi^i$, $\pi^{-i} \in \Pi^{-i}$, where $\Pi^i$ is the policy space of agent $i$ and $\Pi = \times_{i\in\mathcal{N}} \Pi^i$ ($\Pi^{-i} = \times_{j\in\mathcal{N}, j\neq i} \Pi^j$) is the space of joint policies of all agents (of all agents except $i$).
As one of the most important and widely studied classes of games, MPGs have gained much research attention recently due to their power in modeling mixed cooperative/competitive environments and the convergence guarantee of independent policy gradient in these games (Ding et al., 2022;Fox et al., 2021;Leonardos et al., 2022).
A.1.2 INDEPENDENT POLICY GRADIENT
Assume that all agents update their policies according to gradient ascent (GA) on their policies independently, i.e., only local information such as each agent's own rewards, actions, and observations (the agent's local state $s \in S$) is used during the learning process (Leonardos et al., 2022). The GA is given by
$$\pi^{i,k+1} := \pi^{i,k} + \delta \nabla_{\pi^i} V^i(\pi^{i,k}), \quad \forall i \in \mathcal{N}, \tag{5}$$
where $k$ indicates the $k$-th iteration of the policy. Note that the policy $\pi^{i,k+1}$ after the $k$-th update always lies in its policy space $\Pi^i$, i.e., $\pi^{i,k+1}(\cdot|s) \in \Delta(A)$ for all $s \in S$. If not, a projection operation can be applied to ensure this condition. We omit such an operation for simplicity. Furthermore, we assume that all agents $i \in \mathcal{N}$ use the direct policy parameterization with $\alpha$-greedy exploration as follows:
$$\pi^i(a|s) = (1-\alpha)x_{i,s,a} + \frac{\alpha}{|A|}, \tag{6}$$
where $x_{i,s,a} \ge 0$ for all states $s \in S$ and actions $a \in A$, and $\sum_{a\in A}\pi^i(a|s) = 1$ for all $s \in S$. Essentially, $\pi^i(s) = (\pi^i(a|s))_{a\in A}$ is a mixed strategy in state $s$, and $\alpha$ is the exploration parameter.
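For concreteness, a tiny numpy sketch (ours) of this parameterization:

```python
import numpy as np

# A tiny sketch of the direct policy parameterization with alpha-greedy
# exploration in Eq. (6): pi(a|s) = (1 - alpha) * x_{s,a} + alpha / |A|.
def alpha_greedy_policy(x, alpha=0.1):
    """x: |S| x |A| array whose rows are probability distributions over actions."""
    return (1 - alpha) * x + alpha / x.shape[1]  # each row still sums to 1
```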
In practice, the exact gradient $\nabla_{\pi^i} V^i(\pi^{i,k})$ is typically unavailable, so agents use stochastic gradient ascent (SGA) to update their policies. Specifically, the exact gradient is replaced by an estimator, denoted as $\hat\nabla^k_{\pi^i}$, which is typically derived from a batch of observations collected through interactions with the game environment by employing the policy $\pi^{i,k}$ at the $k$-th iteration. The commonly used (REINFORCE) gradient estimator is as follows:
$$\hat\nabla^k_{\pi^i} = R^i \sum_{t=0}^{T} \nabla \log \pi^{i,k}(a_t|s_t), \tag{7}$$
where $R^i = \sum_{t=0}^{T} r_t$ is the sum of rewards of agent $i$ along the trajectory collected by using the policy $\pi^{i,k}$ at the $k$-th iteration. The SGA is then given by
$$\pi^{i,k+1} := \pi^{i,k} + \delta \hat\nabla^k_{\pi^i}, \quad \forall i \in \mathcal{N}. \tag{8}$$
A.1.3 CONVERGENCE ANALYSIS
In this section, we briefly discuss the convergence of independent policy gradient in MGs, following the results obtained in (Leonardos et al., 2022).

Proposition A.1.1. If the reward function is a global signal, i.e., $r(s^i, a^i, z^N_s) = r(z^N_s)$ for all $i \in \mathcal{N}$, $s^i \in S$, and $a^i \in A$, then the finite-agent MG $G(N)$ is an MPG.
Proof. The result can be derived by showing that the conditions in Proposition 3.2 of (Leonardos et al., 2022) are satisfied. Specifically, as the agents share a global reward $r(z^N_s)$, $G(N)$ is a potential game at each (global) state $s \in S^N$, i.e., the potential function is the global reward function $\phi_s(a) = r(z^N_s)$. Therefore, each agent $i$'s instantaneous reward decomposes as $r(s^i, a^i, z^N_s) = \phi_s(a) + u^i_s(a^{-i})$, where the dummy $u^i_s(a^{-i}) \equiv 0$ trivially satisfies the condition:
$$\nabla_{\pi^i} \mathbb{E}_{\tau\sim\pi}\left[\sum_{t=0}^{T} u^i_{s_t}(a^{-i}_t) \,\middle|\, s_0 = s\right] = (c_s \mathbf{1})_{s\in S^N}, \tag{9}$$
where $a$ and $a^{-i}$ are respectively the joint actions of all agents and of all agents other than $i$, $c_s \in \mathbb{R}$, $\mathbf{1} \in \mathbb{R}^{|A|}$, and $\tau$ is the trajectory generated when all agents follow the joint policy $\pi$.
Note that here we only give a brief discussion of the connection between our work and the recent advances in the convergence of independent policy gradient, which is not the focus of this work. Also note that the above convergence is established under the direct policy parameterization of $\pi_\theta$. In practice, such a choice is typically less expressive than using a neural network to represent the policy. However, establishing a convergence guarantee for neural-network-type parameterizations is extremely hard, if not impossible, because neural networks are typically highly non-linear and non-convex/concave. Nonetheless, empirical verification can be conducted by computing the (approximate) NashConv of a trained policy $\pi_\theta$, i.e., if the (approximate) NashConv is 0, then the policy $\pi_\theta$ is a Nash policy, regardless of the type of parameterization.
In our experiments, we use neural networks to represent the policies of agents. In this context, as the (approximate) NashConv measures the distance of a policy to the Nash policy, we train/generate a policy that has a lower (approximate) NashConv, which shows the potential (though not exact) convergence of our approach.
A.2 CONNECTION BETWEEN MG AND MFG
To facilitate the understanding of our proposal, we give some remarks on the connection between the MG and MFG described in Sec. 3.
We attempt to bridge the two research fields (finite-agent MG and infinite-agent MFG) from a finite-to-infinite perspective. That is, we try to envision how the policy of an MFG will behave by investigating a series of finite-agent MGs. In this sense, the finite-agent MGs we consider should share a similar structure with the MFG, i.e., all agents are homogeneous. Though such a game-level connection between MG and MFG is somewhat implicit, the MFG provides us the guidance to generate the "correct" finite-agent MGs, i.e., games that differ only in the number of agents while other elements such as the state and action spaces are kept identical. Notice that directly generating a game with an infinite number of agents is impractical, as it would require infinite computing resources to simulate the behaviors of an infinite number of agents.
On the policy-level connection, we note that our proposed PAPO and the methods for MFGs are not directly comparable. In view of the finite-to-infinite perspective, to implement a fair comparison, we would naturally first need to generate a game with an infinite number of agents and then train the networks of PAPO by using the data sampled from this game. However, we can neither generate such a game, as it would require infinite computing resources to simulate the behaviors of an infinite number of agents, nor train the networks, as PAPO would need to take $N = \infty$ as input. In this sense, our experiments do not (or more precisely, cannot) implement such a comparison, but they provide insights into the relationship between the policies generated by PAPO and the policy of the MFG.
On the other hand, much can also be done from an infinite-to-finite perspective that is opposite to ours. This is grounded on the well-known fact that a policy that gives a mean-field equilibrium also gives an $\epsilon(N)$-Nash equilibrium for the corresponding game with $N$ agents, where $\epsilon(N)$ goes to 0 as $N$ goes to infinity (Saldi et al., 2018). In this sense, one can first learn a policy in the MFG (under the assumption that the algorithm has access to a simulator which takes the current state, action, and mean-field as inputs and outputs the reward, next state, and next mean-field (Guo et al., 2019)) and then apply it to finite-agent games with different $N$'s. Essentially, this is analogous to the PPO-Naive mentioned in the Introduction. Differently, here the policy is trained in the MFG, while in PPO-Naive it is trained in a finite-agent MG. As a consequence, the (approximate) NashConv could be arbitrarily large when $N$ decreases from large to small (ideally, from $N = \infty$ to $N = 2$). In this sense, it is of interest to propose novel methods to mitigate the loss of applying the policy learned in the MFG to finite-agent MGs, which we leave for future works.

A.3 MORE RELATED WORKS

Learning in Markov Games (MGs). Many results for learning in MGs are established for specific classes of games (Chen et al., 2021;Huang et al., 2021a;Jin et al., 2021;Xie et al., 2020), including potential games (Ding et al., 2022;Fox et al., 2021;Leonardos et al., 2022), or are built on techniques for learning single-agent Markov decision processes (Bai et al., 2021;Liu et al., 2021). Our work adds to the vast body of existing works on learning in MGs. Instead of concentrating on the policy learning in a given MG, we aim to investigate how the optimal policies of agents evolve with the population size, i.e., scaling laws (Kaplan et al., 2020;Kello et al., 2010;Lobacheva et al., 2020), and propose novel methods to efficiently generate policies that work well across games with different population sizes, given that the agents are homogeneous, which is a common phenomenon in many real-world domains such as crowd modeling (Yang et al., 2020b).

Learning in Mean-Field Games (MFGs). In contrast to finite-agent MGs, MFGs consider the case with an infinite number of agents. Since introduced in (Huang et al., 2006) and (Lasry & Lions, 2007), the MFG has gained intensive research attention due to its power in modeling games involving a large population of agents (Achdou & Laurière, 2020;Gomes et al., 2014;Lauriere, 2021;Ruthotto et al., 2020). Recently, the capability of MFGs has been further revealed by benefiting from RL (Fu et al., 2019;Guo et al., 2019;Perolat et al., 2021;Perrin et al., 2020) and deep RL (Laurière et al., 2022;Perrin et al., 2022;Subramanian & Mahajan, 2019;Yang et al., 2018). Despite this progress, the evolution of finite-agent games to infinite-agent MFGs is poorly understood and unexplored when the policies of agents are represented by deep neural networks. In this sense, our work makes the first attempt to bridge the two research fields by establishing a novel tool which we call PAPO.
A.4 MORE DISCUSSION ON THE TERM "SCALING LAWS"
The terminology "Scaling Law" has been widely used in different areas including biology, physics, social science, and computer science. It typically describes the functional relationship between two quantities. In this sense, we note that the two quantities are typically problem-dependent, e.g., the performance of a model and the model size (Kaplan et al., 2020), the fluctuations in the number of messages sent by members in a communication network and their level of activity (Rybski et al., 2009), the probability that a vertex in a social network interacts with other vertices and the k other vertices (Barabási & Albert, 1999), to name a few.
In this work, the two quantities are: (1) the behavior of the policy network, and (2) the number of agents. In fact, this is conceptually similar to the scaling laws in social networks, which investigate how some quantity (e.g., a property of a social network such as its connectivity) changes with the number of vertices (typically, a vertex stands for an individual, i.e., an agent) (Barabási & Albert, 1999). Our work follows a similar idea but focuses on investigating how the behavior of the policy network changes with the number of agents. Therefore, the term "Scaling Law" is suited to this work as, in a more general sense, it can be used to describe the relationship between any two quantities determined by the specific problem at hand, not only the size of the model or training set or the amount of compute used for training (Kaplan et al., 2020). Furthermore, in contrast to areas such as natural language processing (NLP), computer vision (CV), and single-agent RL, which typically consider a single model, it is natural to consider the scaling laws of the policy with respect to the number of agents in multi-agent systems (MAS).
A.5 MORE DISCUSSION ON SIMILARITY MEASURE
In this section, we give more discussion on the similarity measure employed in this work. First, we answer an important question as follows:
What is the most appropriate measure for studying the scaling laws in the context of this work?
In this work, we aim to investigate how the Nash policy $\pi^*_N$ changes with the population size $N$. However, it is non-trivial to determine an appropriate measure to capture the evolution of $\pi^*_N$ with $N$. There are two intuitive choices:
(1) The performance of $\pi^*_N$, similar to (Kaplan et al., 2020). However, we note that taking the performance as the measure is suitable only when the underlying task is identical, as in (Kaplan et al., 2020). In our work, the games with different $N$'s are essentially different tasks, which means that the performance (approximate NashConv) is not an appropriate measure. This is also the difference between our work and some machine learning (ML) works such as (Kaplan et al., 2020).
(2) The difference between $\pi^*_N$ and a fixed reference policy $\hat\pi$. Such a measure may be unable to show how $\pi^*_N$ changes with $N$, because all $\pi^*_N$'s could have the same difference. For example, suppose that $\pi^*_N$ and $\hat\pi$ have two parameters (2 dimensions). Then, when all $\pi^*_N$'s form a circle whose centroid is the fixed reference policy $\hat\pi$, all $\pi^*_N$'s have the same distance to $\hat\pi$, yet they differ from one another.
Thus, considering an "absolute" quantity (performance of π * N or difference between π * N and a fixed reference policyπ) could be struggling in studying the scaling laws of the Nash policies π * N . Instead, we consider the "relative" change between the Nash policies, i.e., the difference ρ(N ) between two Nash policies. Intuitively, such a measure could have more implications and does not cause inconsistency with our objective (i.e., how π * N itself changes with N ) as one could recover an "absolute" quantity from the "relative" changes. For example, let the reference policy beπ = π * N +2 , ρ(N ) be the difference between π * N and π * N +1 , ρ(N + 1) be the difference between π * N +1 andπ. Then, it could be possible to derive the difference between π * N andπ by combining ρ(N ) and ρ(N + 1). In addition, when considering the existence of multiple equilibria, rigorously defining the similarity measure ρ(N ) could be more involved and outside the scope of this work, as it could be closely related to closed-form solutions which are typically hard to obtain in complex multi-player games, or needs more elaborate discussion. Nevertheless, in this work, as the policies are represented by deep neural networks (DNNs), we use CKA to measure the similarity between two (approximate) Nash policies (the more similar the smaller difference between them), which is well-defined because:
(1) CKA is one of the commonly used measures to characterize the similarity between two DNNs, as given in Appendix C.5 and (Kornblith et al., 2019), and (2) As mentioned in Sec. 5, such a measure is capable of characterizing the intuition: two policies are similar means that their output features (representations) of the corresponding layers (not their parameter vectors) are similar, given the same input data.
B MORE DETAILS ON PAPO
In this section, we provide more details on PAPO and practical implementations.
B.1 NETWORK ARCHITECTURE
The embedding layer is a linear layer with 128 neurons. The policy network consists of three fully-connected (FC) layers, each with 128 neurons. The activation between layers is ReLU.
The hypernetwork starts with two FC layers, each with 128 neurons. The activation between layers is ReLU. After that, three independent heads are used to generate the parameters of the three layers of the policy network. In each head, three linear layers map the output $z$ of the two FC layers to the weights, biases, and scaling factors. Such an architecture is similar to (Sarafian et al., 2021;Littwin & Wolf, 2019). Fig. 6 shows the architecture of the described hypernetwork. The generated parameters are reshaped to form the structure of the policy network. Take the hidden layer ($l = 2$) of the policy network as an example. Suppose that the output from the input layer is $x$ and the weights, biases, and scaling factors generated by the head corresponding to the hidden layer are $w_2(z)$, $b_2(z)$, and $g_2(z)$, respectively. Then, the output to the next layer is $x' = \text{ReLU}(x w_2(z)(1 + g_2(z)) + b_2(z))$. To be more specific, in Fig. 7 we present the code snippet of the forward process of PAPO. The "emb" denotes the embedding of the population size, and the batch size is the number of experience tuples (transitions) obtained every $E$ episodes, as is common practice in PPO (see Algorithm 1 for details). The final logits are used to compute the loss during training, or to infer the action after passing through a softmax activation during execution.
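Since the code snippet of Fig. 7 is not reproduced here, the following is a hedged reconstruction of the forward process as described above; `trunk` and `heads` are hypothetical callables standing in for the hypernetwork of Fig. 6:

```python
import torch

# A hedged reconstruction (ours) of the forward process of PAPO.
def papo_forward(x, emb, trunk, heads):
    z = trunk(emb)                       # shared trunk applied to the embedding
    for l, head in enumerate(heads):     # one head per policy layer
        w, b, g = head(z)                # generated weights, biases, scaling factors
        x = x @ w * (1 + g) + b          # x' = x w_l (1 + g_l) + b_l
        if l < len(heads) - 1:
            x = torch.relu(x)            # ReLU between layers, none after the last
    return x                             # logits; softmax applied at execution time
```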
B.2 LOSS FUNCTION
Let $\pi_\theta$ and $V_\phi$ respectively denote a representative agent's actor and critic ($\pi_{\theta_{\text{old}}}$ and $V_{\phi_{\text{old}}}$ respectively denote the old versions of the actor and critic, which are periodically updated by copying from $\theta$ and $\phi$). Similar to PPO (Schulman et al., 2017), the loss function for PAPO is:
$$L_t(\beta_A, \beta_C) = \mathbb{E}\left[L^1_t(\theta = h_{\beta_A}(N)) - c_1 L^2_t(\phi = h_{\beta_C}(N)) + c_2 H(\pi_{\theta = h_{\beta_A}(N)})\right], \tag{10}$$
where $h_{\beta_A}$ and $h_{\beta_C}$ are respectively the hypernetworks for generating the actor and critic networks. For notational simplicity, here we use $\theta_N$ and $\phi_N$ to represent $\theta = h_{\beta_A}(N)$ and $\phi = h_{\beta_C}(N)$, respectively. Then, the three terms in the above equation are:
$$L^1_t(\theta = h_{\beta_A}(N)) = \mathbb{E}\left[\min\left(\frac{\pi_{\theta_N}(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)} A_t,\ \text{clip}\left(\frac{\pi_{\theta_N}(a_t|s_t)}{\pi_{\theta_{\text{old}}}(a_t|s_t)},\ 1-\epsilon,\ 1+\epsilon\right) A_t\right)\right],$$
$$L^2_t(\phi = h_{\beta_C}(N)) = \left(V_{\phi_N}(s_t) - \sum_{t'=t}^{T} r(s_{t'}, a_{t'}, z^N_{s_{t'}})\right)^2,$$
$$H(\pi_{\theta = h_{\beta_A}(N)}(\cdot|s_t)) = -\mathbb{E}_{a_t \sim \pi_{\theta_N}} \log \pi_{\theta_N}(a_t|s_t),$$
where $A_t$ is the truncated version of the generalized advantage estimation: $A_t = \delta_t + (\gamma\lambda)\delta_{t+1} + \cdots + (\gamma\lambda)^{T-t}\delta_T$, with $\delta_t = r_t + \gamma V_{\phi_N}(s_{t+1}) - V_{\phi_N}(s_t)$,
where $r_t = r(s_t, a_t, z^N_{s_t})$ and $\delta_t$ is the TD error (Sutton & Barto, 2018). The expectation $\mathbb{E}$ is taken over a finite batch of experiences sampled by interacting with the environment.
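For reference, $A_t$ can be computed backwards in a single pass; a minimal sketch (ours), assuming `values` holds $V(s_0), \ldots, V(s_T)$:

```python
# A minimal sketch of truncated generalized advantage estimation:
# A_t = delta_t + (gamma*lam) * A_{t+1}, with the TD errors
# delta_t = r_t + gamma * V(s_{t+1}) - V(s_t).
def gae(rewards, values, gamma=0.99, lam=0.95):
    T = len(rewards)
    adv, running = [0.0] * T, 0.0
    for t in reversed(range(T)):
        next_v = values[t + 1] if t + 1 < len(values) else 0.0
        delta = rewards[t] + gamma * next_v - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv
```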
B.3 TRAINING PROCEDURE
We construct a training procedure where the networks of PAPO are trained by using the data sampled from all the games in the target set $\mathcal{G}$. Specifically, at the beginning of each episode, a game $G(N)$ is uniformly sampled from the target set $\mathcal{G}$. Then, PAPO takes $N$ as input and generates a policy, which is used by the agents to interact with the environment for $T$ steps. Next, the $T$ experience tuples of the representative agent are stored in the episode buffer $D$. Finally, every $E$ episodes, we optimize the surrogate loss function $L_t(\beta_A, \beta_C)$ with respect to $\beta_A$ and $\beta_C$ (as well as the embedding layers $\eta_A$ and $\eta_C$) for $K$ epochs with the collected $ET$ experience tuples. As the agents are homogeneous, without loss of generality, we choose $i = 1$ as the representative agent. A more detailed description of the training procedure is shown in Algorithm 1.
C EXPERIMENTAL DETAILS

C.1 DETAILS ON GAME ENVIRONMENTS
Exploration (Laurière et al., 2022). Consider a grid environment with size $M \times M$. The state of an agent is his coordinate $s_t = (x, y)$ and the available actions are: left, right, up, down, and stay. The agent cannot walk through the walls on the boundaries. Given $s_t = (x, y)$, $a_t$, and $\mu_t$, the reward for the agent is $r(s_t, a_t, \mu_t) = -\log(\mu_t(s_t))$. In experiments, we set $M = 10$.
Taxi Matching (Nguyen et al., 2018). Consider a grid-world environment with size $M \times M$. A set of drivers aim to maximize their own revenue by picking up orders distributed over the map. As the demand in different zones varies (e.g., the demand in downtown is higher than that in residential areas), each driver needs to find an optimal roaming policy while taking the competition with other drivers into consideration. The state of a driver is his coordinate $s_t = (x, y)$ and the actions are: left, right, up, down, and stay. Given $s_t = (x, y)$, $a_t$, and $\mu_t$, the reward for a driver is $r(s_t, a_t, \mu_t) = -o_{s_t}\log(\mu_t(s_t))$, where $o_{s_t}$ denotes the total order price in state $s_t$. In experiments, we set $M = 10$ and consider 100 orders, each with a reward of 1 and distributed over the map according to a Gaussian distribution with mean at the center of the map.

Crowd in Circle (Perrin et al., 2020). Consider a 1D environment with $M = 20$ states, denoted as $S = \{1, 2, \cdots, |S| = 20\}$, which form a circle. The available actions for an agent are: left, right, and stay. At each time step $t$, there is a point of interest (PoI, for short) located at $\bar{s}_t$. Given $s_t$, $a_t$, and $\mu_t$, the reward for the agent is $r(s_t, a_t, \mu_t) = -\log(\mu_t(s_t)) + 5 \times \mathbb{1}\{s_t = \bar{s}_t\}$, i.e., the agent gets a reward of 5 when he is located on the PoI while still favoring social distancing. In experiments, we consider two PoIs during an episode: for $t \le \frac{T}{2}$, $\bar{s}_t = 5$, while for $t > \frac{T}{2}$, $\bar{s}_t = 15$.
C.2 ALGORITHM FRAMEWORKS AND REMARKS ON BASELINES
In this section, we first introduce the baselines and then present the details of the algorithm frameworks of PAPO and different baselines.
To facilitate the understanding of the experimental design, we introduce the baselines and give some remarks on them. As the network architecture of PAPO is large (it has more learnable parameters), in addition to the standard baselines (PPO and AugPPO), we consider three more baselines: PPO-Large, AugPPO-Large, and HyperPPO (note that HyperPPO has a larger number of learnable parameters than PPO and AugPPO and hence we regard it as a stronger baseline), which have numbers of learnable parameters similar to PAPO, obtained by increasing the number of neurons of the hidden layers of the policy network (PPO-Large and AugPPO-Large) or hypernetwork (HyperPPO). This is critical to ensure a fair comparison and demonstrate the effectiveness of our approach. In Table 1, we give the numbers of learnable parameters of different methods in different environments. As the dimensions of the input and output of different environments are different, the numbers are different (notice that for the same environment, the numbers for different methods should be kept similar).
In Table 2, we give the numbers of neurons used in different methods to obtain a similar number of learnable parameters. Note that for HyperPPO and PAPO, we change the two FC layers of the hypernetwork as the policy network structure is fixed.
Although the methods using RE (e.g., AugPPO w/ RE, AugPPO-Large w/ RE, HyperPPO w/ RE) could be considered as baselines, we note that they could be weaker than those using BE (AugPPO, AugPPO-Large, HyperPPO) due to the possible training collapse when using RE, as shown in Fig. 4 and Appendix D.5. Thus, when presenting the main experimental results in Fig. 3, we focus on the baselines that use BE, which also ensures a fair comparison between them and PAPO (as well as among these baselines). We study the effect of RE on performance in the ablation study (Fig. 4 and Appendix D.5).
In Fig. 8, we present the algorithm frameworks of PAPO and different baselines. "emb" denotes the embedding of the population size (see Sec. 4.1 and Appendix B.1).
• In PPO/PPO-Large, the actor (critic) takes the agent's state and current time as inputs and outputs the policy (value). PPO-Large operates in the same manner as PPO except that the number of learnable parameters is similar to PAPO.
• In AugPPO/AugPPO-Large, the actor (critic) takes the agent's state, current time, and the embedding of population size as inputs and outputs the policy (value). AugPPO-Large operates in the same manner as AugPPO except that the number of learnable parameters is similar to PAPO.

• In HyperPPO, we first use the hypernetwork to generate the actor (critic) by taking the embedding of population size as input; the generated actor (critic) then takes the agent's state and current time as inputs and outputs the policy (value).

• In PAPO, we first use the hypernetwork to generate the actor (critic) by taking the embedding of population size as input; the generated actor (critic) then takes the agent's state, current time, and the embedding of population size as inputs and outputs the policy (value).

C.3 HYPERPARAMETERS

Table 3 lists the parameters used in our experiments. Unless otherwise specified, the parameters listed in Table 3 are the same across game environments. The parameters related to the network architectures can be found in Sec. B.1. The parameters related to the environments can be found in the previous section. Without loss of generality, we evaluate the performance of different methods on a subset: $\mathcal{G} = \{10, 20, \cdots, 200\}$. As for generalization to unseen games, we evaluate the methods on a set of unseen games: $\tilde{\mathcal{G}} = \{220, 240, \cdots, 400\}$. The approximate BR policy is obtained by using (single-agent) PPO. Moreover, all experiments are run on a machine with 20 Intel i9-9820X CPUs and 4 NVIDIA RTX2080 Ti GPUs, and results are averaged over 3 random seeds.

C.4 APPROXIMATE NASHCONV

In this section, we provide more details on how to compute the approximate NashConv.
$$\text{NASHCONV}(\pi_{\theta_N}) = \sum_{i=1}^{N}\left[\max_{\hat\pi^i} V^i(s^i, \hat\pi^i, \{\pi^j = \pi_{\theta_N}\}^N_{j=1, j\neq i}) - V^i(s^i, \{\pi^i = \pi_{\theta_N}\}^N_{i=1})\right]. \tag{11}$$
As the agents are homogeneous, without loss of generality, we compute the NashConv of a representative agent $i$. Specifically, we first train an approximate BR policy for agent $i$, denoted as $\pi^{i,\text{BR}}_\theta$ (deriving the exact BR policy for agent $i$ is typically difficult in multi-player games). Then, we compute the approximate NashConv of agent $i$ as:
$$\text{NASHCONV}^i(\pi_{\theta_N}) = V^i(s^i, \pi^{i,\text{BR}}_\theta, \{\pi^j = \pi_{\theta_N}\}^N_{j=1, j\neq i}) - V^i(s^i, \{\pi^i = \pi_{\theta_N}\}^N_{i=1}). \tag{12}$$
Roughly speaking, the approximate NashConv of agent $i$ is the difference between his value when following the BR policy $\pi^{i,\text{BR}}_\theta$ and his value when following the current policy $\pi_{\theta_N}$, given that the other agents are fixed to the current policy $\{\pi^j = \pi_{\theta_N}\}^N_{j=1, j\neq i}$.
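Schematically, the computation looks as follows (a sketch with hypothetical helper functions, not the paper's code):

```python
# A schematic sketch of the approximate NashConv: train an approximate best
# response for the representative agent against the fixed policy, then compare
# the two value estimates. `train_br` and `evaluate_value` are hypothetical.
def approx_nashconv(env, policy, train_br, evaluate_value, br_episodes=1_000_000):
    br = train_br(env, opponents=policy, episodes=br_episodes)   # BR via PPO
    v_br = evaluate_value(env, agent=br, opponents=policy)       # value under BR
    v_cur = evaluate_value(env, agent=policy, opponents=policy)  # value under policy
    return v_br - v_cur  # approximately >= 0; 0 at a Nash policy
```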
C.5 SIMILARITY MEASURE: CENTERED KERNEL ALIGNMENT
In this work, we use centered kernel alignment (CKA) (Kornblith et al., 2019) to measure the similarity between two policies generated by PAPO. Specifically, we aim to compute the similarity measure $\rho(N) = \rho(\pi_{\theta=h_\beta(N)}, \pi_{\theta=h_\beta(N+1)})$. Under CKA, the process is as follows. We randomly sample a batch of $m$ states and feed them into the two generated policies $\pi_{\theta=h_\beta(N)}$ and $\pi_{\theta=h_\beta(N+1)}$. Then, we obtain the output of each layer:
$$X^N_{\text{input}} \in \mathbb{R}^{m\times q_{\text{input}}},\quad X^N_{\text{hidden}} \in \mathbb{R}^{m\times q_{\text{hidden}}},\quad X^N_{\text{output}} \in \mathbb{R}^{m\times q_{\text{output}}} \tag{13}$$
for $\pi_{\theta=h_\beta(N)}$, and $X^{N+1}_{\text{input}}$, $X^{N+1}_{\text{hidden}}$, and $X^{N+1}_{\text{output}}$ for $\pi_{\theta=h_\beta(N+1)}$, where $q_{\text{input}}$, $q_{\text{hidden}}$, and $q_{\text{output}}$ are respectively the numbers of neurons of the input, hidden, and output layers of the policies. Now, we can measure the similarity of each layer between the two generated policies, i.e., we have $\rho_{\text{input}}(N) = \text{CKA}(X^N_{\text{input}}, X^{N+1}_{\text{input}})$, $\rho_{\text{hidden}}(N) = \text{CKA}(X^N_{\text{hidden}}, X^{N+1}_{\text{hidden}})$, and $\rho_{\text{output}}(N) = \text{CKA}(X^N_{\text{output}}, X^{N+1}_{\text{output}})$. Details on the computation of CKA can be found in (Kornblith et al., 2019).
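For the linear variant of CKA, the computation reduces to a few lines; a minimal sketch (ours, following the formula in Kornblith et al., 2019):

```python
import numpy as np

# Linear CKA between the layer outputs X (m x q1) and Y (m x q2) of two
# generated policies, evaluated on the same batch of m sampled states.
def linear_cka(X, Y):
    X = X - X.mean(axis=0, keepdims=True)   # center features column-wise
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den

# e.g., rho_output(N) = linear_cka(outputs_of_policy_N, outputs_of_policy_N_plus_1)
```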
D MORE EXPERIMENTAL RESULTS
In this section, we provide more results and analysis to deepen the understanding of our approach.
D.1 MORE EXPLANATIONS ON THE EXPERIMENTAL RESULTS
From Fig. 3, Panel A, we can see that AugPPO and HyperPPO, as the two most natural methods, are competitive in terms of performance, but both are weaker than our PAPO. We hypothesize that the augmentation or hypernetwork alone would be individually insufficient. (1) In HyperPPO, the population size information needs to first pass through the hypernetwork (which is typically much larger) before passing to the underlying policy network. This could be inefficient when the gradient of the embedding of the population size backpropagates through the deeper hypernetwork, which is similar to the observation in (Sarafian et al., 2021) where the context gradient did not backpropagate through the hypernetwork. (2) In AugPPO, the population size information is directly augmented to the input of the policy network. However, the policy network is less expressive than the hypernetwork as it is typically much smaller than the hypernetwork. Therefore, by inheriting the merits of the two special cases, PAPO could achieve a better performance. However, as mentioned in Sec. 4.2, instead of thoroughly investigating the two special cases and answering the question of which one is better (which could be more involved and outside the scope of our work), we propose a unified framework which encompasses the two options as two special cases.
Given the same training budget as PAPO, PPO-Large and AugPPO-Large cannot always generate approximate Nash policies for games with different population sizes. In this sense, it could be the case that AugPPO-Large performs worse than PPO-Large, as it is more sensitive to the population size. However, we note that we cannot conclude that AugPPO-Large always performs worse than PPO-Large (in the Crowd in Circle environment, AugPPO-Large performs better than PPO-Large in small-scale settings). Further investigating the difference between PPO-Large and AugPPO-Large is outside the scope of this work.
D.2 TRAINING CURVES
In Fig. 9, we present the training curves of different methods in different environments. From the results, we can see that all methods have converged after a sufficient number of training episodes ($2\times 10^7$). Notice that theoretically computing the exact Nash policy is typically difficult in multi-player games, if not impossible. In this sense, we would expect the methods to return an approximate Nash policy, given that they are trained with a large enough training budget.
During evaluation in a given game, we train a new PPO agent as the representative agent's BR policy while the other agents are fixed to execute the approximate Nash policy. From the representative agent's perspective, the environment is stationary, and hence the problem of learning the BR policy reduces to an RL problem. In Fig. 10, we present the BR training curves of different methods in a given game. As we can see, the BR training curves have approximately converged, ensuring that the computed approximate NashConv is a reasonable metric to assess the quality of learning.
D.3 CKA ANALYSIS
In this section, we provide a more detailed analysis of the CKA values. In Fig. 3 (B), we consider the function $\rho(N)$ of the form $\rho(N) = 1 - \frac{a}{N^b}$. In fact, as mentioned in Sec. 5.1, there could be different forms of $\rho(N)$. However, thoroughly investigating all forms of $\rho(N)$ is outside the scope of this work. Instead, we choose an inverse polynomial form because it coincides with the intuition that when $N$ goes to infinity, the similarity between two policies goes to 1 (i.e., it is upper-bounded by 1).
Intuitively, the most general case is $\rho(N) = 1 - \frac{a}{cN^b + d}$. However, by computing the p-values, we found that some parameters ($c$ and $d$) are less significant and hence can be removed from $\rho(N)$. In Fig. 11, we show the curves, and in Table 4, we present the values of the parameters and their p-values. We can see that, in most cases, only the parameter $b$ has a significant influence on the curve fitting, i.e., has a small p-value. As a result, in Fig. 3 (B), we consider $\rho(N) = 1 - \frac{a}{N^b}$, and the results show that the two parameters ($a$ and $b$) are statistically significant, which means that this simpler formula is sufficient to quantitatively characterize the evolution of the CKA values over the population size. A sketch of this fitting procedure is given below.
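The fitting and significance test can be reproduced with standard tools. The following is a minimal sketch using SciPy; `Ns` and `cka` are stand-ins for the measured population sizes and CKA values (synthetic data is used here), and deriving p-values from a t-test on the estimated parameters is an assumption about the exact procedure.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def rho(N, a, b):
    # The simplified form rho(N) = 1 - a / N^b used in Fig. 3 (B).
    return 1.0 - a / N ** b

# Synthetic stand-ins for the measured data.
Ns = np.arange(10, 201, 10, dtype=float)
cka = rho(Ns, 0.5, 0.8) + 0.01 * np.random.randn(Ns.size)

popt, pcov = curve_fit(rho, Ns, cka, p0=[1.0, 1.0])
stderr = np.sqrt(np.diag(pcov))
# Two-sided p-values for the null hypothesis that each parameter is zero.
dof = Ns.size - popt.size
pvals = 2 * stats.t.sf(np.abs(popt / stderr), dof)
```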
Though the results in Fig. 3 (B) and Fig. 11 provide some intuitions about the evolution of the agents' optimal policies with the population size, they do not rigorously prove the convergence of finite-agent games to the MFG. In fact, rigorously proving the convergence could be more involved and may only be achievable for some special cases such as in (Guo & Xu, 2019). Nevertheless, our results provide an empirical verification, which has not been done before when the policies are represented by DNNs.
D.4 REPRESENTATIONS OF GAME STATES
In this section, we provide more results on the representations of different game states. To this end, we first manually construct some natural and understandable game states according to the position of an agent on the map/circle. For example, the game state "Bottom-left" means the agent is located on the bottom-left of the map at any time. In Fig. 12-Fig. 14, we show the UMAP embeddings of different game states. We found that the UMAP embeddings of some game states (e.g., bottom-left and upper-right in Exploration, bottom-left and bottom-right in Taxi Matching, center in Crowd in Circle) present a similar trend as mentioned in the main text, while the UMAP embeddings of other game states do not have clear patterns. We hypothesize that some manually constructed high-level (coarse-grained) game states may consist of very different underlying (fine-grained) states of the agents. On the other hand, to some extent, the results imply that our PAPO may not be a smooth enough function of the population size. In this sense, proposing novel techniques to drift PAPO toward a smoother function of the population size is an interesting direction for future work and may further improve the performance of PAPO, making it a more elegant solution.
D.5 EFFECT OF POPULATION-SIZE ENCODING
In this section, we provide more analysis to explore the effect of population-size encoding on the training process. In Fig. 4, we can see that the training of PAPO collapses when using RE. Though the approximate NashConv is less meaningful in this case, for completeness, we present the resulting approximate NashConv in Fig. 15-Fig. 16. The results in Fig. 15 correspond to Fig. 1, which considers small-scale settings in the Taxi Matching environment. The results in Fig. 16 correspond to Fig. 3, which considers larger-scale settings in different environments.

Figure 16: Approximate NashConv versus population size in large-scale settings in different environments.
It is well known that increasing the entropy of a policy at the beginning of training can incentivize it to explore the environment more widely and in turn improve the overall performance of the policy (Haarnoja et al., 2018; Ahmed et al., 2019). In this sense, given the initialized PAPO, we expect the generated policies to have high entropy, regardless of the input population size. More precisely, given any $N$, the policy $\pi_{\theta=h_\beta(N)}$ generated by the initialized PAPO would output an approximately uniform distribution over the action space $A$ for a given state $s_t$. Let $\bar{\pi}$ be the uniform policy, i.e., $\bar{\pi}(a|s_t) = \frac{1}{|A|}$ for all $a \in A$. We can compute the KL-divergence between $\pi_{\theta=h_\beta(N)}$ and $\bar{\pi}$, denoted as $\kappa(N) = D_{\text{KL}}(\pi_{\theta=h_\beta(N)} \,\|\, \bar{\pi})$. Then, intuitively, at the beginning of training, $\kappa(N)$ is near 0 for different $N$'s. After the neural networks of PAPO are well trained, $\kappa(N)$ will be larger than 0 and vary with $N$.
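For a categorical policy, the divergence to the uniform policy reduces to $D_{\text{KL}}(\pi\,\|\,\bar{\pi}) = \log|A| - H(\pi)$, so $\kappa(N)$ can be computed directly from action logits. The following is a minimal PyTorch sketch; the `logits` argument is a placeholder for the generated policy's outputs on a batch of sampled states.

```python
import math
import torch
import torch.nn.functional as F

def kl_to_uniform(logits: torch.Tensor) -> torch.Tensor:
    """kappa = D_KL(pi || uniform), averaged over a batch of states.

    Uses the identity D_KL(pi || bar_pi) = log|A| - H(pi) for a
    categorical policy with |A| actions.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
    return (math.log(logits.shape[-1]) - entropy).mean()
```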
In Fig. 17, we present the results verifying the aforementioned conclusions. From the results, we can see that, when using RE, the approximately uniform action distribution only holds for small $N$'s, as shown in the first row. However, $\kappa(N)$ quickly increases as $N$ increases, which could severely hamper the training of PAPO (see the comparison between "PAPO w/ RE (initial)" and "PAPO w/ RE (trained)"). In striking contrast, using BE effectively avoids such a training collapse.
D.6 REWARD DISTRIBUTION SHIFT
One of the reasons we hypothesize that PPO performs poorly across different games is the reward distribution shift in the environments. To verify this intuition, for each environment, we use a randomly initialized policy to sample 1,000 trajectories (episodes) and estimate the reward distribution from these samples, as sketched below. As shown in Fig. 18-Fig. 20, the reward distribution varies sharply with increasing population size. Intuitively, PPO works poorly across the games as it does not consider the increasing diversity of reward signals resulting from the changes in population size. In contrast, PAPO (and other population-size-aware methods) possesses the ability to work consistently well across different games by explicitly taking the information of the population size into account during training.
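A minimal sketch of this sampling procedure, assuming a Gym-style interface; `env` and `policy` are hypothetical placeholders for a game environment and a policy mapping observations to actions.

```python
import numpy as np

def sample_reward_distribution(env, policy, n_episodes=1000, bins=50):
    """Estimate the reward distribution of `env` under a (randomly
    initialized) `policy` by Monte-Carlo sampling."""
    rewards = []
    for _ in range(n_episodes):
        obs, done = env.reset(), False
        while not done:
            obs, r, done, _ = env.step(policy(obs))
            rewards.append(r)
    return np.histogram(rewards, bins=bins)
```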
Figure 1: Experiments on the Taxi Matching environment show the failure of two naive methods and the success of our PAPO. ↓ means the lower, the better. See Sec. 5.1 for details.

Figure 2: The neural network architecture of PAPO.

Figure 3: Experimental results. (A) Approximate NashConv versus population size. (B) CKA value versus population size. (C) UMAP embeddings of some human-understandable game states: agent on the upper-right/bottom-left of the map and agent on the center of the circle.
Matching environment, different vehicles have different capacities (Alonso-Mora et al., 2017). Investigating the scaling laws of each type requires new methods and training regimes. (2) Beyond the population size (Gatchel, 2021), in future works, other game elements such as state and action spaces or even the rules of the games can differ between games. PAPO demonstrates the possibility of learning a universal policy for completely different games. (3) Beyond the target set of games, generalization to unseen games requires more advanced techniques such as adversarial training (Ganin et al., 2016). (4) Our approach may have implications for Meta-RL (Fakoor et al., 2019; Finn et al., 2017; Rakelly et al., 2019; Sohn et al., 2019) as well.
Indeed, a global reward can ensure that $G(N)$ is an MPG, but we do not impose such a condition in the definition of the game model in Sec. 3. In different benchmark environments, an agent can receive a global reward, a local reward (Laurière et al., 2022), or both (Nguyen et al., 2018).

Proposition A.1.2 (Convergence of Independent Policy Gradient, Theorem 1.2 in (Leonardos et al., 2022)). Consider the MG $G(N)$ satisfying the condition in Proposition A.1.1. If each agent runs SGA using direct policy parameterization (Eq. 6) on their policies and the updates are simultaneous, then the learning converges to an approximate Nash policy.
Learning in Markov Games (MGs). Among finite-agent games, the Markov game (Littman, 1994; Shapley, 1953) is a widely used framework for characterizing games with sequential decision-making, e.g., in multi-agent reinforcement learning. There is a long line of work on developing efficient algorithms for finding Nash equilibria of MGs under various assumptions such as full environmental knowledge (Hansen et al., 2013; Hu & Wellman, 2003; Littman et al., 2001; Wei et al., 2020), access to simulators (Jia et al., 2019; Sidford et al., 2020; Wei et al., 2017; 2021), and special structures like zero-sum games (Bai & Jin, 2020; Bai et al.
, Ad auction (Gummadi et al., 2012), fleet management (Lin et al., 2018), and sharing economy (Hamari et al., 2016).
Figure 6: The architecture of hypernetwork.

Figure 7: The code snippet in the forward process of PAPO.

Figure 8: The algorithm frameworks of PAPO and different baselines.

Figure 9: Training curves of different methods in different environments.

Figure 10: BR training curves of different methods in different environments (N = 10).

Figure 11: CKA values versus population size. ρ(N) = 1 − a/(cN^b + d).

Figure 12: UMAP embeddings of different game states in Exploration.

Figure 13: UMAP embeddings of different game states in Taxi Matching.

Figure 14: UMAP embeddings of different game states in Crowd in Circle.

Figure 18: Sampled reward distributions and trajectories in Exploration.

Figure 19: Sampled reward distributions and trajectories in Taxi Matching.

Figure 20: Sampled reward distributions and trajectories in Crowd in Circle.
...adversarial training (Ganin et al., 2016) are needed.

Figure 5: Performance of PAPO and baselines on unseen games.
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In ICLR, 2016.
Kaixiang Lin, Renyu Zhao, Zhe Xu, and Jiayu Zhou. Efficient large-scale fleet management via multi-agent deep reinforcement learning. In KDD, pp. 1774-1783, 2018.
Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In ICML, pp. 157-163, 1994.
Michael L. Littman et al. Friend-or-foe Q-learning in general-sum games. In ICML, pp. 322-328, 2001.
Gidi Littwin and Lior Wolf. Deep meta functionals for shape representation. In ICCV, pp. 1824-1833, 2019.
Qinghua Liu, Tiancheng Yu, Yu Bai, and Chi Jin. A sharp analysis of model-based reinforcement learning with self-play. In ICML, pp. 7001-7010, 2021.
Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan, and Dmitry P. Vetrov. On power laws in deep ensembles. In NeurIPS, pp. 2375-2385, 2020.
Leland McInnes, John Healy, and James Melville. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.
David Mguni, Joel Jennings, and Enrique Munoz de Cote. Decentralised learning in systems with many, many strategic agents. In AAAI, pp. 4686-4693, 2018.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337):508-513, 2017.
Duc Thien Nguyen, Akshat Kumar, and Hoong Chuin Lau. Credit assignment for collective multi-agent RL with global rewards. In NeurIPS, pp. 8102-8113, 2018.
Asuman Ozdaglar, Muhammed O. Sayin, and Kaiqing Zhang. Independent learning in stochastic games. arXiv preprint arXiv:2111.11743, 2021.
Julien Perolat, Sarah Perrin, Romuald Elie, Mathieu Laurière, Georgios Piliouras, Matthieu Geist, Karl Tuyls, and Olivier Pietquin. Scaling up mean field games with online mirror descent. arXiv preprint arXiv:2103.00623, 2021.
Sarah Perrin, Julien Pérolat, Mathieu Laurière, Matthieu Geist, Romuald Elie, and Olivier Pietquin. Fictitious play for mean field games: Continuous time analysis and applications. arXiv preprint arXiv:2007.03458, 2020.
Sarah Perrin, Mathieu Laurière, Julien Pérolat, Romuald Élie, Matthieu Geist, and Olivier Pietquin. Generalization in mean field games by learning master policies. In AAAI, pp. 7143-7150, 2022.
Alexey Potapov, Oleg Shcherbakov, Innokentii Zhdanov, Sergey Rodionov, and Nikolai Skorobogatko. Hypernets and their application to learning spatial transformations. In ICANN, pp. 476-486, 2018.
Kate Rakelly, Aurick Zhou, Chelsea Finn, Sergey Levine, and Deirdre Quillen. Efficient off-policy meta-reinforcement learning via probabilistic context variables. In ICML, pp. 5331-5340, 2019.
Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning. In ICML, pp. 4295-4304, 2018.
Table 1: The numbers of learnable parameters of different methods in different environments.

Env./Method     | PPO | AugPPO | PPO-Large | AugPPO-Large | HyperPPO | PAPO
Exploration     | 45K | 81K    | 10,159K   | 10,457K      | 10,166K  | 10,176K
Taxi Matching   | 44K | 80K    | 5,936K    | 5,918K       | 5,883K   | 5,893K
Crowd in Circle | 44K | 80K    | 10,146K   | 10,444K      | 9,995K   | 10,076K
Table 2: The numbers of neurons used in different methods.

Env./Method     | PPO-Large          | AugPPO-Large       | HyperPPO   | PAPO
Exploration     | (2230, 2230, 2230) | (2200, 2200, 2200) | (128, 220) | (128, 128)
Taxi Matching   | (1700, 1700, 1700) | (1635, 1635, 1635) | (128, 128) | (128, 74)
Crowd in Circle | (2230, 2230, 2230) | (2200, 2200, 2200) | (128, 220) | (128, 128)
C.3 HYPERPARAMETERS
Table 3: Hyperparameters.

Hyperparameter                             | Value
optimizer                                  | Adam
length of an episode T                     | 20
minimum number of agents N                 | 2
maximum number of agents N̄                 | 200
maximum number of policy training episodes | 2 · 10^7
maximum number of BR training episodes     | 1 · 10^6
actor learning rate                        | 3 · 10^-5
critic learning rate                       | 3 · 10^-4
update every E episodes                    | 5
optimize K epochs at each update           | 5
critic loss coefficient c_1                | 0.5
entropy loss coefficient c_2               | 0.01
batch size m for computing CKA             | 1000
dimension of binary encoding k             | 12
Table 4: Values of parameters in ρ(N) = 1 − a/(cN^b + d) in different game environments (p-values in parentheses).

Exploration
Layer    | a(p)            | b(p)            | c(p)           | d(p)
ρ_input  | 3.81e-6 (0.97)  | 3.92e-5 (0.97)  | 4.68e+0 (0.00) | -4.68e+0 (0.00)
ρ_hidden | 4.09e-1 (0.99)  | 6.74e-1 (0.00)  | 2.98e+0 (0.99) | -3.63e+0 (0.99)
ρ_output | 4.22e+0 (0.99)  | 8.93e-1 (0.00)  | 1.02e+1 (0.99) | -1.09e+1 (0.99)

Taxi Matching
Layer    | a(p)            | b(p)            | c(p)           | d(p)
ρ_input  | 8.96e-3 (0.99)  | 1.11e+1 (0.09)  | 5.71e+0 (0.99) | -6.09e+0 (0.99)
ρ_hidden | 2.90e-2 (0.99)  | 1.28e-1 (0.16)  | 2.32e+0 (0.99) | -2.44e+0 (0.99)
ρ_output | 8.43e-1 (0.99)  | 4.52e-1 (0.02)  | 1.84e+0 (0.99) | +6.26e-1 (0.99)

Crowd in Circle
Layer    | a(p)            | b(p)            | c(p)           | d(p)
ρ_input  | -2.95e-4 (0.91) | -7.77e-1 (0.16) | 1.75e+0 (0.91) | -1.62e+0 (0.91)
ρ_hidden | +1.49e-4 (0.99) | +2.65e-4 (0.99) | 1.46e+1 (0.99) | -1.45e+1 (0.99)
ρ_output | -5.13e-2 (0.99) | -3.43e-1 (0.47) | 1.05e+1 (0.99) | -1.32e+1 (0.99)
Figure 15: Approximate NashConv versus population size in small-scale settings in the Taxi Matching environment.
In this work, we focus on the finite-agent Markov games sharing a similar structure with the MFG; see Sec. 3 and Appendix A.2 for more details and discussion.
2 We use the term "scaling laws" to refer to the evolution of agents' optimal policies with the population size, which is different from that of (Kaplan et al., 2020). See Appendix A.4 for a more detailed discussion.
In experiments, we follow (Laurière et al., 2022) to make the policy dependent on time by concatenating the state with time.
We call a policy an "efficient policy" if it has a small NashConv in the corresponding game.
$$\cdots L_t(\theta = h_{\beta_A}(N)) - c_1 L^2_t(\phi = h_{\beta_C}(N)) + c_2 H(\pi_{\theta=h_{\beta_A}(N)}), \tag{10}$$
ACKNOWLEDGMENTS

This research is supported by the National Research Foundation, Singapore under its Industry Alignment Fund - Pre-positioning (IAF-PP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.

Algorithm 1: Training Procedure
Input: Hyperparameters
Output: Trained PAPO
1: D ← ∅
2: for episode = 1, 2, · · · do
3:   Sample a game G(N)
4:   Generate a policy $\pi_{\theta=h_{\beta_A}(N)}$

Figure 17: The KL divergence κ versus population size.
Thomas Dueholm Hansen, Peter Bro Miltersen, and Uri Zwick. Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor. Journal of the ACM, 60(1):1-16, 2013.
Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, et al. Array programming with NumPy. Nature, 585(7825):357-362, 2020.
Junling Hu and Michael P. Wellman. Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research, 4(Nov):1039-1069, 2003.
Baihe Huang, Jason D. Lee, Zhaoran Wang, and Zhuoran Yang. Towards general function approximation in zero-sum Markov games. arXiv preprint arXiv:2107.14702, 2021a.
Minyi Huang, Roland P. Malhamé, and Peter E. Caines. Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Communications in Information & Systems, 6(3):221-252, 2006.
Yizhou Huang, Kevin Xie, Homanga Bharadhwaj, and Florian Shkurti. Continual model-based reinforcement learning with hypernetworks. In ICRA, pp. 799-805, 2021b.
Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castañeda, Charles Beattie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, and Thore Graepel. Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science, 364(6443):859-865, 2019.
Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V. Gool. Dynamic filter networks. In NeurIPS, pp. 667-675, 2016.
Zeyu Jia, Lin F. Yang, and Mengdi Wang. Feature-based Q-learning for two-player stochastic games. arXiv preprint arXiv:1906.00423, 2019.
Chi Jin, Qinghua Liu, and Tiancheng Yu. The power of exploiter: Provable multi-agent RL in large state spaces. arXiv preprint arXiv:2106.03352, 2021.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
Christopher T. Kello, Gordon D. A. Brown, Ramon Ferrer-i-Cancho, John G. Holden, Klaus Linkenkaer-Hansen, Theo Rhodes, and Guy C. Van Orden. Scaling laws in cognitive sciences. Trends in Cognitive Sciences, 14(5):223-232, 2010.
Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In ICML, pp. 3519-3529, 2019.
Jean-Michel Lasry and Pierre-Louis Lions. Mean field games. Japanese Journal of Mathematics, 2(1):229-260, 2007.
Mathieu Lauriere. Numerical methods for mean field games and mean field type control. Mean Field Games, 78:221, 2021.
Mathieu Laurière, Sarah Perrin, Sertan Girgin, Paul Muller, Ayush Jain, Theophile Cabannes, Georgios Piliouras, Julien Pérolat, Romuald Élie, Olivier Pietquin, et al. Scalable deep reinforcement learning algorithms for mean field games. In ICML, pp. 12078-12095, 2022.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
Stefanos Leonardos, Will Overman, Ioannis Panageas, and Georgios Piliouras. Global convergence of multi-agent policy gradient in Markov potential games. In ICLR, 2022.
Lars Ruthotto, Stanley J. Osher, Wuchen Li, Levon Nurbekyan, and Samy Wu Fung. A machine learning framework for solving high-dimensional mean field game and mean field control problems. PNAS, 117(17):9183-9193, 2020.
Diego Rybski, Sergey V. Buldyrev, Shlomo Havlin, Fredrik Liljeros, and Hernán A. Makse. Scaling laws of human interaction activity. PNAS, 106(31):12640-12645, 2009.
Naci Saldi, Tamer Basar, and Maxim Raginsky. Markov-Nash equilibria in mean-field games with discounted cost. SIAM Journal on Control and Optimization, 56(6):4256-4287, 2018.
Elad Sarafian, Shai Keynan, and Sarit Kraus. Recomposing the reinforcement learning building blocks with hypernetworks. In ICML, pp. 9301-9312, 2021.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Lloyd S. Shapley. Stochastic games. PNAS, 39(10):1095-1100, 1953.
Aaron Sidford, Mengdi Wang, Lin Yang, and Yinyu Ye. Solving discounted stochastic two-player games with near-optimal time and sample complexity. In AISTATS, pp. 2992-3002, 2020.
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419):1140-1144, 2018.
Sungryull Sohn, Hyunjae Woo, Jongwook Choi, and Honglak Lee. Meta reinforcement learning with autonomous inference of subtask dependencies. In ICLR, 2019.
Jayakumar Subramanian and Aditya Mahajan. Reinforcement learning in stationary mean-field games. In AAMAS, pp. 251-259, 2019.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.
Lingxiao Wang, Zhuoran Yang, and Zhaoran Wang. Breaking the curse of many agents: Provable mean embedding Q-iteration for mean-field reinforcement learning. In ICML, pp. 10092-10103, 2020.
Chen-Yu Wei, Yi-Te Hong, and Chi-Jen Lu. Online reinforcement learning in stochastic games. In NeurIPS, pp. 4987-4997, 2017.
Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Linear last-iterate convergence in constrained saddle-point optimization. arXiv preprint arXiv:2006.09517, 2020.
Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Last-iterate convergence of decentralized optimistic gradient descent/ascent in infinite-horizon competitive Markov games. In COLT, pp. 4259-4299, 2021.
Qiaomin Xie, Yudong Chen, Zhaoran Wang, and Zhuoran Yang. Learning zero-sum simultaneous-move Markov games using function approximation and correlated equilibrium. In COLT, pp. 3674-3682, 2020.
Fan Yang, Alina Vereshchaka, Changyou Chen, and Wen Dong. Bayesian multi-type mean field multi-agent imitation learning. In NeurIPS, pp. 2469-2478, 2020a.
Jiachen Yang, Xiaojing Ye, Rakshit Trivedi, Huan Xu, and Hongyuan Zha. Learning deep mean field games for modeling large population behavior. In ICLR, 2018.
Shanwen Yang, Tianrui Li, Xun Gong, Bo Peng, and Jie Hu. A review on crowd simulation and modeling. Graphical Models, 111:101081, 2020b.
Kaiqing Zhang, Sham Kakade, Tamer Basar, and Lin Yang. Model-based multi-agent RL in zero-sum Markov games with near-optimal sample complexity. In NeurIPS, pp. 1166-1178, 2020.
Yu Zhang and Qiang Yang. A survey on multi-task learning. IEEE Transactions on Knowledge and Data Engineering, 2021.
257,219,926 | A UNIFIED FRAMEWORK FOR SOFT THRESHOLD PRUNING | Soft threshold pruning is among the cutting-edge pruning methods with state-of-the-art performance. However, previous methods either perform aimless searching on the threshold scheduler or simply set the threshold trainable, lacking theoretical explanation from a unified perspective. In this work, we reformulate soft threshold pruning as an implicit optimization problem solved using the Iterative Shrinkage-Thresholding Algorithm (ISTA), a classic method from the fields of sparse recovery and compressed sensing. Under this theoretical framework, all threshold tuning strategies proposed in previous studies of soft threshold pruning are concluded as different styles of tuning the L1-regularization term. We further derive an optimal threshold scheduler through an in-depth study of threshold scheduling based on our framework. This scheduler keeps the L1-regularization coefficient stable, implying a time-invariant objective function from the perspective of optimization. In principle, the derived pruning algorithm could sparsify any mathematical model trained via SGD. We conduct extensive experiments and verify its state-of-the-art performance on both Artificial Neural Networks (ResNet-50 and MobileNet-V1) and Spiking Neural Networks (SEW ResNet-18) on ImageNet datasets. On the basis of this framework, we derive a family of pruning methods, including sparsify-during-training, early pruning, and pruning at initialization. The code is available at https://github.com/Yanqi-Chen/LATS. | [
246867209,
211146532,
202888885
] | A UNIFIED FRAMEWORK FOR SOFT THRESHOLD PRUNING
Yanqi Chen
National Engineering Research Center of Visual Technology
School of Computer Science
Peking University
Peng Cheng Laboratory
Zhengyu Ma
Peng Cheng Laboratory
Wei Fang
National Engineering Research Center of Visual Technology
School of Computer Science
Peking University
Peng Cheng Laboratory
Xiawu Zheng
Peng Cheng Laboratory
Zhaofei Yu
National Engineering Research Center of Visual Technology
School of Computer Science
Peking University
Institute for Artificial Intelligence
Peking University
Peng Cheng Laboratory
Yonghong Tian [email protected]
National Engineering Research Center of Visual Technology
School of Computer Science
Peking University
Peng Cheng Laboratory
A UNIFIED FRAMEWORK FOR SOFT THRESHOLD PRUNING
Published as a conference paper at ICLR 2023
Soft threshold pruning is among the cutting-edge pruning methods with state-of-the-art performance. However, previous methods either perform aimless searching on the threshold scheduler or simply set the threshold trainable, lacking theoretical explanation from a unified perspective. In this work, we reformulate soft threshold pruning as an implicit optimization problem solved using the Iterative Shrinkage-Thresholding Algorithm (ISTA), a classic method from the fields of sparse recovery and compressed sensing. Under this theoretical framework, all threshold tuning strategies proposed in previous studies of soft threshold pruning are concluded as different styles of tuning the L1-regularization term. We further derive an optimal threshold scheduler through an in-depth study of threshold scheduling based on our framework. This scheduler keeps the L1-regularization coefficient stable, implying a time-invariant objective function from the perspective of optimization. In principle, the derived pruning algorithm could sparsify any mathematical model trained via SGD. We conduct extensive experiments and verify its state-of-the-art performance on both Artificial Neural Networks (ResNet-50 and MobileNet-V1) and Spiking Neural Networks (SEW ResNet-18) on ImageNet datasets. On the basis of this framework, we derive a family of pruning methods, including sparsify-during-training, early pruning, and pruning at initialization. The code is available at https://github.com/Yanqi-Chen/LATS.
INTRODUCTION
Pruning has been a thriving area of network compression. Since the day deep neural networks stretched their tentacles to every corner of machine learning applications, the demand for shrinking the size of network parameters has never stopped growing. Fewer parameters usually imply less computing burden on resource-constrained hardware such as embedded devices or neuromorphic chips. Some pioneering studies have revealed considerable redundancies in both Artificial Neural Networks (ANNs) (Han et al., 2015; 2016; Wen et al., 2016; Liu et al., 2017) and Spiking Neural Networks (SNNs) (Qi et al., 2018; Chen et al., 2021; Yin et al., 2021; Deng et al., 2021; Kundu et al., 2021; Kim et al., 2022b).
In essence, pruning can be formulated as an optimization problem under a constraint on the $L_0$ norm, the number of nonzero components in network parameters. Assuming $L$ is the loss function of the vectorized network weight $w$, we expect a lower $L_0$ norm $\|w\|_0$ along with a lower loss $L(w)$. Despite different formulations like hard constraints
$$\min_{L(w)\le c} \|w\|_0; \tag{1}$$
$$\min_{\|w\|_0\le K} L(w); \tag{2}$$
or soft constraints (penalized)
$$\min_{w} \{L(w) + \mu\|w\|_0\}, \tag{3}$$
all these forms are without exception NP-hard (Natarajan, 1995; Davis et al., 1997; Nguyen et al., 2019). Relaxing the $L_0$ norm to the $L_p$ ($0 < p < 1$) norm does not make it more tractable, for it is still strongly NP-hard (Ge et al., 2011). Nowadays, research on pruning and sparse optimization is mainly focused on the $L_1$-regularized problem, the tightest convex relaxation of the $L_0$ norm, which dates back to a series of groundbreaking studies on compressed sensing (Donoho, 2006; Candès et al., 2006). These studies technically allow us to solve the $L_1$-regularized problem as an alternative or, sometimes, even an equivalent option (Candès, 2008) to confront the $L_0$ norm constraint. A variety of modern methods such as magnitude-based pruning are still firmly rooted in solving the $L_1$-regularized optimization problem. Be that as it may, $L_1$ regularization is mostly employed for shrinking the magnitude of weights before the hard thresholding step, and has started to be replaced by other sorts of novel regularization (Zhuang et al., 2020).
In the past few years, a new range of pruning methods based on soft threshold reparameterization of weights has been developing gradually. The term "reparameterization" here refers to a specific mapping to network weights $w$ from a latent space of hidden parameters $\theta$. The "geometry" of the latent space can be designed to guide the actual weights $w$ towards sparsity. In soft threshold pruning, the mapping is an element-wise soft threshold function with a time-variant threshold. Among these studies, two representative ones are Soft Threshold weight Reparameterization (STR) (Kusupati et al., 2020) and State Transition of Dendritic Spines (STDS). They both achieved the best performance of their time. STDS further demonstrates the analogy between the soft threshold mapping and a structure in biological neural systems, i.e., dendritic filopodia and mature dendritic spines. However, few researchers notice that the soft threshold mapping also appears as the shrinkage operator in the solution of LASSO (Tibshirani, 1996) when the design matrix is orthonormal. The studies on LASSO further derive the Iterative Shrinkage-Thresholding Algorithm (ISTA) (Daubechies et al., 2004; Elad, 2006), which was popularized in sparse recovery and compressed sensing. ISTA has many variants (Bioucas-Dias & Figueiredo, 2007; Beck & Teboulle, 2009b; Bayram & Selesnick, 2010) and has long been certified as an effective sparsification method in all sorts of fields like deep learning (He et al., 2017; Zhang et al., 2018; Bai et al., 2020), computer vision (Beck & Teboulle, 2009a; Dong et al., 2013), medical imaging (Lustig et al., 2007; Otazo et al., 2015), and geophysics (Herrmann & Hennenfent, 2008). Despite an abecedarian analysis of the similarity between STDS and ISTA, many issues remain to be addressed, such as 1) the exact equivalence between ISTA and the growing threshold in soft threshold pruning, 2) the necessity of setting the threshold trainable in STR, and 3) the way to improve existing methods without exhaustively trying different tricks for scheduling the threshold.
In this work, we propose a theoretical framework serving as a bridge between the underlying $L_1$-regularized optimization problem and threshold scheduling. The bridge is built upon the key finding that soft threshold pruning is an implicit ISTA for nonzero weights. Specifically, we prove that the $L_1$ coefficient in the underlying optimization problem is determined by both the threshold and the learning rate. In this way, any threshold tuning strategy can now be interpreted as a scheme for tuning the $L_1$ penalty. We find that a time-invariant $L_1$ coefficient leads to performance towering over previous pruning studies. Moreover, we bring a strategy of tuning the $L_1$ penalty called the continuation strategy (Xiao & Zhang, 2012), which was once all the rage in the field of sparse optimization, to the field of pruning. It derives broad categories of algorithms covering several tracks in the present taxonomy of pruning. In brief, our contributions are summarized as follows:
• Theoretical cornerstone of threshold tuning strategies. To the best of our knowledge, this is the first work interpreting an increasing threshold as an ever-changing regularization term. Under theoretical analysis, we present a unified framework for the local equivalence of ISTA and soft threshold pruning. It enables us to make a comprehensive study of threshold tuning using classic methods in sparse optimization.
• Learning rate adapted threshold scheduler. Through our proposed framework, we reveal the strong relation between the learning rate scheduler and the threshold scheduler. We then show that a time-invariant $L_1$ coefficient requires the change in threshold to be proportional to the learning rate. The Learning rate Adapted Threshold Scheduler (LATS) built upon the $L_1$ coefficient achieves a state-of-the-art performance-sparsity tradeoff on both deep ANNs and SNNs.
• Sibling schedulers cover multiple tracks of pruning. We propose an early pruning algorithm by translating the homotopy continuation algorithm into a pruning algorithm with our framework. It achieves performance indistinguishable from LATS as a conventional early pruning method. Moreover, the algorithm in the pruning-at-initialization setting erases some subsequent layers in ResNet and maintains the identity mapping, shrinking a deep ResNet to a shallow one.
RELATED WORKS
A deluge of pruning algorithms has emerged since the term "deep compression" was invented. These various studies emphasize different points like granularity (structured or unstructured) and the stage of pruning (at initialization, during training, post-training). The difference in granularity is similar to that between LASSO and group LASSO. Empirically, unstructured pruning tends to reach higher sparsity under the same accuracy degradation. For the pruning phase, sparsifying during training commonly leads to higher accuracy than early-phase pruning, e.g., pruning at initialization. Moreover, pruning during training is cheaper than post-training pruning when the overhead of dense training is considered. Some most relevant works are introduced as follows. (Zhou et al., 2021), GPO (Wang et al., 2022), OptG, and STDS. Many of these works are based on the reparameterization of weights using either a binary mask or an element-wise nonlinear mapping. The former choose to confront the $L_0$ constraint directly, while the latter are committed to adjusting the landscape of the loss function around zero. Our method is based on soft threshold reparameterization, which is piecewise linear and has an intrinsic connection to ISTA with $L_1$ regularization.
Early pruning. We refer to a variant of sparsify-during-training as early pruning here, which only applies pruning to the network in the early stage of training. It includes pruning at initialization, e.g., GraSP, SynFlow (Tanaka et al., 2020), SBP-SR (Hayou et al., 2021), and ProsPr (Alizadeh et al., 2022), and the conventional early pruning methods which stop pruning after several epochs of training (You et al., 2020; Liu et al., 2021b; Rachwan et al., 2022). Most of these works are inspired by the discovery of the Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2019) or SNIP (Lee et al., 2019), if not both. LTH suggests we can find a sparse subnetwork with comparable performance to the dense network after iterative retraining, while SNIP manages to train from an initially sparse network. A wide array of criteria like connectivity sensitivity are taken for finding such sparse networks in the early stage with promising accuracy after training.
PRELIMINARIES
Notation. We use $|x|$ to denote the element-wise absolute value of $x$, and $\|x\|_p$ denotes the $p$-norm of $x$. $x \odot y$ denotes the element-wise product of $x$ and $y$. If not otherwise specified, the superscript within parentheses $x^{(i)}$ denotes $x$ at the $i$-th iteration of gradient descent. The element-wise sigmoid function is denoted by $\sigma(x)_i := 1/(1 + e^{-x_i})$. The soft threshold mapping is also an element-wise mapping, defined by $S_d(x)_i := \mathrm{sign}(x_i)\cdot\max\{|x_i| - d, 0\}$ with scalar threshold $d$.
SOFT THRESHOLD PRUNING
Basically, soft threshold pruning iteratively executes the following three core steps:
(i) Mapping the hidden weight to the actual weight $w$ through the soft threshold mapping $w^{(t)} \leftarrow S_d(\theta^{(t)})$ during training, where $\theta$ is a trainable hidden weight with the same shape as $w$.
(ii) Training the hidden weight $\theta$ through backpropagation in the latent space.
(iii) Growing the threshold $d$, which pushes the term $\max\{|\theta_i| - d, 0\}$ in $S_d(\theta)$ towards zero and thereby enforces sparsity for $w$.
Algorithm 1: The general form of the soft threshold pruning algorithm coupled with vanilla SGD (STR and STDS for instance).

Input: initialized network parameters $w^{(0)}$, threshold scheduler function $g(\cdot)$, initial threshold $d^{(0)}$, final threshold $D$, initial learnable parameter $s_{\text{init}}$, loss function $L(w)$, the number of training iterations $T$, $L_2$ penalty $\lambda$.
Output: trained sparse parameters $w^{(T)}$
1: $s^{(0)} \leftarrow s_{\text{init}}$, $d^{(0)} \leftarrow \sigma(s^{(0)})$, $\theta^{(0)} \leftarrow w^{(0)}$  // Initialization of weight and threshold
2: for $t = 0, 1, \ldots, T-1$ do
3:   $\Delta\theta^{(t)} \leftarrow \nabla_w(L(w^{(t)})) \odot \mathbb{1}_{|\theta| > d^{(t)}}$  // Computing gradient with respect to hidden weight
4:   $\theta^{(t+1)} \leftarrow \theta^{(t)} - \eta^{(t)}(\Delta\theta^{(t)} + \lambda\theta^{(t)})$  // Update hidden weight; $\eta^{(t)}$ is the learning rate
5:   $s^{(t+1)} \leftarrow s^{(t)} - \eta^{(t)}(\nabla_s(L(w; s^{(t)})) + \lambda s^{(t)})$  // Threshold training in STR
6:   $d^{(t+1)} \leftarrow \sigma(s^{(t+1)})$ (STR) or $d^{(t+1)} \leftarrow g((t+1)/T)\cdot D$ (STDS)  // Update threshold
7:   $w^{(t+1)} \leftarrow S_{d^{(t+1)}}(\theta^{(t+1)})$  // Update the actual weight
8: end for
9: return $w^{(T)}$
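To make the loop concrete, the following is a minimal PyTorch sketch of the STDS-style variant of Algorithm 1 on a toy objective; the hidden weight, the toy loss, and the linear scheduler are hypothetical placeholders, not the released implementation. PyTorch's autograd reproduces the $\mathbb{1}_{|\theta| > d}$ masking in line 3 automatically, and `weight_decay` supplies the $\lambda\theta$ term in line 4.

```python
import torch

def soft_threshold(theta: torch.Tensor, d: float) -> torch.Tensor:
    # S_d(theta)_i = sign(theta_i) * max(|theta_i| - d, 0)
    return torch.sign(theta) * torch.clamp(theta.abs() - d, min=0.0)

theta = torch.randn(1000, requires_grad=True)          # hidden weight
optimizer = torch.optim.SGD([theta], lr=0.1, weight_decay=1e-4)
T, D = 10_000, 0.5
g = lambda x: x                                        # e.g., a linear scheduler

for t in range(T):
    d = g((t + 1) / T) * D                             # line 6: update threshold
    w = soft_threshold(theta, d)                       # line 7: actual weight
    loss = (w - 1.0).pow(2).mean()                     # toy loss standing in for L(w)
    optimizer.zero_grad()
    loss.backward()                                    # gradient flows L -> w -> theta
    optimizer.step()
```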
We provide a general form of soft threshold pruning as Algorithm 1, which is a prototype of both STR and STDS. GPO changes the mapping in line 7 to a convex combination of the soft threshold and identity mappings, leading to a slight difference. However, GPO only obtains a marginal performance improvement with respect to STR, and will soon degenerate to STR, as discussed in Appendix C. Therefore, the rest of this paper focuses on STR and STDS. The differences between them are concluded from two aspects.
Gradient computing. With the reparameterization mapping $w^{(t)} \leftarrow S_d(\theta^{(t)})$, the forward step includes mapping via $\theta \to w$ and evaluating the loss with $w$ as the actual network weight. Hence, the gradient is backpropagated via the path $L \to w \to \theta$. The learning rule is thus used for updating $\theta$ instead of $w$. Note that $S_d$ is non-differentiable at $\pm d$ and has zero gradient in the interval $(-d, d)$. STR takes advantage of the subgradient at $\pm d$ and leaves the zero gradients as they are. STDS views $S_d$ as an identity mapping during backward, and provides a convergence analysis by approximating $S_d$ using a smooth surrogate $\hat{S}_d(x) = \frac{1}{\alpha}\left[\zeta(\alpha(x - d)) - \zeta(-\alpha(x + d))\right]$, where $\zeta(x) := \log(1 + e^x)$ denotes the softplus function.
Threshold tuning. These studies spontaneously try manipulating the threshold $d$ for different sparsity. STR assigns an independent threshold to each layer, resulting in a threshold vector $d$. Besides, STR further parameterizes the threshold by another trainable parameter $s$. The mapping from $s$ to $d$ is individually designed for CNNs and RNNs. Compared to STR, STDS sets the threshold manually by introducing the threshold scheduler $d^{(t)} = g(t/T)\cdot D$, wherein the scheduler function $g: [0,1] \to [0,1]$ is increasing and satisfies $g(0) = 0$ and $g(1) = 1$, and $T$ is the total number of training steps. The formulation is based on the idea that increasing the threshold from 0 to $D$ could follow different paths. The final threshold $D$ is the only adjustable hyperparameter for different sparsity levels when $g$ is given. A larger $D$ always leads to higher sparsity in practice.
The above techniques are summarized in Tab. 1.

Table 1: Summary of soft threshold pruning methods.

Method                  | Mapping                 | Backward of $S_d$      | Threshold                 | Notes
STR                     | $w = S_d(\theta)$       | Subgradient            | $d := \sigma(s)$          | $s$ is layer-wise, trainable, and $L_2$ regularized
GPO (Wang et al., 2022) | $w = (1-k)S_d(\theta) + k\theta$ | Gradient      | $d := \sigma(s)$          | $s, k$ are layer-wise, trainable, and $L_2$ regularized
STDS                    | $w = S_d(\theta)$       | 1 (viewed as identity) | $d^{(t)} = g(t/T)\cdot D$ | $g$ is increasing, $g(0) = 0$, $g(1) = 1$
ITERATIVE SHRINKAGE-THRESHOLDING ALGORITHM
ISTA is initially derived from solving the linear inverse problem with regularization $\min_{x\in\mathbb{R}^n}\left\{\|Ax - b\|_2^2 + r(x)\right\}$, where $A \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^m$. ISTA is later extended to the general objective $\min_{x\in\mathbb{R}^n}\{F(x) := f(x) + r(x)\}$ with the assumptions listed below:
(i) The objective function $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and $L$-smooth, i.e., $\|\nabla f(x) - \nabla f(y)\|_2 \le L_f\|x - y\|_2$, $\forall x, y \in \mathbb{R}^n$, where $L_f > 0$ is the Lipschitz constant of $\nabla f$.
(ii) The regularization function $r: \mathbb{R}^n \to \mathbb{R}$ is continuous, convex, and can be nonsmooth.
(iii) $F$ is bounded from below.
Leaving the regularization $r(x)$ aside, applying vanilla SGD to $f$ can be viewed as iteratively calculating the proximal regularization (Martinet, 1970) of the linearized $f$ at $x$, which is suggested by the following fact:
$$x - \eta\nabla f(x) = \arg\min_y\left\{f(x) + \langle y - x, \nabla f(x)\rangle + \frac{1}{2\eta}\|y - x\|_2^2\right\},$$
where $\eta$ is explained as the "stepsize" in optimization or the "learning rate" in the context of deep learning. For a given point $x$, $F(y)$ can be approximated by expanding $f$ to the quadratic term in a similar vein:
$$\hat{F}_\eta(y; x) := f(x) + \langle y - x, \nabla f(x)\rangle + \frac{1}{2\eta}\|y - x\|_2^2 + r(y). \tag{4}$$
The above problem admits a unique minimizer
$$\arg\min_y \hat{F}_\eta(y; x) = \arg\min_y\left\{\frac{1}{2\eta}\left\|y - (x - \eta\nabla f(x))\right\|_2^2 + r(y)\right\}. \tag{5}$$
Note that $\eta = \frac{1}{L_f}$ gives an upper bound of $f(y)$, which is obtained by the descent lemma (Beck, 2017)
$$f(y) \le f(x) + \langle y - x, \nabla f(x)\rangle + \frac{L_f}{2}\|y - x\|_2^2.$$
It implies that with a proper choice of $\eta$, we are virtually optimizing the upper bound of $F(y)$ using the minimizer in Eq. 5. The general form of ISTA iteratively solves Eq. 5 as $x^{(t+1)} = \arg\min_x \hat{F}_\eta(x; x^{(t)})$, which is also known as proximal gradient methods (Combettes & Wajs, 2005). The detailed convergence analysis is presented in vast optimization literature like FISTA (Beck & Teboulle, 2009b) and GIST (Gong et al., 2013). For sparsity, we are interested in the $L_1$ regularization term $r(x) = \mu\|x\|_1$, $\mu > 0$. Since the $L_1$ norm is separable, we have a closed-form solution with an element-wise soft threshold operation as
$$x^{(t+1)} = S_{\mu\eta}\left(x^{(t)} - \eta\nabla f(x^{(t)})\right). \tag{6}$$
Eq. 6 gives the ISTA update rule under $L_1$ regularization.
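To make the update rule concrete, the following is a minimal NumPy sketch of ISTA for the LASSO instance of this problem, $\min_x \|Ax - b\|_2^2 + \mu\|x\|_1$; the step size choice $\eta = 1/L_f$ follows the discussion above, and the loop is a direct transcription of Eq. 6.

```python
import numpy as np

def soft_threshold(x, d):
    return np.sign(x) * np.maximum(np.abs(x) - d, 0.0)

def ista_lasso(A, b, mu, n_iters=500):
    """ISTA for min_x ||Ax - b||_2^2 + mu * ||x||_1 (Eq. 6).

    eta = 1 / L_f, where L_f = 2 * sigma_max(A)^2 is the Lipschitz
    constant of the gradient of f(x) = ||Ax - b||_2^2.
    """
    L_f = 2.0 * np.linalg.norm(A, ord=2) ** 2
    eta = 1.0 / L_f
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = 2.0 * A.T @ (A @ x - b)                    # gradient of f
        x = soft_threshold(x - eta * grad, mu * eta)      # Eq. 6
    return x
```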
A FRAMEWORK FOR SOFT THRESHOLD PRUNING
In this part, we will formulate the growing threshold under soft threshold reparameterization as an implicit ISTA. For simplicity, we assume the threshold is global across all parameters, which is consistent with the setting of STDS and STR-GS, i.e., the global threshold version of STR. Hence, we use scalar d rather than vector d in Algorithm 1 to denote the global threshold.
To begin with, we investigate the update rule of nonzero components in actual weight
$$\theta^{(t+1)} \leftarrow \theta^{(t)} - \eta^{(t)}\nabla_w(L(w^{(t)})) \odot \nabla_\theta(w(\theta^{(t)}, d^{(t)})), \quad \text{(Line 4 in Algorithm 1)}, \tag{7}$$
$$w^{(t+1)} \leftarrow S_{d^{(t+1)}}(\theta^{(t+1)}), \quad \text{(Line 7 in Algorithm 1)}. \tag{8}$$
Assuming the sign of a weight remains unchanged after an update, which happens when the gradient has the opposite sign of the weight or the gradient magnitude is sufficiently small, we have the following lemma:

Lemma 1 (Local update rule). The update rule in Eq. 7 and Eq. 8 implies the following update of a nonzero component $w(\theta, d)$ in the actual weight $w$:
$$w^{(t+1)} = S_{d^{(t+1)} - d^{(t)}}\left(w^{(t)} - \eta^{(t)}\nabla_w(L(w^{(t)}))\right). \tag{9}$$
The formal version and proof of Lemma 1 are given in Appendix A. Note that the form in Eq. 9 is equivalent to the ISTA update rule with the threshold equal to the forward finite difference $d^{(t+1)} - d^{(t)}$. Recalling Eq. 6, the corresponding optimization problem can be deduced as Theorem 1:

Theorem 1. Let $L(w)$ be the loss function depending on the network weight $w$, which is further reparameterized by the hidden weight $\theta$ and threshold $d$ as $w(\theta, d) = S_d(\theta)$. When applying vanilla SGD to the reparameterized nonzero weights, the update rule is locally equivalent to solving the following problem using ISTA with penalty term
$$\min_w\left\{F(w) := L(w) + \frac{d^{(t+1)} - d^{(t)}}{\eta^{(t)}}\|w\|_1\right\}, \tag{10}$$
where $\eta^{(t)}$ and $d^{(t)}$ denote the learning rate and threshold at the $t$-th iteration, respectively. The magnitude of the $L_1$ penalty is the forward finite difference of the threshold scheduler divided by the learning rate.
Theorem 1 serves as the glue holding the learning rate scheduler $\eta^{(t)}$, the threshold $d^{(t)}$, and the penalty $\mu^{(t)}$ together. It also, to some extent, justifies the increasing threshold as inducing a positive penalty term. Under this framework, we further explain in Appendix B that the original STDS uses an improper threshold scheduler. Moreover, the validity of training the threshold is discussed in Appendix C.
FINDING OPTIMAL THRESHOLD SCHEDULER
In this part, we are devoted to evaluating some threshold schedulers based on our theoretical framework and the literature on sparse optimization. With the framework, any past strategy of tuning the $L_1$ penalty can be converted into a feasible pruning algorithm today.
LEARNING RATE ADAPTED THRESHOLD SCHEDULER
If the optimization problem in Theorem 1 is fixed, or in other words, the $L_1$ penalty is invariant ($\mu^{(t)} \equiv \mu$) during learning, we can deduce a unique threshold scheduler, LATS. In LATS, the change in threshold $d^{(t+1)} - d^{(t)}$ must be proportional to the learning rate $\eta^{(t)}$ during training.

Corollary 1 (Learning rate Adapted Threshold Scheduler). For the fixed $L_1$-regularized problem
$$\min_w \{F(w) := L(w) + \mu\|w\|_1\}, \tag{11}$$
where $\mu > 0$ is the time-independent $L_1$ penalty coefficient, the threshold scheduler is governed by the learning rate scheduler as
$$d^{(t)} = d^{(0)} + \mu\sum_{i=0}^{t-1}\eta^{(i)}.$$
Assuming the initial threshold is zero, simple algebra shows the definition of the corresponding LATS with a slight abuse of notation
$$d^{(n,b)} = \mu\eta_{\max}\left[\frac{B}{4}\left(2n + 1 + \frac{\sin\frac{2n-1}{2N}\pi}{\sin\frac{\pi}{2N}}\right) + \frac{b}{2}\left(1 + \cos\frac{n\pi}{N}\right)\right], \tag{12}$$
where the threshold $d^{(n,b)}$ depends on the epoch id $n$ and batch id $b$. Here $n = 0, 1, \ldots, N-1$ denotes the current id of the training epoch, and $b = 1, 2, \ldots, B$ denotes the batch id in the $n$-th epoch. We elaborate the detailed derivation of Eq. 12 in Appendix D; a direct transcription of Eq. 12 is sketched below.
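The following is a minimal sketch evaluating the LATS threshold of Eq. 12 for cosine annealing; the function name and signature are our own illustration.

```python
import math

def lats_threshold(n, b, N, B, mu, eta_max):
    """Threshold d^(n,b) of LATS for cosine annealing (Eq. 12).

    n in [0, N-1] is the epoch id, b in [1, B] is the batch id.
    """
    epoch_part = (B / 4.0) * (2 * n + 1
                              + math.sin((2 * n - 1) * math.pi / (2 * N))
                              / math.sin(math.pi / (2 * N)))
    batch_part = (b / 2.0) * (1 + math.cos(n * math.pi / N))
    return mu * eta_max * (epoch_part + batch_part)
```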
SIMPLIFIED THRESHOLD SCHEDULER
A computation such as Eq. 12 is intricate to implement. In fact, painstakingly hand-coding LATS for each given learning rate scheduler can be avoided. To ease the computational burden, we replace the sum of learning rates with the integral of the learning rate function.
To be specific, assuming the learning rate scheduler can be expressed by η (n,b) = h(n/N ), the simplified threshold scheduler is defined by
$$d^{(t)} = d^{(N-1,B)}\cdot\frac{\int_0^{t/T}h(x)\,dx}{\int_0^{1}h(x)\,dx}. \qquad (13)$$
The simplification is faithful to the idea that the value of a Riemann integral can be approximated by the rectangle method. Eq. 13 can be interpreted in the scheduler form of STDS as $d^{(t)} = g(t/T)\cdot D$, with scheduler function $g(x) = \int_0^{x}h(u)\,du \,/\, \int_0^{1}h(u)\,du$ and final threshold $D = d^{(N-1,B)}$. The detailed derivation of Eq. 13 is given in Appendix D. Now we have the Simplified LATS (S-LATS for short) for the cosine annealing learning rate scheduler $h(x) = \frac{\eta_{\max}}{2}(1+\cos(\pi x))$:
$$d^{(t)} = \frac{\mu\eta_{\max}T}{2}\cdot\frac{\int_0^{t/T}\frac{1}{2}(1+\cos\pi x)\,dx}{\int_0^{1}\frac{1}{2}(1+\cos\pi x)\,dx} = \frac{\mu\eta_{\max}T}{2}\left[\frac{1}{\pi}\sin\left(\frac{t\pi}{T}\right)+\frac{t}{T}\right]. \qquad (14)$$
The final threshold is $D = \mu\eta_{\max}T/2$, which satisfies $D \propto \mu$. Tuning $D$ is thus akin to changing the magnitude of the penalty. In the following discussion, we use the final threshold $D$ instead of $d^{(N-1,B)}$ to lighten the notation; the threshold schedulers in the rest of this work are therefore expressed in the unified form $d^{(t)} = g(t/T)\cdot D$. We evaluate LATS and S-LATS under identical final thresholds $D = 0.1, 0.5, 1.0, 5.0$; as can be gleaned from Tab. 2, the two are indistinguishable in both accuracy and sparsity. We therefore adopt the simplified threshold scheduler in the following discussion.
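As a minimal sketch of Eq. 14 (the function name and the iteration count are our own illustrative assumptions):

```python
import math

def s_lats(t, T, D):
    """S-LATS threshold at iteration t (Eq. 14, with the constant folded into D)."""
    x = t / T
    return D * (math.sin(math.pi * x) / math.pi + x)

# Usage: the threshold grows smoothly from 0 to D over T iterations.
T, D = 100 * 5005, 0.5          # e.g., roughly 100 epochs on ImageNet at batch size 256
print(s_lats(0, T, D), s_lats(T, T, D))   # 0.0 ... 0.5
```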
CONTINUATION STRATEGY
Also known as "warm starting", the continuation strategy is designed to accelerate convergence. Similar to the annealing of the learning rate, continuation refers to gradually reducing the $L_1$ penalty during learning. It is also explained in Hale et al. (2008) as an analogue of the homotopy algorithms in statistics. The continuation method used to serve as a common trick in abundant classic literature on sparse optimization, including GPSR (Figueiredo et al., 2007), the fixed point continuation method (Hale et al., 2008), SpaRSA (Wright et al., 2009) and NESTA (Becker et al., 2011).
PGH SCHEDULER
In the series of works on proximal gradient homotopy (PGH) (Xiao & Zhang, 2012; Lin & Xiao, 2014), the authors prove a geometric convergence rate when inducing an exponentially decaying $L_1$ coefficient $\mu^{(t)} = \beta^{t/T}$, where $0 < \beta < 1$ is a constant. Considering our formulation of soft threshold pruning in Theorem 1, PGH can be translated into the PGH scheduler (simplified using Eq. 13), which can be written as
$$d^{(t)} = g_{\mathrm{PGH}}(t/T)\cdot D := \frac{\int_0^{t/T}\frac{1}{2}(1+\cos\pi x)\beta^{x}\,dx}{\int_0^{1}\frac{1}{2}(1+\cos\pi x)\beta^{x}\,dx}\cdot D, \qquad (15)$$
where the analytic form of $g_{\mathrm{PGH}}$ is shown below:
$$g_{\mathrm{PGH}}(x) = \frac{\pi^2(\beta^x-1)+\log^2(\beta)\,(\beta^x-2)+\log(\beta)\,\beta^x\left(\log(\beta)\cos(\pi x)+\pi\sin(\pi x)\right)}{\pi^2(\beta-1)-2\log^2(\beta)}. \qquad (16)$$
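A small sanity-check sketch of Eq. 16 (our own helper in plain Python): $g_{\mathrm{PGH}}(0) = 0$ and $g_{\mathrm{PGH}}(1) = 1$ by construction:

```python
import math

def g_pgh(x, beta):
    """Analytic PGH scheduler function (Eq. 16); the threshold is g_pgh(t/T, beta) * D."""
    a = math.log(beta)
    num = (math.pi**2 * (beta**x - 1) + a**2 * (beta**x - 2)
           + a * beta**x * (a * math.cos(math.pi * x) + math.pi * math.sin(math.pi * x)))
    den = math.pi**2 * (beta - 1) - 2 * a**2
    return num / den

print(g_pgh(0.0, 0.1), g_pgh(1.0, 0.1))  # 0.0 and 1.0
```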
LINK TO EARLY PRUNING
For the PGH scheduler, we interpolate $\beta$ between 0 and 1 and obtain a series of different PGH schedulers. As shown in Fig. 1, the increase in threshold slows over time, which is caused by the decaying $L_1$ penalty. For $0 < \beta < 1$, if pruning is regarded as stopped once the penalty falls below a preset level, we obtain a family of early pruning algorithms that stop pruning at different stages. Recall that the regularization term is proportional to the forward finite difference of the threshold and can thus be approximated by its derivative; for conventional early pruning, we therefore regard $g'_{\mathrm{PGH}}(t/T) < 0.1$ as the termination criterion of pruning.
Figure 1: PGH schedulers $g_{\mathrm{PGH}}(x)$ under different $\beta$ (curves for $\beta$ = 1e-10, 1e-5, 0.1, together with the limits $\beta \to 0$ and S-LATS, i.e., $\beta \to 1$).
There are two limit cases as $\beta$ approaches 0 or 1. Obviously, $\beta \to 1$ leads to S-LATS, since no decay is applied. When $\beta \to 0$, the penalty is always zero except at the very beginning. In this case, the PGH scheduler degenerates to magnitude-based pruning right after weight initialization, followed by a normal training stage. This is also referred to as sparse-to-sparse training or a pruning-at-initialization method.
EXPERIMENTS
In this section, we test the proposed threshold scheduler S-LATS on both deep ANNs and SNNs, and confirm its favorable performance against previous studies. In all experiments we switch sparsity levels by changing $D$, which is equivalent to tuning the $L_1$ penalty coefficient. We also tune the hyperparameter $\beta$ in the PGH scheduler to realize different phases of the early pruning algorithm. Compared to the dense baseline, no tuning of other training hyperparameters is needed, which minimizes the effort of applying our method to other networks.

S-LATS

LATS achieves state-of-the-art performance on both ANNs (ResNet-50, MobileNet-V1 (Howard et al., 2017)) and SNNs (SEW ResNet-18 (Fang et al., 2021)). The results on ResNet-like networks and MobileNet-V1 are illustrated in Fig. 2. We add Gradual Magnitude Pruning (GMP) (Zhu & Gupta, 2017) to the comparison since few recent studies are conducted on MobileNet-V1. Notably, our method surges ahead of all the other baselines under <98% sparsity when pruning ResNet-50, as shown in Tab. 3. It should be noted that the original STDS excludes the last FC layer from pruning in SNNs, while the results reported here are obtained by rerunning it with the last layer pruned. We also admit that our method fails to achieve performance comparable to a few baselines like OptG under extreme sparsity (≥99%), which suggests the theory might be imperfect under such conditions. An ablation study shows S-LATS outperforms the default threshold scheduler of STDS, as detailed in Appendix F.

Pruning using large batch size. Powerprop (Schwarz et al., 2021) adopts a batch size of 4096 for pruning ResNet-50 and achieves fascinating performance under high sparsity. Hence, we explore the large batch size setting. Due to limited resources, we only increase the batch size to 1024 and enlarge the learning rate correspondingly. The rightmost panel of Fig. 2 shows that this indeed leads to higher performance, outperforming all other SOTA studies. Astonishingly, even though the performance of the dense network slightly degrades, the accuracy of the sparse networks improves overall. On the basis of this finding, we believe applying an even larger batch size, like the 4096 used in Powerprop, to our method may lead to a top-notch performance tradeoff.
PGH SCHEDULER
Conventional early pruning Our early pruning experiments include $\beta = 0.1, 10^{-5}, 10^{-10}$. The corresponding ending criteria $t/T = 0.743, 0.382, 0.231$ are given by numerical solution, indicating that pruning roughly stops at the 74th, the 38th and the 23rd epoch, respectively. The network using a PGH scheduler with smaller $\beta$ converges faster to a sparse one, as illustrated in the left of Fig. 3. Surprisingly, we find in the middle of Fig. 3 that for different $\beta$, the data points of accuracy against sparsity almost lie on the curve of S-LATS. This suggests these schedulers have practically the same performance as S-LATS, but with faster convergence to sparse networks. With the help of the PGH scheduler, we are able to find sparse networks earlier with negligible performance degradation.
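The ending criteria can be reproduced with a short bisection sketch (our own code, not from any codebase): $g'_{\mathrm{PGH}}$ is proportional to the integrand of Eq. 15 and decreases monotonically on $(0, 1)$ for $\beta < 1$, so bisection suffices:

```python
import math

def gpgh_deriv(x, beta):
    """Derivative of g_PGH: the integrand 0.5*(1+cos(pi*x))*beta**x, normalized by its integral."""
    a = math.log(beta)
    integral_01 = (math.pi**2 * (beta - 1) - 2 * a**2) / (2 * a * (a**2 + math.pi**2))
    return 0.5 * (1 + math.cos(math.pi * x)) * beta**x / integral_01

def stop_point(beta, level=0.1):
    """Bisection for the pruning-termination point g'_PGH(x) = level."""
    lo, hi = 1e-6, 1.0 - 1e-6
    for _ in range(100):
        mid = (lo + hi) / 2
        if gpgh_deriv(mid, beta) > level:   # still above the level: root lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for beta in (0.1, 1e-5, 1e-10):
    print(beta, round(stop_point(beta), 3))  # ~0.743, ~0.382, ~0.231
```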
Pruning at initialization We also try $\beta \to 0$, which refers to increasing the threshold to its maximum at the first iteration. Note that our method is agnostic to the structure of the network. Hence, some layers are completely pruned, as shown in the right of Fig. 3, wherein three consecutive layers within a residual block tend to be pruned simultaneously. However, owing to the skip connections in ResNet, features can still pass through shortcuts to the final FC layer, and thus the whole network is still trained normally. The aforementioned results are collected in Tab. 4.
CONCLUSION & DISCUSSION
In this work, we present a framework interpreting the increasing threshold as a constantly changing penalty term and reveal the underlying connection between soft threshold pruning and ISTA. We also derive a couple of threshold schedulers, which achieve performance comparable to current SOTA works and cover multiple tracks of pruning. It is worth noting that our method is agnostic to the object of pruning. This design endows our method with versatility, treating weights everywhere equally, yet it remains ignorant of recent pruning practices such as sparsity budget allocation, e.g., Erdős-Rényi (Mocanu et al., 2018) and Erdős-Rényi-Kernel (ERK) (Evci et al., 2020), or common sense in this area like "leave at least one path from input to output". We believe our method can surely benefit further from knowledge in the prosperous field of network pruning.
A PROOF OF THEOREMS AND LEMMAS
For clarity, we restate the theorem or lemma in the main text again here.
Lemma 1 (Local update rule). The update rule below
$$\theta^{(t+1)} \leftarrow \theta^{(t)} - \eta^{(t)}\,\nabla_w L(w^{(t)})\,\nabla_\theta w(\theta^{(t)}, d^{(t)}) \qquad (17)$$
$$w^{(t+1)} \leftarrow S_{d^{(t+1)}}(\theta^{(t+1)}) \qquad (18)$$
implies the following update of any nonzero component $w(\theta, d)$ of the actual weight $w$:
$$w^{(t+1)} = S_{d^{(t+1)}-d^{(t)}}\left(w^{(t)} - \eta^{(t)}\nabla_w L(w^{(t)})\right) \qquad (19)$$
when $|\theta^{(t+1)}| > d^{(t)}$ and the sign condition $\operatorname{sign}(\theta^{(t+1)}) = \operatorname{sign}(\theta^{(t)})$ are met.
Proof. For any nonzero weight $w^{(t)} \neq 0$, we have $\theta^{(t)} = w^{(t)} + d^{(t)}\operatorname{sign}(w^{(t)})$; using Eq. 17 we get
$$\theta^{(t+1)} = w^{(t)} + d^{(t)}\operatorname{sign}(w^{(t)}) - \eta^{(t)}\nabla_w L(w^{(t)}). \qquad (20)$$
Let $\bar{w}^{(t+1)} := w^{(t)} - \eta^{(t)}\nabla_w L(w^{(t)})$ be the target point of vanilla SGD without regularization. Recalling Eq. 20, we have
$$\bar{w}^{(t+1)} = \theta^{(t+1)} - d^{(t)}\operatorname{sign}(w^{(t)}) = \operatorname{sign}(\theta^{(t+1)})|\theta^{(t+1)}| - \operatorname{sign}(\theta^{(t)})\,d^{(t)} = \operatorname{sign}(\theta^{(t+1)})|\theta^{(t+1)}| - \operatorname{sign}(\theta^{(t+1)})\,d^{(t)} = \operatorname{sign}(\theta^{(t+1)})\left(|\theta^{(t+1)}| - d^{(t)}\right), \qquad (21)$$
which has the same sign as $\theta^{(t+1)}$. Now we have $\operatorname{sign}(\bar{w}^{(t+1)}) = \operatorname{sign}(\theta^{(t+1)}) = \operatorname{sign}(\theta^{(t)}) = \operatorname{sign}(w^{(t)}) = \operatorname{sign}(w^{(t+1)})$.
To evaluate the updated weight, by Eq. 18, Eq. 20, Eq. 21 and the definition of the soft threshold mapping, we derive
$$\begin{aligned}
w^{(t+1)} &= \operatorname{sign}(\theta^{(t+1)})\max\{|w^{(t)} + d^{(t)}\operatorname{sign}(w^{(t)}) - \eta^{(t)}\nabla_w L(w^{(t)})| - d^{(t+1)}, 0\}\\
&= \operatorname{sign}(\theta^{(t+1)})\max\{|\bar{w}^{(t+1)} + d^{(t)}\operatorname{sign}(\bar{w}^{(t+1)})| - d^{(t+1)}, 0\}\\
&= \operatorname{sign}(\theta^{(t+1)})\max\{|\operatorname{sign}(\bar{w}^{(t+1)})(|\bar{w}^{(t+1)}| + d^{(t)})| - d^{(t+1)}, 0\}\\
&= \operatorname{sign}(\theta^{(t+1)})\max\{|\bar{w}^{(t+1)}| + d^{(t)} - d^{(t+1)}, 0\}\\
&= \operatorname{sign}(\bar{w}^{(t+1)})\max\{|\bar{w}^{(t+1)}| - (d^{(t+1)} - d^{(t)}), 0\}\\
&= S_{d^{(t+1)}-d^{(t)}}\left(w^{(t)} - \eta^{(t)}\nabla_w L(w^{(t)})\right). \qquad (22)
\end{aligned}$$
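As an illustrative numerical check of Lemma 1 (a sketch under the lemma's assumptions: positive nonzero weights and small gradients so the sign is preserved; the helper names are ours):

```python
import random

def soft(x, d):
    """Soft threshold S_d(x) = sign(x) * max(|x| - d, 0)."""
    return (1.0 if x > 0 else -1.0) * max(abs(x) - d, 0.0)

random.seed(0)
for _ in range(1000):
    w, grad = random.uniform(0.1, 1.0), random.uniform(-0.05, 0.05)
    d0, d1, eta = 0.2, 0.21, 0.1
    theta0 = w + d0                     # theta for a positive nonzero weight
    theta1 = theta0 - eta * grad        # Eq. 17; the mapping gradient is 1 where |theta| > d
    lhs = soft(theta1, d1)              # Eq. 18: soft threshold with the new threshold
    rhs = soft(w - eta * grad, d1 - d0) # Eq. 19: ISTA step with the finite-difference threshold
    assert abs(lhs - rhs) < 1e-12
print("Lemma 1 verified on 1000 random nonzero weights")
```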
B ORIGINAL THRESHOLD SCHEDULER IN STDS
In STDS, the authors propose the Sine scheduler $d^{(t)} = \frac12\left(1+\sin\left(\pi\left(\frac{t}{T}-\frac12\right)\right)\right)D = \frac12\left(1-\cos\frac{t\pi}{T}\right)D$. With the form of the simplified threshold scheduler in Eq. 13, by Theorem 1, the corresponding penalty $\mu^{(t)}$ has the form
$$\mu^{(t)} = \frac{d^{(t+1)}-d^{(t)}}{\eta^{(t)}} = D\cdot\frac{\cos\frac{t\pi}{T}-\cos\frac{(t+1)\pi}{T}}{\eta_{\max}\left(1+\cos\frac{t\pi}{T}\right)} = \frac{2D\sin\frac{\pi}{2T}}{\eta_{\max}}\cdot\frac{\sin\left(\frac{t\pi}{T}+\frac{\pi}{2T}\right)}{1+\cos\frac{t\pi}{T}} \approx C\cdot\frac{\sin\frac{t\pi}{T}}{1+\cos\frac{t\pi}{T}} = C\cdot\tan\frac{t\pi}{2T} \qquad (23)$$
It is a function of the training progress $t/T$ with a constant $C$. Investigating the function $\tan(x/2)$ on the interval $(0, \pi)$, we see that $\mu(x)$ increases from 0 to $+\infty$, which implies $\mu$ can become arbitrarily large during training. The loss is thus insignificant compared to the regularization term in the last stage of training, which leads to performance degradation with respect to S-LATS.
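A few illustrative values make the divergence visible (our own sketch; $T$, $D$ and $\eta_{\max}$ are arbitrary assumptions):

```python
import math

T, D, eta_max = 1000, 0.5, 0.256
eta = [eta_max / 2 * (1 + math.cos(math.pi * t / T)) for t in range(T)]
sine = [D / 2 * (1 - math.cos(math.pi * t / T)) for t in range(T + 1)]
mu = [(sine[t + 1] - sine[t]) / eta[t] for t in range(T)]
print(mu[1], mu[T // 2], mu[-1])  # the implied penalty grows without bound near the end
```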
C DISCUSSION ABOUT TRAINING THRESHOLD
In the main text, we propose a framework explaining the growth pattern of the threshold as an ever-changing optimization problem. However, we only cover manually designed threshold schedulers. We make a discussion here and show that training the threshold is not as straightforward as STR or GPO suggest. We will show that GPO and STR share the same discussion of threshold training, since GPO degenerates to STR within a few training epochs. Moreover, we suggest not simply setting the threshold trainable if one really wants to investigate the optimization of the threshold.
C.1 $L_2$ PENALTY DOMINATES EARLY TRAINING OF STR
In the official codebase of STR, we notice the trainable sparsity parameter is also subject to weight decay ($L_2$ regularization $\lambda\|s\|_2^2$). $\lambda$ is of magnitude around $10^{-5}\sim 10^{-4}$. The initial value $s_{\text{init}}$ is usually set to a negative number with large magnitude, around $-10^4\sim -10^3$, for the CNN trials. It is easy to see that the magnitude of the $L_2$ regularization term $\lambda|s|$ in the update is around $0.01\sim 1$ in the early stage of training.
Take CNNs for instance. For a given loss function $L$ and the weight $w_l$ of the $l$-th layer, the gradient passed to the threshold parameter $s_l$ can be estimated as
$$\nabla_{s_l} L(w_l(s_l,\theta_l)) = \left\langle \nabla_{w_l} L(w_l), \frac{\partial w_l}{\partial s_l} \right\rangle = \sum_{(w,\theta)\in(w_l,\theta_l)} \nabla_w L(w_l)\cdot \nabla_{s_l}\left(S_{\sigma(s_l)}(\theta)\right) = -\sigma(s_l)(1-\sigma(s_l))\sum_{\substack{(w,\theta)\in(w_l,\theta_l)\\ w\neq 0}} \nabla_w L(w_l)\cdot \operatorname{sign}(\theta) \qquad (24)$$
wherein each term of the sum has magnitude less than $\frac14|\nabla_w L(w_l)|$, since the sigmoid function $\sigma(x)$ is bounded in $(0, 1)$ and hence $\sigma(1-\sigma) \le \frac14$. Given that the stochastic gradient noise across parameters in $w_l$ admits a Lévy distribution, some negative and positive terms will balance each other, which makes us wonder whether the regularization term dominates the training of the threshold. Therefore, we compare the gradient to the $L_2$ penalty by tracing the magnitude ratio of $\nabla_{s_l}L$ to $\lambda|s_l|$ during the training of ResNet-50 on ImageNet, which is shown in Fig. 4.
It is evident that the $L_2$ penalty plays the leading role in the early stage rather than the gradient. We even observe that for several final layers, the penalty always dominates the training of the threshold. The update of $s$ can thereby be rewritten as $s^{(t+1)} \approx s^{(t)}(1-\eta^{(t)}\lambda)$, where $\eta^{(t)}$ is the learning rate at the $t$-th iteration. Disregarding the gradient, at the beginning of training STR can be viewed as a special case of a threshold scheduler, as follows:
$$d^{(t)} = \sigma\left(s_{\text{init}}\cdot\prod_{i=0}^{t-1}\left(1-\eta^{(i)}\lambda\right)\right), \quad t = 1, 2, \ldots \qquad (25)$$
By Theorem 1, the corresponding penalty can be derived by
$$\mu^{(t)} = \frac{d^{(t+1)}-d^{(t)}}{\eta^{(t)}} = \frac{\sigma(s^{(t+1)})-\sigma(s^{(t)})}{\eta^{(t)}} = \frac{\sigma'(\bar{s})\left(s^{(t+1)}-s^{(t)}\right)}{\eta^{(t)}} = \frac{-\sigma'(\bar{s})\,s^{(t)}\eta^{(t)}\lambda}{\eta^{(t)}} = -\sigma'(\bar{s})\,s^{(t)}\lambda \qquad (26)$$
where $\bar{s}$ lies between $s^{(t+1)}$ and $s^{(t)}$. The existence of $\bar{s}$ follows from Lagrange's Mean Value Theorem. It is rather difficult to give an explicit analytic expression for $\mu^{(t)}$, since it relies on the behavior of $\bar{s}$, which has a dynamic decay rate $1-\eta^{(t)}\lambda$.
Loosely speaking, we adopt the approximation $\bar{s} \approx s^{(t)}$. We still cannot give the explicit form of $\mu^{(t)}$, but we can now investigate the trend of the penalty by analyzing the function $\mu(x) = -\sigma'(x)x$, which is increasing on $(-\infty, \gamma)$. Here $\gamma \approx -1.5434$ is the unique negative root of $e^{-x}(x+1)-x+1 = 0$.
In the early stage, s is a negative number with a much larger magnitude than γ. Based on the above, µ is increasing in the early stage.
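The constant $\gamma$ can be reproduced by bisection on $e^{-x}(x+1)-x+1$ (a minimal sketch, assuming the single root lies in $(-2, -1)$):

```python
import math

def f(x):
    """gamma solves e^{-x}(x+1) - x + 1 = 0; f is negative left of the root, positive right."""
    return math.exp(-x) * (x + 1) - x + 1

lo, hi = -2.0, -1.0          # f(-2) < 0 < f(-1)
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print((lo + hi) / 2)          # ~ -1.5434
```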
C.2 GPO: FALL BACK TO STR SHORTLY
The previous subsection concludes that STR is majorly influenced by the $L_2$ decay on $s$. We will see that GPO shows a similar behavior on $k$, and that GPO degenerates to STR shortly after training begins.
In GPO, the authors introduce another trainable parameter $\beta$ in the reparameterization $w = (1-k)S_d(\theta) + k\theta$ with $k = 10^{-6}|\beta|$. GPO starts from the identity mapping $w = \theta$ by initializing $\beta = 10^6$. The gradient passed to $\beta_l$ in the $l$-th layer is given by
$$\nabla_{\beta_l} L(w_l(s_l,\beta_l,\theta_l)) = \left\langle \nabla_{w_l}L(w_l), \frac{\partial w_l}{\partial \beta_l}\right\rangle = 10^{-6}\operatorname{sign}(\beta_l)\cdot\sum_{(w,\theta)\in(w_l,\theta_l)} \nabla_w L(w_l)\cdot\left(\theta - S_d(\theta)\right) \qquad (27)$$
Notice that
$$\theta - S_d(\theta) = \begin{cases}\theta, & |\theta| < d\\ d\operatorname{sign}(\theta), & |\theta|\ge d\end{cases}$$
has magnitude not greater than $d$, which implies that the gradients $\nabla_w L(w_l)$ are summed with coefficients whose magnitude is below $10^{-6}d$.
From the codebase of GPO, we confirm the weight decay on $\beta$ is $10^{-4}$. Since the initial $\beta$ is $10^6$, the $L_2$ regularization term is of magnitude around $10^{-4}\times 10^{6} = 100$ at the beginning of training. Recalling that $d = \sigma(s) < 1$, it is obvious that the gradient passed to $\beta$ has a much smaller magnitude than the $L_2$ regularization. For this reason, the gradient can be ignored for $k$. Furthermore, $k$ will shrink exponentially to almost zero, and the mapping in GPO will fall back to STR within a few epochs, which can be seen from $w = (1-k)S_d(\theta) + k\theta \approx S_d(\theta)$.
C.3 FOCUS ON THRESHOLD SCHEDULER INSTEAD OF THE FINAL THRESHOLD.
Conventional wisdom suggests that when a parameter is set trainable, it will be optimized automatically. This idea only works for parameters that directly determine the performance, e.g., the weights of a dense network. For pruning, the authors of STDS find that the differences in the position of the performance-versus-sparsity curve should be ascribed to the evolving pattern of the threshold. To verify this, we replace the threshold training mechanism in STR and the default scheduler in STDS with several schedulers sharing the same final threshold $D = 10^{-3}$ ($D = 0.5$ for STDS), shown below; a small code sketch of the three schedulers follows the list.
• Sine: $d^{(t)} = \frac12\left(1+\sin\left(\pi\left(\frac{t}{T}-\frac12\right)\right)\right)D$
• Linear: $d^{(t)} = \frac{t}{T}D$
• Log2: $d^{(t)} = \log_2\left(\frac{t}{T}+1\right)D$
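A minimal sketch of the three schedulers (plain Python, helper names ours); all reach exactly $D$ at $t = T$:

```python
import math

def sine(t, T, D):
    return 0.5 * (1 + math.sin(math.pi * (t / T - 0.5))) * D

def linear(t, T, D):
    return t / T * D

def log2(t, T, D):
    return math.log2(t / T + 1) * D

T, D = 100, 1e-3
assert abs(sine(T, T, D) - D) < 1e-12 and linear(T, T, D) == D and abs(log2(T, T, D) - D) < 1e-12
```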
We conduct the training on the ANN ResNet-50 for both methods; the results are shown in Tab. 5. It is obvious that even though the final thresholds are set equally, the accuracy and overall sparsity vary with the scheduler. In fact, simply setting the threshold trainable indicates one only cares about whether the final threshold is optimal, which turns out to be a tangential issue in soft threshold pruning.
The above results suggest that the correct manner of manipulating the threshold is pursuing a well-performing scheduler. Even if one studies the learning of the threshold, the effort should be concentrated on the optimization of the threshold scheduler, which may require tools from discrete-time optimal control. Due to the complex coupling between the constantly changing threshold and the final performance, a discussion based on discrete-time optimal control is beyond the scope of this work.
D DETAILED DERIVATION FOR LATS AND S-LATS
In this part, we provide detailed derivations for LATS and S-LATS, which cover Eq. 12 and Eq. 13.
D.1 LATS FOR COSINE ANNEALING SCHEDULER
In most deep learning applications, the training process includes a schedule for the learning rate, also known as a learning rate scheduler. Generally, the learning rate is updated at the end of an epoch. Assume there are $N$ training epochs in total, each of which includes $B$ training mini-batches. The learning rate scheduler is defined as $\eta^{(n,b)} = h(n/N)$, which evaluates the learning rate at the $b$-th mini-batch in the $n$-th epoch. Here $n = 0, 1, \ldots, N-1$ denotes the id of the current training epoch, and $b = 1, 2, \ldots, B$ denotes the batch id within the $n$-th epoch. We denote by $h: [0,1] \to \mathbb{R}^{+}$ the scheduler function for the learning rate. For the cosine annealing scheduler with $\eta_{\min} = 0$, we have
$$h(x) = \frac{\eta_{\max}}{2}\left(1+\cos(\pi x)\right) \qquad (28)$$
In Corollary 1, the threshold scheduler for LATS is obtained by
$$d^{(t)} = d^{(0)} + \mu\sum_{i=0}^{t-1}\eta^{(i)},$$
where $d^{(t)}$ is the threshold after $t$ mini-batches from the beginning. Under the learning rate scheduler described in Eq. 28, the threshold $d^{(n,b)}$ is obtained by accumulating all previous learning rates, which gives
$$d^{(n,b)} = d^{(0,0)} + \mu\left[b\cdot h(n/N) + B\sum_{i=0}^{n-1}h(i/N)\right] \qquad (29)$$
For the cosine annealing learning rate scheduler, we have
$$d^{(n,b)} = d^{(0,0)} + \frac{\mu\eta_{\max}}{2}\left[b\left(1+\cos\frac{n\pi}{N}\right) + B\sum_{i=0}^{n-1}\left(1+\cos\frac{i\pi}{N}\right)\right] = d^{(0,0)} + \frac{\mu\eta_{\max}}{2}\left[b\left(1+\cos\frac{n\pi}{N}\right) + Bn + B\sum_{i=0}^{n-1}\cos\frac{i\pi}{N}\right] \qquad (30)$$
To evaluate the sum of $\cos(i\pi/N)$, we give the following result:
$$\sum_{i=0}^{n-1}\cos\frac{i\pi}{N} = \frac{1}{\sin\frac{\pi}{2N}}\sum_{i=0}^{n-1}\cos\frac{i\pi}{N}\sin\frac{\pi}{2N} = \frac{1}{2\sin\frac{\pi}{2N}}\left[\sum_{i=0}^{n-1}\sin\frac{2i+1}{2N}\pi - \sum_{i=0}^{n-1}\sin\frac{2i-1}{2N}\pi\right] = \frac{1}{2\sin\frac{\pi}{2N}}\left[\sum_{i=1}^{n}\sin\frac{2i-1}{2N}\pi - \sum_{i=0}^{n-1}\sin\frac{2i-1}{2N}\pi\right] = \frac{1}{2\sin\frac{\pi}{2N}}\left[\sin\frac{2n-1}{2N}\pi + \sin\frac{\pi}{2N}\right] = \frac12 + \frac{\sin\frac{2n-1}{2N}\pi}{2\sin\frac{\pi}{2N}}. \qquad (31)$$
Recalling Eq. 30, we have
$$d^{(n,b)} = d^{(0,0)} + \frac{\mu\eta_{\max}}{2}\left[b\left(1+\cos\frac{n\pi}{N}\right)+Bn+B\left(\frac12+\frac{\sin\frac{2n-1}{2N}\pi}{2\sin\frac{\pi}{2N}}\right)\right] = d^{(0,0)} + \mu\eta_{\max}\left[\frac{b}{2}\left(1+\cos\frac{n\pi}{N}\right)+\frac{B}{4}\left(2n+1+\frac{\sin\frac{2n-1}{2N}\pi}{\sin\frac{\pi}{2N}}\right)\right]. \qquad (32)$$
When d (0,0) = 0, this gives LATS in the form of Eq. 12.
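Eq. 32 can be checked against a direct accumulation of learning rates (a minimal sketch with arbitrary $n$, $b$, $N$, $B$, $\mu$, $\eta_{\max}$; helper names ours):

```python
import math

def lats_closed(n, b, N, B, mu, eta_max):
    """Closed-form LATS threshold, Eq. 32 with d^(0,0) = 0."""
    return mu * eta_max * (b / 2 * (1 + math.cos(n * math.pi / N))
        + B / 4 * (2 * n + 1 + math.sin((2 * n - 1) / (2 * N) * math.pi)
                   / math.sin(math.pi / (2 * N))))

def lats_direct(n, b, N, B, mu, eta_max):
    """Direct accumulation of learning rates over all previous mini-batches (Eq. 29)."""
    h = lambda x: eta_max / 2 * (1 + math.cos(math.pi * x))
    return mu * (b * h(n / N) + B * sum(h(i / N) for i in range(n)))

print(abs(lats_closed(42, 17, 100, 500, 0.1, 0.256)
          - lats_direct(42, 17, 100, 500, 0.1, 0.256)))  # ~0 up to floating point error
```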
D.2 THE DETAILED MOTIVATION OF S-LATS
The implementation of LATS is rather complicated and requires meticulous coding. Worse still, for some learning rate schedulers, e.g., the polynomial decay scheduler (Liu et al., 2015; Chen et al., 2018)
$$\eta^{(t)} = \eta_{\max}\left(1-\frac{t}{T}\right)^{\kappa}, \qquad (33)$$
where $\kappa > 0$ is a constant ($\kappa = 0.9$ in the aforementioned studies), the form of LATS cannot be reduced as it can for the cosine annealing scheduler. To see this, we write the corresponding LATS as
$$d^{(t)} = d^{(0)} + \mu\eta_{\max}\sum_{i=0}^{t-1}\left(1-\frac{i}{T}\right)^{\kappa}. \qquad (34)$$
Note that the terms $1-\frac{i}{T}$ make up an arithmetic progression, and the threshold is the sum of their powers. Simplifying sums of powers of an arithmetic progression requires the so-called Bernoulli numbers (Jacobi, 1834; Knuth, 1993) when $\kappa$ is an integer. For a general $\kappa > 0$, the expression of Eq. 34 involves the generalized harmonic numbers $H_n^{(-p)} := \sum_{k=1}^{n}k^{p}$, which are further based on the Hurwitz zeta function (Coffey, 2008). In such cases, we cannot compute LATS analytically, which forces us to carry out the summation in Eq. 34 and thus brings about accumulative error. In brief, we cannot expect each learning rate scheduler to correspond to an analytic and simple form of LATS.
To handle this, we turn to an approximation rather than precisely evaluating $d^{(t)}$. Returning to Eq. 29, with $d^{(0,0)}$ ignored, we split the right-hand side into two terms: $B\sum_{i=0}^{n-1}h(i/N)$ and $b\cdot h(n/N)$.
The first term can be viewed as a left Riemann sum in two steps: 1) interpolate the points $0, 1/N, 2/N, \ldots, (n-1)/N, n/N$ with constant spacing $1/N$, the width of the rectangles; 2) evaluate the sum of the rectangle areas with heights $h(i/N)$. This leads to the approximation by the Riemann integral
$$B\sum_{i=0}^{n-1}h(i/N) = BN\sum_{i=0}^{n-1}\frac{1}{N}h(i/N) \approx BN\int_0^{n/N}h(x)\,dx. \qquad (35)$$
The second term matches the $b/B$ fraction of a residual tiny rectangle:
$$b\cdot h(n/N) = BN\cdot\frac{b}{B}\cdot\frac{1}{N}h(n/N) \qquad (36)$$
To sum up, Eq. 35 and Eq. 36 together make a numerical approximation of the integral $BN\int_0^{n/N+b/(BN)}h(x)\,dx$, which is shown schematically in Fig. 5. Note that we can now write the training progress as $t/T$, where $t = Bn + b$ is the current iteration id and $T = BN$ is the total number of training iterations.
So far, we have successfully replaced the summation with an integration. However, the current final threshold is $BN\int_0^1 h(x)\,dx$, which differs from the real one, $d^{(N-1,B)}$. To keep the final threshold $d^{(T)} = d^{(N-1,B)}$, we normalize the integral as follows:
$$d^{(t)} = d^{(N-1,B)}\cdot\frac{\int_0^{t/T}h(x)\,dx}{\int_0^{1}h(x)\,dx}. \qquad (37)$$
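For the polynomial decay scheduler, the integral in Eq. 37 has a simple closed form, so S-LATS avoids the summation entirely. A small comparison sketch (our own code; $T$, $D$ and $\kappa$ are arbitrary, and both schedulers are normalized to the same final threshold for comparability):

```python
import math

def s_lats_poly(t, T, D, kappa=0.9):
    """S-LATS (Eq. 37) for h(x) = eta_max*(1-x)**kappa; the integral of h is
    proportional to 1 - (1-x)**(kappa+1), so no summation is needed."""
    g = lambda x: 1 - (1 - x) ** (kappa + 1)
    return D * g(t / T) / g(1.0)

def lats_poly(t, T, D, kappa=0.9):
    """Exact LATS summation (Eq. 34), normalized to the same final threshold D."""
    s = lambda t: sum((1 - i / T) ** kappa for i in range(t))
    return D * s(t) / s(T)

T, D = 10_000, 0.5
print(max(abs(s_lats_poly(t, T, D) - lats_poly(t, T, D)) for t in (0, 2500, 5000, 9999, T)))
```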
For most learning rate scheduler functions, integration is much easier than summation. S-LATS thus enables us to apply our pruning method to a wider variety of deep learning applications.
E PRUNING EXPERIMENTS ON MOBILENET-V1 USING S-LATS

Besides the ResNet-like structures mentioned in the main text, we also conduct experiments on MobileNet-V1 (Howard et al., 2017) to show the power of our proposed method S-LATS on a lightweight network. To make a fair comparison, we choose SOTA studies using the standard training setting, i.e., a batch size of 256 and 100 training epochs. They include STR (Kusupati et al., 2020), the gradual pruning in WoodFisher (Singh & Alistarh, 2020), and a modern implementation (Gale et al., 2019) of Gradual Magnitude Pruning (GMP) (Zhu & Gupta, 2017). Note that we do not compare S-LATS to OptG, since OptG adopts 1.8× (180) training epochs, which would make the comparison unfair. The results are shown in Tab. 6. Apparently, our proposed method towers over previous sparsify-during-training work. We elaborate on the sparsity budgets and the corresponding final thresholds in Tab. 12 of Appendix I. The hyperparameters for training MobileNet-V1 are stated in Tab. 8 of Appendix H.

F ABLATION STUDY

F.1 REMOVING WEIGHT DECAY

To evaluate the performance gains brought by the threshold scheduler alone, the weight decay ($L_2$ penalty) must be removed. To explain this, recall line 4 in Algorithm 1: the update rule of the hidden weight $\theta$ is affected by both the gradient and the $L_2$ penalty $\lambda\|\theta\|_2^2$. However, the analysis in Lemma 1 is based on vanilla SGD without weight decay. To account for this inconsistency, let us first investigate the influence of weight decay on the equivalent optimization problem.
In the presence of weight decay, the update rule described by Eq. 17 has an additional penalty term as follows
$$\theta^{(t+1)} \leftarrow \theta^{(t)} - \eta^{(t)}\,\nabla_w L(w^{(t)})\,\nabla_\theta w(\theta^{(t)}, d^{(t)}) - \eta^{(t)}\lambda\theta^{(t)} \qquad (38)$$
Following derivation in Eq. 20, we have
$$\theta^{(t+1)} = w^{(t)} + d^{(t)}\operatorname{sign}(w^{(t)}) - \eta^{(t)}\nabla_w L(w^{(t)}) - \eta^{(t)}\lambda\theta^{(t)} \qquad (39)$$
Similarly, we denote $\bar{w}^{(t+1)} := w^{(t)} - \eta^{(t)}\nabla_w L(w^{(t)})$ as the target point of vanilla SGD without regularization. With Eq. 39, we have
$$\bar{w}^{(t+1)} = \theta^{(t+1)} - d^{(t)}\operatorname{sign}(w^{(t)}) + \eta^{(t)}\lambda\theta^{(t)} = \operatorname{sign}(\theta^{(t+1)})|\theta^{(t+1)}| - \operatorname{sign}(\theta^{(t+1)})\,d^{(t)} + \eta^{(t)}\lambda\operatorname{sign}(\theta^{(t)})|\theta^{(t)}| = \operatorname{sign}(\theta^{(t+1)})\left(|\theta^{(t+1)}| - d^{(t)} + \eta^{(t)}\lambda|\theta^{(t)}|\right), \qquad (40)$$
If $\bar{w}^{(t+1)}$ still satisfies the local relation $\operatorname{sign}(\bar{w}^{(t+1)}) = \operatorname{sign}(\theta^{(t+1)}) = \operatorname{sign}(\theta^{(t)}) = \operatorname{sign}(w^{(t)}) = \operatorname{sign}(w^{(t+1)})$, then, mimicking Eq. 22, we have
$$w^{(t+1)} = S_{d^{(t+1)}-d^{(t)}+\eta^{(t)}\lambda|\theta^{(t)}|}\left(w^{(t)}-\eta^{(t)}\nabla_w L(w^{(t)})\right). \qquad (41)$$
Apparently, the $L_2$ penalty on $\theta$ leaks into the equivalent $L_1$ penalty term on $w$ in the ISTA rule, making the analysis of the corresponding threshold scheduler intractable. Accordingly, we remove weight decay, i.e., set $\lambda$ to zero, in the ablation study to prevent an unpredictable threshold scheduler.
F.2 SINE SCHEDULER IN STDS VS S-LATS
After removing weight decay, we rerun the original STDS and our method under several sparsity levels (through changing $D$) while keeping the other hyperparameters and the batch size of 256. As illustrated in Fig. 6, the results on ResNet-50 show that our method clearly surpasses the original STDS on the ImageNet dataset. A theoretical analysis is enclosed in Appendix B.

G SPARSITY VS FIRING RATE IN SNNS

G.1 AN OVERVIEW OF SNNS

Spiking neural networks (SNNs) are honored as the third generation of neural network models (Maass, 1997), derived from biological neural network modeling. SNNs are composed of spiking neurons, which emit spikes in binary form, and connections between neurons. The model of a spiking neuron is a dynamical system described by one or more ordinary differential equations (ODEs) and a firing threshold. The dynamical system is also called the "subthreshold dynamics" in the context of computational neuroscience. A spike is generated and passed to all postsynaptic spiking neurons when the variable representing the membrane potential exceeds the firing threshold. Today, the most commonly used neuron model is the Leaky Integrate-and-Fire (LIF) model. Specifically, LIF has the following subthreshold dynamics:
$$\tau_m\frac{du(t)}{dt} = -(u(t)-u_{\text{rest}}) + Iw, \qquad (42)$$
where $u(t)$ is the membrane potential at time $t$, $u_{\text{rest}}$ is the resting potential, $\tau_m$ is the membrane time constant, and $I$ and $w$ denote input spikes and input weights, respectively. The firing behavior of LIF neurons is depicted as an instantaneous jump of the membrane potential, shown below:
$$\lim_{\Delta t\to 0^{+}} u(t_f+\Delta t) = u_{\text{rest}}, \quad \text{if } u(t_f) \ge u_{\text{th}}, \qquad (43)$$
where u th , t f are the firing threshold and firing time respectively.
The ODE in Eq. 42 can be discretized via the Euler method and transformed into an RNN-like iterative computing scheme as follows:
$$u[t^-] = u[t-1] + \frac{1}{\tau_m}\left(-(u[t-1]-u_{\text{rest}}) + \sum_i w_i I_i[t]\right),\quad s[t] = H(u[t^-]-u_{\text{th}}),\quad u[t] = s[t]\,u_{\text{rest}} + (1-s[t])\,u[t^-], \qquad (44)$$
where $u[t^-]$ and $u[t]$ are the membrane potentials before and after firing at timestep $t$, respectively, and $H(\cdot)$ is the Heaviside step function modeling the jump behavior when a spike is triggered. However, training techniques for RNNs, such as backpropagation through time (BPTT) (Werbos, 1990), cannot be directly applied to SNNs, since the spiking behavior described by the Heaviside function is non-differentiable. Thanks to the surrogate gradient method proposed in Wu et al. (2018); Neftci et al. (2019), researchers can now incorporate BPTT into the training of SNNs by switching to a "differentiable mode" of the Heaviside step function when computing gradients, i.e., replacing the Heaviside step with a differentiable surrogate function. Surrogate gradients closely resemble the straight-through estimator (Bengio et al., 2013) in both computing style and ideology.
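For concreteness, a minimal NumPy sketch of the iterative LIF dynamics in Eq. 44 (the toy sizes and parameter values are our own assumptions):

```python
import numpy as np

def lif_forward(inputs, w, tau_m=2.0, u_rest=0.0, u_th=1.0):
    """Iterative LIF neuron (Eq. 44): inputs has shape (T, n_in), w has shape (n_in,)."""
    u, spikes = u_rest, []
    for x in inputs:                                   # loop over timesteps
        u = u + (-(u - u_rest) + w @ x) / tau_m        # charge (Euler step)
        s = float(u >= u_th)                           # fire: Heaviside step
        u = s * u_rest + (1 - s) * u                   # hard reset to u_rest after a spike
        spikes.append(s)
    return np.array(spikes)

rng = np.random.default_rng(0)
inp = (rng.random((8, 4)) < 0.5).astype(float)         # 8 timesteps of binary input spikes
print(lif_forward(inp, w=np.ones(4) * 0.6))
```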
G.2 REDUCING SNNS COST ON NEUROMORPHIC HARDWARE
SNNs are considered energy efficient when deployed on a series of dedicated hardware platforms, also known as neuromorphic or event-driven hardware. On these chips, computation is triggered only when there are incoming spikes and the corresponding weights are nonzero (Merolla et al., 2014). For this reason, there are three mainstream methods for alleviating the energy cost of a given SNN on neuromorphic chips: 1) unstructured pruning of weights, 2) reducing the number of spikes, and 3) searching for efficient SNN structures. Since the NAS studies on SNNs (Na et al., 2022; Kim et al., 2022a) are not based on an existing SNN structure, we omit the discussion of NAS methods.
Many recent studies have made ample progress on pruning and on reducing spike counts, and they confirm there is only a weak correlation between the number of spikes and weight sparsity (Deng et al., 2021; Kim et al., 2022b). We also evaluate the spike counts of pruned SNNs. Compared to the raw number of spikes, a more frequently used metric is the average firing rate, which is obtained by averaging the number of spikes across timesteps and spiking neurons. We collect the average firing rate of each trial during inference using the pruned SEW ResNet-18 and plot the average firing rate against sparsity in Fig. 7. The weak relationship between the number of spikes and weight sparsity is manifested in the only slightly decreased average firing rate: despite a downward trend, the relative magnitude of the decline is trifling. This is consistent with previous observations and suggests pruning is an inefficient means of reducing the number of spikes in SNNs.
In conclusion, pruning is an efficient way to induce weight sparsity and lower cost. However, we should not expect the suppression of firing rates as a bonus.
H TRAINING HYPERPARAMETERS
We make the detailed settings of our experiments clear in Tab. 7, Tab. 8 and Tab. 9.
For the most frequently used testbed, ResNet-50 (He et al., 2016) on the ImageNet dataset (Deng et al., 2009), researchers usually use the cosine annealing learning rate scheduler (Loshchilov & Hutter, 2017) with $\eta_{\min} = 0$.
Figure 2: Performance of several SOTA pruning strategies for ResNet-50 (leftmost & rightmost), MobileNet-V1 (middle left) and SEW ResNet-18 (middle right) on ImageNet. All trials use the standard training setting (256 batch size) except the rightmost one, which uses an enlarged batch size marked in parentheses. Detailed layerwise sparsity and accuracy are given in Appendix I.
Figure 3: Overall sparsity during learning when the final threshold D = 0.1 (left). Performance under different sparsity levels (middle). Layerwise sparsity of the PGH scheduler under the pruning at initialization setting β → 0 (right).
Figure 4: Magnitude ratios of gradient to $L_2$ regularization, $\|\nabla_{s_l}L\| / (\lambda|s_l|)$, in different layers when training with STR at 90.23% overall sparsity. Data within an epoch are averaged using the geometric mean.
Figure 5: Explanation of the numerical approximation of the integral $\int_0^{n/N+b/(BN)}h(x)\,dx$. The cosine annealing learning rate scheduler is exemplified by the darker blue curve. The area of the $n$ cyan rectangles corresponds to Eq. 35; the area of the tiny yellow rectangle is Eq. 36.
Figure 6: Performance comparison of the original STDS and our method.
Figure 7: The trend of average firing rate against sparsity.
Table 1: Techniques used in previous studies.

Method | Reparameterization mapping w(θ, d) | Gradient of mapping | Threshold | Note
STR (Kusupati et al., 2020) | w = S_d(θ) | Subgradient | d := σ(s) for CNN, e^s for RNN | s is layer-wise (global for STR-GS), trainable, and L2 regularized
GPO | w = (1 − k)S_d(θ) + kθ with k = 10^{−6}|β| (see Appendix C.2) | | d := σ(s) | β is trainable and L2 regularized
2Comparison of LATS and S-LATS when applied to ResNet-50 on ImageNet dataset.Final threshold D
STDS + LATS
STDS + S-LATS
Sparsity (%) Top-1 Acc. (%) Sparsity (%) Top-1 Acc. (%)
0.1
79.97
76.53
79.95
76.75
0.5
95.54
73.12
95.53
73.03
1.0
97.43
69.56
97.43
69.64
5.0
99.27
53.80
99.28
53.64
Table 3 :
3Comparison of ResNet-50 Top-1 accuracy on ImageNet in recent studies using 100 training epochs. For some studies without strict control on sparsity, the closest sparsity is reported behind the performance. Performance of S-LATS is averaged over three trials.57±0.15 (79.95) 75.43±0.17 (89.57) 73.20±0.09 (95.12) 71.48±0.13 (96.58) 69.49±0.18 (97.43) 67.25±0.19 (98.01) 58.39±0.25 (99.02) S-LATS 1024 76.61±0.25 (79.00) 75.87±0.15 (90.15) 74.29±0.28 (95.01) 72.80±0.18 (96.53) 70.78±0.09 (97.54) 69.15±0.13 (98.00) 61.90±0.18 (98.93)extreme sparsity (≥99%), which suggests the theory might be imperfect under such conditions. An ablation study shows S-LATS outperforms the default threshold scheduler of STDS, which is shown in Appendix F.Method
Batch size
Sparsity
80%
90%
95%
96.5%
97.5%
98%
99%
STR
256
76.19 (79.55)
74.31 (90.23)
70.40 (95.03)
67.22 (96.53)
-
61.46 (98.05)
51.82 (98.98)
STR-GS
256
-
74.13 (89.54)
-
-
-
62.17 (97.91)
-
GraNet
256
76
74.5
72.3
70.5
-
-
-
ProbMask
256
-
74.68
71.5
-
-
66.83
61.07
OptG
256
-
74.28
72.38
70.85
-
67.2
62.1
WoodFisher
256
76.73
75.26
72.16
-
-
65.47
-
S-LATS
256
76.RigL (ERK)
4096
75.1
73.0
69.7
67.2
-
-
-
Top-KAST
(Powerprop)
4096
76.24
75.23
73.25
-
-
-
-
Top-KAST
(Powerprop+ERK)
4096
76.76
75.74
-
-
-
-
-
Table 4: Results in pruning at initialization experiments.

Final threshold D | 0.1 | 0.11 | 0.13 | 0.15
Overall sparsity (%) | 87.16 | 90.00 | 93.11 | 95.64
Top-1 Acc. (%) | 74.69 | 72.89 | 68.23 | 62.22
# Zeroed layers | 0 | 9 | 9 | 27
Patrick L. Combettes and Valérie R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling & Simulation, 4(4):1168-1200, 2005.
Ingrid Daubechies, Michel Defrise, and Christine De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413-1457, 2004.
Geoff Davis, Stephane Mallat, and Marco Avellaneda. Adaptive greedy approximations. Constructive Approximation, 13(1):57-98, 1997.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009.
Lei Deng, Yujie Wu, Yifan Hu, Ling Liang, Guoqi Li, Xing Hu, Yufei Ding, Peng Li, and Yuan Xie. Comprehensive SNN compression using ADMM optimization and activity regularization. IEEE Transactions on Neural Networks and Learning Systems, pp. 1-15, 2021.
Weisheng Dong, Lei Zhang, Guangming Shi, and Xin Li. Nonlocally centralized sparse representation for image restoration. IEEE Transactions on Image Processing, 22(4):1620-1630, 2013.
David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.
Michael Elad. Why simple shrinkage is still relevant for redundant representations? IEEE Transactions on Information Theory, 52(12):5559-5569, 2006.
Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In Proceedings of the 37th International Conference on Machine Learning, pp. 2943-2952, 2020.
Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, and Yonghong Tian. Deep residual learning in spiking neural networks. In Advances in Neural Information Processing Systems, pp. 21056-21069, 2021.
Mário A. T. Figueiredo, Robert D. Nowak, and Stephen J. Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing, 1(4):586-597, 2007.
Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2019.
Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. arXiv preprint arXiv:1902.09574, 2019.
Dongdong Ge, Xiaoye Jiang, and Yinyu Ye. A note on the complexity of L_p minimization. Mathematical Programming, 129(2):285-299, 2011.
Pinghua Gong, Changshui Zhang, Zhaosong Lu, Jianhua Huang, and Jieping Ye. A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In Proceedings of the 30th International Conference on Machine Learning, pp. 37-45, 2013.
Elaine T. Hale, Wotao Yin, and Yin Zhang. Fixed-point continuation for l1-minimization: Methodology and convergence. SIAM Journal on Optimization, 19(3):1107-1130, 2008.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, 2015.
Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In International Conference on Learning Representations, 2016.
Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, and Yee Whye Teh. Robust pruning at initialization. In International Conference on Learning Representations, 2021.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
Felix J. Herrmann and Gilles Hennenfent. Non-parametric seismic data recovery with curvelet frames. Geophysical Journal International, 173(1):233-248, 2008.
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. Journal of Machine Learning Research, 22(241):1-124, 2021.
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
Carl Gustav Jacob Jacobi. De usu legitimo formulae summatoriae maclaurinianae. Journal für die reine und angewandte Mathematik, 12:263-272, 1834.
Siddhant Jayakumar, Razvan Pascanu, Jack Rae, Simon Osindero, and Erich Elsen. Top-KAST: Top-K always sparse training. In Advances in Neural Information Processing Systems, pp. 20744-20754, 2020.
Jangho Kim, KiYoon Yoo, and Nojun Kwak. Position-based scaled gradient for model quantization and pruning. In Advances in Neural Information Processing Systems, pp. 20415-20426, 2020.
Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, and Priyadarshini Panda. Neural architecture search for spiking neural networks. In Computer Vision - ECCV 2022, pp. 36-56, 2022a.
Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, and Priyadarshini Panda. Exploring lottery ticket hypothesis in spiking neural networks. In Computer Vision - ECCV 2022, pp. 102-120, 2022b.
Donald E. Knuth. Johann Faulhaber and sums of powers. Mathematics of Computation, 61(203):277-294, 1993.
Souvik Kundu, Gourav Datta, Massoud Pedram, and Peter A. Beerel. Spike-thrift: Towards energy-efficient deep spiking neural networks by limiting spiking activity via attention-guided compression. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 3953-3962, 2021.
Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, and Ali Farhadi. Soft threshold weight reparameterization for learnable sparsity. In Proceedings of the 37th International Conference on Machine Learning, pp. 5544-5555, 2020.
Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. SNIP: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations, 2019.
Qihang Lin and Lin Xiao. An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization. In Proceedings of the 31st International Conference on Machine Learning, pp. 73-81, 2014.
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, and Decebal Constantin Mocanu. Sparse training via boosting pruning plasticity with neuroregeneration. In Advances in Neural Information Processing Systems, pp. 9908-9922, 2021a.
Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, and Mykola Pechenizkiy. Do we actually need dense over-parameterization? In-time over-parameterization in sparse training. In Proceedings of the 38th International Conference on Machine Learning, pp. 6989-7000, 2021b.
Michael Zhu and Suyog Gupta. To prune, or not to prune: Exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878, 2017.
Tao Zhuang, Zhixuan Zhang, Yuheng Huang, Xiaoyi Zeng, Kai Shuang, and Xiang Li. Neuron-level structured pruning using polarization regularizer. In Advances in Neural Information Processing Systems, pp. 9865-9877, 2020.
Table 5: Comparison of threshold schedulers when applied to ResNet-50 on ImageNet.

Scheduler | STR (D = 10^{-3}): Sparsity (%) | STR (D = 10^{-3}): Top-1 Acc. (%) | STDS (D = 0.5): Sparsity (%) | STDS (D = 0.5): Top-1 Acc. (%)
Sine | 94.14 | 71.51 | 95.72 | 72.42
Linear | 95.41 | 69.85 | 98.46 | 59.79
Log2 | 95.95 | 68.55 | 98.05 | 64.74
Table 6: Performance comparison of MobileNet-V1 on ImageNet using the standard training setting (256 batch size, 100 epochs). The results of GMP are gleaned from the manuscripts of STR and OptG. The accuracy of our method is averaged over three trials.

Method | Top-1 Acc. (%) | Sparsity (%)
Dense | 71.95 | 0
GMP | 67.70 | 74.11
STR | 68.35 | 75.28
STR | 66.52 | 79.07
WoodFisher | 70.09 | 75.28
S-LATS | 68.25±0.19 | 81.84
GMP | 61.80 | 89.03
STR | 64.83 | 85.80
STR | 62.10 | 89.01
STR | 61.51 | 89.62
WoodFisher | 63.87 | 89.00
S-LATS | 66.73±0.08 | 85.87
S-LATS | 65.63±0.20 | 88.22
S-LATS | 64.93±0.21 | 89.08
Table 7: ANN ResNet-50 hyperparameters.

Description | Notation | Value
# Epoch | - | 100
Optimizer | - | Momentum SGD (momentum = 0.875)
Overall batch size | - | 256 / 1024
Max. learning rate | η_max | 0.256 / 0.512
Learning rate scheduler | - | Cosine annealing
Warmup epochs | - | 5
Label smoothing | - | 0.1
Weight decay | λ | 3.05e-5 (0 for the ablation study in Appendix F)
Prune BN layers? | - | No
Prune first and last layers? | - | Yes
Table 8: ANN MobileNet-V1 hyperparameters.

Description | Notation | Value
# Epoch | - | 100
Optimizer | - | Momentum SGD (momentum = 0.875)
Overall batch size | - | 256
Max. learning rate | η_max | 0.256
Learning rate scheduler | - | Cosine annealing
Warmup epochs | - | 5
Label smoothing | - | 0.1
Weight decay | λ | 3.05e-5
Prune BN layers? | - | No
Prune first and last layers? | - | Yes
Table 9: SNN SEW ResNet-18 hyperparameters.

Description | Notation | Value
# Epoch | - | 320
Optimizer | - | Momentum SGD (momentum = 0.9)
Overall batch size | - | 256
Max. learning rate | η_max | 0.1
Learning rate scheduler | - | Cosine annealing
Weight decay | λ | 0
Prune BN layers? | - | No
Prune first and last layers? | - | Yes
Simulation timesteps | - | 4
SEW function | - | ADD
I SPARSITY BUDGETS
Table 10 :
10Sparsity budgets of ResNet-50 using STDS + S-LATS on ImageNet (256 batch size). Overall 79.95 89.57 92.99 94.60 95.12 95.53 96.13 96.58 96.94 97.43 98.01 98.11 98.48 98.89 99.02 99.13 99.28 99.39 conv1 34.18 49.83 56.73 59.57 64.35 67.90 69.79 70.89 75.54 74.31 78.36 78.71 81.92 83.94 85.58 86.20 88.01 89.47 layer1.0.conv1 43.04 58.79 66.58 68.82 72.22 74.90 75.44 76.03 79.30 82.25 84.69 83.84 88.06 91.33 89.72 91.77 93.53 94.26 layer1.0.conv2 73.59 82.82 87.36 89.43 90.82 91.38 92.01 93.18 93.06 94.96 95.97 95.87 97.08 98.22 97.96 98.44 98.51 99.00 layer1.0.conv3 67.26 77.89 84.59 86.93 88.71 89.52 91.52 92.95 93.19 95.28 95.83 95.84 97.19 98.13 98.12 98.55 98.49 99.16 layer1.0.downsample.0 58.61 73.39 78.28 82.06 83.91 85.33 85.86 87.01 89.11 90.31 91.83 92.61 93.19 94.56 95.14 95.73 96.19 96.62 layer1.1.conv1 69.42 78.74 83.92 86.19 87.51 88.46 90.94 91.72 93.46 93.51 94.32 94.32 96.12 97.20 96.99 98.46 98.84 99.05 layer1.1.conv2 73.06 84.09 88.31 89.18 90.32 91.81 93.83 94.73 96.02 95.62 95.72 95.36 97.18 97.96 98.02 99.20 99.64 99.41 layer1.1.conv3 69.48 79.34 86.22 85.91 88.12 92.01 92.58 92.60 95.00 94.74 95.12 95.20 97.07 98.32 98.66 98.96 99.72 99.58 layer1.2.conv1 64.66 75.52 82.27 86.07 87.00 87.75 89.77 88.64 91.03 92.55 94.18 95.09 96.27 97.03 97.97 97.82 98.78 98.91 layer1.2.conv2 63.14 76.05 84.24 86.97 88.20 89.35 90.86 90.06 91.91 93.02 94.81 94.34 96.93 97.15 98.01 97.94 98.56 99..downsample.0 81.42 90.05 92.85 94.08 94.78 95.22 95.66 96.28 96.44 97.40 97.88 98.07 98.25 98.86 99.03 99.08 99.16 99.83.26 86.70 90.06 90.17 91.70 92.47 92.75 93.46 94.34 95.50 96.47 96.95 97.47 97.94 98.46 98.45 98.79 layer2.3.conv2 74.50 83.83 88.23 91.11 91.28 92.90 93.06 94.27 94.36 94.89 97.02 96.67 97.29 97.94 98.25 98.73 98.92 98.61 layer2.3.conv3 72.81 85.69 89.03 91.17 92.36 92.16 93.69 94.37 95.23 95.33 96.74 97.07 97.90 98.27 98.73 99.03 99.11 99.04 layer3.0.conv1 61.26 74.70 80.18 83.80 84.81 85.51 87.13 88.01 88.81 90.43 91.82 92.57 93.79 95.04 95.56 96.42 96.69 97.29 layer3.0.conv2 82.41 91.42 94.51 95.78 96.15 96.49 96.94 97.27 97.56 97.94 98.33 98.33 98.68 98.87 99.04 99.12 99.25 99.35 layer3.0.conv3 71.21 82.18 86.32 88.76 89.38 89.93 91.42 92.04 92.46 93.73 94.97 95.40 96.23 97.28 97.50 97.95 98.20 98.61 layer3.0.downsample.0 86.21 93.20 95.39 96.83 96.96 97.29 97.77 97.98 98.13 98.53 98.83 98.96 99.24 99.39 99.44 99.56 99.60 99.60 layer3.1.conv1 86.76 92.91 95.17 96.46 96.48 97.02 97.25 97.80 97.95 98.37 98.67 98.82 99.16 99.38 99.41 99.55 99.56 99.63 layer3.1.conv2 86.67 93.68 95.71 96.79 97.09 97.37 97.77 98.09 98.32 98.52 98.92 98.99 99.31 99.51 99.51 99.66 99.67 99.72 layer3.1.conv3 75.18 86.50 90.08 92.77 93.44 94.10 94.53 95.48 96.12 96.75 97.53 97.82 98.46 99.03 98.98 99.29 99.38 99.50 layer3.2.conv1 83.83 90.82 93.96 95.29 96.06 95.69 97.01 96.80 97.13 97.58 98.36 98.28 98.65 99.08 99.32 99.27 99.83.17 87.02 89.51 90.39 90.99 92.21 93.09 93.73 94.63 95.87 96.01 96.73 97.61 97.78 98.02 98.38 98.60 layer4.0.downsample.0 85.36 92.49 94.97 96.03 96.39 96.77 97.13 97.42 97.63 97.92 98.33 98.41 98.70 99.00 99.13 99.20 99.33 99.44 layer4.1.conv1 81.34 90.10 93.30 94.91 95.55 95.99 96.52 97.17 97.47 98.00 98.60 98.59 98.86 99.23 99.35 99.42 99.56 99.65 layer4.1.conv2 83.00 91.75 94.76 96.19 96.68 97.03 97.39 97.85 98.14 98.52 98.95 98.96 99.18 99.46 99.55 99.57 99.Final threshold D
0.1
0.2
0.3
0.4
0.45
0.5
0.6
0.7
0.8
1.0
1.4
1.5
2.0
3.0
3.5
4.0
5.0
6.0
Top-1 Acc. (%)
76.75 75.52 74.50 73.75 73.18 73.03 72.04 71.47 70.81 69.64 67.47 67.02 64.20 60.35 58.88 56.79 53.64 50.41
Layer(s)
Sparsity (%)
25
layer1.2.conv3
68.90 80.36 86.72 87.87 89.11 90.09 91.77 92.69 92.73 94.42 96.64 95.80 97.64 98.26 99.04 98.71 99.18 99.33
layer2.0.conv1
59.82 73.85 80.93 82.27 83.13 86.86 89.47 89.19 90.64 91.19 93.37 92.78 95.23 96.01 97.02 97.26 97.98 98.43
layer2.0.conv2
72.90 85.04 88.93 92.21 92.57 93.19 94.54 94.80 94.89 95.74 97.05 97.01 97.50 98.04 98.27 98.28 98.61 99.16
layer2.0.conv3
71.13 82.92 86.98 89.32 90.37 91.58 92.63 93.15 93.37 94.38 96.03 96.03 96.76 97.78 98.00 97.85 98.12 98.72
layer2.031
layer2.1.conv1
83.44 90.47 93.48 95.35 95.96 95.80 96.62 96.73 96.73 97.95 98.25 98.61 98.89 99.04 99.21 99.42 99.27 99.56
layer2.1.conv2
82.60 90.41 94.05 96.28 96.22 96.37 97.17 96.90 97.76 98.31 98.68 99.02 99.14 99.44 99.46 99.61 99.56 99.76
layer2.1.conv3
73.53 83.15 88.32 92.18 91.63 92.38 93.93 93.80 95.03 96.08 96.79 97.37 98.29 98.66 99.04 99.28 99.04 99.48
layer2.2.conv1
74.48 85.75 88.91 91.48 92.37 91.97 93.44 94.26 94.86 95.85 96.09 96.88 97.54 98.33 98.46 98.97 98.85 99.08
layer2.2.conv2
75.42 87.51 90.66 92.41 93.18 92.90 93.92 94.89 95.26 96.32 96.25 97.47 97.96 98.61 98.36 98.88 99.18 99.19
layer2.2.conv3
70.68 82.69 86.23 90.11 90.35 91.14 92.36 93.93 93.28 95.32 96.33 96.91 97.50 98.33 98.39 98.82 99.04 99.12
layer2.3.conv1
71.05 55 99.58
layer3.2.conv2
84.24 92.07 94.68 95.84 96.17 96.22 97.35 97.26 97.53 97.93 98.62 98.69 98.92 99.22 99.42 99.40 99.62 99.66
layer3.2.conv3
75.66 85.49 89.88 92.30 93.13 92.99 95.21 94.99 95.49 96.24 97.61 97.65 98.28 98.83 99.14 99.10 99.44 99.54
layer3.3.conv1
80.26 90.40 92.47 94.60 94.41 94.94 95.91 95.87 96.48 97.20 98.17 98.13 98.67 98.84 99.03 99.09 99.32 99.53
layer3.3.conv2
83.78 91.46 94.33 95.39 96.14 96.59 96.90 97.70 97.88 98.34 98.75 98.97 99.12 99.40 99.49 99.55 99.62 99.70
layer3.3.conv3
76.82 87.45 91.13 93.10 93.32 94.60 95.05 96.24 96.35 97.10 98.08 98.33 98.66 99.06 99.26 99.36 99.43 99.53
layer3.4.conv1
77.62 87.58 91.12 92.47 93.41 94.15 94.67 96.03 96.16 96.44 97.39 97.55 98.16 98.85 98.84 99.19 99.38 99.37
layer3.4.conv2
83.13 91.87 94.67 95.63 96.17 96.66 97.32 97.48 97.87 98.14 98.61 98.71 99.11 99.42 99.53 99.63 99.73 99.72
layer3.4.conv3
76.09 87.14 91.38 92.55 93.79 94.55 95.29 95.94 96.44 97.00 97.93 98.10 98.52 99.14 99.23 99.43 99.62 99.62
layer3.5.conv1
73.48 85.83 89.67 91.89 92.45 93.36 94.45 94.68 94.92 95.73 96.69 97.02 97.61 98.39 98.88 98.86 99.13 99.23
layer3.5.conv2
81.39 90.78 93.61 95.30 95.85 96.28 96.62 97.39 97.56 98.09 98.60 98.59 99.00 99.33 99.53 99.50 99.61 99.69
layer3.5.conv3
73.98 85.35 89.23 91.80 92.53 93.13 94.12 95.16 95.16 95.98 96.99 97.31 98.06 98.69 99.14 99.21 99.23 99.44
layer4.0.conv1
64.48 77.49 82.89 85.92 87.00 87.93 89.26 90.33 91.14 92.09 93.65 93.85 94.79 95.83 96.25 96.68 97.19 97.40
layer4.0.conv2
84.02 94.14 97.01 97.83 98.07 98.17 98.34 98.45 98.66 98.82 99.05 99.05 99.17 99.36 99.38 99.44 99.53 99.57
layer4.0.conv3
72.53 69 99.73
layer4.1.conv3
75.37 86.50 90.65 92.84 93.42 94.08 94.81 95.43 95.88 96.63 97.40 97.48 97.79 98.47 98.59 98.64 98.88 99.00
layer4.2.conv1
74.54 86.22 90.56 92.98 93.83 94.53 95.26 96.01 96.63 97.23 97.82 97.94 98.35 98.84 98.93 99.03 99.21 99.39
layer4.2.conv2
79.66 90.89 95.74 97.04 97.38 97.52 97.81 98.07 98.36 98.56 98.77 98.82 99.01 99.23 99.29 99.38 99.49 99.60
layer4.2.conv3
66.98 79.13 84.53 87.56 88.74 89.37 90.60 91.60 92.61 93.82 95.04 95.41 96.41 97.57 97.79 98.08 98.51 98.86
fc
86.44 94.47 96.69 97.58 97.87 98.10 98.44 98.65 98.79 99.04 99.25 99.27 99.40 99.49 99.54 99.56 99.57 99.60
Table 11: Sparsity budgets of ResNet-50 using STDS + S-LATS on ImageNet (1024 batch size).

Final threshold D | 0.1 | 0.2 | 0.23 | 0.25 | 0.3 | 0.4 | 0.475 | 0.5 | 0.6 | 0.73 | 0.8 | 1.0 | 1.13 | 1.5 | 2.0 | 3.0 | 3.5 | 4.0
Top-1 Acc. (%) | 76.61 | 76.15 | 75.97 | 75.88 | 75.58 | 74.90 | 74.61 | 74.04 | 73.68 | 72.84 | 72.73 | 71.67 | 70.68 | 69.20 | 67.16 | 63.81 | 61.72 | 60.40
Overall sparsity (%) | 79.00 | 88.81 | 90.15 | 90.92 | 92.34 | 94.19 | 95.01 | 95.25 | 95.93 | 96.53 | 96.78 | 97.30 | 97.54 | 98.00 | 98.38 | 98.79 | 98.93 | 99.04
Table 12: Sparsity budgets of MobileNet-V1 using STDS + S-LATS on ImageNet.

Final threshold D | 0.4 | 0.6 | 0.8 | 0.9
Top-1 Acc. (%) | 68.44 | 66.64 | 65.41 | 65.13
Overall sparsity (%) | 81.84 | 85.87 | 88.22 | 89.08
Table 13: Sparsity budgets of ResNet-50 using the PGH scheduler in the pruning at initialization setting on ImageNet.

Final threshold D | 0.1 | 0.11 | 0.13 | 0.15
Top-1 Acc. (%) | 74.69 | 72.89 | 68.23 | 62.22
Layer(s) | Sparsity (%)
Table 14: Sparsity budgets of SEW ResNet-18 using STDS + S-LATS on ImageNet.

Final threshold D | 0.5 | 1.0 | 2.0 | 3.0 | 5.0 | 7.0 | 10 | 15 | 20
Top-1 Acc. (%) | 62.59 | 62.3 | 60.806 | 59.816 | 57.572 | 55.454 | 53.74 | 50.024 | 47.586
Layer(s) | Sparsity (%)
Overall | 60.11 | 71.18 | 79.74 | 83.75 | 88.11 | 89.96 | 92.57 | 94.30 | 95.21
conv1 | 38.70 | 49.83 | 62.40 | 66.90 | 73.93 | 79.44 | 80.71 | 85.43 | 86.90
layer1.0.conv1.0 | 48.98 | 60.42 | 73.58 | 76.98 | 82.13 | 86.19 | 87.39 | 90.02 | 91.90
layer1.0.conv2.0 | 37.88 | 50.73 | 61.17 | 66.69 | 71.79 | 76.28 | 79.39 | 82.53 | 84.40
layer1.1.conv1.0 | 40.37 | 54.44 | 66.75 | 72.12 | 77.59 | 80.65 | 84.02 | 86.53 | 87.84
layer1.1.conv2.0 | 39.94 | 51.82 | 62.50 | 67.47 | 73.18 | 77.47 | 80.40 | 84.16 | 85.53
layer2.0.conv1.0 | 39.78 | 52.67 | 64.46 | 68.41 | 74.30 | 77.91 | 81.07 | 83.65 | 85.84
layer2.0.conv2.0 | 46.68 | 59.70 | 70.55 | 74.26 | 79.64 | 83.32 | 85.99 | 88.92 | 90.62
layer2.0.downsample.0.0 | 16.81 | 25.96 | 36.11 | 41.39 | 47.29 | 54.48 | 59.00 | 63.48 | 67.22
layer2.1.conv1.0 | 48.87 | 62.95 | 73.58 | 77.19 | 81.24 | 84.64 | 87.51 | 89.59 | 91.89
layer2.1.conv2.0 | 49.85 | 63.01 | 71.87 | 77.04 | 81.44 | 84.69 | 87.65 | 90.72 | 91.80
layer3.0.conv1.0 | 48.98 | 60.98 | 71.23 | 75.30 | 80.16 | 83.48 | 85.91 | 88.49 | 90.02
layer3.0.conv2.0 | 56.54 | 68.36 | 77.14 | 81.23 | 86.21 | 88.38 | 91.20 | 93.05 | 94.38
layer3.0.downsample.0.0 | 27.11 | 37.56 | 48.76 | 54.21 | 61.76 | 66.29 | 70.50 | 75.62 | 79.10
layer3.1.conv1.0 | 61.25 | 72.95 | 80.88 | 83.91 | 87.98 | 89.96 | 91.59 | 93.66 | 94.48
layer3.1.conv2.0 | 60.16 | 71.00 | 79.03 | 82.63 | 86.38 | 88.59 | 90.49 | 92.50 | 93.61
layer4.0.conv1.0 | 58.92 | 70.02 | 78.17 | 82.08 | 86.23 | 88.38 | 90.44 | 92.65 | 93.86
layer4.0.conv2.0 | 64.63 | 74.90 | 82.40 | 85.93 | 89.74 | 91.58 | 93.27 | 95.14 | 95.96
layer4.0.downsample.0.0 | 31.91 | 42.72 | 53.25 | 58.97 | 65.57 | 69.29 | 72.34 | 75.55 | 77.45
layer4.1.conv1.0 | 67.33 | 77.95 | 85.81 | 89.57 | 93.42 | 95.61 | 96.99 | 97.89 | 98.38
layer4.1.conv2.0 | 63.04 | 72.57 | 80.14 | 84.08 | 88.71 | 91.79 | 94.04 | 95.35 | 96.01
fc | 33.00 | 52.40 | 71.29 | 79.66 | 86.71 | 89.96 | 92.52 | 94.95 | 96.27
Table 15: Sparsity budgets of SEW ResNet-18 using our implementation of the original STDS (STDS + Sine scheduler) on ImageNet.

Final threshold D | 0.6 | 0.8 | 1.5 | 3.0 | 5.0
Top-1 Acc. (%) | 61.114 | 60.218 | 57.458 | 52.966 | 48.436
Layer(s) | Sparsity (%)
Overall | 76.06 | 79.91 | 85.91 | 90.62 | 93.19
conv1 | 48.07 | 53.99 | 68.15 | 79.49 | 85.71
layer1.0.conv1.0 | 61.37 | 69.80 | 81.65 | 88.07 | 91.37
layer1.0.conv2.0 | 56.20 | 62.73 | 73.62 | 79.45 | 83.24
layer1.1.conv1.0 | 56.73 | 64.55 | 79.27 | 84.19 | 87.55
layer1.1.conv2.0 | 58.62 | 65.86 | 73.35 | 79.39 | 84.28
layer2.0.conv1.0 | 58.34 | 66.05 | 75.50 | 80.44 | 85.06
layer2.0.conv2.0 | 66.05 | 72.17 | 80.13 | 85.38 | 88.55
layer2.0.downsample.0.0 | 28.70 | 36.28 | 49.60 | 58.87 | 64.78
layer2.1.conv1.0 | 68.32 | 74.54 | 82.47 | 87.28 | 89.96
layer2.1.conv2.0 | 68.65 | 73.85 | 80.92 | 86.52 | 89.55
layer3.0.conv1.0 | 68.04 | 73.05 | 80.11 | 86.17 | 88.89
layer3.0.conv2.0 | 74.94 | 79.00 | 85.20 | 89.69 | 92.33
layer3.0.downsample.0.0 | 44.28 | 50.82 | 62.14 | 70.68 | 76.31
layer3.1.conv1.0 | 78.62 | 82.50 | 87.89 | 91.36 | 93.57
layer3.1.conv2.0 | 76.93 | 80.73 | 86.32 | 90.13 | 92.28
layer4.0.conv1.0 | 76.41 | 80.41 | 86.30 | 90.59 | 92.95
layer4.0.conv2.0 | 80.60 | 83.84 | 88.43 | 92.07 | 93.92
layer4.0.downsample.0.0 | 49.57 | 55.31 | 63.96 | 71.63 | 75.94
layer4.1.conv1.0 | 82.60 | 85.81 | 90.74 | 94.64 | 96.62
layer4.1.conv2.0 | 77.17 | 79.68 | 84.09 | 88.96 | 92.32
fc | 46.12 | 56.43 | 77.07 | 90.62 | 95.18
For example, STR (Kusupati et al., 2020) is the first to achieve >50% Top-1 accuracy on ImageNet with ResNet-50 under >99% sparsity. STDS is the first pruning algorithm achieving acceptable performance degradation (∼3% under 88.8% sparsity) for spiking neural networks with 18+ layers.
ACKNOWLEDGMENTS

This work is supported by grants from the National Key R&D Program of China under Grant 2020AAA01035, the Key-Area Research and Development Program of Guangdong Province (2021B0101400002), and the National Natural Science Foundation of China under contract No. 62088102, No. 62176003, No. 62006132, No. 62027804 and No. 61825101. The computing resources of Pengcheng Cloudbrain are used in this research.
59,316,477 | Q-learning with UCB Exploration is Sample Efficient for Infinite-Horizon MDP | A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient. Recently, Jin et al. [6] proposed a Q-learning algorithm with UCB exploration policy, and proved it has nearly optimal regret bound for finite-horizon episodic MDP. In this paper, we adapt Q-learning with UCB-exploration bonus to infinite-horizon MDP with discounted rewards without accessing a generative model. We show that the sample complexity of exploration of our algorithm is bounded by $\tilde{O}\big(\frac{SA}{\epsilon^2(1-\gamma)^7}\big)$. This improves the previously best known result of $\tilde{O}\big(\frac{SA}{\epsilon^4(1-\gamma)^8}\big)$ in this setting achieved by delayed Q-learning [13], and matches the lower bound in terms of $\epsilon$ as well as $S$ and $A$ except for logarithmic factors. * These two authors contributed equally † |
January 29, 2019
Kefan Dong
Institute for Interdisciplinary Information Sciences, Tsinghua University, China
Yuanhao Wang
Institute for Interdisciplinary Information Sciences, Tsinghua University, China
Xiaoyu Chen
Key Laboratory of Machine Perception, MOE, School of EECS, Peking University
Liwei Wang
Key Laboratory of Machine Perception, MOE, School of EECS, Peking University; Center for Data Science, Institute of Big Data Research, Peking University, Beijing
Introduction
The goal of reinforcement learning is to construct algorithms that learn and plan in sequential decision making systems when the underlying system dynamics are unknown. A typical model in RL is the Markov Decision Process (MDP). At each time step, the environment is in state s. The agent may take an action a, obtain a reward, after which the environment may transition to another state. In reinforcement learning, the transition probability distribution is unknown. The algorithm needs to learn the transition dynamics of the MDP, while aiming to maximize the cumulative reward. This causes an exploration-exploitation dilemma: whether to act to gain new information (explore) or to act consistently with past experience to maximize reward (exploit).
Theoretical analysis of reinforcement learning falls into two broad categories: those assuming a simulator (a.k.a. generative model), and those without a simulator. In the first category, the algorithm is able to query the outcome of any state action pair from an oracle. The emphasis is on the number of calls needed to estimate the Q value or to output a near-optimal policy. There has been extensive research in literature following this line of research, the majority of which focuses on discounted infinite horizon MDPs [1,4,12]. The current results have achieved near-optimal time and sample complexities [12].
Without a simulator, there is a dichotomy between finite-horizon and infinite-horizon settings. In finite-horizon settings, there are straightforward definitions for both regret and sample complexity; the latter is defined as the number of samples needed before the policy becomes near optimal. In this setting, extensive research in the past decade [6,2,5] has achieved great progress, and established nearly-tight bounds for both regret and sample complexity.
The infinite-horizon setting is a very different matter. First of all, the performance measure we use cannot be a straightforward extension of the sample complexity defined above. Instead, the measure of sample efficiency we adopt is the so-called sample complexity of exploration [7], which is also a widely-accepted definition. This measure counts the number of times that the algorithm "makes mistakes" along the whole trajectory. See also [14] for further discussions regarding this issue.
Several model-based algorithms have been proposed for infinite-horizon MDP, for example R-max [3], MoRmax [15] and UCRL-γ [8]. It is noteworthy that there still exists a considerable gap between the state-of-the-art algorithm and the theoretical lower bound [8], but the gap is only about 1/(1 − γ).
Though model-based algorithms have been proved to be sample efficient in various MDP settings, most state-of-the-art RL algorithms are developed in the model-free paradigm [11,10,9]. Model-free algorithms are more flexible and require less space, which have achieved remarkable performance on benchmarks such as Atari games and simulated robot control problems.
For infinite-horizon MDPs without access to a simulator, the best known result on the sample complexity of exploration is $\tilde{O}\big(\frac{SA}{\epsilon^4(1-\gamma)^8}\big)$, achieved by delayed Q-learning [13]. It is also the first algorithm to achieve a $\tilde{O}(SA)$ bound on the sample complexity of exploration. The authors provide a novel strategy of argument when proving the upper bound for the sample complexity of exploration, namely identifying a sufficient condition for optimality, and then bounding the number of times that this condition is violated.
However, the result of Delayed Q-learning still leaves a quadratic gap in $1/\epsilon$ from the best-known lower bound. This is partly because many samples collected by the algorithm have no direct effect on the learning process. If one wants to bridge this gap, intuitively, the algorithm might need to make use of every sample. This, as well as the success of the Q-learning with UCB algorithm [6] in proving a regret bound in finite-horizon settings, motivates us to incorporate a UCB-like exploration term into our algorithm.
In this work, we propose a Q-learning algorithm with a UCB exploration policy. We show that the sample complexity of exploration of our algorithm is bounded by $\tilde{O}\big(\frac{SA}{\epsilon^2(1-\gamma)^7}\big)$. This strictly improves the previous best known result due to Delayed Q-learning. It also matches the lower bound in the dependence on $\epsilon$, $S$ and $A$ (logarithmic factors ignored).
Although our algorithm is quite similar to the algorithm proposed by [6], the analysis of the sample complexity of exploration for infinite-horizon MDP is challenging, so the techniques developed in [6] do not directly apply here. Please see Section 3.2 for detailed explanations.
The rest of the paper is organized as follows. After introducing the notation used in the paper in Section 2, we describe our infinite Q-learning with UCB algorithm in Section 3. We then state our main theoretical results, which are in the form of PAC sample complexity bounds. In Section 4 we present some interesting properties beyond sample complexity bound. Finally, we conclude the paper in Section 5.
Preliminary
We consider a Markov Decision Process defined by a five-tuple $\langle S, A, p, r, \gamma \rangle$, where $S$ is the state space, $A$ is the action space, $p(s'|s,a)$ is the transition function, $r: S \times A \to [0,1]$ is the deterministic reward function, and $0 \le \gamma < 1$ is the discount factor for rewards. Let $S = |S|$ and $A = |A|$ denote the number of states and the number of actions respectively.
Starting from a state $s_1$, the agent interacts with the environment for an infinite number of time steps. At each time step, the agent observes state $s_t \in S$, picks action $a_t \in A$, and receives reward $r_t$; then the system transitions to the next state $s_{t+1}$.
A policy $\pi_t: S \to A$ refers to the non-stationary control policy of the algorithm at step $t$. We use $V^{\pi_t}(s)$ to denote the value function under policy $\pi_t$, which is defined as
$$V^{\pi_t}(s) = \mathbb{E}\Big[\sum_{i=1}^{\infty} \gamma^{i-1} r(s_i, \pi_t(s_i)) \,\Big|\, s_1 = s\Big].$$
We also use $V^*(s) = \sup_{\pi} V^{\pi}(s)$ to denote the value function of the optimal policy. Accordingly, we define
$$Q^{\pi_t}(s,a) = r(s,a) + \mathbb{E}\Big[\sum_{i=2}^{\infty} \gamma^{i-1} r(s_i, \pi_t(s_i)) \,\Big|\, s_1 = s, a_1 = a\Big]$$
as the Q function under policy $\pi_t$; $Q^*(s,a)$ is the Q function under the optimal policy $\pi^*$.
We use the sample complexity of exploration defined in [7] to measure the learning efficiency of our algorithm. This sample complexity definition has been widely used in previous works [13, 8, 14].
Definition 1. The sample complexity of exploration of an algorithm ALG is defined as the number of time steps $t$ such that the non-stationary policy $\pi_t$ at time $t$ is not $\epsilon$-optimal for the current state $s_t$, i.e. $V^{\pi_t}(s_t) < V^*(s_t) - \epsilon$.
We use the following definition of PAC-MDP [13].
Definition 2. An algorithm ALG is said to be PAC-MDP (Probably Approximately Correct in Markov Decision Processes) if, for any $\epsilon$ and $\delta$, the sample complexity of ALG is less than some polynomial in the relevant quantities $(S, A, 1/\epsilon, 1/\delta, 1/(1-\gamma))$, with probability at least $1-\delta$.
Finally, recall that the Bellman equations are defined as follows:
$$V^{\pi_t}(s) = Q^{\pi_t}(s, \pi_t(s)), \qquad Q^{\pi_t}(s,a) := (r + \gamma \mathbb{P} V^{\pi_t})(s,a) \quad \forall s \in S,$$
$$V^*(s) = Q^*(s, \pi^*(s)), \qquad Q^*(s,a) := (r + \gamma \mathbb{P} V^*)(s,a) \quad \forall s \in S,$$
which are frequently used in our analysis. Here we denote $[\mathbb{P} V^{\pi_t}](s,a) := \mathbb{E}_{s' \sim p(\cdot|s,a)} V^{\pi_t}(s')$.
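To ground this notation, the following is a minimal sketch (ours, not from the paper) of computing $Q^*$ for a small tabular MDP by iterating the Bellman optimality operator; the array shapes and the tolerance argument are illustrative assumptions.

```python
import numpy as np

def q_value_iteration(p, r, gamma, tol=1e-8):
    """Compute Q* for a tabular MDP by iterating the Bellman optimality operator.

    p: transition tensor of shape (S, A, S), p[s, a, s'] = p(s' | s, a)
    r: reward matrix of shape (S, A), deterministic rewards in [0, 1]
    gamma: discount factor in [0, 1)
    """
    q = np.zeros(r.shape)
    while True:
        v = q.max(axis=1)            # V(s) = max_a Q(s, a)
        q_new = r + gamma * (p @ v)  # Q(s, a) = r(s, a) + gamma * [P V](s, a)
        if np.abs(q_new - q).max() < tol:
            return q_new
        q = q_new
```

Since the Bellman optimality operator is a $\gamma$-contraction, this iteration converges geometrically, which is why the fixed tolerance suffices in the sketch.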
Main Results
In this section, we present the UCB Q-learning algorithm and the sample complexity bound.
Algorithm
Our UCB Q-learning algorithm (Algorithm 1) starts from an optimistic estimation of the action value function $Q(s,a)$ and its historical minimum value $\hat{Q}(s,a)$. The target value $U$ at time step $t$ consists of three parts: the immediate reward $r(s_t,a_t)$, the discounted value function of the next state $\gamma \hat{V}(s_{t+1})$, and the UCB term $b_k$. Here $\hat{V}(s) \leftarrow \max_{a \in A} \hat{Q}(s,a)$ is the value function maintained by our algorithm.
The learning rate is defined as
$$\alpha_k = \frac{H+1}{H+k}.$$
Algorithm 1 Infinite Q-learning with UCB
Parameters: $\epsilon_1$, $H$, $\iota$, $\gamma$
Initialize $Q(s,a), \hat{Q}(s,a) \leftarrow \frac{1}{1-\gamma}$, $N(s,a) \leftarrow 0$
for $t = 1, 2, \ldots$ do
    Take action $a_t \leftarrow \arg\max_{a'} \hat{Q}(s_t, a')$
    Receive reward $r_t$ and transit to $s_{t+1}$
    $N(s_t, a_t) \leftarrow N(s_t, a_t) + 1$
    $k \leftarrow N(s_t, a_t)$, $\quad b_k \leftarrow \frac{c_2}{1-\gamma}\sqrt{\frac{H\iota(k)}{k}}$
    $U \leftarrow r(s_t, a_t) + b_k + \gamma \hat{V}(s_{t+1})$
    $Q(s_t, a_t) \leftarrow (1-\alpha_k)\, Q(s_t, a_t) + \alpha_k U$
    $\hat{Q}(s_t, a_t) \leftarrow \min(\hat{Q}(s_t, a_t), Q(s_t, a_t))$
end for
Here $\iota(k) = \ln(SA(k+1)(k+2)/\delta)$ is a log factor, and $\epsilon_1$ is a parameter whose value will be determined in the proof of Theorem 1. $H$ is chosen as $\big\lceil \frac{\ln 1/((1-\gamma)\epsilon_1)}{\ln 1/\gamma} \big\rceil \le \frac{\ln 1/((1-\gamma)\epsilon_1)}{1-\gamma}$.
We also introduce some notation that is useful in our analysis. $N_t(s,a)$ denotes the number of times that $(s,a)$ is experienced as $(s_{t'}, a_{t'})$ with $t' < t$; $\tau(s,a,k)$ denotes the time step at which $(s_t, a_t) = (s,a)$ for the $k$-th time; if this state-action pair is not visited that many times, $\tau(s,a,k) = \infty$. $Q_t(s,a)$ and $\hat{Q}_t(s,a)$ denote the $Q$ and $\hat{Q}$ values of $(s,a)$ that the algorithm maintains when arriving at $s_t$, respectively.
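For concreteness, here is a minimal Python sketch of Algorithm 1; the environment interface env.step(a) -> (reward, next_state), the constant c2 = 8 (the value used in the appendix), and the fixed step budget are illustrative assumptions rather than parts of the paper's specification.

```python
import math
from collections import defaultdict

def ucb_q_learning(env, S, A, gamma, eps1, delta, num_steps, c2=8.0):
    """Sketch of Algorithm 1: infinite-horizon Q-learning with a UCB bonus."""
    H = math.ceil(math.log(1.0 / ((1 - gamma) * eps1)) / math.log(1.0 / gamma))
    Q = [[1.0 / (1 - gamma)] * A for _ in range(S)]    # optimistic initialization
    Q_hat = [row[:] for row in Q]                      # historical minimum of Q
    N = defaultdict(int)

    s = env.state
    for _ in range(num_steps):
        a = max(range(A), key=lambda a_: Q_hat[s][a_])    # act greedily w.r.t. Q_hat
        r, s_next = env.step(a)
        N[(s, a)] += 1
        k = N[(s, a)]
        iota = math.log(S * A * (k + 1) * (k + 2) / delta)
        b_k = c2 / (1 - gamma) * math.sqrt(H * iota / k)  # UCB exploration bonus
        alpha_k = (H + 1) / (H + k)                       # learning rate
        U = r + b_k + gamma * max(Q_hat[s_next])          # target value
        Q[s][a] = (1 - alpha_k) * Q[s][a] + alpha_k * U
        Q_hat[s][a] = min(Q_hat[s][a], Q[s][a])
        s = s_next
    return Q_hat
```

Acting greedily with respect to $\hat{Q}$ rather than $Q$ uses the tightest of all historical optimistic estimates, matching the min-update in the last line of Algorithm 1's loop.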
Sample Complexity of Exploration
Our main result is the following sample complexity of exploration bound.
Theorem 1. For any $\epsilon > 0$, $\delta > 0$, with probability $1-\delta$, the sample complexity of exploration (i.e., the number of time steps $t$ such that $\pi_t$ is not $\epsilon$-optimal at $s_t$) of Algorithm 1 is at most
$$\tilde{O}\left(\frac{SA \ln(1/\delta)}{\epsilon^2 (1-\gamma)^7}\right),$$
where $\tilde{O}$ hides log factors of $1/\epsilon$, $1/(1-\gamma)$ and $SA$.
We first point out the obstacles for proving the theorem and why the techniques in [6] do not directly apply here. We then give a high level description of the ideas of our approach.
One important issue is caused by the difference in sample complexity between finite and infinite horizon MDP. For finite-horizon settings, sample complexity only characterizes the performance (i.e., $V^*_1(s_1) - V^{\pi}_1(s_1)$) of a policy $\pi$ at the starting state $s_1$ of episodes. On the contrary, for infinite-horizon settings, the sample complexity of exploration characterizes the performance for all actions that the policy takes. For example, consider an MDP where there exists a state $s$ such that, for any policy $\pi$, $s$ can only be reached with exponentially small probability $p$ from the starting state $s_1$. The performance of policy $\pi$ at state $s$, denoted as $\delta(s) = V^*(s) - V^{\pi}(s)$, only contributes an exponentially small part (i.e., $p\,\delta(s)$) to the performance at $s_1$, which is the only consideration of the sample complexity defined for finite horizon. More concretely, even if policy $\pi$ takes the worst actions at state $s$, $\pi$ can still be an $\epsilon$-optimal policy for $\epsilon > p\,\delta(s)$. However, for the sample complexity of exploration, if $\pi$ behaves poorly at $s$, it will contribute to the sample complexity every time $s$ is encountered, which may happen infinitely many times since the trajectory has infinite length. Therefore $\pi$ is not considered a good policy under the sample complexity defined for infinite-horizon MDPs.
The major reason that the techniques in [6] do not apply to our problem is the following. The key idea in [6] is a clever design of the learning rate so that at episode k and time h, the learning error can be decomposed as the errors from a set of consecutive episodes before k at time h + 1. This nice property allows one to control the regret in a recursive manner. However the property heavily relies on the fact that the setting is episodic finite-horizon MDP.
For the setting of infinite-horizon MDP, the above property does not exist anymore. In our setting, suppose at time $t$ the agent is at state $s_t$ and takes action $a_t$. Then the learning error at $t$ only depends on those previous time steps at which the agent encountered the same state as $s_t$ and took the same action as $a_t$. Thus the learning error at time $t$ cannot be decomposed into errors from a set of consecutive time steps before $t$, but rather errors from a set of non-consecutive time steps without any structure. (Please see Fig. 1 for an illustration.) Therefore, we have to control the sum of learning errors over an unstructured set of time steps. This makes the analysis challenging. Now we give a very brief description of the proof of Theorem 1. The basic idea is to establish a sufficient condition so that $\pi_t$ learned at step $t$ is $\epsilon$-optimal for state $s_t$, i.e. $V^*(s_t) - V^{\pi_t}(s_t) \le \epsilon$. At a high level this follows the approach of [13]. It turns out that a sufficient condition for $V^*(s_t) - V^{\pi_t}(s_t) \le \epsilon$ is that $V^*(s_{t'}) - Q^*(s_{t'}, a_{t'})$ is large for only a few time steps $t'$ within an interval $[t, t+R]$, for a carefully chosen $R$. We then bound the total number of bad time steps, i.e. those on which $V^*(s_t) - Q^*(s_t, a_t)$ is large, for the whole MDP. This in turn relies on a key technical lemma (Lemma 2), which controls a weighted sum of learning errors.
We now present the formal proof of Theorem 1.
Proof (of Theorem 1). For a fixed $s_t$, let $\mathrm{TRAJ}(R)$ be the set of length-$R$ trajectories starting from $s_t$. Our goal is to give a sufficient condition so that $\pi_t$, the policy learned at step $t$, is $\epsilon$-optimal. For any $\epsilon_2 > 0$, define $R := \big\lceil \ln\frac{1}{\epsilon_2(1-\gamma)}\big/(1-\gamma) \big\rceil$. Denote $V^*(s_t) - Q^*(s_t, a_t)$ by $\Delta_t$. We have
$$V^*(s_t) - V^{\pi_t}(s_t) = V^*(s_t) - Q^*(s_t, a_t) + Q^*(s_t, a_t) - V^{\pi_t}(s_t)$$
$$= V^*(s_t) - Q^*(s_t, a_t) + \gamma \mathbb{P}\big(V^* - V^{\pi_{t+1}}\big)(s_t, \pi_t(s_t))$$
$$= V^*(s_t) - Q^*(s_t, a_t) + \gamma \sum_{s_{t+1}} p(s_{t+1}|s_t, \pi_t(s_t)) \big[V^*(s_{t+1}) - Q^*(s_{t+1}, a_{t+1})\big] + \gamma^2 \sum_{s_{t+1}, s_{t+2}} p(s_{t+2}|s_{t+1}, \pi_{t+1}(s_{t+1}))\, p(s_{t+1}|s_t, \pi_t(s_t)) \big[V^*(s_{t+2}) - Q^*(s_{t+2}, a_{t+2})\big] + \ldots$$
$$\le \epsilon_2 + \sum_{traj \in \mathrm{TRAJ}(R)} p(traj) \cdot \sum_{j=0}^{R-1} \gamma^j \Delta_{t+j}, \qquad (1)$$
where the last inequality holds because $\frac{\gamma^R}{1-\gamma} \le \epsilon_2$, which follows from the definition of $R$. For any fixed trajectory of length $R$ starting from $s_t$, let $\Delta_{t'} = V^*(s_{t'}) - Q^*(s_{t'}, a_{t'})$ and consider the sequence $(\Delta_{t'})_{t \le t' \le t+R}$. Let $X^{(i)}_t$ be the $i$-th largest item of $(\Delta_{t'})_{t \le t' \le t+R}$. Rearranging Eq. (1), we obtain
$$V^*(s_t) - V^{\pi_t}(s_t) \le \epsilon_2 + \mathbb{E}_{traj}\Big[\sum_{i=1}^{R} \gamma^{i-1} X^{(i)}_t\Big].$$
Although the right-hand side of the above inequality consists of the sum of $R$ items $X^{(1)}_t, \ldots, X^{(R)}_t$, we claim that it suffices to control $\lceil \log_2 R \rceil$ of them: if those are small, then $\pi_t$ is good. Specifically, let
$$\xi_i := \frac{\epsilon_2}{2^{i+2}} \Big(\ln \frac{1}{1-\gamma}\Big)^{-1}.$$
If for all $0 \le i \le \lceil \log_2 R \rceil$,
$$\mathbb{E}\big[X^{(2^i)}_t\big] \le \xi_i, \qquad (2)$$
then $V^*(s_t) - V^{\pi_t}(s_t) \le 2\epsilon_2$.
To see why this is true, note that $X^{(i)}_t$ is monotonically decreasing with respect to $i$. Eq. (2) implies that for $1/2 < \gamma < 1$,
$$\mathbb{E}\Big[\sum_{i=1}^{R} \gamma^{i-1} X^{(i)}_t\Big] = \sum_{i=1}^{R} \gamma^{i-1}\, \mathbb{E}\big[X^{(i)}_t\big] \le \sum_{i=1}^{R} \gamma^{i-1}\, \frac{\epsilon_2}{2^{\lfloor \log_2 i \rfloor + 2}} \Big(\ln\frac{1}{1-\gamma}\Big)^{-1} \le \sum_{i=1}^{R} \frac{\gamma^{i-1}}{i}\, \epsilon_2 \Big(\ln\frac{1}{1-\gamma}\Big)^{-1} \le \epsilon_2,$$
where the last inequality follows from the fact that $\sum_{i=1}^{\infty} \frac{\gamma^{i-1}}{i} = \frac{1}{\gamma}\ln\frac{1}{1-\gamma}$.
For fixed $i$, let $M = \big\lceil 2\log_2 \frac{1}{\xi_R(1-\gamma)} \big\rceil$, and $\eta_j = \frac{\xi_i}{M+1} \cdot 2^{j-1}$. The reason behind the choice of $M$ is to ensure that $\eta_M > 1/(1-\gamma)$. (We assume that $\xi_R < 1/12$, which is true when $\epsilon < 1/3$.) It follows that, for $1 \le j \le M$,
$$\mathbb{E}\big[X^{(2^i)}_t\big] = \int_0^{1/(1-\gamma)} \Pr\big[X^{(2^i)}_t > x\big]\, dx \le \eta_1 + \sum_{j=1}^{M} \eta_j \Pr\big[X^{(2^i)}_t > \eta_{j-1}\big].$$
Thus, a sufficient condition for $\mathbb{E}[X^{(2^i)}_t] \le \xi_i$ is
$$\eta_j \Pr\big[X^{(2^i)}_t > \eta_{j-1}\big] \le \frac{\xi_i}{M+1}, \quad \forall\, 1 \le j \le M. \qquad (3)$$
Eq. (3) implies that if a time step $t$ is not $(2\epsilon_2)$-optimal, there exist $0 \le i < \lceil \log_2 R \rceil$ and $1 \le j \le M$ such that
$$\eta_j \Pr\big[X^{(2^i)}_t > \eta_{j-1}\big] > \frac{\xi_i}{M+1}.$$
Now, the sample complexity can be bounded by the number of $(i,j)$ pairs for which the above inequality holds.
We first fix $i$ and consider how many such $j$ there are. The following lemma helps to provide an estimate.
Lemma 1. For fixed $t$ and $\eta > 0$, let $B^{(t)}_{\eta}$ be the event that $V^*(s_t) - Q^*(s_t, a_t) > \frac{\eta}{1-\gamma}$ at step $t$. If $\eta > 2\epsilon_1$, then with probability at least $1 - \delta/2$,
$$\sum_{t=1}^{\infty} \mathbb{I}\big[B^{(t)}_{\eta}\big] \le \frac{SA \ln SA \,\ln(1/\delta)}{\eta^2 (1-\gamma)^3} \cdot \mathrm{polylog}\Big(\frac{1}{\epsilon_1}, \frac{1}{1-\gamma}\Big), \qquad (4)$$
where $\mathbb{I}[\cdot]$ is the indicator function.
By Lemma 1, for any $1 \le j \le M$,
$$\sum_{t=1}^{\infty} \mathbb{I}\big[V^*(s_t) - Q^*(s_t, a_t) > \eta_{j-1}\big] \le C, \quad \text{where } C = \frac{SA \ln SA \,\ln(1/\delta)}{\eta_j^2 (1-\gamma)^5} \cdot \tilde{P}. \qquad (5)$$
Here $\tilde{P}$ is shorthand for $\mathrm{polylog}\big(\frac{1}{\epsilon_1}, \frac{1}{1-\gamma}\big)$. Let $A_t = \mathbb{I}\big[X^{(2^i)}_t \ge \eta_j\big]$ be a Bernoulli random variable, and let $\{\mathcal{F}_t\}_{t\ge1}$ be the filtration generated by the random variables $\{(s_{\tau}, a_{\tau}) : 1 \le \tau \le t\}$. Since $A_t$ is $\mathcal{F}_{t+R}$-measurable, for any $0 \le k < R$, $\{A_{k+tR} - \mathbb{E}[A_{k+tR} \mid \mathcal{F}_{k+tR}]\}_{t\ge0}$ is a martingale difference sequence. For any $0 \le k < R$, by the Azuma-Hoeffding inequality, after $T = O\big(\frac{C}{2^i} \cdot \frac{\eta_j(M+1)}{\xi_i} \ln(RML)\big)$ time steps (if it happens that many times) with
$$\Pr\big[X^{(2^i)}_{k+tR} \ge \eta_j\big] = \mathbb{E}[A_{k+tR}] > \frac{\xi_i}{\eta_j(M+1)}, \qquad (6)$$
we have $\sum_t A_{k+tR} \ge C/2^i$ with probability at least $1 - \delta/(2MRL)$. Here the $\delta$ is the same as that in (5); the reason $\delta$ appears here is that $T$ contains a $\log(1/\delta)$ factor. On the other hand, if $A_{k+tR}$ happens, then within $[k+tR, k+tR+R-1]$ there must be at least $2^i$ time steps at which $V^*(s_{t'}) - Q^*(s_{t'}, a_{t'}) > \eta_{j-1}$. The latter event happens at most $C$ times, and the intervals $[k+tR, k+tR+R-1]$ are disjoint. Therefore,
$$\sum_{t=0}^{\infty} A_{k+tR} \le C/2^i.$$
This shows that the event described by (6) happens at most $T$ times for fixed $i$ and $j$. Via a union bound over $0 \le k < R$, we can show that with probability $1 - \delta/(2ML)$, there are at most $RT$ time steps where
$$\Pr\big[X^{(2^i)}_t \ge \eta_j\big] > \frac{\xi_i}{\eta_j(M+1)}.$$
Then, we sum over $j$ to upper bound the number of time steps that violate condition (3) for a fixed $i$:
$$\sum_{j=1}^{M} \frac{SA(M+1)R \ln(1/\delta)\,\ln SA}{\eta_j \xi_i \cdot 2^i (1-\gamma)^5}\, \tilde{P} = \frac{SA \ln SA \,\ln(1/\delta)}{\xi_i^2 \cdot 2^i (1-\gamma)^6}\, \tilde{P}.$$
Finally, we sum over $i$ to obtain an upper bound on the number of time steps at which (3) is violated for some $i$ and $j$ (with probability $1 - \delta/2$):
$$\sum_{i=0}^{\lceil \log_2 R \rceil} \frac{SA \ln SA \,\ln(1/\delta)}{\xi_i^2 \cdot 2^i (1-\gamma)^6}\, \tilde{P} \le \sum_{i=0}^{\lceil \log_2 R \rceil} \frac{SA \cdot 2^{i+6} \ln SA \,\ln(1/\delta)}{\epsilon_2^2 (1-\gamma)^6}\, \tilde{P} \le \frac{SAR \ln SA \,\ln(1/\delta)}{\epsilon_2^2 (1-\gamma)^6}\, \tilde{P} \le \frac{SA \ln SA \,\ln(1/\delta)}{\epsilon_2^2 (1-\gamma)^7}\, \tilde{P}.$$
It should be stressed that throughout these lines, $\tilde{P}$ is shorthand for an asymptotic expression, not an exact value. Note that our final choice of $\epsilon_2$ and $\epsilon_1$ is
$$\epsilon_2 = \frac{\epsilon}{2}, \qquad \epsilon_1 = \frac{\epsilon}{16R(M+1)\ln\frac{1}{1-\gamma}}.$$
The only consideration for $\epsilon_1$ is to meet the requirements of Lemma 1 throughout the proof. That is, we require
$$2\epsilon_1 < \frac{\epsilon_2}{16R(M+1)} \Big(\ln\frac{1}{1-\gamma}\Big)^{-1}.$$
It is not hard to see that $\ln\frac{1}{\epsilon_1} = \mathrm{poly}\big(\ln\frac{1}{\epsilon}, \ln\frac{1}{1-\gamma}\big)$. This immediately implies that the number of time steps such that $(V^* - V^{\pi})(s_t) > \epsilon$ is
$$\tilde{O}\left(\frac{SA \ln(1/\delta)}{\epsilon^2 (1-\gamma)^7}\right),$$
where the hidden factors are $\mathrm{poly}\big(\ln\frac{1}{\epsilon}, \ln\frac{1}{1-\gamma}, \ln SA\big)$. With probability $1-\delta/2$, the result of Lemma 1 holds; together with the $1-\delta/2$ probability above, we can see that this sample complexity bound holds with probability $1-\delta$.
Key Lemmas
In this subsection, we state a key lemma (Lemma 2), which provides the main technical tool for controlling the sum of errors. Lemma 1 in Subsection 3.2 also follows from Lemma 2 and some elementary calculations.
Definition 3. A sequence $(w_t)_{t\ge1}$ is said to be a $(C,w)$-sequence for $C, w > 0$ if $0 \le w_t \le w$ for all $t \ge 1$, and $\sum_{t\ge1} w_t \le C$.
Lemma 2. For every $(C,w)$-sequence $(w_t)_{t\ge1}$, with probability $1 - \delta/2$ the following holds:
$$\sum_{t\ge1} w_t \big(\hat{Q}_t - Q^*\big)(s_t, a_t) \le \frac{C\epsilon_1}{1-\gamma} + O\left(\frac{\sqrt{wSAC\,\ell(C)}}{(1-\gamma)^{2.5}} + \frac{wSA \ln C}{(1-\gamma)^3} \ln\frac{1}{(1-\gamma)\epsilon_1}\right),$$
where $\ell(C) = \iota(C)\ln\frac{1}{(1-\gamma)\epsilon_1}$ is a log factor. Here we give a sketch of the proof; the rigorous proof can be found in the supplementary material.
Proof sketch:
Let $\alpha^i_t = \alpha_i \prod_{j=i+1}^{t}(1-\alpha_j)$ and $\alpha^0_t = \mathbb{I}[t=0]$. By expanding the algorithm's update rule and applying a concentration inequality, it is not hard to show that with probability $1-\delta/2$,
$$0 \le (Q_p - Q^*)(s,a) \le \frac{\alpha^0_t}{1-\gamma} + \beta_t + \sum_{i=1}^{t} \gamma\alpha^i_t \big(V_{t_i} - V^*\big)(s_{t_i+1}),$$
where $t = N_p(s,a)$, $t_i = \tau(s,a,i)$ and $\beta_t = c_3\sqrt{H\iota(t)/((1-\gamma)^2 t)}$. Note that to use a union bound over an infinite number of events, we partition probability non-uniformly among time steps: we assign probability $\delta/[SA(k+1)(k+2)]$ to the concentration event when a state-action pair is visited for the $k$-th time. By doing so, these probabilities sum to $\delta/2$. Next, we consider the weighted sum $\sum_{t\ge1} w_t(Q_t - Q^*)(s_t, a_t)$ by expanding it with the inequality above. Let $n_t = N_t(s_t, a_t)$ for simplicity. The result is
$$\sum_{t\ge1} w_t(Q_t - Q^*)(s_t, a_t) \le \sum_{t\ge1} \frac{w_t \alpha^0_{n_t}}{1-\gamma} + \sum_{t\ge1} w_t \beta_{n_t} + \gamma \sum_{t\ge1} w_t \sum_{i=1}^{n_t} \alpha^i_{n_t} \big(V_{\tau(s_t,a_t,i)} - V^*\big)\big(s_{\tau(s_t,a_t,i)+1}\big).$$
The result has three parts. The first part is easily controlled by $\frac{SAw}{1-\gamma}$. The second and trickiest part is a weighted sum of the $\beta_t$'s. The key observation is that $\beta_t$ is convex and decreasing with respect to $t$. Using the rearrangement inequality and Jensen's inequality, we can show that the second term is $O\big((1-\gamma)^{-1}\sqrt{wSAHC\iota(C)}\big)$, where $w$ is the supremum of $\{w_t\}$. The $\ln C$ term in the result comes from technical reasons and is dominated by the $\sqrt{C}$ term when $C$ is large enough. The third part is $\gamma$ multiplied by another weighted sum with total weight no larger than $C$. By the carefully chosen learning rate $\alpha_t$, the supremum of $\{w_t\}$ (cf. Section 4 of [6]) is only magnified by a factor of $1+1/H$; that is, this term is a weighted sum whose weights form a $(C, (1+1/H)w)$-sequence. In fact, it can be shown that
$$\sum_{t\ge1} w_t(Q_t - Q^*)(s_t, a_t) \le \frac{c_3\sqrt{wSAHC\iota(C)}}{1-\gamma} + O\Big(\frac{wSAH}{1-\gamma}\ln C\Big) + \gamma \sum_{t\ge1} w'_{t+1}\big(Q_{t+1} - Q^*\big)(s_{t+1}, a_{t+1}),$$
where $\{w'_{t+1}\}$ is the $(C, (1+1/H)w)$-sequence mentioned above. Note that the last term on the right-hand side of this inequality has the same form as the left-hand side. Therefore we can repeat this unrolling process $R = H$ times until the remaining weighted sum is bounded by $C\epsilon_1$ due to the discount. Putting things together results in the bound above.
Finally, we explain how to prove Lemma 1 with Lemma 2. (The full proof can be found in the supplementary material.) Note that since $\hat{Q}_t \ge Q^*$,
$$V^*(s_t) - Q^*(s_t, a_t) \le \hat{Q}_t(s_t, a_t) - Q^*(s_t, a_t).$$
We now consider the set $J = \{t : V^*(s_t) - Q^*(s_t, a_t) > \eta(1-\gamma)^{-1}\}$, and the $(|J|, 1)$-weight sequence defined by $w_t = \mathbb{I}[t \in J]$. We can now apply Lemma 2 to the weighted sum $\sum_{t\ge1} w_t\big[V^*(s_t) - Q^*(s_t, a_t)\big]$. On the one hand, this quantity is obviously at least $|J|\eta(1-\gamma)^{-1}$. On the other hand, it is upper bounded by the weighted sum of $(\hat{Q}_t - Q^*)(s_t, a_t)$, which is in turn bounded by Lemma 2. Thus we get
$$|J|\eta(1-\gamma)^{-1} \le \frac{|J|\epsilon_1}{1-\gamma} + O\left(\frac{\sqrt{SA|J|\,\ell(|J|)}}{(1-\gamma)^{2.5}} + \frac{SA \ln|J|}{(1-\gamma)^3}\ln\frac{1}{(1-\gamma)\epsilon_1}\right).$$
Now focus on the dependence on $|J|$. The left-hand side has linear dependence on $|J|$, whereas the right-hand side has a $\tilde{O}(\sqrt{|J|})$ dependence. This allows us to solve out an upper bound on $|J|$.
Discussion
In this section, we discuss the implications of our results, and present some interesting properties of our algorithm beyond its sample complexity bound.
Comparison with other results
Previously, the best sample complexity bound for a model-free algorithm isÕ SA 4 (1−γ) 8 (hiding all logarithmic terms). To the best of our knowledge, the current best minimax lower bound for sample complexity is Ω SA 2 (1 − γ) 3 ln 1/δ due to [8]. There was a quadratic gap between this lower bound and Delayed Q-learning's result in the dependence on 1/ , which our result closes. The gap between our results and this lower bound lies only in the dependence on 1/(1 − γ) and logarithmic terms of SA, 1/1 − γ and 1/ .
In model-based algorithms, better sample complexity results in infinite horizon settings have been shown [15,8]. To the best of our knowledge, the best published result without further restrictions on MDPs isÕ SA 2 (1−γ) 6 due to [15], which is (1 − γ) smaller than our upper bound. From a practical point of view, there is a clear distinction between model-based and model-free approaches. Our claim that we improved the best model-free result is based on such a rough classification. In this sense, we can also claim that, for infinite-horizon reinforcement learning, model-free approach can be nearly as sample efficient as the best model-based ones.
If we take a theoretical point of view, however, to date there is no clear and definite classification criterion separating model-free from model-based algorithms. One candidate criterion is based on space complexity [13]. In this sense, our algorithm is indeed much more memory-efficient: it stores $O(SA)$ values, whereas the algorithms of [15] need $\Omega(S^2 A)$ memory just to store the transition model.
Monotonicity
In our algorithm, the state-action value based on which the algorithm acts is $\hat{Q}$. For any $(s,a)$, $\hat{Q}_t(s,a)$ has the interesting property of being decreasing over time. Since we know that $\hat{Q}(s,a) \ge Q^*(s,a)$, this means that the Q-error $(\hat{Q} - Q^*)(s,a)$ also decreases over $t$. Although this property is not used in our current proof, it comes at no cost.
Application to finite horizon tasks
In a finite-horizon setting, it is shown that, ignoring the dependence on $H$, $S$, $A$, an $\tilde{O}(T^{\alpha})$ regret bound can be translated into an $\tilde{O}(\epsilon^{-1/\alpha})$ PAC sample complexity bound. On the other hand, an $\tilde{O}(\epsilon^{-\beta})$ PAC sample complexity bound can be translated into an $\tilde{O}(T^{\beta/(1+\beta)})$ regret bound [6]. For example, an algorithm with $\tilde{O}(\sqrt{T})$ regret has PAC sample complexity $\tilde{O}(\epsilon^{-2})$, while an algorithm with $\tilde{O}(\epsilon^{-2})$ PAC sample complexity has a regret in the order of $\tilde{O}(T^{2/3})$. Note that the PAC sample complexity defined for finite-horizon MDP is different from that defined for infinite-horizon MDP. However, by applying the analysis of our algorithm's sample complexity of exploration, it can be shown that the algorithm has a regret bound of $\tilde{O}(T^{1/2})$ and a PAC sample complexity of $\tilde{O}(\epsilon^{-2})$ when run on finite-horizon MDPs.
First we define a mapping from a finite-horizon MDP to an infinite-horizon MDP so that our algorithm can be applied. Let $\bar{V}_t$ be the value function in $\bar{M}$ at time $t$ and $V^k_h$ the value function in $M$ at episode $k$, step $h$. The policy mapping is defined as $\pi_h(s) = \bar{\pi}(\bar{s}_{s,h})$ for a policy $\bar{\pi}$ in $\bar{M}$. Value functions in the MDPs $M$ and $\bar{M}$ are closely related in the sense that
$$\bar{V}^*(\bar{s}_{s_1,1}) = \frac{\gamma^H}{1-\gamma^H}\, V^*_1(s_1),$$
and any $\epsilon$-optimal policy $\bar{\pi}$ of $\bar{M}$ corresponds to an $(\epsilon/\gamma^H)$-optimal policy $\pi$ in $M$ (see the supplementary material for the proof). Note that here $\gamma^H = (1-1/H)^H = O(1)$ is a constant.
For any $\epsilon > 0$, by running our algorithm on $\bar{M}$ for $\tilde{O}(3SAH^9/\epsilon^2)$ time steps, the starting state $s_1$ is visited at least $\tilde{O}(3SAH^8/\epsilon^2)$ times, and at most $1/3$ of these visits are not $\epsilon$-optimal. If we select the policy uniformly at random from the policies $\pi_{tH+1}$ for $0 \le t < T/H$, then with probability at least $2/3$ we get an $\epsilon$-optimal policy. Therefore the PAC sample complexity is $\tilde{O}(\epsilon^{-2})$ after hiding the $S$, $A$, $H$ terms.
On the other hand, we want to show that for any $K$ episodes,
$$\mathrm{Regret}(T) = \sum_{k=1}^{T/H} \big(V^*_1(s_1) - V^k_1(s_1)\big) \propto T^{1/2}.$$
The reason why our algorithm admits a better reduction from regret to PAC is that, after choosing $\epsilon_1$, it follows from the argument of Theorem 1 that for all $\epsilon_2 > \tilde{O}(\epsilon_1/(1-\gamma))$, the number of $\epsilon_2$-suboptimal steps is bounded with probability $1-\delta$. In contrast, delayed Q-learning [13] can only give an upper bound on the number of $\epsilon_1$-suboptimal steps after setting the parameter $\epsilon_1$. In other words, after setting $\epsilon_1$, we can give an $O(\epsilon_2^{-2})$ uniform upper bound on the curve of the number of $\epsilon_2$-suboptimal steps versus $\epsilon_2$, as long as $\epsilon_2 > \tilde{O}(\epsilon_1/(1-\gamma))$.
Formally, let $X_k = V^*_1(s_1) - V^k_1(s_1)$ be the regret of the $k$-th episode. For any $T$, set $\epsilon = \sqrt{SA/T}$ and $\epsilon_2 = \tilde{O}(\epsilon_1/(1-\gamma))$. Let $M = \big\lceil \log_2 \frac{1}{\epsilon_2(1-\gamma)} \big\rceil$. It follows that
$$\mathrm{Regret}(T) \le T\epsilon_2 + \sum_{i=1}^{M} \big|\{k : X_k \ge \epsilon_2 \cdot 2^{i-1}\}\big| \cdot \epsilon_2 \cdot 2^i \le \tilde{O}\left(T\epsilon_2 + \sum_{i=1}^{M} \frac{SA\ln(1/\delta)}{\epsilon_2 \cdot 2^{i-2}}\right) \le \tilde{O}\big(\sqrt{SAT\ln(1/\delta)}\big)$$
with probability $1 - M\delta$. Note that the $\tilde{O}$ notation hides $\mathrm{polylog}(1/\epsilon_1, 1/(1-\gamma))$, which is, by our reduction, $\mathrm{polylog}(H, T, S, A)$.
Future Work
There is still a quartic gap in the dependence on 1/(1 − γ) between our result and the best lower bound. Future work may close this gap, either through refined analysis or through more sophisticated algorithms.
Conclusion
Infinite-horizon MDP with discounted reward is a setting that is arguably more difficult than other popular settings, such as finite-horizon MDP. Previously, the best sample complexity bound achieved by model-free reinforcement learning algorithms in this setting was $\tilde{O}\big(\frac{SA}{\epsilon^4(1-\gamma)^8}\big)$, due to Delayed Q-learning [13]. In this paper, we propose a variant of Q-learning that incorporates an upper confidence bound, and show that it has a sample complexity of $\tilde{O}\big(\frac{SA}{\epsilon^2(1-\gamma)^7}\big)$. This matches the best lower bound except in the dependence on $1/(1-\gamma)$ and logarithmic factors.
Acknowledgments
We thank Chongjie Zhang for helpful discussions.
A Appendix
A.1 Proof of Lemma 1
Proof. Let $I = \{t : V^*(s_t) - Q^*(s_t, a_t) > \frac{\eta}{1-\gamma}\}$. By Lemma 2, with probability $1-\delta$,
$$\frac{\eta|I|}{1-\gamma} \le \sum_{t\in I} \big(V^*(s_t) - Q^*(s_t, a_t)\big) \le \sum_{t\in I} \big(\hat{Q}_t - Q^*\big)(s_t, a_t) \le \frac{|I|\epsilon_1}{1-\gamma} + O\left(\frac{\sqrt{SA|I|\,\ell(|I|)}}{(1-\gamma)^{5/2}} + \frac{SA}{(1-\gamma)^3}\ln|I|\,\ln\frac{1}{\epsilon_1(1-\gamma)}\right)$$
$$\le \frac{|I|\epsilon_1}{1-\gamma} + O\left(\ln\frac{1}{\delta}\,\ln\frac{1}{\epsilon_1(1-\gamma)} \cdot \left(\frac{\sqrt{SA|I|\,\ln(SA|I|)}}{(1-\gamma)^{5/2}} + \frac{SA\ln|I|}{(1-\gamma)^3}\right)\right).$$
Suppose that $|I| = \frac{SAk^2}{\eta^2(1-\gamma)^3}\ln SA$ for some $k > 1$. Then it follows that for some constant $C_1$,
$$\frac{\eta|I|}{1-\gamma} = \frac{k^2 SA\ln SA}{(1-\gamma)^4\,\eta} \le \frac{2(\eta - \epsilon_1)|I|}{1-\gamma} \le C_1 \ln\frac{1}{\delta}\,\ln\frac{1}{\epsilon_1(1-\gamma)}\left(\frac{\sqrt{SA|I|\,\ln(SA|I|)}}{(1-\gamma)^{5/2}} + \frac{SA\ln|I|}{(1-\gamma)^3}\right)$$
$$\le C_1 \ln\frac{1}{\delta}\,\ln\frac{1}{\epsilon_1(1-\gamma)}\left(\frac{SAk}{\eta(1-\gamma)^4}\sqrt{\ln SA \cdot (\ln SA + \ln|I|)} + \frac{SA\ln|I|}{(1-\gamma)^3}\right).$$
Therefore
$$k^2\ln SA \le C_1 \ln\frac{1}{\delta}\,\ln\frac{1}{\epsilon_1(1-\gamma)}\Big(k\big(\ln SA + \ln|I|\big) + \eta(1-\gamma)\ln|I|\Big) \le k\,C_1 \ln\frac{1}{\delta}\,\ln\frac{1}{\epsilon_1(1-\gamma)} \cdot \big(\ln SA + 2\ln|I|\big)$$
$$\le k\,C_1 \ln\frac{1}{\delta}\,\ln\frac{1}{\epsilon_1(1-\gamma)} \cdot \Big(3\ln SA + 4\ln k + 6\ln\frac{1}{\eta(1-\gamma)}\Big) \le 6k\,C_1 \ln\frac{1}{\delta}\,\ln^2\frac{1}{\epsilon_1(1-\gamma)}\,(\ln SA + \ln ek).$$
Let $C' = 6C_1 \ln\frac{1}{\delta}\,\ln^2\frac{1}{\epsilon_1(1-\gamma)}$. Then
$$k \le C'(2 + \ln k). \qquad (7)$$
If $k \ge 10C'\ln C'$, then
$$k - C'(2 + \ln k) \ge 8C'\ln C' - (2 + \ln 10)C' \ge 4C'\big(2\ln C' - 4\big) \ge 0,$$
which means violation of (7). (We assume $C' \ge 2$.) Therefore
$$k \le 10C'\ln C' \le 360\,C_1^2\,\ln^4\frac{1}{\epsilon_1(1-\gamma)}. \qquad (8)$$
It immediately follows that
$$|I| = \frac{SAk^2}{\eta^2(1-\gamma)^3}\ln SA \qquad (9)$$
$$\le \frac{SA\ln SA}{\eta^2(1-\gamma)^5} \cdot \ln\frac{1}{\delta} \cdot O\Big(\ln^8\frac{1}{\epsilon_1(1-\gamma)}\Big). \qquad (10)$$
A.2 Proof of Lemma 2
$$(Q_p - Q^*)(s,a) = \alpha^0_t\Big(\frac{1}{1-\gamma} - Q^*(s,a)\Big) + \sum_{i=1}^{t} \alpha^i_t \Big[b_i + \gamma\big(V_{t_i} - V^*\big)(s_{t_i+1}) + \gamma\big(V^*(s_{t_i+1}) - \mathbb{P}V^*(s,a)\big)\Big].$$
The identity above holds for arbitrary $p$, $s$ and $a$. Now fix $s \in S$, $a \in A$ and $p \in \mathbb{N}$. Let $t = N_p(s,a)$, $t_i = \tau(s,a,i)$. The $t=0$ case is trivial; we assume $t \ge 1$ below. Now consider an arbitrary fixed $k$. Define
$$\Delta_i = \alpha^i_k \cdot \mathbb{I}[t_i < \infty] \cdot \big(\mathbb{P}V^* - \hat{\mathbb{P}}_{t_i} V^*\big)(s,a).$$
Let $\mathcal{F}_i$ be the $\sigma$-field generated by the random transitions from step 1 to $t_i$. Clearly $\mathbb{E}[\Delta_i \mid \mathcal{F}_i] = 0$, while $\Delta_i$ is measurable in $\mathcal{F}_{i+1}$. Also, clearly $|\Delta_i| \le \frac{2}{1-\gamma}$. Therefore, $(\Delta_i)$ is a martingale difference sequence; by the Azuma-Hoeffding inequality,
$$\Pr\left[\Big|\sum_{i=1}^{k} \Delta_i\Big| > \eta\right] \le 2\exp\left(-\frac{\eta^2}{8(1-\gamma)^{-2}\sum_{i=1}^{k}(\alpha^i_k)^2}\right). \qquad (13)$$
By choosing $\eta$ appropriately, we can show that, except with probability $\delta/[SA(k+1)(k+2)]$,
$$\Big|\sum_{i=1}^{k} \Delta_i\Big| \le \frac{c_1}{1-\gamma}\sqrt{\sum_{i=1}^{k}(\alpha^i_k)^2 \cdot \ln\frac{2(k+1)(k+2)SA}{\delta}} \le \frac{c_2}{1-\gamma}\sqrt{\frac{H\iota(k)}{k}}.$$
Here $c_1 = 2\sqrt{2}$ and $c_2 = 8$ will do, and $\iota(k) = \ln\frac{(k+1)(k+2)SA}{\delta}$. By a union bound, this holds for arbitrary $k$ simultaneously with probability $1 - \delta/(2SA)$; it holds for arbitrary $s$, $a$ with probability $1 - \delta/2$. Therefore it holds for the random $t = N_p(s,a)$ with that probability as well ($p$ can be arbitrary).
Proof of the right-hand side of (11): Since $b_k = \frac{c_2}{1-\gamma}\sqrt{\frac{H\iota(k)}{k}}$, we also know that
$$\frac{c_2}{1-\gamma}\sqrt{\frac{H\iota(k)}{k}} \le \sum_{i=1}^{k} \alpha^i_k b_i \le \frac{2c_2}{1-\gamma}\sqrt{\frac{H\iota(k)}{k}}.$$
Therefore,
$$(Q_p - Q^*)(s,a) \le \gamma\Big|\sum_{i=1}^{t}\Delta_i\Big| + \sum_{i=1}^{t}\alpha^i_t\Big[\gamma\big(V_{t_i} - V^*\big)(x_{t_i+1}) + b_i\Big] \le \frac{3c_2}{1-\gamma}\sqrt{\frac{H\iota(t)}{t}} + \sum_{i=1}^{t}\gamma\alpha^i_t\big(V_{t_i} - V^*\big)(x_{t_i+1}) \le \frac{\alpha^0_t}{1-\gamma} + \sum_{i=1}^{t}\gamma\alpha^i_t\big(V_{t_i} - V^*\big)(x_{t_i+1}) + \beta_t.$$
Note that $\beta_t = c_3(1-\gamma)^{-1}\sqrt{H\iota(t)/t}$; $c_3 = 3c_2$ will be enough.
Proof of the left-hand side of (11): We now claim that $Q_{p'} \ge Q^*$ for all $(s,a)$ and $p' \le p$. This claim is obviously true when $p = 0$. Assume we are in the $1-\delta/2$ probability event. Then
$$(Q_p - Q^*)(s,a) \ge -\gamma\Big|\sum_{i=1}^{t}\Delta_i\Big| + \sum_{i=1}^{t}\alpha^i_t\Big[\gamma\big(V_{t_i} - V^*\big)(x_{t_i+1}) + b_i\Big] \ge \sum_{i=1}^{t}\alpha^i_t b_i - \gamma\Big|\sum_{i=1}^{t}\Delta_i\Big| \ge 0.$$
Therefore the claim holds for $p+1$ as well; by induction, it holds for all $p$.
We now see that (11) holds with probability $1-\delta/2$ for all $p$, $s$, $a$. Since $\hat{Q}_p(s,a)$ always equals $Q_{p'}(s,a)$ for some $p' \le p$, we know that $\hat{Q}_p(s,a) = Q_{p'}(s,a) \ge Q^*(s,a)$, thus proving (12).
We now give the proof of Lemma 2. Recall the definition of a $(C,w)$-sequence: a sequence $(w_t)_{t\ge1}$ is said to be a $(C,w)$-sequence for $C, w > 0$ if $0 \le w_t \le w$ for all $t \ge 1$, and $\sum_{t\ge1} w_t \le C$.
Proof. Let $n_t = N_t(s_t, a_t)$ for simplicity. Expanding with inequality (11) (the last step being due to Lemma 4), we have
$$\sum_{t\ge1} w_t(Q_t - Q^*)(s_t, a_t) \le \sum_{t\ge1}\frac{w_t\alpha^0_{n_t}}{1-\gamma} + \sum_{t\ge1} w_t\beta_{n_t} + \gamma\sum_{t\ge1} w_t\sum_{i=1}^{n_t}\alpha^i_{n_t}\big(V_{\tau(s_t,a_t,i)} - V^*\big)\big(s_{\tau(s_t,a_t,i)+1}\big).$$
Note that $\alpha^0_{n_t} = \mathbb{I}[n_t = 0]$, so the first term in the summation can be bounded by
$$\sum_{t\ge1}\frac{w_t\alpha^0_{n_t}}{1-\gamma} \le \frac{SAw}{1-\gamma}.$$
For the second term, define $u(s,a) = \sup_t N_t(s,a)$.² It follows that
$$\sum_{t\ge1} w_t\beta_{n_t} \;\le\; \cdots \;\le\; c_3\sqrt{(1-\gamma)^{-2}\,wSAHC\,\iota(C)}, \qquad (14)(15)$$
where $C_{s,a} = \sum_{t\ge1,\,(s_t,a_t)=(s,a)} w_t$. Inequality (14) comes from the rearrangement inequality, since $\iota(x)/x$ is monotonically decreasing, and inequality (15) comes from Jensen's inequality.
We claim that $(w'_t)$ is a $(C, (1+\frac{1}{H})w)$-sequence, where $w'_{t'+1}$ denotes the total weight assigned to step $t'+1$ after exchanging the order of summation in the third term. We now prove this claim. By Lemma 3, for any $t' \ge 0$, $w'_{t'+1} \le \big(1+\frac{1}{H}\big)w$, and by $\sum_{j=0}^{i}\alpha^j_i = 1$, we have $\sum_{t'\ge1} w'_{t'+1} \le \sum_{t\ge1} w_t \le C$. It follows that
$$\sum_{t\ge1} w'_{t+1}\big(V_t - V^*\big)(s_{t+1}) = \sum_{t\ge1} w'_{t+1}\big(V_{t+1} - V^*\big)(s_{t+1}) + \sum_{t\ge1} w'_{t+1}\big(V_t - V_{t+1}\big)(s_{t+1}) \qquad (16)(17)$$
$$\le \sum_{t\ge1} w'_{t+1}\big(V_{t+1} - V^*\big)(s_{t+1}) + \sum_{t\ge1} w'_{t+1}\frac{2\alpha_{n_t}}{1-\gamma} \qquad (18)$$
$$\le \sum_{t\ge1} w'_{t+1}\big(V_{t+1} - V^*\big)(s_{t+1}) + O\Big(\frac{wSAH}{1-\gamma}\ln C\Big) \qquad (19)$$
$$\le \sum_{t\ge1} w'_{t+1}\big(Q_{t+1} - Q^*\big)(s_{t+1}, a_{t+1}) + O\Big(\frac{wSAH}{1-\gamma}\ln C\Big). \qquad (20)$$
Inequality (18) comes from the update rule of our algorithm. Inequality (19) comes from the fact that $\alpha_t = (H+1)/(H+t) \le H/t$ and Jensen's inequality.
Putting all terms together, we have
$$\sum_{t\ge1} w_t(Q_t - Q^*)(s_t, a_t) \le \frac{c_3\sqrt{wSAHC\iota(C)}}{1-\gamma} + O\Big(\frac{wSAH}{1-\gamma}\ln C\Big) + \gamma\sum_{t\ge1} w'_{t+1}\big(Q_{t+1} - Q^*\big)(s_{t+1}, a_{t+1}). \qquad (21)(22)$$
Observe that the last term is another weighted sum of the same form, so we can recursively unroll it. Denote the original weight sequence by $\{w^{(0)}_t\}_{t\ge1}$ and let $\{w^{(k)}_t\}_{t\ge1}$ denote the weight sequence after unrolling $k$ times; let $w^{(k)} = w\cdot(1+1/H)^k$. Then $\{w^{(k)}_t\}_{t\ge1}$ is a $(C, w^{(k)})$-sequence. Suppose that we unroll $R$ times. Then
$$\sum_{t\ge1} w_t(Q_t - Q^*)(s_t, a_t) \le \frac{c_3\sqrt{w^{(R)}SAHC\iota(C)}}{(1-\gamma)^2} + O\Big(\frac{w^{(R)}SAH}{(1-\gamma)^2}\ln C\Big) + \gamma^R\sum_{t\ge1} w^{(R)}_t\big(Q_t - Q^*\big)(s_t, a_t)$$
$$\le \frac{c_3\sqrt{w^{(R)}SAHC\iota(C)}}{(1-\gamma)^2} + O\Big(\frac{w^{(R)}SAH}{(1-\gamma)^2}\ln C\Big) + \frac{\gamma^R C}{1-\gamma}.$$
We set $H = R = \big\lceil\frac{\ln 1/((1-\gamma)\epsilon_1)}{\ln 1/\gamma}\big\rceil \le \frac{\ln 1/((1-\gamma)\epsilon_1)}{1-\gamma}$. It follows that $w^{(R)} = (1+1/H)^R w^{(0)} \le e\,w^{(0)}$, and that $\frac{\gamma^R C}{1-\gamma} \le C\epsilon_1$. Therefore,
$$\sum_{t\ge1} w_t(Q_t - Q^*)(s_t, a_t) \le \frac{C\epsilon_1}{1-\gamma} + O\left(\frac{\sqrt{wSAC\,\ell(C)}}{(1-\gamma)^{2.5}} + \frac{wSA}{(1-\gamma)^3}\ln C\,\ln\frac{1}{(1-\gamma)\epsilon_1}\right). \qquad (23)$$
Figure 1: An illustration of how error is propagated for Q-learning algorithms in (a) finite-horizon settings; (b) infinite-horizon settings. At blocks with the same color, the same state-action pair is experienced. The arrows indicate how one error contributes to future errors. The dashed frames indicate errors of interest (terms in the summation).
For an arbitrary finite-horizon MDP $M = (S, A, H, r_h(s,a), p_h(s'|s,a))$, where $H$ is the length of an episode, the corresponding infinite-horizon MDP $\bar{M} = (\bar{S}, \bar{A}, \gamma, \bar{r}(\bar{s},\bar{a}), \bar{p}(\bar{s}'|\bar{s},\bar{a}))$ is defined as follows:
• $\bar{S} = S \times [H]$, $\bar{A} = A$;
• $\gamma = 1 - 1/H$;
• for a state $s$ at step $h$, let $\bar{s}_{s,h}$ be the corresponding state. For any action $a$ and next state $s'$, define $\bar{r}(\bar{s}_{s,h}, a) = \gamma^{H-h+1} r_h(s,a)$ and $\bar{p}(\bar{s}_{s',h+1}|\bar{s}_{s,h}, a) = p_h(s'|s,a)$. For $h = H$, set $\bar{r}(\bar{s}_{s,h}, a) = 0$ and $\bar{p}(\bar{s}_{s',1}|\bar{s}_{s,h}, a) = \mathbb{I}[s' = s_1]$ for a fixed starting state $s_1$.
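To make the mapping concrete, here is a minimal Python sketch of constructing $\bar{M}$ from $M$; the function name and the convention of indexing r and p by the step $h$ are our own illustrative assumptions, while the reward scaling and the reset transition follow the definition above.

```python
def finite_to_infinite(S, A, H, r, p, s1=0):
    """Build the infinite-horizon MDP (S_bar, gamma, r_bar, p_bar) from a
    finite-horizon MDP with horizon H.

    r[h][s][a] and p[h][s][a][s_next] index the finite-horizon MDP for
    h in {1, ..., H}; states of the new MDP are pairs (s, h).
    """
    gamma = 1.0 - 1.0 / H
    S_bar = [(s, h) for s in range(S) for h in range(1, H + 1)]

    def r_bar(s_bar, a):
        s, h = s_bar
        if h == H:
            return 0.0                      # last step: zero reward
        return gamma ** (H - h + 1) * r[h][s][a]

    def p_bar(s_bar_next, s_bar, a):
        s, h = s_bar
        s_next, h_next = s_bar_next
        if h == H:                          # episode ends: reset to s1 at step 1
            return float(h_next == 1 and s_next == s1)
        if h_next != h + 1:                 # step index must advance by one
            return 0.0
        return p[h][s][a][s_next]

    return S_bar, gamma, r_bar, p_bar
```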
Fact 1. (1) The following statement holds throughout the algorithm: $\hat{Q}_{p+1}(s,a) \le Q_{p+1}(s,a)$. (2) If at time $p$, $\hat{Q}(s,a)$ is updated at line 10, then after this update is finished, $\hat{Q}_{p+1}(s,a) \ge Q_{p+1}(s,a)$.
Proof. Simply refer to the algorithm.
In particular,
$$\sum_{t\ge1} w_t\big(\hat{Q}_t - Q^*\big)(s_t, a_t) \le \sum_{t\ge1} w_t\Big(\epsilon_1 + \big(Q_t - Q^*\big)(s_t, a_t)\Big),$$
so it suffices to bound $\sum_{t\ge1} w_t(Q_t - Q^*)(s_t, a_t)$. For the third term of the summation,
$$\sum_{t\ge1} w_t\sum_{i=1}^{n_t}\alpha^i_{n_t}\big(V_{\tau(s_t,a_t,i)} - V^*\big)\big(s_{\tau(s_t,a_t,i)+1}\big) \le \sum_{t'\ge1} w'_{t'+1}\big(V_{t'} - V^*\big)(s_{t'+1}).$$
²$u(s,a)$ could be infinite when $(s,a)$ is visited an infinite number of times.
Before proving Lemma 2, we prove two auxiliary lemmas.
Lemma 3. The following properties hold for $\alpha^i_t$:
1. $\sum_{i=1}^{t}\frac{\alpha^i_t}{\sqrt{i}} \le \frac{2}{\sqrt{t}}$ for every $t \ge 1$;
2. $\max_{i\in[t]}\alpha^i_t \le \frac{2H}{t}$ and $\sum_{i=1}^{t}(\alpha^i_t)^2 \le \frac{2H}{t}$ for every $t \ge 1$;
3. $\sum_{t=i}^{\infty}\alpha^i_t = 1 + \frac{1}{H}$ for every $i \ge 1$;
4. $\sum_{i=0}^{t}\alpha^i_t = 1$ for every $t \ge 0$.
Properties 1-3 are proven by [6]. Now we prove the last property: one direction follows from property 3, and the other direction is proven by induction on $t$, with the base case $t = 1$.
Lemma 4. With probability at least $1 - \delta/2$, for all $p \ge 0$ and all $(s,a)$-pairs,
$$0 \le (Q_p - Q^*)(s,a) \le \frac{\alpha^0_t}{1-\gamma} + \sum_{i=1}^{t}\gamma\alpha^i_t\big(V_{t_i} - V^*\big)(s_{t_i+1}) + \beta_t, \qquad (11)$$
and consequently $\hat{Q}_p(s,a) \ge Q^*(s,a)$. \qquad (12)
Proof. Recall the update rule of the algorithm. It is not hard to see that our algorithm maintains the following $Q(s,a)$:
$$Q_p(s,a) = \frac{\alpha^0_t}{1-\gamma} + \sum_{i=1}^{t}\alpha^i_t\big(r(s,a) + b_i + \gamma V_{t_i}(s_{t_i+1})\big),$$
and by the Bellman equation, $Q^*(s,a) = \alpha^0_t Q^*(s,a) + \sum_{i=1}^{t}\alpha^i_t\big(r(s,a) + \gamma\mathbb{P}V^*(s,a)\big)$. Subtracting the two equations gives the identity stated at the beginning of Appendix A.2.
A.3 MDP mapping
Recall that our MDP mapping from $M = (S, A, H, r_h(s,a), p_h(s'|s,a))$ to $\bar{M} = (\bar{S}, \bar{A}, \gamma, \bar{r}(\bar{s},\bar{a}), \bar{p}(\bar{s}'|\bar{s},\bar{a}))$ is defined as above: for a state $s$ at step $h$, let $\bar{s}_{s,h}$ be the corresponding state. For a trajectory $\{(\bar{s}_{s_1,1},\bar{a}_1), (\bar{s}_{s_2,2},\bar{a}_2), \ldots\}$ in $\bar{M}$, let $\{(s_1,a_1), (s_2,a_2), \ldots\}$ be the corresponding trajectory in $M$. Note that $\bar{M}$ has a unique fixed starting state $\bar{s}_{s_1,1}$, which means that $s_{tH+1} = s_1$ for all $t \ge 0$. Denote the policy corresponding to $\bar{\pi}_t$ as $\pi_t$ (it may be non-stationary). For a stationary policy $\bar{\pi}$, we can conclude that
$$\bar{V}^{\bar{\pi}}(\bar{s}_{s_1,1}) = \frac{\gamma^H}{1-\gamma^H}\, V^{\pi}(s_1).$$
Since the optimal policy $\bar{\pi}^*$ is stationary, we have $\bar{V}^*(\bar{s}_{s_1,1}) = \frac{\gamma^H}{1-\gamma^H}\, V^*(s_1)$. By definition, $\bar{\pi}$ being $\epsilon$-optimal at time step $t$ means that $\bar{V}^{\bar{\pi}_t}(\bar{s}_{s_1,1}) \ge \bar{V}^*(\bar{s}_{s_1,1}) - \epsilon$. It follows that
$$\gamma^H V^{\pi_t}(s_1) + \gamma^H \bar{V}^{\bar{\pi}_{t+H}}(\bar{s}_{s_1,1}) = \bar{V}^{\bar{\pi}_t}(\bar{s}_{s_1,1}) \ge \bar{V}^*(\bar{s}_{s_1,1}) - \epsilon,$$
hence
$$\gamma^H V^{\pi_t}(s_1) \ge (1-\gamma^H)\bar{V}^*(\bar{s}_{s_1,1}) + \gamma^H\big(\bar{V}^*(\bar{s}_{s_1,1}) - \bar{V}^{\bar{\pi}_{t+H}}(\bar{s}_{s_1,1})\big) - \epsilon \ge (1-\gamma^H)\bar{V}^*(\bar{s}_{s_1,1}) - \epsilon.$$
Therefore we have
$$V^{\pi_t}(s_1) \ge V^*(s_1) - \epsilon/\gamma^H,$$
which means that $\pi_t$ is an $(\epsilon/\gamma^H)$-optimal policy.
[1] Mohammad Gheshlaghi Azar, Remi Munos, Mohammad Ghavamzadeh, and Hilbert Kappen. Speedy Q-learning. In Advances in Neural Information Processing Systems, 2011.
[2] Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. arXiv preprint arXiv:1703.05449, 2017.
[3] Ronen I. Brafman and Moshe Tennenholtz. R-max: a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213-231, 2003.
[4] Eyal Even-Dar and Yishay Mansour. Learning rates for Q-learning. Journal of Machine Learning Research, 5(Dec):1-25, 2003.
[5] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.
[6] Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, and Michael I. Jordan. Is Q-learning provably efficient? In Advances in Neural Information Processing Systems, pages 4864-4874, 2018.
[7] Sham Machandranath Kakade. On the sample complexity of reinforcement learning. PhD thesis, University of London, London, England, 2003.
[8] Tor Lattimore and Marcus Hutter. PAC bounds for discounted MDPs. In International Conference on Algorithmic Learning Theory, pages 320-334. Springer, 2012.
[9] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937, 2016.
[10] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[11] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889-1897, 2015.
[12] Aaron Sidford, Mengdi Wang, Xian Wu, and Yinyu Ye. Variance reduced value iteration and faster algorithms for solving Markov decision processes. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 770-787. Society for Industrial and Applied Mathematics, 2018.
[13] Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 881-888. ACM, 2006.
[14] Alexander L. Strehl and Michael L. Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.
[15] István Szita and Csaba Szepesvári. Model-based reinforcement learning with nearly tight exploration complexity bounds. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 1031-1038, 2010.
202,750,230 | REDUCING TRANSFORMER DEPTH ON DEMAND WITH STRUCTURED DROPOUT | Overparameterized transformer networks have obtained state of the art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality compared to training from scratch or using distillation. | [
91184134,
44131019,
5034059,
3432876,
11816014,
1870512,
59310641,
16639476,
990233,
964287
] | REDUCING TRANSFORMER DEPTH ON DEMAND WITH STRUCTURED DROPOUT
Angela Fan
Facebook AI Research, LORIA
[email protected]
Edouard Grave
Facebook AI Research
[email protected]
Armand Joulin
Facebook AI Research
[email protected]
REDUCING TRANSFORMER DEPTH ON DEMAND WITH STRUCTURED DROPOUT
Overparameterized transformer networks have obtained state of the art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality compared to training from scratch or using distillation.
INTRODUCTION
Transformer architectures (Vaswani et al., 2017) have become the dominant architecture in natural language processing, with state-of-the-art performance across a variety of tasks, including machine translation (Vaswani et al., 2017; Ott et al., 2018), language modeling (Baevski & Auli, 2018) and sentence representation (Devlin et al., 2018). Each of its layers contains millions of parameters accessed during the forward pass, making it computationally demanding in terms of memory and latency during both training and inference. In an ideal situation, we would be able to extract sub-networks, automatically and without finetuning, from this over-parameterized network, for any given memory or latency constraint, while maintaining good performance. In contrast, standard pruning or distillation methods follow a strategy that often includes a finetuning or retraining step, and the process must be repeated for each desired depth.
In this work, we propose a novel approach to train over-parameterized networks such that it is possible to extract any sub-network without a post-hoc pruning process. The core of our method is to sample small sub-networks from the larger model during training by randomly dropping model weights as in Dropout (Hinton et al., 2012) or DropConnect (Wan et al., 2013). This has the advantage of making the network robust to subsequent pruning. If well-chosen groups of weights are dropped simultaneously, the resulting small sub-networks can be very efficient. In particular, we drop entire layers to extract shallow models at inference time. Previous work (Huang et al., 2016) has shown that dropping layers during training can regularize and reduce the training time of very deep convolutional networks. In contrast, we focus on pruning. As illustrated in Figure 1, an advantage of our layer dropping technique, or LayerDrop, is that from one single deep model, we can extract shallow sub-networks of any desired depth on demand at inference time.
We validate our findings on a variety of competitive benchmarks, namely WMT14 English-German for machine translation, WikiText-103 (Merity et al., 2016) for language modeling, CNN-Dailymail (Hermann et al., 2015) for abstractive summarization, ELI5 (Fan et al., 2019) for long form question answering, and several natural language understanding tasks (Wang et al., 2019) for sentence representation. Our approach achieves state of the art on most of these benchmarks as a result of the regularization effect, which stabilizes the training of larger and deeper networks. We also show that we can prune Transformer architectures to much smaller models while maintaining competitive performance, outperforming specific model reduction strategies dedicated to BERT (Devlin et al., 2018; Sanh, 2019) as well as training smaller models from scratch.

Figure 1: LayerDrop (right) randomly drops layers at training time. At test time, this allows for sub-network selection to any desired depth as the network has been trained to be robust to pruning. In contrast to standard approaches that must re-train a new model from scratch for each model size (left), our method trains only one network from which multiple shallow models can be extracted.

Overall, applying LayerDrop to Transformer networks provides the following key advantages:
• LayerDrop regularizes very deep Transformers and stabilizes their training, leading to state-of-the-art performance across a variety of benchmarks.
• Small and efficient models of any depth can be extracted automatically at test time from a single large pre-trained model, without the need for finetuning.
• LayerDrop is as simple to implement as dropout.
RELATED WORK
Our approach is a form of Dropout (Srivastava et al., 2014) applied to model weights instead of activations, as in DropConnect (Wan et al., 2013). Different from DropConnect, we drop groups of weights to induce group redundancy to create models suited for pruning to shallow, efficient models at inference time. Gomez et al. (2018) propose a targeted Dropout and DropConnect, where they learn the drop rate of the weights to match a targeted pruning scheme. Instead, we adapt the masks to the structures that we are interested in pruning. Closer to our work, the Stochastic Depth approach of Huang et al. (2016) drops layers randomly during training. As opposed to our work, they are interested in accelerating the training of very deep ResNets (He et al., 2016), so their dropping schedule is adapted to this goal. Concurrently to this work, Pham et al. (2019) applied Stochastic Depth to train very deep Transformers for speech and show the benefits of its regularization effect.
More generally, our method is a form of structured pruning. As opposed to weight pruning (LeCun et al., 1990), structured pruning removes coherent groups of weights to preserve the original structure of the network. Structured pruning has been used in some NLP applications, such as machine translation (See et al., 2016), text classification and language modeling (Murray & Chiang, 2015). However, it has been more widely adopted in computer vision and applied to convolutional networks to remove filters (Li et al., 2016; Wen et al., 2016), channels (He et al., 2017), or residual blocks (Huang & Wang, 2018). Similar to Mittal et al. (2018), we take advantage of the plasticity of neural networks to learn models that are resilient to random pruning, rather than learning the pruning itself. We refer the reader to Liu et al. (2018) for an exhaustive study of these approaches and their evaluation in the context of convolutional networks.
Reducing the memory footprint of Transformer architectures and BERT in particular is an active subject of research. Several works have compressed BERT as a post-processing step using different forms of distillation (Turc et al., 2019;Tang et al., 2019;Shulga, 2019;Sanh, 2019). Similarly, various papers have shown evidence that Transformers are over-parameterized, especially that most self-attention heads can be dropped at test time (Michel et al., 2019;Voita et al., 2019). Different from these, our models are trained to be resilient to pruning, which significantly reduces the performance drop induced by test time pruning. Others have proposed trainable adaptive mechanisms to control their memory footprint (Jernite et al., 2016;Sukhbaatar et al., 2019;Correia et al., 2019). These approaches are complementary to ours and should benefit from each other.
METHOD
In this section, we briefly introduce the Transformer, then describe our Structured Dropout technique and its application to layers. We also discuss several inference time pruning strategies.
THE TRANSFORMER ARCHITECTURE
We succinctly review the Transformer architecture and refer the reader to Vaswani et al. (2017) for additional details. A Transformer is a stack of layers composed of two sub-layers: multi-head self-attention followed by a feedforward sub-layer. The multi-head self-attention sub-layer consists of multiple attention heads applied in parallel. Each attention head takes a matrix X where each row represents an element of the input sequence and updates their representations by gathering information from their context using an Attention mechanism (Bahdanau et al., 2014):
Y = Softmax(Xᵀ K (QX + P)) VX,
where K, V, Q and P are matrices of parameters. The outputs of the heads are then concatenated along the time step into a sequence of vectors. The second sub-layer then applies a fully connected feedforward network to each element of this sequence independently, FFN(x) = U ReLU(Vx), where V and U are matrices of parameters. Each sub-layer is followed by an AddNorm operation, that is, a residual connection (He et al., 2016) and a layer normalization (Ba et al., 2016).
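As a point of reference for the two sub-layers just described, the following is a minimal PyTorch sketch of one Transformer layer. It is an illustrative simplification, not the fairseq implementation; the module name and dimension defaults are assumptions for the example.

    import torch
    import torch.nn as nn

    class TransformerLayer(nn.Module):
        def __init__(self, d_model=512, n_heads=8, d_ffn=2048):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads)
            self.ffn = nn.Sequential(
                nn.Linear(d_model, d_ffn), nn.ReLU(), nn.Linear(d_ffn, d_model)
            )
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)

        def forward(self, x):
            # x: (seq_len, batch, d_model)
            # Multi-head self-attention sub-layer followed by AddNorm
            attn_out, _ = self.attn(x, x, x)
            x = self.norm1(x + attn_out)
            # Feedforward sub-layer followed by AddNorm
            x = self.norm2(x + self.ffn(x))
            return x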
TRAINING TRANSFORMERS WITH RANDOM STRUCTURED PRUNING
We present a regularization approach that makes Transformers robust to subsequent structured pruning at inference time. We focus in particular on the case where the targeted structure is a layer.
RANDOMLY DROPPING STRUCTURES AT TRAINING TIME
Regularizing networks to be robust to pruning can be achieved by randomly removing weights during its training as in DropConnect (Wan et al., 2013). In this approach, each weight is dropped independently following a Bernoulli distribution associated with a parameter p > 0 that controls the drop rate. This is equivalent to a pointwise multiplication of the weight matrix W with a randomly sampled {0, 1} mask matrix M:
W_d = M ⊙ W.
DropConnect is a form of random unstructured pruning that leads to smaller, but not necessarily more efficient, models. We propose to add structure to this mechanism to target model efficiency.
Random Structured Dropout. The weights of a Transformer network belong to multiple overlapping structures, such as heads, FFN matrices, or layers. Dropping weights using groups that follow some of these inherent structures potentially leads to a significant reduction of the inference time. This is equivalent to constraining the mask M to be constant over some predefined groups of weights. More precisely, given a set G of predefined groups of weights, the {0, 1} mask matrix M is randomly sampled over groups instead of weights:
∀i, M[i] ∈ {0, 1}, and ∀G ∈ G, ∀(i, j) ∈ G, M[i] = M[j].
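As a minimal sketch of this constraint (the helper name and group encoding are illustrative assumptions, not from the paper's code), a group-constant mask can be sampled as follows; applying it as W_d = mask * W recovers the pointwise formulation above.

    import torch

    def structured_mask(shape, groups, p):
        # One Bernoulli draw per group; every weight index inside a group
        # shares that draw, so M[i] = M[j] for all (i, j) in the same group.
        mask = torch.ones(shape)
        for group in groups:  # each group is a list of index tuples into W
            if torch.rand(1).item() < p:
                for idx in group:
                    mask[idx] = 0.0
        return mask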
This structured dropout formulation is general and can be applied to any overlapping groups of weights, whether heads, FFN matrices, or layers. Nonetheless, not all of the structures in a Transformer lead to the same benefits when dropped. For example, dropping attention heads does not reduce runtime as they are usually computed in parallel. For simplicity, we focus on dropping layers, and we name this structured pruning, LayerDrop. This is inspired by the Stochastic Depth approach of Huang et al. (2016) used to train very deep ResNets (He et al., 2015).
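When the groups are entire layers, the mechanism reduces to randomly skipping layers during training. Below is a minimal PyTorch sketch, assuming the stacked layers map an input to an output of the same shape (the module name is illustrative; the fairseq implementation differs in details):

    import torch
    import torch.nn as nn

    class LayerDropStack(nn.Module):
        def __init__(self, layers, p_drop=0.2):
            super().__init__()
            self.layers = nn.ModuleList(layers)  # e.g. Transformer layers
            self.p_drop = p_drop                 # LayerDrop rate

        def forward(self, x):
            for layer in self.layers:
                # At training time, skip the whole layer with probability
                # p_drop; the residual path is the identity, so x passes
                # through unchanged. At test time all layers are used
                # unless explicitly pruned.
                if self.training and torch.rand(1).item() < self.p_drop:
                    continue
                x = layer(x)
            return x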
PRUNING AT INFERENCE TIME
Selecting Layers to Prune. Training with LayerDrop makes the network more robust to predicting with missing layers. However, LayerDrop does not explicitly provide a way to select which groups to prune. We consider several different pruning strategies, described below:
• Every Other: A straightforward strategy is to simply drop every other layer. Pruning with a rate p means dropping the layers at depth d such that d ≡ 0 (mod 1/p). This strategy is intuitive and leads to balanced networks.
• Search on Valid: Another possibility is to compute various combinations of layers to form shallower networks using the validation set, then select the best performing for test. This is straightforward but computationally intensive and can lead to overfitting on validation.
• Data Driven Pruning: Finally, we propose data driven pruning where we learn the drop rate of each layer. Given a target drop rate p, we learn an individual drop rate p_d for the layer at depth d such that the average rate over layers is equal to p. More precisely, we parameterize p_d as a non-linear function of the activation of its layer and apply a softmax. At inference time, we forward only the fixed top-k highest scoring layers based on the softmax output (e.g. chosen layers do not depend on the input features).
In practice, we observe that the Every Other strategy works surprisingly well across many tasks and configurations. Search on Valid and Data Driven Pruning only offer marginal gains. Note that we do not further finetune any of the pruned networks (see Appendix for analysis of finetuning).
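A minimal sketch of the Every Other strategy described above, assuming 1-indexed depths (the helper name is illustrative):

    def prune_every_other(layers, p):
        # Drop layers at depths d with d ≡ 0 (mod round(1/p)); for example,
        # p = 0.5 keeps layers 1, 3, 5, ... and drops layers 2, 4, 6, ...
        step = round(1.0 / p)
        return [layer for d, layer in enumerate(layers, start=1) if d % step != 0]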
Setting the drop rate for optimal pruning. There is a straightforward relationship between the drop rate of groups and the average pruning level that the network should be resilient to. Assuming N groups and a fixed drop ratio p, the average number of groups used by the network during training is N (1 − p). As a consequence, to target a pruning size of r groups, the optimal drop rate is:
p* = 1 − r/N
In practice, we observe that networks are more robust to pruning than their expected ratio but higher pruning rates leads to better performance for smaller models. We use a LayerDrop rate of p = 0.2 for all our experiments, but we recommend p = 0.5 to target very small inference time models.
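For example, to target a pruned size of r = 8 layers from an N = 16 layer network, the optimal rate is p* = 1 − 8/16 = 0.5, which matches the rate recommended above for very small inference-time models.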
EXPERIMENTAL SETUP
We apply our method to a variety of sequence modeling tasks: neural machine translation, language modeling, summarization, long form question answering, and various natural language understanding tasks. Our models are implemented in PyTorch using fairseq-py (Ott et al., 2019). Additional implementation and training details with hyperparameter settings are in the Appendix.
Neural Machine Translation. We experiment on the WMT English-German machine translation benchmark using the Transformer Big architecture. We use the dataset of 4.5M en-de sentence pairs from WMT16 (Vaswani et al., 2017) for training, newstest2013 for validation, and newstest2014 for test. We optimize the dropout value within the range {0.1, 0.2, 0.5} on the validation set and set the LayerDrop rate p to 0.2. For generation, we average the last 10 checkpoints, set the length penalty to 0.6, and beam size to 8, following the settings suggested in Wu et al. (2019), and measure case-sensitive tokenized BLEU. We apply compound splitting, as used in Vaswani et al. (2017).
Language Modeling. We experiment on the Wikitext-103 language modeling benchmark (Merity et al., 2016) which contains 100M tokens and a large vocabulary size of 260K. We adopt the 16 layer Transformer used in Baevski & Auli (2018). We set the LayerDrop rate p to 0.2 and tune the standard dropout parameter in {0.1, 0.2, 0.3} on the validation set. We report test set perplexity (PPL).
Summarization. We adopt the Transformer base architecture and training schedule from Edunov et al. (2019) and experiment on the CNN-Dailymail multi-sentence summarization benchmark. The training data contains over 280K full-text news articles paired with multi-sentence summaries (Hermann et al., 2015; See et al., 2017). We tune a generation length in the range {40, 50, 60} and use 3-gram blocking. We set the LayerDrop rate p to 0.2. We evaluate using ROUGE (Lin, 2004).

Long Form Question Answering. We consider the Long Form Question Answering Dataset ELI5 of Fan et al. (2019), which consists of 272K question answer pairs from the subreddit Explain Like I'm Five along with extracted supporting documents from web search. We follow the Transformer Big architecture and training procedure of Fan et al. (2019). We generate long answers using beam search with beam size 5 and apply 3-gram blocking. We evaluate with ROUGE.
Sentence representation Pre-training. We train base and large BERT (Devlin et al., 2018) models following the open-source implementation of RoBERTa (Liu et al., 2019). We use two datasets: Bookscorpus + Wiki from Devlin et al. (2018) and the larger combination of Bookscorpus + OpenWebText + CC-News + Stories. We evaluate the pretrained models on various natural language understanding tasks. Specifically, we evaluate accuracy on MRPC (Dolan & Brockett, 2005), QNLI (Rajpurkar et al., 2016), MNLI (Williams et al., 2018), and SST2 (Socher et al., 2013).
RESULTS
LAYERDROP AS A REGULARIZER
Language Modeling. In Table 2, we show the impact of LayerDrop on the performance of a Transformer network trained in the setting of Adaptive Inputs (Baevski & Auli, 2018). Adding LayerDrop to a 16 layer Transformer improves the performance by 0.4 perplexity, matching the state-of-the-art results of Transformer-XL. Our 40 layer Transformer with LayerDrop further improves the state of the art by 0.6 points. Very deep Transformers are typically hard to train because of instability and memory usage, and they are prone to overfitting on a small dataset like Wikitext-103. LayerDrop regularizes the network, reduces the memory usage, and increases training stability as fewer layers are active at each forward pass. These results confirm that this type of approach can be used to efficiently train very deep networks, as shown in Huang et al. (2016) for convolutional networks.
Sequence to sequence modeling. Similarly, as shown in Table 1 and Table 3, applying LayerDrop to Transformers on text generation tasks such as neural machine translation, summarization, and long form question answering also boosts performance for all tasks. In these experiments, we take the Transformer architectures that are state of the art and train them with LayerDrop. In neural machine translation on newstest2014, our 12 encoder layer Transformer model with LayerDrop further improves the state of the art, reaching 30.2 BLEU. In comparison, a standard Transformer trained without LayerDrop diverges with 12 encoder layers. This is a known problem, and techniques such as improved initialization could be used to maintain stability (Junczys-Dowmunt, 2019; Zhang et al., 2019), but are out of the scope of this work. Similar results are seen in summarization.

Bi-Directional Pre-training. In a second set of experiments, we look at the impact of LayerDrop on pre-training for sentence representation models and subsequent finetuning on multiple natural language understanding tasks. We compare our models to a variant of BERT for sentence representations, called RoBERTa (Liu et al., 2019), and analyze the results of finetuning for data adaptation on MNLI, MRPC, QNLI, and SST2. We apply LayerDrop during both pre-training and finetuning.
We compare the performance of the large architecture on the BooksCorpus+Wiki dataset used in BERT. We analyze the performance of training on the additional data used in RoBERTa as well as pre-training for even longer. Comparing fixed model size and training data, LayerDrop can improve the performance of RoBERTa on several tasks. LayerDrop can further be used to both enable and stabilize the training (Huang et al., 2016) of models double the size for even stronger performance.
PRUNING TRANSFORMER LAYERS TO ON-DEMAND DEPTH WITH LAYERDROP
Pruning Generation Tasks. In Figure 2, we investigate the impact of the number of pruned decoder layers on the performance of a Transformer for language modeling, neural machine translation, and summarization. We compare three different settings: standard Transformer models trained without LayerDrop but subsequently pruned, standard Transformer models trained from scratch to each desired depth, and lastly our approach: pruning layers of a Transformer trained with LayerDrop. Our model is trained once with the maximum number of layers and then pruned to the desired depth, without any finetuning in the shallower configuration. Our approach outperforms small models trained from scratch, showing that LayerDrop leads to more accurate small models at a whole range of depths. Further, training with LayerDrop does not incur the computational cost of retraining a new model for each desired depth. For completeness, dropping layers of a deep Transformer trained without LayerDrop performs poorly as it was not trained to be robust to missing layers.
Pruning BERT-like Models. In Table 7 (left), we compare pruning Transformers trained with LayerDrop to different approaches used to create smaller, shallower models. We compare to BERT base and RoBERTa base trained from scratch with 6 and 3 layers as well as recent work on distillation, called DistilBERT (Sanh, 2019). We analyze both BERT and RoBERTa models as the vocabulary is not the same due to differences in subword tokenization, which affects performance.
DistilBERT occasionally performs worse than BERT of the same size trained from scratch, which is consistent with prior findings on training small models from scratch. Our approach, however, obtains results better than BERT and RoBERTa trained from scratch. Further, our method does not need any post-processing: we simply prune every other layer of our RoBERTa model that has been pre-trained with LayerDrop and finetune the small models on each of the downstream tasks, following standard procedure. When training with additional data, shown in Table 7 (right), even stronger performance can be achieved.
ABLATION STUDIES
Comparison of Structured Dropout Figure 4 (left) contrasts various forms of structured dropout: dropping attention heads, FFN matrices, and entire Transformer layers. Dropping heads alone is worse than dropping entire sub-layers or layers. It also offers no advantage in terms of running time as attention heads are computed in parallel for computational efficiency. We observe no large differences between dropping sub-layers and layers, possibly because we are working with relatively shallow networks. In theory, dropping sub-layers should perform better and we expect this to be the case with very deep Transformers. We experiment with overlapping structured groups, such as heads + layers and heads + sub-layers and find that the beneficial effect can be advantageously combined. We focus on layers for simplicity, as dropping more structures introduces more parameters to tune.
Comparison of Various Pruning Strategies. Figure 4 (right) shows that the straightforward strategy of selecting every other layer is tough to beat. We find only marginal improvement can be gained by searching over the validation set for the best set of 8 layers to use and by learning which layers to drop. In contrast, dropping chunks of consecutive layers is harmful. Namely, removing the first half or last half of a model is particularly harmful, as the model does not have the ability to process the input or project to the full vocabulary to predict the subsequent word.
Choosing which Layers to Prune. Not all layers are equally important. In an experiment on Wikitext-103, we pruned selections of 8 layers at random. Figure 5 displays the perplexity when a given layer is removed, averaging results from 20 pruned models per layer. The input and output layers of a network are the most important, as they process the input and project to the output vocabulary.
Relationship between LayerDrop at Training Time and Pruning at Inference Time. Figure 6 displays the relationship between the training time LayerDrop and the performance of a pruned network at test time. If significant depth reduction is desired, training with larger LayerDrop is beneficial -this equalizes the train and test time settings. An analysis for BERT is in the Appendix.
CONCLUSION
Structured dropout regularizes neural networks to be more robust to applying structured pruning at inference time. We focus on the setting where structures are layers, enabling pruning of shallow and efficient models of any desired depth. In a variety of text generation and pre-training tasks, we show that LayerDrop enables and stabilizes the training of substantially deeper networks and simultaneously allows for the extraction of models of various depths with strong performance.

Training smaller models: We train the 6 and 3 layer RoBERTa models following the same settings, but using the smaller number of layers and without LayerDrop. We finetune with the same sweep parameters. The 6 and 3 layer BERT model results are taken from Devlin et al. (2018).
Training larger models: We train the 48 layer RoBERTa model with 0.5 LayerDrop so only 24 layers on average are active during a forward pass.
Pruning: When pruning RoBERTa models, we use the Every Other Layer strategy and finetune without LayerDrop for the smaller models.
A.2 ADDITIONAL RESULTS
IWSLT. Table 6 displays results on the IWSLT de-en dataset. We see small improvement, likely as the network is small and already has a large quantity of regularization with dropout, attention dropout, and weight decay. The Transformer is not the state-of-the-art architecture, and there remains a large gap between the Transformer and the DynamicConv model proposed by Wu et al. (2019).
Pruning BERT Models
The numerical values corresponding to the pruned 6 and 3 layer RoBERTa + LayerDrop models are shown in Table 7.
A.3 ADDITIONAL ANALYSIS
Impact of LayerDrop on training time. Figure 7 shows the increase in training speed when training with increasingly large quantities of LayerDrop. The words per second were computed on 8 V100 GPUs with 32GB of memory, without floating point 16, for a 16 layer model trained on Wikitext-103. Assuming fixed layer size, LayerDrop removes layers at training time randomly, which increases the training speed almost 2x if dropping half the number of layers.
BERT: Relationship between LayerDrop at Training Time and Pruning at Inference Time. Similar to the analysis on Language Modeling, we find that training with larger quantities of LayerDrop allows for more aggressive pruning at inference time on various natural language generation tasks. However, as these tasks involve a finetuning step on the downstream tasks after pre-training, the effect is less straightforward. Results are shown in Figure 8. LayerDrop allows models to be pruned to the desired depth at test time.

Impact of Finetuning. Apart from finetuning for data adaptation on the GLUE tasks, we do not finetune the performance of our smaller models on any of the other tasks we consider in this work. As shown in Table 8, we found that finetuning the pruned models only results in marginal improvement. Further, the finetuning parameters were dependent on the depth of the model at test time and difficult to optimize.
Figure 2: Performance as a function of Pruning on various generation tasks (test set), compared to training smaller models from scratch and pruning a Transformer baseline trained without LayerDrop. Pruning networks with LayerDrop performs strongly compared to these alternatives.

Figure 3: (left) Performance as a function of Pruning on MNLI and SST2 compared to BERT and RoBERTa trained from scratch and DistilBERT. Pruning one network trained with LayerDrop (blue) outperforms alternatives that require a new network for each point. (right) Performance when Training on More Data shows even stronger results on MNLI and SST2 for pruned models.

Figure 4: (left) Impact of Various Structured Dropouts on Wikitext-103 Valid. Dropping Layers is straightforward and has strong performance. (right) Comparison of Pruning Strategies on Wikitext-103 Valid. Marginal gains can be achieved, but dropping every other layer is hard to beat.

Figure 5: Relative Importance of Specific Layers (Wikitext-103 Valid). The full network is pruned into various 8 layer sub-network configurations, and the average perplexity when pruning layer n is displayed above.

Figure 6: Effect of Train LayerDrop on Inference-time Pruning (Wikitext-103 Valid). Training with larger LayerDrop is beneficial for significant pruning.

Figure 7: Effect of LayerDrop on training speed.

Figure 8: Effect of Train LayerDrop on Inference-time Pruning on MNLI, SST2, and QNLI.
Table 1: Results on WMT en-de Machine Translation (newstest2014 test set)
Model | Enc Layers | Dec Layers | BLEU
Transformer (Vaswani et al., 2017) | 6 | 6 | 28.4
Transformer (Ott et al., 2018) | 6 | 6 | 29.3
DynamicConv (Wu et al., 2019) | 7 | 6 | 29.7
Transformer (Ott et al., 2018) + LayerDrop | 6 | 6 | 29.6
Transformer (Ott et al., 2018) + LayerDrop | 12 | 6 | 30.2
Table 2: Results on Wikitext-103 language modeling benchmark (test set)
Model | Layers | Params | PPL
Adaptive Inputs (Baevski & Auli, 2018) | 16 | 247M | 18.7
Transformer XL Large (Dai et al., 2019) | 18 | 257M | 18.3
Adaptive Inputs + LayerDrop | 16 | 247M | 18.3
Adaptive Inputs + LayerDrop | 40 | 423M | 17.7
Table 3: Results for CNN-Dailymail Summarization and ELI5 QA (test set)
Model | Enc | Dec | ROUGE-1 | ROUGE-2 | ROUGE-L
Abstractive Summarization:
Transformer (Edunov et al., 2019) | 6 | 6 | 40.1 | 17.6 | 36.8
Transformer + LayerDrop | 6 | 6 | 40.5 | 17.9 | 37.1
Transformer + LayerDrop | 6 | 8 | 41.1 | 18.1 | 37.5
Long Form Question Answering:
Transformer Multitask (Fan et al., 2019) | 6 | 6 | 28.9 | 5.4 | 23.1
Transformer Multitask + LayerDrop | 6 | 6 | 29.4 | 5.5 | 23.4

Table 4: Results on Various NLU Tasks for RoBERTa Large trained for 500K updates (dev set)
Data | Layers | Model | MNLI-m | MRPC | QNLI | SST2
Books + Wiki | 24 | RoBERTa | 89.0 | 90.2 | 93.9 | 95.3
Books + Wiki | 24 | RoBERTa + LayerDrop | 89.2 | 90.2 | 94.2 | 95.4
+ more data | 24 | RoBERTa | 90.2 | 90.9 | 94.7 | 96.4
+ more data | 24 | RoBERTa + LayerDrop | 90.1 | 91.0 | 94.7 | 96.8
+ more data | 48* | RoBERTa + LayerDrop | 90.4 | 90.9 | 94.8 | 96.9
*The 48 layer model was trained for 300K updates.
A APPENDIX

A.1 ADDITIONAL IMPLEMENTATION DETAILS

A.1.1 NEURAL MACHINE TRANSLATION

WMT en-de: We model a 32K joint byte-pair encoding vocabulary (Sennrich et al., 2015). We train using the cosine (Loshchilov & Hutter, 2016) learning rate schedule from Wu et al. (2019) with label smoothing 0.1.

IWSLT de-en: The dataset consists of 160K training pairs, fully lowercased. We model a 10K joint BPE vocabulary and generate with beam size 4. We do not average checkpoints. Following Wu et al. (2019), we use the Transformer base architecture with 6 encoder layers and 6 decoder layers. As the dataset is small, we decrease the overall model size and instead use the following parameters: FFN size 1024, hidden dimension 512, and 4 attention heads.
Table 5: Hyperparameters for RoBERTa Pretraining
Hyperparameter | Base | Large
Number of Layers | 12 | 24
Hidden Size | 768 | 1024
FFN Size | 3072 | 4096
Attention Heads | 12 | 16
LayerDrop | 0.2 | 0.2
Warmup Steps | 24k | 30k
Peak Learning Rate | 6e-4 | 4e-4
Batch Size | 8192 | 8192

Table 6: BLEU for IWSLT (test set)
Model | BLEU
Transformer (Wu et al., 2019) | 34.4
Dynamic Conv (Wu et al., 2019) | 35.2
Transformer + LayerDrop | 34.5
Table 7: Comparison between BERT base with and without distillation and our RoBERTa base trained with LayerDrop. Our models are pruned before finetuning on each individual task. The numbers for BERT are taken from Devlin et al. (2018).
Table 8: Impact of additional finetuning on a 16 layer language model pruned to 8 layers.
Pruning: We apply the Every Other Layer strategy to the decoder and do not finetune.

A.1.2 LANGUAGE MODELING

Training: To handle the large vocabulary of Wikitext-103, we follow Dauphin et al. (2017) and Baevski & Auli (2018) in using adaptive softmax and adaptive input for computational efficiency. For both input and output embeddings, we use dimension size 1024 and three adaptive bands: 20K, 40K, and 200K. We use a cosine learning rate schedule (Baevski & Auli, 2018; Loshchilov & Hutter, 2016) and train with Nesterov's accelerated gradient (Sutskever et al., 2013). We set the momentum to 0.99 and renormalize gradients if the norm exceeds 0.1 (Pascanu et al., 2014). During training, we partition the data into blocks of contiguous tokens that ignore document boundaries. At test time, we respect sentence boundaries.

Pruning: We apply the Every Other Layer strategy and do not finetune.

A.1.3 SUMMARIZATION

Training: We train using Adam with a cosine learning rate schedule, warming up for 10K steps. We optimize dropout in the range {0.2, 0.3} on the validation set and set LayerDrop to 0.2.

Pruning: We apply the Every Other Layer strategy to the decoder and do not finetune.

A.1.4 LONG FORM QUESTION ANSWERING

Training: We compare to the full multi-task setting of Fan et al. (2019), where data augmentation and multi-tasking is done at training time to increase the data available.

Generation: We set the minimum length to 150 tokens and the maximum length to 200.

A.1.5 BI-DIRECTIONAL PRE-TRAINING

Training: The base architecture is a 12 layer model with embedding size 768 and FFN size 3072. The large architecture consists of 24 layers with embedding size 1024 and FFN size 4096. For both settings, we follow Liu et al. (2019) in using the subword tokenization scheme from Radford et al. (2019), which uses bytes as subword units. This eliminates unknown tokens. Note this produces a different vocabulary size than BERT (Devlin et al., 2018), meaning models of the same depth do not have the same number of parameters. We train with large batches of size 8192 and we do not use next sentence prediction (Lample & Conneau, 2019). We optimize with Adam with a polynomial decay learning rate schedule.

Finetuning: During finetuning, we hyperparameter search over three learning rate options (1e-5, 2e-5, 3e-5) and batch size (16 or 32 sentences). The other parameters are set following Liu et al. (2019). We do single task finetuning, meaning we only tune on the data provided for the given natural language understanding task. We do not perform ensembling. When finetuning models trained with LayerDrop, we apply LayerDrop during finetuning time as well.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Gonçalo M Correia, Vlad Niculae, and André FT Martins. Adaptively sparse transformers. arXiv preprint arXiv:1909.00015, 2019.
Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.
Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In Proc. of ICML, 2017.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the International Workshop on Paraphrasing, 2005.
Sergey Edunov, Alexei Baevski, and Michael Auli. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4052-4059, 2019.
Angela Fan, David Grangier, and Michael Auli. Controllable abstractive summarization. arXiv, abs/1711.05217, 2017.
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. Eli5: Long form question answering. arXiv preprint arXiv:1907.09190, 2019.
Aidan N Gomez, Ivan Zhang, Kevin Swersky, Yarin Gal, and Geoffrey E Hinton. Targeted dropout. 2018.
Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, and Herve Jegou. Efficient softmax approximation for gpus. arXiv, abs/1609.04309, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. of CVPR, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389-1397, 2017.
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proc. of NIPS, 2015.
Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European conference on computer vision, pp. 646-661. Springer, 2016.
Gao Huang, Shichen Liu, Laurens van der Maaten, and Kilian Q Weinberger. Condensenet: An efficient densenet using learned group convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2752-2761, 2018.
Zehao Huang and Naiyan Wang. Data-driven sparse structure selection for deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 304-320, 2018.
Yacine Jernite, Edouard Grave, Armand Joulin, and Tomas Mikolov. Variable computation in recurrent neural networks. arXiv preprint arXiv:1611.06188, 2016.
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651, 2016.
Marcin Junczys-Dowmunt. Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation. arXiv preprint arXiv:1907.06170, 2019.
Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pp. 598-605, 1990.
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Workshop on Text Summarization Branches Out, 2004.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270, 2018.
Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv, abs/1609.07843, 2016.
Paul Michel, Omer Levy, and Graham Neubig. Are sixteen heads really better than one? arXiv preprint arXiv:1905.10650, 2019.
Deepak Mittal, Shweta Bhardwaj, Mitesh M Khapra, and Balaraman Ravindran. Recovering from random pruning: On the plasticity of deep convolutional neural networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 848-857. IEEE, 2018.
Kenton Murray and David Chiang. Auto-sizing neural networks: With applications to n-gram language models. arXiv preprint arXiv:1508.05051, 2015.
Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proc. of WMT, 2018.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. In Proceedings of the Second International Conference on Learning Representations (ICLR 2014), 2014.
Ngoc-Quan Pham, Thai-Son Nguyen, Jan Niehues, Markus Muller, and Alex Waibel. Very deep self-attention networks for end-to-end speech recognition. arXiv preprint arXiv:1904.13377, 2019.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP, pp. 2383-2392. Association for Computational Linguistics, 2016.
Victor Sanh. Smaller, faster, cheaper, lighter: Introducing distilbert, a distilled version of bert. https://medium.com/huggingface/distilbert-8cf3380435b5, 2019.
Abigail See, Minh-Thang Luong, and Christopher D Manning. Compression of neural machine translation models via pruning. arXiv preprint arXiv:1606.09274, 2016.
Abigail See, Peter J Liu, and Christopher D Manning. Get to the point: Summarization with pointer-generator networks. In Proc. of ACL, 2017.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proc. of ACL, 2016.
Dima Shulga. Distilling bert how to achieve bert performance using logistic regression. towardsdatascience.com, 2019.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pp. 1631-1642, 2013.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958, 2014.
Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive attention span in transformers. arXiv preprint arXiv:1905.07799, 2019.
Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139-1147, 2013.
Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. Distilling task-specific knowledge from bert into simple neural networks. arXiv preprint arXiv:1903.12136, 2019.
Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Well-read students learn better: The impact of student initialization on knowledge distillation. arXiv preprint arXiv:1908.08962, 2019.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418, 2019.
Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In International conference on machine learning, pp. 1058-1066, 2013.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of ICLR, 2019.
Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in neural information processing systems, pp. 2074-2082, 2016.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT, 2018.
Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations, 2019. URL https://arxiv.org/abs/1901.10430.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.
Hongyi Zhang, Yann N Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. arXiv preprint arXiv:1901.09321, 2019. |
11,428,611 | ROTATIONAL UNIT OF MEMORY | The concepts of unitary evolution matrices and associative memory have boosted the field of Recurrent Neural Networks (RNN) to state-of-the-art performance in a variety of sequential tasks. However, RNN still have a limited capacity to manipulate long-term memory. To bypass this weakness the most successful applications of RNN use external techniques such as attention mechanisms. In this paper we propose a novel RNN model that unifies the state-of-the-art approaches: Rotational Unit of Memory (RUM). The core of RUM is its rotational operation, which is, naturally, a unitary matrix, providing architectures with the power to learn long-term dependencies by overcoming the vanishing and exploding gradients problem. Moreover, the rotational unit also serves as associative memory. We evaluate our model on synthetic memorization, question answering and language modeling tasks. RUM learns the Copying Memory task completely and improves the state-of-the-art result in the Recall task. RUM's performance in the bAbI Question Answering task is comparable to that of models with attention mechanism. We also improve the state-of-the-art result to 1.189 bits-per-character (BPC) loss in the Character Level Penn Treebank (PTB) task, which is to signify the applications of RUM to real-world sequential data. The universality of our construction, at the core of RNN, establishes RUM as a promising approach to language modeling, speech recognition and machine translation. * equal contribution | [
6628106,
12713052
] | ROTATIONAL UNIT OF MEMORY
Rumen Dangovski [email protected]
Massachusetts Institute of Technology
Li Jing [email protected]
Massachusetts Institute of Technology
Marin Soljačić [email protected]
Massachusetts Institute of Technology
ROTATIONAL UNIT OF MEMORY
The concepts of unitary evolution matrices and associative memory have boosted the field of Recurrent Neural Networks (RNN) to state-of-the-art performance in a variety of sequential tasks. However, RNN still have a limited capacity to manipulate long-term memory. To bypass this weakness the most successful applications of RNN use external techniques such as attention mechanisms. In this paper we propose a novel RNN model that unifies the state-of-the-art approaches: Rotational Unit of Memory (RUM). The core of RUM is its rotational operation, which is, naturally, a unitary matrix, providing architectures with the power to learn long-term dependencies by overcoming the vanishing and exploding gradients problem. Moreover, the rotational unit also serves as associative memory. We evaluate our model on synthetic memorization, question answering and language modeling tasks. RUM learns the Copying Memory task completely and improves the state-of-the-art result in the Recall task. RUM's performance in the bAbI Question Answering task is comparable to that of models with attention mechanism. We also improve the state-of-the-art result to 1.189 bits-per-character (BPC) loss in the Character Level Penn Treebank (PTB) task, which is to signify the applications of RUM to real-world sequential data. The universality of our construction, at the core of RNN, establishes RUM as a promising approach to language modeling, speech recognition and machine translation. * equal contribution
INTRODUCTION
Recurrent neural networks are widely used in a variety of machine learning applications such as language modeling (Graves et al. (2014)), machine translation (Cho et al. (2014)) and speech recognition (Hinton et al. (2012)). Their flexibility of taking inputs of dynamic length makes RNN particularly useful for these tasks. However, the traditional RNN models such as Long Short-Term Memory (LSTM, Hochreiter & Schmidhuber (1997)) and Gated Recurrent Unit (GRU, Cho et al. (2014)) exhibit some weaknesses that prevent them from achieving human level performance: 1) limited memory: they can only remember a hidden state, which usually occupies a small part of a model; 2) gradient vanishing/explosion (Bengio et al. (1994)) during training: trained with backpropagation through time, the models fail to learn long-term dependencies.
Several ways to address those problems are known. One solution is to use soft and local attention mechanisms (Cho et al. (2014)), which is crucial for most modern applications of RNN. Nevertheless, researchers are still interested in improving basic RNN cell models to process sequential data better. Numerous works (Graves et al. (2014); Ba et al. (2016a)) use associative memory to span a large memory space. For example, a practical way to implement associative memory is to set weight matrices as trainable structures that change according to input instances for training. Furthermore, the recent concept of unitary or orthogonal evolution matrices (Arjovsky et al. (2016); Jing et al. (2017b)) also provides a theoretical and empirical solution to the problem of memorizing long-term dependencies.
Here, we propose a novel RNN cell that resolves simultaneously those weaknesses of basic RNN. The Rotational Unit of Memory is a modified gated model whose rotational operation acts as associative memory and is strictly an orthogonal matrix. We tested our model on several benchmarks. RUM is able to solve the synthetic Copying Memory task, while traditional LSTM and GRU fail. For the synthetic Recall task, RUM exhibits a stronger ability to remember sequences, hence outperforming state-of-the-art RNN models such as the Fast-weight RNN (Ba et al. (2016a)) and WeiNet (Zhang & Zhou (2017)). By using RUM we achieve the state-of-the-art result on the real-world Character Level Penn Treebank task. RUM also outperforms all basic RNN models in the bAbI question answering task. This performance is competitive with that of memory networks, which take advantage of attention mechanisms.
Our contributions are as follows:
1. We develop the concept of the Rotational Unit that combines the memorization advantage of unitary/orthogonal matrices with the dynamic structure of associative memory;
2. We implement the rotational operation into a novel RNN model-RUM-which outperforms significantly the current frontier of models on a variety of sequential tasks.
MOTIVATION AND RELATED WORK
UNITARY APPROACH
The gradient vanishing and exploding problem is well known to obstruct the learning of long-term dependencies (Bengio et al. (1994)).
We will give a brief mathematical motivation of the problem. Let us assume the cost function is $C$. In order to evaluate $\partial C / \partial W_{ij}$, one computes the gradient using the chain rule:
$$\frac{\partial C}{\partial h^{(t)}} = \frac{\partial C}{\partial h^{(T)}}\,\frac{\partial h^{(T)}}{\partial h^{(t)}} = \frac{\partial C}{\partial h^{(T)}} \prod_{k=t}^{T-1} \frac{\partial h^{(k+1)}}{\partial h^{(k)}} = \frac{\partial C}{\partial h^{(T)}} \prod_{k=t}^{T-1} D^{(k)} W,$$
where $D^{(k)} = \mathrm{diag}\{\sigma'(W x^{(k)} + A h^{(k-1)} + b)\}$ is the Jacobian matrix of the point-wise nonlinearity. As long as the eigenvalues of $D^{(k)}$ are of order unity, then if $W$ has eigenvalues $|\lambda_i| \gg 1$, they will cause gradient explosion, $\|\partial C / \partial h^{(t)}\| \to \infty$, while if $W$ has eigenvalues $|\lambda_i| \ll 1$, they can cause gradient vanishing, $\|\partial C / \partial h^{(t)}\| \to 0$. Either situation hampers the efficiency of RNN.
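To see this effect numerically, the following sketch (our illustration, not from the paper; the matrix scale and the uniform stand-in for the nonlinearity's Jacobian diagonal are hypothetical choices) multiplies factors $D^{(k)} W$ across $T$ steps and reports how the gradient norm behaves as the spectral radius of $W$ crosses one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 64, 100

def grad_norm_after_T_steps(scale):
    # Random hidden-to-hidden matrix rescaled to a chosen spectral radius.
    W = rng.standard_normal((n, n))
    W *= scale / np.max(np.abs(np.linalg.eigvals(W)))
    g = rng.standard_normal(n)                 # stand-in for dC/dh^(T)
    for _ in range(T):
        D = np.diag(rng.uniform(0.9, 1.0, n))  # Jacobian of the nonlinearity, near unity
        g = g @ (D @ W)                        # one factor D^(k) W of the product
    return np.linalg.norm(g)

for s in (0.8, 1.0, 1.2):
    print(f"spectral radius {s}: gradient norm after {T} factors = "
          f"{grad_norm_after_T_steps(s):.3e}")
```

The printed norms shrink toward zero for a spectral radius below one and blow up above one, which is exactly the pathology the unitary approach avoids.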
LSTM is designed to solve this problem, but gradient clipping (Pascanu et al. (2012)) is still required for training. Recently, by restraining the hidden-to-hidden matrix to be orthogonal or unitary, many models have overcome the problem of exploding and vanishing gradients. Theoretically, unitary and orthogonal matrices will keep the norm of the gradient because the absolute value of their eigenvalues equals one.
Several approaches have successfully developed the applications of unitary and orthogonal matrices to recurrent neural networks. Arjovsky et al. (2016); Jing et al. (2017b) use parameterizations to form the unitary spaces. Wisdom et al. (2016) applies gradient projection onto a unitary manifold. Vorontsov et al. (2017) uses penalty terms as a regularization to restrain matrices to be unitary, hence accessing long-term memorization.
Learning long-term dependencies alone is not sufficient for a powerful RNN. Jing et al. (2017a) finds that the combination of unitary/orthogonal matrices with a gated mechanism improves the performance of RNN because of the benefits of a forgetting ability. Jing et al. (2017a) also points out the optimal way of such a unitary/gated combination: the unitary/orthogonal matrix should appear before the reset gate, which can then be followed by a modReLU activation. In RUM we implement an orthogonal operation in the same place, but the construction of that matrix is completely different: instead of parameterizing the kernel, we encode a natural rotation, generated by the inputs and the hidden state.
ASSOCIATIVE MEMORY APPROACH
Limited memory is a genuine shortcoming of RNN. Adding an external associative memory is a natural solution. For instance, the Neural Turing Machine (Graves et al. (2014)) and many other models have shown the power of using this technique. While it expands the accessible memory space, the technique significantly increases the size of the model, therefore making the process of learning so many parameters harder. Now, we will briefly describe the concept of associative memory. In basic RNN, $h_t = \sigma(W x_t + A h_{t-1} + b)$, where $h_t$ is the hidden state at time step $t$ and $x$ is the input data at each step. Here $W$ and $A$ are trainable parameters that are fixed in the model. A recent approach replaces $A$ with a dynamic $A_t$ (as a function of time) so that this matrix can serve as a memory state. Thus, the memory size increases from $O(N_h)$ to $O(N_h^2)$, where $N_h$ is the hidden size. In particular, $A_t$ is determined by $A_{t-1}$, $h_{t-1}$ and $x_t$, which can be a part of a multi-layer or a Hopfield net. By treating the RNN weights as memory determined by the current input data, a larger memory size is provided and fewer trainable parameters are required. This significantly increases the memorization ability of RNN. Our model also falls into this category of associative memory through its rotational design of an orthogonal $A_t$ matrix.
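As a concrete instance of such a dynamic $A_t$, the fast-weights rule of Ba et al. (2016a) accumulates outer products of past hidden states. The sketch below is our illustration only: the decay $\lambda$, the fast learning rate $\eta$, the sizes, and the random stand-in for the input projection are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_h = 32
lam, eta = 0.95, 0.5          # decay and fast learning rate (illustrative)

A = np.zeros((n_h, n_h))      # fast-weight memory: O(N_h^2) state
h = rng.standard_normal(n_h)

for t in range(10):
    x_proj = rng.standard_normal(n_h)      # stands in for W x_t
    A = lam * A + eta * np.outer(h, h)     # Hebbian fast-weight update
    h = np.tanh(x_proj + A @ h)            # recurrent step using the memory A_t

print("fast-weight matrix norm:", np.linalg.norm(A))
```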
METHODS
The goal of this section is to suggest ways of engineering models that incorporate rotations as units of memory. In the following discussion $N_x$ is the input size and $N_h$ is the hidden size.
THE OPERATION Rotation
The operation Rotation is an efficient encoder of an orthogonal operation, which acts as a unit of memory. Rotation computes an orthogonal operator $R(a,b) \in \mathbb{R}^{N_h \times N_h}$ that represents the rotation between two non-collinear vectors $a$ and $b$ in the two-dimensional subspace $\mathrm{span}(a,b)$ of the Euclidean space $\mathbb{R}^{N_h}$ with distance $\|\cdot\|$. As a consequence, $R$ can act as a kernel on a hidden state $h$. More formally, what we propose is a function
$$\mathrm{Rotation}\colon \mathbb{R}^{N_h} \times \mathbb{R}^{N_h} \to \mathbb{R}^{N_h \times N_h},$$
such that after ortho-normalizing $a$ and $b$ to
$$u_a = \frac{a}{\|a\|} \quad \text{and} \quad u_b = \frac{b - (u_a \cdot b)\, u_a}{\|b - (u_a \cdot b)\, u_a\|}, \quad \text{for which} \quad \theta = \arccos\left(\frac{a \cdot b}{\|a\|\,\\|b\|}\right),$$
we encode the following matrix in $\mathbb{R}^{N_h \times N_h}$:
$$R(a,b) = \big[\mathbf{1} - u_a^{\top} u_a - u_b^{\top} u_b\big] + (u_a, u_b)^{\top}\, \tilde{R}(\theta)\, (u_a, u_b). \quad (1)$$
A practical advantage of Rotation is that it is both orthogonal and differentiable. On one hand, it is a composition of differentiable sub-operations, which enables learning via backpropagation. On the other hand, it preserves the norm of the hidden state, hence it can yield more stable gradients. We were motivated to find differentiable implementations of unitary (orthogonal in particular) operations in existing toolkits for deep learning. Our conclusion is that Rotation can be implemented in various frameworks that are utilized for RNN and other deep learning architectures. Indeed, Rotation is not constrained to parameterize a unitary structure, but instead it produces an orthogonal matrix from simple components in the cell, which makes it useful for experimentation.
We implement Rotation together with its action on a hidden state efficiently.¹ We do not need to compute the matrix $R_t$ before we rotate. Instead we can directly apply the RHS of equation (1) to the hidden state. Hence, the memory complexity of our algorithm is $O(N_b \cdot N_h)$, which is determined by the RHS of (1). Note that we only use two trainable vectors in $\mathbb{R}^{N_h}$ to generate orthogonal weights in $\mathbb{R}^{N_h \times N_h}$, which means the model has $O(N_h^2)$ degrees of freedom for a single unit of memory. Likewise, the time complexity is $O(N_b \cdot N_h^2)$. Thus, Rotation is a universal operation that enables implementations suitable to any neural network model with backpropagation.
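For concreteness, equation (1) can be realized with a few lines of linear algebra. The sketch below is our own illustration (independent of the authors' released code referenced in the footnote); it builds the full matrix $R(a,b)$ so that its properties can be checked, even though, as noted above, an efficient implementation applies the right-hand side of (1) to $h$ directly.

```python
import numpy as np

def rotation(a, b, eps=1e-8):
    """Orthogonal operator R(a, b) of equation (1), rotating within span(a, b)."""
    u_a = a / (np.linalg.norm(a) + eps)
    v = b - (u_a @ b) * u_a                         # Gram-Schmidt step
    u_b = v / (np.linalg.norm(v) + eps)
    cos_th = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(cos_th, -1.0, 1.0))   # angle between a and b
    U = np.stack([u_a, u_b])                        # 2 x N_h basis of the plane
    R2 = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
    return np.eye(len(a)) - np.outer(u_a, u_a) - np.outer(u_b, u_b) + U.T @ R2 @ U

rng = np.random.default_rng(2)
a, b, h = rng.standard_normal((3, 8))
R = rotation(a, b)
print(np.allclose(R @ R.T, np.eye(8), atol=1e-6))             # orthogonality
print(np.allclose(np.linalg.norm(R @ h), np.linalg.norm(h)))  # norm preserved
print(np.allclose(R @ (a / np.linalg.norm(a)),
                  b / np.linalg.norm(b), atol=1e-6))          # rotates a's direction onto b's
```

The three checks confirm the claims in the text: $R$ is orthogonal, preserves the norm of the hidden state, and implements the rotation between $a$ and $b$ in their common plane.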
THE RUM ARCHITECTURE
We propose the Rotational Unit of Memory as the first example of an application of Rotation to a recurrent cell. Figure 1 (b) is a sketch of the connections in the cell. RUM consists of an update gate $u \in \mathbb{R}^{N_h}$ that has the same function as in GRU. Instead of a reset gate, however, the model learns a memory target variable $\tau \in \mathbb{R}^{N_h}$. RUM also learns to embed the input vector $x \in \mathbb{R}^{N_x}$ into $\mathbb{R}^{N_h}$ to yield $\tilde{\varepsilon} \in \mathbb{R}^{N_h}$. Hence Rotation encodes the rotation between the embedded input and the target, which is accumulated into the associative memory unit $R_t \in \mathbb{R}^{N_h \times N_h}$ (originally initialized to the identity matrix). Here $\lambda$ is a non-negative integer that is a hyper-parameter of the model. From here, the orthogonal $R_t$ acts on the state $h$ to produce an evolved hidden state $\tilde{h}$. Finally RUM obtains the new hidden state via $u$, just as in GRU. The RUM equations are as follows:
$(\tilde{u}_t \;\; \tau_t) = W_{xh} \cdot x_t + W_{hh} \cdot h_{t-1} + b_t$ (initial update gate and memory target);
$u_t = \mathrm{sigmoid}(\tilde{u}_t)$ ($\sigma$ activation of the update gate);
$\tilde{\varepsilon}_t = \tilde{W}_{xh} \cdot x_t + \tilde{b}_t$ (embedded input for Rotation);
$R_t = (R_{t-1})^{\lambda} \cdot \mathrm{Rotation}(\tilde{\varepsilon}_t, \tau_t)$ (rotational associative memory);
$\tilde{h}_t = \mathrm{ReLU}(\tilde{\varepsilon}_t + R_t \cdot h_{t-1})$ (unbounded evolution of hidden state);
$h'_t = u_t \odot h_{t-1} + (1 - u_t) \odot \tilde{h}_t$ (hidden state before time normalization);
$h_t = \eta\, h'_t / \|h'_t\|$ (new hidden state, with norm $\eta$).
We have introduced time subscripts to demonstrate the recurrence relations. The kernels have dimensions given by $W_{xh} \in \mathbb{R}^{N_x \times 2N_h}$, $W_{hh} \in \mathbb{R}^{N_h \times 2N_h}$ and $\tilde{W}_{xh} \in \mathbb{R}^{N_x \times N_h}$. The biases are variables $b_t \in \mathbb{R}^{2N_h}$ and $\tilde{b}_t \in \mathbb{R}^{N_h}$.
The norm η is a scalar hyper-parameter of the RUM model.
The orthogonal matrix $R(\tilde{\varepsilon}_t, \tau_t)$ conceptually takes the place of a kernel acting on the hidden state in GRU. This is the most efficient place to introduce an orthogonal operation, as the Gated Orthogonal Recurrent Unit (GORU, Jing et al. (2017a)) experiments suggest. The difference with the GORU cell is that GORU parameterizes and learns the kernel as an orthogonal matrix, while RUM does not parameterize the rotation $R$. Instead, RUM learns $\tau$, which, together with $x$, determines $R$. The orthogonal matrix keeps the norm of the vectors, so we experiment with a ReLU activation instead of the conventional tanh in gated mechanisms.

Even though $R$ is an orthogonal element of RUM, the norm of $h_t$ is not stable because of the ReLU activation. Therefore, we suggest normalizing the hidden state $h_t$ to have norm $\eta$. We call this technique time normalization, as we usually feed mini-batches to the RNN during learning that have the shape $(N_b, N_T)$, where $N_b$ is the size of the batch and $N_T$ is the length of the sequence that we feed in. Time normalization happens along the sequence dimension, as opposed to the batch dimension in batch normalization. Choosing an appropriate $\eta$ for the RUM model stabilizes learning and ensures the eigenvalues of the kernels are bounded from above. This in turn means that the smaller $\eta$ is, the more we reduce the effect of exploding gradients.

Finally, even though RUM uses an update gate, it is not a standard gated mechanism, as it does not have a reset gate. Instead we suggest utilizing additional memory via the target vector $\tau$. By feeding inputs to RUM, $\tau$ adapts to encode rotations, which align the hidden states in desired locations in $\mathbb{R}^{N_h}$, without changing the norm of $h$. We believe that the unit of memory $R_t$ gives an advantage to RUM over other gated mechanisms, such as LSTM and GRU.
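To make the recurrence concrete, the following sketch runs a few RUM steps per the equations above with $\lambda = 0$ and $\eta = 1$. This is our illustration only: the initialization scale and the sizes are hypothetical, and the rotation helper simply repeats the earlier Rotation sketch so that the block is self-contained.

```python
import numpy as np

def rotation(a, b, eps=1e-8):   # same operator as in the Rotation sketch above
    u_a = a / (np.linalg.norm(a) + eps)
    v = b - (u_a @ b) * u_a
    u_b = v / (np.linalg.norm(v) + eps)
    th = np.arccos(np.clip((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps), -1, 1))
    U = np.stack([u_a, u_b])
    R2 = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return np.eye(len(a)) - np.outer(u_a, u_a) - np.outer(u_b, u_b) + U.T @ R2 @ U

def rum_step(x, h_prev, R_prev, params, lam=0, eta=1.0):
    W_xh, W_hh, b, W_emb, b_emb = params
    n_h = h_prev.shape[0]
    pre = x @ W_xh + h_prev @ W_hh + b                 # (2 N_h,): gate and target
    u = 1.0 / (1.0 + np.exp(-pre[:n_h]))               # update gate u_t
    tau = pre[n_h:]                                    # memory target tau_t
    eps_t = x @ W_emb + b_emb                          # embedded input for Rotation
    R_t = np.linalg.matrix_power(R_prev, lam) @ rotation(eps_t, tau)
    h_tilde = np.maximum(eps_t + R_t @ h_prev, 0.0)    # ReLU evolution
    h_new = u * h_prev + (1.0 - u) * h_tilde           # gated combination
    h_new = eta * h_new / (np.linalg.norm(h_new) + 1e-8)  # time normalization
    return h_new, R_t

rng = np.random.default_rng(3)
n_x, n_h = 4, 8
params = (0.1 * rng.standard_normal((n_x, 2 * n_h)),   # W_xh
          0.1 * rng.standard_normal((n_h, 2 * n_h)),   # W_hh
          np.zeros(2 * n_h),                           # b
          0.1 * rng.standard_normal((n_x, n_h)),       # embedding kernel
          np.zeros(n_h))                               # embedding bias
h, R = rng.standard_normal(n_h), np.eye(n_h)
for _ in range(5):
    h, R = rum_step(rng.standard_normal(n_x), h, R, params)
print("||h|| after 5 steps:", np.linalg.norm(h))       # equals eta by construction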
EXPERIMENTS
Firstly, we test RUM's memorization capacity on the Copying Memory Task. Secondly, we signify the superiority of RUM by obtaining a state-of-the-art result in the Associative Recall Task. Thirdly, we show that even without external memory, RUM achieves results comparable to the state of the art on the bAbI Question Answering data set. Finally, we utilize RUM's rotational memory to reach 1.189 BPC in the Character Level Penn Treebank.
We experiment with λ = 0 RUM and λ = 1 RUM, the latter model corresponding to tuning in the rotational associative memory.
COPYING MEMORY TASK
A standard way to evaluate the memory capacity of a neural network is to test its performance on the Copying Memory Task (Hochreiter & Schmidhuber (1997); Henaff et al. (2016); Arjovsky et al. (2016)). We follow the setup in Jing et al. (2017b). The objective of the RNN is to remember (copy) information received $T$ time steps earlier (see section A for details about the data).
Our results in this task demonstrate: 1. RUM utilizes a different representation of memory that outperforms those of LSTM and GRU; 2. RUM solves the task completely, despite its update gate, which does not allow all of the information encoded in the hidden state to pass through. The only other gated RNN model successful at copying is GORU. Figure 2 reveals that LSTM and GRU hit a predictable baseline, which is equivalent to random guessing. RUM falls below the baseline, and subsequently learns the task by achieving zero loss after a few thousand iterations. With the help of figure 2 we will explain how the additional hyper-parameters for RUM affect its training. We observe that when we remove the normalization ($\eta = \mathrm{N/A}$) then RUM learns more quickly than in the case of requiring a norm $\eta = 1.0$. At the same time, though, the training entails more fluctuations. Hence we believe that choosing a finite $\eta$ to normalize the hidden state is an important tool for stable learning. Moreover, it is necessary for the NLP task in this paper (see section 4.4): for our character level predictions we use large hidden sizes, which, if left unnormalized, can make the cross entropy loss blow up.
We also observe the benefits of tuning in the associative rotational memory. Indeed, a $\lambda = 1$ RUM has a smaller hidden size, $N_h = 100$, yet it learns much more quickly than a $\lambda = 0$ RUM. It is possible that the accumulation of phase via $\lambda = 1$ enables faster learning of long-term dependencies than in the $\lambda = 0$ case. Either way, both models overcome the vanishing/exploding gradients, and eventually learn the task completely.
ASSOCIATIVE RECALL TASK
Another important synthetic task to test the memory ability of recurrent neural networks is Associative Recall. This task requires the RNN to remember the whole sequence of the data and perform extra logic on the sequence.
We follow the same setting as in Ba et al. (2016a) and Zhang & Zhou (2017) and modify the original task so that it can test for longer sequences. In detail, the RNN is fed a sequence of characters, e.g. "a1s2d3f4g5??d". The RNN is supposed to output the character based on the "key", which is located at the end of the sequence. The RNN needs to look back into the sequence, find the "key", and then retrieve the next character. In this example, the correct answer is "3". See section B for further details about the data.
In this experiment, we compare RUM to an LSTM, an NTM (Graves et al. (2014)), a Fast-weight RNN (Ba et al. (2016a)) and a recent successful RNN, WeiNet (Zhang & Zhou (2017)). All the models have the same hidden state $N_h = 50$ for different lengths $T$. We use a batch size of 128. The optimizer is RMSProp with a learning rate 0.001. We find that LSTM fails to learn the task because of its lack of sufficient memory capacity. NTM and Fast-weight RNN fail at longer tasks, which means they cannot learn to manipulate their memory efficiently.
QUESTION ANSWERING
Question answering remains one of the most important applicable tasks in NLP. Almost all state-of-the-art performance is achieved by means of attention mechanisms. Few works have been done to improve the performance by developing stronger RNN. Here, we tested RUM on the bAbI Question Answering data set (Weston et al. (2015)) to demonstrate its ability to memorize and reason without any attention. In this task, we train 20 sub-tasks jointly for each model. See section C for detailed experimental settings and results on each sub-task.
We compare our model with several baselines: a simple LSTM, an End-to-end Memory Network (Sukhbaatar et al. (2015)) and a GORU. We find that RUM significantly outperforms LSTM and GORU and achieves a result competitive with that of MemN2N, which has an attention mechanism. We summarize the results in Table 2. We emphasize that for some sub-tasks in the table, which require large memory, RUM outperforms models with attention mechanisms (MemN2N).
Model | Test Accuracy (%)
LSTM (Weston et al. (2015)) | 49
GORU (Jing et al. (2017a)) | 60
MemN2N (Sukhbaatar et al. (2015)) | 86
RUM (ours) | 73.2

Table 2: Question Answering task on bAbI dataset. Test accuracy (%) of LSTM, MemN2N, GORU and RUM. RUM significantly outperforms LSTM/GORU and has a performance close to that of MemN2N, which uses an attention mechanism.
CHARACTER LEVEL LANGUAGE MODELING
The rotational unit of memory is a natural architecture that can learn long-term structure in data while avoiding significant overfitting. Perhaps the best way to demonstrate this unique property, among other RNN models, is to test RUM on real-world character level NLP tasks.
PENN TREEBANK CORPUS DATA SET
The corpus is a collection of articles in The Wall Street Journal (Marcus et al. (1993)). The text is in English and its vocabulary consists of 10000 words. We split the data into train, validation and test sets according to Mikolov et al. (2012). We train by feeding mini-batches of size $N_b$ that consist of sequences of $T$ consecutive characters.
We incorporate RUM into the state-of-the-art high-level model: Fast-Slow RNN (FS-RNN, Mujika et al. (2017)). The FS-RNN-$k$ architecture consists of two hierarchical layers: one of them is a "fast" layer that connects $k$ RNN cells $F_1, \dots, F_k$ in series; the other is a "slow" layer that consists of a single RNN cell $S$. The organization is roughly as follows: $F_1$ receives the input from the mini-batch and feeds its state into $S$; $S$ feeds its state into $F_2$; the output of $F_k$ is the probability distribution of the predicted character. Table 3 outlines the performance of some FS-RNN models along with other results on the PTB data set, in which we present the improved test BPC. Mujika et al. (2017) achieve their record with FS-LSTM-2, by setting $F_{1,2}$ and $S$ to LSTM. The authors in the same paper suggest that the "slow" cell has the function of capturing long-term dependencies from the data. Hence, it is natural to set $S$ to be a RUM, given its memorization advantages. In particular, we experiment with FS-RUM-2, for which $S$ is a RUM and $F_{1,2}$ are LSTM. Additionally, we test the performance of a simple RUM and a two-layer RUM.
As the models are prone to overfitting, for each of our models we follow the experimental settings for regularization in Mujika et al. (2017), presented in section D. Those techniques work particularly well in combination with the rotational structure of RUM. More specifically, FS-RUM-2 needs more than 350 epochs to converge by following a suitable learning rate pattern (see table 6 in the appendix). FS-RUM-2 generalizes better than other gated models, such as GRU and LSTM, because it learns efficient patterns for activation in its kernels. Such a skill is useful for the large Penn Treebank data set, as with its special diagonal structure, the RUM cell in FS-RUM-2 activates almost all neurons in the hidden state. We discuss this representational advantage in section 5.1.
DISCUSSION
VISUAL ANALYSIS
One advantage of the Rotational Unit of Memory is that it allows the model to encode information in the phase of the hidden state. In order to demonstrate the structure behind such learning, we look at the kernels that generate the target memory $\tau$ in the RUM model. Figure 3 (a) is a visualization for the Recall task that demonstrates the diagonal structure of $W_{hh}^{(2)}$, which generates $\tau$ (a diagonal structure is also present in $W_{hh}^{(1)}$, but it is contrasted less). One way to interpret the importance of the diagonal contrast is that each neuron in the hidden state plays an important role for learning, since each element on the diagonal activates a distinct neuron. Therefore, it seems that RUM utilizes the capacity of the hidden state almost completely. For this reason, we might consider RUM as an architecture that is close to the theoretical optimum of the representational power of RNN models.

Table 3: With FS-RUM-2 (test BPC 1.189, 11.2M parameters) we achieve the state-of-the-art test result on the Penn Treebank task. Additionally, a non-extensive grid search for vanilla RNN models yields comparable results to that of Zoneout LSTM.
Moreover, the diagonal structure is not task specific. For example, in Figure 3 (b) we observe a particular $W_{hh}^{(2)}$ for the target $\tau$ on the Penn Treebank task. The way we interpret the meaning of the diagonal structure, combined with the off-diagonal activations, is that probably they encode grammar and vocabulary, as well as the links between various components of language.
THEORETICAL ANALYSIS
It is natural to view the Rotational Unit of Memory and many other approaches using orthogonal matrices as falling into the category of phase-encoding architectures: $R = R(\theta)$, where $\theta$ is a phase information matrix. For instance, we can parameterize any orthogonal matrix according to the Efficient Unitary Neural Networks (EUNN, Jing et al. (2017b)) architecture: $R = \prod_{i=0}^{N} U_0(\theta_i)$, where $U_0$ is a block diagonal matrix containing $N/2$ 2-by-2 rotations. The component $\theta_i$ is a one-by-$(N/2)$ parameter vector. Therefore, the rotational memory equation in our model can be represented as
$$R_t = \prod_{i=0}^{N} U_0(\theta_t^i) = \prod_{i=0}^{N} U_0(\theta_{t-1}^i) \cdot \prod_{i=0}^{N} U_0(\phi_t^i) \quad (2)$$
where $\theta_t$ are the rotational memory phase vectors at time $t$ and $\phi$ represents the phases generated by the operation Rotation correspondingly. Note that each element of the matrix multiplication $U_0(\theta^i) \cdot U_0(\phi^i)$ only depends on one element from $\theta^i$ and $\phi^i$ each. This means that, to cancel out one element $\theta^i$, the model only needs to learn to express $\phi^i$ as the negation of $\theta^i$.
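This cancellation is easy to verify numerically for a single 2-by-2 block of $U_0$: composing rotations adds their phases, so $\phi = -\theta$ erases the stored phase. The check below is our illustration, with arbitrarily chosen angles.

```python
import numpy as np

def U0(theta):
    """One 2-by-2 rotation block of the block-diagonal factor U_0."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta, phi = 0.7, -0.7   # phi learned as the negation of theta
print(np.allclose(U0(theta) @ U0(phi), np.eye(2)))        # stored phase erased
print(np.allclose(U0(theta) @ U0(0.2), U0(theta + 0.2)))  # phases add under composition
```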
As a result, our RNN implementation does not require a reset gate, as in GRU or GORU, because the forgetting mechanism is automatically embedded into the representation (2) of phase-encoding.
Thus, the concept of phase-encoding is simply a special sampling on manifolds generated by the special orthogonal Lie group $SO(N)$. Now, let $N = N_h$ be the hidden size. One way to extend the current RUM model is to allow $\lambda$ to be any real number in the associative memory equation $R_t = (R_{t-1})^{\lambda} \cdot \mathrm{Rotation}(\tilde{\varepsilon}_t, \tau_t)$. This will expand the representational power of the rotational unit.

The difficulty is to mathematically define the raising of a matrix to a real power, which is equivalent to defining a logarithm of a matrix. Again, rotations prove to be a natural choice since they are elements of $SO(N_h)$, and their logarithms correspond to elements of the vector space of the Lie algebra $\mathfrak{so}(N_h)$, associated to $SO(N_h)$.

FUTURE WORK

For future work, the RUM model can be applied to other higher-level RNN structures. For instance, in section 4.4 we already showed how to successfully embed RUM into FS-RNN to achieve state-of-the-art results. Other examples may include Recurrent Highway Networks (Zilly et al. (2017)), HyperNetwork (Ha et al. (2016)) structures, etc. The fusion of RUM with such architectures could lead to more state-of-the-art results in sequential tasks.
CONCLUSION
We proposed a novel RNN architecture: Rotational Unit of Memory. The model takes advantage of the unitary and associative memory concepts. RUM outperforms many previous state-of-the-art models, including LSTM, GRU, GORU and NTM, in the synthetic Copying Memory and Associative Recall benchmarks. Additionally, RUM's performance in real-world tasks, such as question answering and language modeling, is competitive with that of advanced architectures, some of which include attention mechanisms. We claim the rotational unit of memory can serve as the new benchmark model that absorbs all advantages of existing models in a scalable way. Indeed, the rotational operation can be applied to many other fields, not limited only to RNN, such as Convolutional and Generative Adversarial Neural Networks.
APPENDIX A COPYING MEMORY TASK
The alphabet of the input consists of symbols {a i }, i ∈ {0, 1, · · · , n − 1, n, n + 1}, the first n of which represent data for copying, and the remaining two form "blank" and "marker" symbols, respectively. In our experiment n = 8 and the data for copying is the first 10 symbols of the input. The expectation from the RNN model is to output "blank" and, after the "marker" appears in the input, to output (copy) sequentially the initial data of 10 steps.
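A data generator matching this description might look as follows. This is our sketch only; the exact layout of blanks around the marker is an assumption on top of the description above.

```python
import numpy as np

def copying_batch(batch, T, n=8, copy_len=10, rng=np.random.default_rng(4)):
    """Input: copy_len data symbols, T blanks, a marker, copy_len trailing blanks.
    Target: blanks everywhere except the final copy_len positions (the copy)."""
    blank, marker = n, n + 1
    data = rng.integers(0, n, size=(batch, copy_len))
    x = np.full((batch, copy_len + T + 1 + copy_len), blank, dtype=int)
    x[:, :copy_len] = data
    x[:, copy_len + T] = marker
    y = np.full_like(x, blank)
    y[:, -copy_len:] = data
    return x, y

x, y = copying_batch(2, T=20)
print(x[0])
print(y[0])
```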
B ASSOCIATIVE RECALL TASK
The sequences for training are randomly generated, and consist of pairs of "character" and "number" elements. We set the key to always be a "character". We fix the size of the "character" set equal to half of the length of the sequence and the size of the "number" set equal to 10. Therefore, the total category has a size of T /2 + 10 + 1.
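Concretely, such sequences can be generated as follows. This is our sketch: the token layout follows the example "a1s2d3f4g5??d" from Section 4.2, and details such as drawing characters from the lowercase alphabet are assumptions.

```python
import numpy as np
import string

def recall_example(T, rng=np.random.default_rng(5)):
    """T/2 distinct (character, digit) pairs, then '??' and a key character.
    Requires T/2 <= 26 so the characters can be distinct."""
    chars = rng.permutation(list(string.ascii_lowercase))[: T // 2]
    digits = rng.integers(0, 10, size=T // 2)
    seq = "".join(c + str(d) for c, d in zip(chars, digits))
    k = rng.integers(0, T // 2)                  # index of the key pair
    return seq + "??" + chars[k], str(digits[k])

x, y = recall_example(10)
print(x, "->", y)
```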
C QUESTION ANSWERING BABI TASK
In this task, we train 20 models jointly on each sub-task. All of them use a 10k data set, which is divided into 90% training and 10% validation. We first tokenize all the words in the data set and combine the story and question by simply concatenating the two sequences. Sequences of different lengths are padded with "blank" at the beginning and the end. Words in the sequence are embedded into dense vectors and then fed into the RNN in a sequential manner. The RNN model outputs the answer prediction at the end of the question through a softmax layer. We use a batch size of 32 for all 20 subsets. The model is trained with the Adam optimizer with a learning rate 0.001. Each subset is trained for 20 epochs and no other regularization is applied. For the training of all models we use RMSProp optimization with a learning rate of 0.001 and a decay rate of 0.9; the batch size $N_b$ is 128. We observe that it is necessary to tune in the associative memory via $\lambda = 1$, since the $\lambda = 0$ RUM does not learn the task.
D CHARACTER LEVEL PENN TREEBANK TASK
For all RNN cells we apply layer normalization (Ba et al. (2016b)) to the cells and to the LSTM gates and RUM's update gate and target memory, zoneout (Krueger et al. (2016)) to the recurrent connections, and dropout (Srivastava et al. (2014)) to the FS-RNN. For training we use Adam optimization (Kingma & Ba (2014)). We apply gradient clipping with maximal norm of the gradients equal to 1.0. Table 5 lists the hyper-parameters we use for our models.
We embed the inputs into a higher-dimensional space. The output of each model passes through a softmax layer; then the probabilities are evaluated by a standard cross entropy loss function. The bits-per-character (BPC) loss is simply the cross entropy with a binary logarithm.

E VISUALIZATION

Figure 5: The collection of kernels for $\lambda = 1$ RUM, $N_h = 100$, $\eta = \mathrm{N/A}$, for the Copying task, $T = 500$.
Figure 1 (a) demonstrates the projection to the plane $\mathrm{span}(a, b)$ in the brackets of equation (1). The mini-rotation in this space is $\tilde{R}(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. Hence, $\mathrm{Rotation}(a, b) \equiv R(a, b)$.
Figure 1: Rotation is a universal differentiable operation that enables the advantages of the RUM architecture. (a) The rotation $R(a, b)$ in the plane defined by $a = \tilde{\varepsilon}$ and $b = \tau$ acts on the hidden state $h$. (b) The RUM cell, in which Rotation encodes the kernel $R$. The matrix $R_t$ acts on $h_{t-1}$ and thus keeps the norm of the hidden state.
Figure 2: The orthogonal operation Rotation enables RUM to solve the Copying Memory Task. The delay times are 200, 500 and 1000. For all models $N_h = 250$, except for the RUM models with $\lambda = 1$, for which $N_h = 100$. For the training of all models we use RMSProp optimization with a learning rate of 0.001 and a decay rate of 0.9; the batch size $N_b$ is 128.
Figure 3: The kernel generating the target memory for RUM follows a diagonal activation pattern, which signifies the sequential learning of the model. (a) A temperature map of the values of the variables when the model is learned. The task is Associative Recall, $T = 50$, and the model is RUM, $\lambda = 1$, with $N_h = 50$ and without time normalization. (b) An interpretation of the function of the diagonal and off-diagonal activations of RUM's $W_{hh}$ kernel on NLP tasks. The task is Character Level Penn Treebank and the model is $\lambda = 0$ RUM, $N_h = 2000$, $\eta = 1.0$. See section E for additional examples.
Figure 4: The associative memory provided by the rotational operation Rotation enables RUM to solve the Associative Recall Task. The input sequence length is 50. For all models $N_h = 50$.
Figure 6: The collection of kernels for $\lambda = 0$ RUM, $N_h = 256$, $\eta = \mathrm{N/A}$, for the Question Answering bAbI Task.
Table 1 gives a numerical summary of the results, and Figure 4, in the appendix, compares graphically RUM to LSTM.

Table 1: Comparison of the models on convergence validation accuracy. Only RUM and the recent WeiNet are able to successfully solve the T = 50 Associative Recall task with a hidden state of 50. RUM has significantly fewer parameters.

Model | Length T = 30 | Length T = 50 | # Parameters
LSTM | 25.6% | 20.5% | 17k
FW-LN (Ba et al. (2016a)) | 100% | 20.8% | 9k
WeiNet (Zhang & Zhou (2017)) | 100% | 100% | 22k
RUM (ours) | 100% | 100% | 13k
Task | RUM (ours) | LSTM (Weston et al.) | GORU (Jing et al.) | MemN2N (Sukhbaatar et al.)
Single Supporting Fact | 79.7 | 50 | 46 | 100
Two Supporting Facts | 39.4 | 20 | 40 | 92
Three Supporting Facts | 46.6 | 20 | 34 | 60
Two Arg. Relations | 100 | 61 | 63 | 97
Three Arg. Relations | 96.8 | 70 | 87 | 87
Yes/No Questions | 84.7 | 48 | 54 | 94
Counting | 89.1 | 49 | 78 | 83
Lists/Sets | 74.0 | 45 | 75 | 90
Simple Negation | 84.0 | 64 | 63 | 87
Indefinite Knowledge | 75.7 | 44 | 45 | 85
Basic Coreference | 90.6 | 72 | 69 | 99
Conjunction | 95.0 | 74 | 70 | 99
Compound Coref. | 91.5 | 94 | 93 | 99
Time Reasoning | 43.9 | 27 | 38 | 98
Basic Deduction | 58.8 | 21 | 55 | 100
Basic Induction | 47.1 | 23 | 44 | 99
Positional Reasoning | 60.3 | 51 | 60 | 49
Size Reasoning | 98.5 | 52 | 91 | 89
Path Finding | 10.2 | 8 | 9 | 17
Agent's Motivations | 98.0 | 91 | 98 | 100
Mean Performance | 73.2 | 49 | 60 | 86

Table 4: Question Answering task on bAbI dataset. Test accuracy (%) of LSTM, MemN2N, GORU and RUM. RUM significantly outperforms LSTM/GORU and has a performance close to that of MemN2N, which uses an attention mechanism.
Table 5: Hyper-parameters for the Character Level Penn Treebank Task.

Learning rate | Epochs
0.002 | 1-180
0.0001 | 181-240
0.00001 | 241-360

Table 6: Suggested learning rate pattern for training FS-RUM-2 with a standard Adam optimizer.
¹ Our code is collected in https://github.com/jingli9111/RUM.git.
ACKNOWLEDGMENTS

We would like to thank Konstantin Rangelov for the supply of some of the computational power used for this research. We are grateful to Yichen Shen, Charles Roques-Carmes, Peter Lu, Rawn Henry, Fidel Cano-Renteria and Rumen Hristov for fruitful discussions. Many thanks to Pamela Siska and Irina Tomova for their comments on the paper. This work was partially supported by the Army
REFERENCES

Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. International Conference on Machine Learning, pp. 1120-1128, 2016.

Jimmy Ba, Geoffrey E. Hinton, Volodymyr Mnih, Joel Z. Leibo, and Catalin Ionescu. Using fast weights to attend to the recent past. In Advances in Neural Information Processing Systems 29, pp. 4331-4339, 2016a.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016b.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.

Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. International Conference on Learning Representations 2017, arXiv:1609.01704, 2016.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

David Ha, Andrew Dai, and Quoc V. Le. HyperNetworks. International Conference on Learning Representations 2017, arXiv:1609.09106, 2016.

Mikael Henaff, Arthur Szlam, and Yann LeCun. Recurrent orthogonal networks and long-memory tasks. International Conference on Machine Learning, pp. 2034-2042, 2016.

Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljacic, and Yoshua Bengio. Gated orthogonal recurrent units: On learning to forget. arXiv preprint arXiv:1706.02761, 2017a.

Li Jing, Yichen Shen, Tena Dubcek, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Soljačić. Tunable efficient unitary neural networks (EUNN) and their application to RNNs. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 1733-1741. PMLR, 2017b.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations 2015, arXiv:1412.6980, 2014.

David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Aaron Courville, and Chris Pal. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 1993.

Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Černocký. Subword language modeling with neural networks. Preprint, 2012. URL http://www.fit.vutbr.cz/~imikolov/rnnlm/char.pdf.

Asier Mujika, Florian Meier, and Angelika Steger. Fast-slow recurrent neural networks. arXiv preprint arXiv:1705.08639, 2017.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.

Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning recurrent networks with long term dependencies. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 3570-3578. PMLR, 2017.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.

Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 4880-4888, 2016.

Wei Zhang and Bowen Zhou. Learning to update auto-associative memory in recurrent neural networks for improving sequence memorization. arXiv preprint arXiv:1709.06493, 2017.

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 4189-4198. PMLR, 2017.

Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. International Conference on Learning Representations 2017, arXiv:1611.01578, 2016.
203,641,721 | A FUNCTION SPACE VIEW OF BOUNDED NORM INFINITE WIDTH RELU NETS: THE MULTIVARIATE CASE | A key element of understanding the efficacy of overparameterized neural networks is characterizing how they represent functions as the number of weights in the network approaches infinity. In this paper, we characterize the norm required to realize a function f : R d → R as a single hidden-layer ReLU network with an unbounded number of units (infinite width), but where the Euclidean norm of the weights is bounded, including precisely characterizing which functions can be realized with finite norm. This was settled for univariate functions f : R → R in Savarese et al. (2019), where it was shown that the required norm is determined by the L 1 -norm of the second derivative of the function. We extend the characterization to multi-variate functions (d ≥ 2, i.e., multiple input units), relating the required norm to the L 1 -norm of the Radon transform of a (d + 1)/2-power Laplacian of the function. This characterization allows us to show that all functions in Sobolev spaces W s,1 (R d ), s ≥ d + 1, can be represented with bounded norm, to calculate the required norm for several specific functions, and to obtain a depth separation result. These results have important implications for understanding generalization performance and the distinction between neural networks and more traditional kernel learning. | [] | A FUNCTION SPACE VIEW OF BOUNDED NORM INFINITE WIDTH RELU NETS: THE MULTIVARIATE CASE
Greg Ongie [email protected]
Rebecca Willett [email protected]
Daniel Soudry [email protected]
Nathan Srebro
Department of Statistics, University of Chicago, Chicago, IL 60637, USA
Department of Statistics & Computer Science, University of Chicago, Chicago, IL 60637, USA
Electrical Engineering Department, Technion - Israel Institute of Technology, Haifa, Israel
Toyota Technological Institute at Chicago, Chicago, IL 60637, USA
A FUNCTION SPACE VIEW OF BOUNDED NORM INFINITE WIDTH RELU NETS: THE MULTIVARIATE CASE
A key element of understanding the efficacy of overparameterized neural networks is characterizing how they represent functions as the number of weights in the network approaches infinity. In this paper, we characterize the norm required to realize a function f : R d → R as a single hidden-layer ReLU network with an unbounded number of units (infinite width), but where the Euclidean norm of the weights is bounded, including precisely characterizing which functions can be realized with finite norm. This was settled for univariate functions f : R → R in Savarese et al. (2019), where it was shown that the required norm is determined by the L 1 -norm of the second derivative of the function. We extend the characterization to multi-variate functions (d ≥ 2, i.e., multiple input units), relating the required norm to the L 1 -norm of the Radon transform of a (d + 1)/2-power Laplacian of the function. This characterization allows us to show that all functions in Sobolev spaces W s,1 (R d ), s ≥ d + 1, can be represented with bounded norm, to calculate the required norm for several specific functions, and to obtain a depth separation result. These results have important implications for understanding generalization performance and the distinction between neural networks and more traditional kernel learning.
INTRODUCTION
It has been argued for a while, and is becoming increasingly apparent in recent years, that in terms of complexity control and generalization in neural network training, "the size [magnitude] of the weights is more important than the size [number of weights or parameters] of the network" (Bartlett, 1997; Neyshabur et al., 2014; Zhang et al., 2016). That is, inductive bias and generalization are not achieved by limiting the size of the network, but rather by explicitly (Wei et al., 2019) or implicitly (Nacson et al., 2019; Lyu & Li, 2019) controlling the magnitude of the weights.
In fact, since networks used in practice are often so large that they can fit any function (any labels) over the training data, it is reasonable to think of the network as virtually infinite-sized, and thus able to represent essentially all functions. Training and generalization ability then rests on fitting the training data while controlling, either explicitly or implicitly, the magnitude of the weights. That is, training searches over all functions, but seeks functions with small representational cost, given by the minimal weight norm required to represent the function. This "representational cost of a function" is the actual inductive bias of learning: the quantity that defines our true model class, and the functional we are actually minimizing in order to learn. Understanding learning with overparameterized (virtually infinite) networks thus rests on understanding this "representational cost", which is the subject of our paper. Representational cost appears to play an important role in generalization performance; indeed Mei & Montanari (2019) show that minimum norm solutions are optimal for generalization in certain simple cases, and recent work on "double descent" curves is an example of this phenomenon (Belkin et al., 2019; Hastie et al., 2019).
We can also think of understanding the representational cost as asking an approximation theory question: what functions can we represent, or approximate, with our de facto model class, namely the class of functions representable with small magnitude weights? There has been much celebrated work studying approximation in terms of the network size, i.e., asking how many units are necessary in order to approximate a target function (Hornik et al., 1989; Cybenko, 1989; Barron, 1993; Pinkus, 1999). But if complexity is actually controlled by the norm of the weights, and thus our true model class is defined by the magnitude of the weights, we should instead ask how large a norm is necessary in order to capture a target function. This revised view of approximation theory should also change how we view issues such as depth separation: rather than asking how increasing depth can reduce the number of units required to fit a function, we should instead ask how increasing depth can reduce the norm required, i.e., how the representational cost we study changes with depth.
Our discussion above directly follows that of Savarese et al. (2019), who initiated the study of the representational cost in terms of weight magnitude. Savarese et al. considered two-layer (i.e., single hidden layer) ReLU networks, with an unbounded (essentially infinite) number of units, and where the overall Euclidean norm (sum of squares of all the weights) is controlled. (Infinite width networks of this sort have been studied from various perspectives by, e.g., Bengio et al. (2006); Neyshabur et al. (2015); Bach (2017); Mei et al. (2018).) For univariate functions $f : \mathbb{R} \to \mathbb{R}$, corresponding to networks with a single one-dimensional input and a single output, Savarese et al. obtained a crisp and precise characterization of the representational cost, showing that minimizing the overall Euclidean norm of the weights is equivalent to fitting a function by controlling:
$$\max\left\{ \int |f''(x)|\, dx,\; |f'(-\infty) + f'(+\infty)| \right\}. \quad (1)$$
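For a finite network with unit-norm first-layer weights ($w_i \in \{\pm 1\}$ in 1-D), this quantity is simple to evaluate: $f''$ is a sum of Dirac spikes of weight $a_i$ at the kinks, and the slopes at $\pm\infty$ come from the units active there. The sketch below is our illustration with hypothetical weights.

```python
import numpy as np

# Hypothetical unit-norm two-layer net: f(x) = sum_i a_i * relu(w_i * x - b_i)
w = np.array([1.0, -1.0, 1.0])
a = np.array([2.0, -1.0, 0.5])
b = np.array([0.0, 1.0, -2.0])

# f'' is a sum of Dirac spikes of weight a_i at the kinks, so (assuming the
# kink locations are distinct) the integral of |f''| equals sum |a_i|.
tv_second_deriv = np.abs(a).sum()

# Slopes at +/- infinity: only units with w_i = +1 (resp. -1) are active there.
fprime_pos = a[w > 0].sum()        # f'(+inf)
fprime_neg = -(a[w < 0].sum())     # f'(-inf): slope contribution is a_i * w_i
cost = max(tv_second_deriv, abs(fprime_neg + fprime_pos))
print(cost)  # value of the quantity in (1) for this network
```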
While this is an important first step, we are of course interested also in more than a single one-dimensional input. In this paper we derive the representational cost for any function $f : \mathbb{R}^d \to \mathbb{R}$ in any dimension $d$. Roughly speaking, the cost is captured by:
$$\|f\|_{\mathcal{R}} \approx \left\| \mathcal{R}\{\Delta^{(d+1)/2} f\} \right\|_1 \approx \left\| \partial_b^{d+1} \mathcal{R}\{f\} \right\|_1 \quad (2)$$
where $\mathcal{R}$ is the Radon transform, $\Delta$ is the Laplacian, and $\partial_b$ is a partial derivative w.r.t. the offset in the Radon transform (see Section 3 for an explanation of the Radon transform). This characterization is rigorous for odd dimensions $d$ and for functions where the above expressions are classically well-defined (i.e., smooth enough such that all derivatives are finite, and the integrand in the Radon transform is integrable). But for many functions of interest these quantities are not well-defined classically. Instead, in Definition 1, we use duality to rigorously define a semi-norm $\|f\|_{\mathcal{R}}$ that captures the essence of the above quantities and is well-defined (though possibly infinite) for any $f$ in any dimension. We show that $\|f\|_{\mathcal{R}}$ precisely captures the representational cost of $f$, and in particular is finite if and only if $f$ can be approximated arbitrarily well by a bounded norm, but possibly unbounded width, ReLU network. Our precise characterization applies to an architecture with unregularized bias terms (as in Savarese et al. (2019)) and a single unregularized linear unit; otherwise a correction accounting for a linear component is necessary, similar to but more complex than the term $|f'(-\infty) + f'(+\infty)|$ in the univariate case, i.e., (1).
As we uncover, the characterization of the representational cost for multivariate functions is unfortunately not as simple as the characterization (1) in the univariate case, where the Radon transform degenerates. Nevertheless, it is often easy to evaluate, and is a powerful tool for studying the representational power of bounded norm ReLU networks. Furthermore, as detailed in Section 5.5, there is no kernel function for which the associated RKHS norm is the same as (2); i.e., training bounded norm neural networks is fundamentally different from kernel learning. In particular, using our characterization we show the following:
• All sufficiently smooth functions have finite representational cost, but the necessary degree of smoothness depends on the dimension. In particular, all functions in the Sobolev space $W^{d+1,1}(\mathbb{R}^d)$, i.e., when all derivatives up to order $d+1$ are $L^1$-bounded, have finite representational cost, and this cost can be bounded using the Sobolev norm. (Section 5.1)
• We calculate the representational cost of radial "bumps", and show there are bumps with finite support that have finite representational cost in all dimensions. The representational cost increases as $1/\varepsilon$ for "sharp" bumps of radius $\varepsilon$ (and fixed height). (Section 5.2)
• In dimensions greater than one, we show a general piecewise linear function with bounded support has infinite representational cost (i.e., cannot be represented with a bounded norm, even with infinite networks). (Section 5.3)
• We obtain a depth separation in terms of norm: we demonstrate a function in two dimensions that is representable using a depth three ReLU network (i.e., with two hidden layers) with small finite norm, but cannot be represented by any bounded-norm depth two (single hidden layer) ReLU network. As far as we are aware, this is the first depth separation result in terms of the norm required for representation. (Section 5.4)
The connection between the Radon transform and two-layer neural networks was previously made by Carroll & Dickinson (1989) and Ito (1991), who used it to obtain constructive approximations when studying approximation theory in terms of network size (number of units) for threshold and sigmoidal networks. This connection also forms the foundation of ridgelet transform analysis of functions (Candès & Donoho (1999); Candès (1999)). More recently, Sonoda & Murata (2017) used ridgelet transform analysis to study the approximation properties of two-layer neural networks with unbounded activation functions, including the ReLU.
While working on this manuscript, we learned through discussions with Matus Telgarsky of his related parallel work. In particular, Telgarsky obtained a calculation formula for the norm required to represent a radial function, paralleling our calculations in Section 5.2, and used it to show that sufficiently smooth radial functions have finite norm in any dimension, and studied how this norm changes with dimension.
INFINITE WIDTH RELU NETWORKS
We repeat here the discussion of Savarese et al. (2019) defining the representational cost of infinite-width ReLU networks, with some corrections and changes that we highlight.
Consider the collection of all two-layer networks having an unbounded number of rectified linear units (ReLUs), i.e., all $g_\theta : \mathbb{R}^d \to \mathbb{R}$ defined by
$$g_\theta(x) = \sum_{i=1}^{k} a_i\, [w_i^{\top} x - b_i]_+ + c, \quad \text{for all } x \in \mathbb{R}^d \quad (3)$$
with parameters $\theta = (k,\; W = [w_1, \dots, w_k],\; b = [b_1, \dots, b_k]^{\top},\; a = [a_1, \dots, a_k]^{\top},\; c)$, where the width $k \in \mathbb{N}$ is unbounded. Let $\Theta$ be the collection of all such parameter vectors $\theta$. For any $\theta \in \Theta$ we let $C(\theta)$ be the sum of the squared Euclidean norms of the weights in the network, excluding the bias terms, i.e.,
$$C(\theta) = \frac{1}{2}\left( \|W\|_F^2 + \|a\|^2 \right) = \frac{1}{2} \sum_{i=1}^{k} \left( \|w_i\|_2^2 + |a_i|^2 \right), \quad (4)$$
and consider the minimal representation cost necessary to exactly represent a function $f : \mathbb{R}^d \to \mathbb{R}$:
$$R(f) := \inf_{\theta \in \Theta} C(\theta) \quad \text{s.t.} \quad f = g_\theta. \quad (5)$$
By the 1-homogeneity of the ReLU, it is shown in Neyshabur et al. (2014) (see also Appendix A of Savarese et al. (2019)) that minimizing $C(\theta)$ is the same as constraining the inner layer weight vectors $\{w_i\}_{i=1}^{k}$ to be unit norm while minimizing the $\ell_1$-norm of the outer layer weights $a$. Therefore, letting $\Theta'$ be the collection of all $\theta \in \Theta$ with each $w_i$ constrained to the unit sphere $\mathbb{S}^{d-1} := \{w \in \mathbb{R}^d : \|w\| = 1\}$, we have
$$R(f) = \inf_{\theta \in \Theta'} \|a\|_1 \quad \text{s.t.} \quad f = g_\theta. \quad (6)$$
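The 1-homogeneity argument behind (6) is easy to check numerically: rescaling $(w_i, b_i) \mapsto (w_i, b_i)/\|w_i\|$ and $a_i \mapsto a_i \|w_i\|$ leaves $g_\theta$ unchanged, and balancing scales per unit drives $C(\theta)$ down to $\|a\|_1$. The sketch below is our illustration with random weights (the sizes are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(6)
d, k = 3, 5
W, a, b = rng.standard_normal((k, d)), rng.standard_normal(k), rng.standard_normal(k)

def net(x, W, a, b):
    return np.sum(a * np.maximum(W @ x - b, 0.0))

# Step 1: rescale so every inner weight vector has unit norm (same function).
s = np.linalg.norm(W, axis=1)
W1, b1, a1 = W / s[:, None], b / s, a * s

# Step 2: balance scales with ||w_i|| = sqrt(|a_i|), minimizing the squared penalty.
t = np.sqrt(np.abs(a1)) + 1e-12
W2, b2, a2 = W1 * t[:, None], b1 * t, a1 / t

x = rng.standard_normal(d)
print(np.isclose(net(x, W, a, b), net(x, W1, a1, b1)))   # rescaling preserves g_theta
print(np.isclose(net(x, W, a, b), net(x, W2, a2, b2)))   # balancing does too
C2 = 0.5 * ((np.linalg.norm(W2, axis=1) ** 2).sum() + (a2 ** 2).sum())
print(np.isclose(C2, np.abs(a1).sum()))  # C(theta) at the balance point = ||a||_1
```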
However, we see $R(f)$ is finite only if $f$ is exactly realizable as a finite-width two-layer ReLU network, i.e., $f$ must be a continuous piecewise linear function with finitely many pieces. Yet, we know that any continuous function can be approximated uniformly on compact sets by allowing the number of ReLU units to grow to infinity. Since we are not concerned with the number of units, only their norm, we modify our definition of representation cost to capture this larger space of functions, and define¹
$$R(f) := \lim_{\varepsilon \to 0}\ \inf_{\theta \in \Theta'} C(\theta) \quad \text{s.t.} \quad |g_\theta(x) - f(x)| \le \varepsilon \ \ \forall\, \|x\| \le 1/\varepsilon \ \text{ and } \ g_\theta(0) = f(0) \quad (7)$$
In words, R(f ) is the minimal limiting representational cost among all sequences of networks converging to f uniformly (while agreeing with f at zero).
Intuitively, if $R(f)$ is finite this means $f$ is expressible as an "infinite-width" two-layer ReLU network whose outer-most weights are described by a density $\alpha(w,b)$ over all weight and bias pairs $(w,b) \in \mathbb{S}^{d-1} \times \mathbb{R}$. To make this intuition precise, let $M(\mathbb{S}^{d-1} \times \mathbb{R})$ denote the space of signed measures $\alpha$ defined on $(w,b) \in \mathbb{S}^{d-1} \times \mathbb{R}$ with finite total variation norm $\|\alpha\|_1 = \int_{\mathbb{S}^{d-1} \times \mathbb{R}} d|\alpha|$ (i.e., the analog of the $L^1$-norm for measures), and let $c \in \mathbb{R}$. Then we define the infinite-width two-layer ReLU network $h_{\alpha,c}$ (or "infinite-width net" for short) by²
$$h_{\alpha,c}(x) := \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \left( [w^{\top} x - b]_+ - [-b]_+ \right) d\alpha(w,b) + c \quad (8)$$
We prove in Appendix B that $R(f)$ is equivalent to
$$R(f) = \min_{\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R}),\; c \in \mathbb{R}} \|\alpha\|_1 \quad \text{s.t.} \quad f = h_{\alpha,c}. \quad (9)$$
Hence, learning an unbounded width ReLU network $g_\theta$ by fitting some loss functional $L(\cdot)$ while controlling the Euclidean norm of the weights $C(\theta)$ by minimizing
$$\min_{\theta \in \Theta} L(g_\theta) + \lambda C(\theta) \quad (10)$$
is effectively the same as learning a function $f$ by controlling $R(f)$:
$$\min_{f : \mathbb{R}^d \to \mathbb{R}} L(f) + \lambda R(f). \quad (11)$$
In other words, R(f ) captures the true inductive bias of learning with unbounded width ReLU networks having regularized weights. Our goal is then to calculate R(f ) for any function f : R d → R, and in particular characterize when it is finite in order to understand what functions can be approximated arbitrarily well with bounded norm but unbounded width ReLU networks.
SIMPLIFICATION VIA UNREGULARIZED LINEAR UNIT
Every two-layer ReLU network decomposes into the sum of a network with absolute value units plus a linear part.³ As demonstrated by Savarese et al. (2019) in the 1-D setting, the weights on the absolute value units typically determine the representational cost, with a correction term needed if the linear part has large weight. To allow for a cleaner formulation of the representation cost without this correction term, we consider adding in one additional unregularized linear unit $v^\top x$ (similar to a "skip connection") to "absorb" any representational cost due to the linear part.

¹Our definition of $R(f)$ differs from the one given in Savarese et al. (2019). We require $|g_\theta(x) - f(x)| \le \varepsilon$ on the ball of radius $1/\varepsilon$ rather than all of $\mathbb{R}^d$, and we additionally require $g_\theta(0) = f(0)$. These modifications are needed to ensure (7) and (9) are equivalent. Also, we note the choice of zero in the condition $g_\theta(0) = f(0)$ is arbitrary and can be replaced with any point $x_0 \in \mathbb{R}^d$.
²Our definition of $h_{\alpha,c}$ also differs from the one given in Savarese et al. (2019). To ensure the integral is well-defined, we include the additional $-[-b]_+$ term in the integrand. See Remark 1 in Appendix A for more discussion on this point.
³Such a decomposition follows immediately from the identity $[t]_+ = \frac{1}{2}(|t| + t)$.

Namely, for any $\theta \in \Theta$ and $v \in \mathbb{R}^d$ we define the class of unbounded width two-layer ReLU networks $g_{\theta,v}$ with a linear unit by $g_{\theta,v}(x) = g_\theta(x) + v^\top x$, where $g_\theta$ is as defined in (3), and associate $g_{\theta,v}$ with the same weight norm $C(\theta)$ as defined in (4) (i.e., we exclude the norm of the weight $v$ on the additional linear unit from the cost). We then define the representational cost $R_1(f)$ for this class of networks by
$$R_1(f) := \lim_{\varepsilon \to 0}\ \inf_{\theta \in \Theta,\, v \in \mathbb{R}^d} C(\theta) \quad \text{s.t.} \quad |g_{\theta,v}(x) - f(x)| \le \varepsilon\ \ \forall\, \|x\| \le 1/\varepsilon \ \text{ and } \ g_{\theta,v}(0) = f(0). \tag{12}$$
Likewise, for all $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$, $v \in \mathbb{R}^d$, $c \in \mathbb{R}$, we define an infinite-width net with a linear unit by $h_{\alpha,v,c}(x) := h_{\alpha,c}(x) + v^\top x$. We prove in Appendix B that $R_1(f)$ is equivalent to:
$$R_1(f) = \min_{\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R}),\, v \in \mathbb{R}^d,\, c \in \mathbb{R}} \|\alpha\|_1 \quad \text{s.t.} \quad f = h_{\alpha,v,c}. \tag{13}$$
In fact, we show the minimizer of (13) is unique and is characterized as follows:
Lemma 1. $R_1(f) = \|\alpha^+\|_1$ where $\alpha^+ \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ is the unique even measure⁴ such that $f = h_{\alpha^+,v,c}$ for some $v \in \mathbb{R}^d$, $c \in \mathbb{R}$.
The proof of Lemma 1 is given in Appendix C. The uniqueness in Lemma 1 allows for a more explicit characterization of $R_1(f)$ in function space relative to $R(f)$, as we show in Section 4.
THE RADON TRANSFORM AND ITS DUAL
Our characterization of the representational cost R 1 (f ) in Section 4 is posed in terms of the Radon transform -a transform that is fundamental to computational imaging, and whose inverse is the basis of image reconstruction in computed tomography. For an investigation of its properties and applications, see Helgason (1999). Here we give a brief review of the Radon transform and its dual as needed for subsequent derivations; readers familiar with these topics can skip to Section 4.
The Radon transform $R$ represents a function $f : \mathbb{R}^d \to \mathbb{R}$ in terms of its integrals over all possible hyperplanes in $\mathbb{R}^d$, as parameterized by the unit normal direction to the hyperplane $w \in \mathbb{S}^{d-1}$ and the signed distance of the hyperplane from the origin $b \in \mathbb{R}$:
$$R\{f\}(w, b) := \int_{w^\top x = b} f(x)\, ds(x) \quad \text{for all } (w, b) \in \mathbb{S}^{d-1} \times \mathbb{R}, \tag{14}$$
where $ds(x)$ represents integration with respect to the $(d-1)$-dimensional surface measure on the hyperplane $w^\top x = b$. Note the Radon transform is an even function, i.e., $R\{f\}(w, b) = R\{f\}(-w, -b)$ for all $(w, b) \in \mathbb{S}^{d-1} \times \mathbb{R}$, since the equations $w^\top x = b$ and $-w^\top x = -b$ determine the same hyperplane. See Figure 1 for an illustration of the Radon transform in dimension $d = 2$.
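The sinusoidal curves in Figure 1(b) are easy to reproduce: the Radon transform of a Dirac delta at $x_0$ is supported on the curve $b = w(\theta)^\top x_0$. A minimal sketch (ours) for the three deltas of the figure:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 400)
w = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit directions w(theta)
for x0, color in [((-1, -1), "red"), ((1, 0), "green"), ((0, 1), "blue")]:
    # R{delta(. - x0)}(w, b) is supported on the sinusoid b = w(theta)^T x0
    b = w @ np.asarray(x0, dtype=float)
    print(color, "b(0) =", b[0])  # each (theta, b) curve is one sinogram trace
```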
The Radon transform is invertible for many common spaces of functions, and its inverse is a composition of the dual Radon transform $R^*$ (i.e., the adjoint of $R$) followed by a filtering step in the Fourier domain. The dual Radon transform $R^*$ maps a function $\varphi : \mathbb{S}^{d-1} \times \mathbb{R} \to \mathbb{R}$ to a function over $x \in \mathbb{R}^d$ by integrating over the subset of coordinates $(w, b) \in \mathbb{S}^{d-1} \times \mathbb{R}$ corresponding to all hyperplanes passing through $x$:
$$R^*\{\varphi\}(x) := \int_{\mathbb{S}^{d-1}} \varphi(w, w^\top x)\, dw \quad \text{for all } x \in \mathbb{R}^d, \tag{15}$$
where $dw$ represents integration with respect to the surface measure of the unit sphere $\mathbb{S}^{d-1}$. The filtering step is given by a $(d-1)/2$-power of the (negative) Laplacian $(-\Delta)^{(d-1)/2}$, where for any $s > 0$ the operator $(-\Delta)^{s/2}$ is defined in the Fourier domain by
$$\widehat{(-\Delta)^{s/2} f}(\xi) = \|\xi\|^s \hat{f}(\xi), \tag{16}$$
using $\hat{g}(\xi) := (2\pi)^{-d/2} \int g(x)\, e^{-i \xi^\top x}\, dx$ to denote the $d$-dimensional Fourier transform at the Fourier domain (frequency) variable $\xi \in \mathbb{R}^d$.

[Figure 1: (a) Dirac deltas $f(x) = \delta(x - (-1,-1))$ (red), $f(x) = \delta(x - (1, 0))$ (green), and $f(x) = \delta(x - (0, 1))$ (blue), and (b) their Radon transforms, each supported on a sinusoidal curve. If a function $f$ is a superposition of such $\delta$ functions, then $R\{f\}$ is the sum of the curves in (b); this is typically referred to as a "sinogram". Furthermore, the dual Radon transform in equation (15) integrates any function $\varphi(w, b)$ over all curves like one of the three in (b).]

When $d$ is odd, $(-\Delta)^{(d-1)/2}$ is the same as applying the usual Laplacian $(d-1)/2$ times, i.e., $(-\Delta)^{(d-1)/2} = (-1)^{(d-1)/2} \Delta^{(d-1)/2}$, while if $d$ is even
it is a pseudo-differential operator given by convolution with a singular kernel. Combining these two operators gives the inversion formula
$$f = \gamma_d\, (-\Delta)^{(d-1)/2} R^*\{R\{f\}\},$$
where $\gamma_d$ is a constant depending on the dimension $d$,
which holds for f belonging to many common function spaces (see, e.g., Helgason (1999)).
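As a complementary numerical sketch (ours), the dual transform (15) in $d = 2$ can be approximated by quadrature over $\mathbb{S}^1$; the test function $\varphi$ below is an arbitrary even example, not from the text.

```python
import numpy as np

def dual_radon_2d(phi, x, n_theta=2000):
    """Approximate R*{phi}(x) = int_{S^1} phi(w, w^T x) dw by the
    trapezoid rule on the periodic parameterization w(theta)."""
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    w = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return phi(w, w @ x).mean() * 2 * np.pi

phi = lambda w, b: np.exp(-b**2)  # even: phi(-w, -b) = phi(w, b)
print(dual_radon_2d(phi, np.array([1.0, 0.5])))
```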
The dual Radon transform is also invertible by a similar formula, albeit under more restrictive conditions on the function space. We use the following formula due to Solmon (1987) that holds for all Schwartz class functions⁵ on $\mathbb{S}^{d-1} \times \mathbb{R}$, which we denote by $S(\mathbb{S}^{d-1} \times \mathbb{R})$:
Lemma 2 (Solmon (1987)). If $\varphi$ is an even function⁶, i.e., $\varphi(-w, -b) = \varphi(w, b)$ for all $(w, b) \in \mathbb{S}^{d-1} \times \mathbb{R}$, belonging to the Schwartz class $S(\mathbb{S}^{d-1} \times \mathbb{R})$, then
$$\gamma_d\, R\{(-\Delta)^{(d-1)/2} R^*\{\varphi\}\} = \varphi, \quad \text{where } \gamma_d = \frac{1}{2(2\pi)^{d-1}}. \tag{17}$$
Finally, we recall the Fourier slice theorem for the Radon transform (see, e.g., Helgason (1999)): let $f \in L^1(\mathbb{R}^d)$; then for all $\sigma \in \mathbb{R}$ and $w \in \mathbb{S}^{d-1}$ we have
$$\mathcal{F}_b R\{f\}(w, \sigma) = \hat{f}(\sigma \cdot w), \tag{18}$$
where $\mathcal{F}_b$ indicates the 1-D Fourier transform in the offset variable $b$. From this it is easy to establish the following intertwining property of the Laplacian and the Radon transform: assuming $f$ and $\Delta f$ are in $L^1(\mathbb{R}^d)$, we have
$$R\{\Delta f\} = \partial_b^2 R\{f\}, \tag{19}$$
where $\partial_b$ is the partial derivative in the offset variable $b$. More generally, for any positive integer $s$, assuming $f$ and $(-\Delta)^{s/2} f$ are in $L^1(\mathbb{R}^d)$ we have
$$R\{(-\Delta)^{s/2} f\} = (-\partial_b^2)^{s/2} R\{f\}, \tag{20}$$
where fractional powers of $-\partial_b^2$ can be defined in the Fourier domain, the same as fractional powers of the Laplacian. In particular, if $d$ is odd,
$$(-\partial_b^2)^{(d-1)/2} = (-1)^{(d-1)/2} \partial_b^{d-1}, \quad \text{while if } d \text{ is even,} \quad (-\partial_b^2)^{(d-1)/2} = (H \partial_b)^{d-1},$$
where H is the Hilbert transform in the offset variable b.
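In both parities the operator acts in the Fourier domain as multiplication by $|\sigma|^{d-1}$, so a discrete approximation needs only an FFT. A minimal sketch (ours), assuming uniformly spaced samples of a function of $b$ that decays within the sampled window:

```python
import numpy as np

def apply_filter(samples, db, d):
    """Apply (-d^2/db^2)^{(d-1)/2}, i.e. multiply by |sigma|^{d-1} in the
    Fourier domain (sigma = angular frequency of the offset variable b)."""
    sigma = 2 * np.pi * np.fft.fftfreq(len(samples), d=db)
    return np.fft.ifft(np.fft.fft(samples) * np.abs(sigma) ** (d - 1)).real
```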
REPRESENTATIONAL COST IN FUNCTION SPACE: THE R-NORM
Our starting point is to relate the Laplacian of an infinite-width net to the dual Radon transform of its defining measure. In particular, consider an infinite-width net $f$ defined in terms of a smooth density $\alpha(w, b)$ over $\mathbb{S}^{d-1} \times \mathbb{R}$ that decreases rapidly in $b$, so that we can write
$$f(x) = \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \big([w^\top x - b]_+ - [-b]_+\big)\, \alpha(w, b)\, dw\, db + v^\top x + c. \tag{21}$$
Differentiating twice inside the integral, the Laplacian $\Delta f(x) = \sum_{i=1}^{d} \partial_{x_i}^2 f(x)$ is given by
$$\Delta f(x) = \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \delta(w^\top x - b)\, \alpha(w, b)\, dw\, db = \int_{\mathbb{S}^{d-1}} \alpha(w, w^\top x)\, dw, \tag{22}$$
where $\delta(\cdot)$ denotes a Dirac delta. We see that the right-hand side of (22) is precisely the dual Radon transform of $\alpha$, i.e., we have shown $\Delta f = R^*\{\alpha\}$. Applying the inversion formula for the dual Radon transform given in (17) to this identity, and using the characterization of $R_1(f)$ given in Lemma 1, immediately gives the following result.
Lemma 3. Suppose $f = h_{\alpha,v,c}$ for some $\alpha \in S(\mathbb{S}^{d-1} \times \mathbb{R})$ with $\alpha$ even, and $v \in \mathbb{R}^d$, $c \in \mathbb{R}$. Then $\alpha = -\gamma_d\, R\{(-\Delta)^{(d+1)/2} f\}$, and $R_1(f) = \gamma_d\, \|R\{(-\Delta)^{(d+1)/2} f\}\|_1$ where $\gamma_d = \frac{1}{2(2\pi)^{d-1}}$.
See Figure 2 for an illustration of Lemma 3 in the case $d = 2$. This result suggests that more generally, if we are given a function $f$, we ought to be able to compute $R_1(f)$ using the formula in Lemma 3. The following result, proved in Appendix C, shows this is indeed the case assuming $f$ is integrable and sufficiently smooth, which for simplicity we state in the case of odd dimensions $d$.⁷
Proposition 1. Suppose $d$ is odd. If both $f \in L^1(\mathbb{R}^d)$ and $\Delta^{(d+1)/2} f \in L^1(\mathbb{R}^d)$, then
$$R_1(f) = \gamma_d\, \|R\{\Delta^{(d+1)/2} f\}\|_1 = \gamma_d\, \|\partial_b^{d+1} R\{f\}\|_1 < \infty. \tag{23}$$
Here we used the intertwining property of the Radon transform and the Laplacian to write $R\{\Delta^{(d+1)/2} f\} = \partial_b^{d+1} R\{f\}$ (see Section 3 for more details).
Given these results, one might expect for an arbitrary function $f$ we should have $R_1(f)$ equal to one of the expressions in (23). However, for many functions of interest these quantities are not classically well-defined. For example, the finite-width ReLU net $f(x) = \sum_{i=1}^{n} a_i [w_i^\top x - b_i]_+$ is a piecewise linear function that is non-smooth along each hyperplane $w_i^\top x = b_i$, so its derivatives can only be understood in the sense of generalized functions or distributions. Similarly, in this case the Radon transform of $f$ is not well-defined since $f$ is unbounded and not integrable along hyperplanes.
Instead, we use duality to define a functional (the "R-norm") that extends to the more general case where $f$ is possibly non-smooth or not integrable along hyperplanes. In particular, we define a functional on the space of all Lipschitz continuous functions.⁸ The main idea is to re-express the $L^1$-norm in (23) as a supremum of the inner product over a space of dual functions $\psi : \mathbb{S}^{d-1} \times \mathbb{R} \to \mathbb{R}$, i.e., using the fact that $R^*$ is the adjoint of $R$ and the Laplacian $\Delta$ is self-adjoint, we write
$$\|R\{\Delta^{(d+1)/2} f\}\|_1 = \sup_{\|\psi\|_\infty \le 1} \langle R\{\Delta^{(d+1)/2} f\}, \psi \rangle = \sup_{\|\psi\|_\infty \le 1} \langle f, \Delta^{(d+1)/2} R^*\{\psi\} \rangle, \tag{24}$$
then restrict $\psi$ to a space where $\Delta^{(d+1)/2} R^*\{\psi\}$ is always well-defined. More formally, we have:
Definition 1. For any Lipschitz continuous function $f : \mathbb{R}^d \to \mathbb{R}$ define its R-norm⁹ $\|f\|_R$ by
$$\|f\|_R := \sup\left\{ -\gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle : \psi \in S(\mathbb{S}^{d-1} \times \mathbb{R}),\ \psi \text{ even},\ \|\psi\|_\infty \le 1 \right\}, \tag{25}$$
where $\gamma_d = \frac{1}{2(2\pi)^{d-1}}$, $S(\mathbb{S}^{d-1} \times \mathbb{R})$ is the space of Schwartz functions on $\mathbb{S}^{d-1} \times \mathbb{R}$, and $\langle f, g \rangle := \int_{\mathbb{R}^d} f(x) g(x)\, dx$. If $f$ is not Lipschitz we define $\|f\|_R = +\infty$.
⁷For $d$ even, Proposition 1 holds with the pseudo-differential operators $(-\Delta)^{(d+1)/2}$ and $(-\partial_b^2)^{(d+1)/2}$ in place of $\Delta^{(d+1)/2}$ and $\partial_b^{d+1}$; see Section 3.
⁸Recall that $f$ is Lipschitz continuous if there exists a constant $L$ (depending on $f$) such that $|f(x) - f(y)| \le L \|x - y\|$ for all $x, y \in \mathbb{R}^d$.
⁹Strictly speaking, the functional $\|\cdot\|_R$ is not a norm, but it is a semi-norm on the space of functions for which it is finite; see Appendix E.

[Figure 2: Illustration of Lemma 3 for $d = 2$. Left: $f(x)$. Middle: $g(x) = (-\Delta)^{3/2} f(x)$. Right: applying the Radon transform $R\{g\}$ gives the "sinogram", plotted as a function of the angle $\theta$ of the unit direction $w(\theta) = [\cos(\theta), \sin(\theta)]$ and the offset parameter $b$. Up to a scaling, $R\{g\}$ gives the weights used to represent $f$ as an infinite-width two-layer ReLU network, and the R-norm is its scaled $L^1$-norm: $R_1(f) = \|f\|_R = \frac{1}{4\pi}\|R\{g\}\|_1$.]
We prove in Appendix C that the R-norm is well-defined, though not always finite, for all Lipschitz functions and, whether finite or infinite, is always equal to the representational cost $R_1(\cdot)$:
Theorem 1. $R_1(f) = \|f\|_R$ for all functions $f$. In particular, $R_1(f)$ is finite if and only if $f$ is Lipschitz and $\|f\|_R$ is finite.
We give the proof of Theorem 1 in Appendix C, but the following example illustrates many key elements of the proof.
Example 1. We compute $R_1(f) = \|f\|_R$ in the case where $f$ is a finite-width two-layer ReLU network. First, consider the case where $f$ consists of a single ReLU unit: $f(x) = a_1 [w_1^\top x - b_1]_+$ for some $a_1 \in \mathbb{R}$ and $(w_1, b_1) \in \mathbb{S}^{d-1} \times \mathbb{R}$. Note that $\Delta f(x) = a_1 \delta(w_1^\top x - b_1)$ in a distributional sense, i.e., for any smooth test function $\varphi$ we have $\langle \Delta f, \varphi \rangle = \langle f, \Delta \varphi \rangle = a_1 \int \varphi(x) \delta(w_1^\top x - b_1)\, dx = a_1 R\{\varphi\}(w_1, b_1)$. So for any even $\psi \in S(\mathbb{S}^{d-1} \times \mathbb{R})$ we have
$$-\gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle = \gamma_d \langle \Delta f, (-\Delta)^{(d-1)/2} R^*\{\psi\} \rangle \tag{26}$$
$$= a_1 \gamma_d\, R\{(-\Delta)^{(d-1)/2} R^*\{\psi\}\}(w_1, b_1) \tag{27}$$
$$= a_1 \psi(w_1, b_1), \tag{28}$$
where in the last step we used the inversion formula (17). Since the supremum defining $\|f\|_R$ is over all even $\psi \in S(\mathbb{S}^{d-1} \times \mathbb{R})$ such that $\|\psi\|_\infty \le 1$, taking any $\psi^*$ such that $\psi^*(w_1, b_1) = \operatorname{sign}(a_1)$ and $|\psi^*(w, b)| \le 1$ otherwise, we see that $\|f\|_R = |a_1|$. The general case now follows by linearity: let $f(x) = \sum_{i=1}^{k} a_i [w_i^\top x - b_i]_+$ such that all the pairs $\{(w_i, b_i)\}_{i=1}^k \cup \{(-w_i, -b_i)\}_{i=1}^k$ are distinct. Then for any $\psi \in S(\mathbb{S}^{d-1} \times \mathbb{R})$ we have
$$-\gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle = \sum_{i=1}^{k} a_i \psi(w_i, b_i). \tag{29}$$
Letting $\psi^*$ be any even Schwartz function such that $\psi^*(w_i, b_i) = \psi^*(-w_i, -b_i) = \operatorname{sign}(a_i)$ for all $i = 1, ..., k$ and $|\psi^*(w, b)| \le 1$ otherwise, we see that $R_1(f) = \|f\|_R = \sum_{i=1}^{k} |a_i|$.
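As a numerical sanity check (ours), in $d = 1$ this quantity is the total variation of $f'$ (cf. Savarese et al. (2019)): each unit with $w_i \in \{\pm 1\}$ contributes a slope jump of magnitude $|a_i|$, which second differences recover when the kinks are distinct.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4
a = rng.normal(size=k)
w = rng.choice([-1.0, 1.0], size=k)
b = rng.uniform(-1, 1, size=k)  # kinks at b_i / w_i, generically distinct
f = lambda x: np.sum(a * np.maximum(w * x[:, None] - b, 0.0), axis=1)

h = 1e-4
x = np.arange(-3, 3, h)  # grid covering all kinks
tv = np.sum(np.abs(np.diff(f(x), 2))) / h  # total variation of f'
print(tv, np.sum(np.abs(a)))  # approximately equal
```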
The representational cost $R(f)$ defined without the unregularized linear unit is more difficult to characterize explicitly. However, we prove that $R(f)$ is finite if and only if $\|f\|_R$ is finite, and give bounds for $R(f)$ in terms of $\|f\|_R$ and the norm of the gradient of the function "at infinity", similar to the expressions derived in Savarese et al. (2019) in the 1-D setting.
Theorem 2. $R(f)$ is finite if and only if $\|f\|_R$ is finite, in which case we have the bounds
$$\max\{\|f\|_R,\ 2\|\nabla f(\infty)\|\} \le R(f) \le \|f\|_R + 2\|\nabla f(\infty)\|, \tag{30}$$
where $\nabla f(\infty) := \lim_{r \to \infty} \frac{1}{c_d r^{d-1}} \int_{\|x\| = r} \nabla f(x)\, ds(x) \in \mathbb{R}^d$ with $c_d := \int_{\mathbb{S}^{d-1}} dw = \frac{2\pi^{d/2}}{\Gamma(d/2)}$. In particular, if $\nabla f(\infty) = 0$ then $R(f) = R_1(f) = \|f\|_R$.
We give the proof of Theorem 2 in Appendix D. The lower bound $\max\{\|f\|_R,\ 2\|\nabla f(\infty)\|\}$ is analogous to the expression for the 1-D representational cost (1) obtained in Savarese et al. (2019). From this, one might speculate that $R(f)$ is equal to $\max\{\|f\|_R,\ 2\|\nabla f(\infty)\|\}$. However, in Appendix D we show this is not the case: there are examples of functions $f$ in all dimensions such that $R(f)$ attains the upper bound in a non-trivial way (e.g., $f(x, y) = |x| + y$ in $d = 2$).
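A short numerical sketch (ours) of that example estimates $\nabla f(\infty)$ for $f(x, y) = |x| + y$ by averaging the almost-everywhere gradient $(\operatorname{sign}(x), 1)$ over a large circle; here $\|f\|_R = 2$ (two unit-weight absolute-value units), so the upper bound in (30) gives $R(f) = 2 + 2 = 4$.

```python
import numpy as np

def grad_at_infinity(r=1e6, n=100_000):
    """Average grad f over the circle of radius r for f(x, y) = |x| + y,
    approximating (1/(c_d r^{d-1})) int_{|x|=r} grad f ds with d = 2."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    gx = np.sign(r * np.cos(theta))  # d|x|/dx away from x = 0
    gy = np.ones_like(theta)
    return np.array([gx.mean(), gy.mean()])

print(grad_at_infinity())  # ~ (0, 1), so ||grad f(inf)|| = 1
```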
PROPERTIES OF THE R-NORM
In Appendix E we prove several useful properties of the R-norm. In particular, we show the R-norm is in fact a semi-norm, i.e., it is absolutely homogeneous and satisfies the triangle inequality, while $\|f\|_R = 0$ if and only if $f$ is affine. We also show the R-norm is invariant to coordinate translations and rotations, and prove the following scaling law under contractions/dilations:
Proposition 2. If $f_\varepsilon(x) := f(x/\varepsilon)$ for any $\varepsilon > 0$, then $\|f_\varepsilon\|_R = \varepsilon^{-1} \|f\|_R$.
Proposition 2 shows that "spikey" functions will necessarily have large R-norm. For example, let $f$ be any non-negative function supported on the ball of radius 1 with maximum height 1 such that $\|f\|_R$ is finite. Then the contraction $f_\varepsilon$ is supported on the ball of radius $\varepsilon$ with maximum height 1, but $\|f_\varepsilon\|_R = \varepsilon^{-1} \|f\|_R$ blows up as $\varepsilon \to 0$.
From a generalization perspective, the fact that the R-norm blows up with contractions is a desirable property, since otherwise the minimum norm fit to data would be spikes on data points. In particular, this is what would happen if the representational cost involved derivatives of order lower than $d + 1$, and so in this sense it is not a coincidence that $\|f\|_R$ involves derivatives of order $d + 1$.
Finally, we show the smoothness requirements of the R-norm are also reflected in the Fourier domain. In particular, we show that for a broad class of functions, in order for the R-norm to be finite the Fourier transform of $f$ must decay rapidly along every ray. A precise statement is given in Proposition 12 in Appendix E.
CONSEQUENCES, APPLICATIONS AND DISCUSSION
Our characterization of the representational cost for multivariate functions in terms of the R-norm is unfortunately not as simple as the characterization in the univariate case. Nevertheless, it is often easy to evaluate, and is a powerful tool for studying the representational power of bounded norm ReLU networks.
SOBOLEV SPACES
Here we relate Sobolev spaces and the R-norm. The key result is the following upper bound, which is proved in Appendix F.
Proposition 3. If $f : \mathbb{R}^d \to \mathbb{R}$ is Lipschitz and $(-\Delta)^{(d+1)/2} f$ exists in a weak sense¹⁰ then
$$\|f\|_R \le c_d \gamma_d\, \|(-\Delta)^{(d+1)/2} f\|_1, \tag{31}$$
where $c_d = \int_{\mathbb{S}^{d-1}} dw = \frac{2\pi^{d/2}}{\Gamma(d/2)}$ and $\gamma_d = \frac{1}{2(2\pi)^{d-1}}$. When $d$ is odd, $\|(-\Delta)^{(d+1)/2} f\|_1 = \|\Delta^{(d+1)/2} f\|_1 \le \|f\|_{W^{d+1,1}}$, where $\|f\|_{W^{d+1,1}}$ is the Sobolev norm given by the sum of the $L^1$-norm of $f$ and the $L^1$-norms of all its weak partial derivatives up to order $d + 1$. This gives the following immediate corollary to Proposition 3:
Corollary 1. Suppose $d$ is odd. If $f$ belongs to the Sobolev space $W^{d+1,1}(\mathbb{R}^d)$, i.e., $f$ and all its weak derivatives up to order $d + 1$ are in $L^1(\mathbb{R}^d)$, then $\|f\|_R$ is finite and $\|f\|_R \le c_d \gamma_d\, \|f\|_{W^{d+1,1}}$.
Corollary 1 shows that the space of functions with finite R-norm is "dense" in the space of all functions, in the sense that it contains a full Sobolev space.
RADIAL BUMP FUNCTIONS
Here we study the case where $f$ is a radially symmetric function, i.e., $f(x) = g(\|x\|)$ for some function $g : [0, \infty) \to \mathbb{R}$. In this case, the R-norm is expressible entirely in terms of derivatives of the radial profile function $g$, as shown in the following result, which is proved in Appendix G.
Proposition 4. Suppose $d \ge 3$ is odd. If $f \in L^1(\mathbb{R}^d)$ with $f(x) = g(\|x\|)$ then
$$\|f\|_R = \frac{2}{(d-2)!} \int_0^\infty \big| \partial^{(d+1)} \rho(b) \big|\, db, \quad \text{where } \rho(b) = \int_b^\infty g(t)\,(t^2 - b^2)^{(d-3)/2}\, t\, dt. \tag{32}$$
For example, in the $d = 3$ dimensional case, we have
$$\|f\|_R = 2 \int_0^\infty |b\, \partial^3 g(b) + 3\, \partial^2 g(b)|\, db. \quad (d = 3) \tag{33}$$
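To see how (33) follows from (32), the following worked steps (ours) differentiate $\rho(b) = \int_b^\infty g(t)\, t\, dt$ for $d = 3$, where $(d-2)! = 1$:

```latex
\begin{align*}
\partial_b \rho(b)   &= -\,b\,g(b), \\
\partial_b^2 \rho(b) &= -\,b\,g'(b) - g(b), \\
\partial_b^3 \rho(b) &= -\,b\,g''(b) - 2g'(b), \\
\partial_b^4 \rho(b) &= -\,b\,g'''(b) - 3g''(b),
\end{align*}
```

so $\|f\|_R = 2\int_0^\infty |\partial^4 \rho(b)|\, db = 2\int_0^\infty |b\, g'''(b) + 3 g''(b)|\, db$, recovering (33).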
More generally, for any odd dimension $d \ge 3$ a simple induction shows (32) is equivalent to
$$\|f\|_R = \frac{2}{(d-2)!} \int_0^\infty |Q_d\{g\}(b)|\, db, \tag{34}$$
where $Q_d$ is a differential operator of degree $(d+3)/2$ having the form
$$Q_d = \sum_{k=2}^{(d+3)/2} p_{k,d}(b)\, \partial^k,$$
where each $p_{k,d}(b)$ is a polynomial in $b$ of degree $k - 2$. In particular, if the weak derivative $\partial^{(d+1)/2} g$ exists and has bounded variation, then $\|f\|_R$ is finite.
Example 2. Consider the radial bump function $f(x) = g(\|x\|)$ with $x \in \mathbb{R}^3$, where
$$g(r) = \begin{cases} (1 - r^2)^2 & \text{if } 0 \le r < 1 \\ 0 & \text{if } r \ge 1, \end{cases} \tag{35}$$
which is non-negative, supported on the unit ball, and has maximum height $f(0) = 1$, and let $f_\varepsilon(x) = f(x/\varepsilon)$ be the contraction of $f$ to a ball of radius $\varepsilon$ with the same height. Then using formula (33) and the dilation property (2), we can compute
$$\|f_\varepsilon\|_R = \|f\|_R / \varepsilon = 16\left(1 + \tfrac{1}{5}\big(5 + 2\sqrt{5}\big)\right)\big/\varepsilon. \tag{36}$$
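A quick numerical check (ours) of (36) at $\varepsilon = 1$ via (33): on $(0, 1)$ one computes $b\,g'''(b) + 3g''(b) = 60b^2 - 12$, and the jump of $g''$ at $b = 1$ (from 8 to 0) adds a Dirac term of mass 8 to the distributional integrand.

```python
import numpy as np

b = np.linspace(0.0, 1.0, 2_000_001)
smooth_part = np.mean(np.abs(60 * b**2 - 12))   # ~ int_0^1 |60 b^2 - 12| db
boundary_jump = 8.0                              # |jump of g''| at b = 1
f_R = 2 * (smooth_part + boundary_jump)
closed_form = 16 * (1 + (5 + 2 * np.sqrt(5)) / 5)
print(f_R, closed_form)  # both ~ 46.31
```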
Note that if we move up to dimension $d = 5$, then the function defined by (35) no longer has finite norm since its derivatives of order $(d+3)/2 = 4$ do not exist; this phenomenon is explored in more detail in the next example.
Example 3. Suppose $d \ge 3$ is odd. Consider the radial bump function $f_{d,k}(x) = g_{d,k}(\|x\|)$ with $x \in \mathbb{R}^d$, where
$$g_{d,k}(r) = \begin{cases} (1 - r^2)^k & \text{if } 0 \le r < 1 \\ 0 & \text{if } r \ge 1, \end{cases} \tag{37}$$
for any $k > 0$. We prove $\|f_{d,k}\|_R$ is finite if and only if $k \ge \frac{d+1}{2}$ (see Appendix G). To illustrate the scaling with dimension $d$, in Appendix G we prove that for the choice $k_d = (d+1)/2 + 2$ we have the bounds
$$(d+5)d \le \|f_{d,k_d}\|_R \le 2d(d+5), \quad \text{hence} \quad \|f_{d,k_d}\|_R \sim d^2.$$
Similarly, by the dilation property (2), a contraction of $f_{d,k_d}$ to the ball of radius $\varepsilon$ will have R-norm scaling as $\sim d^2/\varepsilon$. The next example¹¹ shows that there is a universal choice of radial bump function in all (odd) dimensions with finite R-norm:
Example 4. Suppose $d \ge 3$ is odd. Consider the radial bump function $f(x) = g(\|x\|)$ with $x \in \mathbb{R}^d$, where
$$g(r) = \begin{cases} e^{-\frac{1}{1 - r^2}} & \text{if } 0 \le r < 1 \\ 0 & \text{if } r \ge 1. \end{cases} \tag{38}$$
Since $g$ is $C^\infty$-smooth and its derivatives of all orders are $L^1$-bounded, $f$ has finite R-norm by Proposition 4.
PIECEWISE LINEAR FUNCTIONS
Every finite-width two-layer ReLU network is a continuous piecewise linear function. However, the reverse is not true. For example, in dimensions two and above no compactly supported piecewise linear function is expressible as a finite-width two-layer ReLU network. A natural question then is: what piecewise linear functions are represented by bounded norm infinite-width nets, i.e., have finite R-norm? In particular, can a compactly supported piecewise linear function have finite R-norm?
Here we show this is generally not the case.
Before stating our result, we will need a few definitions relating to the geometry of piecewise linear functions. Recall that any piecewise linear function (with finitely many pieces) is divided into polyhedral regions separated by a finite number of boundaries. Each boundary is $(d-1)$-dimensional and contained in a unique hyperplane. Hence, with every boundary we associate the unique (up to sign) unit normal to the hyperplane containing it, which we call the boundary normal. Additionally, in the case of a compactly supported piecewise linear function, every boundary set that touches the complement of the support set we call an outer boundary; otherwise we call it an inner boundary.
The following result is proved in Appendix H, and is a consequence of the Fourier decay estimates established in Appendix E.
Proposition 5. Suppose $f : \mathbb{R}^d \to \mathbb{R}$ is a continuous piecewise linear function with compact support such that one (or both) of the following conditions hold:
(a) at least one of the boundary normals is not parallel with every other boundary normal, or
(b) f is everywhere convex (or everywhere concave) when restricted to its support, and at least one of the inner boundary normals is not parallel with all outer boundary normals.
Then f has infinite R-norm.
Note that condition (a) holds for a "generic" piecewise linear function with compact support, i.e., if a function fails to satisfy (a) we can always perturb it slightly such that (a) holds. In this sense no "generic" compactly supported piecewise linear function has finite R-norm. In fact, we are not aware of any compactly supported piecewise linear function with finite R-norm, but our theory does not rule them out a priori.
This result suggests that the space of piecewise linear functions expressible as a bounded norm infinite-width two-layer ReLU network is not qualitatively different than those captured by finitewidth networks. We go further and make the following conjecture: Conjecture 1. A continuous piecewise linear function f has finite R-norm if and only if it is exactly representable by a finite-width two-layer ReLU network.
DEPTH SEPARATION
It is known that depth can yield dramatic savings in the number of units needed to approximate certain functions (see, e.g., Yarotsky (2017)). The following example shows that, also in terms of the norm, such a depth separation exists for ReLU nets:
Example 5. The pyramid function $f(x) = [1 - \|x\|_1]_+$ is a compactly supported piecewise linear function that satisfies condition (b) of Proposition 5, hence has infinite representational cost as a two-layer ReLU network ($R(f) = R_1(f) = +\infty$), but can be exactly represented as a finite-width three-layer ReLU network.
Interestingly, this result shows that, in terms of the norm, we have a qualitative rather than quantitative depth separation: the required norm with three layers is finite, while with only two layers it is not merely very large, but infinite. In contrast, in standard depth separation results, the separation is quantitative: we can compensate for a decrease in depth and use more neurons to achieve the same approximation quality. It would be interesting to further strengthen Example 5 by obtaining a quantitative lower bound on the norm required to $\varepsilon$-approximate the pyramid with an infinite-width two-layer ReLU network.
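For reference, the exact three-layer representation of the pyramid is tiny; a minimal sketch (ours): the first ReLU layer computes $[x_i]_+$ and $[-x_i]_+$ (so that $|x_i| = [x_i]_+ + [-x_i]_+$), and a single second-layer unit applies $[1 - \cdot]_+$ to their sum.

```python
import numpy as np

def pyramid(x):
    """f(x) = [1 - ||x||_1]_+ as a depth-3 ReLU network with 2d + 1 units."""
    h = np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])  # layer 1
    return np.maximum(1.0 - h.sum(), 0.0)                      # layer 2

x = np.array([0.2, -0.3])
assert np.isclose(pyramid(x), max(1.0 - np.abs(x).sum(), 0.0))
```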
THE R-NORM IS NOT A RKHS NORM
There is an ongoing debate in the community on whether neural network learning can be simulated or replicated by kernel machines with the "right" kernel. In this context, it is interesting to ask whether the inductive bias we uncover can be captured by a kernel, or in other words whether the R-norm is an RKHS (semi-)norm. The answer is no:
Proposition 6. The R-norm is not a RKHS (semi-)norm.
This is seen immediately by the failure of the parallelogram law to hold. For example, if $f_1(x) = [w_1^\top x]_+$ and $f_2(x) = [w_2^\top x]_+$ with $w_1, w_2 \in \mathbb{S}^{d-1}$ distinct, then by Example 1 we have $\|f_1\|_R = \|f_2\|_R = 1$, while $\|f_1 + f_2\|_R = \|f_1 - f_2\|_R = 2$, and so $2(\|f_1\|_R^2 + \|f_2\|_R^2) \ne \|f_1 + f_2\|_R^2 + \|f_1 - f_2\|_R^2$.
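Spelled out (ours, using the R-norms computed via Example 1):

```python
f1_R, f2_R = 1.0, 1.0          # single units with unit outer weight
sum_R, diff_R = 2.0, 2.0       # f1 + f2 and f1 - f2 each use two units
assert 2 * (f1_R**2 + f2_R**2) != sum_R**2 + diff_R**2  # 4 != 8: law fails
```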
GENERALIZATION IMPLICATIONS
Neyshabur et al. (2015) shows that training an unbounded-width neural network while regularizing the $\ell_2$ norm of the weights results in a sample complexity proportional to a variant¹² of $R(f)$. This paper gives an explicit characterization of $R(f)$ and thus of the sample complexity of learning a function using regularized unbounded-width neural networks.
APPENDICES
A INFINITE-WIDTH NETS
Measures and infinite-width nets. Let $\alpha$ be a signed measure¹³ defined on $\mathbb{S}^{d-1} \times \mathbb{R}$, and let $\|\alpha\|_1 = \int d|\alpha|$ denote its total variation norm. We let $M(\mathbb{S}^{d-1} \times \mathbb{R})$ denote the space of measures on $\mathbb{S}^{d-1} \times \mathbb{R}$ with finite total variation norm. Since $\mathbb{S}^{d-1} \times \mathbb{R}$ is a locally compact space, $M(\mathbb{S}^{d-1} \times \mathbb{R})$ is the Banach space dual of $C_0(\mathbb{S}^{d-1} \times \mathbb{R})$, the space of continuous functions on $\mathbb{S}^{d-1} \times \mathbb{R}$ vanishing at infinity (Malliavin, 2012, Chapter 2, Theorem 6.6), and
$$\|\alpha\|_1 = \sup\left\{ \int \varphi\, d\alpha : \varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R}),\ \|\varphi\|_\infty \le 1 \right\}. \tag{39}$$
For any $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ and $\varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R})$, we often use $\langle \alpha, \varphi \rangle$ to denote $\int \varphi\, d\alpha$.
Any $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ can be extended uniquely to a continuous linear functional on $C_b(\mathbb{S}^{d-1} \times \mathbb{R})$, the space of continuous and bounded functions on $\mathbb{S}^{d-1} \times \mathbb{R}$. In particular, since the function $\varphi(w, b) = [w^\top x - b]_+ - [-b]_+$ belongs to $C_b(\mathbb{S}^{d-1} \times \mathbb{R})$, we see that the infinite-width net
$$h_\alpha(x) := \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \big([w^\top x - b]_+ - [-b]_+\big)\, d\alpha(w, b) \tag{40}$$
is well-defined for all $x \in \mathbb{R}^d$. As shown above, this ensures the integral is always well-defined for any measure $\alpha$ with finite total variation. Alternatively, we could have restricted to measures that have finite first moment, i.e., $\int_{\mathbb{S}^{d-1} \times \mathbb{R}} |b|\, d|\alpha|(w, b) < \infty$, which ensures the definition $\tilde{h}_\alpha(x) := \int_{\mathbb{S}^{d-1} \times \mathbb{R}} [w^\top x - b]_+\, d\alpha(w, b)$ proposed in Savarese et al. (2019) is always well-defined. However, restricting to measures with finite first moment complicates the function space description, and excludes from our analysis certain functions that are still naturally defined as limits of bounded norm finite-width networks, and so we opt for the definition above instead. In the case that $\alpha$ has a finite first moment the difference between definitions is immaterial, since $h_\alpha$ and $\tilde{h}_\alpha$ are equal up to an additive constant, which implies they have the same representational cost under $R(\cdot)$ and $R_1(\cdot)$.
Even and odd measures. We say $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ is even if
$$\int_{\mathbb{S}^{d-1} \times \mathbb{R}} \varphi(w, b)\, d\alpha(w, b) = \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \varphi(-w, -b)\, d\alpha(w, b) \quad \text{for all } \varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R}), \tag{41}$$
or $\alpha$ is odd if
$$\int_{\mathbb{S}^{d-1} \times \mathbb{R}} \varphi(w, b)\, d\alpha(w, b) = -\int_{\mathbb{S}^{d-1} \times \mathbb{R}} \varphi(-w, -b)\, d\alpha(w, b) \quad \text{for all } \varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R}). \tag{42}$$
It is easy to show every measure $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ is uniquely decomposable as $\alpha = \alpha^+ + \alpha^-$ where $\alpha^+$ is even and $\alpha^-$ is odd, which we call the even and odd decomposition of $\alpha$. For example, if $\alpha$ has a density $\mu(w, b)$ then $\alpha^+$ is the measure with density $\mu^+(w, b) = \frac{1}{2}(\mu(w, b) + \mu(-w, -b))$ and $\alpha^-$ is the measure with density $\mu^-(w, b) = \frac{1}{2}(\mu(w, b) - \mu(-w, -b))$. We let $M(\mathbb{P}^d)$ denote the subspace of all even measures in $M(\mathbb{S}^{d-1} \times \mathbb{R})$, which is the Banach space dual of $C_0(\mathbb{P}^d)$, the subspace of all even functions $\varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R})$. Even measures play an important role in our results because of the following observations. Let $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ with even and odd decomposition $\alpha = \alpha^+ + \alpha^-$. Then we have $h_\alpha = h_{\alpha^+} + h_{\alpha^-}$. By the identity $[t]_+ + [-t]_+ = |t|$ we can show
$$h_{\alpha^+}(x) = \frac{1}{2} \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \big(|w^\top x - b| - |b|\big)\, d\alpha^+(w, b). \tag{43}$$
Likewise, by the identity $[t]_+ - [-t]_+ = t$ we have
$$h_{\alpha^-}(x) = v_0^\top x, \tag{44}$$
where $v_0 = \frac{1}{2} \int_{\mathbb{S}^{d-1} \times \mathbb{R}} w\, d\alpha^-(w, b)$. Hence, $h_\alpha$ decomposes into a sum of a component with absolute value activations and a linear function. In particular, if $f = h_{\alpha,v,c}$ for some $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$, $v \in \mathbb{R}^d$, $c \in \mathbb{R}$, letting $\alpha^+$ be the even part of $\alpha$, we always have $f = h_{\alpha^+,v',c}$ for some $v' \in \mathbb{R}^d$. In other words, we lose no generality by restricting ourselves to infinite-width nets of the form $f = h_{\alpha,v,c}$ where $\alpha$ is even (i.e., $\alpha \in M(\mathbb{P}^d)$).
We will need the following fact about even and odd decompositions of measures under the total variation norm:
Proposition 7. Let $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ with $\alpha = \alpha^+ + \alpha^-$ where $\alpha^+$ is even and $\alpha^-$ is odd. Then $\|\alpha^+\|_1 \le \|\alpha\|_1$ and $\|\alpha^-\|_1 \le \|\alpha\|_1$.
Proof. For any $\varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R})$ we can write $\varphi = \varphi^+ + \varphi^-$ where $\varphi^+(w, b) = \frac{1}{2}(\varphi(w, b) + \varphi(-w, -b))$ is even and $\varphi^-(w, b) = \frac{1}{2}(\varphi(w, b) - \varphi(-w, -b))$ is odd. Note that $\int \varphi\, d\alpha^+ = \int \varphi^+\, d\alpha^+$ since $\int \varphi^-\, d\alpha^+ = 0$. Furthermore, if $|\varphi(w, b)| \le 1$ for all $(w, b) \in \mathbb{S}^{d-1} \times \mathbb{R}$ we see that $|\varphi^+(w, b)| \le \frac{1}{2}(|\varphi(w, b)| + |\varphi(-w, -b)|) \le 1$ for all $(w, b) \in \mathbb{S}^{d-1} \times \mathbb{R}$. Therefore, in the dual definition of $\|\alpha^+\|_1$ given in (39) it suffices to take the supremum over all even functions $\varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R})$. Hence,
$$\|\alpha\|_1 = \sup\left\{ \int \varphi\, d\alpha : \varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R}),\ \|\varphi\|_\infty \le 1 \right\} \tag{45}$$
$$= \sup\left\{ \int \varphi\, d\alpha^+ + \int \varphi\, d\alpha^- : \varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R}),\ \|\varphi\|_\infty \le 1 \right\} \tag{46}$$
$$\ge \sup\left\{ \int \varphi\, d\alpha^+ + \int \varphi\, d\alpha^- : \varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R}),\ \|\varphi\|_\infty \le 1,\ \varphi \text{ even} \right\} \tag{47}$$
$$= \sup\left\{ \int \varphi\, d\alpha^+ : \varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R}),\ \|\varphi\|_\infty \le 1,\ \varphi \text{ even} \right\} \tag{48}$$
$$= \|\alpha^+\|_1. \tag{49}$$
A similar argument shows $\|\alpha^-\|_1 \le \|\alpha\|_1$.
Lipschitz continuity of infinite-width nets.
Proposition 8. Let $f = h_{\alpha,v,c}$ for any $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$, $v \in \mathbb{R}^d$, $c \in \mathbb{R}$. Then $f \in \operatorname{Lip}(\mathbb{R}^d)$ with $\|f\|_L \le \|\alpha\|_1 + \|v\|$.
Proof. First we prove that for all even $\alpha \in M(\mathbb{P}^d)$, $\|h_\alpha\|_L \le \|\alpha\|_1 / 2$. By the reverse triangle inequality we have $\big||w^\top x - b| - |w^\top y - b|\big| \le |w^\top(x - y)|$ for all $x, y \in \mathbb{R}^d$, $w \in \mathbb{S}^{d-1}$, $b \in \mathbb{R}$. Therefore, using identity (43), for all $x, y \in \mathbb{R}^d$ we see that
$$|h_\alpha(x) - h_\alpha(y)| = \frac{1}{2} \left| \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \big(|w^\top x - b| - |w^\top y - b|\big)\, d\alpha(w, b) \right| \tag{50}$$
$$\le \frac{1}{2} \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \big||w^\top x - b| - |w^\top y - b|\big|\, d|\alpha|(w, b) \tag{51}$$
$$\le \frac{1}{2} \int_{\mathbb{S}^{d-1} \times \mathbb{R}} |w^\top(x - y)|\, d|\alpha|(w, b) \tag{52}$$
$$\le \frac{1}{2} \|x - y\| \|\alpha\|_1, \tag{53}$$
which shows $h_\alpha$ is Lipschitz with $\|h_\alpha\|_L \le \|\alpha\|_1 / 2$.
More generally, consider any infinite-width net $f = h_{\alpha,v,c}$ with $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$, $v \in \mathbb{R}^d$ and $c \in \mathbb{R}$. From the even and odd decomposition $\alpha = \alpha^+ + \alpha^-$ we have $f = h_{\alpha^+, v_0 + v, c}$, where $v_0 = \frac{1}{2} \int_{\mathbb{S}^{d-1} \times \mathbb{R}} w\, d\alpha^-(w, b)$. Hence, $\|v_0\|_2 \le \|\alpha^-\|_1 / 2$. Therefore, by the triangle inequality, $\|f\|_L \le \|\alpha^+\|_1/2 + \|\alpha^-\|_1/2 + \|v\| \le \|\alpha\|_1 + \|v\|$, which gives the claim.
B OPTIMIZATION CHARACTERIZATION OF REPRESENTATIONAL COST
Here we establish the optimization equivalents of the representational costs $R(f)$ and $R_1(f)$ given in (9) and (13).
As an intermediate step, we first give equivalent expressions for $R(f)$ and $R_1(f)$ in terms of sequences of finite-width two-layer ReLU networks converging pointwise to $f$. For this we need to introduce some additional notation and definitions.
We let $A(\mathbb{S}^{d-1} \times \mathbb{R})$ denote the space of all measures given by a finite linear combination of Diracs, i.e., all $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ of the form $\alpha = \sum_{i=1}^{k} a_i \delta_{(w_i, b_i)}$ for some $a_i \in \mathbb{R}$, $(w_i, b_i) \in \mathbb{S}^{d-1} \times \mathbb{R}$, $i = 1, ..., k$, where $\delta_{(w,b)}$ denotes a Dirac delta at location $(w, b) \in \mathbb{S}^{d-1} \times \mathbb{R}$. We call any $\alpha \in A(\mathbb{S}^{d-1} \times \mathbb{R})$ a discrete measure.
Note there is a one-to-one correspondence between discrete measures and finite-width two-layer ReLU nets (up to a bias term). Namely, for any $\theta \in \Theta$ defining a finite-width net $g_\theta(x) = \sum_{i=1}^{k} a_i [w_i^\top x - b_i]_+ + c$, setting $\alpha = \sum_{i=1}^{k} a_i \delta_{(w_i, b_i)}$ we have $g_\theta = h_{\alpha,c'}$ with $c' = g_\theta(0)$. We write $\theta \in \Theta \leftrightarrow \alpha \in A(\mathbb{S}^{d-1} \times \mathbb{R})$ to indicate this correspondence. Furthermore, in this case $C(\theta) = \sum_{i=1}^{k} |a_i| = \|\alpha\|_1$.
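A minimal sketch (ours, all names hypothetical) of the correspondence, packaging a discrete measure together with the net $h_{\alpha,c}$ it generates and its total variation norm:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DiscreteMeasure:
    """alpha = sum_i a_i delta_{(w_i, b_i)} with rows of W on S^{d-1}."""
    W: np.ndarray  # (k, d) unit-norm rows
    b: np.ndarray  # (k,)
    a: np.ndarray  # (k,)

    def tv_norm(self) -> float:
        return float(np.sum(np.abs(self.a)))  # ||alpha||_1 = sum_i |a_i|

    def net(self, x: np.ndarray, c: float = 0.0) -> float:
        """h_{alpha,c}(x) = sum_i a_i ([w_i^T x - b_i]_+ - [-b_i]_+) + c."""
        z = np.maximum(self.W @ x - self.b, 0) - np.maximum(-self.b, 0)
        return float(self.a @ z + c)
```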
We also recall some facts related to the convergence of sequences of measures. Let $C_b(\mathbb{S}^{d-1} \times \mathbb{R})$ denote the set of all continuous and bounded functions on $\mathbb{S}^{d-1} \times \mathbb{R}$. A sequence of measures $\{\alpha_n\}$, with $\alpha_n \in M(\mathbb{S}^{d-1} \times \mathbb{R})$, is said to converge narrowly to a measure $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ if $\int \varphi\, d\alpha_n \to \int \varphi\, d\alpha$ for all $\varphi \in C_b(\mathbb{S}^{d-1} \times \mathbb{R})$. Also, a sequence $\{\alpha_n\}$ is called tight if for all $\varepsilon > 0$ there exists a compact set $K_\varepsilon \subset \mathbb{S}^{d-1} \times \mathbb{R}$ such that $|\alpha_n|(K_\varepsilon^c) \le \varepsilon$ for all $n$ sufficiently large. Every narrowly convergent sequence of measures is tight (Malliavin, 2012, Theorem 6.8). Conversely, any sequence $\{\alpha_n\}$ that is tight and uniformly bounded in total variation norm has a narrowly convergent subsequence; this is due to a version of Prokhorov's Theorem for signed measures (Bogachev, 2007, Theorem 8.6.2). Now we establish the following equivalent expressions for the representational costs $R(\cdot)$ and $R_1(\cdot)$.
Lemma 4. For any $f : \mathbb{R}^d \to \mathbb{R}$ let $f_0$ denote the function $f_0(x) = f(x) - f(0)$. For $R(f)$ as defined in (7) and $R_1(f)$ as defined in (12), we have
$$R(f) = \inf\left\{ \limsup_{n \to \infty} \|\alpha_n\|_1 : \alpha_n \in A(\mathbb{S}^{d-1} \times \mathbb{R}),\ h_{\alpha_n} \to f_0 \text{ pointwise},\ \{\alpha_n\} \text{ tight} \right\} \tag{54}$$
and
$$R_1(f) = \inf\left\{ \limsup_{n \to \infty} \|\alpha_n\|_1 : \alpha_n \in A(\mathbb{S}^{d-1} \times \mathbb{R}),\ v_n \in \mathbb{R}^d,\ h_{\alpha_n, v_n, 0} \to f_0 \text{ pointwise},\ \{\alpha_n\} \text{ tight} \right\}. \tag{55}$$
Proof. We prove the identity in (54) for $R(f)$; the identity in (55) for $R_1(f)$ follows by the same argument. Define
$$R_\varepsilon(f) := \inf_{\theta \in \Theta} C(\theta) \quad \text{s.t.} \quad |g_\theta(x) - f(x)| \le \varepsilon\ \ \forall\, \|x\| \le 1/\varepsilon \ \text{ and } \ g_\theta(0) = f(0), \tag{56}$$
so that $R(f) = \lim_{\varepsilon \to 0} R_\varepsilon(f)$. Also, let $L(f)$ denote the right-hand side of (54).
First, suppose $R(f)$ is finite. Let $\varepsilon_n = 1/n$. Then by definition of $R(f)$, for all $n$ there exists $\theta_n \in \Theta$ such that $C(\theta_n) \le R_{\varepsilon_n}(f) + \varepsilon_n$, while $|g_{\theta_n}(x) - f(x)| \le \varepsilon_n$ for $\|x\| \le 1/\varepsilon_n$ and $g_{\theta_n}(0) = f(0)$. Note that $\theta_n \in \Theta \leftrightarrow \alpha_n \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ with $g_{\theta_n} = h_{\alpha_n, c}$ where $c = g_{\theta_n}(0) = f(0)$ and $\|\alpha_n\|_1 = C(\theta_n)$. Hence, $h_{\alpha_n}(x) = g_{\theta_n}(x) - f(0)$, and we have $|h_{\alpha_n}(x) - f_0(x)| = |g_{\theta_n}(x) - f(x)| \le \varepsilon_n$ for $\|x\| \le 1/\varepsilon_n$. Therefore, $h_{\alpha_n} \to f_0$ pointwise, while
$$\limsup_{n \to \infty} \|\alpha_n\|_1 \le \limsup_{n \to \infty} \big(R_{\varepsilon_n}(f) + \varepsilon_n\big) = R(f), \tag{57}$$
which shows $L(f) \le R(f)$. Finally, it suffices to show $\{\alpha_n\}$ has a tight subsequence, since we can reproduce the steps above with respect to the subsequence. Towards this end, define $q_n(x) = \int |w^\top x - b|\, d|\alpha_n|(w, b)$, which is well-defined since $\alpha_n$ is discrete and has compact support. Then $q_n$ is Lipschitz with $\|q_n\|_L \le \|\alpha_n\|_1 \le B$ for some finite $B$, hence the sequence $\{q_n\}$ is uniformly Lipschitz. By the Arzela-Ascoli Theorem, $\{q_n\}$ has a subsequence $\{q_{n_k}\}$ that converges uniformly on compact subsets. In particular, $q_{n_k}(0) = \int |b|\, d|\alpha_{n_k}|(w, b) \le L < \infty$ for some $L$, which implies the sequence $\{\alpha_{n_k}\}$ is tight.
Conversely, suppose $L(f)$ is finite. Fix any $\varepsilon > 0$. Then by definition of $L(f)$ there exists a sequence $\alpha_n \in M(\mathbb{S}^{d-1} \times \mathbb{R}) \leftrightarrow \theta_n \in \Theta$ such that $\lim_{n \to \infty} \|\alpha_n\|_1$ exists with $\lim_{n \to \infty} \|\alpha_n\|_1 < L(f) + \varepsilon$, while $f_n := h_{\alpha_n, c} = g_{\theta_n}$ with $c = f(0)$ converges to $f$ pointwise and satisfies $f_n(0) = f(0)$ for all $n$. Since $\lim_{n \to \infty} \|\alpha_n\|_1 < L(f) + \varepsilon$, there exists an $N_1$ such that for all $n \ge N_1$ we have $\|\alpha_n\|_1 \le L(f) + \varepsilon$. By Proposition 8, the Lipschitz constant of $f_n$ is bounded above by $\|\alpha_n\|_1$ for all $n$, hence the sequence $f_n$ is uniformly Lipschitz. This implies $f_n \to f$ uniformly on compact subsets, and so there exists an $N_2$ such that $|f_n(x) - f(x)| \le \varepsilon$ for all $\|x\| \le 1/\varepsilon$ and $f_n(0) = f(0)$ for all $n \ge N_2$. For all $n \ge N_2$, $f_n$ satisfies the constraints in the definition of $R_\varepsilon(\cdot)$. Therefore, for all $n \ge \max\{N_1, N_2\}$ we have
$$R_\varepsilon(f) \le C(\theta_n) = \|\alpha_n\|_1 \le L(f) + \varepsilon. \tag{58}$$
Taking the limit as $\varepsilon \to 0$, we get $R(f) \le L(f)$. Therefore, we have shown $R(f)$ is finite if and only if $L(f)$ is finite, in which case $R(f) = L(f)$, giving the claim.
The following lemma shows every infinite-width net is the pointwise limit of a sequence of finite-width nets defined in terms of a sequence of measures uniformly bounded in total variation norm.
Lemma 5. Let $f = h_{\alpha,v,c}$ for any $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$, $v \in \mathbb{R}^d$, and $c \in \mathbb{R}$. Then there exists a sequence of discrete measures $\alpha_n \in A(\mathbb{S}^{d-1} \times \mathbb{R})$ with $\|\alpha_n\|_1 \le \|\alpha\|_1$ such that $f_n = h_{\alpha_n,v,c}$ converges to $f$ pointwise.
Proof. For any $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ there exists a sequence of discrete measures $\alpha_n$ converging narrowly to $\alpha$ such that $\|\alpha_n\|_1 \le \|\alpha\|_1$ (Malliavin, 2012, Chapter 2, Theorem 6.9). Let $f_n = h_{\alpha_n,v,c}$. Since the function $(w, b) \mapsto [w^\top x - b]_+ - [-b]_+$ is continuous and bounded, we have $f_n(x) \to f(x)$ for all $x \in \mathbb{R}^d$, i.e., $f_n \to f$ pointwise.
Lemma 6. We have the equivalences
$$R(f) = \min_{\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R}),\, c \in \mathbb{R}} \|\alpha\|_1 \quad \text{s.t.} \quad f = h_{\alpha,c} \tag{59}$$
and
$$R_1(f) = \min_{\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R}),\, v \in \mathbb{R}^d,\, c \in \mathbb{R}} \|\alpha\|_1 \quad \text{s.t.} \quad f = h_{\alpha,v,c}. \tag{60}$$
Proof. We prove the $R(f)$ case; the $R_1(f)$ case follows by the same argument. Throughout the proof we use the equivalence for $R(f)$ given in Lemma 4, and let $M(f)$ denote the right-hand side of (59).
Assume $R(f)$ is finite. Then there exists a tight sequence $\{\alpha_n\}$, $\alpha_n \in A(\mathbb{S}^{d-1} \times \mathbb{R})$, that is uniformly bounded in total variation norm such that $h_{\alpha_n} \to f_0$ pointwise. Therefore, by Prokhorov's Theorem, $\{\alpha_n\}$ has a subsequence $\{\alpha_{n_k}\}$ converging narrowly to a measure $\alpha$, hence $f_0 = h_\alpha$. Furthermore, narrow convergence implies $\|\alpha\|_1 \le \limsup_{k \to \infty} \|\alpha_{n_k}\|_1 \le \limsup_{n \to \infty} \|\alpha_n\|_1$, and so $M(f) \le \limsup_{n \to \infty} \|\alpha_n\|_1$. Taking the infimum over all such sequences $\{\alpha_n\}$, we have $M(f) \le R(f)$.
Conversely, assume $M(f)$ is finite. Let $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$ be any measure such that $f_0 = h_\alpha$. By Lemma 5 there exists a sequence $\{\alpha_n\}$, $\alpha_n \in A(\mathbb{S}^{d-1} \times \mathbb{R})$, such that $h_{\alpha_n} \to f_0$ pointwise, while $\|\alpha_n\|_1 \le \|\alpha\|_1$. Hence, $R(f) \le \limsup_{n \to \infty} \|\alpha_n\|_1 \le \|\alpha\|_1$. Since this holds for any $\alpha$ with $f_0 = h_\alpha$, we see that $R(f) \le M(f)$, proving the claim. Now we show that if $f$ is an infinite-width net, $R_1(f)$ is equal to the minimal total variation norm over all even measures defining $f$ (in fact, later we show every infinite-width net is defined in terms of a unique even measure, whose total variation norm is equal to $R_1(f)$; see Lemma 10).
Lemma 7. We have
$$R_1(f) = \min_{\alpha^+ \in M(\mathbb{P}^d),\, v \in \mathbb{R}^d,\, c \in \mathbb{R}} \|\alpha^+\|_1 \quad \text{s.t.} \quad f = h_{\alpha^+,v,c}, \tag{61}$$
where the minimization is over all even $\alpha^+ \in M(\mathbb{P}^d)$.
Proof. Suppose $f = h_{\alpha,v,c}$ for some $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$, $v \in \mathbb{R}^d$, $c \in \mathbb{R}$. If $\alpha$ has even and odd decomposition $\alpha = \alpha^+ + \alpha^-$ then $f = h_{\alpha^+,0,0} + h_{\alpha^-,v,c} = h_{\alpha^+,v',c}$ for some $v' \in \mathbb{R}^d$. Also, by Proposition 7, we have $\|\alpha^+\|_1 \le \|\alpha^+ + \alpha^-\|_1 = \|\alpha\|_1$ for any $\alpha^-$ odd. Hence, the optimization problem describing $R_1(f)$ in (60) reduces to (61).
C EXTENSION OF R-NORM TO LIPSCHITZ FUNCTIONS AND PROOF OF THEOREM 1
To simplify notation we let $S(\mathbb{P}^d)$ denote the space of even Schwartz functions on $\mathbb{S}^{d-1} \times \mathbb{R}$, i.e., $\psi \in S(\mathbb{P}^d)$ if $\psi \in S(\mathbb{S}^{d-1} \times \mathbb{R})$ with $\psi(w, b) = \psi(-w, -b)$ for all $(w, b) \in \mathbb{S}^{d-1} \times \mathbb{R}$.
We will need a finer characterization of the image of Schwartz functions under the dual Radon transform than what is given in Lemma 2, which is also due to Solmon (1987):
Lemma 8 (Solmon (1987), Theorem 7.7). Let $\psi \in S(\mathbb{P}^d)$ and define $\varphi = \gamma_d (-\Delta)^{(d-1)/2} R^*\{\psi\}$. Then $\varphi \in C^\infty(\mathbb{R}^d)$ with $\varphi(x) = O(\|x\|^{-d})$ and $\Delta \varphi(x) = O(\|x\|^{-d-2})$ as $\|x\| \to \infty$. Moreover, $R\{\varphi\} = \psi$.
Using the above result we show the functional $\|f\|_R$ given in Definition 1 is well-defined:
Proposition 9. For any $f \in \operatorname{Lip}(\mathbb{R}^d)$, the map $L_f(\psi) := -\gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle$ is finite for all $\psi \in S(\mathbb{P}^d)$, hence $\|f\|_R = \sup\{L_f(\psi) : \psi \in S(\mathbb{P}^d),\ \|\psi\|_\infty \le 1\}$ is a well-defined functional taking on values in $[0, +\infty]$.
Proof. Since $f$ is globally Lipschitz we have $|f(x)| = O(\|x\|)$, while for any $\psi \in S(\mathbb{P}^d)$ we have $|(-\Delta)^{(d+1)/2} R^*\{\psi\}(x)| = O(\|x\|^{-d-2})$ by Lemma 8, hence $|f(x)\,(-\Delta)^{(d+1)/2} R^*\{\psi\}(x)| = O(\|x\|^{-d-1})$ is absolutely integrable, and so $\langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle$ is finite. If $\langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle \ne 0$, we can choose the sign of $\psi$ so that the inner product is positive, which shows that $\|f\|_R \ge 0$.
In Section 4 we showed $\Delta h_\alpha = R^*\{\alpha\}$ when $\alpha$ was a measure with a smooth density having rapid decay. The next key lemma shows this equality still holds in the sense of distributions when $\alpha$ is any measure in $M(\mathbb{P}^d)$.
Lemma 9. Let $f = h_{\alpha,v,c}$ for any $\alpha \in M(\mathbb{P}^d)$, $v \in \mathbb{R}^d$, $c \in \mathbb{R}$. Then we have $\langle f, \Delta \varphi \rangle = \langle \alpha, R\{\varphi\} \rangle$ for all $\varphi \in C^\infty(\mathbb{R}^d)$ such that $\varphi(x) = O(\|x\|^{-d})$ and $\Delta \varphi(x) = O(\|x\|^{-d-2})$ as $\|x\| \to \infty$.
Proof. Consider the ridge function $r_{w,b}(x) := \frac{1}{2}|w^\top x - b|$, which is generated by the even measure $\alpha_0(w', b') = \frac{1}{2}\big(\delta(w' - w, b' - b) + \delta(w' + w, b' + b)\big)$. An easy calculation shows that $\Delta r_{w,b}(x) = \delta(w^\top x - b)$ in the sense of distributions, i.e., for all test functions $\varphi \in S(\mathbb{R}^d)$ we have
$$\int r_{w,b}(x)\, \Delta \varphi(x)\, dx = \int_{w^\top x = b} \varphi(x)\, ds(x) = R\{\varphi\}(w, b). \tag{62}$$
Since $R\{\varphi\}(w, b)$ is well-defined for all $\varphi \in C^\infty(\mathbb{R}^d)$ with decay like $O(\|x\|^{-d})$, by continuity $\Delta r_{w,b}(x)$ extends uniquely to a distribution acting on this larger space of test functions.
Now consider the more general case of $f = h_\alpha$ with $\alpha \in M(\mathbb{P}^d)$. Then for all $\varphi \in C^\infty(\mathbb{R}^d)$ with $\varphi(x) = O(\|x\|^{-d})$ and $\Delta \varphi(x) = O(\|x\|^{-d-2})$ as $\|x\| \to \infty$ we have
$$\int_{\mathbb{R}^d} f(x) \Delta \varphi(x)\, dx = \int_{\mathbb{R}^d} \left( \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \tfrac{1}{2}\big(|w^\top x - b| - |b|\big)\, d\alpha(w, b) \right) \Delta \varphi(x)\, dx \tag{63}$$
$$= \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \left( \int_{\mathbb{R}^d} \tfrac{1}{2}\big(|w^\top x - b| - |b|\big)\, \Delta \varphi(x)\, dx \right) d\alpha(w, b) \tag{64}$$
$$= \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \left( \int_{\mathbb{R}^d} r_{w,b}(x)\, \Delta \varphi(x)\, dx \right) d\alpha(w, b) \tag{65}$$
$$= \int_{\mathbb{S}^{d-1} \times \mathbb{R}} R\{\varphi\}(w, b)\, d\alpha(w, b), \tag{66}$$
where in (64) we applied Fubini's theorem to exchange the order of integration, whose application is justified since
$$h^+(x) := \frac{1}{2} \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \big||w^\top x - b| - |b|\big|\, d|\alpha|(w, b) \le \|\alpha\|_1 \|x\| \tag{67}$$
and by assumption $\Delta \varphi(x) = O(\|x\|^{-d-2})$, hence $h^+(x)|\Delta \varphi(x)| = O(\|x\|^{-d-1})$, and so $\int h^+(x)|\Delta \varphi(x)|\, dx < \infty$.
Finally, if $f = h_{\alpha,v,c}$ for any $\alpha \in M(\mathbb{P}^d)$, $v \in \mathbb{R}^d$, $c \in \mathbb{R}$, since affine functions vanish under the Laplacian we have $\langle f, \Delta \varphi \rangle = \langle h_\alpha, \Delta \varphi \rangle$, reducing this to the previous case, which gives the claim.
The following lemma shows $\|f\|_R$ is finite if and only if $f$ is an infinite-width net, in which case $\|f\|_R$ is given by the total variation norm of the unique even measure defining $f$.
Lemma 10. Let $f \in \operatorname{Lip}(\mathbb{R}^d)$. Then $\|f\|_R$ is finite if and only if there exists a unique even measure $\alpha \in M(\mathbb{P}^d)$ and unique $v \in \mathbb{R}^d$, $c \in \mathbb{R}$ with $f = h_{\alpha,v,c}$, in which case $\|f\|_R = \|\alpha\|_1$.
Proof. Suppose $\|f\|_R$ is finite. Then by definition $f$ belongs to $\operatorname{Lip}(\mathbb{R}^d)$ and the linear functional $L_f(\psi) = -\gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle$ is continuous on $S(\mathbb{P}^d)$ with norm $\|f\|_R$. Since $S(\mathbb{P}^d)$ is a dense subspace of $C_0(\mathbb{P}^d)$, by continuity there exists a unique extension $\tilde{L}_f$ to all of $C_0(\mathbb{P}^d)$ with the same norm. Hence, by the Riesz representation theorem, there is a unique measure $\alpha \in M(\mathbb{P}^d)$ such that $\tilde{L}_f(\psi) = \int \psi\, d\alpha$ for all $\psi \in C_0(\mathbb{P}^d)$ and $\|f\|_R = \|\alpha\|_1$.
We now show $f = h_{\alpha,v,c}$ for some $v \in \mathbb{R}^d$, $c \in \mathbb{R}$. First, we prove $\Delta f = \Delta h_\alpha$ as tempered distributions (i.e., as linear functionals on the space of Schwartz functions $S(\mathbb{R}^d)$). By Lemma 9 we have $\langle \Delta h_\alpha, \varphi \rangle = \langle \alpha, R\{\varphi\} \rangle$ for any $\varphi \in S(\mathbb{R}^d)$, hence
$$\langle \Delta h_\alpha, \varphi \rangle = \langle \alpha, R\{\varphi\} \rangle \tag{68}$$
$$= \tilde{L}_f(R\{\varphi\}) \tag{69}$$
$$= L_f(R\{\varphi\}) \tag{70}$$
$$= -\gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{R\{\varphi\}\} \rangle \tag{71}$$
$$= \gamma_d \langle f, \Delta (-\Delta)^{(d-1)/2} R^*\{R\{\varphi\}\} \rangle \tag{72}$$
$$= \langle f, \Delta \varphi \rangle \tag{73}$$
$$= \langle \Delta f, \varphi \rangle, \tag{74}$$
where in (70) we used the fact that $R\{\varphi\} \in S(\mathbb{P}^d)$ for all $\varphi \in S(\mathbb{R}^d)$ (Helgason, 1999, Theorem 2.4), and in (73) we used the inversion formula for the Radon transform: $\gamma_d (-\Delta)^{(d-1)/2} R^*\{R\{\varphi\}\} = \varphi$ for all $\varphi \in S(\mathbb{R}^d)$ (Helgason, 1999, Theorem 3.1).
Hence, we have shown $\Delta f = \Delta h_\alpha$ as tempered distributions. This means $f - h_\alpha$ is in the null space of the Laplacian acting on tempered distributions, which implies $f - h_\alpha = p$ where $p$ is some harmonic polynomial (i.e., $p$ is a polynomial in $x = (x_1, ..., x_d)$ such that $\Delta p(x) = 0$ for all $x \in \mathbb{R}^d$). Finally, since both $f$ and $h_\alpha$ are Lipschitz they have at most linear growth at infinity, and so must $p$. This implies $p$ must be an affine function $p(x) = v^\top x + c$, which shows $f = h_{\alpha,v,c}$ as claimed.
Conversely, suppose $f = h_{\alpha,v,c}$ for some $\alpha \in M(\mathbb{P}^d)$, $v \in \mathbb{R}^d$, $c \in \mathbb{R}$. Let $\psi \in S(\mathbb{P}^d)$. By Lemma 8, the function $\varphi = \gamma_d (-\Delta)^{(d-1)/2} R^*\{\psi\}$ is in $C^\infty(\mathbb{R}^d)$ with $\varphi(x) = O(\|x\|^{-d})$, $\Delta \varphi(x) = O(\|x\|^{-d-2})$ as $\|x\| \to \infty$, and $\psi = R\{\varphi\}$. Hence, by Lemma 9 we have
$$L_f(\psi) = \langle f, \Delta \varphi \rangle = \langle \alpha, R\{\varphi\} \rangle = \langle \alpha, \psi \rangle. \tag{75}$$
This shows
$$\|f\|_R = \sup\{\langle \alpha, \psi \rangle : \psi \in S(\mathbb{P}^d),\ \|\psi\|_\infty \le 1\} \tag{76}$$
$$= \sup\{\langle \alpha, \psi \rangle : \psi \in C_0(\mathbb{P}^d),\ \|\psi\|_\infty \le 1\} \tag{77}$$
$$= \|\alpha\|_1, \tag{78}$$
where the second to last equality holds since $S(\mathbb{P}^d)$ is a dense subspace of $C_0(\mathbb{P}^d)$, and the last equality is by the dual characterization of the total variation norm.
Finally, to show uniqueness, suppose $h_{\alpha,v,c} = h_{\beta,v',c'}$ for some other even $\beta \in M(\mathbb{P}^d)$, $v' \in \mathbb{R}^d$, $c' \in \mathbb{R}$. Then the function $h_{\alpha,v,c} - h_{\beta,v',c'} = h_{\alpha-\beta,\, v-v',\, c-c'}$ is identically zero, hence by the argument above $\|h_{\alpha-\beta,\, v-v',\, c-c'}\|_R = \|\alpha - \beta\|_1 = 0$, which implies $\alpha = \beta$. Therefore, $h_{\alpha,v,c} = h_{\alpha,v',c'}$, which also implies $v = v'$ and $c = c'$.
Note that Lemma 1 is essentially a corollary of the uniqueness in the preceding result; we give the proof here for completeness.
Proof of Lemma 1. Suppose $R_1(f)$ is finite. Then by the optimization characterization in Lemma 7, we have $f = h_{\alpha,v,c}$ for some even $\alpha \in M(\mathbb{P}^d)$, $v \in \mathbb{R}^d$, $c \in \mathbb{R}$, and $R_1(f)$ is the minimum of $\|\alpha^+\|_1$ over all even measures $\alpha^+ \in M(\mathbb{P}^d)$ and $v' \in \mathbb{R}^d$, $c' \in \mathbb{R}$ such that $f = h_{\alpha^+,v',c'}$. By Lemma 10, there is a unique even measure $\alpha^+ \in M(\mathbb{P}^d)$, $v \in \mathbb{R}^d$, and $c \in \mathbb{R}$ such that $f = h_{\alpha^+,v,c}$. Hence, $R_1(f) = \|\alpha^+\|_1$.
Now we give the proof of our main theorem, which shows $\|f\|_R = R_1(f)$.
Proof of Theorem 1. Suppose $R_1(f)$ is finite. By Lemma 1, $R_1(f) = \|\alpha\|_1$ where $\alpha \in M(\mathbb{P}^d)$ is the unique even measure such that $f = h_{\alpha,v,c}$ for some $v \in \mathbb{R}^d$, $c \in \mathbb{R}$. Furthermore, $\|f\|_R = \|\alpha\|_1$ by Lemma 10. Hence, $R_1(f) = \|f\|_R$. Conversely, if $\|f\|_R$ is finite, then by Lemma 10 we have $f = h_{\alpha,v,c}$ for a unique even measure $\alpha \in M(\mathbb{P}^d)$, and again by Lemma 1, $\|f\|_R = \|\alpha\|_1 = R_1(f)$.
Proof of Proposition 1. The Radon transform is a bounded linear operator from $L^1(\mathbb{R}^d)$ to $L^1(\mathbb{S}^{d-1} \times \mathbb{R})$ (see, e.g., Boman & Lindskog (2009)). Hence, if $\Delta^{(d+1)/2} f \in L^1(\mathbb{R}^d)$ then $R\{\Delta^{(d+1)/2} f\} \in L^1(\mathbb{S}^{d-1} \times \mathbb{R})$. Let $\alpha \in M(\mathbb{P}^d)$ be the even measure on $\mathbb{S}^{d-1} \times \mathbb{R}$ with density $\gamma_d R\{\Delta^{(d+1)/2} f\}$. Then $\|\alpha\|_1 = \gamma_d \|R\{\Delta^{(d+1)/2} f\}\|_1$, i.e., the total variation norm of $\alpha$ coincides with the $L^1$-norm of its density. Therefore, by definition of $\|f\|_R$ we have
$$\|f\|_R = \sup\{\gamma_d \langle f, \Delta^{(d+1)/2} R^*\{\psi\} \rangle : \psi \in S(\mathbb{P}^d),\ \|\psi\|_\infty \le 1\} \tag{79}$$
$$= \sup\{\gamma_d \langle R\{\Delta^{(d+1)/2} f\}, \psi \rangle : \psi \in S(\mathbb{P}^d),\ \|\psi\|_\infty \le 1\} \tag{80}$$
$$= \sup\{\langle \alpha, \psi \rangle : \psi \in S(\mathbb{P}^d),\ \|\psi\|_\infty \le 1\} \tag{81}$$
$$= \|\alpha\|_1 = \gamma_d \|R\{\Delta^{(d+1)/2} f\}\|_1, \tag{82}$$
where we used the fact that the Schwartz class $S(\mathbb{P}^d)$ is dense in $C_0(\mathbb{P}^d)$ and the dual definition of the total variation norm (39). If additionally $f \in L^1(\mathbb{R}^d)$, we have $R\{\Delta^{(d+1)/2} f\} = \partial_b^{d+1} R\{f\}$ by the Fourier slice theorem, which gives $\|f\|_R = \gamma_d \|\partial_b^{d+1} R\{f\}\|_1$.
D PROOF OF THEOREM 2
We show how our results change without the addition of the unregularized linear unit $v^\top x$ in (3). Specifically, we want to characterize $R(f)$ given in (7) (or equivalently its optimization formulation (9)). Unlike in the univariate setting, $R(f)$ does not have a simple closed form expression in higher dimensions. However, for any $f \in \operatorname{Lip}(\mathbb{R}^d)$ we prove the bounds
$$\max\{\|f\|_R,\ 2\|\nabla f(\infty)\|\} \le R(f) \le \|f\|_R + 2\|\nabla f(\infty)\|, \tag{83}$$
where the vector $\nabla f(\infty) \in \mathbb{R}^d$ can be thought of as the gradient of the function $f$ "at infinity"; see below for a formal definition. In particular, if $f(x)$ vanishes at infinity then $\nabla f(\infty) = 0$ and we have $R(f) = \|f\|_R = R_1(f)$. For any $f \in \operatorname{Lip}(\mathbb{R}^d)$, define $\nabla f(\infty) \in \mathbb{R}^d$ by¹⁴
$$\nabla f(\infty) := \lim_{r \to \infty} \frac{1}{c_d r^{d-1}} \int_{\|x\| = r} \nabla f(x)\, ds(x), \tag{84}$$
where $c_d = \int_{\mathbb{S}^{d-1}} dw$. We will relate $\nabla f(\infty)$ to the "linear part" of an infinite-width net. Towards this end, define $V : M(\mathbb{S}^{d-1} \times \mathbb{R}) \to \mathbb{R}^d$ to be the linear operator given by
$$V(\alpha) = \frac{1}{2} \int_{\mathbb{S}^{d-1} \times \mathbb{R}} w\, d\alpha(w, b). \tag{85}$$
Note that if $\alpha = \alpha^+ + \alpha^-$ where $\alpha^+$ is even and $\alpha^-$ is odd, then $V(\alpha) = V(\alpha^-)$ since $\int_{\mathbb{S}^{d-1} \times \mathbb{R}} w\, d\alpha^+(w, b) = 0$. In particular, if we set $v_0 = V(\alpha^-)$, then $h_{\alpha^-}(x) = v_0^\top x$.
Lemma 11. Suppose $f = h_{\alpha,c}$ for any $\alpha \in M(\mathbb{S}^{d-1} \times \mathbb{R})$, $c \in \mathbb{R}$. Then $\nabla f(\infty) = V(\alpha)$.
Proof. A simple calculation shows the weak gradient of $f = h_{\alpha,c}$ is given by
$$\nabla f(x) = \int_{\mathbb{S}^{d-1} \times \mathbb{R}} H(w^\top x - b)\, w\, d\alpha(w, b), \tag{86}$$
where $H$ is the Heaviside step function: $H(t) = 1$ if $t \ge 0$ and $H(t) = 0$ otherwise. Therefore, we have
$$\lim_{r \to \infty} \frac{1}{r^{d-1}} \int_{\|x\| = r} \nabla f(x)\, ds(x) = \lim_{r \to \infty} \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \int_{\mathbb{S}^{d-1}} H(r\, u^\top w - b)\, w\, du\, d\alpha(w, b) \tag{87}$$
$$= \lim_{r \to \infty} \int_{\mathbb{S}^{d-1} \times \mathbb{R}} w \int_{u^\top w \ge b/r} du\, d\alpha(w, b) \tag{88}$$
$$= \left( \frac{1}{2} \int_{\mathbb{S}^{d-1}} du \right) \int_{\mathbb{S}^{d-1} \times \mathbb{R}} w\, d\alpha(w, b). \tag{89}$$
Finally, dividing both sides by $c_d = \int_{\mathbb{S}^{d-1}} dw$ gives the result.
Lemma 12. If $f(x) = v_0^\top x + c$ then $R(f) = 2\|v_0\|$.
Proof. Note that $f = h_{\alpha,c}$ only if $\alpha$ is odd and $V(\alpha) = v_0$. Hence, we have
$$R(f) = \min_{\alpha \text{ odd}} \|\alpha\|_1 \quad \text{s.t.} \quad V(\alpha) = v_0. \tag{90}$$
The adjoint $V^* : \mathbb{R}^d \to C_b(\mathbb{S}^{d-1} \times \mathbb{R})$ is given by $[V^* y](w, b) = \frac{1}{2} w^\top y$. Therefore, the dual of the convex program above is given by
$$\max_{y \in \mathbb{R}^d,\ \|V^* y\|_\infty \le 1} v_0^\top y = \max_{\|y\| \le 2} v_0^\top y = 2\|v_0\|, \tag{91}$$
where we used the fact that $\|V^* y\|_\infty = \max_{w \in \mathbb{S}^{d-1}} \frac{1}{2} w^\top y \le 1$ holds if and only if $\|y\| \le 2$. This means $2\|v_0\|$ is a lower bound for $R(f)$. Since this bound is reached with the primal feasible choice $\alpha$ defined by
$$\alpha(w, b) = \|v_0\| \left( \delta\Big(w - \tfrac{v_0}{\|v_0\|},\ b\Big) - \delta\Big(w + \tfrac{v_0}{\|v_0\|},\ b\Big) \right), \tag{92}$$
we have $R(f) = 2\|v_0\|$ as claimed.
Now we give the proof of Theorem 2.
Proof of Theorem 2. Suppose $\|f\|_R$ is finite. Set $v_0 = \nabla f(\infty)$. Then by Lemma 10, there is a unique even measure $\alpha^+$ such that $f = h_{\alpha^+,v_0,c}$ for some unique $v_0 \in \mathbb{R}^d$, $c \in \mathbb{R}$, with $\|f\|_R = \|\alpha^+\|_1$. Therefore, $R(f)$ is equivalent to the optimization problem
$$R(f) = \min_{\alpha^- \text{ odd}} \|\alpha^+ + \alpha^-\|_1 \quad \text{s.t.} \quad V(\alpha^-) = v_0. \tag{93}$$
Since $\|\alpha^+ + \alpha^-\|_1 \le \|\alpha^+\|_1 + \|\alpha^-\|_1$, by Lemma 12 we see that $R(f) \le \|\alpha^+\|_1 + 2\|v_0\|$. Now we show the lower bound. The above optimization problem is equivalent to
$$R(f) = \min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad V(\alpha) = v_0,\ E(\alpha) = \alpha^+, \tag{94}$$
where $E(\alpha)$ projects onto the even part of $\alpha$. The Banach space adjoint $E^* : C_0(\mathbb{S}^{d-1} \times \mathbb{R}) \to C_0(\mathbb{S}^{d-1} \times \mathbb{R})$ is also the projection onto the even part, i.e., $[E^* \varphi](w, b) = \frac{1}{2}(\varphi(w, b) + \varphi(-w, -b))$. Therefore, the dual problem is given by
$$\sup_{\substack{\varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R}),\, y \in \mathbb{R}^d \\ \|V^* y + E^* \varphi\|_\infty \le 1}} v_0^\top y + \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \varphi(w, b)\, d\alpha^+(w, b). \tag{95}$$
We can constrain $\varphi$ to be even without changing the maximum since $\alpha^+$ is even. Thus the dual feasible set reduces to pairs $(\varphi, y)$ with $\varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R})$ even and $y \in \mathbb{R}^d$ such that $|\varphi(w, b) + \frac{1}{2} w^\top y| \le 1$ for all $(w, b) \in \mathbb{S}^{d-1} \times \mathbb{R}$. Taking the supremum over all dual feasible pairs $(\varphi, 0)$ such that $\|\varphi\|_\infty \le 1$, we see $R(f) \ge \|\alpha^+\|_1 = \|f\|_R$. Likewise, if we choose the dual feasible pair $(\varphi, y) = (0, 2v_0/\|v_0\|)$ then the dual objective is $2\|v_0\|$, hence $R(f) \ge 2\|v_0\|$. This gives $R(f) \ge \max\{\|f\|_R,\ 2\|v_0\|\}$, as desired.
The upper bound in (30) can be attained non-trivially: there exist functions $f$ (in every dimension) with $\nabla f(\infty) \ne 0$ such that
$$R(f) = \|f\|_R + 2\|\nabla f(\infty)\|. \tag{96}$$
Proof. Let $w_+, w_- \in \mathbb{S}^{d-1}$ be orthogonal. Consider $f = h_\alpha$ defined by $\alpha = \alpha^+ + \alpha^-$ with
$$\alpha^+ = \delta(w - w_+,\, b) + \delta(w + w_+,\, b), \tag{97}$$
$$\alpha^- = \delta(w - w_-,\, b) - \delta(w + w_-,\, b). \tag{98}$$
Hence, $f(x) = |w_+^\top x| + w_-^\top x$ (e.g., in 2-D one such function is $f(x, y) = x + |y|$). The dual problem for $R(f)$ in this instance is given by:
$$\sup_{\substack{\varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R}),\, y \in \mathbb{R}^d \\ \|V^* y + E^* \varphi\|_\infty \le 1}} w_-^\top y + \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \varphi(w, b)\, d\alpha^+(w, b). \tag{99}$$
Set $y^* = 2w_-$, and let $\varphi^*$ be a continuous approximation to $\operatorname{sign}(\alpha^+)$ whose support is localized to an arbitrarily small neighborhood of $\pm(w_+, 0)$. Then the pair $(\varphi^*, y^*)$ is dual feasible since
$$\psi(w, b) := [V^* y^*](w, b) + [E^* \varphi^*](w, b) = w^\top w_- + \varphi^*(w, b) = \begin{cases} 1 & \text{if } w = \pm w_+ \text{ and } b = 0 \\ w^\top w_- & \text{else,} \end{cases}$$
and so $|\psi(w, b)| \le 1$. For these choices of $(\varphi^*, y^*)$ the dual objective is $2\|w_-\| + \|f\|_R$, which gives a lower bound on $R(f)$. But this is also an upper bound on $R(f)$, hence $R(f) = \|f\|_R + 2\|w_-\|$. Since $\nabla f(\infty) = w_-$, the result follows.
E PROPERTIES OF THE R-NORM
Here we prove the properties of the R-norm discussed in Section 4.1, including Proposition 2.
Proposition 11. The R-norm has the following properties:
• (1-homogeneity and triangle inequality) If $\|f\|_R, \|g\|_R < \infty$, then $\|c \cdot f\|_R = |c| \|f\|_R$ for all $c \in \mathbb{R}$ and $\|f + g\|_R \le \|f\|_R + \|g\|_R$, i.e., $\|\cdot\|_R$ is a semi-norm.
• (Annihilation of affine functions) $\|f\|_R = 0$ if and only if $f$ is affine, i.e., $f(x) = v^\top x + c$ for some $v \in \mathbb{R}^d$, $c \in \mathbb{R}$.
• (Translation and rotation invariance) If $g(x) = f(Ux + y)$ where $y \in \mathbb{R}^d$ and $U \in \mathbb{R}^{d \times d}$ is any orthogonal matrix, then $\|g\|_R = \|f\|_R$.
• (Scaling with dilations/contractions) Suppose $\|f\|_R < \infty$. Let $f_\varepsilon(x) := f(x/\varepsilon)$; then $\|f_\varepsilon\|_R = \varepsilon^{-1} \|f\|_R$.
Proof. The 1-homogeneity and triangle inequality properties follow immediately from the linearity of all operations and the definition as a supremum over a fixed set.
Clearly $\|f\|_R = 0$ if $f$ is affine. Conversely, suppose $\|f\|_R = 0$; then by the uniqueness in Lemma 10, we have $\alpha = 0$, and so $f = h_{0,v,c}$ for some $v \in \mathbb{R}^d$ and $c \in \mathbb{R}$, hence $f$ is affine.
For simplicity we demonstrate proofs of the remaining properties under the same conditions as Proposition 1, i.e., $d$ odd, and where $f, \Delta^{(d+1)/2} f \in L^1(\mathbb{R}^d)$, so that $\|f\|_R = \gamma_d \|R\{\Delta^{(d+1)/2} f\}\|_1 = \gamma_d \|\partial_b^{d+1} R\{f\}\|_1 < \infty$. The general case follows from standard duality arguments.
To show translation invariance, define $f^{(y)}(x) := f(x - y)$. Then since $\Delta$ commutes with translations we have $\Delta^{(d+1)/2} f^{(y)} = [\Delta^{(d+1)/2} f]^{(y)}$. Also, for any function $g$ we see that
$$R\{g^{(y)}\}(w, b) = R\{g\}(w, b - w^\top y). \tag{100}$$
Therefore,
$$\|f^{(y)}\|_R = \gamma_d \int_{\mathbb{S}^{d-1} \times \mathbb{R}} |R\{\Delta^{(d+1)/2} f^{(y)}\}(w, b)|\, dw\, db \tag{101}$$
$$= \gamma_d \int_{\mathbb{S}^{d-1} \times \mathbb{R}} |R\{\Delta^{(d+1)/2} f\}(w, b - w^\top y)|\, dw\, db \tag{102}$$
$$= \gamma_d \int_{\mathbb{S}^{d-1} \times \mathbb{R}} |R\{\Delta^{(d+1)/2} f\}(w, b)|\, dw\, db = \|f\|_R. \tag{103}$$
To show rotation invariance, let $f_U(x) = f(Ux)$ where $U$ is any orthogonal $d \times d$ matrix. Then, using the fact that the Laplacian commutes with rotations, we have $\Delta^{(d+1)/2} f_U(x) = (\Delta^{(d+1)/2} f)(Ux)$, and since $R\{g_U\}(w, b) = R\{g\}(Uw, b)$, we see that
$$R\{\Delta^{(d+1)/2} f_U\}(w, b) = R\{\Delta^{(d+1)/2} f\}(Uw, b), \quad \text{and so} \quad \|f_U\|_R = \|f\|_R. \tag{104}$$
To show the scaling under contractions/dilations (i.e., Proposition 2), let $f_\varepsilon(x) = f(x/\varepsilon)$ for $\varepsilon > 0$. Then
$$R\{f_\varepsilon\}(w, b) = \int_{w^\top x = b} f(x/\varepsilon)\, ds(x) \tag{105}$$
$$= \varepsilon^{d-1} \int_{w^\top x = b/\varepsilon} f(x)\, ds(x) \tag{106}$$
$$= \varepsilon^{d-1} R\{f\}(w, b/\varepsilon). \tag{107}$$
Hence, we have
$$|\partial_b^{d+1} R\{f_\varepsilon\}(w, b)| = \varepsilon^{d-1} \varepsilon^{-d-1} |\partial_b^{d+1} R\{f\}(w, b/\varepsilon)| \tag{108}$$
$$= \varepsilon^{-2} |\partial_b^{d+1} R\{f\}(w, b/\varepsilon)|, \tag{109}$$
and so
$$\gamma_d \int_{\mathbb{S}^{d-1} \times \mathbb{R}} |\partial_b^{d+1} R\{f_\varepsilon\}(w, b)|\, dw\, db = \varepsilon^{-2} \gamma_d \int_{\mathbb{S}^{d-1} \times \mathbb{R}} |\partial_b^{d+1} R\{f\}(w, b/\varepsilon)|\, dw\, db \tag{110}$$
$$= \varepsilon^{-1} \gamma_d \int_{\mathbb{S}^{d-1} \times \mathbb{R}} |\partial_b^{d+1} R\{f\}(w, \tilde{b})|\, dw\, d\tilde{b} \tag{111}$$
$$= \varepsilon^{-1} \|f\|_R. \tag{112}$$
Fourier estimates. For any Lipschitz function $f$ we can always interpret $\Delta f$ in a distributional sense. An interesting special case is when $\Delta f$ is a distribution of order zero, i.e., when there exists a constant $C$ such that $|\langle \Delta f, \varphi \rangle| \le C \|\varphi\|_\infty$ for all smooth compactly supported functions $\varphi$, so that $\Delta f$ extends uniquely to a measure having finite total variation. In this case, the Fourier transform of $\Delta f$, defined as $\widehat{\Delta f}(\xi) := \langle \Delta f, e^{-i 2\pi x^\top \xi} \rangle$ for all $\xi \in \mathbb{R}^d$, is a continuous and bounded function, and we can make use of an extension of the Fourier slice theorem to Radon transforms of measures (see, e.g., Boman & Lindskog (2009)) to analyze properties of $\|f\|_R$. In particular, the following result shows that in order for $\|f\|_R$ to be finite, the Fourier transform of $\Delta f$ (or the Fourier transform of $f$ if it exists classically) must decay at a dimensionally dependent rate.
Proposition 12. Suppose $\Delta f$ is a distribution of order zero. Then $\|f\|_R$ is finite only if $\widehat{\Delta f}(\sigma \cdot w) = O(|\sigma|^{-(d-1)})$ as $|\sigma| \to \infty$ for all $w \in \mathbb{S}^{d-1}$. If additionally $f \in L^1(\mathbb{R}^d)$, then $\|f\|_R$ is finite only if $\hat{f}(\sigma \cdot w) = O(|\sigma|^{-(d+1)})$ as $|\sigma| \to \infty$ for all $w \in \mathbb{S}^{d-1}$.
Proof. If $\Delta f \in M(\mathbb{R}^d)$ is a finite measure then its Radon transform $R\{\Delta f\} \in M(\mathbb{P}^d)$ exists as a finite measure, i.e., we can define $R\{\Delta f\}$ via duality as $\langle R\{\Delta f\}, \varphi \rangle = \langle \Delta f, R^*\{\varphi\} \rangle$ for all $\varphi \in C_0(\mathbb{S}^{d-1} \times \mathbb{R})$ (see, e.g., Boman & Lindskog (2009)). Additionally, the restriction $R\{\Delta f\}(w, \cdot) \in M(\mathbb{R})$ is a well-defined finite measure for all $w \in \mathbb{S}^{d-1}$, and its 1-D Fourier transform in the $b$ variable is given by
$$\mathcal{F}_b R\{\Delta f\}(w, \sigma) = \widehat{\Delta f}(\sigma \cdot w) \quad \text{for all } w \in \mathbb{S}^{d-1},\ \sigma \in \mathbb{R}. \tag{113}$$
By Lemma 10, $\|f\|_R$ is finite if and only if the functional $L_f(\psi) = -\gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle$ defined for all $\psi \in S(\mathbb{P}^d)$ extends to a unique measure $\alpha \in M(\mathbb{P}^d)$. We compute the Fourier transform of $\alpha$ in the $b$ variable via duality: for all $\varphi \in S(\mathbb{P}^d)$ we have
$$\langle \mathcal{F}_b \alpha, \varphi \rangle = \langle \alpha, \mathcal{F}_b \varphi \rangle \tag{114}$$
$$= -\gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\mathcal{F}_b \varphi\} \rangle \tag{115}$$
$$= \gamma_d \langle \Delta f, (-\Delta)^{(d-1)/2} R^*\{\mathcal{F}_b \varphi\} \rangle \tag{116}$$
$$= \gamma_d \langle \Delta f, R^*\{(-\partial_b^2)^{(d-1)/2} \mathcal{F}_b \varphi\} \rangle \tag{117}$$
$$= \gamma_d \langle \Delta f, R^*\{\mathcal{F}_b(|\sigma|^{d-1} \varphi)\} \rangle \tag{118}$$
$$= \gamma_d \langle R\{\Delta f\}, \mathcal{F}_b(|\sigma|^{d-1} \varphi) \rangle \tag{119}$$
$$= \gamma_d \langle \mathcal{F}_b R\{\Delta f\}, |\sigma|^{d-1} \varphi \rangle \tag{120}$$
$$= \gamma_d \langle |\sigma|^{d-1} \mathcal{F}_b R\{\Delta f\}, \varphi \rangle. \tag{121}$$
This shows $\mathcal{F}_b \alpha = \gamma_d |\sigma|^{d-1} \mathcal{F}_b R\{\Delta f\}$ in the sense of distributions. Since $\mathcal{F}_b R\{\Delta f\}$ is defined pointwise for all $(w, \sigma) \in \mathbb{S}^{d-1} \times \mathbb{R}$, so is $\mathcal{F}_b \alpha$, and we have
$$\mathcal{F}_b \alpha(w, \sigma) = \gamma_d |\sigma|^{d-1} \mathcal{F}_b R\{\Delta f\}(w, \sigma) = \gamma_d |\sigma|^{d-1} \widehat{\Delta f}(\sigma \cdot w). \tag{122}$$
Finally, since $\alpha$ is a finite measure, we know $\|\mathcal{F}_b \alpha\|_\infty \le \|\alpha\|_1 = O(1)$, which gives the first result. If additionally $f \in L^1(\mathbb{R}^d)$ then we have $\widehat{\Delta f}(\xi) = \|\xi\|^2 \hat{f}(\xi)$, and so $\mathcal{F}_b \alpha(w, \sigma) = \gamma_d |\sigma|^{d+1} \hat{f}(\sigma \cdot w)$, which gives the second result.
F UPPER AND LOWER BOUNDS
Here we prove several upper and lower bounds for the R-norm. Proposition 3 is an immediate corollary of the following upper bound:
Proposition 13. If $(-\Delta)^{(d+1)/2} f$ is a finite measure, then
$$\|f\|_R \le \gamma_d c_d\, \|(-\Delta)^{(d+1)/2} f\|_1. \tag{123}$$
In particular, if $(-\Delta)^{(d+1)/2} f$ exists in a weak sense then $\|\cdot\|_1$ can be interpreted as the $L^1$-norm.
Proof. Straight from the definitions we have
$$\|f\|_R = \sup\left\{ \gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle : \psi \in S(\mathbb{P}^d),\ \|\psi\|_\infty \le 1 \right\} \tag{124}$$
$$= \sup\left\{ \gamma_d \langle (-\Delta)^{(d+1)/2} f, R^*\{\psi\} \rangle : \psi \in S(\mathbb{P}^d),\ \|\psi\|_\infty \le 1 \right\} \tag{125}$$
$$\le \sup\left\{ \gamma_d \langle (-\Delta)^{(d+1)/2} f, \varphi \rangle : \varphi \in C_0(\mathbb{R}^d),\ \|\varphi\|_\infty \le c_d \right\} \tag{126}$$
$$= \gamma_d c_d\, \|(-\Delta)^{(d+1)/2} f\|_1, \tag{127}$$
where we used the fact that $R^*\{\psi\} \in C_0(\mathbb{R}^d)$ for $\psi \in S(\mathbb{P}^d)$ (Solmon, 1987, Corollary 3.6), and that $\|R^*\{\psi\}\|_\infty \le c_d$ for all $\psi \in S(\mathbb{P}^d)$ such that $\|\psi\|_\infty \le 1$, since
$$|R^*\{\psi\}(x)| \le \int_{\mathbb{S}^{d-1}} |\psi(w, w^\top x)|\, dw \le \int_{\mathbb{S}^{d-1}} dw = c_d. \tag{128}$$
The following result also gives a useful lower bound on the R-norm.
Proposition 14. If $f \in \operatorname{Lip}(\mathbb{R}^d)$ then
$$\|f\|_R \ge \sup\left\{ \langle f, \Delta \varphi \rangle : \varphi \in S(\mathbb{R}^d),\ \|R\{\varphi\}\|_\infty \le 1 \right\}. \tag{129}$$
Proof. Let $S_H(\mathbb{P}^d) \subset S(\mathbb{P}^d)$ denote the image of $S(\mathbb{R}^d)$ under the Radon transform. Then
$$\|f\|_R = \sup\left\{ \gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle : \psi \in S(\mathbb{P}^d),\ \|\psi\|_\infty \le 1 \right\} \tag{130}$$
$$\ge \sup\left\{ \gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{\psi\} \rangle : \psi \in S_H(\mathbb{P}^d),\ \|\psi\|_\infty \le 1 \right\} \tag{131}$$
$$= \sup\left\{ \gamma_d \langle f, (-\Delta)^{(d+1)/2} R^*\{R\{\varphi\}\} \rangle : \varphi \in S(\mathbb{R}^d),\ \|R\{\varphi\}\|_\infty \le 1 \right\} \tag{132}$$
$$= \sup\left\{ \langle f, \Delta \varphi \rangle : \varphi \in S(\mathbb{R}^d),\ \|R\{\varphi\}\|_\infty \le 1 \right\}, \tag{133}$$
where in the last step we used the inversion formula: $\varphi = \gamma_d (-\Delta)^{(d-1)/2} R^*\{R\{\varphi\}\}$ for all $\varphi \in S(\mathbb{R}^d)$.
Further simplifying the lower bound above gives the following.
Proposition 15. If f ∈ Lip(R d ) then
f R ≥ sup f, ∆ϕ : ϕ ∈ S(R d ), ϕ 1 ≤ 1 .(134)
In particular, if ∆f exists in a weak sense then f R ≥ ∆f ∞ .
Proof. If $\|\varphi\|_1 = \int|\varphi(x)|\,dx \leq 1$ then clearly $|\mathcal{R}\{\varphi\}(w,b)| = \left|\int_{w^\top x = b}\varphi(x)\,ds(x)\right| \leq \int_{w^\top x = b}|\varphi(x)|\,ds(x) \leq 1$. Hence $\|\varphi\|_1 \leq 1$ implies $\|\mathcal{R}\{\varphi\}\|_\infty \leq 1$. Combining this with the previous proposition gives the first bound. Additionally, by the dual definition of the $L^\infty$ norm, and since $S(\mathbb{R}^d)$ is dense in $L^1(\mathbb{R}^d)$, the second bound follows.
G RADIAL BUMP FUNCTIONS
Proof of Proposition 4. Assume $f \in L^1(\mathbb{R}^d)$ so that its Radon transform $\mathcal{R}\{f\}$ is well-defined, and for simplicity assume $d$ is odd. Note that for a radially symmetric function we have $\mathcal{R}\{f\}(w,b) = \rho(b)$ for some even function $\rho \in L^1(\mathbb{R})$, i.e., the Radon transform of a radially symmetric function does not depend on the unit direction $w \in S^{d-1}$. Supposing $\partial_b^{d+1}\rho(b)$ exists either as a function or a measure, we have
$$\|f\|_{\mathcal{R}} = \gamma_d\,\|\partial_b^{d+1}\mathcal{R}\{f\}\|_1 = \gamma_d\, c_d \int |\partial^{d+1}\rho(b)|\,db, \tag{135}$$
where $c_d = \int_{S^{d-1}} dw = \frac{2\pi^{d/2}}{\Gamma(d/2)}$. Now we derive an expression for $\rho(b)$ in terms of $g$. First, since $\rho(b) = \mathcal{R}\{f\}(w,b)$ for any $w \in S^{d-1}$, we can choose $w = e_1 = (1,0,\dots,0)$, which gives
$$\rho(b) = \mathcal{R}\{f\}(e_1, b) = \int_{x_1 = b} g(\|x\|)\,dx_2\cdots dx_d = \int_{\mathbb{R}^{d-1}} g\!\left(\sqrt{b^2 + \|\tilde{x}\|^2}\right) d\tilde{x}, \tag{136}$$
where we have set $\tilde{x} = (x_2,\dots,x_d)$. Changing to polar coordinates over $\mathbb{R}^{d-1}$, we have
$$\rho(b) = \int_{\mathbb{R}^{d-1}} g\!\left(\sqrt{b^2+\|\tilde{x}\|^2}\right) d\tilde{x} = c_{d-1}\int_0^\infty g\!\left(\sqrt{b^2+r^2}\right) r^{d-2}\,dr. \tag{137}$$
By the change of variables $t^2 = b^2 + r^2$, $t > 0$, we have
$$\rho(b) = c_{d-1}\int_b^\infty g(t)\,(t^2-b^2)^{(d-3)/2}\,t\,dt. \tag{138}$$
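The chain (136)-(138) is easy to sanity-check numerically. The sketch below (ours, not from the paper) uses the test profile $g(t) = e^{-t^2}$: for $d = 3$ it compares the $(d-1)$-dimensional integral (136) with the polar form (137), and for $d = 5$ it compares (137) with the substituted form (138).

```python
# Numerical sanity check of (136)-(138) with g(t) = exp(-t^2) (a sketch, not from the paper).
import numpy as np
from scipy.integrate import quad, dblquad
from math import gamma, pi, sqrt

g = lambda t: np.exp(-t**2)
b = 0.7

# d = 3: compare the (d-1)-dimensional integral (136) with the polar form (137).
lhs, _ = dblquad(lambda y, x: g(sqrt(b**2 + x**2 + y**2)), -8, 8, lambda x: -8, lambda x: 8)
c2 = 2 * pi                                       # c_{d-1} for d = 3: length of S^1
rhs, _ = quad(lambda r: g(sqrt(b**2 + r**2)) * r, 0, np.inf)
print(f"(136) = {lhs:.6f}   (137) = {c2 * rhs:.6f}")

# d = 5: compare the polar form (137) with the substituted form (138).
d = 5
c4 = 2 * pi**((d - 1) / 2) / gamma((d - 1) / 2)   # c_{d-1} = 2 pi^((d-1)/2) / Gamma((d-1)/2)
polar, _ = quad(lambda r: g(sqrt(b**2 + r**2)) * r**(d - 2), 0, np.inf)
subst, _ = quad(lambda t: g(t) * (t**2 - b**2)**((d - 3) / 2) * t, b, np.inf)
print(f"(137) = {c4 * polar:.6f}   (138) = {c4 * subst:.6f}")
```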
Hence, we see that
$$\|f\|_{\mathcal{R}} = \frac{1}{(d-2)!}\left\|\partial_b^{(d+1)}\int_b^\infty g(t)\,(t^2-b^2)^{(d-3)/2}\,t\,dt\right\|_1, \tag{139}$$
where we used the fact that $\gamma_d\, c_d\, c_{d-1} = \frac{1}{(d-2)!}$.
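The constant identity can be verified directly; the sketch below (ours) assumes $\gamma_d = \frac{1}{2(2\pi)^{d-1}}$ (the value given in the footnotes) and $c_d = \frac{2\pi^{d/2}}{\Gamma(d/2)}$:

```python
# Checking gamma_d * c_d * c_{d-1} = 1/(d-2)! for odd d (a sketch, not from the paper).
from math import gamma, pi, factorial

c = lambda d: 2 * pi**(d / 2) / gamma(d / 2)   # surface area of S^{d-1}

for d in [3, 5, 7, 9]:
    gamma_d = 1 / (2 * (2 * pi)**(d - 1))      # assumed value of gamma_d
    print(d, gamma_d * c(d) * c(d - 1), 1 / factorial(d - 2))
```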
Calculations in Example 3. Let $f(x) = g_{d,k}(\|x\|)$ with $x \in \mathbb{R}^d$, where
$$g_{d,k}(r) = \begin{cases} (1-r^2)^k & \text{if } 0 \leq r < 1\\ 0 & \text{if } r \geq 1 \end{cases} \tag{140}$$
for any $k > 0$. Then a straightforward calculation using (138) gives (writing $\rho(b)$ here for the integral appearing in (139), i.e., without the factor $c_{d-1}$)
$$\rho(b) = \begin{cases} C_{d,k}\,(1-b^2)^{k+\frac{d-1}{2}} & \text{if } |b| < 1\\ 0 & \text{if } |b| \geq 1, \end{cases} \tag{141}$$
where $C_{d,k} = \frac{\Gamma\left(\frac{d-1}{2}\right)\Gamma(1+k)}{2\,\Gamma\left(\frac{d+1}{2}+k\right)}$. Hence, $\|f\|_{\mathcal{R}}$ is finite if and only if $\partial_b^d \rho(b)$ has bounded variation, which is true if and only if $k - d + \frac{d-1}{2} \geq 0$, or equivalently, $k \geq \frac{d+1}{2}$. For example, if $d = 3$ then we need $k \geq 2$ in order for $\|f\|_{\mathcal{R}}$ to be finite, consistent with the previous example.
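The closed form (141), including the constant $C_{d,k}$, can be confirmed by quadrature; the following sketch (ours, not from the paper) compares the integral from (139) against $C_{d,k}(1-b^2)^{k+\frac{d-1}{2}}$ for a few values of $d$, $k$, and $b$:

```python
# Numerical check of the closed form (141) (a sketch, not from the paper).
from math import gamma
from scipy.integrate import quad

def check(d, k, b):
    # Integral from (139): int_b^1 (1-t^2)^k (t^2-b^2)^((d-3)/2) t dt.
    num, _ = quad(lambda t: (1 - t**2)**k * (t**2 - b**2)**((d - 3) / 2) * t, b, 1)
    C = gamma((d - 1) / 2) * gamma(1 + k) / (2 * gamma((d + 1) / 2 + k))
    closed = C * (1 - b**2)**(k + (d - 1) / 2)
    print(f"d={d}, k={k}, b={b}:  integral={num:.8f}  closed form={closed:.8f}")

check(3, 2, 0.3)
check(5, 4, 0.5)
check(7, 4.5, 0.1)
```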
To illustrate the scaling of $\|f\|_{\mathcal{R}}$ with the dimension $d$, we set $k = (d+1)/2 + 2 = (d+5)/2$, so that $\rho(b) = C_{d,(d+5)/2}\,(1-b^2)^{d+2}$ for $|b| \leq 1$ and $\rho(b) = 0$ otherwise. Then we can show that $|\partial^{d+1}\rho(b)| \leq |\partial^{d+1}\rho(0)|$ for $|b| \leq 1$ and $\partial^{d+1}\rho(b) = 0$ for all $|b| \geq 1$. Therefore,
$$\|f\|_{\mathcal{R}} = \frac{1}{(d-2)!}\int_{-1}^1 |\partial^{d+1}\rho(b)|\,db \leq \frac{2}{(d-2)!}\,|\partial^{d+1}\rho(0)|. \tag{142}$$
Performing a binomial expansion of $\rho(b)$ and taking derivatives, we obtain
$$\frac{2}{(d-2)!}\,|\partial^{d+1}\rho(0)| = 2\,C_{d,(d+5)/2}\binom{d+2}{(d+1)/2}(d+1)\,d\,(d-1) = 2d(d+5) \tag{143}$$
for all odd $d \geq 3$. By the lower bound in Proposition 15, we also have $\|f\|_{\mathcal{R}} \geq \|\Delta f\|_\infty = |\Delta f(0)| = d(d+5)$. Hence $\|f\|_{\mathcal{R}} \sim d^2$.
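Both the binomial-expansion identity (143) and the lower bound $|\Delta f(0)| = d(d+5)$ can be verified symbolically. The sketch below (ours, not from the paper) uses the radial Laplacian $\Delta f = g'' + \frac{d-1}{r}g'$ for $f(x) = g(\|x\|)$:

```python
# Symbolic check of (143) and |Delta f(0)| = d(d+5) (a sketch, not from the paper).
import sympy as sp

b, r = sp.symbols('b r', real=True)
for d in [3, 5, 7]:
    k = sp.Rational(d + 5, 2)
    C = sp.gamma(sp.Rational(d - 1, 2)) * sp.gamma(1 + k) / (2 * sp.gamma(sp.Rational(d + 1, 2) + k))
    rho = C * (1 - b**2)**(d + 2)
    lhs = 2 * sp.Abs(sp.diff(rho, b, d + 1).subs(b, 0)) / sp.factorial(d - 2)
    # Radial Laplacian of g(r) = (1-r^2)^k: g'' + (d-1) g'/r, evaluated at r -> 0.
    g = (1 - r**2)**k
    lap0 = sp.limit(sp.diff(g, r, 2) + (d - 1) * sp.diff(g, r) / r, r, 0)
    print(d, sp.simplify(lhs), 2 * d * (d + 5), sp.Abs(lap0), d * (d + 5))
```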
H PIECEWISE LINEAR FUNCTIONS
Proof of Proposition 5
Proof. Assume $f$ is a continuous piecewise linear function with compact support satisfying assumption (a) or (b). Let $B_1,\dots,B_n$ denote the boundaries between the regions. Since $f$ is piecewise linear and continuous, the distributional Laplacian $\Delta f$ decomposes into a linear combination of Dirac measures supported on the $(d-1)$-dimensional boundary sets $B_k$, i.e., for all smooth test functions $\varphi$ we have
$$\langle \Delta f, \varphi\rangle = \sum_{k=1}^n c_k \int_{B_k} \varphi(x)\,ds(x) \tag{144}$$
for some non-zero coefficients $c_k \in \mathbb{R}$, where $ds$ indicates integration with respect to the $(d-1)$-dimensional surface measure on $B_k$. In particular, if $B_k$ is the boundary separating neighboring regions $R_p$ and $R_q$, then $c_k = \pm\|g_p - g_q\|$, where $g_p$ and $g_q$ are the gradient vectors of $f$ in the regions $R_p$ and $R_q$, respectively, with the sign determined by whether the function is locally concave ($+$) or convex ($-$) at the boundary. Note that $\Delta f$ is a distribution of order zero, i.e., it can be identified with a measure having finite total variation, and it has a well-defined Fourier transform given by
$$\widehat{\Delta f}(\xi) = \sum_{k=1}^n c_k \int_{B_k} e^{-i2\pi \xi^\top x}\,ds(x). \tag{145}$$
We show that $\widehat{\Delta f}(\xi)$ violates the necessary decay requirements of Proposition 12 in order for $f$ to have finite $\mathcal{R}$-norm. In particular, we show that under both conditions (a) and (b) there exists a $w$ such that $\widehat{\Delta f}(\sigma \cdot w)$ is asymptotically constant as $|\sigma| \to \infty$, which gives the claim.

For all $k = 1,\dots,n$, let $w_k$ denote a boundary normal to the boundary $B_k$ (i.e., a vector $w_k \in S^{d-1}$ such that $w_k^\top x = 0$ for all $x \in B_k$, which is unique up to sign).
We first prove the claim under condition (a). Suppose, without loss of generality, that the boundary normal $w_1$ is not parallel with all the others, i.e., $w_1 \neq \pm w_k$ for all $k = 2,\dots,n$. We will write
$$\widehat{\Delta f}(\sigma \cdot w_1) = F_1(\sigma) + F_2(\sigma), \tag{146}$$
where $F_1(\sigma) = c_1\int_{B_1} e^{-i2\pi\sigma\, w_1^\top x}\,ds(x)$ and $F_2(\sigma) = \sum_{k=2}^n c_k \int_{B_k} e^{-i2\pi\sigma\, w_1^\top x}\,ds(x)$, and give decay estimates for $F_1$ and $F_2$ separately.

First, consider $F_1(\sigma)$. Since $w_1^\top x = 0$ for all $x \in B_1$ we have
$$F_1(\sigma) = c_1\int_{B_1} e^{-i2\pi\sigma\, w_1^\top x}\,ds(x) = c_1\int_{B_1} ds(x) = c_1\, s(B_1), \tag{147}$$
where $s(B_1)$ is the $(d-1)$-dimensional surface measure of $B_1$. In particular, $F_1(\sigma)$ is a non-zero constant for all $\sigma \in \mathbb{R}$.
Now consider $F_2(\sigma)$. In this case, the integrand of $\int_{B_k} e^{-i2\pi\sigma\, w_1^\top x}\,ds(x)$ for each $k = 2,\dots,n$ is not constant, since by assumption $w_1$ is not parallel with any of the boundary normals $w_2,\dots,w_n$. By an orthogonal change of coordinates, we can rewrite the surface integral over $B_k$ as a volume integral over a set $\tilde{B}_k$ embedded in the $(d-1)$-dimensional space $\tilde{x} = (\tilde{x}_1,\dots,\tilde{x}_{d-1})$, so that $\int_{B_k} e^{-i2\pi\sigma\, w_1^\top x}\,ds(x) = \int_{\tilde{B}_k} e^{-i2\pi\sigma\, \tilde{w}_1^\top \tilde{x}}\,d\tilde{x}$ for some non-zero $\tilde{w}_1 \in \mathbb{R}^{d-1}$. Observe that $g(\tilde{x}) := -\frac{\tilde{w}_1}{i2\pi\sigma\|\tilde{w}_1\|^2}\, e^{-i2\pi\sigma\,\tilde{w}_1^\top\tilde{x}}$ has divergence $\nabla\cdot g(\tilde{x}) = e^{-i2\pi\sigma\,\tilde{w}_1^\top\tilde{x}}$. Therefore, by the divergence theorem we have
$$\begin{aligned}
\int_{\tilde{B}_k} e^{-i2\pi\sigma\,\tilde{w}_1^\top\tilde{x}}\,d\tilde{x} &= \int_{\tilde{B}_k} \nabla\cdot g(\tilde{x})\,d\tilde{x} &(148)\\
&= \int_{\partial\tilde{B}_k} g(\tilde{x})^\top n(\tilde{x})\,ds(\tilde{x}) &(149)\\
&= -\frac{1}{i2\pi\sigma\,\|\tilde{w}_1\|^2}\int_{\partial\tilde{B}_k} e^{-i2\pi\sigma\,\tilde{w}_1^\top\tilde{x}}\;\tilde{w}_1^\top n(\tilde{x})\,ds(\tilde{x}), &(150)
\end{aligned}$$
where $n(\tilde{x})$ is the outward unit normal to the boundary $\partial\tilde{B}_k$. This gives the estimate
$$\int_{\tilde{B}_k} e^{-i2\pi\sigma\,\tilde{w}_1^\top\tilde{x}}\,d\tilde{x} = O(1/\sigma), \quad |\sigma| \to \infty, \tag{151}$$
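The estimate (151) is easy to observe numerically. The sketch below (ours, not from the paper) takes $\tilde{B}_k$ to be the triangle $\{(x,y) : x, y \geq 0,\ x+y \leq 1\}$ in $\mathbb{R}^2$ with $\tilde{w}_1 = (1,1)/\sqrt{2}$, which is normal to the hypotenuse, so that $\sigma\,\big|\int e^{-i2\pi\sigma\tilde{w}_1^\top\tilde{x}}\,d\tilde{x}\big|$ tends to a non-zero constant, showing the $O(1/\sigma)$ rate is attained:

```python
# Numerical illustration of the O(1/sigma) estimate (151) (a sketch, not from the paper).
import numpy as np
from scipy.integrate import dblquad

w = np.array([1.0, 1.0]) / np.sqrt(2.0)   # normal to the hypotenuse of the triangle

def osc_integral(sigma):
    # Integral over the triangle {x, y >= 0, x + y <= 1} of exp(-i 2 pi sigma <w, x>).
    re, _ = dblquad(lambda y, x: np.cos(2 * np.pi * sigma * (w[0] * x + w[1] * y)),
                    0, 1, lambda x: 0, lambda x: 1 - x)
    im, _ = dblquad(lambda y, x: np.sin(2 * np.pi * sigma * (w[0] * x + w[1] * y)),
                    0, 1, lambda x: 0, lambda x: 1 - x)
    return complex(re, -im)

for sigma in [2.0, 4.0, 8.0, 16.0, 32.0]:
    print(f"sigma={sigma:5.1f}   sigma * |integral| = {sigma * abs(osc_integral(sigma)):.4f}")
```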
which holds for any $k = 2,\dots,n$. Therefore, $F_2(\sigma) = \sum_{k=2}^n c_k\int_{B_k} e^{-i2\pi\sigma\, w_1^\top x}\,ds(x) = O(1/\sigma)$ as $|\sigma| \to \infty$. This shows that $\widehat{\Delta f}(\sigma\cdot w_1) \to c_1\, s(B_1)$, i.e., $\widehat{\Delta f}(\sigma\cdot w_1)$ is asymptotically constant, which proves the claim.

Now we prove the claim under condition (b). Without loss of generality, let $w_1$ be an inner boundary normal that is not parallel with any outer boundary normal, and assume $f$ is concave when restricted to its support. Let $I_1$ be the indices of all inner boundary normals parallel with $w_1$ (including itself), let $I_2$ be the indices of all inner boundary normals that are not parallel with $w_1$, and let $O$ be the indices of all outer boundary normals. Then we write
$$\widehat{\Delta f}(\sigma\cdot w_1) = F_{I_1}(\sigma) + F_{I_2}(\sigma) + F_O(\sigma), \tag{152}$$
where $F_{I_1}(\sigma) = \sum_{k\in I_1} c_k\int_{B_k} e^{-i2\pi\sigma\, w_1^\top x}\,ds(x)$, $F_{I_2}(\sigma) = \sum_{k\in I_2} c_k\int_{B_k} e^{-i2\pi\sigma\, w_1^\top x}\,ds(x)$, and $F_O(\sigma) = \sum_{k\in O} c_k\int_{B_k} e^{-i2\pi\sigma\, w_1^\top x}\,ds(x)$. By the same argument as above, we can show $F_{I_1}(\sigma) = \sum_{k\in I_1} c_k\, s(B_k)$. Since the function is concave when restricted to its support, all of the $c_k$ with $k \in I_1$ are positive; hence the sum $\sum_{k\in I_1} c_k\, s(B_k)$ is non-zero, which shows $F_{I_1}(\sigma)$ is a non-zero constant for all $\sigma \in \mathbb{R}$. Likewise, by the same argument as above, we can show $F_{I_2}(\sigma) = O(1/\sigma)$ and $F_O(\sigma) = O(1/\sigma)$. Therefore, $\widehat{\Delta f}(\sigma\cdot w_1)$ is asymptotically constant, which proves the claim.
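As a worked instance of condition (a), consider the pyramid $f(x,y) = \max(0,\,1-\max(|x|,|y|))$ in $\mathbb{R}^2$ (our example, not from the paper). Here $\Delta f$ is a sum of line measures: weight $+\sqrt{2}$ on each diagonal (concave creases, where the gradient jumps between adjacent faces) and weight $-1$ on each outer edge (convex creases). Taking $w_1 = (1,-1)/\sqrt{2}$, the normal of the diagonal $y = x$, the sketch below evaluates $\widehat{\Delta f}(\sigma\cdot w_1)$ directly from (145) and watches it approach $c_1\,s(B_1) = \sqrt{2}\cdot 2\sqrt{2} = 4$:

```python
# hat{Delta f}(sigma * w1) for a piecewise linear pyramid (a sketch, not from the paper).
import numpy as np
from scipy.integrate import quad

s2 = np.sqrt(2.0)
# Each crease as (endpoint a, endpoint b, coefficient c_k).
segments = [
    ((-1, -1), (1, 1), s2),     # diagonal y = x (this is B1, with normal w1)
    ((-1, 1), (1, -1), s2),     # diagonal y = -x
    ((1, -1), (1, 1), -1.0), ((-1, -1), (-1, 1), -1.0),
    ((-1, 1), (1, 1), -1.0), ((-1, -1), (1, -1), -1.0),
]
w1 = np.array([1.0, -1.0]) / s2

def hat_delta_f(sigma):
    total = 0j
    for a, b, c in segments:
        a, b = np.array(a, float), np.array(b, float)
        length = np.linalg.norm(b - a)
        phase = lambda t: 2 * np.pi * sigma * (w1 @ (a + t * (b - a)))
        re = quad(lambda t: np.cos(phase(t)), 0, 1, limit=200)[0]
        im = quad(lambda t: -np.sin(phase(t)), 0, 1, limit=200)[0]
        total += c * length * complex(re, im)
    return total

for sigma in [1.0, 5.0, 25.0, 125.0]:
    print(f"sigma={sigma:7.1f}   hat(Delta f)(sigma*w1) = {hat_delta_f(sigma):.4f}")
```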
Figure 1: Radon transform. (a) Illustration of the Radon transform in equation (14) in dimension $d = 2$. The red line of points $x$ satisfying $w^\top x = b$ defines the domain of the integral over $f(x)$, where $w$ determines the line orientation (angle relative to the coordinate axes) and $b$ determines its offset from the origin. (b) Illustration of the support of the Radon transform for $f$.
Figure 2: We illustrate the steps in computing the $\mathcal{R}$-norm of the 2-D function $f(x)$ shown in the left-most figure using the formula for $\mathcal{R}_1(f)$ in Lemma 3. First, we apply the 3/2-power negative Laplacian $(-\Delta)^{3/2}$ (roughly speaking, a third-order derivative of the function), which gives the function $g(x)$ shown in the middle figure.
Remark 1. Our definition of an infinite-width net differs slightly from Savarese et al. (2019): we integrate a constant shift of the ReLU, $[w^\top x - b]_+ - [-b]_+$, with respect to the measure $\alpha(w,b)$, rather than $[w^\top x - b]_+$ as in Savarese et al. (2019).
Define $\mathrm{Lip}(\mathbb{R}^d)$ to be the space of all real-valued Lipschitz continuous functions on $\mathbb{R}^d$. For any $f \in \mathrm{Lip}(\mathbb{R}^d)$, define $\|f\|_L := \sup_{x\neq y}\frac{|f(x)-f(y)|}{\|x-y\|}$, i.e., the smallest possible Lipschitz constant. The following result shows that $\mathrm{Lip}(\mathbb{R}^d)$ is a natural space to work in when considering infinite-width nets: Proposition 8 (Infinite-width nets are Lipschitz).
Finally, we show there are examples where the upper bound in Theorem 2 is attained. Proposition 10. There exist infinite-width nets $f : \mathbb{R}^d \to \mathbb{R}$ in all dimensions $d$ such that
1.1 RELATED WORK

Although the focus of most previous work on approximation theory for neural networks was on the number of units, the norm of the weights was often used as an intermediate step. However, this use does not provide an exact characterization of the representational cost, only an (often very loose) upper bound, and in particular does not allow for depth separation results, where a lower bound is needed. See Savarese et al. (2019) for a detailed discussion, e.g., contrasting with the work of Barron (1993; 1994).
In an effort to understand the power of deeper networks, there has been much work showing how some functions can be much more easily approximated, in terms of the number of required units, by deeper networks compared to shallower ones, including results showing how functions that can be well-approximated by three-layer networks require a much larger number of units to approximate if using a two-layer network (e.g., Pinkus (1999); Telgarsky (2016); Liang & Srikant (2016); Safran & Shamir (2017)).
REFERENCES

Francis Bach. Breaking the curse of dimensionality with convex neural networks. The Journal of Machine Learning Research, 18(1):629-681, 2017.
Andrew R Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930-945, 1993.
Andrew R Barron. Approximation and estimation bounds for artificial neural networks. Machine Learning, 14(1):115-133, 1994.
Peter L Bartlett. For valid generalization the size of the weights is more important than the size of the network. In Neural Information Processing Systems (NeurIPS), pp. 134-140, 1997.
Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. arXiv preprint arXiv:1903.07571, 2019.
Yoshua Bengio, Nicolas L Roux, Pascal Vincent, Olivier Delalleau, and Patrice Marcotte. Convex neural networks. In Neural Information Processing Systems (NeurIPS), pp. 123-130, 2006.
Vladimir I Bogachev. Measure Theory, volume 2. Springer Science & Business Media, 2007.
Jan Boman and Filip Lindskog. Support theorems for the Radon transform and Cramér-Wold theorems. Journal of Theoretical Probability, 22(3):683-710, 2009.
Emmanuel J Candès. Harmonic analysis of neural networks. Applied and Computational Harmonic Analysis, 6(2):197-218, 1999.
Emmanuel J Candès and David L Donoho. Ridgelets: A key to higher-dimensional intermittency? Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 357(1760):2495-2509, 1999.
Sean M. Carroll and Bradley W. Dickinson. Construction of neural nets using the Radon transform. In International Joint Conference on Neural Networks, volume 1, pp. 607-611, 1989. doi: 10.1109/IJCNN.1989.118639.
George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303-314, 1989.
Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560, 2019.
Sigurdur Helgason. The Radon Transform. Springer, 1999.
Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.
Yoshifusa Ito. Representation of functions by superpositions of a step or sigmoid function and their applications to neural network theory. Neural Networks, 4(3):385-394, 1991.
Shiyu Liang and R. Srikant. Why deep neural networks for function approximation? In ICLR, 2016. URL http://arxiv.org/abs/1610.04161.
Kaifeng Lyu and Jian Li. Gradient descent maximizes the margin of homogeneous neural networks. arXiv preprint arXiv:1906.05890, 2019.
Paul Malliavin. Integration and Probability, volume 157. Springer Science & Business Media, 2012.
Song Mei and Andrea Montanari. The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355, 2019.
Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665-E7671, 2018.
Mor Shpigel Nacson, Suriya Gunasekar, Jason D Lee, Nathan Srebro, and Daniel Soudry. Lexicographic and depth-sensitive margins in homogeneous and non-homogeneous deep models. arXiv preprint arXiv:1905.07325, 2019.
Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614, 2014.
Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In Conference on Learning Theory (COLT), pp. 1376-1401, 2015.
Allan Pinkus. Approximation theory of the MLP model in neural networks. Acta Numerica, 8:143-195, 1999.
Itay Safran and Ohad Shamir. Depth-width tradeoffs in approximating natural functions with neural networks. In ICML, pp. 1-27, 2017. URL http://arxiv.org/abs/1610.09887.
Pedro Savarese, Itay Evron, Daniel Soudry, and Nathan Srebro. How do infinite width bounded norm networks look in function space? In Conference on Learning Theory (COLT), 2019.
Donald C Solmon. Asymptotic formulas for the dual Radon transform and applications. Mathematische Zeitschrift, 195(3):321-343, 1987.
Sho Sonoda and Noboru Murata. Neural network with unbounded activation functions is universal approximator. Applied and Computational Harmonic Analysis, 43(2):233-268, 2017.
Matus Telgarsky. Benefits of depth in neural networks. In COLT, 2016. URL http://arxiv.org/abs/1602.04485.
Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets v.s. their induced kernel. arXiv preprint arXiv:1810.05369, 2019.
Dmitry Yarotsky. Error bounds for approximations with deep ReLU networks. Neural Networks, 94:103-114, 2017. doi: 10.1016/j.neunet.2017.07.002.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
Roughly speaking, a measure $\alpha$ is even if $\alpha(w, b) = \alpha(-w, -b)$ for all $(w, b) \in S^{d-1}\times\mathbb{R}$; see Appendix A for a precise definition.
i.e., functions $\varphi : S^{d-1} \times \mathbb{R} \to \mathbb{R}$ that are $C^\infty$-smooth and such that $\varphi(w, b)$ and all its partial derivatives decrease faster than $O(|b|^{-N})$ as $|b| \to \infty$ for any $N \geq 0$.
The assumption that $\varphi$ is even is necessary since odd functions are annihilated by $\mathcal{R}^*$.
$\gamma_d = \frac{1}{2(2\pi)^{d-1}}$. Recall that if the dimension $d$ is odd then $(-\Delta)^{(d+1)/2}$ is just an integer power of the negative Laplacian, which is a linear combination of partial derivatives of order $d+1$.
i.e., for all compactly supported smooth functions $\varphi$ there exists a locally integrable function $g \in L^1_{\mathrm{loc}}(\mathbb{R}^d)$ such that $\int f\,(-\Delta)^{(d+1)/2}\varphi\, dx = \int g\varphi\, dx$.
The existence of such a radial function was noted in parallel work by Matus Telgarsky. Discussions with Telgarsky motivated us to construct and analyze it using the R-norm.
Their analysis does not allow for unregularized bias, but can be extended to allow for it.
To be precise, we assume α is a signed Radon measure; see, e.g., Malliavin (2012) for a formal definition. We omit the word "Radon" and simply call α a measure to avoid confusion with the Radon transform, which is central to this work.
Note every Lipschitz function has a weak gradient $\nabla f \in L^\infty(\mathbb{R}^d)$, so $\|\nabla f\|_\infty$ is well-defined.
ACKNOWLEDGMENTS

We are grateful to Matus Telgarsky (University of Illinois, Urbana-Champaign) for stimulating discussions, including discussing his yet unpublished work with us. In particular, Telgarsky helped us refine our view of radial bumps and realize a fixed radial function can have finite norm in all dimensions. We would also like to thank Guillaume Bal (University of Chicago) for helpful discussions regarding the Radon transform, and Jason Altschuler (MIT) for pointers regarding convergence of measures and Prokhorov's Theorem. Some of the work was done while DS and NS were visiting the Simons Institute for Theoretical Computer Science as participants in the Foundations of Deep Learning Program. NS was partially supported by NSF awards 1764032 and 1546500. DS was partially supported by the Israel Science Foundation (grant No. 31/1031), and by the Taub Foundation. RW and GO were partially supported by AFOSR FA95501810166, NSF IIS1447449, NSF DMS1930049, and DMS1925101.